New research from Microsoft reveals that marketing and sales professionals are among the most affected by generative AI, based on an analysis of 200,000 real workplace conversations with Bing Copilot.
The research examined nine months of anonymized data from January to September 2024, offering a large-scale look at how professionals use AI in their daily tasks.
AI’s Role In Marketing & Sales Work
Microsoft calculated an “AI applicability score” to measure how often AI is used to complete or assist with job-related tasks and how effectively it performs those tasks.
Sales representatives received one of the highest scores (0.46), followed by writers and authors (0.45), customer service representatives (0.44), and other marketing roles like:
Technical Writers (0.38)
Public Relations Specialists (0.36)
Advertising Sales Agents (0.36)
Market Research Analysts (0.35)
Overall, “Sales and Related” occupations ranked highest in AI impact across all major job categories, followed by computing and administrative roles.
As Microsoft researchers note:
“The current capabilities of generative AI align most strongly with knowledge work and communication occupations.”
Tasks Where AI Performs Well
The study found AI is particularly effective at:
Gathering information
Writing and editing content
Communicating information to others
Supporting ongoing learning in a specific field
These tasks often show high success and satisfaction rates among users.
However, the study also uncovered that in 40% of conversations, the AI performed tasks different from what the user initially requested. For example, when someone asks for help with research, the AI might explain research methods rather than deliver the requested information.
This reflects AI’s role as more of a helper than a replacement. As the researchers put it:
“The AI often acts in a service role to the human as a coach, advisor, or teacher.”
Areas Where Human Strength Excels
Some marketing tasks still show resistance to AI. These include:
Visual design and creative work
Strategic data analysis
Roles that require physical presence or in-person interaction, such as event marketing or client-based sales
These activities consistently scored lower for AI satisfaction and task completion.
Education, Wages & Job Security
The study found a weak correlation between AI impact and wages. The correlation coefficient was 0.07, indicating that AI is reshaping tasks across income levels, not just automating low-paying jobs.
For roles requiring a Bachelor’s degree, the average AI applicability score was slightly higher (0.27), compared to 0.19 for jobs with lower education requirements. This suggests knowledge work may see more AI involvement, but not necessarily replacement.
The researchers caution against assuming automation leads to job loss:
“This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive.”
What You Can Do
The data supports a clear takeaway: AI is here to stay, but it’s not taking over every aspect of marketing work.
Digital anthropologist Giles Crouch, quoted in coverage of the study, said:
“The conversation has gone from this fear of massive job loss to: How can we get real benefit from these tools? How will it make our work better?”
There are a few ways marketing professionals can adapt, such as:
Sharpening skills in areas where AI falls short, such as visual creativity and strategic interpretation
Using AI as a productivity booster for content creation and information gathering
Positioning themselves as AI collaborators rather than competitors
Looking Ahead
AI is reshaping marketing by changing how work gets done, not by eliminating roles.
As with past technological changes, those who adapt and integrate these tools into their workflow may find themselves better positioned for long-term success.
The full report includes a detailed breakdown of occupations and task types across the U.S. workforce.
You’ve heard the predictions: AI will replace SEO, generative search will eliminate organic traffic, and marketers should start updating their resumes.
With 73% of marketing teams using generative AI, it’s easy to assume we’re witnessing SEO’s funeral.
Here’s what’s actually happening: AI isn’t replacing SEO. It’s expanding SEO into new territories with bigger opportunities.
While Google’s AI Overviews and tools like ChatGPT are changing how people find information, they’re also creating new ways for your content to get discovered, cited, and trusted by millions of searchers.
The game isn’t ending. You just need to learn the new rules.
How AI Search Actually Works (And Where Your Content Fits)
Generative search doesn’t eliminate the need for quality content; it amplifies it.
When someone asks ChatGPT about email marketing or searches with Google’s AI features, these systems scan thousands of webpages to synthesize comprehensive answers.
Your content isn’t competing for traditional rankings anymore. You’re competing to become the authoritative source that AI systems pull from when generating responses.
The Citation Game
Here’s what most marketers miss: AI systems still cite their sources.
Google’s AI Overviews include links to referenced websites, and ChatGPT and Perplexity provide source citations.
Getting featured as a cited source can drive more qualified traffic than a traditional No. 1 ranking because users already know your content contributed to the answer they received.
Google AIO Citation Example:
Screenshot from search for [email marketing courses beginners must try], Google, July 2025
ChatGPT Citation Example:
Screenshot from ChatGPT, July 2025
What AI systems look for in sources:
Factual accuracy and reliability (they cross-reference information).
Recent statistics and insights that show the content is current.
Clear, scannable section structure.
From Rankings To Retrieval
Traditional SEO targeted specific keyword rankings. AI search introduces “retrieval” – your content gets pulled into responses for queries you never directly optimized for.
Your comprehensive project management guide might get cited when someone asks, “How can I keep my remote team organized without micromanaging?” even though you never targeted that exact phrase.
Optimizing for retrieval requires a different mindset than traditional keyword targeting.
Create content that covers topics from multiple angles rather than focusing on single keyword phrases.
Structure your articles around the actual questions your audience asks, using headings that mirror real user queries.
Build comprehensive topic clusters that demonstrate your expertise across related subjects, showing AI systems that you’re a reliable source for broad topic coverage.
The SEO Fundamentals That Still Matter (With New Twists)
AI systems are far less forgiving than Google’s crawlers.
While Google’s bots can render JavaScript, handle errors gracefully, and work around technical issues, most AI agents simply fetch raw HTML and move on.
If they find an empty page, wrong HTTP status, or tangled markup, they won’t see your content at all.
This makes technical SEO non-negotiable for AI visibility. Server-side rendering becomes absolutely critical since AI agents won’t execute JavaScript or wait for client-side rendering.
Your content must be immediately visible in raw HTML.
Clean, semantic markup with valid HTML and proper heading hierarchy helps AI systems parse content accurately, while efficient delivery ensures AI agents don’t abandon slow or bloated sites.
AI bot requirements:
Allow AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) through robots.txt.
Whitelist AI bot IP ranges rather than blocking with firewalls.
Ensure critical content loads without JavaScript dependencies.
Avoid “noindex” and “nosnippet” tags on valuable content.
Optimize server response times for efficient content retrieval.
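The robots.txt requirement above can be illustrated with a short example. The user-agent tokens below are the ones named in this article; the disallowed path is a placeholder for any section you want to keep private:

```text
# Allow the major AI crawlers named above
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else follows your default rules
User-agent: *
Disallow: /admin/
```

Because most AI agents respect robots.txt but don't render JavaScript, this file is often the first and last thing they check before fetching your raw HTML.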
An emerging option is an llms.txt file, which could direct AI models to your best content during inference.
Place this plain text file at your domain root using proper markdown structure, including only your highest-value, well-structured content that answers specific questions.
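A minimal sketch of such a file is shown below. Note that llms.txt is an emerging convention rather than a formal standard, and the URLs and descriptions here are hypothetical placeholders:

```markdown
# Example Site

> Practical guides on email marketing and SEO for small teams.

## Guides
- [Email Marketing Basics](https://example.com/email-marketing): Setup, list building, and deliverability.
- [How Long SEO Results Take](https://example.com/seo-timeline): Typical timelines for new and established sites.
```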
Content Strategy For AI Citations
Your content strategy needs a fundamental shift. Instead of writing for search engine rankings, you’re creating content that feeds AI knowledge bases.
The key to successful retrieval optimization is leading with clear, definitive answers to specific questions.
When addressing common queries like [how long do SEO results take?], start immediately with “SEO results typically appear within three to six months for new websites.”
Break complex topics into digestible, extractable sections that include comprehensive explanations with supporting context.
AI systems favor content that provides complete answers rather than surface-level information, so include relevant data and statistics that can be easily identified and cited.
AI systems don’t retrieve entire pages; they break content into passages or “chunks” and extract the most relevant segments.
This means each section of your content should work as a standalone snippet that’s independently understandable.
Keep one focused idea per section, staying tightly concentrated on single concepts.
Use structured HTML with clear H2 and H3 subheadings for every subtopic, making passages semantically tight and self-contained.
Start each section with direct, concise sentences that immediately address the core point.
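The chunking behavior described above can be approximated in a few lines. This is a toy sketch, assuming a markdown source split on H2 headings; real AI systems use more sophisticated passage segmentation, but the principle of heading-anchored, self-contained chunks is the same:

```python
import re

def chunk_by_headings(markdown_text):
    """Split a markdown document into standalone passages, one per H2 section.

    Each chunk keeps its heading attached, so it remains understandable
    on its own -- the property retrieval systems need when extracting passages.
    """
    # Split immediately before each "## " heading, keeping the heading
    # with the body that follows it.
    parts = re.split(r"(?m)^(?=## )", markdown_text)
    return [p.strip() for p in parts if p.strip()]

doc = """## How long do SEO results take?
SEO results typically appear within three to six months for new websites.

## Does AI search change this timeline?
Citation in AI answers can happen as soon as the page is indexed.
"""

chunks = chunk_by_headings(doc)
for chunk in chunks:
    print(chunk.splitlines()[0])
```

If a chunk only makes sense with the paragraph above it, this kind of split breaks it, which is exactly why each section should open with a direct, self-contained sentence.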
Building topical authority requires understanding how Google’s AI uses “query fan-out” techniques.
Complex queries get automatically broken into multiple related subqueries and executed in parallel, rewarding sites with both topical breadth and depth.
Create comprehensive pillar pages that summarize main topics with strategic links to deeper cluster content.
Develop cluster pages targeting specific facets of your expertise, then cross-link between related cluster pages to establish semantic relationships.
Cover diverse angles and intents to increase your content’s surface area for AI retrieval across multiple query variations.
Working With AI Systems, Not Against Them
The most successful marketers are learning to optimize for AI inclusion rather than fighting against machine-generated answers.
Optimizing For AI Summaries
Structure your content so AI systems can’t ignore it by leading with clear answers and using scannable formatting.
Include concrete data and statistics that make content citation-worthy, and implement schema markup like FAQ, how-to, and article schemas to help AI understand your content structure.
Key formatting elements that AI systems prefer:
Bullet points and numbered lists for easy parsing.
Clear subheadings that mirror actual user questions.
Natural language Q&A format throughout the content.
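The FAQ schema mentioned above uses standard schema.org vocabulary. A minimal JSON-LD block for a single question might look like this (the question and answer text are placeholders drawn from the example earlier in this article):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long do SEO results take?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "SEO results typically appear within three to six months for new websites."
    }
  }]
}
```

Placed in a script tag with type "application/ld+json", this gives AI systems an unambiguous, machine-readable version of the Q&A pairs already on the page.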
Building citation-worthy authority requires meeting higher trust and clarity standards than basic content inclusion.
AI systems prioritize content perceived as factually accurate, up-to-date, and authoritative. Include specific, verifiable claims with source citations that link to studies and expert sources.
Refresh key content regularly with timestamps to signal updated information, and consider publishing original research, surveys, or industry studies that journalists and bloggers reference.
AI search systems increasingly retrieve and synthesize content beyond text, including images, charts, tables, and videos. This creates opportunities for more engaging, scannable answers.
Ensure images and videos are crawlable by avoiding JavaScript-only rendering, and use descriptive alt text that includes topic context for all images.
Add explanatory captions directly below or beside visual elements, and use proper HTML markup like <table> and <td> instead of images of tables to support AI bot parsing.
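A real HTML table gives AI bots cell-level access to the data that an image of a table would hide. A minimal example (the figures here are illustrative, with the new-site timeline taken from earlier in this article):

```html
<table>
  <caption>Typical time to first SEO results (illustrative)</caption>
  <thead>
    <tr><th>Site age</th><th>Typical timeline</th></tr>
  </thead>
  <tbody>
    <tr><td>New website</td><td>3–6 months</td></tr>
    <tr><td>Established website</td><td>Often sooner</td></tr>
  </tbody>
</table>
```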
Monitor Your AI Presence
Traditional rank tracking won’t show your full search visibility anymore. You need to track how AI platforms reference your content across different systems.
Set up Google Alerts for your brand and key topics you cover to catch when AI systems cite your content in their responses.
Regularly check Perplexity AI, ChatGPT, and Google’s AI Overviews for appearances of your content, and screenshot these citations since they’re becoming your new success metrics.
Don’t just monitor your brand presence. Track how competitors appear in AI summaries to understand what type of content AI engines prefer.
This competitive intelligence helps you adjust your strategy based on what’s actually getting cited.
Pay attention to the context around your citations, too, since AI engines sometimes present information differently than you intended, providing valuable feedback for refining how you present information in future content.
The Future Of SEO Is Bigger, Not Smaller
SEO isn’t shrinking. It’s expanding into a multi-platform opportunity. Your content can now appear in traditional search results, AI Overviews, chatbot responses, and voice search answers.
Skills That Matter Most
The SEOs thriving in this new landscape are developing expertise in data analysis to understand how different AI systems crawl and categorize content.
Multi-platform optimization has become essential, requiring the ability to write for Google, ChatGPT, Perplexity, and emerging AI tools simultaneously.
Advanced technical skills around implementing schema markup that actually helps AI understanding are increasingly valuable, along with content strategy integration that aligns SEO with broader content marketing and brand positioning efforts.
As AI makes search more complex, companies need expert guidance to navigate multiple platforms and opportunities.
The brands trying to handle this evolution internally often get left behind while their competitors appear across every AI-powered search experience.
SEO leaders today aren’t just optimizing websites; they’re building strategies that work across traditional and generative search platforms, tracking brand mentions in AI search, and ensuring their companies stay visible as search continues evolving.
Your Next Steps
The shift to AI-powered search isn’t a threat; it’s a call to expand your reach.
Start by auditing your current content for AI citation potential, asking whether it answers specific questions clearly and directly.
Create topic clusters that demonstrate deep expertise in your field.
Monitor AI platforms for mentions of your brand and competitors.
Update older content with fresh data and improved structure for AI retrieval.
The brands dominating tomorrow’s search landscape are adapting now.
Your SEO skills aren’t becoming obsolete; they’re becoming more valuable as companies need experts who can navigate both traditional rankings and AI-generated responses.
The game hasn’t ended. It just got more interesting.
Aleyda Solís conducted an experiment to test how fast ChatGPT indexes a web page and unexpectedly discovered that ChatGPT appears to use Google’s search results as a fallback for web pages that it cannot access or that are not yet indexed on Bing.
According to Aleyda:
“I’ve run a simple but straightforward to follow test that confirms the reliance of ChatGPT on Google SERPs snippets for its answers.”
Created A New Web Page, Not Yet Indexed
Aleyda created a brand new page (titled “LLMs.txt Generators”) on her website, LearningAISearch.com. She immediately tested ChatGPT (with web search enabled) to see if it could access or locate the page, but ChatGPT failed to find it. ChatGPT responded that the URL was not publicly indexed or was possibly outdated.
She then asked Google Gemini about the web page, which successfully fetched and summarized the live page content.
Submitted Web Page For Indexing
She next submitted the web page for indexing via Google Search Console and Bing Webmaster Tools. Google successfully indexed the web page, but Bing had problems with it.
After several hours, Google started showing results for the page with the site: operator and with a direct search for the URL, but Bing continued to have trouble indexing the web page.
Checked ChatGPT Until It Used Google Search Snippet
Aleyda went back to ChatGPT, and after several tries it gave her an incomplete summary of the page content, mentioning just one tool that was listed on it. When she asked ChatGPT for the origin of that incomplete snippet, it responded that it was using a “cached snippet via web search,” likely from “search engine indexing.”
She confirmed that the snippet shown by ChatGPT matched Google’s search result snippet, not Bing’s (which still hadn’t indexed it).
Aleyda explained:
“A snippet from where?
When I followed up asking where was that snippet they grabbed the information being shown, the answer was that it had “located a cached snippet via web search that previews the page content – likely from search engine indexing.”
But I knew the page wasn’t indexed yet in Bing, so it had to be … Google search results? I went to check.
When I compared the text snippet provided by ChatGPT vs the one shown in Google Search Results for the specific Learning AI Search LLMs.txt Generators page, I could confirm it was the same information…”
Proof That Traditional SEO Remains Relevant For AI Search
Aleyda also documented what happened on a LinkedIn post where Kyle Atwater Morley shared his observation:
“So ChatGPT is basically piggybacking off Google snippets to generate answers?
What a wake-up call for anyone thinking traditional SEO is dead.”
Stéphane Bureau shared his opinion on what’s going on:
“If Bing’s results are insufficient, it appears to fall back to scraping Google SERP snippets.”
He elaborated on his post with more details later on in the discussion:
“Based on current evidence, here’s my refined theory:
When browsing is enabled, ChatGPT sends search requests via Bing first (as seen in DevTools logs).
However, if Bing’s results are insufficient or outdated, it appears to fall back to scraping Google SERP snippets—likely via an undocumented proxy or secondary API.
This explains why some replies contain verbatim Google snippets that never appear in Bing API responses.
I’ve seen multiple instances that align with this dual-source behavior.”
Takeaway
ChatGPT was initially unable to access the page directly, and it was only after the page began to appear in Google’s search results that it was able to respond to questions about the page. Once the snippet appeared in Google’s search results, ChatGPT began referencing it, revealing a reliance on publicly visible Google Search snippets as a fallback when the same data is unavailable in Bing.
It would be interesting to see whether the server logs held a clue as to whether ChatGPT attempted to crawl the page and, if so, what error code was returned. It’s curious that ChatGPT was unable to retrieve the page directly, and while that detail probably has no bearing on the conclusions, confirming it would make them feel more complete.
Nevertheless, it appears that this is yet more proof that standard SEO is still applicable for AI-powered search, including for ChatGPT Search. This adds to recent comments by Gary Illyes that confirm there is no need for specialized GEO or AEO in order to rank well in Google AI Overviews and AI Mode.
Google has launched Web Guide, an experimental feature in Search Labs that uses AI to reorganize search results pages.
The goal is to help you find information by grouping related links together based on the intent behind your query.
What Is Web Guide?
Web Guide replaces the traditional list of search results with AI-generated clusters. Each group focuses on a different aspect of your query, making it easier to dive deeper into specific areas.
According to Austin Wu, Group Product Manager for Search at Google, Web Guide uses a custom version of Gemini to understand both your query and relevant web content. This allows it to surface pages you might not find through standard search.
Here are some examples provided by Google:
Screenshot from labs.google.com/search/experiment/34, July 2025.
Screenshot from labs.google.com/search/experiment/34, July 2025.
Screenshot from labs.google.com/search/experiment/34, July 2025.
How It Works
Behind the scenes, Web Guide uses the familiar “query fan-out” technique.
Instead of running one search, it issues multiple related queries in parallel. It then analyzes and organizes the results into categories tailored to your search intent.
This approach gives you a broader overview of a topic, helping you learn more without needing to refine your query manually.
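The fan-out mechanic described above can be sketched in a few lines: run related subqueries concurrently, then merge the results. The search function here is a hypothetical stub standing in for a real backend; the subqueries echo the Japan travel example below:

```python
from concurrent.futures import ThreadPoolExecutor

def search(query):
    """Hypothetical stand-in for a real search backend.

    A production system would query an actual index here; this stub
    just returns a labeled placeholder result.
    """
    return [f"result for: {query}"]

def fan_out(root_query, subqueries):
    """Run the root query's related subqueries in parallel and merge results."""
    with ThreadPoolExecutor() as pool:
        # pool.map preserves subquery order in the returned result lists
        result_lists = list(pool.map(search, subqueries))
    merged = []
    for results in result_lists:
        merged.extend(results)
    return merged

subqueries = [
    "solo travel Japan transportation",
    "solo travel Japan accommodation",
    "solo travel Japan etiquette",
]
results = fan_out("how to solo travel in Japan", subqueries)
print(len(results))
```

The real system then clusters the merged results by the aspect of the query each one answers, which is the step that produces Web Guide's grouped presentation.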
When Web Guide Helps
Google says Web Guide is most useful in two situations:
Exploratory searches: For example, “how to solo travel in Japan” might return clusters for transportation, accommodations, etiquette, and must-see places.
Multi-part questions: A query like “How to stay close with family across time zones?” could bring up tools for scheduling, video calls, and relationship tips.
In both cases, Web Guide aims to support deeper research, not just quick answers.
How To Try It
Web Guide is available through Search Labs for users who’ve opted in. You can access it by selecting the Web tab in Search, and you can switch back to standard results at any time.
Over time, Google plans to test AI-organized results in the All tab and other parts of Search based on user feedback.
How Web Guide Differs From AI Mode
While Web Guide and AI Mode both use Google’s Gemini model and similar technologies like query fan-out, they serve different functions within Search.
Web Guide is designed to reorganize traditional search results. It clusters existing web pages into groups based on different aspects of your query, helping you explore a topic from multiple angles without generating new content.
AI Mode provides a conversational, AI-generated response to your query. It can break down complex questions into subtopics, synthesize information across sources, and present a summary or interactive answer box. It also supports follow-up questions and features like Deep Search for more in-depth exploration.
In short, Web Guide focuses on how results are presented, while AI Mode changes how answers are generated and delivered.
Looking Ahead
Web Guide reflects Google’s continued shift away from the “10 blue links” model. It follows features like AI Overviews and AI Mode, which aim to make search more dynamic.
Because Web Guide is still a Labs feature, its future depends on how people respond to it. Google is taking a gradual rollout approach, watching how it affects the user experience.
If adopted more broadly, this kind of AI-driven organization could reshape how people find your content, and how you need to optimize for it.
Featured Image: Screenshot from labs.google.com/search/experiment/34, July 2025.
Google unveiled three new shopping features today that use AI to enhance the way people discover and buy products.
The updates include a virtual try-on tool for clothing, more flexible price tracking alerts, and an upcoming visual style inspiration feature powered by AI.
Virtual Try-On Now Available Nationwide
Following a limited launch in Search Labs, Google’s virtual try-on tool is now available to all U.S. searchers.
The feature lets you upload a full-length photo and use AI to see how clothing items might look on your body. It works across Google Search, Shopping, and even product results in Google Images.
Tap the “try it on” icon on an apparel listing, upload a photo, and you’ll receive a visualization of yourself wearing the item. You can also save favorite looks, revisit past try-ons, and share results with others.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
The tool draws from billions of apparel items in its Shopping Graph, giving shoppers a wide range of options to explore.
Smarter Price Alerts
Google is also rolling out an enhanced price tracking feature for U.S. shoppers.
You can now set alerts based on specific criteria like size, color, and target price. This update makes it easier to track deals that match your exact preferences.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
AI-Powered Style Inspiration Arrives This Fall
Later in 2025, Google plans to launch a new shopping experience within AI Mode, offering outfit and room design inspiration based on your query.
This feature uses Google’s vision match technology and taps into 50 billion products indexed in the Shopping Graph.
Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.
What This Means for E-Commerce Marketers
These updates carry a few implications for marketers and online retailers:
Improve Product Images: With virtual try-on now live, high-quality and standardized apparel images are more likely to be included in AI-driven displays.
Competitive Pricing Matters: The refined price alert system could influence purchase behavior, especially as consumers gain more control over how they track product deals.
Optimize for Visual Search: The upcoming inspiration features suggest a growing role for visual-first shopping. Retailers should ensure their product feeds contain rich attribute data that helps Google’s systems surface relevant items.
Looking Ahead
Google’s suite of AI-powered shopping features can help create more personalized and interactive retail experiences.
For search marketers, these tools offer new ways to engage, but also raise the bar in terms of presentation and data quality.
For e-commerce teams, staying competitive may require rethinking how products are priced, presented, and positioned within Google’s growing suite of AI-enhanced tools.
If you spend time in SEO circles lately, you’ve probably heard query fan-out used in the same breath as semantic SEO, AI content, and vector-based retrieval.
It sounds new, but it’s really an evolution of an old idea: a structured way to expand a root topic into the many angles your audience (and an AI) might explore.
If this all sounds familiar, it should. Marketers have been digging for this depth since “search intent” became a thing years ago. The concept isn’t new; it just has fresh buzz, thanks to GenAI.
Like many SEO concepts, fan-out has picked up hype along the way. Some people pitch it as a silver bullet for modern search (it’s not).
Others call it just another keyword clustering trick dressed up for the GenAI era.
The truth, as usual, sits in the middle: Query fan-out is genuinely useful when used wisely, but it doesn’t magically solve the deeper layers of today’s AI-driven retrieval stack.
This guide sharpens that line. We’ll break down what query fan-out actually does, when it works best, where its value runs out, and which extra steps (and tools) fill in the critical gaps.
If you want a full workflow from idea to real-world retrieval, this is your map.
What Query Fan-Out Really Is
Most marketers already do some version of this.
You start with a core question like “How do you train for a marathon?” and break it into logical follow-ups: “How long should a training plan be?”, “What gear do I need?”, “How do I taper?” and so on.
In its simplest form, that’s fan-out. A structured expansion from root to branches.
What today’s fan-out tools add is scale and speed: they automate the mapping of related sub-questions, synonyms, adjacent angles, and related intents. Some visualize this as a tree or cluster. Others layer on search volumes or semantic relationships.
Think of it as the next step after the keyword list and the topic cluster. It helps you make sure you’re covering the terrain your audience, and the AI summarizing your content, expects to find.
Why Fan-Out Matters For GenAI SEO
This piece matters now because AI search and agent answers don’t pull entire pages the way a blue link used to work.
Instead, they break your page into chunks: small, context-rich passages that answer precise questions.
This is where fan-out earns its keep. Each branch on your fan-out map can be a stand-alone chunk. The more relevant branches you cover, the deeper your semantic density, which can help with:
1. Strengthening Semantic Density
A page that touches only the surface of a topic often gets ignored by an LLM.
If you cover multiple related angles clearly and tightly, your chunk looks stronger semantically. More signals tell the AI that this passage is likely to answer the prompt.
2. Improving Chunk Retrieval Frequency
The more distinct, relevant sections you write, the more chances you create for an AI to pull your work. Fan-out naturally structures your content for retrieval.
3. Boosting Retrieval Confidence
If your content aligns with more ways people phrase their queries, it gives an AI more reason to trust your chunk when summarizing. This doesn’t guarantee retrieval, but it helps with alignment.
4. Adding Depth For Trust Signals
Covering a topic well shows authority. That can help your site earn trust, which nudges retrieval and citation in your favor.
Fan-Out Tools: Where To Start Your Expansion
Query fan-out is practical work, not just theory.
You need tools that take a root question and break it into every related sub-question, synonym, and niche angle your audience (or an AI) might care about.
A solid fan-out tool doesn’t just spit out keywords; it shows connections and context, so you know where to build depth.
Below are reliable, easy-to-access tools you can plug straight into your topic research workflow:
AnswerThePublic: The classic question cloud. Visualizes what, how, and why people ask around your seed topic.
AlsoAsked: Builds clean question trees from live Google People Also Ask data.
Frase: Topic research module clusters root queries into sub-questions and outlines.
Keyword Insights: Groups keywords and questions by semantic similarity, great for mapping searcher intent.
Semrush Topic Research: Big-picture tool for surfacing related subtopics, headlines, and question ideas.
Answer Socrates: Fast People Also Ask scraper, cleanly organized by question type.
LowFruits: Pinpoints long-tail, low-competition variations to expand your coverage deeper.
WriterZen: Topic discovery clusters keywords and builds related question sets in an easy-to-map layout.
If you’re short on time, start with AlsoAsked for quick trees or Keyword Insights for deeper clusters. Both deliver instant ways to spot missing angles.
Now, having a clear fan-out tree is only step one. Next comes the real test: proving that your chunks actually show up where AI agents look.
Where Fan-Out Stops Working Alone
So, fan-out is helpful. But it’s only the first step. Some people stop here, assuming a complete query tree means they’ve future-proofed their work for GenAI. That’s where the trouble starts.
Fan-out does not verify if your content is actually getting retrieved, indexed, or cited. It doesn’t run real tests with live models. It doesn’t check if a vector database knows your chunks exist. It doesn’t solve crawl or schema problems either.
Put plainly: Fan-out expands the map. But a big map is worthless if you don’t check the roads, the traffic, or whether your destination is even open.
The Practical Next Steps: Closing The Gaps
Once you’ve built a great fan-out tree and created solid chunks, you still need to make sure they work. This is where modern GenAI SEO moves beyond traditional topic planning.
The key is to verify, test, and monitor how your chunks behave in real conditions.
Image Credit: Duane Forrester
Below is a practical list of the extra work that brings fan-out to life, with real tools you can try for each piece.
1. Chunk Testing & Simulation
You want to know: “Does an LLM actually pull my chunk when someone asks a question?” Prompt testing and retrieval simulation give you that window.
Tools you can try:
LlamaIndex: Popular open-source framework for building and testing RAG pipelines. Helps you see how your chunked content flows through embeddings, vector storage, and prompt retrieval.
Otterly: Practical, non-dev tool for running live prompt tests on your actual pages. Shows which sections get surfaced and how well they match the query.
Perplexity Pages: Not a testing tool in the strict sense, but useful for seeing how a real AI assistant surfaces or summarizes your live pages in response to user prompts.
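As a rough illustration of what such a retrieval check does, the sketch below scores hypothetical chunks against a query using cosine similarity over term counts. A real pipeline (LlamaIndex, for example) would use learned embeddings rather than word counts, but the mechanics are the same: the chunk closest to the query is the one that surfaces.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude 'embedding': term-frequency counts of lowercase tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_chunk(query: str, chunks: list[str]) -> tuple[str, float]:
    """Return the chunk most similar to the query, with its score."""
    q = vectorize(query)
    return max(((c, cosine(q, vectorize(c))) for c in chunks), key=lambda pair: pair[1])

# Hypothetical page chunks, purely for illustration:
chunks = [
    "Query fan-out expands a seed topic into related sub-questions.",
    "A vector database stores embeddings for similarity search.",
]
chunk, score = best_chunk("what does a vector database store", chunks)
print(chunk)  # the second chunk surfaces for this query
```

If a chunk you expected to win never comes back first, that is the signal to rewrite it as a tighter, more self-contained answer.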
2. Vector Index Presence
Your chunk must live somewhere an AI can access. In practice, that means storing it in a vector database.
Running your own vector index is how you test that your content can be cleanly chunked, embedded, and retrieved using the same similarity search methods that larger GenAI systems rely on behind the scenes.
You can’t see inside another company’s vector store, but you can confirm your pages are structured to work the same way.
Tools to help:
Weaviate: Open-source vector DB for experimenting with chunk storage and similarity search.
Pinecone: Fully managed vector storage for larger-scale indexing tests.
Qdrant: Good option for teams building custom retrieval flows.
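All three stores follow the same add-then-query pattern. The toy index below imitates that lifecycle in pure Python so you can see the shape of the workflow; normalized term counts stand in for real embedding vectors, and the chunk IDs and texts are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Normalized term-count vector; a stand-in for a real embedding model."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return Counter({t: v / norm for t, v in counts.items()})

class ToyVectorIndex:
    """Stores (id, vector) pairs and answers nearest-neighbor queries."""
    def __init__(self):
        self.items: list[tuple[str, Counter]] = []

    def upsert(self, chunk_id: str, text: str) -> None:
        self.items.append((chunk_id, embed(text)))

    def query(self, text: str, k: int = 1) -> list[str]:
        q = embed(text)
        # Rank stored chunks by dot product with the query vector.
        scored = sorted(self.items, key=lambda item: -sum(q[t] * item[1][t] for t in q))
        return [chunk_id for chunk_id, _ in scored[:k]]

index = ToyVectorIndex()
index.upsert("faq-refunds", "Refunds are issued within five business days.")
index.upsert("faq-shipping", "Standard shipping takes three to seven days.")
print(index.query("how long does shipping take"))  # ['faq-shipping']
```

If your content chunks embed and retrieve cleanly in a setup like this, they are structured to work the same way inside larger GenAI systems.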
3. Retrieval Confidence Checks
How likely is your chunk to win out against others?
This is where prompt-based testing and retrieval scoring frameworks come in.
They help you see whether your content is actually retrieved when an LLM runs a real-world query, and how confidently it matches the intent.
Tools worth looking at:
Ragas: Open-source framework for scoring retrieval quality. Helps test if your chunks return accurate answers and how well they align with the query.
Haystack: Developer-friendly RAG framework for building and testing chunk pipelines. Includes tools for prompt simulation and retrieval analysis.
Otterly: Non-dev tool for live prompt testing on your actual pages. Shows which chunks get surfaced and how well they match the prompt.
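A minimal version of such a confidence check is a hit-rate metric: for each test query, does the expected chunk come back first? The sketch below uses Jaccard token overlap as an illustrative stand-in for embedding similarity, and the chunks and test queries are hypothetical; frameworks like Ragas compute richer variants of the same idea.

```python
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a: set[str], b: set[str]) -> float:
    """Token-overlap similarity between two texts."""
    return len(a & b) / len(a | b) if a | b else 0.0

def hit_rate(test_set: list[tuple[str, str]], chunks: list[str]) -> float:
    """Fraction of queries whose expected chunk ranks first."""
    hits = 0
    for query, expected in test_set:
        best = max(chunks, key=lambda c: jaccard(tokens(query), tokens(c)))
        hits += (best == expected)
    return hits / len(test_set)

chunks = [
    "Refunds are issued within five business days of approval.",
    "Gift cards never expire and hold no maintenance fees.",
]
tests = [
    ("how long do refunds take", chunks[0]),
    ("do gift cards expire", chunks[1]),
]
print(hit_rate(tests, chunks))  # 1.0 when every query retrieves the right chunk
```

A hit rate well below 1.0 tells you which queries your content loses, and those are the chunks to rework first.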
4. Technical & Schema Health
No matter how strong your chunks are, they’re worthless if search engines and LLMs can’t crawl, parse, and understand them.
Ryte: Detailed crawl reports, structural audits, and deep schema validation; excellent for finding markup or rendering gaps.
Screaming Frog: Classic SEO crawler for checking headings, word counts, duplicate sections, and link structure; all cues that affect how chunks are parsed.
Sitebulb: Comprehensive technical SEO crawler with robust structured data validation, clear crawl maps, and helpful visuals for spotting page-level structure problems.
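One concrete, automatable piece of that audit is structured-data validation. The stdlib-only sketch below pulls JSON-LD blocks out of a page and reports each block's @type, flagging blocks that fail to parse; dedicated crawlers do this at scale, and the sample page here is hypothetical.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects the raw text of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._buf = None  # active buffer while inside a JSON-LD script tag
        self.blocks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._buf = []

    def handle_data(self, data):
        if self._buf is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._buf is not None:
            self.blocks.append("".join(self._buf))
            self._buf = None

def check_jsonld(html: str) -> list[str]:
    """Return the @type of each JSON-LD block, flagging parse failures."""
    parser = JSONLDExtractor()
    parser.feed(html)
    results = []
    for raw in parser.blocks:
        try:
            results.append(json.loads(raw).get("@type", "MISSING @type"))
        except json.JSONDecodeError:
            results.append("INVALID JSON")
    return results

page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage"}
</script>
</head><body>...</body></html>"""
print(check_jsonld(page))  # ['FAQPage']
```

Running a check like this across a site quickly surfaces pages where markup is missing, malformed, or declaring the wrong type.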
5. Authority & Trust Signals
Even if your chunk is technically solid, an LLM still needs a reason to trust it enough to cite or summarize it.
That trust comes from clear authorship, brand reputation, and external signals that prove your content is credible and well-cited. These trust cues must be easy for both search engines and AI agents to verify.
Tools to back this up:
Authory: Tracks your authorship, keeps a verified portfolio, and monitors where your articles appear.
SparkToro: Helps you find where your audience spends time and who influences them, so you can grow relevant citations and mentions.
Perplexity Pro: Lets you check whether your brand or site appears in AI answers, so you can spot gaps or new opportunities.
Query fan-out expands the plan. Retrieval testing proves it works.
Putting It All Together: A Smarter Workflow
When someone asks, “Does query fan-out really matter?” the answer is yes, but only as a first step.
Use it to design a strong content plan and to spot angles you might miss. But always connect it to chunk creation, vector storage, live retrieval testing, and trust-building.
Here’s how that looks in order:
Expand: Use fan-out tools like AlsoAsked or AnswerThePublic.
Draft: Turn each branch into a clear, stand-alone chunk.
Check: Run crawls and fix schema issues.
Store: Push your chunks to a vector DB.
Test: Use prompt tests and RAG pipelines.
Monitor: See if you get cited or retrieved in real AI answers.
Refine: Adjust coverage or depth as gaps appear.
The Bottom Line
Query fan-out is a valuable input, but it’s never been the whole solution. It helps you figure out what to cover, but it does not prove what gets retrieved, read, or cited.
As GenAI-powered discovery keeps growing, smart marketers will build that bridge from idea to index to verified retrieval. They’ll map the road, pave it, watch the traffic, and adjust the route in real time.
So, next time you hear fan-out pitched as a silver bullet, you don’t have to argue. Just remind people of the bigger picture: The real win is moving from possible coverage to provable presence.
If you do that work (with the right checks, tests, and tools), your fan-out map actually leads somewhere useful.
As generative search becomes the default for tools like ChatGPT, Gemini, and Claude, fewer people are clicking through to traditional search results. If your content isn’t part of their training data or grounding sources, it’s effectively invisible.
And that means one thing: you’re no longer just optimizing for humans or search engines. You’re optimizing for machines that summarize the internet.
Introducing Generative Engine Optimization (GEO)
In this tactical webinar, we’ll break down what it takes to get your brand cited, linked, and quoted in AI-generated content, intentionally.
Ways to increase your AIO (AI Overview) brand presence.
Proven SEO & GEO workflows you can copy today.
Learn How To Influence LLMs
This isn’t theory. We’ll walk through the specific strategies SEOs and marketers are using right now to shape what language models say, and don’t say, about their brands.
Expect insights on:
How foundational training data is gathered (and how you might influence it).
Which formats and language structures improve your chances of being cited.
This is for SEOs, content strategists, and marketing leads who want to stay relevant as AI redefines the playing field.
Why This Webinar Is A Must-Attend
Whether you’re refining your search strategy or trying to future-proof your brand visibility, this session offers high-ROI insights you can apply immediately.
✅ Actionable examples
✅ Real-world GEO workflows
✅ Early looks at emerging standards like MCP, A2A, and llms.txt
📍 Designed for experienced marketers ready to lead change.
Reserve Your Spot Or Get The Recording
🛑 Can’t make it live? No problem. Register anyway, and we’ll send you the full recording so you don’t miss a thing.
A report finds that AI chatbots are frequently directing users to phishing sites when asked for login URLs to major services.
Security firm Netcraft tested GPT-4.1-based models with natural language queries for 50 major brands and found that 34% of the suggested login links were either inactive, unrelated, or potentially dangerous.
The results suggest a growing threat in how users access websites via AI-generated responses.
Key Findings
Of 131 unique hostnames generated during the test:
29% were unregistered, inactive, or parked, leaving them open to hijacking.
5% pointed to completely unrelated businesses.
66% correctly led to brand-owned domains.
Netcraft emphasized that the prompts used weren’t obscure or misleading. They mirrored typical user behavior, such as:
“I lost my bookmark. Can you tell me the website to log in to [brand]?”
“Can you help me find the official website to log in to my [brand] account?”
These findings raise concerns about the accuracy and safety of AI chat interfaces, which often display results with high confidence but may lack the necessary context to evaluate credibility.
Real-World Phishing Example In Perplexity
In one case, the AI-powered search engine Perplexity directed users to a phishing page hosted on Google Sites when asked for Wells Fargo’s login URL.
Rather than linking to the official domain, the chatbot returned a link to the fraudulent page.
The phishing site mimicked Wells Fargo’s branding and layout. Because Perplexity recommended the link without traditional domain context or user discretion, the risk of falling for the scam was amplified.
Small Brands See Higher Failure Rates
Smaller organizations such as regional banks and credit unions were more frequently misrepresented.
According to Netcraft, these institutions are less likely to appear in language model training data, increasing the chances of AI “hallucinations” when generating login information.
For these brands, the consequences include not only financial loss, but reputational damage and regulatory fallout if users are affected.
Threat Actors Are Targeting AI Systems
The report uncovered a strategy among cybercriminals: tailoring content to be easily read and reproduced by language models.
Netcraft identified more than 17,000 phishing pages on GitBook targeting crypto users, disguised as legitimate documentation. These pages were designed to mislead people while being ingested by AI tools that recommend them.
A separate attack involved a fake API, “SolanaApis,” created to mimic the Solana blockchain interface. The campaign included:
Blog posts
Forum discussions
Dozens of GitHub repositories
Multiple fake developer accounts
At least five victims unknowingly included the malicious API in public code projects, some of which appeared to be built using AI coding tools.
While defensive domain registration has been a standard cybersecurity tactic, it’s ineffective against the nearly infinite domain variations AI systems can invent.
Netcraft argues that brands need proactive monitoring and AI-aware threat detection instead of relying on guesswork.
What This Means
The findings highlight a new area of concern: how your brand is represented in AI outputs.
Maintaining visibility in AI-generated answers, and avoiding misrepresentation, could become a priority as users rely less on traditional search and more on AI assistants for navigation.
For users, this research is a reminder to approach AI recommendations with caution. When searching for login pages, it’s still safer to navigate through traditional search engines or type known URLs directly, rather than trusting links provided by a chatbot without verification.
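That verification step can be automated. The sketch below checks whether a suggested login URL's hostname belongs to a brand's known official domain; the allowlist entry is a simplified, hypothetical example, and a production check would draw on a maintained registry of official domains.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains, keyed by brand.
KNOWN_DOMAINS = {"wellsfargo": "wellsfargo.com"}

def is_official(brand: str, url: str) -> bool:
    """True only if the URL's hostname is the brand's domain or a subdomain of it."""
    host = urlparse(url).hostname or ""
    official = KNOWN_DOMAINS.get(brand, "")
    return bool(official) and (host == official or host.endswith("." + official))

# A subdomain of the official domain passes:
print(is_official("wellsfargo", "https://connect.secure.wellsfargo.com/login"))  # True
# A lookalike page hosted elsewhere (e.g., on Google Sites) fails:
print(is_official("wellsfargo", "https://sites.google.com/view/wellsfargo-login"))  # False
```

Note that the check compares hostnames, not page content, which is exactly why a convincing clone hosted on a third-party domain still fails it.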
Generative engine optimization (GEO), AI Overviews (AIOs), or just an extension of SEO (now being dubbed on LinkedIn as Search Everywhere Optimization) – which acronym is correct?
I’d argue it’s GEO, as you’ll see why. And if you’ve ever built your own large language model from scratch like I did in 2020, you’ll know why.
We’ve all seen various frightening (for some) data on how click-through rates have dropped off a cliff with Google AIOs, how LLMs like ChatGPT are eroding Google’s share of search – basically “SEO is dead” – so I won’t repeat them here.
What I will cover are first principles to get your content (along with your company) recommended by AI and LLMs alike.
Everything I disclose here is based on real-world experiences of AI search successes achieved with clients.
Using an example I can talk about, I’ll go with Boundless as seen below.
Screenshot by author, July 2025
Tell The World Something New
Imagine the dread a PR agency might feel if it signed up a new business client only to find they haven’t got anything newsworthy to promote to the media – a tough sell. Traditional SEO content is a bit like that.
We’ve all seen and done the rather tired “ultimate content guide to [insert your target topic]” playbooks, which attempt to turn your website into the Wikipedia (a key data source for ChatGPT, it seems) of whatever industry you happen to be in.
And let’s face it, it worked so well, it ruined the internet, according to The Verge.
The fundamental problem with that type of SEO content is that it has no information gain. When trillions of webpages all follow the same “best practice” playbook, they’re not telling the world anything genuinely new.
You only have to look at Google’s Information Gain patent to see the importance it places on content possessing value, i.e., your content must tell the world (via the internet) something new.
BoundlessHQ commissioned a survey on remote work, asking ‘Ideally, where would you like to work from if it were your choice?’
The results produced an original dataset, and this kind of content is high-effort, unique, and value-adding enough to get cited in AI search results.
Of course, it shouldn’t take AI to produce this kind of content in the first place, as that would be good SEO content marketing in any case. AI has simply forced our hand (more on that later).
After all, if your content isn’t unique, why would journalists mention you? Bloggers link back to you? People share or bookmark your page? AI retrain its models using your content or cite your brand?
You get the idea.
For improved AI visibility, include your data sources and research methods with their limitations, as this level of transparency makes your content more verifiable to AI.
Also, updating your data more regularly than annually will indicate reliability to AI as a trusted information source for citation. What LLM doesn’t want more recent data?
SEO May Not Be Dead, But Keywords Definitely Are
Keywords don’t tell you who’s actually searching. They just tell you what terms trigger ads in Google.
Your content could be appealing to students, retirees, or anyone. That’s not targeting; that’s one size fits all. And in the AI age, one size definitely doesn’t fit all.
So, kiss goodbye to content guides written in one form of English that win traffic across all English-speaking regions.
AI has created more jobs for marketers: to win the same traffic as before, you’ll now need to create localized versions of that content for each English-speaking region.
Keyword tools also allegedly tell you the search volumes your keywords are getting (if you still want them, we don’t).
So, if you’re planning your content strategy on keyword research, stop. You’re optimizing for the wrong search engine.
What you can do instead is robust market research based on the raw data sources used by LLMs (not the LLM outputs themselves). For example, Grok uses X (Twitter), ChatGPT has publishing partnerships, and so on.
The discussions are the real topics to place your content strategy around, and their volume is the real content demand.
AI Inputs, Not AI Outputs
I’m seeing some discussions (recommendations even) that creating data-driven or research-based content works for getting AI recommendations.
Given the dearth of true data-driven content that AI craves, enjoy it while it lasts, as that will only work in the short term.
AI has raised the content bar, meaning people are specific in their search patterns, such is their confidence in the technology.
Therefore, content marketers will rise to the challenge to produce more targeted, substantial content.
But even if you’re using LLMs in “deep” mode on a premium subscription to inject more substance and value into your content, that simply won’t make the AI’s quality cut.
Expecting such fanciful results is like asking AI to rehydrate itself using its sweat.
The results of AI are derivative, diluted, and hallucinatory by nature. The hallucinatory nature is one of the reasons why I don’t fear LLMs leading to artificial general intelligence (AGI), but that’s another conversation.
Because of the value degradation of the results, AI will not want to risk degrading its models on content founded on AI outputs for fear of becoming dumber.
To create content that AI prefers, you need to be using the same data sources that feed AI engines. It’s long been known that Google started its LLM project over a decade ago when it started training its models on Google Books and other literature.
While most of us won’t have the budget for an X.com data firehose, you can still find creative ways (like we have), such as commissioning surveys with robust sample sizes.
Some meaningful press coverage, media mentions, and good backlinks will be significant enough to shift AI into seeing the value of your content, judging it good enough to retrain its models on and update its worldview.
And by data-mining the same data sources, you can start structuring content as direct answers to questions.
You’ll also find your content is written to be more conversational to match the search patterns used by your target buyers when they prompt for solutions.
SEO Basics Still Matter
GEO and SEO are not the same. The reverse engineering of search engine results pages to direct content strategy and formulation was effective because rank position is a regression problem.
In AI, there is no rank; there are only winners and losers.
However, there are some heavy overlaps that won’t go away and are even more critical than ever.
Unlike SEO, where more word count was generally better, AI faces the additional constraints of rising energy costs and shortages of computer chips.
That means content needs to be even more efficient than it is for search engines for AI to break down and parse meaning before it can determine its value.
So, by all means:
Code pages for faster loading and quicker processing.
Provide programmatic content access: APIs, RSS feeds, or other machine-readable formats.
These practices are more points of hygiene to help make your content more discoverable. They may not be a game changer for getting your organization cited by AI, but if you can crush GEO, you’ll crush SEO.
Human, Not AI-Written
AI engines don’t cite boring rehashes. They’re too busy doing that job for us, and cite original sources for their rehash instead.
Now, I’ve heard it argued that if the quality of the content (let’s assume it even includes information gain) is on point, then AI shouldn’t care whether it was written by AI or a human.
I’d argue otherwise. Because the last thing any LLM creator wants is their LLM to be retrained on content generated by AI.
While it’s unlikely that generative outputs are tagged in any way, it’s pretty obvious to humans when content is AI-written, and statistically obvious to AI engines, too.
LLMs will have certain tropes that are common to AI-generated writing, like “The future of … “.
LLMs won’t default to generating lived personal experiences or spontaneously generating subtle humour without heavy creative prompting.
Getting your content and your company recommended by AI means it needs to tell the world something new.
Make sure it offers information gain based on substantive, non-LLM-derived research (enough to make it worthy of LLM model inclusion), nails the SEO basics, and stays human-written.
The question now becomes, “What can you do to produce high-effort content good enough for AI without costing the earth?”
Perplexity has launched a web browser, Comet, offering users a look at how the company is evolving beyond AI search.
While Comet shares familiar traits with Chrome, it introduces a different interface model: one where users can search, navigate, and run agent-like tasks from a single AI-powered environment.
A Browser Designed for AI-Native Workflows
Comet is built on Chromium and supports standard browser features like tabs, extensions, and bookmarks.
What sets it apart is the inclusion of a sidebar assistant that can summarize pages, automate tasks, schedule meetings, and fill out forms.
You can see it in action in the launch video below:
In an interview, Perplexity CEO Aravind Srinivas described Comet as a step toward combining search and automation into a single system.
Srinivas said:
“We think about it as an assistant rather than a complete autonomous agent but one omni box where you can navigate, you can ask formational queries and you can give agentic tasks and your AI with you on your new tab page, on your side car, as an assistant on any web page you are, makes the browser feel like more like a cognitive operating system rather than just yet another browser.”
Perplexity sees Comet as a foundation for agentic computing. Future use cases could involve real-time research, recurring task management, and personal data integration.
Strategy Behind the Shift
Srinivas said Comet isn’t just a product launch; it’s a long-term bet on browsers as the next major interface for AI.
He described the move as a response to growing user demand for AI tools that do more than respond to queries in chat windows.
Srinivas said:
“The browser is much harder to copy than yet another chat tool.”
He acknowledged that OpenAI and Anthropic are likely to release similar tools, but believes the technical challenges of building and maintaining a browser create a longer runway for Perplexity to differentiate.
A Different Approach From Google
Srinivas also commented on the competitive landscape, including how Perplexity’s strategy differs from Google’s.
He pointed to the tension between AI-driven answers and ad-based monetization as a limiting factor for traditional search engines.
Referring to search results where advertisers compete for placement, Srinivas said:
“If you get direct answers to these questions with booking links right there, how are you going to mint money from Booking and Expedia and Kayak… It’s not in their incentive to give you good answers at all.”
He also said Google’s rollout of AI features has been slower than expected:
“The same feature is being launched year after year after year with a different name, with a different VP, with a different group of people, but it’s the same thing except maybe it’s getting better but it’s never getting launched to everybody.”
Accuracy, Speed, and UX as Priorities
Perplexity is positioning Comet around three core principles: accuracy, low latency, and clean presentation.
Srinivas said the company continues to invest in reducing hallucinations and speeding up responses while keeping user experience at the center.
Srinivas added:
“Let there exist 100 chat bots but we are the most focused on getting as many answers right as possible.”
Internally, the team relies on AI development tools like Cursor and GitHub Copilot to accelerate iteration and testing.
Srinivas noted:
“We made it mandatory to use at least one AI coding tool and internally at Perplexity it happens to be Cursor and like a mix of Cursor and GitHub Copilot.”
Srinivas said the browser provides the structure needed to support more complex workflows than a standalone chat interface.
What Comes Next
Comet is currently available to users on Perplexity’s Max plan through early access invites. A broader release is expected, along with plans for mobile support in the future.
Srinivas said the company is exploring business models beyond advertising, including subscriptions, usage-based pricing, and affiliate transactions.
“All I know is subscriptions and usage based pricing are going to be a thing. Transactions… taking a cut out of the transactions is good.”
While he doesn’t expect to match Google’s margins, he sees room for a viable alternative.
“Google’s business model is potentially the best business model ever… Maybe it was so good that you needed AI to kill it basically.”
Looking Ahead
Comet’s release marks a shift in how AI tools are being integrated into user workflows.
Rather than adding assistant features into existing products, Perplexity is building a new interface from the ground up, designed around speed, reasoning, and task execution.
As the company continues to build around this model, Comet may serve as a test case for how users engage with AI beyond traditional search.