What Content Works Well In LLMs? via @sejournal, @Kevin_Indig

Over the last 12 months, we filled significant gaps in our understanding of AI chatbots like ChatGPT & Co.

We know:

  1. Adoption is growing rapidly.
  2. AI chatbots send more referrals to websites over time.
  3. Referral traffic from AI chatbots has a higher quality than that from Google.

You can read all about it in the state of AI chatbots and SEO.

But there isn’t much content about examples and success factors of content that drives citations and mentions in AI chatbots.

To get an answer, I analyzed over 7,000 citations across 1,600 URLs to content-heavy sites (think: Integrators) in AI chatbots (ChatGPT, Perplexity, AI Overviews) in February 2025 with the help of Profound.

My goal is to figure out:

  1. Why some pages are more cited than others, so we can optimize content for AI chatbots.
  2. Whether classic SEO factors matter for AI chatbot visibility, so we can prioritize.
  3. What traps to avoid, so we don’t have to learn the same lessons many times.
  4. If different factors influence mentions and citations, so we can be more targeted in our efforts.

Here are my findings:


The Key To Brand Citation In AI Chatbots: Deep Content

Image Credit: Kevin Indig

🔍 Context: We know that AI chatbots use Retrieval Augmented Generation (RAG) to weigh their answers with results from Google and Bing. However, does that mean classic SEO ranking factors also translate to AI chatbot citations? No.

My correlation analysis shows that none of the classic SEO metrics have a strong relationship with citations. LLMs do have light preferences: Perplexity and Google's AI Overviews weigh word and sentence count higher, while ChatGPT weighs domain rating and Flesch score.

💡Takeaway: Classic SEO metrics don’t matter nearly as much for AI chatbot mentions and citations. The best thing you can do for content optimization is to aim for depth, comprehensiveness, and readability (how easy the text is to understand).
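Readability here refers to standard formulas like Flesch Reading Ease. As a rough illustration (not the exact implementation commercial tools use), a minimal scorer can be sketched in Python with a heuristic syllable counter:

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease score (0-100; higher = easier).

    Uses the standard formula with a rough heuristic syllable counter,
    so results are an estimate, not an exact match for dedicated tools.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))

    def syllables(word: str) -> int:
        # Count contiguous vowel groups as syllables.
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1  # drop a silent trailing "e"
        return max(1, count)

    syllable_count = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (word_count / sentences)
            - 84.6 * (syllable_count / word_count))
```

Short, plain sentences score high; dense multisyllabic prose scores low, which is the pattern the cited pages exploit.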

The following examples all demonstrate those attributes:

  • https://www.byrdie.com/digital-prescription-services-dermatologist-5179537
  • https://www.healthline.com/nutrition/best-weight-loss-programs
  • https://www.verywellmind.com/we-tried-online-therapy-com-these-were-our-experiences-8780086

Broad correlations didn’t reveal enough meat on the bone and left me with too many open questions.

So, I looked at what the most-cited content does differently than the rest. That approach showed much stronger patterns.

Image Credit: Kevin Indig

🔍Context: Because I didn’t get much out of statistical correlations, I wanted to see how the top 10% of most cited content stacks up against the bottom 90%.

The bigger the difference, the more critical the factor for the top 10%. In other words, the multiplier (x-axis on the chart) indicates what factors LLMs reward with citations.
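As an illustration of how such a multiplier can be derived (the field names below are hypothetical, not Profound's actual schema), here is a minimal sketch:

```python
from statistics import mean

def top_decile_multiplier(pages, metric):
    """Compare the top 10% of pages (ranked by citations) against the
    bottom 90% on a given metric. Returns mean(top) / mean(bottom)."""
    ranked = sorted(pages, key=lambda p: p["citations"], reverse=True)
    cutoff = max(1, len(ranked) // 10)
    top, bottom = ranked[:cutoff], ranked[cutoff:]
    return mean(p[metric] for p in top) / mean(p[metric] for p in bottom)
```

A multiplier well above 1 means the most-cited pages over-index on that metric; below 1, the metric doesn't help (or even hurts).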

The results:

  • The two factors that stand out are sentence and word count, followed by the Flesch Score. Metrics related to backlinks and traffic seem to have a negative effect, which doesn’t mean that AI chatbots weigh them negatively but simply that they don’t matter for mentions or citations.
  • The top 10% of most cited pages across all three LLMs have much less traffic, rank for fewer keywords, and get fewer total backlinks. How does that make sense? It almost looks like being strong in traditional SEO metrics is bad for AI chatbot visibility.
  • Copilot (not included in the chart) has the starkest inequality, by the way: The top 10% get 17.6x more citations than the bottom 90%. However, the top 10% also rank for 1.7x more keywords in organic search. So, Copilot seems to have stronger preferences than other AI chatbots.

Splitting the data up by AI Chatbot shows you their unique preferences:

Image Credit: Kevin Indig

💡Takeaway: Content depth (word and sentence count) and readability (Flesch Score) have the biggest impact on citations in AI chatbots.

This is important to understand: Longer content isn’t better because it’s longer, but because it has a higher chance of answering a specific question prompted in an AI chatbot.

Examples:

  • www.verywellmind.com/best-online-psychiatrists-5119854 has 187 citations, over 10,000 words, and over 1,500 sentences, with a Flesch score of 55, and is cited 72 times by ChatGPT.
  • On the other hand, www.onlinetherapy.com/best-online-psychiatrists/ has only three citations and a comparable Flesch score of 48, but comes up “short” with only 3,900 words and 580 sentences.

🔍Context: We don’t yet know the value of a brand being mentioned by an AI chatbot.

Early research indicates it’s high, especially when prompts indicate purchase intent.

However, I wanted to get a step closer by understanding what leads to brand mentions in AI chatbots in the first place.

After matching many metrics with AI chatbot visibility, I found one factor that stands out more than anything else: Brand search volume.

The number of AI chatbot mentions and brand search volume have a correlation of .334 – pretty good in this field. In other words, a brand’s popularity broadly decides how visible it is in AI chatbots.

Image Credit: Kevin Indig

Popularity is the most significant predictor for ChatGPT, which also sends the most traffic and has the highest usage of all AI chatbots.

When breaking it down by AI chatbot, I found ChatGPT has the highest correlation at .542 (strong), while Perplexity (.196) and Google AIOs (.254) have lower correlations.

To be clear, there is a lot of nuance on the prompt and category level. But broadly, a brand’s visibility seems to be severely impacted by how popular it is.

Example of popular brands and their visibility in the health category (Image Credit: Kevin Indig)

However, when brands are mentioned, all AI chatbots prefer popular brands and consistently rank them in the same order.

  • There is a clear link between the categories of the users’ questions (mental health, skincare, weight loss, hair loss, erectile dysfunction) and brands.
  • Early data shows that the most visible brands are digital-first and invest heavily in their online presence with content, SEO, reviews, social media, and digital advertising.

💡Takeaway: Popularity is the biggest criterion that decides whether a brand is mentioned in AI chatbots or not. The way consumers connect brands to product categories also matters.

Comparing brand search volume and product category presence with your competitors gives you the best idea of how competitive you are on ChatGPT & Co.

Examples: All models in my analysis cite Healthline most often. Not a single other domain was in the top 10 citations for all four models, showing their distinctly different tastes and how important it is to keep track of many models as opposed to only ChatGPT – if those models also send you traffic.

Image Credit: Kevin Indig

Other well-cited domains across most models:

  • verywellmind.com
  • onlinedoctor.com
  • medicalnewstoday.com
  • byrdie.com
  • cnet.com
  • ncoa.org

Image Credit: Kevin Indig

🔍Context: Not all AI chatbots mention brands with the same frequency. Even though ChatGPT has the highest adoption and sends the most referral traffic to sources, Perplexity mentions the most brands per answer on average.

Prompt structure matters for brand visibility:

  • The word “best” was a strong trigger for brand mentions in 69.71% of prompts.
  • Words like “trusted” (5.77%), “source” (2.88%), “recommend” (0.96%), and “reliable” (0.96%) were also associated with an increased likelihood of brand mentions.
  • Prompts including “recommend” often mention public organizations like the FDA, especially when the prompt includes words like “trusted” or “leading.”
  • Google AIOs show the highest brand diversity, followed by Perplexity, then ChatGPT.
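A quick way to run this kind of trigger-word analysis on your own prompt logs (a rough sketch; real prompts would need better tokenization than a whitespace split):

```python
from collections import Counter

# Trigger words from the findings above.
TRIGGER_WORDS = {"best", "trusted", "source", "recommend", "reliable"}

def trigger_word_rates(prompts):
    """Share of prompts containing each trigger word (case-insensitive)."""
    hits = Counter()
    for prompt in prompts:
        tokens = set(prompt.lower().split())
        for word in TRIGGER_WORDS & tokens:
            hits[word] += 1
    return {w: hits[w] / len(prompts) for w in TRIGGER_WORDS}
```

Cross-referencing these rates with which prompts produced brand mentions would approximate the percentages reported above.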

💡Takeaway: Prompt structure has a meaningful impact on the brands that come up in the answer.

However, we’re not yet able to truly know what prompts users type. This is important to keep in mind: All prompts we look at and track are just proxies for what users might be doing.

Image Credit: Kevin Indig

🔍Context: In my research, I encountered several ways brands unintentionally sabotage their AI chatbot visibility.

I surface them here because the prerequisite to being visible in LLMs is, of course, their ability to crawl your site, whether directly or through training data.

For example, Copilot doesn’t cite onlinedoctor.com because it’s not indexed in Bing. I couldn’t find indicators that this was done on purpose, so I assume it’s an accident that could quickly be fixed and rewarded with referral traffic.

On the other hand, ChatGPT 4o doesn’t cite cnet.com, and Perplexity doesn’t cite everydayhealth.com because both sites intentionally block the respective LLM in their robots.txt.

But there are also cases in which AI chatbots reference sites even though they technically shouldn’t.

The most cited domain in Perplexity in my dataset is blocked.goodrx.com. GoodRX blocks users from non-U.S. countries, and it seems it accidentally or intentionally blocks Perplexity.

Image Credit: Kevin Indig

It’s important to single out Google’s AI Overviews here: There is no opt-out for AIOs, meaning if you want to get organic traffic from Google, you need to allow it to crawl your site, potentially use your content to train its models and surface it in AI Overviews. Chegg recently filed a lawsuit against Google for this.

💡Takeaway: Monitor your site in Google Search Console and Bing Webmaster Tools – especially whether all desired URLs are indexed.

Double-check whether you accidentally block an LLM crawler in your robots.txt or through your CDN.

If you intentionally block LLM crawlers, double-check whether you appear in their answers simply by asking them what they know about your domain.
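One way to run that double-check programmatically is Python’s built-in robots.txt parser. The crawler tokens below are commonly published LLM user agents; verify the current list against each provider’s documentation:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt contents (replace with your site's actual file).
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# Commonly published LLM crawler user agents (check vendor docs for updates).
LLM_CRAWLERS = ["GPTBot", "PerplexityBot", "Google-Extended", "CCBot"]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in LLM_CRAWLERS:
    allowed = parser.can_fetch(bot, "https://www.example.com/some-page")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

Note this only covers robots.txt; CDN-level bot blocking (firewall rules, bot management) has to be audited separately in your CDN’s dashboard.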

Summary: 6 Key Learnings

  • Classic SEO metrics don’t strongly influence AI chatbot citations.
  • Content depth (higher word and sentence counts) and readability (good Flesch Score) matter more.
  • Different AI chatbots have distinct preferences – monitoring multiple platforms is important.
  • Brand popularity (measured by search volume) is the strongest predictor of brand mentions in AI chatbots, especially in ChatGPT.
  • Prompt structure influences brand visibility, and we don’t yet know how users phrase prompts.
  • Technical issues can sabotage AI visibility – ensure your site isn’t accidentally blocking LLM crawlers through robots.txt or CDN settings.

Featured Image: Paulo Bobita/Search Engine Journal

Google Researchers Improve RAG With “Sufficient Context” Signal via @sejournal, @martinibuster

Google researchers introduced a method to improve AI search and assistants by enhancing Retrieval-Augmented Generation (RAG) models’ ability to recognize when retrieved information lacks sufficient context to answer a query. If implemented, these findings could help AI-generated responses avoid relying on incomplete information and improve answer reliability. This shift may also encourage publishers to create content with sufficient context, making their pages more useful for AI-generated answers.

Their research finds that models like Gemini and GPT often attempt to answer questions when retrieved data contains insufficient context, leading to hallucinations instead of abstaining. To address this, they developed a system to reduce hallucinations by helping LLMs determine when retrieved content contains enough information to support an answer.

Retrieval-Augmented Generation (RAG) systems augment LLMs with external context to improve question-answering accuracy, but hallucinations still occur. It wasn’t clearly understood whether these hallucinations stemmed from LLM misinterpretation or from insufficient retrieved context. The research paper introduces the concept of sufficient context and describes a method for determining when enough information is available to answer a question.

Their analysis found that proprietary models like Gemini, GPT, and Claude tend to provide correct answers when given sufficient context. However, when context is insufficient, they sometimes hallucinate instead of abstaining, but they also answer correctly 35–62% of the time. That last discovery adds another challenge: knowing when to intervene to force abstention (to not answer) and when to trust the model to get it right.

Defining Sufficient Context

The researchers define sufficient context as meaning that the retrieved information (from RAG) contains all the details necessary to derive a correct answer. The classification doesn’t require the answer to be verified; it only assesses whether an answer can be plausibly derived from the provided content.

This means that the classification is not verifying correctness. It’s evaluating whether the retrieved information provides a reasonable foundation for answering the query.

Insufficient context means the retrieved information is incomplete, misleading, or missing critical details needed to construct an answer.

Sufficient Context Autorater

The Sufficient Context Autorater is an LLM-based system that classifies query-context pairs as having sufficient or insufficient context. The best-performing autorater model was Gemini 1.5 Pro (1-shot), achieving 93% accuracy and outperforming other models and methods.

Reducing Hallucinations With Selective Generation

The researchers discovered that RAG-based LLM responses correctly answered questions 35–62% of the time even when the retrieved data had insufficient context. In other words, sufficient context isn’t strictly necessary for accuracy: The models can return the right answer without it a substantial share of the time.

They used their discovery about this behavior to create a Selective Generation method that uses confidence scores and sufficient context signals to decide when to generate an answer and when to abstain (to avoid making incorrect statements and hallucinating).

The confidence scores are self-rated probabilities that the answer is correct. The method balances letting the LLM answer when it is highly confident against intervening (forcing abstention) when the confidence and sufficient-context signals suggest the answer is likely wrong, further increasing accuracy.

The researchers describe how it works:

“…we use these signals to train a simple linear model to predict hallucinations, and then use it to set coverage-accuracy trade-off thresholds.
This mechanism differs from other strategies for improving abstention in two key ways. First, because it operates independently from generation, it mitigates unintended downstream effects…Second, it offers a controllable mechanism for tuning abstention, which allows for different operating settings in differing applications, such as strict accuracy compliance in medical domains or maximal coverage on creative generation tasks.”
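To make the mechanism concrete, here is a toy sketch of such a gate. The weights and threshold are invented for illustration; the paper trains a linear model on real signals rather than hand-picking values:

```python
def selective_generate(answer, confidence, sufficient_context,
                       *, abstain_threshold=0.5,
                       w_confidence=0.7, w_context=0.3):
    """Toy selective-generation gate (illustrative weights, not the paper's).

    Combines the model's self-rated confidence with the binary
    sufficient-context signal into a hallucination-risk estimate,
    and abstains when the risk crosses the threshold.
    """
    risk = (w_confidence * (1.0 - confidence)
            + w_context * (0.0 if sufficient_context else 1.0))
    if risk >= abstain_threshold:
        return None  # abstain rather than risk a hallucination
    return answer
```

Raising `abstain_threshold` trades coverage for accuracy, which is the coverage-accuracy trade-off the quote above describes: strict settings for medical domains, permissive ones for creative tasks.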

Takeaways

Before anyone starts claiming that context sufficiency is a ranking factor, it’s important to note that the research paper does not state that AI will always prioritize well-structured pages. Context sufficiency is one factor, but with this specific method, confidence scores also influence AI-generated responses by intervening with abstention decisions. The abstention thresholds dynamically adjust based on these signals, which means the model may choose to not answer if confidence and sufficiency are both low.

While pages with complete and well-structured information are more likely to contain sufficient context, other factors such as how well the AI selects and ranks relevant information, the system that determines which sources are retrieved, and how the LLM is trained also play a role. You can’t isolate one factor without considering the broader system that determines how AI retrieves and generates answers.

If these methods are implemented into an AI assistant or chatbot, it could lead to AI-generated answers that increasingly rely on web pages that provide complete, well-structured information, as these are more likely to contain sufficient context to answer a query. The key is providing enough information in a single source so that the answer makes sense without requiring additional research.

What are pages with insufficient context?

  • Lacking enough details to answer a query
  • Misleading
  • Incomplete
  • Contradictory
  • Requiring prior knowledge that the page doesn’t provide

The necessary information to make the answer complete is scattered across different sections instead of presented in a unified response.

Google’s Quality Raters Guidelines (QRG), written for its third-party raters, contain concepts similar to context sufficiency. For example, the QRG defines low-quality pages as those that don’t achieve their purpose well because they fail to provide necessary background, details, or relevant information for the topic.

Passages from the Quality Raters Guidelines:

“Low quality pages do not achieve their purpose well because they are lacking in an important dimension or have a problematic aspect”

“A page titled ‘How many centimeters are in a meter?’ with a large amount of off-topic and unhelpful content such that the very small amount of helpful information is hard to find.”

“A crafting tutorial page with instructions on how to make a basic craft and lots of unhelpful ‘filler’ at the top, such as commonly known facts about the supplies needed or other non-crafting information.”

“…a large amount of ‘filler’ or meaningless content…”

Even if Google’s Gemini or AI Overviews doesn’t implement the inventions in this research paper, many of the concepts described in it have analogues in Google’s Quality Raters Guidelines, which themselves describe qualities of high-quality web pages that SEOs and publishers who want to rank should internalize.

Read the research paper:

Sufficient Context: A New Lens on Retrieval Augmented Generation Systems

Featured Image by Shutterstock/Chris WM Willemsen

Agentic AI In SEO: AI Agents & Workflows For Ideation (Part 1) via @sejournal, @VincentTerrasi

For more than two years, a new concept has been emerging called Agentic SEO.

The idea is to perform SEO using agents based on large language models (LLMs) that perform complex tasks autonomously or semi-autonomously to save time for SEO experts.

Of course, humans remain in the loop to guide these agents and validate the results.

Today, with the advent of ChatGPT, Claude, Gemini, and other powerful LLM tools, it is easy to automate complex processes using agents.

Agentic SEO is, therefore, the use of AI agents to optimize SEO productivity. It differs from Generative Engine Optimization (GEO), which aims to improve SEO to be visible on search engines powered by LLMs such as SearchGPT, Perplexity, or AI Overviews.

This concept is based on three main levers: Ideation, Audit, and Generation.

In this first chapter, I will focus on ideation because there is so much to explore.

In our next article, we will see how this concept can be applied to auditing (full website analysis with real-time corrections), and how missing content can be generated using a “Human in the Loop” – or rather “SEO Expert in the Loop” – approach.

AI Agents And Workflows

Before presenting detailed use cases regarding ideation, it is essential to explain the concept of an agent.

AI Agent

Image from author, February 2025

AI agents need at least five key elements to function:

  • Tools: These are all the resources and technical functionalities available to the agent.
  • Memory: This is used to store all interactions so that the agent can remember information previously shared in the discussion.
  • Instructions: Which define its limits, its rules.
  • Knowledge: This is the database that contains the concepts that the agent can use to solve problems; it can use the knowledge of the LLM or external databases.
  • Persona: Which defines its “personality” and often its level of expertise, including, in particular, its way of interacting.
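Those five elements can be sketched as a simple data structure (field names are illustrative, not any framework’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class AIAgent:
    """Minimal sketch of the five agent elements described above."""
    persona: str                      # "personality" and level of expertise
    instructions: list[str]           # rules that define the agent's limits
    tools: dict[str, callable] = field(default_factory=dict)
    knowledge: list[str] = field(default_factory=list)   # facts/docs beyond the LLM
    memory: list[tuple[str, str]] = field(default_factory=list)  # (role, message)

    def remember(self, role: str, message: str) -> None:
        """Store an interaction so later turns can reference it."""
        self.memory.append((role, message))
```

Real agent frameworks add orchestration on top, but every one of them maps back to these five slots in some form.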

Workflow

Workflows allow complex tasks to be broken down into simpler subtasks and chained together logically.

They are useful in SEO because they facilitate the collection and manipulation of data needed to perform specific SEO actions.

Furthermore, in recent months, AI providers (OpenAI, Claude, etc.) have moved from simply offering the model as such to enriching the user experience.

For example, the Deep Research feature in ChatGPT or Perplexity is not a new model, but a workflow that allows complex searches to be performed in several steps.

This process, which would take a human several hours, is carried out by AI agents in a few tens of minutes.

Image from author, February 2025

The diagram above illustrates a simple SEO workflow that starts with “Data & Constraints,” which feeds a tool called “Tools SEO 1” to perform a specific action (such as SERP analysis or scraping).

Next, two AIs (AI 1 and AI 2) intervene to generate specific content, and then comes the “HITL” (Human In The Loop) step before reaching the deliverables.

Although AI and automation play a central role, human supervision and expertise remain essential to ensure quality results.

Use-Case: Ideation

Let’s start with ideation. As you know, AI excels at opening up possibilities.

With the right methods, it is possible to push AI to explore every conceivable idea on a topic.

An SEO expert will then select, refine, and prioritize the best suggestions based on their experience.

Numerous experiments have demonstrated the positive impact of this synergy between human creativity and artificial intelligence.

Below, Ethan Mollick’s diagram posted on X (Twitter) illustrates a benchmark of the creative process with and without AI:

The figure shows the distribution of creativity scores (from 0 to 10) assigned to different sources: ChatGPT, Bard (now Gemini), a human control group (HumanBaseline), a human group working with AI (HumanPlusAI), and another group working against AI (HumanAgainstAI).

The horizontal axis represents the perceived level of creativity, while the vertical axis indicates the frequency of each score (density).

We can see that the curve corresponding to HumanPlusAI is generally shifted to the right, meaning that evaluators consider this human+AI collaboration to be the most creative approach.

Conversely, the average scores of ChatGPT and Gemini, although high, remain below those obtained by the human-machine synergy.

Finally, the HumanBaseline group (humans alone) is just below the performance of the Human+AI duo, while the HumanAgainstAI group is the least creative.

AI alone can produce impressive results, but it is in combination with human expertise and sensitivity that the highest levels of creativity are achieved. Let me give you some concrete examples.

Tools Like Deep Research

Among the tools available, Deep Research stands out for its ability to conduct in-depth research in several steps, providing a valuable source of inspiration for ideation.

I recommend using this open-source version; if you prefer, you can also use the OpenAI or Perplexity versions.

How Does It Work?

This diagram describes the operation of the Open Source Deep Research tool.

It generates and executes search queries, crawls the resulting pages, then recursively explores promising leads, and finally produces a detailed report in Markdown format.

Image from author, February 2025

There are several steps to using Deep Research:

  1. Enter your query: You will be asked to enter your query. You must try to be as precise as possible. Do not hesitate to ask ChatGPT or Claude to create your DeepResearch search.
  2. Specify the breadth of the search (recommended: between 3 and 10, default: 6): How many topics are explored in each iteration?
  3. Specify the depth of exploration (recommended: between 1 and 5, default: 3): If the crawler finds an interesting topic, how many pages deep will it explore?
  4. Refinement: Sometimes, you need to answer follow-up questions to refine the direction of the search.
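The breadth/depth loop described above can be sketched as a recursive function (a toy illustration, not the actual tool’s code; `search` is a hypothetical stand-in for the query-and-crawl step):

```python
def deep_research(query, search, breadth=6, depth=3, _findings=None):
    """Toy sketch of a breadth/depth research loop.

    `search(query, limit)` is a hypothetical stand-in for any function
    returning (summary, follow_up_queries) pairs for a query. `breadth`
    caps topics per iteration; `depth` caps recursive exploration.
    """
    findings = _findings if _findings is not None else []
    if depth == 0:
        return findings  # exploration budget exhausted
    for summary, follow_ups in search(query, limit=breadth):
        findings.append(summary)
        # Recursively explore promising leads, one level shallower.
        for follow_up in follow_ups[:breadth]:
            deep_research(follow_up, search, breadth, depth - 1, findings)
    return findings
```

The real tool additionally deduplicates leads and writes the accumulated findings into a Markdown report, but the recursion above is the core shape.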

You can turn this open-source project into a real SEO tool. I have identified more than four use cases:

  • Competitor Content Analysis: The tool can automate the collection and analysis of competitors’ content to identify their strategies and spot opportunities for differentiation.
  • Long-Tail Keyword Research: By analyzing the web, it can identify specific keywords with high potential and less competition, facilitating content optimization.
  • SERP Analysis: It can collect and analyze search engine results to understand trends and competitors’ positioning.
  • Content Idea Generation: Based on in-depth research, it can identify relevant topics and frequently asked questions in a given niche.

For example, you can install CursorAI, a code generation tool, and ask it to modify the code to create a SERP analysis. The tool will easily make all the necessary changes.

With Agentic SEO, it is possible not only to customize and improve existing tools but, more importantly, to create your own tool to suit your specific needs.

On the other hand, if you are not a developer at all, I advise you to use a no-code solution.

No-Code Agent Workflow Tools

Here is an example of a no-code tool called Dng.ai.

We use a CSV file provided by Moz, which we analyze using an agent capable of processing the data, generating Python code, and extracting all the necessary information.

In blue, you have the input fields that serve as a starting point; then, in orange, you have tools like scrapers, crawlers, and keyword tools to extract all the necessary data; and finally, in purple, you have the AIs that identify all the clusters that need to be created.

Image from author, February 2025

The agent then compares this data with the topics already on your site to identify missing content.

Finally, it generates a complete list of topics to create, ensuring optimal coverage of your SEO strategy. There are many no-code tools for building Agentic workflows.

I won’t list them all, but as you can see with this tool, an interface is automatically generated from the workflow: All you need to do is specify your topic and a URL and press the run button to get results in less than two minutes.

Image from author, February 2025

Explore The Full Potential Of This Tool For Yourself

I leave you to appreciate the results of a tool that is built from the SEO data of any tool.

Image from author, February 2025

I think I could have made more than two hours of video on YouTube just on the ideation aspect, as there is so much to say and test.

I now invite you to explore the full potential of these tools and experiment with them to optimize your SEO strategy, and next time, I will cover audit use cases with Agentic SEO.



Featured Image: jenny on the moon/Shutterstock

Google Opens Gemini Deep Research To Free Users (With Limits) via @sejournal, @MattGSouthern

Google announced it will make its Deep Research feature available to all users for free on a limited basis, while introducing several updates to Gemini.

With this rollout, Gemini is now equipped with enhanced reasoning capabilities, personalization features, and expanded app connectivity.

Free Access with Limitations

Google’s Deep Research tool, which processes information from multiple websites and documents, will now be accessible to non-paying users “a few times a month.”

Gemini Advanced subscribers will continue to have more extensive access to the feature.

The company describes Deep Research as an AI research assistant that searches and synthesizes web information.

Google reports the feature has been updated with its Flash Thinking 2.0 model, which displays its reasoning process while browsing.

Google stated in its announcement:

“Gemini users can try Deep Research a few times a month at no cost, and Gemini Advanced users get expanded access to Deep Research.”

The feature is rolling out in more than 45 languages.

Model Updates

The Flash Thinking 2.0 model has been updated to include file upload capabilities and faster processing speeds.

For paid subscribers, the system now processes up to 1 million tokens in a context window.

Dave Citron, Senior Director of Product Management for the Gemini app, stated in the announcement that the updated model is “trained to break down prompts into a series of steps to strengthen its reasoning capabilities.”

Testing has shown the system can still make errors in both analysis and conclusions, the company acknowledged.

Additional Features

Google also announced a new experimental personalization feature that connects with users’ Google apps and services. The feature uses data from search history to provide tailored responses to queries such as restaurant recommendations.

Additional app integrations now include Calendar, Notes, Tasks, and Photos, allowing users to make requests involving multiple applications. Google Photos integration is planned for the coming weeks.

Lastly, Google announced that its Gems feature, which lets users create customized AI assistants for specific topics, is now available to all users at no cost.

These updates are available now at gemini.google.com.


Featured Image: Screenshot from blog.google.com, March 2025. 

Google Search History Can Now Power Gemini AI Answers via @sejournal, @martinibuster

Google announced an update to its Gemini personal AI assistant that increases personalization of responses so that it anticipates users’ needs and feels more like a natural personal assistant than a tool. Examples of how the new Gemini will help users include brainstorming travel ideas and making personalized recommendations.

The new feature rolls out first to desktop and then to mobile apps.

Gemini With Personalization

Google announced a new version of Gemini that adapts responses to a user’s unique interests. It does this based on their search history which enables Gemini to deliver responses with a higher level of contextual relevance and personalization. Google intends to expand personalization by integrating other Google apps and services, naming Photos and Images as examples.

Google explained:

“In the coming months, Gemini will expand its ability to understand you by connecting with other Google apps and services, including Photos and YouTube. This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you.”

How Personalization Works

Users can share personal preferences and details, like dietary requirements or their partner’s name, to obtain responses that feel specific to the individual. Advanced users can allow Gemini to access past chats to further improve the relevance of responses.

Google’s access to search history and data from other apps may give it an advantage that competing apps like ChatGPT may not be able to match.

Personalization Is Opt-In

There are four key points to understand about personalization in Gemini:

  1. Personalization is currently an opt-in feature that’s labeled “experimental.”
  2. Users need to choose to use Personalization from the model drop-down menu in order to activate it.
  3. Gemini asks for permission to connect to search history and other Google services and apps before it uses them for personalization.
  4. Users can also disconnect from the feature.

That means millions of Gemini users won’t suddenly begin getting an increasing amount of information from a contextual AI assistant instead of search. But the door to that happening now exists, and the next step is for Google users to open it.

What Publishers Need To Know

This update further blurs the line between traditional Search and Google’s assistant while making information accessible in new ways. Publishers and SEOs should be concerned enough to research how to respond.

Privacy considerations may keep Google from turning personalization into an opt-out feature, and personalization is currently opt-in via a drop-down menu because it’s still experimental. But once it matures, it’s not unreasonable to assume that Google may begin nudging users to adopt it.

Even though this is an experimental feature, publishers and SEOs may want to understand how it impacts them. Is it possible to track personalized Gemini referral traffic, or will it be masked because of privacy considerations? Will answers from Gemini reduce the need for clicks to publisher sites?

Read Google’s announcement:

Gemini gets personal, with tailored help from your Google app

Featured Image by Shutterstock/Tada Images

AI Model Showdown: Top Choices For Text, Image, & Video Generation via @sejournal, @MattGSouthern

With so many AI models available today, it’s tough to decide where to begin. A recent study from Quora’s Poe provides guidance for those unsure about which models to choose.

The study analyzes millions of interactions to highlight the most popular tools for generating text, images, and videos.

With nearly every tech company offering an AI solution, it’s easy to get overwhelmed by choices. Poe’s data clarifies which models are trusted and widely used.

Whether you’re new to AI or experienced, this report shows trends that can help you find the best models. Remember that this data represents Poe subscribers and may not reflect the broader AI community.

Text Generation Trends

A Two-Way Race

The study shows that among Poe subscribers, Anthropic models are quickly becoming as popular as OpenAI, especially after the release of Claude 3.5 Sonnet. The usage of text models from both providers is now almost evenly split.

Rapid Adoption of New Releases

Poe users often switch to the latest models, even if loyal to a specific brand. For example, people rapidly move from OpenAI’s GPT-4 to GPT-4o or from Claude 3 to Claude 3.5.

Emerging Players

DeepSeek’s R1 and V3 have captured about 7% of the messages on Poe. Google’s Gemini family has seen a slight decline in use among Poe subscribers but remains a key player.

Image Generation Trends

Market Share of Early Movers

DALL-E 3 and Stable Diffusion were once leaders in image generation, but their shares have dropped by about 80%. This decline occurred as the number of image generation models increased from three to around 25.

Leading Models

The FLUX family from Black Forest Labs is now the leading image model, holding a nearly 40% share, while Google’s Imagen 3 family has about a 30% share.

Smaller Models

Smaller image providers like Playground and Ideogram update their services frequently, which helps them maintain a loyal user base. However, they only account for about 10% of Poe’s image generation usage.

Video Generation Trends

An Emerging Industry

Video generation was almost nonexistent on Poe until late 2024, but it has quickly grown in popularity. Now, at least eight providers offer this ability.

Runway: Most Used Model

Runway’s single video model handles 30–50% of video generation requests. Although its usage is lower than it used to be, many people still choose this brand.

New Player: Veo-2

Since launching on Poe, Google’s Veo-2 has gained about 40% of the market, showing how quickly customer preferences can change. Other new models, such as Kling-Pro v1.5, Hailuo-AI, HunyuanVideo, and Wan-2.1, have captured around 15% of the market.

Key Takeaway & Looking Ahead

The data shows a clear pattern of newer models replacing older ones in user preference. If you want the best performance, use the latest version rather than stick with familiar but outdated models.

Whether these usage patterns will hold steady or continue to shift remains to be seen. At some point, cost will be a barrier to adoption, as new models tend to get more expensive with every release.

In future reports, Poe plans to share insights on how different models fit various tasks and price points.


Featured Image: stokkete/Shutterstock

Deep SEO: The Potential Impact Of AI Mode And Deep Search Models via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Last week, Google officially launched AI Mode, an AI Overview on steroids, in beta.

Almost one year after the launch of AI Overviews in the search results, we have enough data to show that the net impact on the open web is as positive as Trump’s tariffs on the U.S. economy. Spoiler: Not very positive.

AI Overviews take more clicks away from classic search results than they give back to cited sources. AI Mode has the potential to rip that traffic deficit wide open.

I argue that AI Mode is the natural response to Deep Research agents and a preview of the future of AI Search.

Image Credit: Lyna ™

Deep Research Threatens Google

Deep Research is an AI agent by OpenAI that creates lengthy reports about a subject of your choice: “An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you.”

The most obvious application is market research, but the agent can also deliver rich insights into consumer topics like buying a car, booking a trip, or applying for credit.

Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It can be equally useful for discerning shoppers looking for hyper-personalized recommendations on purchases that typically require careful research, like cars, appliances, and furniture.

Image Credit: Kevin Indig

Deep Research performs tens to hundreds of searches to compile a report. I’ve tried prompts for purchase decisions.

When I asked for “the best hybrid family car with 7 seats in the price range of $50,000 to $80,000,” Deep Research browsed through 41 search results and reasoned its way through the content.

Some examples of search reasoning:

I found a Kelley Blue Book article on 7-seat hybrids. It’s a good starting point, though not all details are always included.

I’m digging into cargo capacities for the Toyota Highlander and Grand Highlander. The Grand Highlander is proving to be more spacious, making it a solid option for larger families.

I’m looking at the XC90 Recharge’s electric and combined ranges, and the discrepancies in MPGe figures from different sources.

The report took 10 minutes to put together but probably saved human hours of research and at least 41 clicks – clicks that could’ve gone to Google ads.

The Case For Deep Search

Are Deep Research agents a threat to Google? I think so.

Here’s why:

  1. The results are impressive, and the time savings are massive. Early on, Google bragged about how quickly it put the search results together. But that was speed to results, not speed to answer. Today, Deep Research agents take minutes to produce an answer, but that one answer is all you need.
  2. There is massive potential for personalization, from sources to search criteria.
  3. Conversational back and forth, just like with a salesperson in a store. Deep Research agents provide a concise summary that users can expand and explore at their own pace.
  4. It turns out that every search engine or AI chatbot either already has a Deep Search agent or is working on one. It could truly be the future of Search for complex queries.
Image Credit: Kevin Indig

Bing has had a “Deep Search” feature since December 2023! And it does exactly what the name promises, just faster and not as deep as ChatGPT’s agent.

Today’s search engines are powerful tools that help us find information on the web, but sometimes they fall short of our expectations. When we have complex, nuanced, or specific questions, we often struggle to find the answers we need. We ourselves know what we’re looking for, but the search engine just doesn’t seem to understand.

That’s why we created deep search, a new Microsoft Bing feature that provides even more relevant and comprehensive answers to the most complex search queries. Deep search is not a replacement for Bing’s existing web search, but an enhancement that offers the option for a deeper and richer exploration of the web.1

I didn’t think I’d live long enough to see the day that Google copies Bing … But they’re not alone.

Grok has “Deep Search” and Gemini and Perplexity have “Deep Research.” Everyone is copying each other, and they’re not even putting in the effort to choose a different name. What a strong sign of commoditization.

Google’s AI Mode (source)

My theory: Google modeled AI Mode after Bing’s Deep Search after seeing what ChatGPT’s Deep Research can do.

Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.2

Interestingly, AI Mode has the opposite effect of AI Overviews: In Google’s Q3 earnings announcement, Sundar Pichai said Google sees an “increase in search usage among people who use the new AI overviews”.3

So, AI Overviews lead to more searches, but AI Mode saves users time and queries:

You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.4

I don’t think we’ll ever go back to the pre-AI way of search. The universal key challenge of AI answers, whatever their form, is trust. The obvious problem is hallucination.

It’s ironic that ChatGPT Deep Research tells me it browsed through 29 sources, but when I counted, I found 41.

However, reasoning models are getting better at solving this problem with raw computing, i.e., by “thinking harder” about their answers.

The bigger solvable problem for Deep Search agents is source selection.

Untrustworthy sources are the microplastics of AI answers. There is a good reason why all reasoning models openly show their reasoning.

Even though we might pay as much attention to the reasoning details as to any Terms of Service, they make us feel like a lot is happening in the background.

Perception is important for trust. However, source selection is a very solvable problem: Users can simply tell the model to ignore the sources they don’t want, and the model memorizes that behavior over time.

Two less solvable problems remain:

  • Bias: In my analysis of AI chatbot research, I pointed out that LLMs have a bias towards global brands, luxury brands, corporate sources and prompt sentiment.
  • Access: Information needs to be on the internet for Deep Search agents to find it (that’s where Google and Bing have a big competitive advantage).

The biggest question, of course, is whether Deep Search Agents will find broad adoption or stay in the knowledge worker bubble.

AI Mode could bring it to the masses and drive the stake deeper into the heart of informational clicks.

The Impact On SEO

AI Overviews spiked in November ‘24 and February ‘25 (Image Credit: Kevin Indig)

The impact of AI Overviews on SEO traffic is negative.

In my meta-analysis of 19 studies about AI Overviews, I found that AIOs reduce click-through rates across the board. Will AI Mode make it worse? Most likely. But there is hope.

First of all, Deep Search agents are very transparent with their sources and sometimes queries.

ChatGPT’s Deep Search literally calls out what it’s searching for, so we can hopefully track and optimize for these queries. So far, LLMs still rely on search results a lot.

Second, just because searchers get answers before clicking through to websites doesn’t mean their purchase intent goes away.

What goes away for marketers is the ability to influence buyers on their website before they buy, at least as long as AI chatbots don’t offer a direct checkout.

We’ll need to find other ways to influence buyers: brand marketing, Reddit, YouTube, social media, advertising.

Third, there is a chance that AI Mode shows up predominantly for informational keywords, just like AI Overviews. In that case, a lot of weight will fall on high-intent keywords, like “buy x” or “order y.”

Fourth, Bing doesn’t separate the Deep Search answer but parks it in the middle of organic and paid results, garnished with links to sources. Hopefully, users will still click outside the deep answer.

I wonder how Google plans to monetize AI Mode, which must be more costly and resource-intensive than classic search.

To be fair, Google reduced the cost of an AI Overview by 90%, which tells me they figured out the unit economics. So, it’s possible.

But could this be an opportunity to bring the idea of monetizing Search partially with subscriptions back on the table?

Based on a report by The Information, OpenAI is considering charging “up to $20,000 per month for specialized AI agents” that could perform PhD-level research, $10,000 for a software developer agent, and $2,000 for a knowledge worker agent.5

Still a long way to go, but it brings up a nice theory about AI Mode: What if Google users could pay for better models that give better answers, or have better skills?


1 Introducing deep search

2 Expanding AI Overviews and introducing AI Mode

3 Q3 earnings call: CEO’s remarks

4 Expanding AI Overviews and introducing AI Mode

5 OpenAI Plots Charging $20,000 a Month For PhD-Level Agents


Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: How Can You Distinguish Yourself In This Era Of AI Search Engines? via @sejournal, @HelenPollitt1

Today’s question comes from FC, who asks:

“As an SEO specialist for over 6 years now, what and where does one need to focus with regard to SEO in this current dispensation.

How can you distinguish yourself and standout as an SEO specialist in this era of generative AI and AI search engines?”

This is an excellent question because it goes right to the heart of concerns I hear from a lot of SEO professionals. They have managed to build a solid career and name for themselves as an SEO specialist, but it now feels like the game has changed.

They worry that the skills and experience that got them to this point will not be enough to keep them excelling.

I want to address those concerns, both from the perspective of job seekers and those looking to make an impression in their current role.

What’s Changed

Up until a couple of years ago, it felt like there were clear career choices for SEO specialists to make.

Employed or self-employed? In-house or agency? Technical SEO or content SEO? Small business or enterprise sites? People manager or hands-on practitioner?

This series of decisions, or simply the circumstances we found ourselves in, shaped our career paths.

There were central components to SEO. Primarily, you would be working with Google. You would be measured on key performance indicators (KPIs) like clicks and conversions.

You could impress stakeholders by linking your work directly to revenue.

It doesn’t seem as simple as that now, though.

LLMs And Social Media

More recently, the focus has widened to optimizing brands’ presence on other search platforms, not just Bing, Yandex, Baidu, and other regionally relevant search engines.

It now includes platforms not traditionally thought of as belonging to the purview of SEO: TikTok, Perplexity AI, and app stores.

KPIs And Metrics

Google’s walled garden is growing larger, and proving the worth of SEO is getting harder. It’s increasingly difficult to show growth in your share of organic clicks when the pot is getting smaller.

With more answers being given in the search results themselves, and a reduction in the need for clicks off the SERPs, tracking the impact of SEO isn’t straightforward.

With potential – and current – employers still looking at year-on-year clicks, impressions, and revenue growth as their measure of an SEO’s success, this makes standing out quite challenging.

The Skills That Remain Important

I fundamentally believe that the foundational principles of SEO remain unchanged.

However, how we apply them may change with the advent of LLMs and other search platforms.

Technical SEO

A crawl issue that is preventing Googlebot smartphone from accessing the key pages on your site will likely also affect PerplexityBot and OpenAI’s OAI-SearchBot.

As SEOs, we will need to be able to identify where these bots are struggling to crawl pages. We will need to find solutions that enable them to access the pages we want to have served in their search results.
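One practical starting point is your server access logs. The sketch below, a minimal example assuming logs in the common combined format, counts HTTP status codes per AI crawler so you can spot bots hitting errors on key pages. The bot name list is illustrative; check each provider’s documentation for the current user-agent tokens.

```python
from collections import Counter

# Substrings that identify common AI crawlers in the User-Agent header
# (illustrative list; verify against each provider's documentation).
AI_BOTS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot", "Googlebot"]

def bot_status_counts(log_lines):
    """Count HTTP status codes per AI crawler from combined-format log lines.

    A 403 or 5xx spike for a given bot suggests it is being blocked
    or failing on pages you want served in AI search results.
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split('"')
        # Combined log format: parts[1] is the request line,
        # parts[5] is the user agent, and the status code is the
        # first token after the closing quote of the request.
        if len(parts) < 6:
            continue
        tokens = parts[2].split()
        if not tokens:
            continue
        status = tokens[0]
        user_agent = parts[5]
        for bot in AI_BOTS:
            if bot in user_agent:
                counts[(bot, status)] += 1
    return counts
```

Running this over a day of logs and printing the counter gives a quick per-bot health check without any third-party tooling.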

To stand out, make sure you are not just thinking Google-first with your technical solutions.

Consider the other sources of traffic, like LLMs and social media, which might be impacted by the decisions you are making.

Ensure you are also tracking and reporting on the impact of these changes across these other platforms.

Content SEO

Understanding what content searchers are looking for, how search engines perceive it, and what they are choosing to serve as search results is a fundamental aspect of SEO. This won’t change.

However, how you discuss it and the actions you take will change.

From now on, not only are the Google algorithms important for how you create and optimize content, but so are a host of other algorithms.

You will need to consider how searchers are surfacing content through other search platforms. You will also need to know how to make sure your content is served as the result.

Make sure you are moving away from Google as the only algorithm to optimize for and towards the other drivers of traffic and visibility.

Digital PR

I would suggest that digital PR is becoming even more important.

As the search engines we are optimizing for become more numerous, the key factor that seems to unite them is a reward of “authority.”

That is, to give your content a chance of being served as a result in any search engine, it needs to be perceived as authoritative on the subject.

These newer search platforms will still need to use similar methods to Google in identifying expertise and authoritativeness. Digital PR will be key in that.

I do feel that we need to stop making backlinks the main priority of digital PR, however.

Instead, we need to start focusing on how we report on mentions, citations, and conversations about brands and products.

For example, we can look at social media engagement metrics as an indicator of authority. Brand perception may well be formed through forum discussions, reviews, and comments on social media sites.

Just because we know that Googlebot discounts links from some social media platforms in attributing authority doesn’t mean that the newer search engines will. Indeed, they may well rely heavily on social media sites to understand brands.

For now, set yourself apart by rethinking the purpose of digital PR for SEO. Look at the benefits to the brand as a whole and start factoring this into your strategies.

“Soft” Skills

I maintain that the most successful SEO professionals are those who have mastered the non-SEO-specific skills that make businesses work.

Strategic thinking, stakeholder management, and leadership skills are all critical to success not only in SEO, but also in any career.

To really stand out in the changing SEO industry, focus on how these skills will need to be applied.

For example, factor in social media and LLMs into your SEO strategies. Make sure you are not just focusing on Google, but introducing the idea that SEO is broader than that.

Make sure you are liaising with development teams to loop them into your ideas for how to make the site accessible to AI bots. Work on being a thought leader in LLMs and new search platforms for your company.

These sorts of skills are those that will really make you stand out, but you need to apply them with the future of SEO in mind. Future-proof your careers as well as your websites!

Cross-Platform Knowledge

This is probably the hardest one for some SEO specialists to do. Stop looking at Google as the source of all SEO performance and widen the net.

Get comfortable with the other AI search platforms that are beginning to send traffic to your site. Use them yourself, and get familiar with what sort of content they serve and why.

Use social media sites and forums that are where your audience discusses brands like yours. Make sure that you are aware of how they work, and how to participate in those discussions without negative backlash.

Stand out by looking outside of the narrow “Google is SEO” box.

Being An Expert In The New Era Of SEO

How, then, can you guarantee that you are still perceived as an expert in SEO while the goalposts are changing?

What will make you stand out when you are applying for new jobs right now?

How can you prove that your skillset is still relevant whilst others are proclaiming “SEO is dead” (again)?

Demonstrate Impact Through Other Channels

Look at how you can collaborate more with adjacent channels.

For example, I’ve mentioned that social media and forums will be key areas where LLMs will discern brand relevancy and trustworthiness. Work with your teams who are already on those platforms.

Start helping them in areas where you are already an expert, for example: understanding algorithms, creating optimized content, and measuring brand authority.

Drive impact in those areas and report on it alongside your more traditional SEO metrics.

Demonstrate Impact Through Other Metrics That Still Line Up With Corporate Goals

Although we are used to reporting on metrics like clicks, rankings, and impressions for SEO, we may need to start looking at other metrics if we want to continue showing the worth of SEO.

For example, consider utilizing tools like Otterly and Goodie to measure visibility in AI search platforms. Or, at the very least, some of the more traditional search engine rankings tools also cover Google’s AI Overview visibility.

Use these tools to demonstrate how the work you are doing is impacting the brand’s performance in AI search platforms.

Continue to relate all work you do back to revenue, or other core conversion goals for your business. Don’t forget to show how traffic from LLMs is converting on your site.
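If your analytics tool doesn’t label LLM traffic out of the box, a simple referrer classifier can. This is a minimal sketch; the hostname list is an assumption based on referrers commonly seen today, so verify it against the referrers in your own analytics.

```python
from urllib.parse import urlparse

# Referrer hostnames for AI assistants (assumed list; confirm
# against the referrer strings you actually see in your data).
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer_url):
    """Return the AI platform name for a session's referrer, or None."""
    host = urlparse(referrer_url).netloc.lower()
    for domain, platform in AI_REFERRERS.items():
        # Match the domain itself or any subdomain of it.
        if host == domain or host.endswith("." + domain):
            return platform
    return None
```

Tagging sessions this way lets you segment LLM referrals in conversion reports and relate them back to revenue, just like any other channel.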

Continue Learning

A key way to stand out in your SEO career at the moment is to show a willingness to upskill and diversify your skillset.

The SEO landscape is shifting, and as such, it’s important to stay on top of new platforms and how they work.

Make sure you are utilizing training that is available on LLM optimization. Use the platforms yourself so you can understand what search real estate is available on them.

Share your findings in interviews and discussions with colleagues so you are highlighting what you’ve learned.

Although this may seem basic, you may find there are a lot of SEO professionals out there with their heads still buried in the sand when it comes to the evolution of the discipline.

Stand Out By Being Adaptable

At the end of the day, SEO is changing. That doesn’t mean that the skills we’ve developed over the past years are obsolete.

Instead, they are even more in demand as new platforms promise new avenues to reach prospective audiences.

The best way to stand out as an SEO in the current era of SEO is by being adaptable.

Learn how to apply your SEO skills to these emerging platforms and track your success.



Featured Image: Paulo Bobita/Search Engine Journal

AI Writing Fingerprints: How To Spot (& Fix) AI-Generated Content via @sejournal, @MattGSouthern

New research shows that ChatGPT, Claude, and other AI systems leave distinctive “fingerprints” in their writing.

Here’s how you can use this knowledge to identify AI content and improve your AI-assisted output.

The AI Fingerprint: What You Need to Know

Researchers have discovered that different AI writing systems produce text with unique, identifiable patterns.

By analyzing these patterns, researchers achieved 97.1% accuracy in determining which AI wrote a particular piece of content.

The study (PDF link) reads:

“We find that a classifier based upon simple fine-tuning text embedding models on LLM outputs is able to achieve remarkably high accuracy on this task. This indicates the clear presence of idiosyncrasies in LLMs.”
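The paper fine-tunes text embedding models; as a far simpler stand-in that still captures the word-level idea, here is a multinomial naive Bayes classifier over word counts. The training phrases are toy examples loosely based on the phrase lists later in this article, not the study’s actual data.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """Fit a multinomial naive Bayes model over word frequencies.

    samples: list of (text, model_label) pairs.
    Returns per-label word counts and per-label document counts.
    """
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Return the most likely label for text under the fitted model."""
    vocab = {word for counts in word_counts.values() for word in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, doc_count in label_counts.items():
        # Log prior plus Laplace-smoothed log likelihood of each word.
        score = math.log(doc_count / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Even this crude model separates texts whose word distributions differ, which is the core of the study’s finding that model identity is largely encoded at the word level.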

This matters for two reasons:

  • For readers: As the web becomes increasingly saturated with AI-generated content, knowing how to spot it helps you evaluate information sources.
  • For writers: Understanding these patterns can help you better edit AI-generated drafts to sound more human and authentic.

How To Spot AI-Generated Content By Model

Each major AI system has specific writing habits that give it away.

The researchers discovered these patterns remain even in rewritten content:

“These patterns persist even when the texts are rewritten, translated, or summarized by an external LLM, suggesting that they are also encoded in the semantic content.”

1. ChatGPT

Characteristic Phrases

  • Frequently uses transition words like “certainly,” “such as,” and “overall.”
  • Sometimes begins answers with phrases like “Below is…” or “Sure!”
  • Periodically employs qualifiers (e.g., “typically,” “various,” “in-depth”).

Formatting Habits

  • Utilizes bold or italic styling, bullet points, and headings for clarity.
  • Often includes explicit step-by-step or enumerated lists to organize information.

Semantic/Stylistic Tendencies

  • Provides more detailed, explanatory, and context-rich answers.
  • Prefers a somewhat formal, “helpful explainer” tone, often giving thorough background details.

2. Claude

Characteristic Phrases

  • Uses language like “according to the text,” “based on,” or “here is a summary.”
  • Tends to include shorter transitions: “while,” “both,” “the text.”

Formatting Habits

  • Relies on simple bullet points or minimal lists rather than elaborate markdown.
  • Often includes direct references back to the prompt or text snippet.

Semantic/Stylistic Tendencies

  • Offers concise and direct explanations, focusing on the key point rather than lengthy detail.
  • Adopts a practical, succinct voice, prioritizing clarity over elaboration.

3. Grok

Characteristic Phrases

  • May use words like “remember,” “might,” “but also,” or “helps in.”
  • Occasionally starts with “which” or “where,” creating direct statements.

Formatting Habits

  • Uses headings or enumerations but may do so sparingly.
  • Less likely to embed rich markdown elements compared to ChatGPT.

Semantic/Stylistic Tendencies

  • Often thorough in explanations but uses a more “functional” style, mixing direct instructions with reminders.
  • Doesn’t rely heavily on nuance phrases like “certainly” or “overall,” but rather more factual connectors.

4. Gemini

Characteristic Phrases

  • Known to use “below,” “example,” “for instance,” sometimes joined with “in summary.”
  • Might employ exclamation prompts like “certainly! below.”

Formatting Habits

  • Integrates short markdown-like structures, such as bullet points and occasional headers.
  • Occasionally highlights key instructions in enumerated lists.

Semantic/Stylistic Tendencies

  • Balances concise summaries with moderately detailed explanations.
  • Prefers a clear, instructional tone, sometimes with direct language like “here is how…”

5. DeepSeek

Characteristic Phrases

  • Uses words like “crucial,” “key improvements,” “here’s a breakdown,” “essentially,” “etc.”
  • Sometimes includes transitional phrases like “at the same time” or “also.”

Formatting Habits

  • Frequently employs enumerations and bullet points for organization.
  • May have inline emphasis (e.g., “key improvements”) but not always.

Semantic/Stylistic Tendencies

  • Generally thorough responses that highlight the main takeaways or “breakdowns.”
  • Maintains a relatively explanatory style but can be more succinct than ChatGPT.

6. Llama (Instruct Version)

Characteristic Phrases

  • “Including,” “such as,” “explanation the,” “the following,” which signal examples or expansions.
  • Sometimes references step-by-step guides or “how-tos” within text.

Formatting Habits

  • Levels of markdown usage vary; often places important points in numbered lists or bullet points.
  • Can include simple headers (e.g., “## Topic”) but less likely to use intricate formatting than ChatGPT.

Semantic/Stylistic Tendencies

  • Maintains a somewhat formal, academic tone but can shift to more conversational for instructions.
  • Sometimes offers deeper analysis or context (like definitions or background) embedded in the response.

7. Gemma (Instruct Version)

Characteristic Phrases

  • Phrases like “let me,” “know if,” or “remember” often appear.
  • Tends to include “below is,” “specific,” or “detailed” within clarifications.

Formatting Habits

  • Similar to Llama, frequently uses bullet points, enumerations, and occasionally bold headings.
  • May incorporate transitions (e.g., “## Key Points”) to segment content.

Semantic/Stylistic Tendencies

  • Blends direct instructions with explanatory detail.
  • Often partial to a more narrative approach, referencing how or why a task is done.

8. Qwen (Instruct Version)

Characteristic Phrases

  • Includes “certainly,” “in summary,” or “title” for headings.
  • May appear with transitions like “comprehensive,” “based,” or “example use.”

Formatting Habits

  • Uses lists (sometimes nested) for clarity.
  • Periodically includes short code blocks or snippet-like formatting for technical explanations.

Semantic/Stylistic Tendencies

  • Detailed, with emphasis on step-by-step instructions or bullet-labeled points.
  • Paraphrase-friendly structure, meaning it can rephrase or re-organize content extensively if prompted.

9. Mistral (Instruct Version)

Characteristic Phrases

  • Words like “creating,” “absolutely,” “subject,” or “yes” can appear early in responses.
  • Tends to rely on direct verbs for commands (e.g., “try,” “build,” “test”).

Formatting Habits

  • Usually applies straightforward bullet points without heavy markdown.
  • Occasionally includes headings but often keeps the structure minimal.

Semantic/Stylistic Tendencies

  • Prefers concise, direct instructions or overviews.
  • Focuses on brevity while still aiming to be thorough, giving core details in an organized manner.

How to Make AI-Generated Content More Human

The study revealed that word choice is a primary identifier of AI-generated text:

“After randomly shuffling words in the LLM-generated responses, we observe a minimal decline in classification accuracy. This suggests that a substantial portion of distinctive features is encoded in the word-level distribution.”

If you’re using AI writing tools, here are practical steps to reduce these telltale patterns:

  • Vary your beginnings: The research found that first words are highly predictable in AI content. Edit opening sentences to avoid typical AI starters.
  • Replace characteristic phrases: Watch for and replace model-specific phrases mentioned above.
  • Adjust formatting patterns: Each AI has distinct formatting preferences. Modify these to break recognizable patterns.
  • Restructure content: AI tends to follow predictable organization. Rearrange sections to create a more unique flow.
  • Add personal elements: Incorporate your own experiences, opinions, and industry-specific insights that an AI couldn’t generate.
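To put the editing steps above into practice, you can scan a draft for the characteristic phrases listed earlier and flag which model’s fingerprint it carries most strongly. This is a minimal sketch using a small subset of the phrases from this article:

```python
import re

# Subset of phrases this article associates with specific models.
FINGERPRINTS = {
    "ChatGPT": ["certainly", "such as", "overall", "below is"],
    "Claude": ["according to the text", "based on", "here is a summary"],
    "DeepSeek": ["crucial", "key improvements", "here's a breakdown"],
}

def fingerprint_hits(text):
    """Count occurrences of each model's characteristic phrases in a draft,
    so the heaviest-hit passages can be prioritized for editing."""
    lowered = text.lower()
    return {
        model: sum(len(re.findall(re.escape(phrase), lowered))
                   for phrase in phrases)
        for model, phrases in FINGERPRINTS.items()
    }
```

A high count for one model is a prompt to rewrite, not proof of AI authorship; these phrases also occur in ordinary human writing.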

Top Takeaway

While this research focuses on distinguishing different AI models, it also demonstrates how AI-generated text differs from human writing.

As search engines improve their ability to spot AI content, heavily templated AI writing may lose value.

By understanding how to identify AI text, you can create content that rises above the average chatbot output, appealing to both readers and search engines.

Combining AI’s efficiency with human creativity and expertise is the best approach.

Featured Image: Pixel-Shot/Shutterstock

3 Ways AI Is Changing PPC Reporting (With Examples To Streamline Your Reporting) via @sejournal, @siliconvallaeys


PPC reporting has always been both essential and frustrating. It’s essential to keep clients engaged by informing them of the results you’re driving.

But it’s also frustrating because of data discrepancies, cumbersome analysis, and the time required to share understandable, jargon-free reports with different stakeholders.

Fortunately, AI is turning these obstacles into opportunities by filling in gaps left by privacy-compliant tracking, surfacing insights hidden in overwhelming data sets, and automating reporting so it meets the needs of every stakeholder.

In this article, I’ll walk you through some of the technology used by modern marketers and share examples of how I’ve used AI to streamline my PPC reporting.

1. Collect Complete And High-Quality PPC Data

We need data to guide us before we can optimize accounts and share our wins, so let’s start there.

The Problems With Data Before AI

Inconsistent and missing data plague PPC efforts.

Google, Meta, Microsoft, and Amazon operate in their own silos, each taking credit for all conversions that have any touchpoint with their platforms. This leads to double counting, making it difficult to decide where to allocate budgets for optimal results.

In other words, the data between the various ad platforms is inconsistent. Specifically, the conversion value advertisers see in their business data may be lower than the sum of all conversion values reported by the ad platforms.

Add to this the challenge of missing data. Privacy regulations like GDPR and Apple’s iOS changes limit tracking capabilities, which causes data loss, incomplete conversion paths, and gaps in attribution.

Marketers who rely heavily on pixel-based or third-party cookie tracking, both of which became unreliable due to browser restrictions and user opt-outs, see a continuous decline in the quality of the data they need to operate.

While AI can’t magically give us perfect data, it can fill in gaps and restore insights, so let’s take a look at some of the solutions in this space.

AI-Driven Solutions For Data Hygiene And Compliance

1. Data Clean Rooms And Privacy-First Measurement

Clean rooms like Amazon Marketing Cloud (AMC) and Google Ads Data Hub allow advertisers to securely analyze anonymized cross-channel performance data without violating privacy laws.

These platforms aggregate data from multiple sources, giving marketers a comprehensive view of the customer journey.

Example:

A retail brand can use AMC to evaluate how its Google and Facebook ads influence Amazon purchases. Based on what it finds, it can reallocate budgets between platforms to maximize overall return on investment (ROI).

Clean rooms themselves aren’t an AI innovation; however, they benefit significantly from several AI capabilities.

For example, Meta’s Advantage+ uses clean room insights to build lookalike audiences while staying privacy-compliant.

2. Modeled Conversions

While clean rooms are great for unifying cross-platform data, their usefulness is predicated on data completeness.

When privacy regulations make it impossible to get all the data, clean rooms like Google Ads Data Hub and Amazon Marketing Cloud use AI-powered modeled conversions to estimate user journeys that can’t be fully tracked.

Modeled data is also used by tools like Smart Bidding, which leverages machine learning to predict conversions for users who opted out of tracking.

For users who opt out of tracking, Consent Mode still allows the collection of anonymized signals, which machine learning models can then use to predict conversion likelihood.

Example:

Google’s Smart Bidding leverages machine learning to optimize bids for conversions or conversion value.

In cases where conversion data is incomplete due to user consent choices or other factors, Smart Bidding can use modeled conversions to fill in gaps and make good bidding decisions.

The models do this by identifying patterns and correlations between user attributes, actions, and conversion outcomes.
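As a toy illustration of that idea (not Google's actual model, which is far more sophisticated), a sketch can learn per-segment conversion rates from consented sessions and apply them to opted-out sessions to estimate the conversions that couldn't be tracked directly:

```python
from collections import defaultdict

def model_conversions(consented, opted_out):
    """Estimate conversions for opted-out sessions by applying per-segment
    conversion rates learned from consented traffic.
    consented: list of (segment, converted) pairs from tracked sessions.
    opted_out: list of segment labels from anonymized, untracked sessions."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [conversions, sessions]
    for segment, converted in consented:
        counts[segment][1] += 1
        counts[segment][0] += int(converted)
    total_conv = sum(c for c, _ in counts.values())
    total_sess = sum(n for _, n in counts.values())
    overall = total_conv / total_sess if total_sess else 0.0  # fallback rate
    rates = {seg: c / n for seg, (c, n) in counts.items()}
    # Unseen segments fall back to the overall observed rate.
    return sum(rates.get(seg, overall) for seg in opted_out)

# Hypothetical sessions segmented by device
consented = [("mobile", True), ("mobile", False), ("desktop", True), ("desktop", True)]
opted_out = ["mobile", "desktop", "tablet"]
print(round(model_conversions(consented, opted_out), 2))  # 2.25 modeled conversions
```

Real modeling uses many more signals than a single segment label, but the principle is the same: observed patterns stand in for the data that consent choices removed.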

While modeled conversions offer significant benefits in their ease of use (they're provided by the ad platforms without any extra effort), it's important to remember that they are only estimates and may not be perfectly accurate in all cases.

Advertisers should consider using modeled conversions in conjunction with other ways to get a more complete picture of campaign performance.

For example, advertisers can use Media Mix Models (MMM), a Marketing Efficiency Ratio (MER), or incrementality lift tests to validate that the data they are using is directionally correct.

3. Server-Side Tagging And First-Party Data Integration

Server-side tagging lets marketers control data collection on their servers, bypassing cookie restrictions.

Platforms like Google Tag Manager now support server-side implementations that improve tracking accuracy while maintaining privacy compliance.

Server-side tagging captures anonymous pings even when cookies are declined, feeding better signals into Google’s AI models for more accurate conversion modeling.

This gives AI more complete data when doing things like data-driven attribution (DDA) or automated bidding.

Illustration by author, February 2025

Example:

An ecommerce company transitions to server-side tagging to retain high-quality data even when technologies like Safari’s Intelligent Tracking Prevention (ITP) break JavaScript-based tracking.

As a result, the advertiser sees a complete picture of all the conversions driven by digital marketing and can now justify higher bids, which makes them more competitive in the ad auction and boosts total sales for their brand.

Actionable Tips:

  • Implement GA4 Consent Mode and server-side tagging to maintain accurate performance data.
  • Leverage data clean rooms to analyze cross-platform conversions securely.
  • Use modeled conversions to fill tracking gaps caused by privacy restrictions.

2. Extract Data Insights And Make Smarter Decisions

Now that we’ve covered technologies that can stem the decline in access to data, let’s examine how AI can help make sense of it all.

The Problem With Data Analysis Before AI

Marketers may struggle to extract actionable insights when looking at a mountain of PPC data.

Humans simply aren’t as good as machines at detecting patterns or spotting anomalies in large data sets.

While statistical methods have long been used to find these patterns, many marketing teams lack the expertise to do it themselves or have no access to a qualified analyst to help them.

As a result, teams miss opportunities or spend more time than they can afford looking for signals to guide optimization efforts.

AI Solutions For Data Analysis And Attribution

1. Data-Driven Attribution Models (DDA)

DDA isn’t the newest solution in attribution modeling, but it exists largely because AI has become cheaper and more accessible.

It solves the problem of assigning values to different parts of the consumer journey when users take a multitude of paths from discovery to purchase.

Static attribution models lack the sophistication to account for this and cause advertisers to bid incorrectly.

Google’s data-driven attribution (DDA) uses machine learning to analyze conversion paths and assign credit based on a more complete analysis of a user’s consumer journey.

Unlike static models, DDA dynamically adjusts credit allocation to reflect the many ways consumers behave.

Machine learning, a form of AI, is what enabled Google to make this more advanced attribution model available to all advertisers and what has driven the steady improvement in results from Smart Bidding.
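Google doesn't publish DDA's internals, but a well-known data-driven attribution technique in this family is Shapley-value credit assignment: each channel's credit is its average marginal contribution to conversion value across all orderings of the channels in a path. A minimal sketch, using hypothetical conversion rates for each channel combination:

```python
from itertools import permutations

def shapley_credit(channels, v):
    """Average marginal contribution of each channel across all orderings.
    v maps a frozenset of channels to an observed conversion rate."""
    credit = {c: 0.0 for c in channels}
    orderings = list(permutations(channels))
    for order in orderings:
        seen = frozenset()
        for c in order:
            with_c = seen | {c}
            credit[c] += v.get(with_c, 0.0) - v.get(seen, 0.0)  # marginal lift
            seen = with_c
    return {c: round(x / len(orderings), 3) for c, x in credit.items()}

# Hypothetical conversion rates per channel combination
v = {
    frozenset(): 0.0,
    frozenset({"search"}): 0.04,
    frozenset({"display"}): 0.01,
    frozenset({"search", "display"}): 0.06,
}
credit = shapley_credit(["search", "display"], v)
print(credit)  # search earns more credit than display
```

Unlike a static last-click model, the credit here shifts automatically whenever the observed conversion rates for the channel combinations change.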

2. Automating Auction Insights Visualization

Generative AI is not only enhancing attribution but also automating repetitive tasks.

Recently, I tested GPT Operator to streamline several PPC reporting workflows.

Operator is OpenAI’s tool that lets the AI use a web browser to complete tasks. It goes beyond searching the web: it can follow links, fill in forms, and interact intelligently with websites.

In one task, I asked Operator to download auction insights, visualize the data using Optmyzr’s Auction Insights Visualizer, and email a report.

It handled the data transfer and visualization steps flawlessly, though it struggled with taking a clean screenshot instead of attempting to attach HTML.

Illustration by author, February 2025

This illustrates how AI agents can help when data lives in disparate places and no APIs are available to move it, as is the case with auction insights data from Google.

While Operator still needs too much hand-holding to be helpful today, it seems likely that we’re less than a year away from when it can do many tedious tasks for us.

3. Advanced Statistical Analysis Available To Anyone

Before AI advancements, conducting a statistical analysis could be a labor-intensive process requiring specialized software or data science expertise.

But today, generative AI enables marketers to explore these areas that were previously firmly outside their realm of expertise.

For example, GPT can explain and execute a process like a seasonality decomposition. AI can quickly write Python code that breaks down campaign data into trend, seasonal, and residual components, helping marketers uncover patterns they can act on.
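As a minimal sketch of what such generated code does (in practice, GPT typically reaches for a library routine like statsmodels' seasonal_decompose), here is an additive decomposition in plain Python, run on hypothetical weekly conversion counts with a 4-week cycle:

```python
def decompose(series, period):
    """Additive decomposition into trend (centered moving average),
    seasonal (mean detrended value per cycle position), and residual.
    Assumes the series starts at cycle position 0."""
    n = len(series)
    half = period // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = series[i - half:i + half + 1]
        if period % 2 == 0:  # even period: half-weight the window endpoints
            trend[i] = (window[0] / 2 + sum(window[1:-1]) + window[-1] / 2) / period
        else:
            trend[i] = sum(window) / period
    buckets = [[] for _ in range(period)]
    for i in range(n):
        if trend[i] is not None:
            buckets[i % period].append(series[i] - trend[i])
    seasonal = [sum(b) / len(b) if b else 0.0 for b in buckets]
    resid = [series[i] - trend[i] - seasonal[i % period]
             if trend[i] is not None else None for i in range(n)]
    return trend, seasonal, resid

# Hypothetical weekly conversions: a 4-week seasonal pattern plus growth
weeks = [10, 14, 12, 8, 12, 16, 14, 10, 14, 18, 16, 12]
trend, seasonal, resid = decompose(weeks, period=4)
print([round(s, 2) for s in seasonal])  # [-0.25, 3.25, 0.75, -3.75]
```

The seasonal component immediately shows which week of the cycle peaks (week 2 here), which is exactly the kind of signal you'd use to adjust bids ahead of demand.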

How AI Automates Seasonal Analysis

In one of my PPC Town Hall podcast episodes, Cory Lindholm demonstrated how GPT can handle complex seasonality analysis in minutes.

Inspired by this, I used GPT’s Advanced Data Analysis feature to upload weekly Google Ads data and run a full decomposition.

GPT efficiently cleaned the data, identified issues like formatting errors, and generated a breakdown of trends, seasonal variations, and residual fluctuations.

In the analysis, GPT flagged recurring trends, allowing me to pinpoint peak demand periods and optimize bid strategies ahead of time. Tasks that previously took hours now take just a few minutes.

On a side note, I have found large language models (LLMs) so helpful with coding that I now use v0.dev almost weekly to create apps, browser extensions, and scripts.

3. Communicate Results Effectively Across Teams

With solid data in place and AI-fueled ways to speed up analysis, we should have some great results to share with stakeholders.

But sharing results through reports has traditionally been one of the most time-consuming and least loved tasks that fall on the plate of the typical account manager. And there were other problems, too.

The Problem With Sharing Reports Before AI

Reports were often static, one-size-fits-all documents that failed to meet the needs of different stakeholders.

Executives required high-level summaries focused on ROI, marketing strategists needed cross-channel insights, and PPC specialists required detailed campaign data.

Customizing reports for each audience was time-consuming and prone to error.

AI Solutions For Tailored Reporting

1. LLM Report Summarization

LLMs such as Claude, Gemini, and ChatGPT can quickly generate different explanations of reports from the same underlying data, enabling efficient customization for each audience.

For example, ChatGPT can produce a concise executive summary alongside a more detailed keyword-level report for PPC teams.

But that customization can and should be taken even further. In ChatGPT, it’s possible to create custom GPTs, each with its own instructions. This can be used to create a different ChatGPT flavor for every client.

Whereas today agencies depend on their people to remember how each client likes to get their reports, a custom GPT can be trained to remember these preferences: things like how well the client knows PPC, what jargon they tend to use at their company, and even what the year’s strategic initiatives are.

Then, the LLM can word the summary in a way that resonates with the reader and even explain how the search marketing campaign’s results are key to the company’s strategic objectives for the year.
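The preference-aware summarization idea boils down to prompt construction: store each client's profile once, then assemble it with the same underlying metrics. The profile fields and values below are hypothetical, and the resulting prompt would be sent to whichever LLM you use (or baked into a custom GPT's instructions).

```python
def build_report_prompt(metrics: dict, client: dict) -> str:
    """Assemble an audience-aware summarization prompt: same metrics,
    different framing per stored client profile."""
    lines = [
        f"Summarize this PPC report for {client['name']}.",
        f"Audience PPC expertise: {client['expertise']}.",
        f"Preferred tone: {client['tone']}.",
        f"Tie results to this year's initiative: {client['initiative']}.",
        "Metrics:",
    ]
    lines += [f"- {k}: {v}" for k, v in metrics.items()]
    return "\n".join(lines)

metrics = {"spend": "$12,400", "conversions": 318, "ROAS": "4.2x"}
client = {  # hypothetical client profile, stored once and reused
    "name": "Acme Corp",
    "expertise": "beginner",
    "tone": "jargon-free executive summary",
    "initiative": "expand into the EU market",
}
prompt = build_report_prompt(metrics, client)
print(prompt)
```

Swap in a different profile and the same metrics produce a keyword-level deep dive for the PPC team instead of an executive summary.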

2. Interactive Dashboards For Real-Time Transparency

AI-driven dashboards provide live, customizable views of campaign performance. Stakeholders can explore data interactively, filtering by date ranges, platforms, or key performance indicators (KPIs), reducing the need for frequent manual report updates.

And while dashboards have been around for a long time, AI can be used to quickly highlight the most salient insights.

For example, AMC lets marketers use AI to generate SQL to explore the data by using natural language.
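AMC's actual interface differs, but the natural-language-to-SQL pattern can be sketched locally with sqlite3. Here the "generated" SQL is hard-coded so the example runs standalone; in practice, an LLM would produce it from the analyst's question.

```python
import sqlite3

# In practice, this SQL would be generated by an LLM from a question like
# "Which campaign drove the most conversions last month?"
generated_sql = """
    SELECT campaign, SUM(conversions) AS conv
    FROM performance
    GROUP BY campaign
    ORDER BY conv DESC
    LIMIT 1
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE performance (campaign TEXT, conversions INT)")
conn.executemany(
    "INSERT INTO performance VALUES (?, ?)",
    [("Brand", 120), ("Generic", 95), ("Brand", 40), ("Shopping", 130)],
)
top = conn.execute(generated_sql).fetchone()
print(top)  # ('Brand', 160)
```

The value is that the marketer only writes the question; the model handles the query syntax and the aggregation logic.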

At my company, Optmyzr, we deployed Sidekick, which can instantly answer questions about the data in any account, for example, identifying the biggest optimization opportunities or wins in the last month.

Before AI, these insights might have remained hidden in the data.

Actionable Tips:

  • Set up custom GPTs for every client you work with.
  • Implement reporting tools that use natural language to explore the data.

Conclusion: From Reporting To Strategic Decision-Making With Generative AI

Generative AI has redefined PPC reporting, transforming a once fragmented and time-consuming process into a streamlined, insight-driven workflow.

It doesn’t just automate data collection and report generation; it also surfaces hidden trends, correlations, and anomalies that might otherwise go unnoticed.

This enables marketers to make smarter, faster, and more strategic decisions based on real-time insights.

With AI-driven tools, marketers can see beyond surface-level metrics, discovering patterns and opportunities that traditional reporting might take hours or days to uncover.

This improved understanding of performance empowers teams to refine budget allocation, creative strategy, and campaign targeting more effectively, leading to more substantial outcomes and greater profitability.

The conclusion is simple. With generative AI, PPC managers have more complete data, leading to better insights and better decisions, all of which can be shared more meaningfully with all involved stakeholders.


Featured Image: Igor Link/Shutterstock