Google announced an update to its Gemini personal AI assistant that personalizes responses so it anticipates users' needs and feels more like a natural personal assistant than a tool. Examples of how the new Gemini will help users include brainstorming travel ideas and making personalized recommendations.
The new feature rolls out first to desktop and then to mobile apps.
Gemini With Personalization
Google announced a new version of Gemini that adapts responses to a user's unique interests based on their search history, enabling Gemini to deliver responses with a higher level of contextual relevance and personalization. Google intends to expand personalization by integrating other Google apps and services, naming Photos and YouTube as examples.
Google explained:
“In the coming months, Gemini will expand its ability to understand you by connecting with other Google apps and services, including Photos and YouTube. This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you.”
How Personalization Works
Users can share personal preferences and details, like dietary requirements or a partner's name, to obtain a greater degree of personalization in responses that feel specific to the individual. Advanced users can allow Gemini to access past chats to further improve the relevance of responses.
Google’s access to search history and data from other apps may give it an advantage that competing apps like ChatGPT may not be able to match.
Personalization Is Opt-In
There are four key points to understand about personalization in Gemini:
Personalization is currently an opt-in feature that’s labeled “experimental.”
Users need to choose to use Personalization from the model drop-down menu in order to activate it.
Gemini asks for permission to connect to search history and other Google services and apps before it uses them for personalization.
Users can also disconnect from the feature.
That means millions of Gemini users won't suddenly begin receiving an increasing amount of information from a contextual AI assistant instead of search. But it does mean the door to that happening exists, and the next step is for Google users to open it.
What Publishers Need To Know
This update blurs the line between traditional Search and Google's assistant while making information increasingly accessible in a way that publishers and SEOs should be concerned enough to research so they can identify how to respond.
Privacy considerations may keep Google from turning personalization into an opt-out feature. Personalization is currently opt-in from a drop-down menu because it's still experimental, but once it matures, it's not unreasonable to assume that Google may begin nudging users to adopt it.
Even though this is an experimental feature, publishers and SEOs may want to understand how it impacts them. For example, will it be possible to track personalized Gemini referral traffic, or will it be masked because of privacy considerations? Will answers from Gemini reduce the need for clicks to publisher sites?
With so many AI models available today, it’s tough to decide where to begin. A recent study from Quora’s Poe provides guidance for those unsure about which models to choose.
The study analyzes millions of interactions to highlight the most popular tools for generating text, images, and videos.
With nearly every tech company offering an AI solution, it’s easy to get overwhelmed by choices. Poe’s data clarifies which models are trusted and widely used.
Whether you’re new to AI or experienced, this report shows trends that can help you find the best models. Remember that this data represents Poe subscribers and may not reflect the broader AI community.
Text Generation Trends
A Two-Way Race
The study shows that among Poe subscribers, Anthropic models are quickly becoming as popular as OpenAI's, especially after the release of Claude 3.5 Sonnet. Usage of text models from the two providers is now almost evenly split.
Rapid Adoption of New Releases
Poe users often switch to the latest models, even if loyal to a specific brand. For example, people rapidly move from OpenAI’s GPT-4 to GPT-4o or from Claude 3 to Claude 3.5.
Emerging Players
DeepSeek’s R1 and V3 have captured about 7% of the messages on Poe. Google’s Gemini family has seen a slight decline in use among Poe subscribers but remains a key player.
Image Generation Trends
Market Share of Early Movers
DALL-E 3 and Stable Diffusion were once leaders in image generation, but their shares have dropped by about 80%. This decline occurred as the number of image generation models increased from three to around 25.
Leading Models
The FLUX family from Black Forest Labs is now the leading image model, holding a nearly 40% share, while Google's Imagen 3 family has about a 30% share.
Smaller Models
Smaller image providers like Playground and Ideogram update their services frequently, which helps them maintain a loyal user base. However, they only account for about 10% of Poe’s image generation usage.
Video Generation Trends
An Emerging Industry
Video generation was almost nonexistent on Poe until late 2024, but it has quickly grown in popularity. Now, at least eight providers offer this ability.
Runway: Most Used Model
Runway’s single video model handles 30–50% of video generation requests. Although its usage is lower than it used to be, many people still choose this brand.
New Player: Veo-2
Since launching on Poe, Google’s Veo-2 has gained about 40% of the market, showing how quickly customer preferences can change. Other new models, such as Kling-Pro v1.5, Hailuo-AI, HunyuanVideo, and Wan-2.1, have captured around 15% of the market.
Key Takeaway & Looking Ahead
The data shows a clear pattern of newer models replacing older ones in user preference. If you want the best performance, use the latest version rather than stick with familiar but outdated models.
Whether these usage patterns will hold steady or continue to shift remains to be seen. At some point, cost will be a barrier to adoption, as new models tend to get more expensive with every release.
In future reports, Poe plans to share insights on how different models fit various tasks and price points.
Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!
Last week, Google officially launched AI Mode, an AI Overview on steroids, in beta.
Almost one year after the launch of AI Answers in the search results, we have enough data to show that the net impact on the open web is as positive as Trump’s tariffs on the U.S. economy. Spoiler: Not very positive.
AI Overviews take more clicks away from classic search results than they give back to cited sources. AI Mode has the potential to rip the gaping trade deficit AI Overviews are already causing on traffic wide open.
I argue that the appearance of AI Mode is the natural response to Deep Search and the potential outlook of the future of AI Search.
Image Credit: Lyna ™
Deep Research Threatens Google
Deep Research is an AI agent by OpenAI that creates lengthy reports about a subject of your choice: “An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you.”
The most obvious application is market research, but the agent can also deliver rich insights into consumer topics like buying a car, booking a trip, or applying for credit.
Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It can be equally useful for discerning shoppers looking for hyper-personalized recommendations on purchases that typically require careful research, like cars, appliances, and furniture.
Image Credit: Kevin Indig
Deep Research performs tens to hundreds of searches to compile a report. I’ve tried prompts for purchase decisions.
When I asked for “the best hybrid family car with 7 seats in the price range of $50,000 to $80,000,” Deep Research browsed through 41 search results and reasoned its way through the content.
Some examples of search reasoning:
I found a Kelley Blue Book article on 7-seat hybrids. It’s a good starting point, though not all details are always included.
I’m digging into cargo capacities for the Toyota Highlander and Grand Highlander. The Grand Highlander is proving to be more spacious, making it a solid option for larger families.
I’m looking at the XC90 Recharge’s electric and combined ranges, and the discrepancies in MPGe figures from different sources.
The report took 10 minutes to put together but probably saved hours of human research and at least 41 clicks – clicks that could’ve gone to Google ads.
The Case For Deep Search
Are Deep Research agents a threat to Google? I think so.
Here’s why:
The results are impressive, and the time savings are massive. Early on, Google bragged about how quickly it put search results together. But that was speed to results, not speed to answer. Today, Deep Search agents take minutes to reach an answer, but that answer is all you need.
There is massive potential for personalization, from sources to search criteria.
Conversational back and forth, just like with a salesperson in a store. Deep Search agents provide a concise summary that users can expand and explore at their own pace.
It turns out that every search engine or AI chatbot either already has a Deep Search agent or is working on one. It could truly be the future of Search for complex queries.
Image Credit: Kevin Indig
Bing has had a “Deep Search” feature since December 2023! And it does exactly what the name promises, just faster and not as deep as ChatGPT’s agent.
Today’s search engines are powerful tools that help us find information on the web, but sometimes they fall short of our expectations. When we have complex, nuanced, or specific questions, we often struggle to find the answers we need. We ourselves know what we’re looking for, but the search engine just doesn’t seem to understand.
That’s why we created deep search, a new Microsoft Bing feature that provides even more relevant and comprehensive answers to the most complex search queries. Deep search is not a replacement for Bing’s existing web search, but an enhancement that offers the option for a deeper and richer exploration of the web.1
I didn’t think I’d live long enough to see the day that Google copies Bing … But they’re not alone.
Grok has “Deep Search” and Gemini and Perplexity have “Deep Research.” Everyone is copying each other, and they’re not even putting in the effort to choose a different name. What a strong sign of commoditization.
My theory: Google modeled AI Mode after Bing’s Deep Search after seeing what ChatGPT’s Deep Search can do.
Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.2
Interestingly, AI Mode has the opposite effect of AI Overviews: In Google’s Q3 earnings announcement, Sundar Pichai said Google sees an “increase in search usage among people who use the new AI overviews”.3
So, AI Overviews lead to more searches, but AI Mode saves users time and queries:
You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.4
I don’t think we’ll ever go back to the pre-AI way of search. The universal key challenge of AI answers, whatever their form, is trust. The obvious problem is hallucination.
It’s ironic that ChatGPT Deep Research tells me it browsed through 29 sources, but when I counted, I found 41.
However, reasoning models are getting better at solving this problem with raw computing, i.e., by “thinking harder” about their answers.
The bigger solvable problem for Deep Search agents is source selection.
Untrustworthy sources are the microplastics of AI answers. There is a good reason why all reasoning models openly show their reasoning.
Even though we might pay as much attention to the reasoning details as to any Terms of Service, they make us feel like a lot is happening in the background.
Perception is important for trust. However, source selection is a very solvable problem: Users can simply tell the model to ignore the sources they don’t want, and the model memorizes that behavior over time.
Two less solvable problems remain:
Bias: In my analysis of AI chatbot research, I pointed out that LLMs have a bias towards global brands, luxury brands, corporate sources and prompt sentiment.
Access: Information needs to be on the internet for Deep Search agents to find it (that’s where Google and Bing have a big competitive advantage).
The biggest question, of course, is whether Deep Search Agents will find broad adoption or stay in the knowledge worker bubble.
AI Mode could bring it to the masses and drive the stake deeper into the heart of informational clicks.
The Impact On SEO
AI Overviews spiked in November ‘24 and February ‘25 (Image Credit: Kevin Indig)
The impact of AI Overviews on SEO traffic is negative.
In my meta-analysis of 19 studies about AI Overviews, I found that AIOs reduce click-through rates across the board. Will AI Mode make it worse? Most likely. But there is hope.
First of all, Deep Search agents are very transparent with their sources and sometimes queries.
ChatGPT’s Deep Research literally calls out what it’s searching for, so we can hopefully track and optimize for these queries. So far, LLMs still rely heavily on search results.
Second, just because searchers get answers before clicking through to websites doesn’t mean their purchase intent goes away.
What goes away for marketers is the ability to influence buyers on their website before they buy – as long as AI chatbots don’t offer a direct checkout.
We’ll need to find other ways to influence buyers: brand marketing, Reddit, YouTube, social media, advertising.
Third, there is a chance that AI Mode shows up predominantly for informational keywords, just like AI Overviews. In that case, a lot of weight will fall on high-intent keywords, like “buy x” or “order y.”
Fourth, Bing doesn’t separate the Deep Search answer but parks it in the middle of organic and paid results, garnished with links to sources. Hopefully, users will still click outside the deep answer.
I wonder how Google plans to monetize AI Mode, which must be more costly and resource-intensive than classic search.
To be fair, Google reduced the cost of an AI Overview by 90%, which tells me they figured out the unit economics. So, it’s possible.
But could this be an opportunity to bring the idea of monetizing Search partially with subscriptions back on the table?
Based on a report by The Information, OpenAI is considering charging “up to $20,000 per month for specialized AI agents” that could perform PhD-level research, $10,000 for a software developer agent, and $2,000 for a knowledge worker agent.5
Still a long way to go, but it brings up a nice theory about AI Mode: What if Google users could pay for better models that give better answers, or have better skills?
“As an SEO specialist for over 6 years now, what and where does one need to focus with regard to SEO in this current dispensation.
How can you distinguish yourself and standout as an SEO specialist in this era of generative AI and AI search engines?”
This is an excellent question because it goes right to the heart of concerns I hear from a lot of SEO professionals. They have managed to build a solid career and name for themselves as an SEO specialist, but it now feels like the game has changed.
They worry that the skills and experience that got them to this point will not be enough to keep them excelling.
I want to address those concerns, both from the perspective of job seekers and those looking to make an impression in their current role.
What’s Changed
Up until a couple of years ago, it felt like there were clear career choices for SEO specialists to make.
Employed or self-employed? In-house or agency? Technical SEO or content SEO? Small business or enterprise sites? People manager or hands-on practitioner?
This series of decisions, or simply the circumstances we found ourselves in, shaped our career paths.
There were central components to SEO. Primarily, you would be working with Google. You would be measured on key performance indicators (KPIs) like clicks and conversions.
You could impress stakeholders by linking your work directly to revenue.
It doesn’t seem as simple as that now, though.
LLMs And Social Media
More recently, there has been a focus on optimizing brands’ presence in other search platforms, beyond Bing, Yandex, Baidu, and other regionally relevant search engines.
It now includes platforms not traditionally thought of as belonging to the purview of SEO: TikTok, Perplexity AI, and app stores.
KPIs And Metrics
Google’s walled garden is growing larger, and proving the worth of SEO is getting harder. It’s increasingly difficult to show growth in your share of organic clicks when the pot is getting smaller.
With more answers being given in the search results themselves, and a reduction in the need for clicks off the SERPs, tracking the impact of SEO isn’t straightforward.
With potential – and current – employers still looking at year-on-year clicks, impressions, and revenue growth as their measure of an SEO’s success, this makes standing out quite challenging.
The Skills That Remain Important
I fundamentally believe that the foundational principles of SEO remain unchanged.
However, how we apply them may change with the advent of LLMs and other search platforms.
Technical SEO
A crawl issue that is preventing Googlebot smartphone from accessing the key pages on your site will likely also affect PerplexityBot and OpenAI’s OAI-SearchBot.
As SEOs, we will need to identify where these bots are struggling to crawl pages and find solutions that enable them to access the pages we want served in their search results.
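One way to spot where these bots struggle is to scan server access logs for their user agents and flag error responses. Below is a minimal, stdlib-only sketch; the bot names come from the paragraph above, but the log format (Apache/Nginx combined log format) and the helper itself are assumptions for illustration.

```python
import re
from collections import Counter

# Bots named above; substrings to look for in the User-Agent field.
AI_BOTS = ["Googlebot", "PerplexityBot", "OAI-SearchBot"]

# Combined Log Format: ... "METHOD /path HTTP/1.1" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'"[A-Z]+ (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def crawl_errors(log_lines):
    """Count per-bot requests and error (4xx/5xx) responses in access-log lines."""
    hits, errors = Counter(), Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        ua, status = m.group("ua"), int(m.group("status"))
        for bot in AI_BOTS:
            if bot in ua:
                hits[bot] += 1
                if status >= 400:
                    errors[bot] += 1
    return hits, errors
```

A bot with a high error ratio here is one whose access problems you would investigate first.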
To stand out, make sure you are not just thinking Google-first with your technical solutions.
Consider the other sources of traffic, like LLMs and social media, which might be impacted by the decisions you are making.
Ensure you are also tracking and reporting on the impact of these changes across these other platforms.
Content SEO
The fundamentals of creating and optimizing content still apply. However, how you discuss content and the actions you take will change.
From now on, not only are the Google algorithms important for how you create and optimize content, but so are a host of other algorithms.
You will need to consider how searchers are surfacing content through other search platforms. You will also need to know how to make sure your content is served as the result.
Make sure you are moving away from Google as the only algorithm to optimize for and towards the other drivers of traffic and visibility.
Digital PR
I would suggest that digital PR is becoming even more important.
As the search engines we are optimizing for become more numerous, the key factor that seems to unite them is that they reward “authority.”
That is, to give your content a chance of being served as a result in any search engine, it needs to be perceived as authoritative on the subject.
These newer search platforms will still need to use similar methods to Google in identifying expertise and authoritativeness. Digital PR will be key in that.
I do feel that we need to stop making backlinks the main priority of digital PR, however.
Instead, we need to start focusing on how we report on mentions, citations, and conversations about brands and products.
For example, we can look at social media engagement metrics as an indicator of authority. Brand perception may well be formed through forum discussions, reviews, and comments on social media sites.
Just because we know that Googlebot discounts links from some social media platforms in attributing authority doesn’t mean that the newer search engines will. Indeed, they may well rely heavily on social media sites to understand brands.
For now, set yourself apart by rethinking the purpose of digital PR for SEO. Look at the benefits to the brand as a whole and start factoring this into your strategies.
“Soft” Skills
I maintain that the most successful SEO professionals are those who have mastered the non-SEO-specific skills that make businesses work.
Strategic thinking, stakeholder management, and leadership skills are all critical to success not only in SEO, but also in any career.
To really stand out in the changing SEO industry, focus on how these skills will need to be applied.
For example, factor social media and LLMs into your SEO strategies. Make sure you are not just focusing on Google, but also introducing the idea that SEO is broader than that.
Make sure you are liaising with development teams to loop them into your ideas for how to make the site accessible to AI bots. Work on being a thought leader in LLMs and new search platforms for your company.
These sorts of skills are those that will really make you stand out, but you need to apply them with the future of SEO in mind. Future-proof your careers as well as your websites!
Cross-Platform Knowledge
This is probably the hardest one for some SEO specialists to do. Stop looking at Google as the source of all SEO performance and widen the net.
Get comfortable with the other AI search platforms that are beginning to send traffic to your site. Use them yourself, and get familiar with what sort of content they serve and why.
Use the social media sites and forums where your audience discusses brands like yours. Make sure you are aware of how they work, and how to participate in those discussions without negative backlash.
Stand out by looking outside of the narrow “Google is SEO” box.
Being An Expert In The New Era Of SEO
How, then, can you ensure that you are still perceived as an expert in SEO while the goalposts are moving?
What will make you stand out when you are applying for new jobs right now?
How can you prove that your skillset is still relevant whilst others are proclaiming “SEO is dead” (again)?
Demonstrate Impact Through Other Channels
Look at how you can collaborate more with adjacent channels.
For example, I’ve mentioned that social media and forums will be key areas where LLMs will discern brand relevancy and trustworthiness. Work with your teams who are already on those platforms.
Start helping them in areas where you are already an expert, for example: understanding algorithms, creating optimized content, and measuring brand authority.
Drive impact in those areas and report on it alongside your more traditional SEO metrics.
Demonstrate Impact Through Other Metrics That Still Line Up With Corporate Goals
Although we are used to reporting on metrics like clicks, rankings, and impressions for SEO, we may need to start looking at other metrics if we want to continue showing the worth of SEO.
For example, consider utilizing tools like Otterly and Goodie to measure visibility in AI search platforms. Or, at the very least, some of the more traditional search engine rankings tools also cover Google’s AI Overview visibility.
Use these tools to demonstrate how the work you are doing is impacting the brand’s performance in AI search platforms.
Continue to relate all work you do back to revenue, or other core conversion goals for your business. Don’t forget to show how traffic from LLMs is converting on your site.
Continue Learning
A key way to stand out in your SEO career at the moment is to show a willingness to upskill and diversify your skillset.
The SEO landscape is shifting, and as such, it’s important to stay on top of new platforms and how they work.
Share your findings in interviews and discussions with colleagues so you are highlighting what you’ve learned.
Although this may seem basic, you may find there are a lot of SEO professionals out there with their heads still buried in the sand when it comes to the evolution of the discipline.
Stand Out By Being Adaptable
At the end of the day, SEO is changing. That doesn’t mean that the skills we’ve developed over the past years are obsolete.
Instead, they are even more in demand as new platforms promise new avenues to reach prospective audiences.
The best way to stand out as an SEO in the current era of SEO is by being adaptable.
Learn how to apply your SEO skills to these emerging platforms and track your success.
More Resources:
Featured Image: Paulo Bobita/Search Engine Journal
“We find that a classifier based upon simple fine-tuning text embedding models on LLM outputs is able to achieve remarkably high accuracy on this task. This indicates the clear presence of idiosyncrasies in LLMs.”
This matters for two reasons:
For readers: As the web becomes increasingly saturated with AI-generated content, knowing how to spot it helps you evaluate information sources.
For writers: Understanding these patterns can help you better edit AI-generated drafts to sound more human and authentic.
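As a toy illustration of the quoted finding, here is a stdlib-only sketch of a classifier that separates model outputs. The real study fine-tunes text embedding models; this stand-in uses bag-of-words count vectors and nearest-centroid cosine similarity, and all sample phrases below are hypothetical.

```python
import math
import re
from collections import Counter

def vec(text):
    """Bag-of-words count vector (a crude stand-in for a learned embedding)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(samples):
    """samples: {model_name: [example outputs]} -> one centroid vector per model."""
    centroids = {}
    for model, texts in samples.items():
        c = Counter()
        for t in texts:
            c.update(vec(t))
        centroids[model] = c
    return centroids

def classify(text, centroids):
    """Attribute a text to the model whose centroid it is most similar to."""
    v = vec(text)
    return max(centroids, key=lambda m: cosine(v, centroids[m]))
```

Even this crude version attributes new text by its word-level distribution, which is exactly the signal the researchers found survives rewriting.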
How To Spot AI-Generated Content By Model
Each major AI system has specific writing habits that give it away.
The researchers discovered these patterns remain even in rewritten content:
“These patterns persist even when the texts are rewritten, translated, or summarized by an external LLM, suggesting that they are also encoded in the semantic content.”
1. ChatGPT
Characteristic Phrases
Frequently uses transition words like “certainly,” “such as,” and “overall.”
Sometimes begins answers with phrases like “Below is…” or “Sure!”
Formatting Habits
Utilizes bold or italic styling, bullet points, and headings for clarity.
Often includes explicit step-by-step or enumerated lists to organize information.
Semantic/Stylistic Tendencies
Provides more detailed, explanatory, and context-rich answers.
Prefers a somewhat formal, “helpful explainer” tone, often giving thorough background details.
2. Claude
Characteristic Phrases
Uses language like “according to the text,” “based on,” or “here is a summary.”
Tends to include shorter transitions: “while,” “both,” “the text.”
Formatting Habits
Relies on simple bullet points or minimal lists rather than elaborate markdown.
Often includes direct references back to the prompt or text snippet.
Semantic/Stylistic Tendencies
Offers concise and direct explanations, focusing on the key point rather than lengthy detail.
Adopts a practical, succinct voice, prioritizing clarity over elaboration.
3. Grok
Characteristic Phrases
May use words like “remember,” “might,” “but also,” or “helps in.”
Occasionally starts with “which” or “where,” creating direct statements.
Formatting Habits
Uses headings or enumerations but may do so sparingly.
Less likely to embed rich markdown elements compared to ChatGPT.
Semantic/Stylistic Tendencies
Often thorough in explanations but uses a more “functional” style, mixing direct instructions with reminders.
Doesn’t rely heavily on nuance phrases like “certainly” or “overall,” but rather more factual connectors.
4. Gemini
Characteristic Phrases
Known to use “below,” “example,” “for instance,” sometimes joined with “in summary.”
Might employ exclamation prompts like “certainly! below.”
Formatting Habits
Integrates short markdown-like structures, such as bullet points and occasional headers.
Occasionally highlights key instructions in enumerated lists.
Semantic/Stylistic Tendencies
Balances concise summaries with moderately detailed explanations.
Prefers a clear, instructional tone, sometimes with direct language like “here is how…”
5. DeepSeek
Characteristic Phrases
Uses words like “crucial,” “key improvements,” “here’s a breakdown,” “essentially,” “etc.”
Sometimes includes transitional phrases like “at the same time” or “also.”
Formatting Habits
Frequently employs enumerations and bullet points for organization.
May have inline emphasis (e.g., “key improvements”) but not always.
Semantic/Stylistic Tendencies
Generally thorough responses that highlight the main takeaways or “breakdowns.”
Maintains a relatively explanatory style but can be more succinct than ChatGPT.
6. Llama (Instruct Version)
Characteristic Phrases
“Including,” “such as,” “explanation the,” “the following,” which signal examples or expansions.
Sometimes references step-by-step guides or “how-tos” within text.
Formatting Habits
Levels of markdown usage vary; often places important points in numbered lists or bullet points.
Can include simple headers (e.g., “## Topic”) but less likely to use intricate formatting than ChatGPT.
Semantic/Stylistic Tendencies
Maintains a somewhat formal, academic tone but can shift to more conversational for instructions.
Sometimes offers deeper analysis or context (like definitions or background) embedded in the response.
7. Gemma (Instruct Version)
Characteristic Phrases
Phrases like “let me,” “know if,” or “remember” often appear.
Tends to include “below is,” “specific,” or “detailed” within clarifications.
Formatting Habits
Similar to Llama, frequently uses bullet points, enumerations, and occasionally bold headings.
May incorporate transitions (e.g., “## Key Points”) to segment content.
Semantic/Stylistic Tendencies
Blends direct instructions with explanatory detail.
Often partial to a more narrative approach, referencing how or why a task is done.
8. Qwen (Instruct Version)
Characteristic Phrases
Includes “certainly,” “in summary,” or “title” for headings.
May appear with transitions like “comprehensive,” “based,” or “example use.”
Formatting Habits
Uses lists (sometimes nested) for clarity.
Periodically includes short code blocks or snippet-like formatting for technical explanations.
Semantic/Stylistic Tendencies
Detailed, with emphasis on step-by-step instructions or bullet-labeled points.
Paraphrase-friendly structure, meaning it can rephrase or re-organize content extensively if prompted.
9. Mistral (Instruct Version)
Characteristic Phrases
Words like “creating,” “absolutely,” “subject,” or “yes” can appear early in responses.
Tends to rely on direct verbs for commands (e.g., “try,” “build,” “test”).
Formatting Habits
Usually applies straightforward bullet points without heavy markdown.
Occasionally includes headings but often keeps the structure minimal.
Semantic/Stylistic Tendencies
Prefers concise, direct instructions or overviews.
Focuses on brevity while still aiming to be thorough, giving core details in an organized manner.
How To Make AI-Generated Content More Human
The study revealed that word choice is a primary identifier of AI-generated text:
“After randomly shuffling words in the LLM-generated responses, we observe a minimal decline in classification accuracy. This suggests that a substantial portion of distinctive features is encoded in the word-level distribution.”
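The shuffling result is easy to see concretely: a bag-of-words representation is, by construction, blind to word order, so shuffling a sentence leaves its word-level features untouched. A small Python demonstration (the sample sentence is made up):

```python
import random
from collections import Counter

def bag_of_words(text):
    """Word-frequency features: insensitive to word order by construction."""
    return Counter(text.lower().split())

text = "Certainly! Below is a step-by-step breakdown, such as the following."
words = text.split()
random.shuffle(words)  # destroy the word order
shuffled = " ".join(words)

# The word-level distribution is identical, so any classifier that reads
# only word frequencies sees exactly the same input after shuffling.
assert bag_of_words(shuffled) == bag_of_words(text)
```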
If you’re using AI writing tools, here are practical steps to reduce these telltale patterns:
Vary your beginnings: The research found that first words are highly predictable in AI content. Edit opening sentences to avoid typical AI starters.
Replace characteristic phrases: Watch for and replace model-specific phrases mentioned above.
Adjust formatting patterns: Each AI has distinct formatting preferences. Modify these to break recognizable patterns.
Restructure content: AI tends to follow predictable organization. Rearrange sections to create a more unique flow.
Add personal elements: Incorporate your own experiences, opinions, and industry-specific insights that an AI couldn’t generate.
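The phrase-replacement step can be partially automated. The sketch below is a rough illustration: it swaps a few of the characteristic phrases listed earlier for plainer alternatives, and the substitute wordings are arbitrary choices of mine, not recommendations from the study.

```python
import re

# Model-specific phrases drawn from the lists above, mapped to
# illustrative (hypothetical) plainer substitutes.
REPLACEMENTS = {
    r"\bcertainly\b": "",
    r"\boverall\b": "in short",
    r"\bhere is a summary\b": "to sum up",
    r"\bhere's a breakdown\b": "broken down",
}

def humanize(text):
    """Replace characteristic AI phrases, then collapse leftover whitespace."""
    for pattern, sub in REPLACEMENTS.items():
        text = re.sub(pattern, sub, text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()
```

A pass like this only handles wording; restructuring and personal elements still require a human edit.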
Top Takeaway
While this research focuses on distinguishing different AI models, it also demonstrates how AI-generated text differs from human writing.
As search engines improve their ability to spot AI content, heavily templated AI writing may lose value.
By understanding how to identify AI text, you can create content that rises above the average chatbot output, appealing to both readers and search engines.
Combining AI’s efficiency with human creativity and expertise is the best approach.
PPC reporting has always been both essential and frustrating. It’s essential to keep clients engaged by informing them of the results you’re driving.
But it’s also frustrating because of data discrepancies, cumbersome analysis, and the time required to share understandable, jargon-free reports with different stakeholders.
Fortunately, AI is turning these obstacles into opportunities by filling in gaps left by privacy-compliant tracking, surfacing insights hidden in overwhelming data sets, and automating reporting so it meets the needs of every stakeholder.
In this article, I’ll walk you through some of the technology used by modern marketers and share examples of how I’ve used AI to streamline my PPC reporting.
1. Collect Complete And High-Quality PPC Data
We need data to guide us before we can optimize accounts and share our wins, so let’s start there.
The Problems With Data Before AI
Inconsistent and missing data plague PPC efforts.
Google, Meta, Microsoft, and Amazon operate in their own silos, each taking credit for all conversions that have any touchpoint with their platforms. This leads to double counting, making it difficult to decide where to allocate budgets for optimal results.
In other words, the data between the various ad platforms is inconsistent. Specifically, the conversion value advertisers see in their business data may be lower than the sum of all conversion values reported by the ad platforms.
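With hypothetical numbers, the discrepancy looks like this: each platform claims every conversion it touched, so the sum of platform-reported conversions exceeds what the business actually recorded.

```python
# Hypothetical numbers for illustration
platform_reported = {"google": 120, "meta": 90, "microsoft": 30}
business_recorded = 180  # conversions in the company's own order system

claimed_total = sum(platform_reported.values())
overcount = claimed_total - business_recorded
print(f"Platforms claim {claimed_total} conversions, "
      f"business recorded {business_recorded}: "
      f"{overcount} are double-counted across touchpoints")
```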
Add to this the challenge of missing data. Privacy regulations like GDPR and Apple’s iOS changes limit tracking capabilities, which causes data loss, incomplete conversion paths, and gaps in attribution.
Marketers who rely heavily on pixel-based or third-party cookie tracking, both of which became unreliable due to browser restrictions and user opt-outs, see a continuous decline in the quality of the data they need to operate.
While AI can’t magically give us perfect data, it can fill in gaps and restore insights, so let’s take a look at some of the solutions in this space.
AI-Driven Solutions For Data Hygiene And Compliance
1. Data Clean Rooms
Data clean rooms, such as Google Ads Data Hub and Amazon Marketing Cloud (AMC), aggregate data from multiple sources in a privacy-safe environment, giving marketers a comprehensive view of the customer journey.
Example:
A retail brand can use AMC to evaluate how its Google and Facebook ads influence Amazon purchases. Based on what they find, they can re-allocate budgets between platforms to maximize overall return on investment (ROI).
Clean rooms themselves aren’t an AI innovation; however, they benefit significantly from several AI capabilities.
For example, Meta’s Advantage+ uses clean room insights to build lookalike audiences while staying privacy-compliant.
2. Modeled Conversions
While clean rooms are great for unifying cross-platform data, their usefulness is predicated on data completeness.
When privacy regulations make it impossible to get all the data, clean rooms like Google Ads Data Hub and Amazon Marketing Cloud use AI-powered modeled conversions to estimate user journeys that can’t be fully tracked.
Modeled data is also used by tools like Smart Bidding, which leverages machine learning to predict conversions for users who opted out of tracking.
For users who opt out of tracking, Consent Mode still allows the collection of anonymized signals, which machine learning models can then use to predict conversion likelihood.
Example:
Google’s Smart Bidding leverages machine learning to optimize bids for conversions or conversion value.
In cases where conversion data is incomplete due to user consent choices or other factors, Smart Bidding can use modeled conversions to fill in gaps and make good bidding decisions.
The models do this by identifying patterns and correlations between user attributes, actions, and conversion outcomes.
While modeled conversions offer significant benefits in their ease of use (they’re basically provided without any extra effort by the ad platforms), it’s important to remember that they are only estimates and may not be perfectly accurate in all cases.
Advertisers should consider using modeled conversions in conjunction with other ways to get a more complete picture of campaign performance.
For example, advertisers can use Media Mix Models (MMM), a Marketing Efficiency Ratio (MER), or incrementality lift tests to validate that the data they are using is directionally correct.
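As a quick sanity check of the kind described above, MER is simple to compute from business totals. The figures here are invented for illustration:

```python
def mer(total_revenue, total_ad_spend):
    # Marketing Efficiency Ratio: blended revenue per dollar of total ad
    # spend, computed from business data rather than platform attribution
    return total_revenue / total_ad_spend

# Hypothetical month
blended_mer = mer(total_revenue=500_000, total_ad_spend=125_000)
print(f"MER: {blended_mer:.1f}")
# If platform-attributed ROAS claims far exceed this blended figure,
# the attributed (modeled) numbers are likely inflated.
```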
3. Server-Side Tagging And First-Party Data Integration
Server-side tagging lets marketers control data collection on their servers, bypassing cookie restrictions.
Server-side tagging captures anonymous pings even when cookies are declined, feeding better signals into Google’s AI models for more accurate conversion modeling.
This gives AI more complete data when doing things like data-driven attribution (DDA) or automated bidding.
Illustration by author, February 2025
Example:
An ecommerce company transitions to server-side tagging to retain high-quality data even when technologies like Safari’s Intelligent Tracking Prevention (ITP) break JavaScript-based tracking.
As a result, the advertiser sees a complete picture of all the conversions driven by digital marketing and can now justify higher bids, which makes them more competitive in the ad auction and boosts total sales for their brand.
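A minimal sketch of the server-side piece, assuming GA4's Measurement Protocol as the destination. The measurement ID and API secret are placeholders, and the payload follows the documented purchase-event shape; treat this as a starting point, not a drop-in implementation:

```python
GA4_ENDPOINT = "https://www.google-analytics.com/mp/collect"
MEASUREMENT_ID = "G-XXXXXXX"    # placeholder
API_SECRET = "YOUR_API_SECRET"  # placeholder

def build_purchase_event(client_id, transaction_id, value, currency="USD"):
    # Assembled server-side from first-party data, so it survives
    # browser-side restrictions like Safari's ITP
    return {
        "client_id": client_id,
        "events": [{
            "name": "purchase",
            "params": {
                "transaction_id": transaction_id,
                "value": value,
                "currency": currency,
            },
        }],
    }

payload = build_purchase_event("555.123", "T-1001", 49.99)
# In production, POST the payload, e.g. with requests:
# requests.post(
#     f"{GA4_ENDPOINT}?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}",
#     json=payload)
```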
Actionable Tips:
Implement GA4 Consent Mode and server-side tagging to maintain accurate performance data.
Leverage data clean rooms to analyze cross-platform conversions securely.
Use modeled conversions to fill tracking gaps caused by privacy restrictions.
2. Extract Data Insights And Make Smarter Decisions
Now that we’ve covered technologies that can stem the decline in access to data, let’s examine how AI can help make sense of it all.
The Problem With Data Analysis Before AI
Marketers may struggle to extract actionable insights when looking at a mountain of PPC data.
Humans simply aren’t as good as machines at detecting patterns or spotting anomalies in large data sets.
While statistical methods have long been used to find these patterns, many marketing teams lack the expertise to do it themselves or have no access to a qualified analyst to help them.
As a result, teams miss opportunities or spend more time than they can afford looking for signals to guide optimization efforts.
AI Solutions For Data Analysis And Attribution
1. Data-Driven Attribution Models (DDA)
DDA isn’t the newest solution in attribution modeling, but it exists largely because AI has become cheaper and more accessible.
It solves the problem of assigning values to different parts of the consumer journey when users take a multitude of paths from discovery to purchase.
Static attribution models lack the sophistication to account for this and cause advertisers to bid incorrectly.
Unlike static models, DDA dynamically adjusts credit allocation to reflect the many ways consumers behave.
Machine learning, a form of AI, is what enabled Google to make this more advanced attribution model available to all advertisers and what has driven the steady improvement in results from Smart Bidding.
2. AI Agents
Recently, I tested GPT Operator to streamline several PPC reporting workflows.
Operator is OpenAI's tool that lets the AI use a web browser to complete tasks. It goes beyond searching the web; it can follow links, fill in forms, and interact intelligently with websites.
In one task, I asked Operator to download auction insights, visualize the data using Optmyzr’s Auction Insights Visualizer, and email a report.
It handled the data transfer and visualization steps flawlessly, though it struggled with taking a clean screenshot instead of attempting to attach HTML.
Illustration by author, February 2025
This illustrates how AI agents can help when data lives in disparate places with no APIs available to move it, as is the case with auction insights data from Google.
While Operator still needs too much hand-holding to be helpful today, it seems likely that we’re less than a year away from when it can do many tedious tasks for us.
3. Advanced Statistical Analysis Available To Anyone
Before AI advancements, conducting a statistical analysis could be a labor-intensive process requiring specialized software or data science expertise.
But today, generative AI enables marketers to explore these areas that were previously firmly outside their realm of expertise.
For example, GPT can explain and execute a process like a seasonality decomposition. AI can quickly write Python code that breaks down campaign data into trend, seasonal, and residual components, helping marketers uncover patterns they can act on.
Inspired by this, I used GPT’s Advanced Data Analysis feature to upload weekly Google Ads data and run a full decomposition.
GPT efficiently cleaned the data, identified issues like formatting errors, and generated a breakdown of trends, seasonal variations, and residual fluctuations.
In the analysis, GPT flagged recurring trends, allowing me to pinpoint peak demand periods and optimize bid strategies ahead of time. Tasks that previously took hours now take just a few minutes.
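For readers who want to see what such a decomposition involves, here is a hand-rolled additive version (trend via a centered moving average, assuming an odd period for simplicity). A library such as statsmodels' seasonal_decompose does the same job more robustly; this sketch just makes the mechanics visible:

```python
def decompose(series, period):
    """Additive decomposition: series = trend + seasonal + residual.
    Assumes an odd period so a simple centered moving average works."""
    n = len(series)
    half = period // 2
    # Trend: centered moving average over one full period
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(series[i - half:i + half + 1]) / period
    # Seasonal: average detrended value at each position in the cycle
    detrended = [s - t if t is not None else None
                 for s, t in zip(series, trend)]
    means = []
    for pos in range(period):
        vals = [detrended[i] for i in range(pos, n, period)
                if detrended[i] is not None]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    adj = sum(means) / period  # center the seasonal component around zero
    seasonal = [means[i % period] - adj for i in range(n)]
    residual = [series[i] - trend[i] - seasonal[i]
                if trend[i] is not None else None for i in range(n)]
    return trend, seasonal, residual

# Synthetic daily data: rising trend plus a repeating 7-day pattern
pattern = [3, -1, 0, 2, -2, 1, -3]
series = [100 + i + pattern[i % 7] for i in range(28)]
trend, seasonal, residual = decompose(series, period=7)
```

On the synthetic data, the recovered trend is the linear ramp, the seasonal component matches the weekly pattern, and the residual is near zero; on real campaign data the residual is where anomalies show up.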
On a side note, I have found large language models (LLMs) so helpful with coding that I now use v0.dev almost weekly to create apps, browser extensions, and scripts.
3. Communicate Results Effectively Across Teams
With solid data in place and AI-fueled ways to speed up analysis, we should have some great results to share with stakeholders.
But sharing results through reports has traditionally been one of the most time-consuming and least loved tasks that fall on the plate of the typical account manager. And there were other problems, too.
The Problem With Sharing Reports Before AI
Reports were often static, one-size-fits-all documents that failed to meet the needs of different stakeholders.
Executives required high-level summaries focused on ROI, marketing strategists needed cross-channel insights, and PPC specialists required detailed campaign data.
Customizing reports for each audience was time-consuming and prone to error.
AI Solutions For Tailored Reporting
1. LLM Report Summarization
LLMs such as Claude, Gemini, and ChatGPT can quickly generate different explanations of reports from the same underlying data, enabling efficient customization for each audience.
For example, ChatGPT can produce a concise executive summary alongside a more detailed keyword-level report for PPC teams.
But that customization can and should be taken even further. In ChatGPT, it's possible to create custom GPTs, each with its own instructions, which can be used to create a different ChatGPT flavor for every client.
Whereas today agencies depend on their people to remember how each client likes to get their reports, a custom GPT can be configured to remember these preferences.
Things like how well they know PPC, what jargon they tend to use at their company, and even what the year’s strategic initiatives are.
Then, the LLM can word the summary in a way that resonates with the reader and even explain how the search marketing campaign’s results are key to the company’s strategic objectives for the year.
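What such a setup boils down to is per-audience (and per-client) instructions prepended to the same underlying data. The audiences and instructions below are invented for illustration; the resulting prompt would then be sent to whichever LLM you use:

```python
# Hypothetical per-audience instructions, as a custom GPT might encode them
AUDIENCE_INSTRUCTIONS = {
    "executive": "Summarize in two sentences. Focus on ROI. Avoid PPC jargon.",
    "strategist": "Highlight cross-channel trends and budget implications.",
    "ppc_specialist": "Report keyword-level detail, CPCs, and test results.",
}

def build_report_prompt(audience, data_summary, client_context=""):
    # Same data, different framing for each stakeholder
    return (f"{AUDIENCE_INSTRUCTIONS[audience]}\n"
            f"Client context: {client_context}\n"
            f"Campaign data: {data_summary}")

prompt = build_report_prompt(
    "executive",
    "Spend $12k, revenue $54k, conversions up 18% MoM",
    client_context="Retail brand; 2025 initiative: grow new-customer share",
)
```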
2. Interactive Dashboards For Real-Time Transparency
AI-driven dashboards provide live, customizable views of campaign performance. Stakeholders can explore data interactively, filtering by date ranges, platforms, or key performance indicators (KPIs), reducing the need for frequent manual report updates.
And while dashboards have been around for a long time, AI can be used to quickly highlight the most salient insights.
For example, AMC lets marketers use AI to generate SQL to explore the data by using natural language.
At my company, Optmyzr, we deployed Sidekick, which can instantly answer questions about data in any account, for example, the biggest optimization opportunities or wins in the last month.
Before AI, these insights might have remained hidden in the data.
Actionable Tips:
Set up custom GPTs for every client you work with.
Implement reporting tools that use natural language to explore the data.
Conclusion: From Reporting To Strategic Decision-Making With Generative AI
Generative AI has redefined PPC reporting, transforming a once fragmented and time-consuming process into a streamlined, insight-driven workflow.
It doesn’t just automate data collection and report generation; it also surfaces hidden trends, correlations, and anomalies that might otherwise go unnoticed.
This enables marketers to make smarter, faster, and more strategic decisions based on real-time insights.
With AI-driven tools, marketers can see beyond surface-level metrics, discovering patterns and opportunities that traditional reporting might take hours or days to uncover.
This improved understanding of performance empowers teams to refine budget allocation, creative strategy, and campaign targeting more effectively, leading to more substantial outcomes and greater profitability.
The conclusion is simple. With Generative AI, PPC managers have more complete data, leading to better insights and better decisions – all of which can be shared more meaningfully with all involved stakeholders.
Google announced an expansion of its AI-powered search features, enhancing AI Overviews with Gemini 2.0 and introducing a new experimental “AI Mode.”
AI Overviews With Gemini 2.0
Google has upgraded its AI Overviews with Gemini 2.0 in the United States.
Users should see performance improvements for coding, advanced mathematics, and multimodal searches.
Google says it’s increasing the frequency of AI Overview appearances for these query types while making them faster and higher quality.
Additionally, Google is removing the sign-in requirement for AI Overviews, which could significantly increase their frequency.
Google’s announcement reads:
“Today, we’re sharing that we’ve launched Gemini 2.0 for AI Overviews in the U.S. to help with harder questions, starting with coding, advanced math and multimodal queries, with more on the way. With Gemini 2.0’s advanced capabilities, we provide faster and higher quality responses and show AI Overviews more often for these types of queries.
Plus, we’re rolling out to more people: teens can now use AI Overviews, and you’ll no longer need to sign in to get access.”
Launching Experimental “AI Mode”
Google is introducing “AI Mode,” an experimental feature initially available to Google One AI Premium subscribers through Google’s Labs program.
You can now pay to have more AI in your search results, which is worth emphasizing, given the vocal segment of users who want to turn off AI features.
This opt-in experience is designed for what Google calls “power users” who want AI-powered responses for a broader range of search queries.
AI Mode leverages a custom version of Gemini 2.0 with advanced reasoning capabilities to handle complex, multi-part questions that might otherwise require multiple searches.
The new feature allows you to:
Ask follow-up questions to continue conversations
Receive information drawn from multiple data sources simultaneously
Interact using voice, text, or images through multimodal capabilities
Here’s an example of how it looks on mobile and desktop:
Screenshot from: Google, March 2025.
Screenshot from: Google, March 2025.
How AI Mode Works
Google says AI Mode is an upgrade over AI Overviews:
“This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions. You can ask anything on your mind and get a helpful AI-powered response with the ability to go further with follow-up questions and helpful web links.”
Google explained that AI Mode employs a “query fan-out” technique.
This works by issuing multiple related searches concurrently across subtopics and data sources. It then synthesizes the information into a comprehensive response.
The technology draws on Google’s Knowledge Graph, real-world information, and product data. Similar to AI overviews, it links to sources.
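The fan-out pattern itself is easy to sketch: issue the sub-queries concurrently, then synthesize the results. The mini index below is a stand-in, not a real search API, and the "synthesis" step is plain string joining where Google would use the model:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a search backend (results invented for illustration)
FAKE_INDEX = {
    "best time to visit kyoto": "March-May, for the cherry blossoms.",
    "kyoto 3 day itinerary": "Day 1: Fushimi Inari. Day 2: Arashiyama.",
    "kyoto food to try": "Kaiseki, yudofu, and matcha sweets.",
}

def search(subquery):
    return FAKE_INDEX.get(subquery, "No result.")

def fan_out(subqueries):
    # Issue multiple related searches concurrently across subtopics...
    with ThreadPoolExecutor(max_workers=len(subqueries)) as pool:
        results = list(pool.map(search, subqueries))
    # ...then synthesize; a real system would have the LLM merge these
    return " ".join(results)

answer = fan_out(["best time to visit kyoto",
                  "kyoto 3 day itinerary",
                  "kyoto food to try"])
```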
You can access AI Mode through multiple entry points: the AI Mode tab below the search bar on Google.com, directly at google.com/aimode, or via the AI Mode icon in the Google app.
The dedicated tab will look similar to the example below:
Screenshot from: Google, March 2025.
Quality Safeguards & Limitations
Google acknowledges that, as with any early-stage AI product, AI Mode “won’t always get it right.”
The company detailed several built-in safeguards, including:
Integration with core Search ranking and safety systems
Novel approaches using the model’s reasoning capabilities to improve factuality
Defaulting to standard web search results when confidence in AI-generated responses is low
Protection against hallucinations, opinionated responses, and misleading content
The company noted that AI Mode is mainly designed to handle queries requiring exploration, reasoning, or comparisons. However, it may default to traditional search results for current events or when up-to-the-minute accuracy is critical.
Looking Ahead
These updates affirm Google’s continued investment in AI-powered search experiences, which could further impact how people discover and interact with web content.
The company's measured rollout of AI Mode suggests it's being cautious with this experimental feature.
It remains to be seen whether it will eventually roll out beyond paid subscribers. Locking AI Mode behind a paywall may indicate that it's expensive for Google to deploy.
Google says it's already working on enhancements. Updates to AI Mode may include more visual responses, richer formatting, and new ways to connect users with web content.
A new study by SE Ranking examines how AI search tools handle Your Money or Your Life (YMYL) queries.
The research compared Google AI Overviews (AIOs), ChatGPT, and DeepSeek across 40 health, legal, financial, and political queries.
This study is similar to one published by SE Ranking in October. The key difference is that this study examines multiple tools, whereas the October study focused solely on AIOs.
Here’s more about the latest study and what the findings mean.
Key Findings
1. YMYL Query Response Rate
The research found that Google generates AIOs for 51% of YMYL queries, slightly up from 50% in October.
ChatGPT has a 100% response rate for YMYL searches, and DeepSeek has a 90% rate.
Google’s selective approach was evident in political topics, displaying AI Overviews for only one query.
2. Response Patterns
Each platform showed unique patterns in generating responses to YMYL queries:
DeepSeek produces longer answers (391 words on average) with numerous sources (28 per response)
ChatGPT offers moderate-length content (234 words) with fewer sources (10 per response)
Google provides the briefest responses (190 words) with minimal citations (7 sources)
Google’s AI Overviews showed the highest percentage of responses with all unique links (61.9%), compared to ChatGPT (40%) and DeepSeek (32.5%), indicating Google prioritizes source diversity over quantity.
3. Fact vs. Opinion
Using subjectivity analysis, the study measured how factual versus opinion-based each platform’s content appeared:
ChatGPT delivered the most objective content overall (0.393 score)
Google AI Overviews ranked second (0.427 score)
DeepSeek showed the highest subjectivity (0.446 score)
These differences were most noticeable in political topics, where DeepSeek scored 0.497 (more opinionated) while Google scored 0.246 (more factual).
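The study doesn't publish its scoring code, but lexicon-based subjectivity scoring (as implemented in libraries like TextBlob) works roughly like this toy version. The word list is invented for illustration; 0 means fully factual, 1 fully opinionated:

```python
# Invented mini-lexicon of opinion-carrying words
SUBJECTIVE_WORDS = {"best", "worst", "amazing", "terrible", "should",
                    "believe", "clearly", "obviously", "great", "awful"}

def subjectivity(text):
    # Share of tokens that carry opinion rather than fact
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(t in SUBJECTIVE_WORDS for t in tokens) / len(tokens)

factual = "The statute was enacted in 1996 and amended in 2010"
opinion = "You should clearly pick the best plan because it is amazing"
print(subjectivity(factual), subjectivity(opinion))
```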
4. YMYL Category Strengths
The analysis revealed the following differences across various categories of YMYL queries:
Health Content
ChatGPT: Concise, disclaimer-heavy content citing medical sources
DeepSeek: Detailed responses with extensive citations, including news sources
Google: Conservative, heavily cautioned but brief content
Legal Content
ChatGPT: Bullet-point summaries with high-authority sources
DeepSeek: Comprehensive explanations with real-world examples
Google: Brief overviews with the highest disclaimer rate (50%)
Financial Content
ChatGPT: Risk-focused overviews with professional consultation recommendations
DeepSeek: Categorized information with numerical data and comparisons
Google: Avoids responding to highly sensitive financial queries entirely
5. DeepSeek Restrictions
The study documented that DeepSeek refused to respond to queries about Taiwan’s independence, Tiananmen Square, Chinese human rights issues, and websites banned in China.
DeepSeek’s responses often aligned with Chinese government perspectives when addressing related topics.
What Does The Data Mean?
A common thread throughout the data is how each AI chooses to protect users from potentially harmful advice while still trying to be helpful.
ChatGPT answers every YMYL query it sees, yet often leads with strong disclaimers and succinct takeaways.
Google AI Overviews, on the other hand, declines to generate content for almost half of the tested queries, leaning heavily on caution rather than risk providing the wrong guidance.
DeepSeek is at the opposite extreme. Sometimes, it offers staggering amounts of detail, and other times, it offers little detail if the response doesn’t align with political perspectives.
What unites all three is the balance between information and liability. Each model wants to appear authoritative in YMYL niches but must decide whether to be “helpful” or “safe” (and how much of each).
Key Takeaways For SEO
For SEO and content teams, here are key points to consider:
Google is selective. Content appearing in AIOs must meet high-quality standards, especially for YMYL topics.
Google’s AIOs cite unique and diverse sources for YMYL searches. This increases visibility but creates competition for clicks.
Different AI systems prefer specific styles, lengths, and details in content.
All three platforms prefer disclaimers on sensitive topics, with health content having the highest rate of cautionary notices at 37%.
Understanding these platform differences can help you improve visibility in AI search tools.
The expected AIO-pocalypse hasn’t happened, at least not in the form we expected.
Instead of a meteor impact, it looks more like climate change: slowly raising temperatures that cause natural disasters. Chegg is one of the first victims.
Chegg is an ed-tech company that offers students homework help, textbook rentals, online tutoring, and career resources. It was founded in 2005 and went public in 2013.
In 2024, it reported 6.6 million paying subscribers, with revenue down 14% YoY. The culprit: AI.
The big question I answer in this article is whether Chegg is an outlier (spoiler: it’s not) or the first of many. More companies are bleeding. And some direct competitors to Chegg are surprisingly thriving.
Chegg filed a lawsuit against Google for abusing its monopoly position in Search to force companies to provide content that it repurposes for AI answers or Featured Snippets.1
The accusation has legs. Showing answers in the search results directly competes with Chegg’s business model.
Chegg claims (rightfully) it cannot opt out of them without cutting off vital organic traffic and calls Search a “Hobson’s Choice”: you either block Google and lose all organic traffic or don’t, and Google takes your content to give answers in the search results.
Up to this point, I agree.
What we’re witnessing is the old ecosystem of Search falling apart. The generational deal was that websites would create good content and allow Google to crawl it.
In return, Google sends them traffic and shows ads to searchers. Now that clicking through to websites is redundant in some cases, this deal is falling apart.
In my meta analysis of AI Overviews, I showed how AI Overviews reduce click-through rates, but they also show up much less often and more for informational queries than when they first started.
Skeptical
But this isn’t the whole puzzle of Chegg’s problem. Months before the lawsuit, Chegg’s CEO said AI, not AI Overviews, is eating into subscriber growth (as I mentioned in my Q1 Marketplaces Deep Dive):
“Rosensweig said on a May earnings call that ChatGPT had begun eating into subscriber growth. Chegg pulled financial forecasts for the rest of the year, and its stock dropped 48% in a day.”2
The article goes on:
“But within months, Chegg’s internal data showed students were increasingly turning to ChatGPT as a studying aid. Employees found some of the answers provided by GPT-4, the technology behind ChatGPT, scored higher on internal evaluations than answers from Chegg’s human experts.”
The problem goes beyond AI Overviews. Students around the world are using AI instead of web platforms. And you can see it in the numbers as well.
Image Credit: Kevin Indig
When you look at how much estimated traffic Chegg got from search results showing AI Overviews, you find it was only ~20% in December 2024, at its peak, and 15% in January 2025. Painful, but not enough to tank a company.
According to Semrush, Chegg’s organic traffic actually increased after May 2024, when AIOs launched, and only started tanking in October 2024.
According to Similarweb, total traffic declined before ChatGPT launched in November 2022.
Image Credit: Kevin Indig
Declining brand search volume is a sign of shrinking brand awareness, product/market fit, and user retention.
The fact that brand search volume has been shrinking since 2020 and searches for cancellations have peaked before AI entered the mainstream makes me believe that the brand already had issues.
Image Credit: Kevin Indig
Chegg’s engagement metrics declined over the last 3 years, which is not good for SEO and not good for the business.
Bottom Line
Chegg struggled before AI. AI just accelerated the decline.
So, why doesn't Chegg sue OpenAI & Co. as well? Maybe because AI Overviews and their impact are easier to measure.
Or maybe because Chegg's case could build on the DOJ vs. Google lawsuit, which already ruled Google a monopoly. The timing would fit, since the remedies are coming out in August.
Chegg could at least block LLM crawlers in their robots.txt.
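For reference, blocking the publicly documented LLM crawlers takes only a few lines of robots.txt. Note the caveat at the heart of Chegg's complaint: Google-Extended only opts content out of Gemini model training, while AI Overviews are fed by regular Googlebot crawling, which can't be blocked without losing organic traffic.

```
# Block common LLM crawlers (user-agent tokens as publicly documented)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /
```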
Don’t get me wrong – Chegg’s lawsuit has a strong point. But I also see it as a story for investors: Chegg wants to signal that it needs to take the company private or sell (right call) because of a structural change to its business model that it’s not responsible for. The fact that the announcement was made during an earnings call supports that theory.
Image Credit: Kevin Indig
Symbolic: AI homework helper outranks Chegg for “homework help,” one of its most important keywords.
Who Else Is Impacted By AI
Chegg is a harbinger. I looked at other ed-tech sites that lean heavily on SEO and found that almost all of them saw significant traffic losses since ChatGPT came out:
CourseHero.
Brainly.
Studocu.
Quizlet.
Numerade.
Wyzant.
Khan Academy.
CodePen.
Study.com.
W3Schools.
Stack Overflow.
The traffic data is supported by research showing that students underwent significant behavior changes (first two quotes from the WSJ article linked above):
“A survey of college students by investment bank Needham found 30% intended to use Chegg this semester, down from 38% in the spring, and 62% planned to use ChatGPT, up from 43%.”
“Researchers at the University of Illinois at Urbana-Champaign conducted a study in the spring last year to see how ChatGPT had influenced cheating in an introductory programming course. They found students had overwhelmingly moved to ChatGPT from what the researchers called “plagiarism hubs” such as Chegg.”
“A survey of 1,000 students – both domestic and international – found there had been an “explosive increase” in the use of genAI in the past 12 months. Almost nine out of 10 (88%) in the 2025 poll said they used tools such as ChatGPT for their assessments, up from 53% last year.”3
ChatGPT & Co. destroy the value of online tutoring and study tools.
Red Flags
Chegg and the other affected sites show what red flags to watch out for:
> 80% organic traffic.
Young target audiences.
Information sites, especially marketplaces.
The companies that need to be most careful are overexposed to SEO, offer information as a product, and sell to young people.
Other industries that fit the bill and could be next on the list: gig economy, online Q&A, quotes, lexica, encyclopedias, dictionaries.
Over 80% of Chegg’s traffic comes from SEO (Image Credit: Kevin Indig)
How To Build AI Immunity Cells
Not every ed-tech company is in the red: Scribd, Coursera, Udemy, and Pearson are holding up.
Pearson is especially interesting because it's the UK equivalent of Chegg. Even though revenue is down 3%, and its CEO acknowledged "digital learning trends" (a.k.a. AI) as a challenge, traffic is thriving.
Why? Because it’s better diversified: 65% of traffic comes direct, 18% from organic. It doesn’t have to be that little.
Each company I listed at the beginning of the paragraph is either less reliant on SEO traffic or offers content that’s hard to copy (e.g., courses).
Turning around structural declines, where user behavior and the market significantly shift, is hard. Sometimes, impossible. I’ve learned my own fair share of lessons when Shopify went through the COVID hangover.
So, what can Chegg do, short of finding a time machine and going back 10 years to fix its overexposure to SEO?
First, taking the company private to turn it around is a good first step. The pressure of quarterly results makes a strong pivot impossible.
Second, Chegg is already working on two smart pivots:4
Get away from content that's easy for Google to copy or synthesize, and focus on interactive tools and experiences. The company already offers tools like a citation manager and a plagiarism checker, but it could do a lot more here.
Explore related markets. Chegg launched Busuu, a language-learning service, and Chegg Skills, a pilot program that trains students in business-relevant skills and connects them straight to businesses. But can it compete with Duolingo and Babbel? And are new markets fruitful enough?
I’m rooting for Chegg. I want it to be a turnaround story. Godspeed.
New data provided to Search Engine Journal shows that the sites Google ranks in AI Overviews vary by time and industry, offering an explanation for volatility in AIO rankings. The new research shows which industries are most impacted and may provide a clue as to why.
AIO Presence Varies Over Time And By Industry
The research was provided by BrightEdge using their proprietary BrightEdge Generative Parser technology that tracks AI Overviews, detects patterns and offers insights useful for SEO and marketing.
Healthcare, Education, and B2B Technology topics continue to show greater presence in Google’s AI Overviews. Healthcare and Education are the two industries where BrightEdge saw the strongest growth as well as stability of which sites are shown.
Healthcare has the highest AIO presence at 84% as of late February 2025. AIOs shown for Education topics show a consistent growth pattern, now at 71% in February 2025.
The travel, restaurant, and insurance sectors are also trending upward, with travel being a notable trend. Travel had zero AIO presence in May 2024, but that's completely different now: it's up to 20-30% presence in the AIO search results.
The presence of restaurant-related topics in AIOs is up from 0 to 5%, suggesting a rising trend. Meanwhile, insurance queries have grown from 18% of queries in May 2024 to a whopping 47% by February 2025.
B2B technology queries that trigger AIO are at 57%. These kinds of queries are important because they are typically represent research related by people involved in decision making. Purchase decisions are different than with consumer queries. So the fact that 57% of queries are triggering AIOs may be a reflection of the complexity of the decision making process and the queries involved with that process.
Let’s face it, technology is complex, and the people using it aren’t experts in concepts like “data modeling.” Yet that’s the kind of query BrightEdge is seeing, which could reflect end users wrapping their minds around what the technology does and how it benefits them.
Having worked with B2B technology, I know it’s not unusual for SaaS providers to use mind-numbing jargon to sell their products, but the decision makers, or even the users of that technology, aren’t necessarily going to understand that kind of language. That’s why Google shows AI Overviews for a keyword phrase like “associative analytics engine” instead of showing someone’s product.
Finance-related queries, which had been on a moderate growth trend, have doubled from 5% of queries in May 2024 to 10% of queries in February 2025.
Here’s the takeaway provided by BrightEdge:
B2B Tech is at 57% in Feb-25. Finance has been growing moderately and doubled from 5% in May-24 to 10% in Feb-25.
Ecommerce is at 4% (down from 23% in May-24). Entertainment has dropped to 3%.
The drop in Ecommerce and Entertainment presence suggests more testing and alignment with traditional Google Search, where users can engage with in-platform experiences. For Ecommerce, features like product grids may be the reason: traditional search provides more of those in-platform experiences.
What Does This Mean?
This volatility could reflect the variable quality of complex user queries. Given that these are complex queries triggering AIOs, it may be reasonable to assume they are longtail in nature. Longtail doesn’t mean the queries are long and complex; they can also be short queries like “what is docker compose?”
A screenshot of Google Trends shows that more people query “Docker Compose” than “What is Docker Compose” or “What is Docker.” Why do more people do that?
Screenshot Of Google Trends
It’s clearly because people are using “Docker Compose” as a navigational query. And you can prove it’s navigational because Google’s search results don’t show an AIO for the query “Docker Compose,” but they do show AIOs for the other two.
Screenshot Shows SERPs For Docker Compose
Screenshot Shows “What Is” Query Triggers AIO
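One crude way to approximate the navigational-vs-informational distinction described above in a query log is a prefix heuristic. This is a deliberately simplistic sketch, not how Google classifies intent; the prefix list is an assumption for illustration:

```python
# Toy heuristic (not Google's logic): flag queries that read as informational
# ("what is X", "how to X") vs. bare navigational queries like "docker compose".

INFORMATIONAL_PREFIXES = ("what is ", "what are ", "how to ", "how do ", "why ")

def looks_informational(query: str) -> bool:
    """Rough guess at whether a query is informational rather than navigational."""
    return query.strip().lower().startswith(INFORMATIONAL_PREFIXES)

for q in ["docker compose", "what is docker compose", "what is docker"]:
    label = "informational" if looks_informational(q) else "navigational"
    print(f"{q} -> {label}")
```

Under this heuristic, only the two “what is” variants would be flagged as informational, matching the pattern in the SERPs above where only those queries trigger an AIO.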
Changes In AIO Patterns: Gains For Authoritativeness
An interesting trend is that queries for some topics correlated with answers from big brand sites. This is interesting because it somewhat mirrors what happened with Google’s Medic update, when SEOs noticed that non-scientific websites no longer ranked for medical queries. Some misunderstood this as Google showing a bias for big brand sites, but that’s not what happened.
What happened in that update was not limited to health-related topics. It was a widespread effect that was more like a rebalancing of queries to user expectations, which means it was all about relevance. A query about diabetes should surface scientific data, not herbal remedies.
What’s happening today with AIO is similar. Google is tightening up the kind of content AIO shows users for medical and technology queries.
Is it favoring brands or authoritativeness? The view that Google favors brands is shallow and lacks substance. Google has consistently shown a preference for ranking what users expect to see, and there are patents that support that observation. SEOs who expect rankings based on their made-for-search-engines links, search-engine-optimized content, and naïve “EEAT-optimized” content completely miss the point of what’s really going on in today’s search engines, which rank content based on topicality, user preferences, and user expectations. Trustworthy signals of authoritativeness very likely derive from users themselves.
Here’s what BrightEdge shared:
“For example, in the healthcare category, where accuracy and trustworthiness are paramount, Google is increasingly showing search results from just a handful of websites.
Content from authoritative medical research centers accounts for 72% of AI Overview answers, which is an increase from 54% of all queries at the start of January.
15-22% of B2B technology search queries are derived from the top five technology companies, such as Amazon, IBM, and Microsoft.”
Takeaways:
AIO Presence Varies by Industry and Time
There is growth in AIO visibility for Healthcare, Travel, Insurance, and B2B Technology
Declining presence of AIO in Ecommerce and Entertainment
AIO patterns indicate a preference for authoritative sources. AIO results are increasingly sourced from authoritative sites, particularly in Healthcare and B2B Tech. In B2B Tech, 15-22% of AIO responses come from the top five companies. This shift may mirror previous Google updates like the Medic Update that appeared to rebalance search results based on authoritativeness and user expectations.