We Figured Out How AI Overviews Work [& Built A Tool To Prove It] via @sejournal, @mktbrew

This post was sponsored by Market Brew. The opinions expressed in this article are the sponsor’s own.

Wondering how to realign your SEO strategy for maximum SERP visibility in AI Overviews (AIO)?

Do you wish you had techniques that mirror how AI understands relevance?

Imagine if Google handed you the blueprint for AI Overviews:

  • Every signal.
  • Every scoring mechanism.
  • Every semantic pattern it uses to decide what content makes the cut.

That’s what our search engineers did.

They reverse-engineered how Google’s AI Overviews work and built a model that shows you exactly what to fix.

It’s no longer about superficial tweaks; it’s about aligning with how AI truly evaluates meaning and relevance.

In this article, we’ll show you how to rank in AIO SERPs by creating embeddings for your content, and how to realign that content for maximum visibility using AIO tools built by search engineers.

The 3 Key Features Of AI Overviews That Can Make Or Break Your Rankings

Let’s start with the basic building blocks of a Google AI Overviews (AIO) response:

What Are Embeddings?

Embeddings are high-dimensional numerical representations of text. They allow AI systems to understand the meaning of words, phrases, or even entire pages, beyond just the words themselves.

Rather than matching exact terms, embeddings turn language into vectors, or arrays of numbers, that capture the semantic relationships between concepts.

For example, “car,” “vehicle,” and “automobile” are different words, but their embeddings will be close in vector space because they mean similar things.

Large language models (LLMs) like ChatGPT or Google Gemini use embeddings to “understand” language; they don’t just see words, they see patterns of meaning.

What Are Embeddings? Infographic (Image created by MarketBrew.ai, April 2025)

Why Do Embeddings Matter For SEO?

Understanding how LLMs interpret content is key to winning in AI-driven search results, especially with Google’s AI Overviews.

Search engines have shifted from simple keyword matching to deeper semantic understanding. Now, they rank content based on contextual relevance, topic clusters, and semantic similarity to user intent, not just isolated words.

Vector Representations of Words (Image created by MarketBrew.ai, April 2025)

Embeddings power this evolution.

They enable search engines to group, compare, and rank content with a level of precision that traditional methods (like TF-IDF, keyword density, or Entity SEO) can’t match.

By learning how embeddings work, SEOs gain tools to align their content with how search engines actually think, opening the door to better rankings in semantic search.

The Semantic Algorithm Galaxy (Image created by MarketBrew.ai, April 2025)

How To Rank In AIO SERPs By Creating Embeddings

Step 1: Set Up Your OpenAI Account

  • Sign Up or Log In: If you haven’t already, sign up for an account on OpenAI’s platform at https://platform.openai.com/signup.
  • API Key: Once logged in, you’ll need to generate an API key to access OpenAI’s services. You can find this in your account settings under the API section.

Step 2: Install The OpenAI Python Client

OpenAI provides a Python client that simplifies the process of interacting with their API. To install it, run the following command in your terminal or command prompt:

pip install openai

Step 3: Authenticate With Your API Key

Before making requests, you need to authenticate using your API key. Here’s how you can set it up in your Python script:

from openai import OpenAI

# Better practice: set the OPENAI_API_KEY environment variable instead of
# hard-coding the key; OpenAI() will read it automatically.
client = OpenAI(api_key="your-api-key-here")

Step 4: Choose Your Embedding Model

At the time of this article’s creation, OpenAI’s text-embedding-3-small is considered one of the most advanced embedding models. It is highly efficient for a wide range of text processing tasks.

Step 5: Create Embeddings For Your Content

To generate embeddings for text:

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="This is an example sentence."
)

embeddings = response.data[0].embedding
print(embeddings)

The result is a list of numbers representing the semantic meaning of your input in high-dimensional space.

Step 6: Store Embeddings

Store embeddings in a database for future use; tools like Pinecone or PostgreSQL with pgvector are great options.

Step 7: Handle Large Text Inputs

For large content, break it down into paragraphs or sections and generate embeddings for each chunk.

Use similarly sized chunks for better cosine similarity calculations. To represent an entire document, you can average the embeddings for each chunk.
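For illustration, here is a minimal sketch of that chunk-and-average approach, reusing the client from Step 3. The fixed chunk size, the mean pooling, and the embed_document helper are illustrative choices for this article, not a prescribed method:

import numpy as np

def embed_document(text, chunk_size=800):
    # Naive fixed-size chunking; real pipelines would split on paragraph
    # or sentence boundaries so chunks stay similar in size.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    response = client.embeddings.create(  # `client` comes from Step 3
        model="text-embedding-3-small",
        input=chunks,  # the embeddings endpoint accepts a list of inputs
    )
    vectors = np.array([item.embedding for item in response.data])
    # Mean-pool the chunk vectors to represent the whole document.
    return vectors.mean(axis=0)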

💡Pro Tip: Use Market Brew’s free AI Overviews Visualizer. The search engineer team at Market Brew has created this visualizer to help you understand exactly how embeddings, the fourth generation of text classifiers, are used by search engines.

Semantics: Comparing Embeddings With Cosine Similarity

Cosine similarity measures the similarity between two vectors (embeddings), regardless of their magnitude.

This is essential for comparing the semantic similarity between two pieces of text.
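Here is a minimal sketch of the calculation in Python, assuming two embedding vectors like the ones generated above; this is the textbook formula, not any search engine’s actual scoring code:

import numpy as np

def cosine_similarity(a, b):
    # Dot product divided by the vector magnitudes, so only direction
    # (meaning), not length, affects the score.
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([0.2, 0.9, 0.4], [0.25, 0.85, 0.38]))  # near-parallel vectors score close to 1.0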

How Does Cosine Similarity Work? (Image created by MarketBrew.ai, April 2025)

Typical search engine comparisons include:

  1. Keywords with paragraphs,
  2. Groups of paragraphs with other paragraphs, and
  3. Groups of keywords with groups of paragraphs.

Next, search engines cluster these embeddings.

How Search Engines Cluster Embeddings

Search engines can organize content based on clusters of embeddings.
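To make the idea concrete, here is a hedged sketch of clustering embeddings with scikit-learn’s KMeans; the random stand-in data and the cluster count are placeholders, and this is not how Google actually groups content:

import numpy as np
from sklearn.cluster import KMeans

# Stand-in for real page or paragraph embeddings (20 docs x 1,536 dims).
rng = np.random.default_rng(0)
vectors = rng.random((20, 1536))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(vectors)
print(labels)  # documents sharing a label sit in the same "semantic cloud"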

In the video below, we are going to illustrate why and how you can use embedding clusters, using Market Brew’s free AI Overviews Visualizer, to fix content alignment issues that may be preventing you from appearing in Google’s AI Overviews or even their regular search results!

Embedding clusters, or “semantic clouds”, form one of the most powerful ranking tools for search engineers today.

Semantic clouds are topic clusters in thousands of dimensions. The illustration above shows a 3D representation to simplify understanding.

Topic clusters are to entities as semantic clouds are to embeddings. Think of a semantic cloud as a topic cluster on steroids.

Search engineers use semantic clouds much as SEOs use topic clusters.

When your content falls outside the top semantic cloud – what the AI deems most relevant – it is ignored, demoted, or excluded from AI Overviews (and even regular search results) entirely.

No matter how well-written or optimized your page might be in the traditional sense, it won’t surface if it doesn’t align with the right semantic cluster that the finely tuned AI system is seeking.

By using the AI Overviews Visualizer, you can finally see whether your content aligns with the dominant semantic cloud for a given query. If it doesn’t, the tool provides a realignment strategy to help you bridge that gap.

In a world where AI decides what gets shown, this level of visibility isn’t just helpful. It’s essential.

Free AI Overviews Visualizer: How To Fix Content Alignment

Step 1: Use The Visualizer

Input your URL into this AI Overviews Visualizer tool to see how search engines view your content using embeddings. The Cluster Analysis tab will display embedding clusters for your page and indicate whether your content aligns with the correct cluster.

MarketBrew.ai dashboard (Screenshot from MarketBrew.ai, April 2025)

Step 2: Read The Realignment Strategy

If needed, the tool provides a realignment strategy: a clear roadmap for adjusting your content to better align with the AI’s interpretation of relevance.

Example: If your page is semantically distant from the top embedding cluster, the realignment strategy will suggest changes, such as reworking your content or shifting focus.

Example: Embedding Cluster Analysis (Screenshot from MarketBrew.ai, April 2025)
Example of New Page Content Aligned with Target Embedding (Screenshot from MarketBrew.ai, April 2025)

Step 3: Test New Changes

Use the “Test New Content” feature to check how well your content now fits the AIO’s top embedding cluster. Iterative testing and refinement are recommended as AI Overviews evolve.

AI Overviews author (Screenshot by MarketBrew.ai, April 2025)

See Your Content Like A Search Engine & Tune It Like A Pro

You’ve just seen under the hood of modern SEO – embeddings, clusters, and AI Overviews. These aren’t abstract theories. They’re the same core systems that Google uses to determine what ranks.

Think of it like getting access to the Porsche service manual, not just the owner’s guide. Suddenly, you can stop guessing which tweaks matter and start making adjustments that actually move the needle.

At Market Brew, we’ve spent over two decades modeling these algorithms. Tools like the free AI Overviews Visualizer give you that mechanic’s-eye view of how search engines interpret your content.

And for teams that want to go further, a paid license unlocks Ranking Blueprints to help track and prioritize which AIO-based metrics most affect your rankings – like cosine similarity and top embedding clusters.

You have the manual now. The next move is yours.


Image Credits

Featured Image: Image by Market Brew. Used with permission.

In-Post Image: Images by Market Brew. Used with permission.

The Data Behind Google’s AI Overviews: What Sundar Pichai Won’t Tell You via @sejournal, @Kevin_Indig

Google claims AI Overviews are revolutionizing search behavior.

But the data tells a different story.

Since launching AI Overviews in 2024, Google CEO Sundar Pichai has repeatedly claimed they’re transforming search behavior:

People are using it to Search in entirely new ways, and asking new types of questions, longer and more complex queries… and getting back the best the web has to offer.1

Image Credit: Kevin Indig

The narrative has been consistent across earnings calls and interviews:

  • “We have been able to grow traffic to the ecosystem.”2
  • “Growth actually increases over time as people learn to adapt to that new behavior.”3
  • “Based on our testing, we are encouraged that we are seeing an increase in search usage among people who use the new AI overviews as well as increased user satisfaction with the results.” (Source)

Yet, Google has never backed these claims with actual data.

So, I partnered with Similarweb to analyze over 5 billion search queries across multiple markets.

There’s too much good data in this analysis not to share. So today, you’re getting part one of a two-part series. Here’s what we’ll cover across both parts:

  1. Part 1: How AI Overviews affect user behavior on Google.
  2. Part 2: How AI Overviews impact traffic and engagement on websites.
Image Credit: Lyna ™


The stand-out result?

Google’s claims are partially true but significantly oversimplified.

My analysis with Similarweb shows:

  • People visit Google more frequently (+9%) but spend less time per visit.
  • Query length has barely changed.
  • And – most importantly for SEO pros and marketers – the data reveals critical insights about how to adapt to this new search paradigm.
Image Credit: Kevin Indig

About The Data

The data set includes:

  • Over 5 billion search queries and 20 million websites.4
  • Average time on site, searches per session, and visits per user on Google.com – both in total and comparing the UK, U.S., and Germany.
  • A comparison of keywords with and without AI Overviews that analyze searches per session, average time spent on Google, and zero-click share.
  • Page views and time spent on Google.com for keywords showing AI Overviews vs. keywords without AI Overviews.
  • Average query length for the UK, U.S., and Germany.

In Google’s overall claim that “we are seeing an increase in Search usage,” the phrase “Search usage” isn’t very well defined.

Is it more searches, more engagement with SERP features, or longer sessions?

Unfortunately, I wasn’t able to pinpoint the exact definition of search usage. It’s Google’s own wording, and so it might be intentionally vague. (Have your own thoughts on this? Let me know!)

Whether there has actually been an overall increase in Search usage because of AI Overviews is more complex, depending on several factors. And the analysis below shows clear patterns we can learn from.

For the U.S. market specifically, the data confirms that the claim “We are seeing an increase in Search usage among people who use the new AI Overviews” is directionally correct.

Here’s how we know it’s correct: Google visits rose +9% after the May 2024 rollout (from 26.9% to 29.1%).

The initial drop could be explained by the PR disaster from the first two weeks. (Remember those strange results that mentioned smoking when pregnant or glue on your pizza?)

While U.S. visits to Google grew modestly from 2023 to mid-2024, a clearer upward trend began around August 2024.

Image Credit: Kevin Indig

When we look at the two comparative keyword sets – remember, one set in this study shows AIOs and one doesn’t – we can see that page views on websites from AIO keywords have increased by 22% since the U.S. launch. (Shown by the red line below).

I know we all want to talk about the effect of AIOs on organic clicks, but we’ll get there. I’ll come back to the fact that non-AIO queries drive more page views in Part 2.

However, the “Page views on websites” chart for U.S. searches below reveals two critical insights:

  1. Websites are receiving more views from AIO keyword searches over time, but
  2. Non-AIO queries drive about twice as many page views (shown by the black line).

This suggests AI Overviews may be increasing engagement for specific query types while having a limited impact on overall traffic patterns.

Image Credit: Kevin Indig

Next, let’s compare the U.S., UK, and Germany markets.

Although Google has claimed, “We are seeing an increase in Search usage among people who use the new AI Overviews,” the Similarweb data shows a more nuanced story.

Here’s how we know the claim is only partially correct, depending on the market:

The growth of U.S. visits to Google is proportionally higher than in Germany (chart below), which is our control market because AIOs didn’t roll out there until March 2025.

Image Credit: Kevin Indig

However, in the UK, where AI Overviews rolled out in August 2024, visits are trending flat to down after the rollout (shown via the green line above).

In fact, there was more engagement growth from 2023 to 2024 (before the AIO roll-out).

Ultimately, I consider Google’s claim incorrect for markets outside the US: More SERP interaction does not translate into longer on-Google sessions.

In the chart below, we can see that time-on-site for Google.com in both the US and UK has been either flat or declining.

And something is reducing time-on-site in Google DE fairly significantly. Maybe it’s related to Google losing market share in the EU.

Image Credit: Kevin Indig

We see the same trend when we compare AIO-showing with non-AIO-showing queries in the chart below.

Time on Google for AIO queries falls by 1%.

While this isn’t a huge dip, it certainly isn’t an “increase in Search usage.”

Image Credit: Kevin Indig

Notably, you’ll see in the chart below that pages-per-visit on Google.com declined across the board in 2024 after AIOs rolled out, but then started recovering and growing again in 2025.

This chart shows a clear dip in pages-per-visit immediately following the May 2024 AIO launch, suggesting users needed fewer results pages when AI Overviews answered their queries directly.

The subsequent recovery in 2025 indicates either user adaptation or Google adjustments to how AIOs function within the search experience.

Image Credit: Kevin Indig

But what about this sudden uplift in 2025?

It happens in our control market, Germany, as well. So, it’s not due to AIOs.

How do we know this? Pop back up to that Time on Site graph above that shows all three markets. And you’ll see that Germany’s time on site shows a decline after the AIO launch.

While I’m not sure what drives this trend, I do wonder how less time on Google impacts its bottom line.

MBI published a very interesting deep dive on Alphabet with a chart that indicates that AI Overviews do not monetize as well as claimed.

Instead, a rise in cost per click seems to drive Alphabet’s outstanding earnings.

To be fair, that trend started in 2018, so it’s not clear how much AIOs have accelerated it.

Chart from MBI’s latest deep dive on Alphabet (Image Credit: Kevin Indig)

“We are seeing an increase in Search usage among people who use the new AI Overviews.”

Based on the data, this claim has layers of truth and omission.

Google visits did increase post-AIO launch (+9%), and AIO keyword pageviews rose impressively (+22%).

However, the full picture reveals important nuances that we need to take into account:

  • UK visits remained flat after the AIO rollout, despite U.S. gains.
  • Time-on-site metrics are flat or declining across markets.
  • Pages-per-visit initially dropped after AIOs launched.

The data suggests users are visiting Google more frequently but spending less time per visit, likely because AI Overviews provide faster answers without deeper exploration.

This pattern aligns with a “resolve and leave” user behavior, rather than increased engagement with Google itself.

While it might be technically true that “Search usage” increased by some metrics, the claim obscures how AIOs are fundamentally changing search interaction patterns at the cost of web traffic.

When we look at the data closely, Google’s claim that people are asking “longer and more complex queries” doesn’t hold true.

Here’s how we know it’s incorrect:

The growth in query length is tiny – certainly not a step-change to “entirely new ways.”

We’re talking about a very gradual increase from 3.27 to 3.37 average words per query in the U.S. over the course of two years.

Sure, that’s only 3% – and maybe at the scale of Google, that has a huge impact.

But this is no step change.

Image Credit: Kevin Indig

The difference in query length between May 2024 and February 2025 is only +0.6%.

In the UK, query length decreased by 0.3% after AI Overviews launched, from 3.18 words in August 2024 to 3.17 words in February 2025.

In Germany, query length increased a bit (+0.4%) before the AI Overviews launch.

Verdict: This Claim Is Overstated And Incorrect

While Google reports “People are using it to search…longer and more complex queries,” a closer look shows otherwise.

The data shows only minimal changes in query length in the US, with the UK seeing a decrease after AIOs rolled out.

The data simply doesn’t support the narrative that AI Overviews are driving users to construct “longer and more complex queries” in any meaningful way.

When we examine the data closely, a clear pattern emerges:

Google’s claims about how AI Overviews are fundamentally changing how we search are largely overstated.

Yes, users visit Google more frequently, but they’re spending less time per visit and not crafting significantly longer or more complex queries.

This suggests AI Overviews are creating a “quick answer” behavior pattern rather than deeper engagement with search.

The modest increases in visits are counterbalanced by decreases in time-on-site and pages-per-visit.

And the minimal change in query length across all markets – regardless of whether AI Overviews have launched – indicates that any evolution in search behavior is happening independently of AI features.

These findings matter because they challenge the narrative that AI Overviews represent a revolutionary enhancement to search.

Instead, they’re changing user interaction patterns in ways that Google hasn’t fully acknowledged.

Keep in mind that I’m working with third-party data, which can always be skewed or partial. I don’t think it’s wrong, but we always need to keep the limitations of the data in mind.

  1. Optimize for the new “visit more, stay less” pattern: Users are more frequently turning to Google, but they’re spending less time seeking answers. Your content strategy should focus on both being accurately represented in AI Overviews and providing deeper value that encourages clicks when the overview isn’t sufficient.
  2. Focus on engagement quality: The pattern suggests users are more selective about clicking through, making the quality of experience more important than ever when they do reach your site.
  3. Factor in regional differences: The significant variations between U.S. and UK behavior after AI Overview launches suggest regional testing is essential – what works in one market may not transfer directly to others.

In Part 2, we’ll explore the even more critical question: What happens to the broader web ecosystem when users get their answers directly on Google rather than clicking through to websites?

The answer will reveal whether Google’s claims about “growing traffic to the ecosystem” hold up to scrutiny.


1 Google I/O 2024: An I/O for a new generation

2 CNBC Exclusive: CNBC Transcript: Alphabet CEO Sundar Pichai Speaks with CNBC’s Deirdre Bosa on “Closing Bell: Overtime” Today

3 2024 Q3 Earnings Call

4 A 360-Degree View into the Digital Landscape


Featured Image: Paulo Bobita/Search Engine Journal

Reddit Mods Accuse AI Researchers Of Impersonating Sexual Assault Victims via @sejournal, @martinibuster

Researchers testing the ability of AI to influence people’s opinions violated the ChangeMyView subreddit’s rules and used deceptive practices that allegedly were not approved by their ethics committee, including impersonating victims of sexual assault and using background information about Reddit users to manipulate them.

The researchers argued that running the experiment under controlled conditions, with participants who knew they might be talking to an AI, could have introduced biases. Their solution was to introduce AI bots into a live environment without telling forum members they were interacting with a bot. Their audience was unsuspecting Reddit users in the Change My View (CMV) subreddit (r/ChangeMyView), even though this violated the subreddit’s rules, which prohibit undisclosed AI bots.

After the research was finished, the researchers disclosed the deception to the Reddit moderators, who subsequently posted a notice about it in the subreddit, along with a draft copy of the completed research paper.

Ethical Questions About Research Paper

The CMV moderators posted a discussion that underlines that the subreddit prohibits undisclosed bots and that permission to conduct this experiment would never have been granted:

“CMV rules do not allow the use of undisclosed AI generated content or bots on our sub. The researchers did not contact us ahead of the study and if they had, we would have declined. We have requested an apology from the researchers and asked that this research not be published, among other complaints. As discussed below, our concerns have not been substantively addressed by the University of Zurich or the researchers.”

The fact that the researchers violated Reddit’s rules was completely absent from the research paper.

Researchers Claim Research Was Ethical

While the researchers omit that the research broke the subreddit’s rules, they create the impression that it was ethical by stating that their research methodology was approved by an ethics committee and that all generated comments were checked to ensure they were not harmful or unethical:

“In this pre-registered study, we conduct the first large-scale field experiment on LLMs’ persuasiveness, carried out within r/ChangeMyView, a Reddit community of almost 4M users and ranking among the top 1% of subreddits by size. In r/ChangeMyView, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation. If the original poster (OP) finds a response convincing enough to reconsider or modify their stance, they award a ∆ (delta) to acknowledge their shift in perspective.

…The study was approved by the University of Zurich’s Ethics Committee… Importantly, all generated comments were reviewed by a researcher from our team to ensure no harmful or unethical content was published.”

The moderators of the ChangeMyView subreddit dispute the researchers’ claim to the ethical high ground:

“During the experiment, researchers switched from the planned “values based arguments” originally authorized by the ethics commission to this type of “personalized and fine-tuned arguments.” They did not first consult with the University of Zurich ethics commission before making the change. Lack of formal ethics review for this change raises serious concerns.”

Why Reddit Moderators Believe Research Was Unethical

The Change My View subreddit moderators raised multiple concerns about why they believe the researchers engaged in a grave breach of ethics, including impersonating victims of sexual assault. They argue that this qualifies as “psychological manipulation” of the original posters (OPs), the people who started each discussion.

The Reddit moderators posted:

“The researchers argue that psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge. However, If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same. Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.

AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform. Here is an excerpt from the draft conclusions of the research.

Personalization: In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.

Some high-level examples of how AI was deployed include:

  • AI pretending to be a victim of rape
  • AI acting as a trauma counselor specializing in abuse
  • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
  • AI posing as a black man opposed to Black Lives Matter
  • AI posing as a person who received substandard care in a foreign hospital.”

The moderator team has filed a complaint with the University of Zurich.

Are AI Bots Persuasive?

The researchers discovered that AI bots are highly persuasive and do a better job of changing people’s minds than humans can.

The research paper explains:

“Implications. In a first field experiment on AI-driven persuasion, we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.”

One of the findings was that humans were unable to identify when they were talking to a bot, and (unironically) the researchers encourage social media platforms to deploy better ways to identify and block AI bots:

“Incidentally, our experiment confirms the challenge of distinguishing human from AI-generated content… Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets… which could seamlessly blend into online communities.

Given these risks, we argue that online platforms must proactively develop and implement robust detection mechanisms, content verification protocols, and transparency measures to prevent the spread of AI-generated manipulation.”

Takeaways:

  • Ethical Violations in AI Persuasion Research
    Researchers conducted a live AI persuasion experiment without Reddit’s consent, violating subreddit rules and allegedly violating ethical norms.
  • Disputed Ethical Claims
    Researchers claim ethical high ground by citing ethics board approval but omitted citing rule violations; moderators argue they engaged in undisclosed psychological manipulation.
  • Use of Personalization in AI Arguments
    AI bots allegedly used scraped personal data to create highly tailored arguments targeting Reddit users.
  • Reddit Moderators Allege Profoundly Disturbing Deception
    The Reddit moderators claim that the AI bots impersonated sexual assault victims, trauma counselors, and other emotionally charged personas in an effort to manipulate opinions.
  • AI’s Superior Persuasiveness and Detection Challenges
    The researchers claim that AI bots proved more persuasive than humans and remained undetected by users, raising concerns about future bot-driven manipulation.
  • Research Paper Inadvertently Makes Case For Why AI Bots Should Be Banned From Social Media
    The study highlights the urgent need for social media platforms to develop tools for detecting and verifying AI-generated content. Ironically, the research paper itself is a reason why AI bots should be more aggressively banned from social media and forums.

Researchers from the University of Zurich tested whether AI bots could persuade people more effectively than humans by secretly deploying personalized AI arguments on the ChangeMyView subreddit without user consent, violating platform rules and allegedly going outside the ethical standards approved by their university ethics board. Their findings show that AI bots are highly persuasive and difficult to detect, but the way the research itself was conducted raises ethical concerns.

Read the concerns posted by the ChangeMyView subreddit moderators:

Unauthorized Experiment on CMV Involving AI-generated Comments

Featured Image by Shutterstock/Ausra Barysiene and manipulated by author

How LLMs Interpret Content: How To Structure Information For AI Search via @sejournal, @cshel

In the SEO world, when we talk about how to structure content for AI search, we often default to structured data – Schema.org, JSON-LD, rich results, knowledge graph eligibility – the whole shooting match.

While that layer of markup is still useful in many scenarios, this isn’t another article about how to wrap your content in tags.

Structuring Content Isn’t The Same As Structured Data

Instead, we’re going deeper into something more fundamental and arguably more important in the age of generative AI: How your content is actually structured on the page and how that influences what large language models (LLMs) extract, understand, and surface in AI-powered search results.

Structured data is optional. Structured writing and formatting are not.

If you want your content to show up in AI Overviews, Perplexity summaries, ChatGPT citations, or any of the increasingly common “direct answer” features driven by LLMs, the architecture of your content matters: Headings. Paragraphs. Lists. Order. Clarity. Consistency.

In this article, I’m unpacking how LLMs interpret content — and what you can do to make sure your message is not just crawled, but understood.

How LLMs Actually Interpret Web Content

Let’s start with the basics.

Unlike traditional search engine crawlers that rely heavily on markup, metadata, and link structures, LLMs interpret content differently.

They don’t scan a page the way a bot does. They ingest it, break it into tokens, and analyze the relationships between words, sentences, and concepts using attention mechanisms.
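To see what tokenization looks like in practice, here is a tiny sketch using the tiktoken library; the cl100k_base encoding is one common choice for illustration, not necessarily what any given search system uses:

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
tokens = encoding.encode("LLMs ingest text as tokens, not tags.")
print(tokens)       # integer token IDs
print(len(tokens))  # how many tokens the model actually processes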

They’re not looking for a tag or a JSON-LD snippet to tell them what a page is about. They’re looking for semantic clarity: Does this content express a clear idea? Is it coherent? Does it answer a question directly?

LLMs like GPT-4 or Gemini analyze:

  • The order in which information is presented.
  • The hierarchy of concepts (which is why headings still matter).
  • Formatting cues like bullet points, tables, bolded summaries.
  • Redundancy and reinforcement, which help models determine what’s most important.

This is why poorly structured content – even if it’s keyword-rich and marked up with schema – can fail to show up in AI summaries, while a clear, well-formatted blog post without a single line of JSON-LD might get cited or paraphrased directly.

Why Structure Matters More Than Ever In AI Search

Traditional search was about ranking; AI search is about representation.

When a language model generates a response to a query, it’s pulling from many sources – often sentence by sentence, paragraph by paragraph.

It’s not retrieving a whole page and showing it. It’s building a new answer based on what it can understand.

What gets understood most reliably?

Content that is:

  • Segmented logically, so each part expresses one idea.
  • Consistent in tone and terminology.
  • Presented in a format that lends itself to quick parsing (think FAQs, how-to steps, definition-style intros).
  • Written with clarity, not cleverness.

AI search engines don’t need schema to pull a step-by-step answer from a blog post.

But, they do need you to label your steps clearly, keep them together, and not bury them in long-winded prose or interrupt them with calls to action, pop-ups, or unrelated tangents.

Clean structure is now a ranking factor – not in the traditional SEO sense, but in the AI citation economy we’re entering.

What LLMs Look For When Parsing Content

Here’s what I’ve observed (both anecdotally and through testing across tools like Perplexity, ChatGPT Browse, Bing Copilot, and Google’s AI Overviews):

  • Clear Headings And Subheadings: LLMs use heading structure to understand hierarchy. Pages with proper H1–H2–H3 nesting are easier to parse than walls of text or div-heavy templates.
  • Short, Focused Paragraphs: Long paragraphs bury the lede. LLMs favor self-contained thoughts. Think one idea per paragraph.
  • Structured Formats (Lists, Tables, FAQs): If you want to get quoted, make it easy to lift your content. Bullets, tables, and Q&A formats are goldmines for answer engines.
  • Defined Topic Scope At The Top: Put your TL;DR early. Don’t make the model (or the user) scroll through 600 words of brand story before getting to the meat.
  • Semantic Cues In The Body: Words like “in summary,” “the most important,” “step 1,” and “common mistake” help LLMs identify relevance and structure. There’s a reason so much AI-generated content uses those “giveaway” phrases. It’s not because the model is lazy or formulaic. It’s because it actually knows how to structure information in a way that’s clear, digestible, and effective, which, frankly, is more than can be said for a lot of human writers.

A Real-World Example: Why My Own Article Didn’t Show Up

In December 2024, I wrote a piece about the relevance of schema in AI-first search.

It was structured for clarity and timeliness, and it was highly relevant to this conversation, but it didn’t show up in my research queries for this article (the one you are presently reading). The reason? I didn’t use the term “LLM” in the title or slug.

All of the articles returned in my search had “LLM” in the title. Mine said “AI Search” but didn’t mention LLMs explicitly.

You might assume that a large language model would understand “AI search” and “LLMs” are conceptually related – and it probably does – but understanding that two things are related and choosing what to return based on the prompt are two different things.

Where does the model get its retrieval logic? From the prompt. It interprets your question literally.

If you say, “Show me articles about LLMs using schema,” it will surface content that directly includes “LLMs” and “schema” – not necessarily content that’s adjacent, related, or semantically similar, especially when it has plenty to choose from that contains the words in the query (a.k.a. the prompt).

So, even though LLMs are smarter than traditional crawlers, retrieval is still rooted in surface-level cues.

This might sound suspiciously like keyword research still matters – and yes, it absolutely does. Not because LLMs are dumb, but because search behavior (even AI search) still depends on how humans phrase things.

The retrieval layer – the layer that decides what’s eligible to be summarized or cited – is still driven by surface-level language cues.

What Research Tells Us About Retrieval

Even recent academic work supports this layered view of retrieval.

A 2023 research paper by Doostmohammadi et al. found that simpler, keyword-matching techniques, like a method called BM25, often led to better results than approaches focused solely on semantic understanding.

The improvement was measured through a drop in perplexity, which tells us how confident or uncertain a language model is when predicting the next word.

In plain terms: Even in systems designed to be smart, clear and literal phrasing still made the answers better.
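For readers who want to see what BM25-style retrieval looks like, here is a minimal sketch using the rank_bm25 package; the three-document corpus and whitespace tokenization are simplifications, not the setup from the paper:

from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "how llms use schema markup in ai search",
    "structuring content for ai search",
    "a history of search engines",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])
print(bm25.get_scores("llms schema".split()))  # literal term overlap drives the scores

Note how the first document wins simply because it contains both query terms, which mirrors the literal retrieval behavior described above.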

So, the lesson isn’t just to use the language they’ve been trained to recognize. The real lesson is: If you want your content to be found, understand how AI search works as a system – a chain of prompts, retrieval, and synthesis. Plus, make sure you’re aligned at the retrieval layer.

This isn’t about the limits of AI comprehension. It’s about the precision of retrieval.

Language models are incredibly capable of interpreting nuanced content, but when they’re acting as search agents, they still rely on the specificity of the queries they’re given.

That makes terminology, not just structure, a key part of being found.

How To Structure Content For AI Search

If you want to increase your odds of being cited, summarized, or quoted by AI-driven search engines, it’s time to think less like a writer and more like an information architect – and structure content for AI search accordingly.

That doesn’t mean sacrificing voice or insight, but it does mean presenting ideas in a format that makes them easy to extract, interpret, and reassemble.

Core Techniques For Structuring AI-Friendly Content

Here are some of the most effective structural tactics I recommend:

Use A Logical Heading Hierarchy

Structure your pages with a single clear H1 that sets the context, followed by H2s and H3s that nest logically beneath it.

LLMs, like human readers, rely on this hierarchy to understand the flow and relationship between concepts.

If every heading on your page is an H1, you’re signaling that everything is equally important, which means nothing stands out.

Good heading structure is not just semantic hygiene; it’s a blueprint for comprehension.
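One quick way to audit this is to print a page’s heading outline. Here is a small sketch with requests and BeautifulSoup; the URL is a placeholder, and this is a diagnostic aid, not part of any ranking system:

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/article", timeout=10).text
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all(["h1", "h2", "h3"]):
    level = int(tag.name[1])  # h1 -> 1, h2 -> 2, h3 -> 3
    print("  " * (level - 1) + tag.name.upper() + ": " + tag.get_text(strip=True))

If the printed outline reads like a sensible table of contents, an LLM will have an easier time following it, too.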

Keep Paragraphs Short And Self-Contained

Every paragraph should communicate one idea clearly.

Walls of text don’t just intimidate human readers; they also increase the likelihood that an AI model will extract the wrong part of the answer or skip your content altogether.

This is closely tied to readability metrics like the Flesch Reading Ease score, which rewards shorter sentences and simpler phrasing.

While it may pain those of us who enjoy a good, long, meandering sentence (myself included), clarity and segmentation help both humans and LLMs follow your train of thought without derailing.
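For reference, here is the standard Flesch Reading Ease formula as a small function; the example counts are made up for illustration, and syllable counting (the hard part in practice) is assumed as an input:

def flesch_reading_ease(words, sentences, syllables):
    # Standard formula: higher scores mean easier reading
    # (90+ is very easy, 30 and below is very difficult).
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

print(flesch_reading_ease(words=120, sentences=8, syllables=170))  # ~71.8, "fairly easy"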

Use Lists, Tables, And Predictable Formats

If your content can be turned into a step-by-step guide, numbered list, comparison table, or bulleted breakdown, do it. AI summarizers love structure, and so do users.

Frontload Key Insights

Don’t save your best advice or most important definitions for the end.

LLMs tend to prioritize what appears early in the content. Give your thesis, definition, or takeaway up top, then expand on it.

Use Semantic Cues

Signal structure with phrasing like “Step 1,” “In summary,” “Key takeaway,” “Most common mistake,” and “To compare.”

These phrases help LLMs (and readers) identify the role each passage plays.

Avoid Noise

Interruptive pop-ups, modal windows, endless calls-to-action (CTAs), and disjointed carousels can pollute your content.

Even if the user closes them, they’re often still present in the Document Object Model (DOM), and they dilute what the LLM sees.

Think of your content like a transcript: What would it sound like if read aloud? If it’s hard to follow in that format, it might be hard for an LLM to follow, too.

The Role Of Schema: Still Useful, But Not A Magic Bullet

Let’s be clear: Structured data still has value. It helps search engines understand content, populate rich results, and disambiguate similar topics.

However, LLMs don’t require it to understand your content.

If your site is a semantic dumpster fire, schema might save you, but wouldn’t it be better to avoid building a dumpster fire in the first place?

Schema is a helpful boost, not a magic bullet. Prioritize clear structure and communication first, and use markup to reinforce – not rescue – your content.

How Schema Still Supports AI Understanding

That said, Google has recently confirmed that its LLM (Gemini), which powers AI Overviews, does leverage structured data to help understand content more effectively.

In fact, John Mueller stated that schema markup is “good for LLMs” because it gives models clearer signals about intent and structure.

That doesn’t contradict the point; it reinforces it. If your content isn’t already structured and understandable, schema can help fill the gaps. It’s a crutch, not a cure.

Schema is a helpful boost, but not a substitute, for structure and clarity.

In AI-driven search environments, we’re seeing content without any structured data show up in citations and summaries because the core content was well-organized, well-written, and easily parsed.

In short:

  • Use schema when it helps clarify the intent or context.
  • Don’t rely on it to fix bad content or a disorganized layout.
  • Prioritize content quality and layout before markup.

The future of content visibility is built on how well you communicate, not just how well you tag.

Conclusion: Structure For Meaning, Not Just For Machines

Optimizing for LLMs doesn’t mean chasing new tools or hacks. It means doubling down on what good communication has always required: clarity, coherence, and structure.

If you want to stay competitive, you’ll need to structure content for AI search just as carefully as you structure it for human readers.

The best-performing content in AI search isn’t necessarily the most optimized. It’s the most understandable. That means:

  • Anticipating how content will be interpreted, not just indexed.
  • Giving AI the framework it needs to extract your ideas.
  • Structuring pages for comprehension, not just compliance.
  • Anticipating and using the language your audience uses, because LLMs respond literally to prompts and retrieval depends on those exact terms being present.

As search shifts from links to language, we’re entering a new era of content design. One where meaning rises to the top, and the brands that structure for comprehension will rise right along with it.



Featured Image: Igor Link/Shutterstock

7 AI Terms Microsoft Wants You to Know In 2025 via @sejournal, @MattGSouthern

Microsoft released its 2025 Annual Work Trend Index this week.

The report claims this is the year companies will move beyond AI experiments and rebuild their core operations around AI.

Microsoft also introduced several new terms that it believes will shape the future of the workplace.

Let’s look at what Microsoft wants to add to your work vocabulary. Keep in mind that Microsoft has invested heavily in AI, so it has good reasons to normalize these concepts.

The Microsoft AI Dictionary

1. The “Frontier Firm”

Microsoft says “Frontier Firms” are organizations built around on-demand AI, human-agent teams, and employees who act as “agent bosses.”

The report claims 71% of workers at these AI-forward companies say their organizations are thriving. That’s much higher than the global average of just 37%.

2. “Intelligence on Tap”

This refers to AI that’s easily accessible whenever needed. Microsoft calls it “abundant, affordable, and scalable on-demand.”

The company suggests AI is now a resource that isn’t limited by staff size or expertise but can be purchased and used as needed, conveniently through Microsoft’s products.

3. “The Capacity Gap”

This term refers to the growing disparity between what businesses require and what humans can provide.

Microsoft’s research indicates that 53% of leaders believe productivity must increase, while 80% of workers report a lack of time or energy to complete their work. They suggest that AI tools can fill this gap.

4. “Work Charts”

Forget traditional org charts. Microsoft envisions more flexible “Work Charts” that adapt to business needs by leveraging both human workers and AI.

These structures focus on results rather than rigid hierarchies. They allow companies to use the best mix of human and AI workers for each task.

5. “Human-Agent Ratio”

This term refers to the balance between AI agents and human workers required for optimal results.

Microsoft suggests that leaders need to determine the number of AI agents required for specific roles and the number of humans who should guide those agents. This essentially redefines how companies staff their teams.

6. “Agent Boss”

Perhaps the most interesting term is that of an “agent boss,” someone who builds, assigns tasks to, and manages AI agents to boost their impact and advance their career.

Microsoft predicts that within five years, teams will be training (41%) and managing (36%) AI agents as a regular part of their jobs.

7. “Digital Labor”

This is Microsoft’s preferred term for AI-powered work automation. Microsoft positions AI not as a replacement for humans, but as an addition to the workforce.

The report states that 82% of leaders plan to use digital labor to expand their workforce within the next year and a half.

However, this shift towards AI-powered work automation raises important questions about job displacement, the need for retraining, and the ethical use of AI.

These considerations are crucial as we navigate this new era of work.

Behind the Terminology

These terms reveal Microsoft’s vision for embedding AI deeper into workplace operations, with its products leading the way.

The company also announced updates to Microsoft 365 Copilot, including:

  • New Researcher and Analyst agents
  • An AI image generator
  • Copilot Notebooks
  • Enhanced search functions

Jared Spataro, Microsoft’s CMO of AI at Work, states in the report:

“2025 will be remembered as the year the Frontier Firm was born — the moment companies moved beyond experimenting with AI and began rebuilding around it.”

Looking Ahead

While Microsoft’s terms may or may not stick, the trends it describes are already changing digital marketing.

Whether you embrace the title “agent boss” or not, knowing how to use AI tools while maintaining human creativity will likely become essential in the changing marketing workforce.

Will Microsoft’s vision of “Frontier Firms” happen exactly as they describe? Time and the number of people who adopt these ideas will tell.


Featured Image: Drawlab19/Shutterstock

The CMO’s Guide To Winning In AI Search With Ahrefs [Webinar] via @sejournal, @lorenbaker

What happens when no one clicks, but your business still needs to grow?

In the age of AI answer engines and fewer clicks, your brand can’t afford to be invisible.

It’s time to rethink how people find, remember, and trust your brand online.

Join us for “The CMO’s Guide to Winning in AI Search with Ahrefs,” a strategy session designed to help you stay visible, profitable, and one step ahead in 2025.

Why This Webinar Is A Must-Attend Event:

AI-first search is changing the rules. We’re giving you the roadmap to adapt and thrive.

In this session, you’ll learn how to:

  • Track the right brand awareness metrics that connect visibility to profit.
  • Increase your presence in AI Overviews, SERPs, and AI-generated answers.
  • Automate smart AI marketing tactics to grow across multiple platforms.

Featuring Andrei Țiț, Product Marketer at Ahrefs, who’ll guide you through proven techniques for standing out even when clicks are harder to come by.

Why You Can’t Miss This:

This isn’t just about SEO anymore. It’s about building a brand that people seek out, no matter how they search.

Live Q&A: Stick around after the demo to get your questions answered directly by Andrei.

Can’t make it live? Register anyway, and we’ll send you the recording.

Let’s future-proof your brand strategy together.

AI Use Jumps to 78% Among Businesses As Costs Drop via @sejournal, @MattGSouthern

Stanford University’s latest AI Index Report reveals a significant increase in AI adoption among businesses.

Now 78% of organizations use AI, up from 55% a year ago. At the same time, the cost of using AI has dropped, becoming 280 times cheaper in less than two years.

More Businesses Than Ever Are Using AI

The latest report, now in its eighth year, shows a turning point for AI in business.

The number of organizations using generative AI in at least one business area more than doubled, from 33% in 2023 to 71% in 2024.

“Business is all in on AI, fueling record investment and usage,” the report states.

In 2024, U.S. companies invested $109.1 billion in AI, nearly 12 times more than China’s $9.3 billion and 24 times more than the U.K.’s $4.5 billion.

AI Costs Are Dropping

One reason more companies are using AI is that it’s becoming increasingly affordable. The report indicates that the cost of running AI queries has decreased significantly.

The report highlights:

“The cost of querying an AI model that performs like GPT-3.5 dropped from $20.00 per million tokens in November 2022 to just $0.07 per million tokens by October 2024.”

That’s roughly 280 times cheaper ($20.00 ÷ $0.07 ≈ 286) in about two years.

Prices have dropped between 9 and 900 times per year, depending on the use case for AI. This makes powerful AI tools much more affordable for companies of all sizes.

Regional Differences and Business Impact

Different regions are adopting AI at different rates.

North America remains the leader, but Greater China has shown the most significant jump, with a 27-point increase in company AI use. Europe was next with a 23-point increase.

For marketing teams, AI is starting to show financial benefits. About 71% of companies using AI in marketing and sales report increased revenue, although most say the increase is less than 5%.

This suggests that while AI is helping, most companies are still figuring out how to use it best.

What This Means for Marketers & SEO Pros

These findings matter for several reasons:

  1. The drop in AI costs means powerful tools are getting more affordable, even for smaller teams.
  2. Companies report that AI boosts productivity and helps bridge skill gaps. This can enable you to accomplish more with limited resources.
  3. The report notes that “smaller models drive stronger performance.” Today’s models are 142 times smaller than the 2022 versions, so more AI tools can run on regular computers.

The 2025 AI Index Report makes clear that AI is no longer an experimental technology; it’s a mainstream business tool. For marketers, the question isn’t whether to use AI, but how to use it effectively to stay ahead of competitors.

For more insights, see the full report.


Featured Image: kanlaya wanon/Shutterstock

How Is Answer Engine Optimization Different From SEO? via @sejournal, @Kevin_Indig


Is doing SEO enough for AI chatbot visibility?

The industry is divided down the middle: Half believe that optimization for large language models (LLMs) requires new strategies, while the other half insists good SEO already handles it.

This division has spawned new acronyms like GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) – terms that are equally loved and hated.

To settle this debate, I analyzed Similarweb data comparing Google organic traffic with ChatGPT brand visibility across four product categories.

The result?

The top search players are not always the top ChatGPT players, and the overlap varies between product categories.

This is important to understand because:

  1. There are real optimization opportunities for some categories, and
  2. You might miss them if you dismiss optimization for ChatGPT as “just doing good SEO.”

At the same time, we can apply the same tactics to ChatGPT as to Google.

I like to think of how we describe the differences between SEO and GEO/AEO like this:

SEO and GEO/AEO are like pianos and guitars.

They’re both instruments. They both make music. And they both share fundamental principles (notes, scales, harmony) that a musician must master to properly play both.

About The Data

Big thanks to Similarweb, especially Adelle and Sam, for sharing the data with me.

Here’s what I reviewed for this particular analysis:

  • Organic search traffic vs. AI chatbots visibility across four product categories:
    • Credit cards (finance).
    • Earbuds (tech).
    • CRM (software).
    • Handbags (fashion).
  • Methodology: Similarweb categorizes ChatGPT conversations based on their content and identifies the most common brands in ChatGPT’s responses.
  • In total, the data covers 69.9 million clicks.

SEO Vs. GEO/AEO: Same, Same, But Different

If GEO/AEO and SEO were the same, the same sites getting organic traffic would also get the most citations/mentions in LLMs.

That’s only true in a few cases, but not for the overall picture.

Credit Cards

Image Credit: Kevin Indig
  • chase.com — 6.6 million clicks | 13.6% ChatGPT visibility
  • reddit.com — 5.2 million clicks | 0% ChatGPT visibility
  • capitalone.com — 4.1 million clicks | 10.3% ChatGPT visibility
  • citibankonline.com — 3.2 million clicks | 4.4% ChatGPT visibility
  • comenity.net — 2.9 million clicks | 0% ChatGPT visibility
Image Credit: Kevin Indig
  • google.com — 337,000 clicks | 20.3% ChatGPT visibility
  • paypal.com — 209,000 clicks | 19.7% ChatGPT visibility
  • americanexpress.com — 1.3 million clicks | 16.9% ChatGPT visibility
  • visa.com — 116,000 clicks | 15.7% ChatGPT visibility
  • chase.com — 6.6 million clicks | 13.6% ChatGPT visibility

Handbags

Image Credit: Kevin Indig
  • reddit.com — 241,000 clicks | 0% ChatGPT visibility
  • youtube.com — 152,000 clicks | 5.3% ChatGPT visibility
  • amazon.com — 77,000 clicks | 9.8% ChatGPT visibility
  • nordstrom.com — 51,000 clicks | 0% ChatGPT visibility
  • coach.com — 48,000 clicks | 6.1% ChatGPT visibility
Image Credit: Kevin Indig
  • target.com — 7,000 clicks | 24.2% ChatGPT visibility
  • instagram.com — 7,000 clicks | 13.8% ChatGPT visibility
  • louisvuitton.com — 27,000 clicks | 10.0% ChatGPT visibility
  • gucci.com — 15,000 clicks | 9.9% ChatGPT visibility
  • amazon.com — 77,000 clicks | 9.8% ChatGPT visibility

Earbuds

Image Credit: Kevin Indig
  • reddit.com — 1.2 million clicks | 0% ChatGPT visibility
  • youtube.com — 868,000 clicks | 7.1% ChatGPT visibility
  • cnet.com — 512,000 clicks | 0% ChatGPT visibility
  • amazon.com — 474,000 clicks | 15.1% ChatGPT visibility
  • bose.com — 407,000 clicks | 10.2% ChatGPT visibility
Image Credit: Kevin Indig
  • apple.com — 152,000 clicks | 16.8% ChatGPT visibility
  • amazon.com — 474,000 clicks | 15.1% ChatGPT visibility
  • bose.com — 407,000 clicks | 10.2% ChatGPT visibility
  • wired.com — 120,000 clicks | 9.5% ChatGPT visibility
  • google.com — 31,000 clicks | 9.5% ChatGPT visibility

CRM

Image Credit: Kevin Indig
  • zoho.com — 314,000 clicks | 8.7% ChatGPT visibility
  • salesforce.com — 225,000 clicks | 33.8% ChatGPT visibility
  • sfgcrm.com — 188,000 clicks | 0% ChatGPT visibility
  • yahoo.com — 179,000 clicks | 0% ChatGPT visibility
  • youtube.com — 167,000 clicks | 4.1% ChatGPT visibility
Image Credit: Kevin Indig
  • salesforce.com — 225,000 clicks | 33.8% ChatGPT visibility
  • google.com — 30,000 clicks | 25.8% ChatGPT visibility
  • hubspot.com — 104,000 clicks | 22.5% ChatGPT visibility
  • linkedin.com — 36,000 clicks | 20.7% ChatGPT visibility
  • facebook.com — 7,000 clicks | 10.1% ChatGPT visibility

The data shows that the top organic domains (by clicks) are not the ones getting the most mentions in ChatGPT.

As a result, just doing good SEO is not enough for LLM visibility when we look at specific domains.

Broad relationships between organic clicks and ChatGPT mentions tell a more nuanced story.

Whether or not “just doing good SEO” will be successful for LLM visibility can depend on the vertical or category.

Image Credit: Kevin Indig

In some verticals, AI chatbot optimization can really move the needle. In others, it might not help much.

Earbuds and CRM have a strong correlation between clicks and ChatGPT visibility.

Credit cards and handbags have a weak one.

In other words, credit cards and handbags are a much more open playing field for LLM optimization.

So clearly, that’s where optimizing for LLMs has the biggest payoff.
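
If you want to check this dynamic in your own vertical, the relationship is just a Pearson correlation between two lists of numbers. Here’s a minimal Python sketch; the click and visibility figures below are hypothetical placeholders, not this study’s dataset, and the article’s correlations were computed over far more domains than the excerpts above.

# Minimal sketch: quantify how organic clicks relate to ChatGPT visibility.
# The numbers below are hypothetical placeholders, not data from this study.
from statistics import correlation  # Pearson's r, Python 3.10+

clicks = [5_200_000, 2_100_000, 900_000, 400_000, 150_000]  # organic clicks per domain
visibility = [3.0, 12.5, 0.0, 18.2, 9.4]  # ChatGPT visibility (%) per domain

r = correlation(clicks, visibility)
print(f"Pearson r between clicks and ChatGPT visibility: {r:.2f}")

A value near 1 means the SEO winners are also the ChatGPT winners (as in earbuds and CRM); a value near 0 means the playing field is open (as in credit cards and handbags).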

What Makes A Category Worthy Of Visibility Optimization?

The differentiator is unclear.

The factors that likely play a role are:

  • Product specs.
  • Reviews.
  • Developer docs.
  • Regulatory language.
  • Ad spend.

But ultimately, we need more data to understand when product categories have a high or low overlap between AI visibility and organic search.

Image Credit: Kevin Indig

Beyond the correlation between organic and AI visibility, some categories show stronger winner-takes-all dynamics than others.

In the CRM category, for example, three brands get almost 50% of visibility: Salesforce, HubSpot, and Google.

These dynamics seem to reflect market share – the CRM space is heavily dominated by Salesforce, HubSpot, and Google. (Google even thought about buying HubSpot, remember?)

In “What content works well in LLMs?”, I found that brand popularity has the strongest relationship with LLM visibility:

After matching many metrics with AI chatbot visibility, I found one factor that stands out more than anything else: brand search volume. The number of AI chatbot mentions and brand search volume have a correlation of .334 – pretty good in this field. In other words, the popularity of a brand broadly decides how visible it is in AI chatbots.

This effect is reflected here as well, and contextualized by market share.

Plainly, the more fragmented a category is, the higher the chance of gaining ChatGPT visibility.

This is great news for organizations or brands in emerging industries or products where there is plenty of room for competition.

However, categories that are dominated by a few brands are harder to optimize for LLM visibility, probably because there is already so much content on the web about these incumbents.

If you’re thinking, “Well, that’s not new, Kevin. That’s true of SEO, too,” I get it.

This information might feel fairly intuitive, but I’ve seen smaller brands and startups that invest heavily in high-quality SEO find their way to the top of search results.

What today’s data shows is that optimizing for LLM visibility will be even more challenging in well-established verticals where long-trusted incumbents dominate.

And depending on what vertical your site sits in, you’ll need to develop your organic visibility strategy accordingly.

So, here are the main takeaways from my findings:

  • It’s risky to dismiss AEO/GEO altogether. You could assume that no action is needed when you’re winning in “classic SEO,” but that would open the door to competitors taking your spot in ChatGPT.
  • Don’t pivot or panic if you’re already winning. It’s also not helpful to reflexively change tactics or practices in attempts to optimize for ChatGPT when you’re already doing well. Start brainstorming plans for changes (algorithms do change, after all), but no need to reinvent the wheel just yet.
  • Prioritize content and PR investments for ChatGPT when the overlap with organic search is low across your most important prompts. Now’s the time to get the ball rolling on this. Record your actions and your results, and find out what works in your vertical.

The Biggest Differences Between SEO And GEO/AEO

Half of the community wants to put a new label on SEO; half says it’s the same.

Here’s where I think the disconnect stems from:

The fundamental principles overlap, but the implementation and context differ significantly.

Both SEO and GEO/AEO rely on these core elements:

  • Technical accessibility: Both require content to be easily crawlable and indexable (with JavaScript often creating challenges for both, though currently more problematic for LLM crawlers).
  • Content quality: High-quality, comprehensive, and accurate content performs better in both environments.
  • Authority signals: While implemented differently, both systems rely on signals that indicate trustworthiness and expertise.

Despite these shared foundations, how you optimize is different:

  1. User intent and query patterns: AI chatbots handle longer prompts where users express detailed intent, which requires more specific content that addresses nuanced questions. Google is moving in this direction with AI Overviews, but it still primarily serves shorter queries.
  2. Signal weighting and ranking factors: AI chatbots give significantly more weight to overall brand popularity and volume of mentions. Google has more robust ways to measure and incorporate user satisfaction (Chrome data, click patterns, return-to-search rates). In another study I’m working on, trends indicate search results are more stable and the emphasis on content freshness is higher.
  3. Quality and safety guardrails: Google has developed specific criteria for YMYL (Your Money Your Life) content that AI chatbots haven’t fully replicated. LLMs currently lack sophisticated spam detection and penalty systems.
  4. Rich results: Google uses a variety of SERP features to format different content types. ChatGPT only incorporates rich formatting for some content (maps, videos).

And like I mentioned at the start, SEO and GEO/AEO are like pianos and guitars.

They share fundamental musical principles, but require different techniques and additional knowledge to play both effectively.

And essentially, classic SEO professionals will need to train as multi-instrumentalists over time.

Strategic Adaptation, Not Reinvention – Yet

Despite the different dynamics, SEO and GEO/AEO call for the same core optimizations:

  • Create better content.
  • Provide unique perspectives.
  • Increase your brand strength.
  • Ensure your site is properly crawled and indexed.

The difference lies in how much attention you should pay to certain content categories and how resource allocation works.

Rather than creating an entirely new practice, it’s about understanding when and how to prioritize your efforts.

By the way, I also think it’s too early to coin a new acronym.

The AI and chatbot landscape is evolving rapidly, and so is search. We haven’t reached the final form of AI yet.

In some verticals with low correlation between search and AI visibility, there’s a significant opportunity to stand out.

In others, your SEO efforts may already be giving you the visibility you need across both channels.

But I do expect GEO/AEO to differ more from SEO over time.

Why? The signals OpenAI gets from interaction with its models and from the richness of prompts should allow it to develop its own weighting signals for brands and answers.

OpenAI gets much better inputs to train its models.

As a result, it should be able to either:

  1. Develop its own web index that it can use to ground answers in facts, or
  2. Develop a whole new system of grounding rules.

What Should You Do Right Now?

Focus on understanding your category’s specific dynamics.

Are the SEO leaders in your category also dominating prompts on ChatGPT?

If so, focus on becoming a leader in search results.

If not, focus on becoming a leader in search and invest in monitoring and optimizing your visibility across relevant ChatGPT prompts with targeted content, PR campaigns, content syndication, and content repurposing across different formats.

And until we all see this technology evolve and distinguish itself further from traditional organic search, I say we just all stick with SEO as our agreed-upon acronym for what we do…

…at least for now.


Featured Image: Paulo Bobita/Search Engine Journal

How To Build Consensus Online To Gain Visibility In AI Search via @sejournal, @_kevinrowe

Just like with SEO, it can be tempting to use clever hacks to optimize for AI search.

But the problem with hacks is that, as soon as they’re discovered, changes will be made that make those hacks ineffective.

Consider the Rank Or Go Home Challenge, where Kyle Roof managed to get his website a top ranking for the string “Rhinoplasty Plano,” despite 98% of the site being “lorem ipsum” text.

Within 24 hours of Google hearing about this, the site was de-indexed.

The same holds true for AI search, but here, the system is changing at a breakneck pace. What works today may well not work a month from now.

Understanding GEO

Generative Engine Optimization (GEO) is the emerging field of optimization for AI search. This includes optimizing to appear in Google’s AI Overviews, Gemini, ChatGPT, Grok, and others.

This field is evolving rapidly, meaning that tactics used today may not work in a year.

Here are a few examples of how quickly generative AI evolves, according to a benchmark analysis by Ithy comparing OpenAI’s o1 and o3 models.

  • Mathematical reasoning: AIME 2024 benchmark accuracy rose from 83.3% to 96.7%, a 13.4-percentage-point improvement.
  • Scientific reasoning: Using the GPQA Diamond Benchmark, ChatGPT’s “o3 scored 87.7% accuracy compared to o1’s 78.0%, demonstrating a stronger capacity to handle complex, PhD-level science questions with greater precision and depth.”
  • Coding: ChatGPT has significantly improved from o1 to o3, with o3 “achieving a 71.7% accuracy rate, a significant increase from the o1 model’s 48.9%.”

This means that, in the long term, hacking the system simply won’t be cost-effective. Any hack you uncover will have a very limited shelf life.

Instead, we should turn to a tried and true tactic from SEO: aligning consensus.

What Is Consensus, And How Do You Align With It?

Put simply, consensus is when a variety of high-quality sources align on a topic.

For example, if you ask Google if the earth is round or flat, the resulting snippet will tell you it is round because the vast majority of high-quality sources agree on this fact.

Screenshot from search for [is the earth round or flat], Google, February 2025

The highest-ranking results will be sites that agree with this consensus, while results that don’t align rank poorly.

AI search works in much the same way. It can identify the general consensus on a topic and use this consensus to decide which information is most relevant to a user’s search intent.

Building Consensus Through PR

So, then, building consensus is key for GEO. But how can you help build consensus?

The answer is through the use of experts.

How Experts Build Consensus

Let’s take an example from Mark Cuban, a financial expert and Florida resident.

When discussing the topic of the housing crisis in Florida on the platform Bluesky, he stated that a major issue is the affordability of home insurance.

This was then cited in a variety of articles on sites like GoBankingRates.

Further articles may then also cite this article, perhaps bringing in other experts to comment.

Soon, a consensus forms: Florida’s housing crisis is due at least in part to homeowners’ insurance rates. And if we ask Google this question, the AI snippet reflects just that.

Screenshot from search for [what are the factors in florida’s housing crisis], Google, February 2025

Even a single expert’s opinion can have a major impact on consensus, especially for smaller, more niche topics.

Positioning Expertise To Build Consensus

The important thing to keep in mind is consensus cannot be faked.

Building consensus requires convincing people. And to convince people, you’ll need to establish your expertise and credibility, then get a conversation going around the topic.

In other words, you’ll need:

  • Credible expertise.
  • High-quality data or insights.
  • Enough coverage or references across the web to establish that your viewpoint is widely accepted (or at least seriously considered) by other experts.

Say you want to build consensus around the idea that the best way to pay off debts is to prioritize debts with the highest interest rates.

By publishing original research that shows this to be true, backed by the voice of an established expert, you can start a conversation on this topic.

As further blog posts and online conversations reference your data, your position will gain greater reach. Then, more experts may comment on it and agree with it, over time building that consensus on the topic.

Then, when somebody goes to research the topic with AI search, the AI will find that consensus you’ve built.

Consider the case of blue light.

In 2015, the Journal of Clinical Sleep Medicine published a study:

“Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness.”

This study showed that exposure to blue light suppresses melatonin production, leading to delayed sleep onset and reduced sleep quality.

This research was then cited by experts and major outlets before gaining traction on social media and blogs.

Now, if you ask an AI search engine “Does blue light affect sleep?”, you’ll be given this information (that blue light affects sleep), and it will cite this original research and the websites and experts who wrote about it.

Screenshot by author from Perplexity, February 2025

Collaboration For Building Consensus

Of course, you don’t have to simply wait for a conversation to find its way to other experts. By collaborating directly, you can amplify the establishment of a consensus.

Let’s take the same example as before. But, this time, we make a small change: Instead of authoring studies or guest posts solo, we do so in collaboration with another established expert.

In doing so, you can essentially “hijack” your collaborator’s authority and audience:

  • Their followers will become aware of your research.
  • Their peers and fellow experts are more likely to consider your findings.
  • Media outlets also view collaborations as more credible than a single, lesser-known source, further boosting your reach.

Take the example of David Grossman’s article “The Cost of Poor Communications.”

Inclusion in an article published on Provoke Media’s The Holmes Report allowed Grossman to present his ideas to a wider audience.

This information went on to be referenced in a variety of other articles, including by sites such as Harvard Business School.

Then, over time, these ideas came to form part of the consensus on business communication, appearing in AI search results on platforms such as Perplexity.

Screenshot by author from Perplexity, February 2025

Even With The Best Methods, Building Consensus Is A Process

This is, of course, a simplification of the process.

Authoring a study, or collaborating with another expert, is no guarantee that you will build a new consensus.

Your study or collaboration may simply go unnoticed, even if you do everything right.

Or, it may go against the existing consensus. In this case, you’ll face a serious uphill battle to try and change that consensus.

Even if you are successful, it may take some time for a new consensus to emerge.

These things don’t happen overnight. But that doesn’t mean you should give up; every time you publish a study or collaborate with another expert, your reach and authority grow.

And as you continue the conversation through further studies and guest posts, a new consensus can begin to form.

At the heart of that new consensus are your ideas and expertise.

In turn, when providing sources for its search results, AI search will surface your ideas and drive traffic to your website.

Long-Term Success Over Short-Term Hacks

While new hacks are always being found, tried and tested methods will always be the better choice.

Instead of chasing an ever-moving target trying to outsmart constantly evolving generative AI tools, your time is much better spent building consensus around a topic.

This means:

  • Establishing expertise through studies and guest posts.
  • Collaborating with other experts to boost your reach and authority.
  • Continuing this process of authority building over time.

Building consensus takes time, but the payoff is lasting influence, which sees AI search surfacing your content and treating you as a trusted source of information.


Featured Image: mentalmind/Shutterstock

LLMs That Code: Why Marketers Should Care (Even If You’ve Never Touched An IDE) via @sejournal, @siliconvallaeys

Large language models (LLMs) like ChatGPT and Claude are best known for their writing abilities: drafting ad copy, summarizing reports, and helping brainstorm blog content.

However, most marketers still know little about one of their most powerful features: They can write actual code.

First, it talked. Then, it wrote. Now, it builds.

We’re not just talking about basic snippets. These models can generate full scripts, fully functional browser extensions, small web apps, and automations, all from plain English prompts – or any other language you’re most comfortable with.

For marketers and PPC pros, that unlocks a new level of efficiency. You no longer need to know how to program to start benefiting from technical solutions to everyday problems.

In the past, I might have only written a script if it saved me hours of manual work every month.

Now, with LLMs, it’s so quick to build something that I’ll even create one-off tools for tasks that would’ve only taken me an hour or two. That’s how low the barrier has become.

In this article, I’ll walk through the types of problems you can solve with LLM-powered coding, the browser-based tools that make it accessible, and real examples of how marketers are already using this to move faster.

The Real Power: Turning Instructions Into Code

LLMs have ingested much of the world’s knowledge, and that includes scripts and computer code. That means if you can explain a process clearly, they can usually turn that explanation into working code.

Because they’re multi-modal, they can even understand a diagram you’ve whiteboarded at the office and turn that into code, too.

This makes them incredibly valuable for non-programmers who know what they want but don’t know how to build it.

Think of the marketer who understands how data should be formatted for a monthly client report but dreads the repetitive steps of reformatting CSV files. Or the account manager who wants to automate their process of eliminating underperforming search terms but doesn’t have a dev team to help them.

With LLMs, these tasks can be described in a few sentences, and AI can generate Python scripts, JavaScript tools, or even complete web apps that solve the problem.

This isn’t just about saving time. It’s about unlocking experimentation and removing the friction that keeps good ideas from scaling through technology.

What Problems Can LLM-Generated Code Solve?

Let’s break down the kinds of problems where LLM coding can shine. These aren’t hypothetical; they’re pulled from workflows that agencies and in-house teams run daily.

1. Automating Repetitive, Time-Consuming Tasks

You probably do at least one of these on a somewhat regular basis:

  • Reformatting exported Sheets or CSV files.
  • Copying Google Ads data into slides for reporting.
  • Cleaning up GPT’s output before sharing it with a client.
  • Manually reviewing ad copy for brand compliance.

With the help of an LLM, each of these can be turned into a repeatable, automatable workflow. You describe the task, and the LLM builds the script that does it.

This is especially valuable for marketers who are tired of being “spreadsheet operators” instead of strategists. By turning routine tasks into one-click tools, you free up hours a week and reduce human error.
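
To make that concrete, here’s the kind of script an LLM might hand back for the first task on that list. This is a hedged sketch: the file names and column labels are hypothetical stand-ins for whatever your own export actually contains.

# Hypothetical LLM-generated cleanup for an exported CSV report.
# File names and column labels are illustrative, not from a real export.
import csv

with open("raw_export.csv", newline="", encoding="utf-8") as src, \
        open("clean_report.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["Campaign", "Clicks", "Cost"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Campaign": row["Campaign"].strip(),
            "Clicks": row["Clicks"].replace(",", ""),  # "1,234" -> "1234"
            "Cost": row["Cost"].replace("$", ""),      # "$12.34" -> "12.34"
        })

Describe your real columns and formatting rules in the prompt, and the model fills in this boilerplate for you.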

2. Trying Something Entirely New

Unlike the tasks above, which you know exactly how to do but hate how much time they take, there are also some projects you may not have tried because you do not know how.

For my team, that included a quiz to make blog content more engaging. For me, it involved building a browser extension to blur sensitive data on the screen.

These are ideal use cases for LLM-powered coding. They allow you to prototype and test ideas without needing a development team, and if you’re lucky enough to have one, you don’t need to wait for your project to get prioritized.

You can get feedback quickly, iterate faster, and build an entire proof of concept before involving engineering.

Marketing innovation often dies in the backlog. LLM coding makes it easier to try things on your own.

3. Google Ads Scripts

This is one of the most exciting areas for PPC pros. Google Ads scripts are powerful, but let’s face it, they’ve always had a learning curve. Now, LLMs can flatten that curve dramatically.

You can tell a model:

Write a Google Ads script that checks all active campaigns with “Mother’s Day” in the name. If the current date is within seven days of Mother’s Day, increase the daily budget of those campaigns by 20%. Include comments to explain each part of the code so I can understand what the script is doing.

It will return a fully functional script that you can paste directly into your account’s scripts section.

This lowers the barrier to entry for marketers who want to automate common PPC maintenance or build lightweight tools for managing large accounts.

You can go from idea to automation in minutes, no JavaScript experience required.
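
And if you’d rather generate the script programmatically than in a chat window, the same prompt can be sent through the OpenAI Python client. A minimal sketch, assuming the openai package is installed, an OPENAI_API_KEY environment variable is set, and a model such as GPT-4o:

# Minimal sketch: ask an LLM for a Google Ads script from Python.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompt = (
    "Write a Google Ads script that checks all active campaigns with "
    "\"Mother's Day\" in the name. If the current date is within seven days "
    "of Mother's Day, increase the daily budget of those campaigns by 20%. "
    "Include comments to explain each part of the code."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the generated Ads script

Paste the output into your account’s scripts section, read the comments, and preview it before letting it run.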

Tools That Make LLM Coding Accessible

I hope the idea of becoming more efficient through code sparks your interest, especially if you’ve ever found yourself repeating the same task week after week.

Whether you’re managing ad campaigns, cleaning data, or formatting content, the ability to automate even small pieces of your workflow can save hours and reduce errors.

Here’s the best part: You don’t need to be a developer to start.

You don’t even need to install anything, understand programming languages, or know how to set up a server. You definitely don’t need to open a complicated integrated development environment (IDE).

The tools I’m about to show you run entirely in your browser. They’re designed to help you go from idea to functional code with nothing more than a clear description of what you want to achieve.

If you’ve never written code before, this is exactly where you want to start.

Claude (Anthropic)

For marketers, Claude’s ability to write, test, and execute code right inside the interface is a real standout.

No setup is required, no installations, and no APIs to connect.

You describe what you need, Claude writes the code, and you see the results in real-time. This fast, feedback-driven loop makes it easier than ever to experiment and iterate without the usual technical friction.

The 200,000-token context window is another game-changer. You can paste your entire campaign structure, a long analytics report, or even full landing page copy, and Claude will process it all in one go.

It keeps track of every detail you’ve shared, so nothing gets lost as you build on your ideas.

There is a tradeoff, though. Claude currently runs code in a single-file execution environment. That’s fine for most marketing tasks, but for more complex, multi-file projects, it’s not as flexible as tools like Vercel’s V0.dev, which supports full project structures.

Still, for marketers building scrappy, high-impact tools fast, Claude handles a surprising amount.

Here’s what’s most exciting to me:

  • It can run JavaScript right in the browser, perfect for quick tasks like data filtering, simple visualizations, or interactive prototypes.
  • It translates technical concepts into plain, marketing-friendly language, so you’re never stuck decoding dev speak.
  • It surfaces insights from your data quickly, helping you spot trends and outliers that would otherwise go unnoticed.

One of the benefits of LLMs is that they can adapt to each user’s level.

If you’re not technical, it gives you just enough to feel empowered. If you are, it meets you there and helps you move even faster. Either way, it expands what’s possible without getting in your way.

Below is a view of Claude generating code based on a marketing-focused prompt, with both the prompt and working output visible in the interface.

Users can toggle to the code view if they prefer to see that instead of a preview of the tool.

Screenshot from Claude.ai, April 2025

V0.dev (Vercel)

As much as I’m excited about Claude, V0 takes it to a whole new level.

Vercel, the creator of Next.js, built V0.dev, which is designed to turn a plain-language description of what you need into working software.

Why it stands out:

  • Generates full React components, HTML, and CSS.
  • Lets you deploy working projects instantly.
  • Handles multi-file architecture (great for real apps).

Marketers can use V0.dev to build:

  • Text reformatting tools.
  • User interfaces (UI/UX).
  • Internal dashboards.
  • Fully working web apps.

It’s like having a front-end developer in your browser.

Here’s a screenshot of what I quickly tried building using V0.dev. I prompted it to create a simple tool for Search Engine Journal readers that takes a blog post and outputs key takeaways in bullet form.

V0.dev generated a clean, on-brand interface with just a single prompt, no coding required. It’s a great example of how fast you can go from idea to working prototype.

What’s especially cool is that you could even launch this tool so anyone can use it.

Screenshot from v0.dev, April 2025

When creating a tool that requires third-party integrations, V0 asks for the required API keys and credentials.

When building something that can’t be hosted online, like a Chrome extension, it explains how to install the files. In short, it helps anyone, regardless of ability, to create a working piece of software.

GPT-4o (OpenAI/ChatGPT)

GPT-4o is the LLM I’ve used the most for building ad scripts, as it was the first one to write an error-free piece of code. It’s also great for:

  • Creating data transformation scripts.
  • Debugging code.
  • Explaining errors.
  • Translating code from one language to another.

But, GPT is limited in that it can’t run the code it writes directly in the chat window. That means there is a lot of copy-and-pasting required to take the code, install it on a server, test it, and then iterate with GPT to debug it.

While I think GPT is awesome for writing code, if I need something quick and simple, I prefer Claude. If I want something more complex that I can debug inside the LLM, I’ll use V0.

Real-World Example Use Cases

Let’s go deeper into actual examples. These aren’t just ideas; these are projects you can ask an LLM to help build today.

Example 1: Chrome Extension To Blur Sensitive Text

The Problem:

I’m frequently taking screenshots of dashboards or search results but need to hide client names, numbers, or other sensitive data.

The LLM Solution:

I asked V0.dev to generate a Chrome extension that adds a blur effect to any numerical values on the page.

It generated all the files needed and explained how to install my custom extension in my Chrome browser. It returned:

  • The manifest.json file.
  • JavaScript to inject CSS.
  • Instructions to package and install the extension.
Screenshot from Optmyzr.com, April 2025

Why It Matters:

This isn’t something most marketers would ever think to build, but with a few prompts, you’ve created a privacy-preserving utility that saves you editing time and protects sensitive info.

Example 2: Web App To Reformat GPT Output

The Problem:

I use Deep Research from ChatGPT to generate research for my team or future blogs, but I don’t love how source references are formatted when I copy the research into a Google Doc.

The LLM Solution:

Use V0.dev to create a web app that:

  • Accepts pasted text.
  • Accepts a list of formatting changes I would normally make manually (e.g., finding links and putting them in superscript, as sketched below).
  • Displays the cleaned version instantly.
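
The heart of such a tool is plain string processing. As a rough Python illustration of just the superscript rule (not the V0-generated app itself, and with a deliberately simple URL regex):

# Rough sketch of one formatting rule: wrap bare URLs in <sup> tags.
# The regex is deliberately simple and won't match every URL form.
import re

URL_PATTERN = re.compile(r"https?://\S+")

def superscript_links(text: str) -> str:
    """Wrap each URL in the text in <sup>...</sup> tags."""
    return URL_PATTERN.sub(lambda m: f"<sup>{m.group(0)}</sup>", text)

sample = "See the study at https://example.com/research for details."
print(superscript_links(sample))
# -> See the study at <sup>https://example.com/research</sup> for details.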

Why It Matters:

It streamlines content workflows. Instead of editing output by hand, you get consistent formatting that meets your brand or platform guidelines.

Example 3: Interactive Blog Quiz Generator

The Problem:

We wanted to make our blogs more interactive, and my team had the idea to add quizzes.

The LLM Solution:

Use Claude to generate a quiz engine in HTML/CSS/JS. Feed it five to seven questions, then tie the result to different calls to action (“Download This Guide” or “Talk to an Expert”).

Why It Matters:

Interactive content improves time-on-page, reduces bounce, and personalizes the experience, without needing design or dev support.

Want to see it? Check out how AI is transforming our content about bidding strategies.

Screenshot from Optmyzr Blog, April 2025

Conclusion: Marketers Can Now Build What They Need With AI

Writing utility software is easier than ever before.

For marketers, the question used to be “What tools should I use?” Now, it might be: “What tools should I create?”

If you’ve ever been bottlenecked by engineering resources, or if your “wouldn’t it be cool if…” idea has sat in a notebook for months, this is your chance.

You don’t need an IDE. You don’t need to understand loops or classes. You just need a problem to solve, a clear description, and the right LLM at your side.


Featured Image: Thantaree/Shutterstock