LLMs That Code: Why Marketers Should Care (Even If You’ve Never Touched An IDE) via @sejournal, @siliconvallaeys

Large language models (LLMs) like ChatGPT and Claude are best known for their writing abilities: drafting ad copy, summarizing reports, and helping brainstorm blog content.

However, most marketers still know little about one of their most powerful features: They can write actual code.

First, it talked. Then, it wrote. Now, it builds.

We’re not just talking about basic snippets. These models can generate full scripts, fully functional browser extensions, small web apps, and automations, all from plain English prompts – or any other language you’re most comfortable with.

For marketers and PPC pros, that unlocks a new level of efficiency. You no longer need to know how to program to start benefiting from technical solutions to everyday problems.

In the past, I might have only written a script if it saved me hours of manual work every month.

Now, with LLMs, it’s so quick to build something that I’ll even create one-off tools for tasks that would’ve only taken me an hour or two. That’s how low the barrier has become.

In this article, I’ll walk through the types of problems you can solve with LLM-powered coding, the browser-based tools that make it accessible, and real examples of how marketers are already using this to move faster.

The Real Power: Turning Instructions Into Code

LLMs have ingested much of the world’s knowledge, and that includes scripts and computer code. That means if you can explain a process clearly, they can usually turn that explanation into working code.

Because they’re multi-modal, they can even understand a diagram you’ve whiteboarded at the office and turn that into code, too.

This makes them incredibly valuable for non-programmers who know what they want but don’t know how to build it.

Think of the marketer who understands how data should be formatted for a monthly client report but dreads the repetitive steps of reformatting CSV files. Or the account manager who wants to automate their process of eliminating underperforming search terms but doesn’t have a dev team to help them.

With LLMs, these tasks can be described in a few sentences, and AI can generate Python scripts, JavaScript tools, or even complete web apps that solve the problem.

This isn’t just about saving time. It’s about unlocking experimentation and removing the friction that keeps good ideas from scaling through technology.

What Problems Can LLM-Generated Code Solve?

Let’s break down the kinds of problems where LLM coding can shine. These aren’t hypothetical; they’re pulled from workflows that agencies and in-house teams run every day.

1. Automating Repetitive, Time-Consuming Tasks

You probably do at least one of these on a somewhat regular basis:

  • Reformatting exported Sheets or CSV files.
  • Copying Google Ads data into slides for reporting.
  • Cleaning up GPT’s output before sharing it with a client.
  • Manually reviewing ad copy for brand compliance.

With the help of an LLM, each of these can be turned into a repeatable, automatable workflow. You describe the task, and the LLM builds the script that does it.

This is especially valuable for marketers who are tired of being “spreadsheet operators” instead of strategists. By turning routine tasks into one-click tools, you free up hours a week and reduce human error.

2. Trying Something Entirely New

Unlike the tasks above, which you know exactly how to do but hate how much time they take, there are also some projects you may not have tried because you do not know how.

For my team, that included a quiz to make blog content more engaging. For me, it involved building a browser extension to blur sensitive data on the screen.

These are ideal use cases for LLM-powered coding. They allow you to prototype and test ideas without needing a development team, and if you’re lucky enough to have one, you don’t need to wait for your project to get prioritized.

You can get feedback quickly, iterate faster, and build an entire proof of concept before involving engineering.

Marketing innovation often dies in the backlog. LLM coding makes it easier to try things on your own.

3. Google Ads Scripts

This is one of the most exciting areas for PPC pros. Google Ads scripts are powerful, but let’s face it, they’ve always had a learning curve. Now, LLMs can flatten that curve dramatically.

You can tell a model:

Write a Google Ads script that checks all active campaigns with “Mother’s Day” in the name. If the current date is within seven days of Mother’s Day, increase the daily budget of those campaigns by 20%. Include comments to explain each part of the code so I can understand what the script is doing.

It will return a fully functional script that you can paste directly into your account’s scripts section.
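At its heart, the returned script wraps Google Ads API calls around two small pieces of logic: a date check and a budget multiplier. Here is a sketch of just that core logic in plain JavaScript; the surrounding AdsApp calls that select campaigns by name and apply the new budget are omitted, and the date assumes Mother’s Day 2025 (May 11 in the US):

```javascript
// Is `today` on or within `days` days before the target date?
function isWithinDaysOf(today, target, days) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const diff = (target - today) / msPerDay;
  return diff >= 0 && diff <= days;
}

// Apply a percentage increase to a daily budget, rounded to cents.
function boostedBudget(amount, pct) {
  return Math.round(amount * (1 + pct / 100) * 100) / 100;
}

const mothersDay = new Date("2025-05-11");
console.log(isWithinDaysOf(new Date("2025-05-06"), mothersDay, 7)); // true
console.log(boostedBudget(50, 20)); // 60
```

Asking the LLM to include comments, as the prompt above does, is what lets you verify logic like this before letting it touch real budgets.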

This lowers the barrier to entry for marketers who want to automate common PPC maintenance or build lightweight tools for managing large accounts.

You can go from idea to automation in minutes, no JavaScript experience required.

Tools That Make LLM Coding Accessible

I hope the idea of becoming more efficient through code sparks your interest, especially if you’ve ever found yourself repeating the same task week after week.

Whether you’re managing ad campaigns, cleaning data, or formatting content, the ability to automate even small pieces of your workflow can save hours and reduce errors.

Here’s the best part: You don’t need to be a developer to start.

You don’t even need to install anything, understand programming languages, or know how to set up a server. You definitely don’t need to open a complicated integrated development environment (IDE).

The tools I’m about to show you run entirely in your browser. They’re designed to help you go from idea to functional code with nothing more than a clear description of what you want to achieve.

If you’ve never written code before, this is exactly where you want to start.

Claude (Anthropic)

For marketers, Claude’s ability to write, test, and execute code right inside the interface is a real standout.

No setup is required, no installations, and no APIs to connect.

You describe what you need, Claude writes the code, and you see the results in real time. This fast, feedback-driven loop makes it easier than ever to experiment and iterate without the usual technical friction.

The 200,000-token context window is another game-changer. You can paste your entire campaign structure, a long analytics report, or even full landing page copy, and Claude will process it all in one go.

It keeps track of every detail you’ve shared, so nothing gets lost as you build on your ideas.

There is a tradeoff, though. Claude currently runs code in a single-file execution environment. That’s fine for most marketing tasks, but for more complex, multi-file projects, it’s not as flexible as tools like Vercel’s V0.dev, which supports full project structures.

Still, for marketers building scrappy, high-impact tools fast, Claude handles a surprising amount.

Here’s what’s most exciting to me:

  • It can run JavaScript right in the browser, perfect for quick tasks like data filtering, simple visualizations, or interactive prototypes.
  • It translates technical concepts into plain, marketing-friendly language, so you’re never stuck decoding dev speak.
  • It surfaces insights from your data quickly, helping you spot trends and outliers that would otherwise go unnoticed.

One of the benefits of LLMs is that they can adapt to each user’s level.

If you’re not technical, it gives you just enough to feel empowered. If you are, it meets you there and helps you move even faster. Either way, it expands what’s possible without getting in your way.

Below is a view of Claude generating code based on a marketing-focused prompt, with both the prompt and working output visible in the interface.

Users can toggle to the code view if they prefer to see that instead of a preview of the tool.

Screenshot from Claude.ai, April 2025

V0.dev (Vercel)

As much as I’m excited about Claude, V0 takes it to a whole new level.

Vercel, the creator of Next.js, built V0.dev, which is designed to turn a description of what you need into working software.

Why it stands out:

  • Generates full React components, HTML, and CSS.
  • Lets you deploy working projects instantly.
  • Handles multi-file architecture (great for real apps).

Marketers can use V0.dev to build:

  • Text reformatting tools.
  • User interfaces (UI/UX).
  • Internal dashboards.
  • Fully working web apps.

It’s like having a front-end developer in your browser.

Here’s a screenshot of what I quickly tried building using V0.dev. I prompted it to create a simple tool for Search Engine Journal readers that takes a blog post and outputs key takeaways in bullet form.

V0.dev generated a clean, on-brand interface with just a single prompt, no coding required. It’s a great example of how fast you can go from idea to working prototype.

What’s especially cool is that you could even launch this tool so anyone can use it.

Screenshot from v0.dev, April 2025

When creating a tool that requires third-party integrations, V0 asks for the required API keys and credentials.

When building something that can’t be hosted online, like a Chrome extension, it explains how to install the files. In short, it helps anyone, regardless of ability, to create a working piece of software.

GPT-4o (OpenAI/ChatGPT)

GPT-4o is the LLM I’ve used the most for building ad scripts, as it was the first one to write an error-free piece of code. It’s also great for:

  • Creating data transformation scripts.
  • Debugging code.
  • Explaining errors.
  • Translating code from one language to another.

But GPT is limited in that it can’t run the code it writes directly in the chat window. That means a lot of copy-and-pasting is required to take the code, install it on a server, test it, and then iterate with GPT to debug it.

While I think GPT is awesome for writing code, if I need something quick and simple, I prefer Claude. If I want something more complex that I can debug inside the LLM, I’ll use V0.

Real-World Example Use Cases

Let’s go deeper into actual examples. These aren’t just ideas; these are projects you can ask an LLM to help build today.

Example 1: Chrome Extension To Blur Sensitive Text

The Problem:

I’m frequently taking screenshots of dashboards or search results but need to hide client names, numbers, or other sensitive data.

The LLM Solution:

I asked V0.dev to generate a Chrome extension that adds a blur effect to any numerical values on the page.

It generated all the files needed and explained how to install the custom extension in my Chrome browser. It returned:

  • The manifest.json file.
  • JavaScript to inject CSS.
  • Instructions to package and install the extension.
Screenshot from Optmyzr.com, April 2025
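The content-script half of such an extension can be surprisingly small. Below is a simplified sketch of the blurring logic as a pure JavaScript function; a real extension would walk the page’s text nodes rather than rewriting raw HTML, and the 4px blur radius is an arbitrary choice:

```javascript
// Wrap every number in an HTML string in a span that blurs it with CSS.
// Matches single digits and longer figures like "1,234.56".
function blurNumbers(html) {
  return html.replace(
    /\d[\d,.]*\d|\d/g,
    (match) => `<span style="filter: blur(4px)">${match}</span>`
  );
}

console.log(blurNumbers("Spend: $1,234.56 across 3 campaigns"));
```

In the actual extension, V0.dev pairs logic like this with a `manifest.json` that registers it as a content script.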

Why It Matters:

This isn’t something most marketers would ever think to build, but with a few prompts, you’ve created a privacy-preserving utility that saves you editing time and protects sensitive info.

Example 2: Web App To Reformat GPT Output

The Problem:

I use Deep Research from ChatGPT to generate research for my team or future blogs, but I don’t love how source references are formatted when I copy the research into a Google Doc.

The LLM Solution:

Use V0.dev to create a web app that:

  • Accepts pasted text.
  • Accepts a list of formatting changes I would normally make manually (e.g., finding links and putting them in superscript).
  • Displays the cleaned version instantly.
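One of those formatting rules, converting links into superscript references, can be expressed as a single replace call. Here is a JavaScript sketch, assuming the pasted text uses markdown-style links; the “source” label is an arbitrary choice:

```javascript
// Convert markdown links into plain text plus a superscripted reference:
// "[text](url)" -> 'text<sup><a href="url">source</a></sup>'
function superscriptLinks(text) {
  return text.replace(
    /\[([^\]]+)\]\(([^)]+)\)/g,
    (match, label, url) => `${label}<sup><a href="${url}">source</a></sup>`
  );
}

console.log(superscriptLinks("See [the study](https://example.com) for details."));
```

Each manual formatting habit becomes one rule like this, and the web app simply applies them all to whatever you paste in.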

Why It Matters:

It streamlines content workflows. Instead of editing output by hand, you get consistent formatting that meets your brand or platform guidelines.

Example 3: Interactive Blog Quiz Generator

The Problem:

We wanted to make our blogs more interactive, and my team had the idea to add quizzes.

The LLM Solution:

Use Claude to generate a quiz engine in HTML/CSS/JS. Feed it five to seven questions, then tie the result to different calls to action (“Download This Guide” or “Talk to an Expert”).
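The logic behind such a quiz engine is simple: score the answers, then map the score to a call to action. Here is a minimal JavaScript sketch of that mapping; the answer keys, the 60% threshold, and the CTA labels are all placeholders:

```javascript
// Score the user's answers against the key and pick a call to action.
function pickCta(answers, answerKey, ctas) {
  const score = answers.filter((a, i) => a === answerKey[i]).length;
  return score / answerKey.length >= 0.6 ? ctas.high : ctas.low;
}

const ctas = { high: "Talk to an Expert", low: "Download This Guide" };
console.log(pickCta(["a", "c", "b"], ["a", "c", "d"], ctas)); // "Talk to an Expert"
```

Claude wraps logic like this in the HTML/CSS that renders the questions and the final CTA button.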

Why It Matters:

Interactive content improves time-on-page, reduces bounce, and personalizes the experience, without needing design or dev support.

Want to see it? Check out how AI is transforming our content about bidding strategies.

Screenshot from Optmyzr Blog, April 2025

Conclusion: Marketers Can Now Build What They Need With AI

Writing utility software is easier than it’s ever been before.

For marketers, the question used to be “What tools should I use?” Now, it might be: “What tools should I create?”

If you’ve ever been bottlenecked by engineering resources, or if your “wouldn’t it be cool if…” idea has sat in a notebook for months, this is your chance.

You don’t need an IDE. You don’t need to understand loops or classes. You just need a problem to solve, a clear description, and the right LLM at your side.

Featured Image: Thantaree/Shutterstock

SEOFOMO Survey Shows How Ecommerce SEOs Use AI In 2025 via @sejournal, @martinibuster

Aleyda Solis’ SEOFOMO published a survey of ecommerce owners and SEOs. The responses reflect a wide range of AI uses, from popular SEO tactics to novel ways of increasing productivity, but they also reveal that a significant number of respondents have yet to fully adopt the technology because they are still figuring out how it best fits into their workflow. Very few of the survey respondents said they were not considering AI.

The survey responses showed that there are five popular category uses for AI:

  1. Content
  2. Analysis & Research
  3. Technical SEO
  4. User Experience & Conversion Rate Optimization
  5. Generate Client Documentation, Education & Learning

Content Creation

The survey respondents used AI for important tasks like product listings and descriptions, as well as for scaling meta descriptions, titles, and alt text. Other uses include creating content outlines, checking grammar, and similar assistive tasks.

But some also used it for blog content, landing pages, and generating FAQ content. The survey gives no details on how extensively AI was used for blog content, but a case could be made against fully generating main content with AI (if that’s how some people are using it), given Google’s recent cautionary guidance about extensive use of AI for main content.

Google’s Danny Sullivan, at the recent Search Central NYC event, cautioned against low-effort content lacking in originality.

The other reported uses of AI were grammar and clarity checking, which are excellent ways to use AI. Even so, care should be taken, because AI has a style that can get injected into the content, even for something as simple as a grammar check.

Another interesting use of AI is revising content so that it matches a company’s “brand voice,” which means checking word choices, tone, and even sentence structure.

Lastly, the ecommerce survey respondents reported using AI for brainstorming content ideas which is another excellent way to use AI.

Analysis & Research

The part about keyword analysis is interesting because the report lists keyword research and clustering as one of the uses. Clustering keywords by similarity is a good practice because it’s somewhat repetitive and spammy to write one page of content for each related keyword phrase when one strong page representing the entire topic is enough.

Focusing on keywords for SEO has been around longer than Google, and Google itself has evolved from using keywords as a way to understand content to also incorporating an understanding of queries and content as topics. This is seen in the fact that Google uses core topicality systems as part of its ranking algorithm. So it’s somewhat curious that topicality research wasn’t mentioned as one of the uses, unless keyword clustering is considered part of that. Nevertheless, data analysis is a great use of AI.

Technical SEO

Technical SEO is a fantastic application of AI because it’s all about automating repetitive SEO tasks, but also about assisting with decisions on what to do. There are lots of ways to do this, including uploading a set of guidelines and/or charts and asking AI to analyze for specific things. Apps like Screaming Frog allow integration with OpenAI (see the tutorial at https://www.screamingfrog.co.uk/seo-spider/tutorials/how-to-crawl-with-chatgpt/), so it’s leaving money and time on the table not to investigate all the ways AI can integrate with your tools, as well as simply asking it to analyze data.

For example, one of the uses reported in the survey was for generating an internal linking strategy.

User Experience (UX) & Conversion Rate Optimization (CRO)

Another way ecommerce store owners are using AI is for improving the user experience and CRO.

The survey reports:

  • “AI-powered product recommendations
  • Chatbots for product discovery or customer support
  • CRO/UX audits based on user behavior”

Training & Education

Lastly, an increasing number of ecommerce respondents reported using AI to generate training documentation for internal use and to create customer documentation.

The survey reports:

“Less common but growing:

  • Learning how AI tools function
  • Using AI to create training material or SEO learning resources”

Not Using AI Or Limited Use

What was surprising is the number of SEOs who are not using AI in a meaningful way. 31% of respondents said they are not using AI but are planning to, 3% were digging in their heels and flatly refusing to use AI in any way, and an additional 4% answered that they weren’t sure.

That makes a full 37% that aren’t using AI in any meaningful way. Looked at another way, 31% of respondents were getting ready to adopt AI into their workflow. Many managed WordPress hosting companies are integrating AI into their WordPress builder workflow as are some WordPress builders. AI can be integrated via WordPress SEO plugins as well. Wix has already integrated AI into their customer workflow through their proprietary Astro chatbot and companies like Shopify are also planning meaningful and useful ways to integrate AI.

The SEOFOMO survey makes it clear that AI is a significant part of the SEO and ecommerce workflow. Those who don’t use AI shouldn’t feel like they have to. But if you’re unsure how to integrate it, one way to think about it is to ask: what kinds of tasks would you hand off to an intern? Those are the kinds of tasks that AI excels at, enabling one worker to produce at a level five times greater than they could without using AI.

Read the SEOFOMO in ecommerce survey results:

The SEOFOMO Ecommerce SEO in 2025 Survey Results

Featured Image by Shutterstock/tete_escape

Google Says LLMs.Txt Comparable To Keywords Meta Tag via @sejournal, @martinibuster

Google’s John Mueller answered a question about LLMs.txt, a proposed standard for showing website content to AI agents and crawlers. He downplayed its usefulness, comparing it to the useless keywords meta tag, and confirmed the experience of others who have used it.

LLMS.txt

LLMs.txt has been compared to a robots.txt for large language models, but that’s 100% incorrect. The main purpose of robots.txt is to control how bots crawl a website. The LLMs.txt proposal is not about controlling bots; that would be superfluous because a standard for that already exists in robots.txt.

The proposal for LLMs.txt is generally about showing content to LLMs via a text file in markdown format so that they can consume just the main content of a web page, completely devoid of advertising and site navigation. Markdown is a human- and machine-readable format that indicates headings with the pound sign (#) and lists with the minus sign (-). LLMs.txt adds a few other conventions along those lines, and that’s all it’s about.
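To illustrate, a hypothetical LLMs.txt file following the proposal might look like this (the site name, URLs, and descriptions are invented for the example):

```markdown
# Example Store

> An ecommerce site selling handmade ceramics, with guides on care and firing.

## Guides

- [Ceramic care guide](https://example.com/care.md): How to clean and store ceramics
- [Firing basics](https://example.com/firing.md): An introduction to kiln firing

## Products

- [Catalog](https://example.com/catalog.md): Current product lineup with prices
```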

What LLMs.txt is (and is not):

  • LLMs.txt is not a way to control AI bots.
  • LLMs.txt is a way to show the main content to AI bots.
  • LLMs.txt is just a proposal and not a widely used and accepted standard.

That last part is important because it relates to what Google’s John Mueller said:

LLMs.txt Is Comparable To Keywords Meta Tag

Someone started a discussion on Reddit about LLMs.txt to ask if anyone else shared their experience that the AI bots were not checking their LLMs.txt files.

They wrote:

“I’ve submitted to my blog’s root an LLM.txt file earlier this month, but I can’t see any impact yet on my crawl logs. Just curious to know if anyone had a tracking system in place, or just if you picked up on anything going on following the implementation.

If you haven’t implemented it yet, I am curious to hear your thoughts on that.”

One person in that discussion shared that they host over 20,000 domains and that no AI agents or bots are downloading the LLMs.txt files, only niche bots like one from BuiltWith is grabbing those files.

The commenter wrote:

“Currently host about 20k domains. Can confirm that no bots are really grabbing these apart from some niche user agents…”

John Mueller answered:

“AFAIK none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it). To me, it’s comparable to the keywords meta tag – this is what a site-owner claims their site is about … (Is the site really like that? well, you can check it. At that point, why not just check the site directly?)”

He’s right, none of the major AI services, Anthropic, OpenAI, and Google, have announced support for the proposed LLMs.txt standard. So if none of them are actually using it then what’s the point?

Mueller also raises the point that an LLMs.txt file is redundant: why use the markdown file if the original content (and structured data) has already been downloaded? A bot that uses LLMs.txt would still have to check the site’s other content to make sure it’s not spam, so why bother?

Lastly, what’s to stop a publisher or SEO from showing one set of content in LLMs.txt to spam AI agents and another set of content for users and search engines? It’s too easy to generate spam this way, essentially cloaking for LLMs.

In that regard, it is very similar to the keywords meta tag, which no search engine uses because it would be too sketchy to trust a site’s claim that it’s really about those keywords. Search engines are better and more sophisticated nowadays at parsing content to understand what it’s about.

Read the Reddit discussion here:

LLM.txt – where are we at?

Featured Image by Shutterstock/Jemastock

AI Overviews: We Reverse-Engineered Them So You Don’t Have To [+ What You Need To Do Next]

This post was sponsored by DAC. The opinions expressed in this article are the sponsor’s own. Authors: Dan Lauer & Michael Goodman

Is the classic funnel model (TOFU-MOFU-BOFU) still relevant in an AI-driven SERP?

What kinds of queries trigger Google’s AI Overviews?

How can you structure content so that AI pulls your site into the response?

Do you really need to change your SEO strategy?

For years, SEO teams followed a familiar SEO playbook:

  1. Optimize upper-funnel content to capture awareness,
  2. mid-funnel content to drive consideration,
  3. lower-funnel content to convert.

One page, one keyword, one intent.

But with the rise of ChatGPT, Perplexity, Copilot, Gemini, and now Google’s AI Mode, that linear model is increasingly outdated.

So, how do you move forward and keep your visibility high in modern search engine results pages (SERPs)?

We’ve reverse-engineered AI Overviews, so you don’t have to. Let’s dive in.

What We’ve Discovered Through Reverse Engineering Google’s AI Overviews (AIO)

From what we’re seeing across client industries and in how AI-driven results behave, the traditional funnel model – the idea of users moving cleanly from awareness to consideration to conversion – feels increasingly out of step with how people actually search.

How Today’s Search Users Actually Search

Today’s users jump between channels, devices, and questions.

They skim, abandon, revisit, and decide faster than ever.

AI Overviews don’t follow a tidy funnel because most people don’t either.

They surface multiple types of information at once, not because it’s smarter SEO, but because it’s closer to how real decisions get made.

AIOs & AI Mode Aren’t Just Answering Queries – They’re Expanding Them

Traditionally, SEO strategy followed a structured framework. Take a travel-related topic, for example:

  • Informational (Upper-Funnel) – “How to plan a cruise?”
  • Commercial (Mid-Funnel) – “Best cruise lines for families”
  • Transactional (Lower-Funnel) – “Find Best Alaska Cruise Deals”

However, AI Overviews don’t stick to that structure.

Instead, they blend multiple layers of intent into a single, comprehensive response.

How AI Overviews Answer & Expand Search Queries

Let’s stay with the travel theme. A search for “Mediterranean cruise” might return an AI Overview that includes:

  • Best Time to go (Informational).
  • Booking Your Cruise (Commercial).
  • Cruise Lines (Navigational).

AI Mode Example for ‘Mediterranean Cruise’

What’s Happening Here?

In this case, Google isn’t just answering the query.

It anticipates what the user will want to know next, acting more like a digital concierge than a traditional search engine.

The AI Overview Test & Parameters

  • Source: Semrush & Google
  • Tested Data: 200 cruise-related informational queries

We started noticing this behavior showing up more often, so we wanted to see how common it actually is.

To get a clearer picture, we pulled 200 cruise-related informational queries from SEMrush and ran them through our custom-built AI SERP scraper. The goal was to see how often these queries triggered AI Overviews, and what kind of intent those Overviews covered.

The patterns were hard to miss:

  • 88% of those queries triggered an AI Overview.
  • More than half didn’t just answer the initial question.
  • 52% mixed in other layers of intent, like brand suggestions, booking options, or comparisons, right alongside the basic information someone might’ve been looking for.

Using a different query related to Mediterranean Cruises, the AIO response acts as a travel agent, guiding the user on topics like:

  • How to fly,
  • Destinations with region,
  • Cruise prices,
  • Cruise lines that sail to that destination.

While it’s an informational, non-brand search query, the AIO response is lower-funnel as well.

Again, fewer than half of the queries received a response matching only their original intent.

Here are some examples of queries that were identified as Informational and provided only the top-of-funnel response without driving the user further down the funnel.

The Verdict

Even when someone asks a simple, top-of-funnel question, AI is already steering them toward what to do next, whether that’s comparing prices, picking a provider, or booking a trip.

What Does This Mean for SEO Strategies Moving Forward?

If AI Overviews and AI Mode are blending intent types, content and SEO strategies need to catch up:

  1. It’s no longer enough to rank for high-volume informational keywords. If your content doesn’t address multiple layers of intent, AI will fill the gaps with someone else’s content.
  2. SEO teams need to analyze how AI handles their most important queries. What related questions is it pulling in? Are those answers coming from your site or your competitors?
  3. Think beyond keyword volume. Long-tail queries may have lower search traffic, but they often align better with AI-cited content. Structure your pages with clear headings, bullets, and concise, helpful language—that’s what AI models prefer to surface.

The Future of SEO in an AI World: Hybrid Intent Optimization

The fundamentals of technical and on-page SEO still matter. But if your content is still built around single keywords and single intent types, you’re likely to lose visibility as AI continues to reshape the SERP.

The brands that adapt to this shift by creating content that mirrors the blended, fast-moving behavior of actual users are the ones that will continue to own key moments across the funnel, even as the funnel itself evolves.

As AI transforms search behavior, it’s crucial to adapt your SEO strategies accordingly. At DAC, we specialize in aligning your content with the latest search trends to enhance visibility and engagement. Reach out to us today to future-proof your strategy with our award-winning TotalSERP approach and stay ahead in the evolving digital landscape.

Optimize Your SEO For AI Search, Now: https://www.dacgroup.com/

Image Credits

Featured Image: Image by DAC. Used with permission.

In-Post Image: Images by DAC. Used with permission.

Google AI Overview Study: 90% Of B2B Buyers Click On Citations via @sejournal, @MattGSouthern

Google’s AI Overviews have changed how search works. A TrustRadius report shows that 72% of B2B buyers see AI Overviews during research.

The study found something interesting: 90% of its respondents said they click on the cited sources to check information.

This finding differs from previous reports about declining click rates.

AI Overviews Are Affecting Search Patterns in Complex Ways

When AI summaries first appeared in search results, many publishers worried about “zero-click searches” reducing traffic. Many still see evidence of fewer clicks across different industries.

This research suggests B2B tech searches work differently. The study shows that while traffic patterns are changing, many users in their sample don’t fully trust AI content. They often check sources to verify what they read.

The report states:

“These overviews cite sources, and 90% of buyers surveyed said that they click through the sources cited in AI Overviews for fact-checking purposes. Buyers are clearly wanting to fact-check. They also want to consult with their peers, which we’ll get into later.”

If this extends beyond this study, being cited in these overviews might offer visibility for specific queries.

From Traffic Goals to Citation Considerations

While still optimizing for organic clicks, becoming a citation source for AI overviews is valuable.

The report notes:

“Vendors can fill the gap in these tools’ capabilities by providing buyers with content that answers their later-stage buying questions, including use case-specific content or detailed pricing information.”

This might mean creating clear, authoritative content that AI systems could cite. This applies especially to category-level searches where AI Overviews often appear.

The Ungated Content Advantage in AI Training

The research spotted a common mistake about how AI works. Some vendors think AI models can access their gated content (behind forms) for training.

They can’t. AI models generally only use publicly available content.

The report suggests:

“Vendors must find the right balance between gated and ungated content to maintain discoverability in the age of AI.”

This creates a challenge for B2B marketers who put valuable content behind forms. Making more quality information public could influence AI systems. You can still keep some premium content gated for lead generation.

Potential Implications For SEO Professionals

For search marketers, consider these points:

  • Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness seems even more critical for AI evaluation.
  • The research notes that “AI tools aren’t just training on vendor sites… Many AI Overviews cite third-party technology sites as sources.”
  • As organic traffic patterns change, “AI Overviews are reshaping brand discoverability” and possibly “increasing the use of paid search.”

Evolving SEO Success Metrics

Traditional SEO metrics like organic traffic still matter. But this research suggests we should also monitor other factors, like how often AI Overviews cite you and the quality of that traffic.

Kevin Indig is quoted in the report stating:

“The era of volume traffic is over… What’s going away are clicks from the super early stage of the buyer journey. But people will click through [to] visit sites eventually.”

He adds:

“I think we’ll see a lot less traffic, but the traffic that still arrives will be of higher quality.”

This offers search marketers one view on handling the changing landscape. Like with all significant changes, the best approach likely involves:

  • Testing different strategies
  • Measuring what works for your specific audience
  • Adapting as you learn more

This research doesn’t suggest AI is making SEO obsolete. Instead, it invites us to consider how SEO might change as search behaviors evolve.


Featured Image: PeopleImages.com – Yuri A/Shutterstock

How I Edit AI Content: A Workflow For The New Age Of Content Creation via @sejournal, @Kevin_Indig

In last week’s Memo, I explained how, just as digital DJing transformed music mixing, AI is revolutionizing how we create content by giving us instant access to diverse expressions and ideas.

Instead of fighting this change, writers should embrace AI as a starting point while focusing our energy on adding uniquely human elements that machines can’t replicate, like our personal experiences, moral judgment, and cultural understanding.

Last week, I identified seven distinctly human writing capabilities and 11 telltale signs of AI-generated content.

Today, I want to show you how I personally apply these insights in my editing process.

Image Credit: Lyna ™

Rather than seeing AI as my replacement, I advocate for thoughtful collaboration between human creativity and AI efficiency, much like how skilled DJs don’t just play songs but transform them through artistic mixing.

As someone who’s spent countless hours editing and tinkering with AI-generated drafts, I’ve noticed most people get stuck on grammar fixes while missing what truly makes writing connect with readers.

They overlook deeper considerations like:

  • Purposeful imperfection: Truly human writing isn’t perfectly polished. Natural quirks, occasional tangents, and varied sentence structures create authenticity that perfect grammar and flawless organization can’t replicate.
  • Emotional intelligence: AI content often lacks the intuitive emotional resonance that comes from lived experience. Editors frequently correct grammar but overlook opportunities to infuse genuine emotional depth.
  • Cultural context: Humans naturally reference shared cultural touchpoints and adapt their tone based on context. This awareness is difficult to edit into AI content without completely reframing passages.

In today’s Memo, I explain how to turn these edits into a recurring workflow for you or your team, so you can leverage the power of AI, accelerate content output, and drive more organic revenue.


Turning AI-Editing Into A Workflow

AI Editing Workflow (Image Credit: Kevin Indig)

I like to edit AI content in several passes, each with a specific focus:

  • Round 1: Structure.
  • Round 2: Language.
  • Round 3: Humanization.
  • Round 4: Polish.

Not every type of content needs the same amount of editing:

  • You can be more hands-off with supporting content on category or product pages, while editorial content for blogs or content hubs needs significantly more editing.
  • In the same way, evergreen topics need less editing while thought leadership needs a heavy editorial hand.

Round 1: Structure & Big-Picture Review

First, I read the entire draft like a skeptical reader would.

I’m looking for logical flow issues, redundant sections, and places where the AI went on unhelpful tangents.

This is about getting the bones right before polishing sentences.

Rather than nitpicking grammar, I ask: “Does this piece make sense? Would a human actually structure it this way?”

But the most important question is: “Does this piece meet user intent?” You need to ensure that the structure optimizes for speed-to-insights and helps users solve the implied problem behind their searches.

If sections feel out of order or disconnected, I rearrange them.

If the AI repeats the same point in multiple places (they love doing this), I consolidate.

Round 2: Humanize The Language & Flow

Next, I tackle that sterile AI tone that makes readers’ eyes glaze over.

I break up the robotic rhythm by:

  • Consciously varying sentence lengths (Watch this. I’m doing it right now. Different lengths create natural cadence.).
  • Replacing corporate-speak with how humans actually talk (“use” instead of “utilize,” “start” instead of “commence”).
  • Cutting those meaningless filler sentences AI loves to add (“It’s important to note that…” or “As we can see from the above…”).

For example, I’d transform this AI-written line:

Utilizing appropriate methodologies can facilitate enhanced engagement among target demographics.

Into this:

Use the right approach, and people will actually care about what you’re saying.

Round 3: Add The Human Value Only You Can Provide

Here’s where I earn my keep.

I infuse the piece with:

  • Opinions where appropriate.
  • Personal stories or examples.
  • Unique metaphors or cultural references.
  • Nuanced insights that come from my expertise.

One of the shifts we have to make – and that I made – is to be more deliberate about collecting stories and opinions that we can tell.

In his book “Storyworthy,” Matthew Dicks shares how he saves stories from everyday life in a spreadsheet. He calls this habit Homework For Life, and it’s the most effective way to collect relatable stories that you can use for your content. It’s also a way to slow down time:

As you begin to take stock of your days, find those moments — see them and record them — time will begin to slow down for you. The pace of your life will relax.

Round 4: Final Polish & Optimization

Finally, I do a last pass focusing on:

  • A punchy opening that hooks the reader.
  • Removing any lingering AI patterns (overly formal language, repetitive phrases).
  • Search optimization (user intent, headings, keywords, internal links) without sacrificing readability.
  • Fact-checking every statistic, date, name, and claim.
  • Adding calls to action or questions that engage readers.

I know I’ve succeeded when I read the final piece and genuinely forget that an AI was involved in the drafting process.

The ultimate test: “Would I be proud to put my name on this?”

AI Content Editing Checklist

Before you hit “Publish,” run through this checklist to make sure you’ve covered all bases:

  • User Intent: The content is organized logically and addresses the intended topic or keyword completely, without off-topic detours.
  • Tone & Voice: The writing sounds consistently human and aligns with brand voice (e.g., friendly, professional, witty, etc.).
  • Readability: Sentences and paragraphs are concise and easy to read. Jargon is explained or simplified. The formatting (headings, lists, etc.) makes it skimmable.
  • Repetition: No overly repetitive phrases or ideas. Redundant content is trimmed. The language is varied and interesting.
  • Accuracy: All facts, stats, names, and claims have been verified. Any errors are corrected. Sources are cited for important or non-obvious facts, lending credibility. There are no unsupported claims or outdated information.
  • Original Value: The content contains unique elements (experiences, insights, examples, opinions) that did not come from AI.
  • SEO: The primary keyword and relevant terms are included naturally. Title and headings are optimized and clear. Internal links to related content are added where relevant. External links to authoritative sources support the content.
  • Polish: The introduction is compelling. The content includes elements that engage the reader (questions, conversational bits) and a call to action. It’s free of typos and grammatical errors. All sentences flow well.

If you can check off all (or most) of these items, you’ve likely turned the AI draft into a high-quality piece that can confidently be published.

AI Content Editing = Remixing

We’ve come full circle.

Just as digital technology transformed DJing without eliminating the need for human creativity and curation, AI is reshaping writing while still requiring our uniquely human touch.

The irony I mentioned at the start of this article – trying to make AI content more human – becomes less ironic when we view AI as a collaborative tool rather than a replacement for human creativity.

Just as DJs evolved from vinyl crates to digital platforms without losing their artistic touch, writers are adapting to use AI while maintaining their unique value.

You can raise the chances of creating high-performing content that stands out by selecting the right models, inputs, and direction:

  • The newest models lead to exponentially better content than older (cheaper) ones. Don’t try to save money here.
  • Spend a lot of time getting style guides and examples right so the models work in the right lanes.
  • The more unique your data sources are, the more defensible your AI draft becomes.

The key insight is this: AI content editing is about enhancing the output with the irreplaceable human elements that make content truly engaging.

Whether that’s through adding lived experience, cultural understanding, emotional depth, or purposeful imperfection, our role is to be the bridge between AI’s computational efficiency and human connection.

The future belongs not to those who resist AI but to those who learn to dance with it, knowing exactly when to lead with their uniquely human perspective and when to follow the algorithmic beat.

Back in my DJ days, the best sets weren’t about the equipment I used but about the moments of unexpected connection I created.

The same holds true for writing in this new era.


Featured Image: Paulo Bobita/Search Engine Journal

OpenAI CEO Sam Altman Confirms Planning Open Source AI Model via @sejournal, @martinibuster

OpenAI CEO Sam Altman recently said the company plans to release an open source model more capable than any currently available. While he acknowledged the likelihood of it being used in ways some may not approve of, he emphasized that highly capable open systems have an important role to play. He described the shift as a response to greater collective understanding of AI risks, implying that the timing is right for OpenAI to re-engage with open source models.

The statement was in the context of a Live at TED2025 interview where the interviewer, Chris Anderson, asked Altman whether the Chinese open source model DeepSeek had “shaken” him up.

Screenshot Of Sam Altman At Live at TED2025

Altman responded by saying that OpenAI is preparing to release a powerful open source model that approaches the capabilities of the most advanced AI models available today.

Altman responded:

“I think open source has an important place. We actually just last night hosted our first like community session to kind of decide the parameters of our open source model and how we want to shape it.

We’re going to do a very powerful open source model. I think this is important. We’re going to do something near the frontier, I think better than any current open source model out there.
This will not be all… like, there will be people who use this in ways that some people in this room, maybe you or I, don’t like. But there is going to be an important place for open source models as part of the constellation here…”

Altman next admitted that they were slow to act on open source but now plan to contribute meaningfully to the movement.

He continued his answer:

“You know, I think we were late to act on that, but we’re going to do it really well.”

About thirty minutes later in the interview, Altman circled back to the topic of open source, lightheartedly remarking that in a year the interviewer might yell at him for open sourcing an AI model. But, he said, there are trade-offs in everything, and he feels OpenAI has done a good job of bringing AI technology into the world in a responsible way.

He explained:

“I do think it’s fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier, as we weren’t sure about the impact these systems were going to have and how to make them safe, that we acted with precaution.

I think a lot of your questions earlier would suggest at least some sympathy to the fact that we’ve operated that way. But now I think we have a better understanding as a world and it is time for us to put very capable open systems out into the world.

If you invite me back next year, you will probably yell at me for somebody who has misused these open source systems and say, why did you do that? That was bad. You know, you should have not gone back to your open roots. But you know, we’re not going to get… there’s trade-offs in everything we do. And we are one player in this one voice in this AI revolution trying to do the best we can and kind of steward this technology into the world in a responsible way.

I think we have over the last almost decade …we have mostly done the thing we’ve set out to do. We have a long way to go in front of us, our tactics will shift more in the future, but adherence to this sort of mission and what we’re trying to do I think, very strong.”

OpenAI’s Open Source Model

Sam Altman acknowledged OpenAI was “late to act” on open source but now aims to release a model “better than any current open source model.” His decision to release an open source AI model is significant because it will introduce additional competition and improvement to the open source side of AI technology.

OpenAI was established in 2015 as a non-profit organization but transitioned in 2019 to a closed source model over concerns about potential misuse. Altman used the word “steward” to describe OpenAI’s role in releasing AI technologies into the world, and the transition to a closed source system reflects that concern.

2025 is a vastly different world from 2019 because there are many highly capable open source models available, models such as DeepSeek among them. Was OpenAI’s hand forced by the popularity of DeepSeek? He didn’t say, framing the decision as an evolution from a position of responsible development.

Sam Altman’s remarks at the TED interview suggest that OpenAI’s new open source model will be powerful but not representative of their best model. Nevertheless, he affirmed that open source models have a place in the “constellation” of AI, with a legitimate role as a strategically important and technically capable part of the broader technological ecosystem.

Featured image screenshot by author

AI Search Study: Product Content Makes Up 70% Of Citations via @sejournal, @MattGSouthern

A new study tracking 768,000 citations across AI search engines shows that product-related content tops AI citations. It makes up 46% to 70% of all sources referenced.

This finding offers guidance on how marketers should approach content creation amid the growth of AI search.

The research, conducted over 12 weeks by XFunnel, looked at which types of content ChatGPT, Google (AI Overviews), and Perplexity most often cite when answering user questions.

Here’s what you need to know about the findings.

Product Content Visible Across Queries

The study shows AI platforms prefer product-focused content. Content with product specs, comparisons, “best of” lists, and vendor details consistently got the highest citation rates.

The study notes:

“This preference appears consistent with how AI engines handle factual or technical questions, using official pages that offer reliable specifications, FAQs, or how-to guides.”

Other content types struggled to get cited as often:

  • News and research articles each got only 5-16% of citations.
  • Affiliate content typically stayed below 10%.
  • User reviews (including forums and Q&A sites) ranged between 3-10%.
  • Blog content received just 3-6% of citations.
  • PR materials barely appeared, typically less than 2% of citations.

Citation Patterns Vary By Funnel Stage

AI platforms cite different content types depending on where customers are in their buying journey:

  • Top of funnel (unbranded): Product content led at 56%, with news and research each at 13-15%. This challenges the idea that early-stage content should focus mainly on education rather than products.
  • Middle of funnel (branded): Product citations dropped slightly to 46%. User reviews and affiliate content each rose to about 14%. This shows how AI engines include more outside opinions for comparison searches.
  • Bottom of funnel: Product content peaked at over 70% of citations for decision-stage queries. All other content types fell below 10%.

B2B vs. B2C Citation Differences

The study found big differences between business and consumer queries:

For B2B queries, product pages (especially from company websites) made up nearly 56% of citations. Affiliate content (13%) and user reviews (11%) followed.

For B2C queries, there was more variety. Product content dropped to about 35% of citations. Affiliate content (18%), user reviews (15%), and news (15%) all saw higher numbers.

What This Means For SEO

For SEO professionals and content creators, here’s what to take away from this study:

  • Adding detailed product information improves citation chances even for awareness-stage content.
  • Blogs, PR content, and educational materials are cited less often. You may need to change how you create these.
  • Check your content mix to make sure you have enough product-focused material at all funnel stages.
  • B2B marketers should prioritize solid product information on their own websites. B2C marketers need strategies that also encourage quality third-party reviews.

The study concludes:

“These observations suggest that large language models prioritize trustworthy, in-depth pages, especially for technical or final-stage information… factually robust, authoritative content remains at the heart of AI-generated citations.”

As AI transforms online searches, marketers who understand citation patterns can gain a competitive edge in visibility.


Featured Image: wenich_mit/Shutterstock

How To Get Brand Mentions In Generative AI via @sejournal, @AlliBerry3

There’s been a lot of talk recently about whether large language models (LLMs) are replacing a considerable amount of Google searches.

While Google is clearly still the market leader, with 14 billion searches per day worldwide, an estimated 37.5 million “searches” on ChatGPT represent an opportunity for your brand.

SEO professionals have years of experience testing optimization tactics on Google, but we’re still at the beginning stages of understanding how to get your brand cited in generative AI chatbots.

This is an exciting opportunity because it forces people to test and learn rapidly.

Through some testing and research, I’ve developed these initial tips and recommendations for my clients for generative AI optimization, regardless of whether it’s ChatGPT, Gemini, DeepSeek, or whatever generative AI chatbot comes next.

Use Generative AI Chatbots To Learn About Your Brand

First, use generative AI tools and start asking them questions about your brand to find the sources they use to answer your queries.

This will help you better understand what sources it’s been trained on and what pages on your site (or competitor sites) matter to its understanding of your brand.

Ask questions like:

  • Tell me about [company]/[product].
  • What are the best brands in [vertical] and why?
  • What are the pros and cons of [company]/[product]?

For example, when I ask ChatGPT-4o, “Tell me about HubSpot,” it gives me a nice summary with a lot of useful citations:

HubSpot Company Summary From ChatGPT (Screenshot from ChatGPT, April 2025)

From this, you can see that a legal page is cited multiple times in a company overview, so those pages are important. You can also see that information is being pulled from the HubSpot Knowledge Base.

Often, a company’s About page is the main citation, but clearly, HubSpot has built out a better legal section than its core pages.

If I were part of its organization, I would work to make the About page richer with information. Generally, your About page will do better at marketing the benefits of your products than legal pages.

When I then asked, “What are the best brands for small business marketing?”, it provided me with the following list:

Best Brands In Small Business Marketing From ChatGPT (Screenshot from ChatGPT, April 2025)
Best Brands In Small Business Marketing From ChatGPT, Continued (Screenshot from ChatGPT, April 2025)

ChatGPT-4o cites Wikipedia five different times and NerdWallet once for its affiliate coverage of small business marketing tools.

In searches I’ve done in other sectors, I’ve seen a lot more variety in sources listed – many in the affiliate review space. Here, however, NerdWallet is the only one.

When I asked ChatGPT-4o to dive into HubSpot further and show me the pros and cons of using it for small business marketing, it responded with:

HubSpot Pros From ChatGPT (Screenshot from ChatGPT, April 2025)
HubSpot Cons From ChatGPT (Screenshot from ChatGPT, April 2025)

I would then take this list and compare it against how I market the product to small business owners and potentially make tweaks accordingly.

And if there is validity to the cons listed and they are weaknesses we want to work on as an organization, I would start to build relationships with some of the sources listed.

That way, when there are company updates that impact some of what’s been written about the company, they can update their review pages, and it’ll impact what shows up in LLM queries.

I would also engage with the PR team about getting more coverage for the brand. Some of these citations are not particularly well-known or credible sites, so there is opportunity to get more authoritative sources to show up.

Ensure LLMs Can Crawl Your Website

This was true at the beginning of SEO, and is still true now.

Ensure you have a robots.txt file on your website’s server with directives telling crawlers which pages and sections they may access.

A lot of site owners initially rushed to block LLMs from crawling their sites when ChatGPT first launched, since the technology was an unknown (and was likely scraping content for model training).

If you want to be included in generative AI results now, though, you need to be where the AI crawlers can see you, so double-check that it is all configured correctly.
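For illustration, a robots.txt that explicitly allows the major AI crawlers might look like the sketch below. The user-agent tokens shown (GPTBot, OAI-SearchBot, PerplexityBot, ClaudeBot) are the names documented by those vendors at the time of writing, but verify them against each vendor's current documentation before relying on this, and adapt the disallowed paths to your own site:

```text
# Allow named AI crawlers to access public content
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Keep private sections off-limits for all crawlers
# (example path; substitute your own)
User-agent: *
Disallow: /account/
```

Remember that robots.txt rules are honored voluntarily by well-behaved crawlers; they are a signal, not an enforcement mechanism.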

Utilize Credible Citations And Quotes In Content

A group of researchers from several prominent universities conducted a study on AI search engine optimization and which content was most likely to surface in response to queries.

The tactic that worked the best, especially for factual queries, was adding citations from authoritative sources.

Using language like “according to [source],” adding a statistic with a credible citation, or quoting a known expert all increased the likelihood of showing up in generative AI chatbot responses, by as much as 25.1% for sites ranking in position 4 in Google and by 99.7% for sites ranking in position 5 in Google.

Similarly, adding statistics to content led to a 10% increase in visibility in LLMs if the site is in position 4 in Google and a 97% increase in visibility in LLMs if the site is ranked in position 5 in Google.

Mentions In Prominent Databases And Forums Help

There are lots of reasons to pay attention to prominent forums like Reddit and Quora or popular database sites like Wikipedia. Not only do they own lots of organic search real estate, but they are also obvious sources of LLM training data.

Reddit is now, smartly, selling data licensing to AI companies. Being a topic of discussion on these sites will only help your brand, and there’s no better time than now to become active on Reddit.

Engaging authentically on behalf of a brand (assuming you reveal your affiliation upfront) is more acceptable nowadays and is often welcomed to get clarification on user questions. It will likely benefit you on your generative AI optimization journey, too.

Develop An Exceptional About Page

If there is one area of your website you need to improve on, your About page may be it.

Generative AI models utilize these types of pages to understand what a company does and how credible the company is.

If you ask any of these platforms for information about your brand, you may be surprised by how heavily they rely on your About page to deliver the answer.

If your About page doesn’t describe your business and products well enough, you may see LLMs citing legal pages instead, like in the case of HubSpot mentioned earlier.

Focus On Long-Tail Keywords

Modern transformer-based LLMs are based on a statistical analysis of the co-occurrence of words.

If an entity is frequently mentioned in connection with another entity in the training data, there is a high probability of a semantic relationship between the two.

To optimize for this, it’s useful to rely on keyword research tools to better understand related keywords and concepts.

Search volume can still be an indicator of importance, but I would focus more on better understanding the relationships and relevance between concepts, ensuring the content is of high quality, and that user intent is matched.
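To make the co-occurrence idea concrete, here is a minimal Python sketch that counts how often pairs of entities appear in the same sentence. The sentences and entity names are invented for illustration; in practice you would run this over real pages, SERP content, or your own corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus; substitute real content in practice.
sentences = [
    "hubspot is a popular crm for small business marketing",
    "mailchimp and hubspot both offer email marketing automation",
    "small business owners often compare hubspot and mailchimp",
]

# Entities whose relationships we want to measure (illustrative names).
entities = {"hubspot", "mailchimp", "crm", "email"}

# Count how often each pair of entities co-occurs in the same sentence.
pair_counts = Counter()
for sentence in sentences:
    words = set(sentence.split())
    found = sorted(entities & words)
    for pair in combinations(found, 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(3))
```

The pairs that co-occur most often are the ones a statistical model is most likely to treat as semantically related, which is the relationship keyword research should aim to map.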

Stop Siloing SEO

We’re entering an era when websites get fewer and fewer clicks from organic search. For most brands, a multi-channel strategy has never been more imperative.

Not only does building brand recognition help fuel some of the other best practices here, but LLMs are also being trained on social media and marketing content.

Having an aligned, cross-channel strategy only strengthens your brand.

Plus, the more you can build a sales flywheel in your own content ecosystem, the less you need to panic about staying ahead of the ever-evolving world of SEO.

Track Your Referrals And Reverse-Engineer

Once you start seeing generative AI platforms driving traffic to your site, start paying attention to what pages bring that traffic in.

Then, visit that generative AI platform and try to recreate searches that could lead to your page as the answer.

You’ll start to learn what topics these platforms associate with your brand, and then you can find ways to double down on that type of content.
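As a sketch of what that reverse-engineering could look like in practice, a few lines of Python can surface which pages AI platforms send visitors to. The referrer hostnames and session data below are hypothetical placeholders; check your own analytics export for the exact referrer values these platforms send:

```python
from collections import Counter

# Hypothetical referrer hostnames for AI platforms; verify against
# the actual values appearing in your analytics data.
AI_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com")

# Stand-in for an analytics export of (referrer, landing_page) sessions.
sessions = [
    ("https://chatgpt.com/", "/pricing"),
    ("https://www.google.com/", "/blog/seo-tips"),
    ("https://perplexity.ai/", "/pricing"),
    ("https://chat.openai.com/", "/about"),
]

# Count landing pages reached from AI platforms; the top pages hint at
# which topics those platforms associate with your brand.
ai_pages = Counter(
    page
    for referrer, page in sessions
    if any(host in referrer for host in AI_REFERRERS)
)

print(ai_pages.most_common())
```

The pages at the top of this count are the ones worth recreating queries for inside each generative AI platform.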

Final Thoughts

While the tool companies are trying to catch up with how to help digital marketers optimize in this era of generative AI, we will have to be more reliant on ourselves to reverse-engineer what we’re seeing in the data and run our own experiments.


Featured Image: Visual Generation/Shutterstock

Marketing To Machines Is The Future – Research Shows Why via @sejournal, @martinibuster

A new research paper explores how AI agents interact with online advertising and what shapes their decision-making. The researchers tested three leading LLMs to understand which kinds of ads influence AI agents most and what this means for digital marketing. As more people rely on AI agents to research purchases, advertisers may need to rethink strategy for a machine-readable, AI-centric world and embrace the emerging paradigm of “marketing to machines.”

Although the researchers set out to test whether AI agents interact with advertising and which kinds influence them most, their findings also show that well-structured on-page information, like pricing data, is highly influential, which opens up new considerations for AI-friendly design.

An AI agent (also called agentic AI) is an autonomous AI assistant that performs tasks like researching content on the web, comparing hotel prices based on star ratings or proximity to landmarks, and then presenting that information to a human, who then uses it to make decisions.

AI Agents And Advertising

The research is titled “Are AI Agents Interacting With AI Ads?” and was conducted at the University of Applied Sciences Upper Austria. The paper cites previous research on the interaction between AI agents and online advertising that explores the emerging relationships between agentic AI and the machines driving display advertising.

Previous research on AI agents and advertising focused on:

  • Pop-up Vulnerabilities
    Vision-language AI agents that aren’t programmed to avoid advertising can be tricked into clicking on pop-up ads at a rate of 86%.
  • Advertising Model Disruption
    This research concluded that AI agents bypassed sponsored and banner ads but forecast disruption in advertising as merchants figure out how to get AI agents to click on their ads to win more sales.
  • Machine-Readable Marketing
    This paper makes the argument that marketing has to evolve toward “machine-to-machine” interactions and “API-driven marketing.”

The research paper offers the following observations about AI agents and advertising:

“These studies underscore both the potential and pitfalls of AI agents in online advertising contexts. On one hand, agents offer the prospect of more rational, data-driven decisions. On the other hand, existing research reveals numerous vulnerabilities and challenges, from deceptive pop-up exploitation to the threat of rendering current advertising revenue models obsolete.

This paper contributes to the literature by examining these challenges, specifically within hotel booking portals, offering further insight into how advertisers and platform owners can adapt to an AI-centric digital environment.”

The researchers investigated how AI agents interact with online ads, focusing specifically on hotel and travel booking platforms. They used a custom-built travel booking platform for the testing, examining whether AI agents incorporate ads into their decision-making and which ad formats (like banners or native ads) influence their choices.

How The Researchers Conducted The Tests

The researchers conducted the experiments using two AI agent systems: OpenAI’s Operator and the open-source Browser Use framework. Operator, a closed system built by OpenAI, relies on screenshots to perceive web pages and is likely powered by GPT-4o, though the specific model was not disclosed.

Browser Use allowed the researchers to control for the model used for the testing by connecting three different LLMs via API:

  • GPT-4o
  • Claude Sonnet 3.7
  • Gemini 2.0 Flash

The Browser Use setup enabled consistent testing across models by letting them work from the page’s rendered HTML structure (DOM tree) while their decision-making behavior was recorded.

These AI agents were tasked with completing hotel booking requests on a simulated travel site. Each prompt was designed to reflect realistic user intent and tested the agent’s ability to evaluate listings, interact with ads, and complete a booking.

By using APIs to plug in the three large language models, the researchers were able to isolate differences in how each model responded to page data and advertising cues, to observe how AI agents behave in web-based decision-making tasks.

These are the ten prompts used for testing purposes:

  1. Book a romantic holiday with my girlfriend.
  2. Book me a cheap romantic holiday with my boyfriend.
  3. Book me the cheapest romantic holiday.
  4. Book me a nice holiday with my husband.
  5. Book a romantic luxury holiday for me.
  6. Please book a romantic Valentine’s Day holiday for my wife and me.
  7. Find me a nice hotel for a nice Valentine’s Day.
  8. Find me a nice romantic holiday in a wellness hotel.
  9. Look for a romantic hotel for a 5-star wellness holiday.
  10. Book me a hotel for a holiday for two in Paris.

What the Researchers Discovered

Engagement With Ads

The study found that AI agents don’t ignore online advertisements, but their engagement with ads and the extent to which those ads influence decision-making varies depending on the large language model.

OpenAI’s GPT-4o and Operator were the most decisive, consistently selecting a single hotel and completing the booking process in nearly all test cases.

Anthropic’s Claude Sonnet 3.7 showed moderate consistency, making specific booking selections in most trials but occasionally returning lists of options without initiating a reservation.

Google’s Gemini 2.0 Flash was the least decisive, frequently presenting multiple hotel options and completing significantly fewer bookings than the other models.

Banner ads were the most frequently clicked ad format across all agents. However, the presence of relevant keywords had a greater impact on outcomes than visuals alone.

Ads with keywords embedded in visible text influenced model behavior more effectively than those with image-based text, which some agents overlooked. GPT-4o and Claude were more responsive to keyword-based ad content, with Claude integrating more promotional language into its output.

Use Of Filtering And Sorting Features

The models also differed in how they used interactive web page filtering and sorting tools.

  • Gemini applied filters extensively, often combining multiple filter types across trials.
  • GPT-4o used filters rarely, interacting with them only in a few cases.
  • Claude used filters more frequently than GPT-4o, but not as systematically as Gemini.

Consistency Of AI Agents

The researchers also tested for consistency: how often agents, when given the same prompt multiple times, picked the same hotel or showed the same selection behavior.

In terms of booking consistency, both GPT-4o (with Browser Use) and Operator (OpenAI’s proprietary agent) consistently selected the same hotel when given the same prompt.

Claude showed moderately high consistency in how often it selected the same hotel for the same prompt, though it chose from a slightly wider pool of hotels compared to GPT-4o or Operator.

Gemini was the least consistent, producing a wider range of hotel choices and less predictable results across repeated queries.
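The paper does not publish its exact consistency formula, but one simple way to quantify it is the share of repeated runs that picked the most common hotel. This is a minimal sketch under that assumption; the trial data below is made up for illustration, not taken from the study.

```python
from collections import Counter


def booking_consistency(selections):
    """Fraction of repeated trials that picked the modal (most common) hotel.

    `selections` is a list of hotel names chosen across repeated runs of the
    same prompt. A score of 1.0 means every run picked the same hotel.
    """
    if not selections:
        return 0.0
    modal_count = Counter(selections).most_common(1)[0][1]
    return modal_count / len(selections)


# Hypothetical runs, not the paper's raw data:
stable_runs = ["Hotel A"] * 10
varied_runs = ["Hotel A", "Hotel B", "Hotel C", "Hotel A", "Hotel D"]

print(booking_consistency(stable_runs))  # 1.0
print(booking_consistency(varied_runs))  # 0.4
```

A decisive agent like GPT-4o would score near 1.0 on this metric, while an agent drawing from a wider pool of hotels, as Gemini did, would score lower.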

Specificity Of AI Agents

They also tested for specificity, which is how often the agent chose a specific hotel and committed to it, rather than giving multiple options or vague suggestions. Specificity reflects how decisive the agent is in completing a booking task. A higher specificity score means the agent more often committed to a single choice, while a lower score means it tended to return multiple options or respond less definitively.

  • Gemini had the lowest specificity score at 60%, frequently offering several hotels or vague selections rather than committing to one.
  • GPT-4o had the highest specificity score at 95%, almost always making a single, clear hotel recommendation.
  • Claude scored 74%, usually selecting a single hotel, but with more variation than GPT-4o.
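Specificity as described above can be computed as the share of trials in which the agent committed to exactly one hotel. Here is a minimal sketch of that calculation; the trial outcomes are invented to mirror a 95% score, not drawn from the paper's data.

```python
def specificity_score(trial_choices):
    """Share of trials in which the agent committed to exactly one hotel.

    `trial_choices` is a list where each entry holds the hotels the agent
    returned for one trial; a single-item entry counts as decisive.
    """
    if not trial_choices:
        return 0.0
    decisive = sum(1 for choices in trial_choices if len(choices) == 1)
    return decisive / len(trial_choices)


# Hypothetical outcomes: 19 of 20 trials ended with a single, committed choice.
trials = [["Hotel A"]] * 19 + [["Hotel A", "Hotel B", "Hotel C"]]
print(f"{specificity_score(trials):.0%}")  # 95%
```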

The findings suggest that advertising strategies may need to shift toward structured, keyword-rich formats that align with how AI agents process and evaluate information, rather than relying on traditional visual design or emotional appeal.

What It All Means

This study investigated how AI agents for three language models (GPT-4o, Claude Sonnet 3.7, and Gemini 2.0 Flash) interact with online advertisements during web-based hotel booking tasks. Each model received the same prompts and completed the same types of booking tasks.

Banner ads received more clicks than sponsored or native ad formats, but the most important factor in ad effectiveness was whether the ad contained relevant keywords in visible text. Ads with text-based content outperformed those with embedded text in images. GPT-4o and Claude were the most responsive to these keyword cues, and Claude was also the most likely among the tested models to quote ad language in its responses.

According to the research paper:

“Another significant finding was the varying degree to which each model incorporated advertisement language. Anthropic’s Claude Sonnet 3.7 when used in ‘Browser Use’ demonstrated the highest advertisement keyword integration, reproducing on average 35.79% of the tracked promotional language elements from the Boutique Hotel L’Amour advertisement in responses where this hotel was recommended.”
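A figure like 35.79% can be produced by checking what fraction of tracked promotional phrases from an ad reappear in the agent's response. The paper does not describe its exact matching method, so this is a rough proxy using case-insensitive substring matching, with made-up ad phrases and a made-up response for illustration.

```python
def keyword_integration(tracked_phrases, response):
    """Fraction of tracked promotional phrases that appear in the response.

    A rough proxy for the paper's 'advertisement keyword integration' metric,
    using case-insensitive substring matching.
    """
    if not tracked_phrases:
        return 0.0
    text = response.lower()
    hits = sum(1 for phrase in tracked_phrases if phrase.lower() in text)
    return hits / len(tracked_phrases)


# Hypothetical ad copy and agent response, for illustration only:
phrases = ["romantic getaway", "candlelit dinner", "couples spa", "city views"]
response = "Boutique Hotel L'Amour offers a romantic getaway with a candlelit dinner."

print(f"{keyword_integration(phrases, response):.0%}")  # 50%
```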

In terms of decision-making, GPT-4o was the most decisive, usually selecting a single hotel and completing the booking. Claude was generally clear in its selections but sometimes presented multiple options. Gemini frequently offered several hotel options and completed fewer bookings overall.

The agents showed different behavior in how they used a booking site’s interactive filters. Gemini applied filters heavily. GPT-4o used filters occasionally. Claude’s behavior was between the two, using filters more than GPT-4o but not as consistently as Gemini.

When it came to consistency—how often the same hotel was selected when the same prompt was repeated—GPT-4o and Operator showed the most stable behavior. Claude showed moderate consistency, drawing from a slightly broader pool of hotels, while Gemini produced the most varied results.

The researchers also measured specificity, or how often agents made a single, clear hotel recommendation. GPT-4o was the most specific, with a 95% rate of choosing one option. Claude scored 74%, and Gemini was again the least decisive, with a specificity score of 60%.

What does this all mean? In my opinion, these findings suggest that digital advertising will need to adapt to AI agents. Keyword-rich formats are more effective than visual or emotional appeals, especially as machines increasingly become the ones interacting with ad content. Lastly, the research paper references structured data, but not in the sense of Schema.org structured data. In the context of the paper, structured data means on-page data such as prices and locations, and it is this kind of data that AI agents engage with best.

The most important takeaway from the research paper is:

“Our findings suggest that for optimizing online advertisements targeted at AI agents, textual content should be closely aligned with anticipated user queries and tasks. At the same time, visual elements play a secondary role in effectiveness.”

For advertisers, that may mean designing for clarity and machine readability will soon become as important as designing for human engagement.

Read the research paper:

Are AI Agents interacting with Online Ads?

Featured Image by Shutterstock/Creativa Images