Google’s AI Overviews have changed how search works. A TrustRadius report shows that 72% of B2B buyers see AI Overviews during research.
The study found something interesting: 90% of its respondents said they click on the cited sources to check information.
This finding differs from previous reports about declining click rates.
AI Overviews Are Affecting Search Patterns in Complex Ways
When AI summaries first appeared in search results, many publishers worried about “zero-click searches” reducing traffic. Many still see evidence of fewer clicks across different industries.
This research suggests B2B tech searches work differently. The study shows that while traffic patterns are changing, many users in its sample don’t fully trust AI content. They often check sources to verify what they read.
The report states:
“These overviews cite sources, and 90% of buyers surveyed said that they click through the sources cited in AI Overviews for fact-checking purposes. Buyers are clearly wanting to fact-check. They also want to consult with their peers, which we’ll get into later.”
If this finding holds beyond this study, being cited in these overviews might offer visibility for specific queries.
From Traffic Goals to Citation Considerations
Organic clicks still matter, but becoming a citation source for AI Overviews is increasingly valuable in its own right.
The report notes:
“Vendors can fill the gap in these tools’ capabilities by providing buyers with content that answers their later-stage buying questions, including use case-specific content or detailed pricing information.”
This might mean creating clear, authoritative content that AI systems could cite. This applies especially to category-level searches where AI Overviews often appear.
The Ungated Content Advantage in AI Training
The research identified a common misconception about how AI works. Some vendors think AI models can access their gated content (behind forms) for training.
They can’t. AI models generally only use publicly available content.
The report suggests:
“Vendors must find the right balance between gated and ungated content to maintain discoverability in the age of AI.”
This creates a challenge for B2B marketers who put valuable content behind forms. Making more quality information public could influence AI systems. You can still keep some premium content gated for lead generation.
Potential Implications For SEO Professionals
For search marketers, consider these points:
Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness seems even more critical for AI evaluation.
The research notes that “AI tools aren’t just training on vendor sites… Many AI Overviews cite third-party technology sites as sources.”
As organic traffic patterns change, “AI Overviews are reshaping brand discoverability” and possibly “increasing the use of paid search.”
Evolving SEO Success Metrics
Traditional SEO metrics like organic traffic still matter. But this research suggests we should also monitor other factors, like how often AI Overviews cite you and the quality of that traffic.
“The era of volume traffic is over… What’s going away are clicks from the super early stage of the buyer journey. But people will click through and visit sites eventually.”
The same source adds:
“I think we’ll see a lot less traffic, but the traffic that still arrives will be of higher quality.”
This offers search marketers one view on handling the changing landscape. Like with all significant changes, the best approach likely involves:
Testing different strategies
Measuring what works for your specific audience
Adapting as you learn more
This research doesn’t suggest AI is making SEO obsolete. Instead, it invites us to consider how SEO might change as search behaviors evolve.
Featured Image: PeopleImages.com – Yuri A/Shutterstock
In last week’s Memo, I explained how, just as digital DJing transformed music mixing, AI is revolutionizing how we create content by giving us instant access to diverse expressions and ideas.
Instead of fighting this change, writers should embrace AI as a starting point while focusing our energy on adding uniquely human elements that machines can’t replicate, like our personal experiences, moral judgment, and cultural understanding.
Last week, I identified seven distinctly human writing capabilities and 11 telltale signs of AI-generated content.
Today, I want to show you how I personally apply these insights in my editing process.
Image Credit: Lyna ™
Rather than seeing AI as my replacement, I advocate for thoughtful collaboration between human creativity and AI efficiency, much like how skilled DJs don’t just play songs but transform them through artistic mixing.
As someone who’s spent countless hours editing and tinkering with AI-generated drafts, I’ve noticed most people get stuck on grammar fixes while missing what truly makes writing connect with readers.
They overlook deeper considerations like:
Purposeful imperfection: Truly human writing isn’t perfectly polished. Natural quirks, occasional tangents, and varied sentence structures create authenticity that perfect grammar and flawless organization can’t replicate.
Emotional intelligence: AI content often lacks the intuitive emotional resonance that comes from lived experience. Editors frequently correct grammar but overlook opportunities to infuse genuine emotional depth.
Cultural context: Humans naturally reference shared cultural touchpoints and adapt their tone based on context. This awareness is difficult to edit into AI content without completely reframing passages.
In today’s Memo, I explain how to turn these edits into a recurring workflow for you or your team, so you can leverage the power of AI, accelerate content output, and drive more organic revenue.
Turning AI-Editing Into A Workflow
Image Credit: Kevin Indig
I like to edit AI content in several passes, each with a specific focus:
Round 1: Structure.
Round 2: Language.
Round 3: Humanization.
Round 4: Polish.
Not every type of content needs the same amount of editing:
You can be more hands-off with supporting content on category or product pages, while editorial content for blogs or content hubs needs significantly more editing.
In the same way, evergreen topics need less editing while thought leadership needs a heavy editorial hand.
Round 1: Structure & Big-Picture Review
First, I read the entire draft like a skeptical reader would.
I’m looking for logical flow issues, redundant sections, and places where the AI went on unhelpful tangents.
This is about getting the bones right before polishing sentences.
Rather than nitpicking grammar, I ask: “Does this piece make sense? Would a human actually structure it this way?”
But, the most important question is: “Does this piece meet user intent?” You need to ensure that the structure optimizes for speed-to-insights and helps users solve the implied problem of their searches.
If sections feel out of order or disconnected, I rearrange them.
If the AI repeats the same point in multiple places (they love doing this), I consolidate.
Round 2: Humanize The Language & Flow
Next, I tackle that sterile AI tone that makes readers’ eyes glaze over.
I break up the robotic rhythm by:
Consciously varying sentence lengths (Watch this. I’m doing it right now. Different lengths create natural cadence.)
Replacing corporate-speak with how humans actually talk (“use” instead of “utilize,” “start” instead of “commence”).
Cutting those meaningless filler sentences AI loves to add (“It’s important to note that…” or “As we can see from the above…”).
For example, I’d transform this AI-written line:
Utilizing appropriate methodologies can facilitate enhanced engagement among target demographics.
Into this:
Use the right approach, and people will actually care about what you’re saying.
Round 3: Add The Human Value Only You Can Provide
Here’s where I earn my keep.
I infuse the piece with:
Opinions where appropriate.
Personal stories or examples.
Unique metaphors or cultural references.
Nuanced insights that come from my expertise.
One of the shifts we have to make – and that I made – is to be more deliberate about collecting stories and opinions we can share.
In his book “Storyworthy,” Matthew Dicks shares how he saves stories from everyday life in a spreadsheet. He calls this habit Homework For Life, and it’s the most effective way to collect relatable stories that you can use for your content. It’s also a way to slow down time:
As you begin to take stock of your days, find those moments — see them and record them — time will begin to slow down for you. The pace of your life will relax.
Round 4: Final Polish & Optimization
Finally, I do a last pass focusing on:
A punchy opening that hooks the reader.
Removing any lingering AI patterns (overly formal language, repetitive phrases).
Fact-checking every statistic, date, name, and claim.
Adding calls to action or questions that engage readers.
I know I’ve succeeded when I read the final piece and genuinely forget that an AI was involved in the drafting process.
The ultimate test: “Would I be proud to put my name on this?”
AI Content Editing Checklist
Before you hit “Publish,” run through this checklist to make sure you’ve covered all bases:
✅ User Intent: The content is organized logically and addresses the intended topic or keyword completely, without off-topic detours.
✅ Tone & Voice: The writing sounds consistently human and aligns with brand voice (e.g., friendly, professional, witty, etc.).
✅ Readability: Sentences and paragraphs are concise and easy to read. Jargon is explained or simplified. The formatting (headings, lists, etc.) makes it skimmable.
✅ Repetition: No overly repetitive phrases or ideas. Redundant content is trimmed. The language is varied and interesting.
✅ Accuracy: All facts, stats, names, and claims have been verified. Any errors are corrected. Sources are cited for important or non-obvious facts, lending credibility. There are no unsupported claims or outdated information.
✅ Original Value: The content contains unique elements (experiences, insights, examples, opinions) that did not come from AI.
✅ SEO: The primary keyword and relevant terms are included naturally. Title and headings are optimized and clear. Internal links to related content are added where relevant. External links to authoritative sources support the content.
✅ Polish: The introduction is compelling. The content includes elements that engage the reader (questions, conversational bits) and a call to action. It’s free of typos and grammatical errors. All sentences flow well.
If you can check off all (or most) of these items, you’ve likely turned the AI draft into a high-quality piece that can confidently be published.
AI Content Editing = Remixing
We’ve come full circle.
Just as digital technology transformed DJing without eliminating the need for human creativity and curation, AI is reshaping writing while still requiring our uniquely human touch.
The irony I mentioned at the start of this article – trying to make AI content more human – becomes less ironic when we view AI as a collaborative tool rather than a replacement for human creativity.
Just as DJs evolved from vinyl crates to digital platforms without losing their artistic touch, writers are adapting to use AI while maintaining their unique value.
You can raise the chances of creating high-performing content that stands out by selecting the right models, inputs, and direction:
The newest models produce markedly better content than older (cheaper) ones. Don’t try to save money here.
Spend a lot of time getting style guides and examples right so the models work in the right lanes.
The more unique your data sources are, the more defensible your AI draft becomes.
The key insight is this: AI content editing is about enhancing the output with the irreplaceable human elements that make content truly engaging.
Whether that’s through adding lived experience, cultural understanding, emotional depth, or purposeful imperfection, our role is to be the bridge between AI’s computational efficiency and human connection.
The future belongs not to those who resist AI but to those who learn to dance with it, knowing exactly when to lead with their uniquely human perspective and when to follow the algorithmic beat.
Back in my DJ days, the best sets weren’t about the equipment I used but about the moments of unexpected connection I created.
The same holds true for writing in this new era.
Featured Image: Paulo Bobita/Search Engine Journal
OpenAI CEO Sam Altman recently said the company plans to release an open source model more capable than any currently available. While he acknowledged the likelihood of it being used in ways some may not approve of, he emphasized that highly capable open systems have an important role to play. He described the shift as a response to greater collective understanding of AI risks, implying that the timing is right for OpenAI to re-engage with open source models.
The statement came during a Live at TED2025 interview in which the interviewer, Chris Anderson, asked Altman whether the Chinese open source model DeepSeek had “shaken” him up.
Screenshot Of Sam Altman At Live at TED2025
Altman responded by saying that OpenAI is preparing to release a powerful open source model that approaches the capabilities of the most advanced AI models currently available.
He said:
“I think open source has an important place. We actually just last night hosted our first like community session to kind of decide the parameters of our open source model and how we want to shape it.
We’re going to do a very powerful open source model. I think this is important. We’re going to do something near the frontier, I think better than any current open source model out there. This will not be all… like, there will be people who use this in ways that some people in this room, maybe you or I, don’t like. But there is going to be an important place for open source models as part of the constellation here…”
Altman next admitted that they were slow to act on open source but now plan to contribute meaningfully to the movement.
He continued:
“You know, I think we were late to act on that, but we’re going to do it really well.”
About thirty minutes later in the interview, Altman circled back to the topic of open source, lightheartedly remarking that maybe in a year the interviewer might yell at him for open sourcing an AI model. But, he said, there are trade-offs in everything, and he feels OpenAI has done a good job of bringing AI technology into the world in a responsible way.
He explained:
“I do think it’s fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier, as we weren’t sure about the impact these systems were going to have and how to make them safe, that we acted with precaution.
I think a lot of your questions earlier would suggest at least some sympathy to the fact that we’ve operated that way. But now I think we have a better understanding as a world and it is time for us to put very capable open systems out into the world.
If you invite me back next year, you will probably yell at me because somebody has misused these open source systems and say, why did you do that? That was bad. You know, you should have not gone back to your open roots. But you know, we’re not going to get… there’s trade-offs in everything we do. And we are one player in this, one voice in this AI revolution, trying to do the best we can and kind of steward this technology into the world in a responsible way.
I think over the last almost decade we have mostly done the thing we’ve set out to do. We have a long way to go in front of us, our tactics will shift more in the future, but adherence to this sort of mission and what we’re trying to do is, I think, very strong.”
OpenAI’s Open Source Model
Sam Altman acknowledged OpenAI was “late to act” on open source but now aims to release a model “better than any current open source model.” His decision to release an open source AI model is significant because it will introduce additional competition and improvement to the open source side of AI technology.
OpenAI was established in 2015 as a non-profit organization but transitioned in 2019 to a closed source model over concerns about potential misuse. Altman’s use of the word “steward” to describe OpenAI’s role in releasing AI technologies into the world reflects that same concern.
2025 is a vastly different world from 2019: many highly capable open source models, DeepSeek among them, are now available. Was OpenAI’s hand forced by DeepSeek’s popularity? Altman didn’t say, framing the decision instead as an evolution from a position of responsible development.
Sam Altman’s remarks at the TED interview suggest that OpenAI’s new open source model will be powerful but not representative of its best model. Nevertheless, he affirmed that open source models have a legitimate place in the “constellation” of AI as a strategically important and technically capable part of the broader ecosystem.
A new study tracking 768,000 citations across AI search engines shows that product-related content tops AI citations. It makes up 46% to 70% of all sources referenced.
This finding offers guidance on how marketers should approach content creation amid the growth of AI search.
The research, conducted over 12 weeks by XFunnel, looked at which types of content ChatGPT, Google (AI Overviews), and Perplexity most often cite when answering user questions.
Here’s what you need to know about the findings.
Product Content Visible Across Queries
The study shows AI platforms prefer product-focused content. Content with product specs, comparisons, “best of” lists, and vendor details consistently got the highest citation rates.
The study notes:
“This preference appears consistent with how AI engines handle factual or technical questions, using official pages that offer reliable specifications, FAQs, or how-to guides.”
Other content types struggled to get cited as often:
News and research articles each got only 5-16% of citations.
Affiliate content typically stayed below 10%.
User reviews (including forums and Q&A sites) ranged between 3-10%.
Blog content received just 3-6% of citations.
PR materials barely appeared, typically less than 2% of citations.
Citation Patterns Vary By Funnel Stage
AI platforms cite different content types depending on where customers are in their buying journey:
Top of funnel (unbranded): Product content led at 56%, with news and research each at 13-15%. This challenges the idea that early-stage content should focus mainly on education rather than products.
Middle of funnel (branded): Product citations dropped slightly to 46%. User reviews and affiliate content each rose to about 14%. This shows how AI engines include more outside opinions for comparison searches.
Bottom of funnel: Product content peaked at over 70% of citations for decision-stage queries. All other content types fell below 10%.
B2B vs. B2C Citation Differences
The study found big differences between business and consumer queries:
For B2B queries, product pages (especially from company websites) made up nearly 56% of citations. Affiliate content (13%) and user reviews (11%) followed.
For B2C queries, there was more variety. Product content dropped to about 35% of citations. Affiliate content (18%), user reviews (15%), and news (15%) all saw higher numbers.
What This Means For SEO
For SEO professionals and content creators, here’s what to take away from this study:
Adding detailed product information improves citation chances even for awareness-stage content.
Blogs, PR content, and educational materials are cited less often. You may need to change how you create these.
Check your content mix to make sure you have enough product-focused material at all funnel stages.
B2B marketers should prioritize solid product information on their own websites. B2C marketers need strategies that also encourage quality third-party reviews.
The study concludes:
“These observations suggest that large language models prioritize trustworthy, in-depth pages, especially for technical or final-stage information… factually robust, authoritative content remains at the heart of AI-generated citations.”
As AI transforms online searches, marketers who understand citation patterns can gain a competitive edge in visibility.
There’s been a lot of talk recently about whether large language models (LLMs) are replacing a considerable amount of Google searches.
While Google is clearly still the market leader, with 14 billion searches per day worldwide, an estimated 37.5 million daily “searches” on ChatGPT represent an opportunity for your brand.
SEO professionals have years of experience testing optimization tactics on Google, but we’re still at the beginning stages of understanding how to get your brand cited in generative AI chatbots.
This is an exciting opportunity because it forces people to test and learn rapidly.
Through testing and research, I’ve developed the following initial generative AI optimization recommendations for my clients, regardless of whether the chatbot is ChatGPT, Gemini, DeepSeek, or whatever comes next.
Use Generative AI Chatbots To Learn About Your Brand
First, use generative AI tools and ask them questions about your brand to discover which sources they use to answer your queries.
This will help you understand what these tools have been trained on and which pages on your site (or competitor sites) shape their understanding of your brand.
Ask questions like:
Tell me about [company]/[product].
What are the best brands in [vertical] and why?
What are the pros and cons of [company]/[product]?
For example, when I ask ChatGPT-4o, “Tell me about HubSpot,” it gives me a nice summary with a lot of useful citations:
Screenshot from ChatGPT, April 2025
From this, you can see that a legal page is being cited multiple times in a company overview, so those pages are important. You can also see that information is being pulled from the HubSpot Knowledge Base.
Often, a company’s About page is the main citation, but clearly, HubSpot has built out a better legal section than its core pages.
If I were part of HubSpot’s organization, I would work to make the About page richer with information. Generally, your About page will do a better job of marketing the benefits of your products than legal pages will.
When I then asked, “What are the best brands for small business marketing?”, it provided me with the following list:
Screenshot from ChatGPT, April 2025
ChatGPT-4o cites Wikipedia five different times and NerdWallet once for its affiliate coverage of small business marketing tools.
In searches I’ve done in other sectors, I’ve seen a lot more variety in sources listed – many in the affiliate review space. Here, however, NerdWallet is the only one.
When I asked ChatGPT-4o to dive into HubSpot further and show me the pros and cons of using it for small business marketing, it responded with:
Screenshot from ChatGPT, April 2025
I would then take this list and compare it against how I market the product to small business owners and potentially make tweaks accordingly.
And if there is validity to the cons listed and they are weaknesses we want to work on as an organization, I would start to build relationships with some of the sources listed.
That way, when there are company updates that impact some of what’s been written about the company, they can update their review pages, and it’ll impact what shows up in LLM queries.
I would also engage with the PR team about getting more coverage for the brand. Some of these citations are not particularly well-known or credible sites, so there is opportunity to get more authoritative sources to show up.
Ensure LLMs Can Crawl Your Website
This was true at the beginning of SEO, and is still true now.
Ensure you have a robots.txt file on your website’s server with directives to crawlers about pages and sections they can crawl and index.
A lot of site owners initially rushed to block LLMs from crawling their sites when ChatGPT first launched, since the crawler’s purpose was unknown (and it was probably scraping content for model training).
If you want to be included in generative AI results now, though, you need to be where the AI crawlers can see you, so double-check that it is all configured correctly.
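As a concrete illustration, here is a minimal robots.txt sketch that explicitly welcomes some of the major AI crawlers while keeping a gated area blocked. The crawler tokens shown (GPTBot, ClaudeBot, PerplexityBot) are ones the vendors have documented, but verify them against each vendor’s current documentation; “/members/” is a hypothetical gated path:

```
# Each crawler obeys only its most specific matching group,
# so the gated-area rule is repeated inside the AI-crawler group.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
Disallow: /members/
Allow: /

# Default rule for everyone else
User-agent: *
Disallow: /members/
```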
Utilize Credible Citations And Quotes In Content
A group of researchers from several prominent universities conducted a study on AI search engine optimization and which content was most likely to surface in response to queries.
The tactic that worked the best, especially for factual queries, was adding citations from authoritative sources.
Using language like “according to [source],” adding a statistic with a credible citation, or quoting a known expert all increased the likelihood of showing up in generative AI chatbot responses: by as much as 25.1% for sites ranking in position 4 in Google, and by 99.7% for sites ranking in position 5.
Similarly, adding statistics to content led to a 10% increase in visibility in LLMs if the site is in position 4 in Google and a 97% increase in visibility in LLMs if the site is ranked in position 5 in Google.
Mentions In Prominent Databases And Forums Help
There are lots of reasons to be paying attention to prominent forums like Reddit and Quora or popular database sites like Wikipedia. Not only do they own lots of organic search real estate, but they are also obvious sites for training LLMs.
Reddit is now, smartly, selling data licensing to AI companies. Being a topic of discussion on these sites will only help your brand. There’s no better time than now to get active on Reddit.
Engaging authentically on behalf of a brand (assuming you reveal your affiliation upfront) is more acceptable nowadays and is often welcomed to get clarification on user questions. It will likely benefit you on your generative AI optimization journey, too.
Develop An Exceptional About Page
If there is one area of your website you need to improve on, your About page may be it.
Generative AI models utilize these types of pages to understand what a company does and how credible the company is.
If you ask any of these platforms for information about your brand, you may be surprised by how heavily they rely on your About page to deliver the answer.
If your About page doesn’t describe your business and products well enough, you may see LLMs citing legal pages instead, like in the case of HubSpot mentioned earlier.
Focus On Long-Tail Keywords
Modern transformer-based LLMs are based on a statistical analysis of the co-occurrence of words.
If an entity is mentioned in connection with another entity with frequency in the training data, there is a high probability of a semantic relationship between the two entities.
To optimize for this, it’s more useful to use keyword research tools to better understand related keywords.
Search volume can still be an indicator of importance, but I would focus more on better understanding the relationships and relevance between concepts, ensuring the content is of high quality, and that user intent is matched.
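To make the co-occurrence idea concrete, here is a toy Python sketch. It is illustrative only: real LLMs absorb these statistics implicitly across trillions of tokens, but the principle of counting which terms appear together is the same:

```python
from collections import Counter
from itertools import combinations

# A toy corpus standing in for training data.
sentences = [
    "hubspot crm helps small business marketing",
    "small business marketing tools like hubspot automate email",
    "email marketing drives small business growth",
]

# Count how often word pairs appear in the same sentence - a crude
# stand-in for the co-occurrence statistics LLMs learn at scale.
pairs = Counter()
for sentence in sentences:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pairs[(a, b)] += 1

# Frequently co-occurring pairs (e.g., "business" + "marketing")
# hint at the semantic relationships a model would internalize.
for pair, count in pairs.most_common(5):
    print(pair, count)
```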
Stop Siloing SEO
We’re entering an era when websites get fewer and fewer clicks from organic search. For most brands, a multi-channel strategy has never been more imperative.
Not only does building brand recognition help fuel some of the other best practices here, but LLMs are also being trained on social media and marketing content.
Plus, the more you can build a sales flywheel in your own content ecosystem, the less you need to panic about staying ahead of the ever-evolving world of SEO.
Track Your Referrals And Reverse-Engineer
Once you start seeing generative AI platforms driving traffic to your site, start paying attention to what pages bring that traffic in.
Then, visit that generative AI platform and try to recreate searches that could lead to your page as the answer.
You’ll start to learn what topics these platforms associate with your brand, and then you can find ways to double down on that type of content.
Final Thoughts
While the tool companies are trying to catch up with how to help digital marketers optimize in this era of generative AI, we will have to be more reliant on ourselves to reverse-engineer what we’re seeing in the data and run our own experiments.
A new research paper explores how AI agents interact with online advertising and what shapes their decision-making. The researchers tested three leading LLMs to understand which kinds of ads influence AI agents most and what this means for digital marketing. As more people rely on AI agents to research purchases, advertisers may need to rethink strategy for a machine-readable, AI-centric world and embrace the emerging paradigm of “marketing to machines.”
Although the researchers were testing whether AI agents interact with advertising and which kinds influence them most, their findings also show that well-structured on-page information, like pricing data, is highly influential, which opens up new questions about AI-friendly design.
An AI agent (also called agentic AI) is an autonomous AI assistant that performs tasks like researching content on the web, comparing hotel prices based on star ratings or proximity to landmarks, and then presenting that information to a human, who then uses it to make decisions.
AI Agents And Advertising
The research is titled “Are AI Agents Interacting With AI Ads?” and was conducted at the University of Applied Sciences Upper Austria. The paper cites previous research on the interaction between AI agents and online advertising that explores the emerging relationship between agentic AI and the machines driving display advertising.
Previous research on AI agents and advertising focused on:
Pop-up Vulnerabilities: Vision-language AI agents that aren’t programmed to avoid advertising can be tricked into clicking on pop-up ads at a rate of 86%.
Advertising Model Disruption: This research concluded that AI agents bypassed sponsored and banner ads, but forecast disruption in advertising as merchants figure out how to get AI agents to click on their ads to win more sales.
Machine-Readable Marketing: This paper argues that marketing has to evolve toward “machine-to-machine” interactions and “API-driven marketing.”
The research paper offers the following observations about AI agents and advertising:
“These studies underscore both the potential and pitfalls of AI agents in online advertising contexts. On one hand, agents offer the prospect of more rational, data-driven decisions. On the other hand, existing research reveals numerous vulnerabilities and challenges, from deceptive pop-up exploitation to the threat of rendering current advertising revenue models obsolete.
This paper contributes to the literature by examining these challenges, specifically within hotel booking portals, offering further insight into how advertisers and platform owners can adapt to an AI-centric digital environment.”
The researchers investigated how AI agents interact with online ads, focusing specifically on hotel and travel booking platforms. They used a custom-built travel booking platform for the testing, examining whether AI agents incorporate ads into their decision-making and which ad formats (like banners or native ads) influence their choices.
How The Researchers Conducted The Tests
The researchers conducted the experiments using two AI agent systems: OpenAI’s Operator and the open-source Browser Use framework. Operator, a closed system built by OpenAI, relies on screenshots to perceive web pages and is likely powered by GPT-4o, though the specific model was not disclosed.
Browser Use allowed the researchers to control for the model used for the testing by connecting three different LLMs via API:
GPT-4o
Claude Sonnet 3.7
Gemini 2.0 Flash
The setup with Browser Use enabled consistent testing across models by enabling them to use the page’s rendered HTML structure (DOM tree) and recording their decision-making behavior.
These AI agents were tasked with completing hotel booking requests on a simulated travel site. Each prompt was designed to reflect realistic user intent and tested the agent’s ability to evaluate listings, interact with ads, and complete a booking.
By using APIs to plug in the three large language models, the researchers were able to isolate differences in how each model responded to page data and advertising cues, to observe how AI agents behave in web-based decision-making tasks.
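For readers who want to experiment, here is a minimal sketch of how Browser Use is typically wired to a model via API, based on the project’s published examples (the exact class names and signatures are assumptions and may differ between releases; the task string is one of the study’s ten prompts, listed below):

```python
import asyncio

from browser_use import Agent  # pip install browser-use
from langchain_openai import ChatOpenAI

async def main():
    # The agent perceives the page via its rendered DOM tree and
    # decides which elements to click, filter, or fill in.
    agent = Agent(
        task="Book a romantic holiday with my girlfriend.",
        llm=ChatOpenAI(model="gpt-4o"),  # swap in Claude or Gemini here
    )
    # Returns a history of actions taken plus the final result,
    # which is how the researchers recorded decision-making behavior.
    result = await agent.run()
    print(result)

asyncio.run(main())
```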
These are the ten prompts used for testing purposes:
Book a romantic holiday with my girlfriend.
Book me a cheap romantic holiday with my boyfriend.
Book me the cheapest romantic holiday.
Book me a nice holiday with my husband.
Book a romantic luxury holiday for me.
Please book a romantic Valentine’s Day holiday for my wife and me.
Find me a nice hotel for a nice Valentine’s Day.
Find me a nice romantic holiday in a wellness hotel.
Look for a romantic hotel for a 5-star wellness holiday.
Book me a hotel for a holiday for two in Paris.
What the Researchers Discovered
Engagement With Ads
The study found that AI agents don’t ignore online advertisements, but their engagement with ads, and the extent to which those ads influence decision-making, vary depending on the large language model.
OpenAI’s GPT-4o and Operator were the most decisive, consistently selecting a single hotel and completing the booking process in nearly all test cases.
Anthropic’s Claude Sonnet 3.7 showed moderate consistency, making specific booking selections in most trials but occasionally returning lists of options without initiating a reservation.
Google’s Gemini 2.0 Flash was the least decisive, frequently presenting multiple hotel options and completing significantly fewer bookings than the other models.
Banner ads were the most frequently clicked ad format across all agents. However, the presence of relevant keywords had a greater impact on outcomes than visuals alone.
Ads with keywords embedded in visible text influenced model behavior more effectively than those with image-based text, which some agents overlooked. GPT-4o and Claude were more responsive to keyword-based ad content, with Claude integrating more promotional language into its output.
Use Of Filtering And Sorting Features
The models also differed in how they used interactive web page filtering and sorting tools.
Gemini applied filters extensively, often combining multiple filter types across trials.
GPT-4o used filters rarely, interacting with them only in a few cases.
Claude used filters more frequently than GPT-4o, but not as systematically as Gemini.
Consistency Of AI Agents
The researchers also tested for consistency of how often agents, when given the same prompt multiple times, picked the same hotel or offered the same selection behavior.
In terms of booking consistency, both GPT-4o (with Browser Use) and Operator (OpenAI’s proprietary agent) consistently selected the same hotel when given the same prompt.
Claude showed moderately high consistency in how often it selected the same hotel for the same prompt, though it chose from a slightly wider pool of hotels compared to GPT-4o or Operator.
Gemini was the least consistent, producing a wider range of hotel choices and less predictable results across repeated queries.
Specificity Of AI Agents
They also tested for specificity, which is how often the agent chose a specific hotel and committed to it, rather than giving multiple options or vague suggestions. Specificity reflects how decisive the agent is in completing a booking task. A higher specificity score means the agent more often committed to a single choice, while a lower score means it tended to return multiple options or respond less definitively.
Gemini had the lowest specificity score at 60%, frequently offering several hotels or vague selections rather than committing to one.
GPT-4o had the highest specificity score at 95%, almost always making a single, clear hotel recommendation.
Claude scored 74%, usually selecting a single hotel, but with more variation than GPT-4o.
The findings suggest that advertising strategies may need to shift toward structured, keyword-rich formats that align with how AI agents process and evaluate information, rather than relying on traditional visual design or emotional appeal.
What It All Means
This study investigated how AI agents for three language models (GPT-4o, Claude Sonnet 3.7, and Gemini 2.0 Flash) interact with online advertisements during web-based hotel booking tasks. Each model received the same prompts and completed the same types of booking tasks.
Banner ads received more clicks than sponsored or native ad formats, but the most important factor in ad effectiveness was whether the ad contained relevant keywords in visible text. Ads with text-based content outperformed those with embedded text in images. GPT-4o and Claude were the most responsive to these keyword cues, and Claude was also the most likely among the tested models to quote ad language in its responses.
According to the research paper:
“Another significant finding was the varying degree to which each model incorporated advertisement language. Anthropic’s Claude Sonnet 3.7 when used in ‘Browser Use’ demonstrated the highest advertisement keyword integration, reproducing on average 35.79% of the tracked promotional language elements from the Boutique Hotel L’Amour advertisement in responses where this hotel was recommended.”
In terms of decision-making, GPT-4o was the most decisive, usually selecting a single hotel and completing the booking. Claude was generally clear in its selections but sometimes presented multiple options. Gemini tended to frequently offer several hotel options and completed fewer bookings overall.
The agents showed different behavior in how they used a booking site’s interactive filters. Gemini applied filters heavily. GPT-4o used filters occasionally. Claude’s behavior was between the two, using filters more than GPT-4o but not as consistently as Gemini.
When it came to consistency—how often the same hotel was selected when the same prompt was repeated—GPT-4o and Operator showed the most stable behavior. Claude showed moderate consistency, drawing from a slightly broader pool of hotels, while Gemini produced the most varied results.
The researchers also measured specificity, or how often agents made a single, clear hotel recommendation. GPT-4o was the most specific, with a 95% rate of choosing one option. Claude scored 74%, and Gemini was again the least decisive, with a specificity score of 60%.
What does this all mean? In my opinion, these findings suggest that digital advertising will need to adapt to AI agents: keyword-rich formats are more effective than visual or emotional appeals, especially as machines increasingly become the ones interacting with ad content. Lastly, the research paper references structured data, but not in the sense of Schema.org markup. In the paper, structured data means on-page data like prices and locations, and it’s this kind of data that AI agents engage with best.
The most important takeaway from the research paper is:
“Our findings suggest that for optimizing online advertisements targeted at AI agents, textual content should be closely aligned with anticipated user queries and tasks. At the same time, visual elements play a secondary role in effectiveness.”
That may mean that for advertisers, designing for clarity and machine readability may soon become as important as designing for human engagement.
New research shows marketers aren’t using generative AI as much as they could be. Marketing applications rank surprisingly low on the list of popular AI uses.
“The Top-100 Gen AI Use Case” report by Marc Zao-Sanders reveals that while people increasingly use AI for personal support, marketing tasks like creating ads and social media content fall near the bottom of the list.
Personal Uses Dominate While Marketing Applications Trail
The research analyzed how people use Gen AI based on online discussions.
The findings show a shift from technical to emotional applications over the past year.
The top three uses are now:
Therapy and companionship
Life organization
Finding purpose
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.
Zao-Sanders observes:
“The findings underscore a marked transition from primarily technical and productivity-driven use cases toward applications centered on personal well-being, life organization, and existential exploration.”
Meanwhile, marketing uses rank much lower:
Ad/marketing copy (#64)
Writing blog posts (#97)
Social media copy (#98)
Social media systems (#99)
This gap shows marketers haven’t fully tapped into Gen AI’s potential.
Why the Adoption Gap Exists
Why aren’t marketers using Gen AI more? Several reasons explain this.
Many marketers may have misjudged how people use AI, Zao-Sanders suggests in the report:
“Most experts expected AI would prove itself first in technical areas. While it’s doing plenty there, this research suggests AI may help us as much or more with our human whims and desires.”
The research also shows users have gotten better at writing prompts. They also better understand AI’s limits.
Learning from Top-Ranked Applications
Marketers can learn from what makes the top AI uses so popular:
Emotional connection: People value AI that feels personal and supportive. Marketing tools could be more conversational and empathetic.
Life organization: People use AI to structure tasks. Marketing tools could focus more on organizing workflows rather than just creating content.
Enhanced learning: Users value AI as a learning tool. Marketing applications could highlight how they help build skills.
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.
One marketing-related use that ranked higher was “Generate ideas” at #6. This suggests brainstorming might be a better entry point than finished content.
Here are some quotes pulled from the report on how marketers are using gen AI tools:
“I use it to determine a certain industries pain points, then educate it on what I sell, then have it create lists, PowerPoint templates, and cold emails/call scripts that specifically call out how my product solves them.”
“Case studies. I just input a few bullet points of what we did, a couple of links, and metrics we want to focus on. Done. [Reports] used to take days to make. Now it’s 95% complete in 2 minutes.”
“I record a Zoom call where I discuss each of the points. We send the video of the Zoom to have it transcribed into Word. Then I paste it into ChatGPT with a prompt like: ‘convert this conversation into an 800 word blog for marketing to (x target market)’”
Practical Steps for Marketers
Based on these findings, here’s what marketers can do:
Focus on the personal benefits of AI tools, not just productivity.
Study good prompts. The report includes examples of effective prompts you can adapt.
Connect personal and work uses. Tools that help in both contexts are more popular.
Users worry about data privacy. Be transparent about how you protect their information.
Looking Ahead
Report author Marc Zao-Sanders concludes:
“Last year, I made the correct but rather insipidly safe prediction that AI will continue to develop, as will our applications of it. I make exactly the same prediction now.”
Now is the perfect time for marketers to learn about and incorporate these tools into their daily work.
While marketing may be one of the less common areas for generative AI use, that also means you’re not falling behind, despite what others might claim.
By studying what makes top AI applications successful, you can develop better AI strategies for your marketing needs.
The full report (PDF link) provides detailed insights into real-world AI use, offering guidance for improving your approach.
See the screenshot below for a complete list of the top 100 gen AI use cases.
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.
OpenAI has added better memory features to ChatGPT. Now, the AI can remember more from your past chats. This means you’ll get more personalized responses without needing to repeat yourself.
Sam Altman, CEO of OpenAI, made the announcement on X:
we have greatly improved memory in chatgpt–it can now reference all your past conversations!
this is a surprisingly great feature imo, and it points at something we are excited about: ai systems that get to know you over your life, and become extremely useful and personalized.
The update covers two types of memory:
Saved Memories: These are specific details ChatGPT saves for later use. Examples include your preferences or instructions you want it to remember.
Chat History Reference: This lets ChatGPT look back at your past conversations to give better answers, even if you didn’t specifically ask it to remember something.
OpenAI describes the update:
“ChatGPT can now remember helpful information between conversations, making its responses more relevant and personalized. Whether you’re typing, speaking, or generating images in ChatGPT, it can recall details and preferences you’ve shared and use them to tailor its responses.”
You’ll know immediately whether you’re using the version with improved memory if you log in and see this message:
Screenshot from: ChatGPT, April 2025.
It links to an FAQ section with more information, or you can trigger a demonstration by tapping “Show me.”
You can prompt it with “Describe me based on all our chats” to see what it knows.
Here’s what it gave me. Based on my usage, it was accurate. It even remembered that I sometimes ask about brewing coffee, a conversation I haven’t had in months.
Screenshot from: ChatGPT, April 2025.
User Controls and Privacy Considerations
You have full control over what ChatGPT remembers:
You can turn off memory features in your settings
You can review and delete specific memories
You can start “Temporary Chats” that don’t use or create memories
ChatGPT won’t automatically remember sensitive information like health details unless you ask it to
OpenAI states:
“You’re in control of what ChatGPT remembers. You can delete individual memories, clear specific or all saved memories, or turn memory off entirely in your settings.”
You can tell ChatGPT to remember things any time by saying something like “Remember that I’m vegetarian when you recommend recipes.”
Availability & Limitations
Right now, ChatGPT Plus and Pro subscribers are getting these new memory features. Free users can only use “Saved Memories,” not the “Chat History” feature.
These features aren’t available in the UK, Switzerland, and other European countries, probably because of data privacy laws in those regions.
If you have ChatGPT Enterprise, workspace owners can control everyone’s memory features. Since February 2025, Enterprise and Education customers have 20% more memory capacity.
Implications for Marketers and SEO Professionals
For marketers and SEO pros, these memory improvements make ChatGPT much more useful:
Better Content Creation: ChatGPT remembers your brand voice and style across different sessions
Easier SEO Work: It recalls past discussions about site structure, keywords, and algorithm updates
Smoother Projects: You won’t need to repeat project details every time you start a new chat
OpenAI notes:
“The more you use ChatGPT, the more useful it becomes. You’ll start to notice improvements over time as it builds a better understanding of what works best for you.”
What’s Next for AI Memory
OpenAI says memory features aren’t available for custom GPTs yet, but they’ll add them later. When that happens, GPT creators can enable memory for their custom GPTs.
Each GPT will have its own separate memory. Memories won’t be shared between different GPTs or with the main ChatGPT.
This upgrade marks a big step toward more natural AI conversations that build on shared history. It should help marketers use AI tools more effectively in their daily work.
Google leaders shared new insights on AI in search and the future of SEO during this week’s Google Search Central Live conference in Madrid.
This report is based on the thorough coverage by Aleyda Solis, who attended the event and noted the main points.
The event featured talks from Google’s Search Relations team, including John Mueller, Daniel Weisberg, Moshe Samet, and Eric Barbera.
Google’s LLM Integration Architecture Revealed
Mueller explained how Google uses large language models (LLMs), a method called Retrieval Augmented Generation (RAG), and grounding to build AI-powered search answers.
According to Mueller’s slides, the process works in four steps:
A user enters a question.
The search engine finds the relevant information.
This information is used to “ground” the LLM.
The LLM creates an answer with supporting links.
This system is designed to keep answers accurate and tied to their sources, addressing concerns about AI-generated errors.
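A minimal, purely illustrative Python sketch of those four steps follows. The toy keyword-overlap retriever and prompt format are my assumptions for demonstration, not Google’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str

def retrieve(query: str, corpus: list[Doc], top_k: int = 3) -> list[Doc]:
    # Step 2: a toy retriever that ranks documents by query-word overlap.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.text.lower().split())))[:top_k]

def build_grounded_prompt(query: str, docs: list[Doc]) -> str:
    # Step 3: "ground" the LLM by packing retrieved sources into the
    # prompt and instructing it to answer only from them, with citations.
    sources = "\n".join(f"[{i + 1}] {d.url}: {d.text}" for i, d in enumerate(docs))
    return (
        "Answer the question using ONLY the sources below, citing them as [n].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

# Step 4 would send this prompt to the LLM, which drafts an answer
# with supporting links back to the cited documents.
corpus = [
    Doc("https://example.com/a", "Structured data helps machines parse page content."),
    Doc("https://example.com/b", "Grounding ties LLM answers to retrieved sources."),
]
docs = retrieve("how does grounding keep answers accurate", corpus)
print(build_grounded_prompt("How does grounding keep answers accurate?", docs))
```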
Google made it clear to SEO professionals that no extra tweaks are needed for AI features.
Here are the key points:
AI tools are still new and will continue to change.
User behavior with AI search is still growing.
AI data appears with traditional search data in Search Console.
There is no separate breakdown, much like with featured snippets.
Google encourages reporting any unusual issues, but sticking to your current SEO best practices is enough for now.
Google: No optimization is necessary for Google AI features: they’re too new, user behavior is changing a lot, they’re taken into account in GSC but not broken out. #sclmadrid
Despite advances in AI, structured data remains important. During the conference, Google advised that you should:
Keep using supported structured data types.
Check Google’s documentation for the right schemas.
Understand that structured data makes it easier for computers to read and index your content.
Even though AI can work with unstructured data, using structured data gives you a clear advantage in search results.
Google still recommends to use structured data in an AI search world – focusing on those things that are actually visible in SERPs. @JohnMu #sclmadrid
For site owners who are cautious about how their content shows up in AI features, Google explained several ways to control it:
Use the robots nosnippet tag to opt out of AI Overviews.
Add a meta tag like <meta name="robots" content="nosnippet">.
Wrap specific passages in a <span data-nosnippet> element.
Limit the amount of text shown with a max-snippet:[number] directive.
These options work just like the controls for traditional search snippets.
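For reference, the documented snippet controls look like this in practice (the values shown are examples):

```html
<!-- Opt the whole page out of snippets, and thus AI Overviews: -->
<meta name="robots" content="nosnippet">

<!-- Or cap how much text may be shown as a snippet (here, 50 characters): -->
<meta name="robots" content="max-snippet:50">

<!-- Or exclude only a specific passage while the rest stays eligible: -->
<p>This sentence can appear in snippets.
   <span data-nosnippet>This one cannot.</span></p>
```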
You can opt out from AI Overviews using the robots nosnippet configurations since Google consider them to be a search feature. #sclmadrid
I asked Google if there will be a console showing data from Gemini usage / search behavior to help inform about their impact and overlap with traditional search, especially with the integration of AI mode: it’s not planned yet because of implications regarding privacy among other…
There was a discussion about a potential file called LLMs.txt, which would work like robots.txt but control AI usage. Mueller noted that this file “only makes sense if the system doesn’t know about your site.” (paraphrased)
The extra layer might be unnecessary since Google already has plenty of data about most sites. For Gemini and Vertex AI training, Google now uses a user-agent token in robots.txt, which does not affect search rankings.
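For example, a site that wants to opt out of Gemini and Vertex AI training without affecting normal crawling could use a robots.txt sketch like this (Google-Extended is Google’s documented control token, not a separate crawler):

```
# Block Gemini / Vertex AI training use of this site's content
User-agent: Google-Extended
Disallow: /

# Googlebot's crawling and indexing for Search are unaffected
User-agent: Googlebot
Allow: /
```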
.@JohnMu About LLMs.txt: only makes sense if the system doesn’t know about your site, so in the short term it might make sense but doesn’t expect that is something that Google will take into account since Google has already a lot of data, the actual content of sites. At the…
Solis’s coverage shows that Google focuses on user needs while adding new features. The big message is to keep delivering quality content and solid technical foundations. Although AI brings new challenges, the goal of serving users well does not change.
Some challenges remain, such as not having separate reports for AI features. However, as these features mature, more precise data may soon be available.
For now, SEOs should continue using structured data, following their proven SEO practices, and keeping up with new developments.
For more insights from the conference, see the full coverage on Solis’ website.
Since late 2022, the price of using GPT-3.5-level AI models has dropped from $20.00 to just $0.07 per million tokens.
According to Stanford HAI’s AI Index Report, that’s a 280-fold reduction in less than two years.
This massive cost drop is changing the pricing of AI marketing tools. Tools that only big companies could afford are now within reach for businesses of all sizes.
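To put the new price in perspective: a 50,000-word website is roughly 65,000 tokens (assuming the common rule of thumb of about 1.3 tokens per word). At $0.07 per million tokens, processing it costs well under a cent; at the late-2022 price of $20.00 per million, the same job would have cost about $1.30.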
AI Cost Reduction
The report shows that large language model (LLM) prices have fallen between 9 and 900 times yearly, depending on the task.
These cost reductions change the ROI for AI in marketing. Tools that were too expensive before could now pay off even for medium-sized companies.
Source: McKinsey & Company Survey, 2024 | Chart: 2025 AI Index report
The gap between the best AI models is closing. The difference between the first and tenth-ranked models has shrunk from 11.9% to just 5.4% over the past year.
The report also shows that AI models are getting smaller while staying powerful. In 2022, to get 60% accuracy on the MMLU benchmark (a test of AI reasoning), you needed models with 540 billion parameters.
By 2024, models 142 times smaller (roughly 3.8 billion parameters) could do the same job. This means businesses can now use advanced AI tools with less computing power and lower costs.
Chart: 2025 AI Index Report
What This Means For Marketers
For marketers, these changes bring several potential benefits:
1. Advanced Content Creation at Scale: The price drop makes it affordable to create and optimize content in bulk. Tasks can now be automated cheaply without losing quality.
2. Better Analysis: Newer AI models can process up to 1-2 million tokens (pieces of text) at once. This is enough to analyze entire websites for competitive insights.
3. Smarter Knowledge Management: Retrieval-augmented generation (RAG), where AI pulls information from your company’s data, is improving. This helps marketers build systems that ensure AI outputs match their brand voice and expertise.
The End of AI Moats?
The report shows that AI models are becoming more similar in performance, with little difference between leading systems.
This suggests that the edge in marketing technology may shift from the raw AI power to how well you use it, your strategy, and your integration skills.
As AI capabilities become more common, the real difference-maker for marketing teams will be how effectively they use these tools to create unique value for their companies.