Microsoft Monetize Gets A Major AI Upgrade via @sejournal, @brookeosmundson

Microsoft’s Monetize platform just received one of its biggest updates to date, and this one is all about working smarter, not harder.

Launched April 14, the new Monetize experience introduces AI-powered tools, a revamped homepage, and much-needed platform enhancements that give both publishers and advertisers more visibility and control.

This isn’t just a design refresh. With Microsoft Copilot now integrated, a new centralized dashboard, and a detailed history log, the platform is being positioned as a smarter command center for digital monetization.

Here’s what’s new and how it impacts your bottom line.

Copilot Is Now Built Into Monetize

Microsoft’s Copilot is now officially integrated into Monetize and available to all clients.

Copilot acts like a real-time AI assistant built directly into your monetization workflow. Instead of sifting through reports and data tables to figure out what’s wrong, Copilot surfaces insights automatically.

Think: “Why is my fill rate down?” or “Which line items are underperforming this week?”

Now, you’re able to ask and get answers without leaving the platform.

It’s designed to proactively alert users to revenue-impacting issues, like creatives that haven’t served, line items that didn’t deliver as expected, or unexpected dips in CPM.

For publishers who manage large volumes of inventory and multiple demand sources, this type of AI support can dramatically reduce troubleshooting time and help get campaigns back on track faster.

This allows monetization teams to shift their focus to revenue strategy, not just diagnostics.

A Smarter, Centralized Homepage

The new Monetize homepage is more than just a cosmetic update; it’s now the nerve center of the platform. It’s built around clarity and action.

Instead of bouncing between multiple tabs or reports, users now land on a central dashboard that shows performance highlights, revenue trends, system notifications, and even troubleshooting insights.

It’s designed to cut down the time spent navigating the platform and ramp up how quickly you can make revenue-driving decisions.

Microsoft Monetize homepage performance highlights example. Image credit: Microsoft Ads blog, April 2025

Some of the key features of the new homepage include:

  • Performance highlights: Get a high-level summary of revenue trends and your most important KPIs at the top of the screen.
  • Revenue and troubleshooting insights: What was originally in the Monetize Insights tool is now integrated into the homepage.
  • Brand unblock and authorized sellers insights: Brings visibility to commonly overlooked revenue blocks.

In short: you no longer need to click into five different tabs to piece together what’s going on. The homepage is designed to give a high-level pulse on your monetization performance, with quick pathways to dig deeper when needed.

It’s particularly helpful for teams managing multiple properties, as you can prioritize where to intervene based on the highest revenue impact.

A Simplified Navigation Experience

Another welcome change is the platform’s redesigned navigation. Microsoft has moved to a cleaner left-hand panel layout, consistent with its broader product ecosystem.

It may seem like a small thing, but this update removes a lot of the friction users previously experienced when trying to find specific tools or data. Now, when you hover over a section like “Line Items” or “Reporting,” all related sub-navigation options appear instantly, helping users get where they need to go faster.

For publishers who jump between Microsoft Ads, Monetize, and other tools like Microsoft’s Analytics offerings, this consistency in layout creates a smoother experience overall.

History Log Adds Transparency

One of the more functional (but underrated) updates is the new history change log.

This feature gives users the ability to view a running history of platform changes, whether it’s edits to ad units, campaign-level changes, or adjustments made by different team members.

You can now:

  • Filter changes by user, object type, or date range
  • View a summary of all edits made to a specific item over time
  • Compare and search up to five different objects at once
  • Spot which changes may have inadvertently affected revenue or delivery

This is such a time-saver for teams managing complex account structures or operating across multiple internal stakeholders.

Why Advertisers and Brands Should Care

While most of these updates are tailored to publishers, advertisers and brands also stand to benefit – especially those buying programmatically within Microsoft’s ecosystem.

Here are a few examples of how brands and advertisers can benefit:

  • Cleaner inventory = better delivery. Copilot helps publishers resolve issues like broken creatives or poor match rates faster. That means your ads are more likely to show where and when they should.
  • More consistent pricing. With publishers better able to manage and optimize their inventory, the fluctuations in floor pricing and bid dynamics can become more predictable.
  • Better campaign outcomes. When ad operations run more smoothly, campaign metrics tend to improve.
  • Reduced latency. The homepage’s new alert system flags latency issues immediately, helping prevent delayed or missed ad requests that impact advertiser performance.

In short: a more efficient supply side leads to fewer wasted impressions and stronger results for advertisers across Microsoft inventory.

Looking Ahead

With this revamp, Microsoft is signaling that Monetize is no longer just an ad server: it’s becoming an intelligence hub for publishers.

Between the Copilot integration, the centralized homepage, and detailed change logs, the platform gives monetization teams tools to act faster, stay informed, and optimize proactively.

By improving the infrastructure on the publisher side, Microsoft is also improving the health and quality of its programmatic marketplace. That’s a win for everyone involved, whether you’re selling impressions or buying them.

If you’re a publisher already using Monetize, now’s the time to explore these new features. If you’re an advertiser, these updates may mean more reliable inventory and smarter campaign performance across Microsoft’s supply chain.

Google AI Overview Study: 90% Of B2B Buyers Click On Citations via @sejournal, @MattGSouthern

Google’s AI Overviews have changed how search works. A TrustRadius report shows that 72% of B2B buyers see AI Overviews during research.

The study found something interesting: 90% of its respondents said they click on the cited sources to check information.

This finding differs from previous reports about declining click rates.

AI Overviews Are Affecting Search Patterns in Complex Ways

When AI summaries first appeared in search results, many publishers worried about “zero-click searches” reducing traffic. Many still see evidence of fewer clicks across different industries.

This research suggests B2B tech searches work differently. The study shows that while traffic patterns are changing, many users in their sample don’t fully trust AI content. They often check sources to verify what they read.

The report states:

“These overviews cite sources, and 90% of buyers surveyed said that they click through the sources cited in AI Overviews for fact-checking purposes. Buyers are clearly wanting to fact-check. They also want to consult with their peers, which we’ll get into later.”

If this behavior holds beyond this study’s sample, being cited in these overviews might offer visibility for specific queries.

From Traffic Goals to Citation Considerations

While organic clicks are still worth optimizing for, becoming a citation source for AI Overviews is also valuable.

The report notes:

“Vendors can fill the gap in these tools’ capabilities by providing buyers with content that answers their later-stage buying questions, including use case-specific content or detailed pricing information.”

This might mean creating clear, authoritative content that AI systems could cite. This applies especially to category-level searches where AI Overviews often appear.

The Ungated Content Advantage in AI Training

The research identified a common misconception about how AI works. Some vendors think AI models can access their gated content (behind forms) for training.

They can’t. AI models generally only use publicly available content.

The report suggests:

“Vendors must find the right balance between gated and ungated content to maintain discoverability in the age of AI.”

This creates a challenge for B2B marketers who put valuable content behind forms. Making more quality information public could influence AI systems. You can still keep some premium content gated for lead generation.

Potential Implications For SEO Professionals

For search marketers, consider these points:

  • Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness seems even more critical for AI evaluation.
  • The research notes that “AI tools aren’t just training on vendor sites… Many AI Overviews cite third-party technology sites as sources.”
  • As organic traffic patterns change, “AI Overviews are reshaping brand discoverability” and possibly “increasing the use of paid search.”

Evolving SEO Success Metrics

Traditional SEO metrics like organic traffic still matter. But this research suggests we should also monitor other factors, like how often AI Overviews cite you and the quality of that traffic.

Kevin Indig is quoted in the report stating:

“The era of volume traffic is over… What’s going away are clicks from the super early stage of the buyer journey. But people will click through to visit sites eventually.”

He adds:

“I think we’ll see a lot less traffic, but the traffic that still arrives will be of higher quality.”

This offers search marketers one view on handling the changing landscape. Like with all significant changes, the best approach likely involves:

  • Testing different strategies
  • Measuring what works for your specific audience
  • Adapting as you learn more

This research doesn’t suggest AI is making SEO obsolete. Instead, it invites us to consider how SEO might change as search behaviors evolve.


Featured Image: PeopleImages.com – Yuri A/Shutterstock

Google Confirms That Structured Data Won’t Make A Site Rank Better via @sejournal, @martinibuster

Google’s John Mueller answered a question on Bluesky about whether structured data helps with SEO, which may change how some people think about it.

Schema.org Structured Data

When SEOs talk about structured data, they’re talking about Schema.org structured data. There are many kinds of structured data, but for SEO purposes, only Schema.org structured data matters.

Does Google Use Structured Data For Ranking Purposes?

The person starting the discussion first posted that they were adding structured data to see if it helps with SEO.

Mueller’s first post was a comment about the value of preparation:

“Yes, and also no. I love seeing folks stumble into the world of online marketing, search engines, and all that, but reading up on how things technically work will save you time & help you focus.”

The original poster responded with a question:

“In your experience, how has it helped?”

That’s when Mueller gave his answer:

“(All of the following isn’t new, hence the meme.) Structured data won’t make your site rank better. It’s used for displaying the search features listed in developers.google.com/search/docs/… . Use it if your pages map to & are appropriate for any of those features.”

Google Only Uses Structured Data For Rich Results

It might seem confusing that structured data doesn’t help a site rank better, but it makes more sense to think of it as something that makes a site eligible for rich results. In the context of AI Search, Google uses regularly indexed data from websites, and because AI search results are a search feature, it may rely on the documented structured data for search-related features (read more about that here: Google Confirms: Structured Data Still Essential In AI Search Era).

The main points about structured data in the context of AI search, according to what was shared at a recent Search Central Live (hat tip to Aleyda Solis), are:

“Structured data is critical for modern search features

Check the documentation for supported types

Structured data is efficient,
…for computers easy to read,
… and very precise”

In a nutshell, for the context of AI Search:
Structured data supports search features, and AI Search is a search feature. AI Search also relies on the regular search index, apart from the Schema.org structured data.

How Google Uses Structured Data In Search Features

Google uses only a fraction of the available Schema.org structured data. There are currently over 800 Schema.org structured data types, but Google uses only around 30 of them, publishing documentation for each supported type that covers its required properties and other guidelines and requirements.

Google’s only use for structured data is to collect information in a machine-readable format that it can then use to display rich results, such as those for recipes and reviews, website information shown in carousel format, and even features that let users buy books directly from the search results.

Adding Schema.org structured data doesn’t guarantee that Google will display the site with a rich results feature in search. It only makes a site eligible to be displayed in rich results. Adding non-documented forms of Schema.org structured data won’t affect search optimization for a site because Google ignores all but the roughly 30 documented structured data types.
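
To make that distinction concrete, here is a minimal sketch of what documented Schema.org markup looks like in practice. Python is used only to assemble and print the JSON-LD; Recipe and its properties are real Schema.org types, but every value below is a placeholder, and the output would normally live inside a script tag on the page. Markup like this makes a page eligible for a recipe rich result; per Mueller’s comments above, it does not improve ranking.

```python
import json

# Placeholder values for a Recipe, one of the roughly 30 Schema.org types
# that Google documents for rich results.
recipe_markup = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Banana Bread",
    "author": {"@type": "Person", "name": "Example Author"},
    "prepTime": "PT15M",
    "cookTime": "PT1H",
    "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 egg"],
    "recipeInstructions": [
        {"@type": "HowToStep", "text": "Mash the bananas and mix in the other ingredients."},
        {"@type": "HowToStep", "text": "Pour into a loaf pan and bake for one hour."},
    ],
}

# JSON-LD is embedded in the page's HTML as a script tag. This only makes the
# page *eligible* for the recipe rich result feature; it is not a ranking signal.
print('<script type="application/ld+json">')
print(json.dumps(recipe_markup, indent=2))
print("</script>")
```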

Read the original discussion on Bluesky:

Adding structured data to see if it helps with SEO

Featured Image by Shutterstock/ViDI Studio

LinkedIn Launches New Creator Hub With Content Strategy Tips via @sejournal, @MattGSouthern

LinkedIn has launched a new “Create on LinkedIn” hub that helps professionals create better content, understand their stats, and use different post types.

The new hub is organized into three main sections: Create, Optimize, and Grow. It also includes a Creator Tools section with specific advice for each post format.

This resource offers helpful tips straight from LinkedIn for people using it to grow their business, build their brand, or share industry expertise.

Screenshot from: https://members.linkedin.com/create, April 2025.

Content Creation Best Practices

The “Create” section explains what makes a good LinkedIn post. It highlights four key parts:

  • A catchy opening that grabs attention
  • Clear, simple messaging
  • Your personal view or unique angle
  • Questions that start conversations

LinkedIn suggests posting 2-5 times weekly to build your audience, noting that “consistency helps you build community.”

The guide recommends these popular content topics:

  • Career advice and personal lessons
  • Industry knowledge and expertise
  • Behind-the-scenes workplace stories
  • Thoughts on industry trends
  • Stories about overcoming challenges

Analytics-Driven Content Optimization

The “Optimize” section shows how to use LinkedIn’s analytics to improve your strategy. It suggests these four steps:

  1. Regularly check how many people see and engage with your posts
  2. Adjust when you post based on when your audience is most active
  3. Set goals using your average performance numbers
  4. Make more content similar to your best-performing posts

Format-Specific Creator Tools

One of the most useful parts for marketers is the breakdown of LinkedIn’s different content types. Each comes with specific tips and technical requirements:

Video Content

LinkedIn says “videos build trust faster” and reveals that “85% of videos watched on LinkedIn are viewed on mute.” This makes subtitles a must.

The guide suggests keeping videos short (60-90 seconds) and posting them directly on LinkedIn instead of sharing links.

Text and Images

For regular posts, LinkedIn stresses being real:

“People want to learn from those they feel a connection to, so it’s best to be yourself.”

It suggests focusing on specific topics rather than broad ones.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Newsletters

You can create newsletters if you have over 150 followers and have posted original content in the last 90 days.

LinkedIn recommends posting on a regular schedule and using eye-catching cover videos.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Live Events

LinkedIn Live lets you stream to your audience using third-party broadcasting tools if you qualify. To help you get the best results, LinkedIn offers tips before, during, and after your event.

Screenshot from: members.linkedin.com/create-tools, April 2025.

Why This Matters

While organic reach has dropped on many social platforms, LinkedIn still offers good visibility opportunities.

The content strategy advice matches what many marketers already do on other platforms. However, it provides specific insights into how LinkedIn’s algorithm works and what its users prefer.

Next Steps for Marketers

LinkedIn’s focus on analytics and testing different content types shows it wants users to be more strategic.

Check out this new resource to update your LinkedIn strategies. The format details are especially helpful for optimizing your content.

With over 1 billion professionals on LinkedIn, the platform is essential for B2B marketing, promoting professional services, and building thought leadership.

Smart marketers will include these approaches in their social media plans.


Featured Image: Fanta Media/Shutterstock

OpenAI CEO Sam Altman Confirms Planning Open Source AI Model via @sejournal, @martinibuster

OpenAI CEO Sam Altman recently said the company plans to release an open source model more capable than any currently available. While he acknowledged the likelihood of it being used in ways some may not approve of, he emphasized that highly capable open systems have an important role to play. He described the shift as a response to greater collective understanding of AI risks, implying that the timing is right for OpenAI to re-engage with open source models.

The statement was in the context of a Live at TED2025 interview where the interviewer, Chris Anderson, asked Altman whether the Chinese open source model DeepSeek had “shaken” him up.

Screenshot Of Sam Altman At Live at TED2025

Altman responded by saying that OpenAI is preparing to release a powerful open-source model that approaches the capabilities of the most advanced AI models currently available.

He said:

“I think open source has an important place. We actually just last night hosted our first like community session to kind of decide the parameters of our open source model and how we want to shape it.

We’re going to do a very powerful open source model. I think this is important. We’re going to do something near the frontier, I think better than any current open source model out there.
This will not be all… like, there will be people who use this in ways that some people in this room, maybe you or I, don’t like. But there is going to be an important place for open source models as part of the constellation here…”

Altman next admitted that they were slow to act on open source but now plan to contribute meaningfully to the movement.

He continued his answer:

“You know, I think we were late to act on that, but we’re going to do it really well.”

About thirty minutes later in the interview, Altman circled back to the topic of open source, lightheartedly remarking that maybe in a year the interviewer might yell at him for open sourcing an AI model. But, he said, there are trade-offs in everything, and he feels OpenAI has done a good job of bringing AI technology into the world in a responsible way.

He explained:

“I do think it’s fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier, as we weren’t sure about the impact these systems were going to have and how to make them safe, that we acted with precaution.

I think a lot of your questions earlier would suggest at least some sympathy to the fact that we’ve operated that way. But now I think we have a better understanding as a world and it is time for us to put very capable open systems out into the world.

If you invite me back next year, you will probably yell at me for somebody who has misused these open source systems and say, why did you do that? That was bad. You know, you should have not gone back to your open roots. But you know, we’re not going to get… there’s trade-offs in everything we do. And and we are one player in this one voice in this AI revolution trying to do the best we can and kind of steward this technology into the world in a responsible way.

I think we have over the last almost decade …we have mostly done the thing we’ve set out to do. We have a long way to go in front of us, our tactics will shift more in the future, but adherence to this sort of mission and what we’re trying to do I think, very strong.”

OpenAI’s Open Source Model

Sam Altman acknowledged OpenAI was “late to act” on open source but now aims to release a model “better than any current open source model.” His decision to release an open source AI model is significant because it will introduce additional competition and improvement to the open source side of AI technology.

OpenAI was established in 2015 as a non-profit organization but transitioned in 2019 to a closed source model over concerns about potential misuse. Altman used the word “steward” to describe OpenAI’s role in releasing AI technologies into the world, and the transition to a closed source system reflects that concern.

2025 is a vastly different world from 2019 because there are now many highly capable open source models available, DeepSeek among them. Was OpenAI’s hand forced by the popularity of DeepSeek? He didn’t say, framing the decision instead as an evolution from a position of responsible development.

Sam Altman’s remarks at the TED interview suggest that OpenAI’s new open source model will be powerful but not representative of their best model. Nevertheless, he affirmed that open source models have a place in the “constellation” of AI, with a legitimate role as a strategically important and technically capable part of the broader technological ecosystem.

Featured image screenshot by author

AI Search Study: Product Content Makes Up 70% Of Citations via @sejournal, @MattGSouthern

A new study tracking 768,000 citations across AI search engines shows that product-related content tops AI citations. It makes up 46% to 70% of all sources referenced.

This finding offers guidance on how marketers should approach content creation amid the growth of AI search.

The research, conducted over 12 weeks by XFunnel, looked at which types of content ChatGPT, Google (AI Overviews), and Perplexity most often cite when answering user questions.

Here’s what you need to know about the findings.

Product Content Visible Across Queries

The study shows AI platforms prefer product-focused content. Content with product specs, comparisons, “best of” lists, and vendor details consistently got the highest citation rates.

The study notes:

“This preference appears consistent with how AI engines handle factual or technical questions, using official pages that offer reliable specifications, FAQs, or how-to guides.”

Other content types struggled to get cited as often:

  • News and research articles each got only 5-16% of citations.
  • Affiliate content typically stayed below 10%.
  • User reviews (including forums and Q&A sites) ranged between 3-10%.
  • Blog content received just 3-6% of citations.
  • PR materials barely appeared, typically less than 2% of citations.

Citation Patterns Vary By Funnel Stage

AI platforms cite different content types depending on where customers are in their buying journey:

  • Top of funnel (unbranded): Product content led at 56%, with news and research each at 13-15%. This challenges the idea that early-stage content should focus mainly on education rather than products.
  • Middle of funnel (branded): Product citations dropped slightly to 46%. User reviews and affiliate content each rose to about 14%. This shows how AI engines include more outside opinions for comparison searches.
  • Bottom of funnel: Product content peaked at over 70% of citations for decision-stage queries. All other content types fell below 10%.

B2B vs. B2C Citation Differences

The study found big differences between business and consumer queries:

For B2B queries, product pages (especially from company websites) made up nearly 56% of citations. Affiliate content (13%) and user reviews (11%) followed.

For B2C queries, there was more variety. Product content dropped to about 35% of citations. Affiliate content (18%), user reviews (15%), and news (15%) all saw higher numbers.

What This Means For SEO

For SEO professionals and content creators, here’s what to take away from this study:

  • Adding detailed product information improves citation chances even for awareness-stage content.
  • Blogs, PR content, and educational materials are cited less often. You may need to change how you create these.
  • Check your content mix to make sure you have enough product-focused material at all funnel stages.
  • B2B marketers should prioritize solid product information on their own websites. B2C marketers need strategies that also encourage quality third-party reviews.

The study concludes:

“These observations suggest that large language models prioritize trustworthy, in-depth pages, especially for technical or final-stage information… factually robust, authoritative content remains at the heart of AI-generated citations.”

As AI transforms online searches, marketers who understand citation patterns can gain a competitive edge in visibility.


Featured Image: wenich_mit/Shutterstock

Marketing To Machines Is The Future – Research Shows Why via @sejournal, @martinibuster

A new research paper explores how AI agents interact with online advertising and what shapes their decision-making. The researchers tested three leading LLMs to understand which kinds of ads influence AI agents most and what this means for digital marketing. As more people rely on AI agents to research purchases, advertisers may need to rethink strategy for a machine-readable, AI-centric world and embrace the emerging paradigm of “marketing to machines.”

Although the researchers were testing whether AI agents interact with advertising and which kinds influence them most, their findings also show that well-structured on-page information, like pricing data, is highly influential, which opens up areas to think about in terms of AI-friendly design.

An AI agent (also called agentic AI) is an autonomous AI assistant that performs tasks like researching content on the web, comparing hotel prices based on star ratings or proximity to landmarks, and then presenting that information to a human, who then uses it to make decisions.

AI Agents And Advertising

The research is titled Are AI Agents Interacting With Online Ads? and was conducted at the University of Applied Sciences Upper Austria. The research paper cites previous research on the interaction between AI agents and online advertising that explores the emerging relationship between agentic AI and the machines driving display advertising.

Previous research on AI agents and advertising focused on:

  • Pop-up Vulnerabilities
    Vision-language AI agents that aren’t programmed to avoid advertising can be tricked into clicking on pop-up ads at a rate of 86%.
  • Advertising Model Disruption
    This research concluded that AI agents bypassed sponsored and banner ads but forecast disruption in advertising as merchants figure out how to get AI agents to click on their ads to win more sales.
  • Machine-Readable Marketing
    This paper makes the argument that marketing has to evolve toward “machine-to-machine” interactions and “API-driven marketing.”

The research paper offers the following observations about AI agents and advertising:

“These studies underscore both the potential and pitfalls of AI agents in online advertising contexts. On one hand, agents offer the prospect of more rational, data-driven decisions. On the other hand, existing research reveals numerous vulnerabilities and challenges, from deceptive pop-up exploitation to the threat of rendering current advertising revenue models obsolete.

This paper contributes to the literature by examining these challenges, specifically within hotel booking portals, offering further insight into how advertisers and platform owners can adapt to an AI-centric digital environment.”

The researchers investigate how AI agents interact with online ads, focusing specifically on hotel and travel booking platforms. They used a custom-built travel booking platform to perform the testing, examining whether AI agents incorporate ads into their decision-making and exploring which ad formats (like banners or native ads) influence their choices.

How The Researchers Conducted The Tests

The researchers conducted the experiments using two AI agent systems: OpenAI’s Operator and the open-source Browser Use framework. Operator, a closed system built by OpenAI, relies on screenshots to perceive web pages and is likely powered by GPT-4o, though the specific model was not disclosed.

Browser Use allowed the researchers to control for the model used for the testing by connecting three different LLMs via API:

  • GPT-4o
  • Claude Sonnet 3.7
  • Gemini 2.0 Flash

The Browser Use setup enabled consistent testing across models by letting each one work from the page’s rendered HTML structure (DOM tree) and by recording their decision-making behavior.

These AI agents were tasked with completing hotel booking requests on a simulated travel site. Each prompt was designed to reflect realistic user intent and tested the agent’s ability to evaluate listings, interact with ads, and complete a booking.

By using APIs to plug in the three large language models, the researchers were able to isolate differences in how each model responded to page data and advertising cues and to observe how AI agents behave in web-based decision-making tasks. A simplified sketch of what such a setup might look like is shown below.
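
As an illustration only, here is a minimal sketch of plugging an LLM into a browser-automation agent, assuming the open-source Browser Use Python package (browser-use) together with a LangChain chat-model wrapper. The class names, parameters, and model identifier reflect that package’s public interface as I understand it, not code from the paper, and the task string simply reuses one of the study’s prompts.

```python
import asyncio

from browser_use import Agent            # open-source Browser Use framework (assumed interface)
from langchain_openai import ChatOpenAI  # LangChain wrapper used to plug in a specific LLM


async def main():
    # Swap in a different chat model wrapper (e.g., Anthropic or Google) to
    # compare how each LLM drives the same browsing task, as the study did.
    agent = Agent(
        task="Book me a cheap romantic holiday with my boyfriend.",
        llm=ChatOpenAI(model="gpt-4o"),  # requires OPENAI_API_KEY in the environment
    )
    result = await agent.run()  # the agent browses, clicks, and returns its outcome
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```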

These are the ten prompts used for testing purposes:

  1. Book a romantic holiday with my girlfriend.
  2. Book me a cheap romantic holiday with my boyfriend.
  3. Book me the cheapest romantic holiday.
  4. Book me a nice holiday with my husband.
  5. Book a romantic luxury holiday for me.
  6. Please book a romantic Valentine’s Day holiday for my wife and me.
  7. Find me a nice hotel for a nice Valentine’s Day.
  8. Find me a nice romantic holiday in a wellness hotel.
  9. Look for a romantic hotel for a 5-star wellness holiday.
  10. Book me a hotel for a holiday for two in Paris.

What the Researchers Discovered

Agent Engagement With Ads

The study found that AI agents don’t ignore online advertisements, but their engagement with ads and the extent to which those ads influence decision-making varies depending on the large language model.

OpenAI’s GPT-4o and Operator were the most decisive, consistently selecting a single hotel and completing the booking process in nearly all test cases.

Anthropic’s Claude Sonnet 3.7 showed moderate consistency, making specific booking selections in most trials but occasionally returning lists of options without initiating a reservation.

Google’s Gemini 2.0 Flash was the least decisive, frequently presenting multiple hotel options and completing significantly fewer bookings than the other models.

Banner ads were the most frequently clicked ad format across all agents. However, the presence of relevant keywords had a greater impact on outcomes than visuals alone.

Ads with keywords embedded in visible text influenced model behavior more effectively than those with image-based text, which some agents overlooked. GPT-4o and Claude were more responsive to keyword-based ad content, with Claude integrating more promotional language into its output.

Use Of Filtering And Sorting Features

The models also differed in how they used interactive web page filtering and sorting tools.

  • Gemini applied filters extensively, often combining multiple filter types across trials.
  • GPT-4o used filters rarely, interacting with them only in a few cases.
  • Claude used filters more frequently than GPT-4o, but not as systematically as Gemini.

Consistency Of AI Agents

The researchers also tested for consistency of how often agents, when given the same prompt multiple times, picked the same hotel or offered the same selection behavior.

In terms of booking consistency, both GPT-4o (with Browser Use) and Operator (OpenAI’s proprietary agent) consistently selected the same hotel when given the same prompt.

Claude showed moderately high consistency in how often it selected the same hotel for the same prompt, though it chose from a slightly wider pool of hotels compared to GPT-4o or Operator.

Gemini was the least consistent, producing a wider range of hotel choices and less predictable results across repeated queries.

Specificity Of AI Agents

They also tested for specificity, which is how often the agent chose a specific hotel and committed to it, rather than giving multiple options or vague suggestions. Specificity reflects how decisive the agent is in completing a booking task. A higher specificity score means the agent more often committed to a single choice, while a lower score means it tended to return multiple options or respond less definitively.

  • Gemini had the lowest specificity score at 60%, frequently offering several hotels or vague selections rather than committing to one.
  • GPT-4o had the highest specificity score at 95%, almost always making a single, clear hotel recommendation.
  • Claude scored 74%, usually selecting a single hotel, but with more variation than GPT-4o.
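
To make the consistency and specificity measures concrete, here is a minimal sketch of how such scores might be computed from repeated runs of the same prompt. The paper does not publish its exact formulas, so the sample data and the scoring rules below are illustrative assumptions only.

```python
from collections import Counter

# Hypothetical outcomes of one prompt run five times: the hotel an agent
# committed to, or None when it returned a list of options instead of booking.
runs = ["Hotel A", "Hotel A", None, "Hotel A", "Hotel B"]

# Specificity: share of runs where the agent committed to a single hotel.
committed = [hotel for hotel in runs if hotel is not None]
specificity = len(committed) / len(runs)

# Consistency: share of committed runs matching the most frequently chosen hotel.
top_hotel, top_count = Counter(committed).most_common(1)[0]
consistency = top_count / len(committed)

print(f"specificity = {specificity:.0%}, consistency = {consistency:.0%}")
# -> specificity = 80%, consistency = 75%
```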

The findings suggest that advertising strategies may need to shift toward structured, keyword-rich formats that align with how AI agents process and evaluate information, rather than relying on traditional visual design or emotional appeal.

What It All Means

This study investigated how AI agents for three language models (GPT-4o, Claude Sonnet 3.7, and Gemini 2.0 Flash) interact with online advertisements during web-based hotel booking tasks. Each model received the same prompts and completed the same types of booking tasks.

Banner ads received more clicks than sponsored or native ad formats, but the most important factor in ad effectiveness was whether the ad contained relevant keywords in visible text. Ads with text-based content outperformed those with embedded text in images. GPT-4o and Claude were the most responsive to these keyword cues, and Claude was also the most likely among the tested models to quote ad language in its responses.

According to the research paper:

“Another significant finding was the varying degree to which each model incorporated advertisement language. Anthropic’s Claude Sonnet 3.7 when used in ‘Browser Use’ demonstrated the highest advertisement keyword integration, reproducing on average 35.79% of the tracked promotional language elements from the Boutique Hotel L’Amour advertisement in responses where this hotel was recommended.”

In terms of decision-making, GPT-4o was the most decisive, usually selecting a single hotel and completing the booking. Claude was generally clear in its selections but sometimes presented multiple options. Gemini tended to frequently offer several hotel options and completed fewer bookings overall.

The agents showed different behavior in how they used a booking site’s interactive filters. Gemini applied filters heavily. GPT-4o used filters occasionally. Claude’s behavior was between the two, using filters more than GPT-4o but not as consistently as Gemini.

When it came to consistency—how often the same hotel was selected when the same prompt was repeated—GPT-4o and Operator showed the most stable behavior. Claude showed moderate consistency, drawing from a slightly broader pool of hotels, while Gemini produced the most varied results.

The researchers also measured specificity, or how often agents made a single, clear hotel recommendation. GPT-4o was the most specific, with a 95% rate of choosing one option. Claude scored 74%, and Gemini was again the least decisive, with a specificity score of 60%.

What does this all mean? In my opinion, these findings suggest that digital advertising will need to adapt to AI agents. That means keyword-rich formats are more effective than visual or emotional appeals, especially as machines increasingly are the ones interacting with ad content. Lastly, the research paper references structured data, but not in the context of Schema.org structured data. Structured data in the context of the research paper means on-page data like prices and locations, and it’s this kind of data that AI agents engage with best.

The most important takeaway from the research paper is:

“Our findings suggest that for optimizing online advertisements targeted at AI agents, textual content should be closely aligned with anticipated user queries and tasks. At the same time, visual elements play a secondary role in effectiveness.”

For advertisers, that may mean designing for clarity and machine readability becomes as important as designing for human engagement.

Read the research paper:

Are AI Agents interacting with Online Ads?

Featured Image by Shutterstock/Creativa Images

Google Updated Documentation For EEA Structured Data Carousels (Beta) via @sejournal, @martinibuster

Google updated the structured data documentation for its European Economic Area (EEA) carousels, which are currently in beta. A notable change is that beta testing of the shopping queries carousels has expanded beyond Germany, France, Czechia, and the UK, so availability is now open to all EEA countries. A byproduct of the changes is that the documentation is more easily understood.

Example Of Tidying Up Content Structure

Apart from reflecting the changes to the carousels beta program, an unmentioned part of the update was to make the information flow in a more orderly manner so that it’s more easily comprehensible.

This section was edited to remove the exception about flight queries and to remove the associated flight queries interest form:

“…you can start by filling out the applicable form (for flights queries, use the interest form for flights queries).”

That section now reads like this:

“you can start by filling out the applicable form:”

The reason for the change was to make the page less confusing by decoupling the flight query information from the other, unrelated parts and rearranging the different topics into their own mini-sections, with the flight queries getting a mini-section of their own. The result is a more orderly progression of information that makes the entire page easier to understand.

Here are the new sections that Google added, organized into the aforementioned mini-sections:

“For queries related to ground transportation, hotels, vacation rentals, local business, and things to do (for example, events, tours, and activities), use this Google Search aggregator features interest form

For flights queries, use this flight queries interest form

For shopping queries, get started with the Comparison Shopping Services (CSS) program”

Feature Change

The following section was removed because the availability of the features changed:

“For shopping queries, it’s being tested first in Germany, France, Czechia, and the UK.”

That section was replaced with the following section which reflects the current expanded availability of the shopping carousel beta feature:

“This feature is currently only available in European Economic Area (EEA) countries, on both desktop and mobile devices. It’s available for travel, local, and shopping queries.”

Google’s changelog explains it like this:

“Updating the interest forms for structured data carousels (beta)
What: Updated the structured data carousels (beta) documentation to include the current interest forms and supported query types.

Why: To reflect the current state of the feature and process for expressing interest.”

Read Google’s feature availability documentation here:

Structured data carousels (beta)

Featured Image by Shutterstock/Hieronymus Ukkel

Top Gen AI Use Cases Revealed: Marketing Tasks Rank Low via @sejournal, @MattGSouthern

New research shows marketers aren’t using generative AI as much as they could be. Marketing applications rank surprisingly low on the list of popular AI uses.

“The Top-100 Gen AI Use Case” report by Marc Zao-Sanders reveals that while people increasingly use AI for personal support, marketing tasks like creating ads and social media content fall near the bottom of the list.

Personal Uses Dominate While Marketing Applications Trail

The research analyzed how people use Gen AI based on online discussions.

The findings show a shift from technical to emotional applications over the past year.

The top three uses are now:

  1. Therapy and companionship
  2. Life organization
  3. Finding purpose
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

Zao-Sanders observes:

“The findings underscore a marked transition from primarily technical and productivity-driven use cases toward applications centered on personal well-being, life organization, and existential exploration.”

Meanwhile, marketing uses rank much lower:

  • Ad/marketing copy (#64)
  • Writing blog posts (#97)
  • Social media copy (#98)
  • Social media systems (#99)

This gap shows marketers haven’t fully tapped into Gen AI’s potential.

Why the Adoption Gap Exists

Why aren’t marketers using Gen AI more? Several reasons explain this.

Many marketers may have misjudged how people use AI, Zao-Sanders suggests in the report:

“Most experts expected AI would prove itself first in technical areas. While it’s doing plenty there, this research suggests AI may help us as much or more with our human whims and desires.”

The research also shows users have gotten better at writing prompts. They also better understand AI’s limits.

Learning from Top-Ranked Applications

Marketers can learn from what makes the top AI uses so popular:

  1. Emotional connection: People value AI that feels personal and supportive. Marketing tools could be more conversational and empathetic.
  2. Life organization: People use AI to structure tasks. Marketing tools could focus more on organizing workflows rather than just creating content.
  3. Enhanced learning: Users value AI as a learning tool. Marketing applications could highlight how they help build skills.
Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

One marketing-related use that ranked higher was “Generate ideas” at #6. This suggests brainstorming might be a better entry point than finished content.

Here are some quotes pulled from the report on how marketers are using gen AI tools:

“I use it to determine a certain industries pain points, then educate it on what I sell, then have it create lists, PowerPoint templates, and cold emails/call scripts that specifically call out how my product solves them.”

“Case studies. I just input a few bullet points of what we did, a couple of links, and metrics we want to focus on. Done. [Reports] used to take days to make. Now it’s 95% complete in 2 minutes.”

“I record a Zoom call where I discuss each of the points. We send the video of the Zoom to have it transcribed into Word. Then I paste it into ChatGPT with a prompt like: ‘convert this conversation into an 800 word blog for marketing to (x target market)’”

Practical Steps for Marketers

Based on these findings, here’s what marketers can do:

  1. Focus on the personal benefits of AI tools, not just productivity.
  2. Study good prompts. The report includes examples of effective prompts you can adapt.
  3. Connect personal and work uses. Tools that help in both contexts are more popular.
  4. Address data privacy concerns. Users worry about their data, so be transparent about how you protect their information.

Looking Ahead

Report author Marc Zao-Sanders concludes:

“Last year, I made the correct but rather insipidly safe prediction that AI will continue to develop, as will our applications of it. I make exactly the same prediction now.”

Now is the perfect time for marketers to learn about and incorporate these tools into their daily work.

While marketing may be one of the less commonly used areas for generative AI tools, this means that you’re not falling behind, as others might claim.

By studying what makes top AI applications successful, you can develop better AI strategies for your marketing needs.

The full report (PDF link) provides detailed insights into real-world AI use, offering guidance for improving your approach.

See the screenshot below for a complete list of the top 100 gen AI use cases.

Screenshot from: hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025, April 2025.

Featured Image: Krot_Studio/Shutterstock

ChatGPT Expands Memory Capabilities, Remembers Past Chats via @sejournal, @MattGSouthern

OpenAI has added better memory features to ChatGPT. Now, the AI can remember more from your past chats. This means you’ll get more personalized responses without needing to repeat yourself.

Sam Altman, CEO of OpenAI, announced the update on X.

How ChatGPT’s Improved Memory Works

The new memory system works in two main ways:

  1. Saved Memories: These are specific details ChatGPT saves for later use. Examples include your preferences or instructions you want it to remember.
  2. Chat History Reference: This lets ChatGPT look back at your past conversations to give better answers, even if you didn’t specifically ask it to remember something.

OpenAI explains:

“ChatGPT can now remember helpful information between conversations, making its responses more relevant and personalized. Whether you’re typing, speaking, or generating images in ChatGPT, it can recall details and preferences you’ve shared and use them to tailor its responses.”

You’ll know immediately that you’re using the version with improved memory if you log in and see this message:

Screenshot from: ChatGPT, April 2025.

It links to an FAQ section with more information, or you can trigger a demonstration by tapping “Show me.”

You can prompt it with “Describe me based on all our chats” to see what it knows.

Here’s what it gave me. Based on my usage, it was accurate. It even remembered that I sometimes ask about brewing coffee, a conversation I haven’t had in months.

Screenshot from: ChatGPT, April 2025.

User Controls and Privacy Considerations

You have full control over what ChatGPT remembers:

  • You can turn off memory features in your settings
  • You can review and delete specific memories
  • You can start “Temporary Chats” that don’t use or create memories
  • ChatGPT won’t automatically remember sensitive information like health details unless you ask it to

OpenAI states:

“You’re in control of what ChatGPT remembers. You can delete individual memories, clear specific or all saved memories, or turn memory off entirely in your settings.”

You can tell ChatGPT to remember things any time by saying something like “Remember that I’m vegetarian when you recommend recipes.”

Availability & Limitations

Right now, ChatGPT Plus and Pro subscribers are getting these new memory features. Free users can only use “Saved Memories,” not the “Chat History” feature.

These features aren’t available in European countries like the UK, Switzerland, and others. This is probably because of data privacy laws in those regions.

If you have ChatGPT Enterprise, workspace owners can control memory features for the entire workspace. Since February 2025, Enterprise and Education customers have had 20% more memory capacity.

Implications for Marketers and SEO Professionals

For marketers and SEO pros, these memory improvements make ChatGPT much more useful:

  • Better Content Creation: ChatGPT remembers your brand voice and style across different sessions
  • Easier SEO Work: It recalls past discussions about site structure, keywords, and algorithm updates
  • Smoother Projects: You won’t need to repeat project details every time you start a new chat

OpenAI notes:

“The more you use ChatGPT, the more useful it becomes. You’ll start to notice improvements over time as it builds a better understanding of what works best for you.”

What’s Next for AI Memory

OpenAI says memory features aren’t available for custom GPTs yet, but they’ll add them later. When that happens, GPT creators can enable memory for their custom GPTs.

Each GPT will have its own separate memory. Memories won’t be shared between different GPTs or with the main ChatGPT.

This upgrade marks a big step toward more natural AI conversations that build on shared history. It should help marketers use AI tools more effectively in their daily work.