Cohorts, Clusters, And The Coming AI Ad System via @sejournal, @DuaneForrester

The funnel didn’t disappear. It went invisible.

Marketers spent decades perfecting the funnel: awareness, consideration, conversion. We built personas. We mapped content to stages. We watched users click, scroll, bounce, convert. Everything was visible.

But GenAI doesn’t show its hand.

The funnel still exists; it’s just hidden inside the model. Every time someone prompts ChatGPT or Perplexity, they reveal their place in a decision journey.

Not by filling out a form or triggering a pixel, but through the prompt fingerprint embedded in their question.

That’s the new funnel. You’re still being evaluated. Still being chosen. But the targeting is now invisible, inferred, and dynamic.

And most marketers have no idea it’s happening. In fairness, I think only the cohort portion of this is actively happening today.

The ad system I explore here is purely theoretical (though Google appears to be working in a similar direction, and a rollout could realistically come soon – links below).

TL;DR: This article doesn’t just explain how I think GenAI is reshaping audience targeting; it introduces three new concepts I think you’ll need to understand the next evolution of paid media: Prompt Fingerprints, Embedding Fingerprints, and Intent Vector Bidding. 

The funnel isn’t gone. It’s embedded. And it’s about to start building and placing ads on its own.

About the terminology: 

Prompt Fingerprint and Intent Vector Bidding, I believe, are net-new terms for our industry, coined here to describe how future LLM-based systems could group users and auction ad space.

Conceptually, Intent Vector Bidding aligns with work already being done behind the scenes at Google (and I’m sure elsewhere), though I don’t believe they use this phrase. 

Embedding Fingerprint draws from AI research but is reframed here as a brand-side construct to power targeting and retrieval inside GenAI systems.

This article was written over the last three weeks of July, and I was happy to find an article on August 4 talking about the concepts I’m exploring for a future paid ads bidding system.

Coincidental, but validating. The link to that article is below.

Image credit: Duane Forrester

What Cohort Targeting Used To Be

In the pre-AI era, cohort targeting was built around observable behaviors.

  • Retargeting audiences built from cookies and pixels.
  • Segments shaped by demographics, location, and device.
  • Lookalikes trained on customer traits and CRM lists.

We mapped campaigns to persona types and funnel stages. A 42-year-old dad in Ohio was mid-funnel if he clicked a product video. An 18-year-old in Mumbai was top-funnel if he downloaded an ebook.

These were guesses – often good ones, but still blunt instruments. And they were built on identifiers that don’t necessarily survive the GenAI shift.

Prompts Are The New Personas

Large language models don’t need to know who you are. They don’t really need to track you. They don’t care where you came from. They only care what you ask, and how you ask it.

Every prompt is vectorized. That means it’s turned into a mathematical representation of meaning, called an embedding. These vectors capture everything the model can glean from your input:

  • Topical domain.
  • Familiarity and depth.
  • Sentiment and urgency.
  • Stage of intent.

LLMs use this signal to group prompts with similar meaning, even if they come from completely different types of people.

And that’s how new cohorts can form. Not from identity. From intent.
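This grouping can be sketched in miniature. The `embed()` below is a toy bag-of-words stand-in for a real embedding model, and the greedy clustering is a deliberately simple approximation of how semantically similar prompts could be pooled by intent; none of it reflects any vendor’s actual implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding". Real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_prompts(prompts, threshold=0.35):
    # Greedy single-pass clustering: each prompt joins the first
    # cluster whose seed it resembles, otherwise it starts a new one.
    clusters = []  # list of (seed_vector, member_prompts)
    for p in prompts:
        v = embed(p)
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(p)
                break
        else:
            clusters.append((v, [p]))
    return [members for _, members in clusters]

prompts = [
    "quietest portable generator for camping",
    "best quiet generator for camping trips",
    "how do I file my taxes online",
]
print(cluster_prompts(prompts))
```

The two generator prompts land in one cluster and the tax prompt in another: grouping by intent, not by who asked.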

Right now, most marketers are still optimizing for keywords and missing the bigger picture. Keywords describe what someone is searching for. Prompt fingerprints describe why and how.

Someone asking “quietest portable generator for camping” isn’t just looking for a product, they’re signaling lifestyle priorities (minimal noise, portability, outdoor use) and stage (comparison shopping).

That single prompt tells the model far more than any demographic profile ever could.

And crucially, that person is joining a cohort of other prompters asking similar questions in similar ways. If your content isn’t semantically aligned with that group, it’s not just less visible. It’s excluded.

New Concept: Prompt Fingerprint

A unique embedding signature derived from a user’s language, structure, and inferred intent within a prompt. This fingerprint is your new persona.

It’s what the model actually sees and what it uses to determine which answers (and potentially which ads) you receive. (More on those ads later!)

When Context Creates The Cohort

Let’s say the Toronto Maple Leafs just won the Stanley Cup (hey, a guy can dream, right?!). Across the city, thousands of people start prompting:

  • “Where to celebrate in Toronto tonight?”
  • “Best bars near Scotiabank Arena open late?”
  • “Leafs’ victory parade time and location?”

None of these users knows each other. Some are teenagers, others are retirees. Some are local, others are visiting. Some are hardcore fans, some just like to party. But to the model, they’re now a momentary cohort: a group connected by real-time context, not long-term traits.

This is a fundamental break from everything digital marketers are used to. We’ve always grouped people by identity: age, interests, behavior, psychographics. But LLMs group people by situational similarity.

That creates new marketing opportunities and new blind spots.

Imagine you sell travel gear. A major snowstorm is forecast to slam into the Northeast U.S.

Within hours, prompts spike around early departures, snowproof duffel bags, and waterproof boots. A travel-stress cohort forms: people trying to escape before the storm hits. They’re not a segment you planned for. They’re a moment the system saw before you did.

If your content or product is aligned with that moment, you need a system that detects, matches, and delivers immediately. That’s what makes system-embedded ad tech essential.

You’re not buying audiences anymore. You’re buying alignment with the now, with a moment in time.

And this part is real today.

While the inner workings of commercial GenAI systems remain opaque, cluster-like behavior is often visible within a single platform session.

When you ask a string of similar questions in one ChatGPT or Gemini session, you may encounter repeated phrasing, brand mentions, or answer structure. That consistency suggests the model is grouping prompts by embedded meaning, not demographics or declared traits.

I cannot find studies or examples of this behavior being recorded, so please drop a comment if you have a source for such data. I keep hearing about it, but cannot find dedicated data.

Looking Forward

Entire classes of micro-cohorts may form and disappear within hours. To reach them, you’ll need AI-powered, system-embedded ad systems that can:

  • Detect the cohort’s emergence through real-time prompt patterns.
  • Generate ads aligned with the cohort’s immediate need.
  • Place and optimize those ads before the window closes.

Humans can’t move at that speed. AI can. And it has to because the opportunity vanishes with the context.

Sidebar: What I Think Is Real Vs. What I Think Is Coming

  • Prompt Fingerprints – Live Today: Every GenAI system turns your prompt into a vector embedding. It’s already the foundation of how models interpret meaning.
  • Cohort Clustering by Prompt Similarity – Active Now: You can observe this in tools like ChatGPT and Gemini. Similar prompts return similar answers, meaning the system is clustering users based on shared intent.
  • Embedding Fingerprints – Possible Today: If brands structure their content for vectorization, they can create an embedding signature that aligns with relevant prompts. Most don’t yet.
  • Intent Vector Bidding – Emerging Theory: Almost in the market today. Given current ad platform trends, this kind of bidding system is likely being explored widely across platforms.

Why Old-School Personas Will Work Less Effectively

Age. Income. ZIP code. None of that maps cleanly in vector space.

In the GenAI era, two people with radically different demographics might prompt in nearly identical ways and be served the same answers as a result.

It’s not about who you are. It’s about how your question fits into the model’s understanding of the world.

The classic marketing persona is much less reliable as a targeting unit. I’m suggesting the new unit is the Prompt Fingerprint, and marketers who ignore that shift may find themselves omitted from the conversation entirely.

The Funnel Is Still There — You Just Can’t See It

Here’s the thing: LLMs do understand funnel stages.

They just don’t label them the way marketers do. They infer them from phrasing, specificity, and structure.

  • TOFU: “Best folding kayaks for beginners”
  • MOFU: “Oru Inlet vs. Tucktec comparison”
  • BOFU: “Oru kayak discount codes July 2025”

These are prompt-level indicators of funnel stage. And if your content doesn’t align with how those prompts are formed, it likely won’t get retrieved.

Want to stay visible? Start mapping your content to the language patterns of funnel-stage prompts, not just to topics or keywords.
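As a toy illustration of stage inference, a prompt can be labeled with the funnel stage whose exemplar it most resembles. The exemplars come from the list above; the bag-of-words `embed()` is a stand-in for a real embedding model, which would use learned dense vectors and far more exemplars per stage.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One exemplar prompt per funnel stage, taken from the examples above.
STAGE_EXEMPLARS = {
    "TOFU": "best folding kayaks for beginners",
    "MOFU": "oru inlet vs tucktec comparison",
    "BOFU": "oru kayak discount codes july 2025",
}

def infer_stage(prompt):
    # Label a prompt with the stage whose exemplar it most resembles.
    v = embed(prompt)
    return max(STAGE_EXEMPLARS,
               key=lambda s: cosine(v, embed(STAGE_EXEMPLARS[s])))

print(infer_stage("top beginner kayaks for lakes"))   # TOFU-style phrasing
print(infer_stage("oru kayak discount codes"))        # BOFU-style phrasing
```

Mapping your own content against a library of stage exemplars like this is one concrete way to check funnel-stage alignment before the model does it for you.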

Embedding Fingerprints: The New Targeting Payload

It’s not just prompts that get vectorized. Your content does, too.

Every product page, blog post, or ad you write forms its own Embedding Fingerprint, a vector signature that reflects what your message actually means in the model’s understanding.

Repurposed Concept: Embedding Fingerprint

Originally used in machine learning to describe the vector signature of a piece of data, this concept is reframed here for content strategy.

An embedding fingerprint becomes the reusable vector signature tied to a brand, product, or message – a semantic identity that determines cohort alignment in GenAI systems.

If your content’s fingerprint aligns closely with a user’s prompt fingerprint, it’s more likely to be retrieved. If not, it’s effectively invisible, no matter how “optimized” it may be in traditional terms.
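As a rough sketch of retrieval by fingerprint alignment: precompute an embedding fingerprint for each piece of brand content, then rank content by cosine similarity to an incoming prompt. The page slugs, copy, and `embed()` function are all hypothetical stand-ins; a production system would embed content with a learned model and use approximate nearest-neighbor search.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical brand content, each with a precomputed "embedding fingerprint".
pages = {
    "mesh-tent": "breathable mesh tent with high airflow for hot summer camping",
    "winter-parka": "insulated down parka for extreme cold weather",
}
fingerprints = {slug: embed(text) for slug, text in pages.items()}

def retrieve(prompt, k=1):
    # Return the k pages whose fingerprints best align with the prompt.
    v = embed(prompt)
    ranked = sorted(fingerprints,
                    key=lambda s: cosine(v, fingerprints[s]),
                    reverse=True)
    return ranked[:k]

print(retrieve("best tent for camping in a heatwave"))  # → ['mesh-tent']
```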

Intent Vector Bidding: A Possible New Advertising Paradigm

So, what happens when GenAI systems all start monetizing this behavior?

You could get a new kind of auction. One where the bid isn’t for a keyword or a user profile, per se, but for alignment.

New Concept: Intent Vector Bidding

A real-time ad bidding mechanism where placement is determined by alignment between a user’s prompt intent vector and an advertiser’s content vector.

To be clear: this is not live today in any public, commercial ad platform that I am aware of. But I think it’s well within reach. Models already understand alignment. Prompt clustering is already happening.
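Purely as a thought experiment, such an auction might score each advertiser by bid times semantic alignment, so a tightly aligned advertiser can beat a higher bidder. Everything here – the advertisers, the bid-times-cosine scoring rule, the toy `embed()` – is an assumption, not a description of any live platform.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical advertisers: (name, max bid, content vector source text).
advertisers = [
    ("QuietGen", 2.00, "quiet portable generator low noise camping power"),
    ("LoudTools", 3.50, "heavy duty construction generator industrial site"),
]

def run_auction(prompt):
    # Score = bid x semantic alignment with the prompt's intent vector.
    v = embed(prompt)
    scored = [(bid * cosine(v, embed(content)), name)
              for name, bid, content in advertisers]
    score, winner = max(scored)
    return winner, round(score, 3)

winner, score = run_auction("quietest portable generator for camping")
print(winner)  # QuietGen wins despite the lower bid, via tighter alignment
```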

What’s missing is the infrastructure to let advertisers fully plug in. And you can bet the major players (OpenAI, Google, Meta, Microsoft, Amazon, etc.) are already thinking this way. Google is already looking at this openly.

We’ve Been Heading Here All Along

The shift toward LLM-native ad platforms might sound radical, but in reality, we’ve been headed this way for over a decade.

Step by step, platform by platform, advertisers have been ceding control to automation, often without realizing they were walking toward full autonomy.

Before we trace the path, please keep in mind that while I do have some background in the paid ad world, it’s much less than many of you.

I’m attempting to keep my date ranges and tech evolutions accurate, and I believe they are, but others may have a different view.

My point here isn’t historical accuracy; it’s to demonstrate a continual, directional progression, not to nail down which day of which year Google did X.

And, I’ll add, maybe I’m entirely off base with my thinking here, but it’s still been interesting to map all this out, especially since Google has already been digging in on a similar concept.

1. From Manual Control To Rule-Based Efficiency

  • Early 2000s – 2015

In the early days of search and display, marketers controlled everything: keyword targeting, match types, ad copy, placements, and bidding.

Power users lived inside tools like AdWords Editor, manually optimizing bids by time of day, device type, and conversion rate.

Automation started small, with rule-based scripts for bid adjustments, budget caps, and geo-targeting refinements. You were still the pilot, just with some helpful instruments.

2. From Rule-Based Logic To AI-Guided Bidding

  • 2015 – 2018

Then came Smart Bidding.

Google introduced Target CPA, Target ROAS, and Enhanced CPC: bid strategies powered by machine learning models that ingested real-time auction data (device, time, location, conversion likelihood) and made granular decisions on your behalf.

Marketers set the goal, but the system chose the path. Control shifted from how to what result you want. This was a foundational step toward AI-defined outcomes.

3. From AI-Guided Bidding To Creative Automation

  • 2018 – 2023

Next came the automation of the message itself.

Responsive Search Ads let advertisers upload multiple headlines and descriptions, and Google handled the permutations and combinations.

Meta and TikTok adopted similar dynamic creative formats.

Then Google launched Performance Max (2021), a turning point that eliminated keywords entirely.

  • You provide assets and conversion goals.
  • The system decides where and when to show your ads across Search, YouTube, Display, Gmail, Maps, and more.
  • Targeting becomes opaque. Placement becomes invisible. Strategy becomes trust.

You’re no longer steering the vehicle. You’re defining the destination and trusting the algorithm to get you there efficiently.

4. From Creative Automation To Generative Execution

  • 2023 – 2025

The model doesn’t just optimize messages anymore; it writes them.

  • Meta’s AI Sandbox generates headlines and CTAs from a prompt.
  • TikTok’s Creative Assistant produces hook-driven video scripts on demand.
  • Third-party tools and GPT-based agents build full ad campaigns, including copy and targeting.
  • Google’s Veo 3 and Veo 3 Fast, now live on Vertex AI, generate polished ads and social clips from text or image-to-video inputs, optimized for rapid iteration and programmatic use.

This isn’t sci-fi. It’s what’s coming to market today.

5. What Comes Next – And Why It’s Inevitable

The final leap is where you don’t submit an ad, you instead submit your business.

A fully LLM-native ad platform would:

  • Accept your brand’s value propositions, certifications, product specs, creative assets, brand guidelines, company vision statements, and guardrails.
  • Monitor emergent cohorts in real time based on prompt clusters and conversation spikes.
  • Inject your brand into those moments if, and only if, your business’s vector aligns with the cohort’s intent.
  • Charge you automatically for participation in that alignment.

You wouldn’t target. You wouldn’t build campaigns. You’d just feed the system and monitor how well it performs as a semantic extension of your business.

The ad platform becomes a meaning-based proxy for your company, an intent-aware agent acting on your behalf.

That’s not speculative science fiction. It’s a natural endpoint of the road we’re already on, I believe. Performance Max removed the steering wheel. Generative AI threw out the copywriter. Prompt-aligned retrieval will take care of the rest.

Building The LLM-Native Ad Platform

This is a theoretical suggestion of what could be our future for paid ads within AI-generated answer systems.

To make Intent Vector Bidding real at scale, the underlying ad platform will have to evolve dramatically. I don’t see this as a plug-in bolted onto legacy PPC infrastructure.

It will be a fully native layer inside LLM-based systems, one that replaces both creative generation and ad placement management.

Here’s how it could work:

1. Advertiser Input Shifts From Campaigns To Data Feeds

Instead of building ads manually, businesses upload:

  • Targeted keywords, concepts, and product entities.
  • Multimedia assets: images, videos, audio clips.
  • Credentials: certifications, affiliations, licenses.
  • Brand guidelines: tone, voice, claims to avoid.
  • Business limitations: geography, availability, compliance.
  • Structured value props and pricing tiers.

2. The System Becomes The Creative + Placement Engine

The LLM:

  • Detects emerging prompt cohorts.
  • Matches intent vectors to advertiser fingerprints.
  • Constructs and injects ads on the fly, using aligned assets and messaging.
  • Adjusts tone and detail based on prompt stage (TOFU vs BOFU).
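The detect-and-match loop above can be sketched under heavy assumptions: the cohort is hand-supplied rather than detected from live traffic, prompts and advertiser fingerprints are toy bag-of-words vectors, and the matching rule is plain cosine similarity against the cohort’s centroid. All names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words stand-in for a dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    # Sum sparse count vectors into a single cohort-level vector.
    total = Counter()
    for v in vectors:
        total.update(v)
    return total

# Step 1: an emerging cohort of semantically similar prompts.
cohort = [
    "best ultralight tents for summer hiking",
    "camping gear for extreme heat",
    "stay cool while backpacking in july",
]
cohort_vector = centroid(embed(p) for p in cohort)

# Step 2: hypothetical advertiser embedding fingerprints.
advertisers = {
    "BreezeTent": "breathable mesh tents high airflow summer heat camping",
    "ArcticWear": "insulated jacket for winter expeditions",
}

# Step 3: inject the advertiser whose vector best aligns with the cohort.
best = max(advertisers,
           key=lambda a: cosine(cohort_vector, embed(advertisers[a])))
print(best)  # BreezeTent: the heat-aligned brand wins the moment
```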

3. Billing Becomes Automated And Embedded

  • Accounts are pre-funded or credit-card linked.
  • Ad spend is triggered by real-time participation in retrieval or output injection.
  • No ad reps. No auctions you manage. Just vector-aligned outcomes billed per engagement, view, or inclusion.
  • Ad creation and placement become a single-price-point item as the system manages all, in real time.

If you want more thoughts on this concept, or one that’s closely related, Cindy Krum recently joined Shelley Walsh’s IMHO show to discuss whether she thinks Google will put ads inside Gemini’s answers. It was an interesting discussion.

You should give it a listen. And this report on Google suggests this is not only here now, but expanding.

The Human Role Doesn’t Disappear – It Evolves

Marketers and ad teams won’t be eliminated. Instead, they’ll become the data stewards and strategic interpreters of the system.

  • Expectation setting: Clients will need help understanding why their content shows up (or doesn’t) in GenAI outputs.
  • Data maintenance: The system is only as good as the assets you feed it, and relevance and freshness matter.
  • Governance and constraints: Humans will define ethical limits, messaging boundaries, and exclusions.
  • Training and iteration: AI ad visibility will rely on live outputs and observed responses, not static dashboards. You’ll tune prompts, inputs, and outputs based on what the system retrieves and how often it surfaces your content.

In this model, the ad strategist becomes part translator, part data curator, part retrieval mechanic.

And the ad platform? It becomes autonomous, context-driven, and functionally invisible, until you realize your product’s already been included in the buyer’s decision … and you’ve been billed accordingly.

A Closer Look: Intent Vector Bidding In Action

Imagine you’re an outdoor gear brand and there’s a sudden heatwave hitting the Pacific Northwest. Across Oregon and Washington, people begin prompting:

  • “Best ultralight tents for summer hiking”
  • “Camping gear for extreme heat”
  • “Stay cool while backpacking in July”

The model recognizes a spike in semantically similar prompts and data from news sources, etc. A heatwave cohort forms.

At the same time, your brand has a product page and ad copy about breathable mesh tents and high-vent airflow systems.

If your content has been vectorized (or if your system embeds an ad payload with a strong Embedding Fingerprint), it’s eligible to enter the auction.

But this isn’t a bid based on demographic data or historical retargeting. It’s based on how closely your product vector aligns with the live cohort’s prompt vectors.

The LLM chooses the most semantically aligned match. The better your alignment, the more likely your product is included in the AI’s answer, or inserted into the contextual ad slot within the response.

No campaign setup. No segmented audience targeting. Just semantic match at machine speed. This is where creative, product, and performance converge, and that convergence rewrites what it means to “win” in modern advertising.

What Marketers Can Do Right Now

There’s no dashboard that will tell you which Prompt Fingerprints you’re aligned with. That’s the hard part.

But until tools develop features that let you map your own Prompt Fingerprint, you can start by thinking like a model.

Start with:

  • Simulated prompt testing: Use GPT-4 (or Gemini or any other) to generate sample queries by funnel stage and see what brands get retrieved.
  • Create content for multi-cohort resonance: for example, a camping blog that aligns with both eco-conscious minimalists and adventure-seeking parents.
  • Build your own prompt libraries: Classify by intent stage, specificity, and phrasing. Use these to guide creative briefs, content chunking, and SEO.
  • Track AI summaries: In platforms like Perplexity, Gemini, and ChatGPT, your brand might influence answers even when you’re not explicitly mentioned. Your goal is to become the attributed source, not just a silent contributor.

In this new, genAI version of search, you’re no longer optimizing for page views. You’re optimizing for retrievability by semantic proximity.

The Rise Of The Prompt-Native Brand

Some brands will begin designing entire messaging strategies around prompt behavior. These prompt-native brands won’t wait for traffic to arrive. They’ll engineer their content to surf the wave of prompt clusters as they form.

  • Product copy structured to match MOFU queries.
  • Comparison pages written in prompt-first language.
  • AI ad copy tuned by cohort spike detection.

And eventually, new brands will emerge that never even needed a traditional website. Their entire presence will exist in AI conversations.

Built, tuned, and served directly into LLMs via vector-aligned content and Intent Vector Bids.

Wrapping Up

This is the next funnel, and it’s not a page. It’s a probability field. The funnel didn’t disappear. It just went invisible.

In traditional marketing, we mapped clear stages (awareness, interest, decision) and built content to match. That funnel still exists. But now it lives inside the model. It’s inferred, not declared. It’s shaped by prompts, not click paths.

And if your content doesn’t align with what the model sees in that moment, you’re missing from the retrieval.

More Resources:


This post was originally published on Duane Forrester Decodes.


Featured Image: NicoElNino/Shutterstock

Building Brand Identity: How To Define Who You Are

Brand identity is the foundation of your business, from the conceptualization of your services and products all the way to marketing.

Before you can create an effective marketing, SEO, content strategy, or even a business strategy, you need to know who you are as a brand. It’s a step many marketers and business leaders overlook, but it’s the one that makes everything else work.

This episode breaks down why identity is the starting point for your business to have impact.

Editor-in-Chief of Search Engine Journal, Katie Morton, sits down with Mordy Oberstein, founder of Unify Brand Marketing, to discuss how to develop a true brand identity so your marketing strategy has something solid to stand on.

Watch the video or read the full transcript below.

Editor’s note: The following transcript has been edited for clarity, brevity, and adherence to our editorial guidelines.

Katie Morton: Hey everybody, it is I, Katie Morton, Editor-in-Chief of Search Engine Journal, and today I’m sitting down with Mordy Oberstein, founder of Unify Brand Marketing. Mordy, talk to me. What’s going on?

Mordy Oberstein: Episode three! It’s a thing now. I can’t believe we’ve made it this far. Counting episodes has become a bit of a challenge, though. We might even be on number four.

Katie: Counting is definitely hard! But let’s dive in.

Why Brand Identity Matters

Mordy: Last time, we talked about brand development and the stages of brand development. The first stage of brand development is developing brand identity. So, for the sake of continuity, which is important for branding, let’s talk about how you develop brand identity this time.

Katie: That sounds fantastic. How does one develop brand identity?

Mordy: Before we get into the “how,” let’s talk about why brand identity is so essential. Identity is the foundation of everything your brand or company does. You can’t create a marketing, SEO, or content strategy without first knowing who you are. Everyone skips this step—but it’s crucial.

Also, identity is the thing that allows your audience to connect to you. There has to be a point of connection for marketing to actually be effective. And people can’t connect unless there’s a “you” to connect with.

How To Build Brand Identity

Mordy: And that, in turn, also gives you a lot of focus. Where brands generally go off the rails is when they start focusing on the wrong things, and it’s usually because of a lack of brand identity. So, how do you actually build identity?

The first thing to understand is that identity is not a fake thing. It’s not some make-believe concept like, “Oh, brand identity, it’s a fabrication.” No, identity is a real, living, breathing thing. And because of that, it has to be tied to what you actually do, what your offering really is. There’s no way to put lipstick on a pig.

The second thing I’ll say, before we dive deeper, is that brand identity has nothing to do with your company culture. If you think, “Oh, our identity is our company culture,” you’re doing it wrong. I know that’s a hot take.

The goal of identity is to create something authentic that your audience can connect with. And it needs to have depth for that connection to happen. To have depth, there has to be almost a therapeutic process that goes on. What you’re basically doing is engaging in therapy for your brand.

Engage In Brand “Therapy”

Mordy: What I do with clients (and what you should do internally with your own team) is tap into who you actually are and what you actually want. It’s a process of asking: Why do you do the things you do?

You need to sit down with your team and have a session where you talk about:

  • Why you do what you do.
  • How you see your industry and niche.
  • How you view your product or service.
  • How you see your space and your audience.
  • What you want for your audience, not just practically, but meaningfully.

It’s not about what your audience gets in a practical sense. It’s about the outcome for their lives in a meaningful way.

During this process, you need to take notes like a therapist. As you’re having these discussions, ask yourself: What’s landing? What’s meaningful about this? What feels like something to chew on? Listen for the things that resonate – both in what you’re saying and what your team is saying.

From Reflection To Action: Formalizing Your Brand Identity

The next step is to formalize all of that into a pathway to showcase it. You take everything you discussed, all these concepts, ideas, and meaningful points, and try to concretize them into one unified (no pun intended) concept for yourself.

This means prioritizing. You can’t focus on everything. You have to take some of the meaningful things you talked about and say, “Okay, this is secondary.” You need to decide which points will be your primary focus.

Once you have a centralized concept of who you are, what you do, and why it’s meaningful – and once it’s really clear to you – the next step is execution.

Because communication about who you are isn’t in the tagline on your homepage. It’s the nonverbal stuff. It’s latent. It’s everything you do. All the content you create and the activities you engage in should signal and speak to who you are.

Integrating Identity Into Marketing Strategy

Mordy: This is where you start integrating all the work you did in those sessions into your actual marketing strategy.

It’s a three-step process:

  1. Sit down and have deep discussions to discover what’s meaningful.
  2. Prioritize: Decide which meaningful things you’re going to focus on.
  3. Integrate: Unify those concepts into your brand actions and strategies.

Does that make sense?

Katie: So no competitive analysis at this stage?

Mordy: I would encourage you not to look at your competitors yet. All you’re trying to do is figure out…take away the idea of brand for a second, take away the company. If someone asks you who you are, you don’t answer by thinking about your competition.

Instead, you ask yourself: What’s really meaningful to me? What do I really want? What do I want people to know? What do I like to focus on? All those kind of questions and you start pulling that out.

Katie: Exactly. Authenticity should naturally help differentiate you. It should, right?

Mordy: And that’s another thing, by the way, which is a great point that you bring up. It’s technically possible that you could find an identity of who you are that’s really meaningful, that has a layer of depth, that’s not the surface-level nonsense that a lot of brands fall into. It can be super clear to you, and it can be difficult to differentiate. It could be the exact same thing as another brand, but that’s a very, very unlikely thing. It’s a technical possibility, but I don’t think it’s an existential possibility.

Katie: That makes sense. If you think of a brand as an individual human, no two humans are alike. So neither should two brands be alike.

Mordy: Exactly. If you’re doing this exercise correctly, you’ll naturally create differentiation. And if you feel like you’re not, it means you haven’t dug deep enough yet.

Brand Identity Guides Real-World Implications

Katie: Full disclosure: We actually went through this brand identity exercise with Mordy at Search Engine Journal. It was extremely helpful, and like you said, it also trickled into real world actions. It’s helping to inform some of our product strategy and other things we’re planning on doing in the real world. This branding exercise is not just empty calories, so to speak.

Mordy: Thanks for saying that. That’s awesome.

If your marketing team isn’t getting traction and feels stuck, it’s often because you’re not tapped into who you actually are. But once you are, you feel very much not stuck. You get clarity: “Here’s where our product should go. We shouldn’t go that way; we should go this way.”

It’s where you see companies go off the rails with AI, for example. They just jump on every AI thing because they don’t know who they are. They don’t have the ability to say, “That’s not us.” Or, “Yes, we should get into AI, but it should be done in a way that reflects who we are.”

This identity work also gives you focus, traction, and momentum when you’re feeling stuck. We talked about this last time: knowing who you are is very important for figuring out who you’re for.

Katie: Right. That’s a good point. So it can help target your audience as well: Who do you want to help? The other thing I found is that it’s motivating from a work ethic standpoint. If you feel like you’re burned out, or you’re spinning your wheels, or you don’t know why you do what you do, it gives you a North Star to really connect with other human beings – with your customer. Who are you trying to serve, and why?

What is that intrinsic motivation that helps you get out of bed in the morning?

Mordy: It’s super meaningful. From a practical point of view, when teams or companies talk about needing an “internal vision,” what they really mean is they need an internal identity that can be communicated across teams. That’s what I feel you’re actually trying to say.

Aligning Brand Identity: A Picture Frame Business Example

Mordy: Let me give you a weird example. Let’s say I make picture frames. That’s my business: I sell picture frames.

If your identity is just, “We’re about making cheap picture frames,” that’s not meaningful. But if you start asking why you’re doing this, you might discover something deeper. Maybe you and your team really value cherishing memories. That’s your motivation. So, your product, the frame, is a way to help people cherish their memories by displaying them.

Half my pictures are still on my phone. They are not cherished. Print them, put them in a nice frame, display them, cherish those memories. But if you say you’re all about cherishing memories and then sell flimsy, garbage frames, that would be a misalignment.

Another company might say, “We want to add artistic flair to your pictures.” Their identity is about art and design. Two totally different companies doing totally different things with their brand identity. And it’s based on who they actually are, and their products should align.

Sometimes you’ll combine concepts. Maybe you believe in cherishing memories, but you also feel that an artistic frame enhances that experience. So, your core concept becomes: “We help you cherish memories by giving them artistic design that highlights how special they are.”

So that would be taking two concepts and unifying them into one core concept that speaks to both aspects of who you actually are. You could do five different things with this; it all depends on who you are in reality.

Katie: I can imagine, too, that you could build entire product lines from that concept. Maybe you serve different customer segments, or maybe it’s one customer who wants variety.

Mordy: Your whole product line should be informed by that decision. If you’re saying, “Cherishing the memory means giving it a really fancy frame,” then your products need to align with that. Imagine you bought a Monet…you wouldn’t put it in a cheap poster board frame. You’d give it a beautiful frame that reflects its value. Your memories are paintings; your pictures are memories.

Your products need to align. You’d create product lines of artistic frames to match your identity. If your products don’t reflect who you are, then either that’s not your identity, or you need to change your product to match it.

Brand Identity Drives Motivation

Katie: That makes sense. As a painter, I can relate to this example. When I don’t know why I’m creating, I stop. The times that I’m aligned with this exercise of figuring out who I am, who I’m trying to connect with, and the identity behind why I paint, I’m so much more motivated to show up and paint.

Any time I get lost in the grind of the work week, it often makes me not paint, because I have different identities at different times, as we all do as human beings. Sometimes my work identity will take over. If the painter identity is weak or ill-defined, I can literally go years without painting.

So to bring it back to the concrete reality of what we’re talking about, the same happens in business. It’s so easy to get off track because people have so many priorities shoved at them all the time. So it’s really easy for businesses to become idea generators. If you don’t have those North Star KPIs rooted in your brand identity, it’s so easy to go chase shiny things.

Mordy: …they’re all over the place. Businesses ask, “Why should I do this? Shouldn’t I focus on conversions, revenue, traffic?” But defining your identity helps you do that. You’ll target the right people with the right message and avoid wasting time and money on products, marketing, or content that don’t align with who you are.

When you’re confused, you try everything. You waste a ton of time, resources, and money. But if you sit down for a few hours and clarify your identity, you’ll know, “We need to do this, and not that.”

Mordy: Also, identity evolves over time, just like people. Your brand, who you are, why you do what you do, it changes. That’s normal. But it always needs to be clear to you.

People are creatures of meaning. If you can’t attach meaning to what you do, your audience won’t be able to connect or resonate. You’ll face an uphill battle trying to convince people to spend money with you. On top of that, your team won’t have buy-in. You, as the owner or CEO, might be motivated, but your team needs something meaningful to connect with.

That’s why it’s critical to communicate your identity across the entire organization. Don’t stop at the C-suite or the marketing team. Start having real conversations about this with every team member.

Quick Note On ICPs And Personas

Katie: I have one last question for you, Mordy. The idea of the ICP, how much does that factor into this particular step? How would you categorize that part of this discussion in terms of the ICP and the brand identity?

Mordy: That’s a hard question, it’s a whole topic in itself. I don’t like profiling like that. I like intent-based marketing over persona-based marketing.

Katie: Not to open a can of worms late in the discussion, but talk to me briefly about intent-based versus profiling.

Mordy: I’m more interested in why people do things than which person does which thing. Generally, when you’re more intent-focused, you open up more opportunities. But when you’re persona-focused, you sometimes end up with blinders on.

That’s not to say there’s no room for persona-based marketing. There is. But going back to your question about the ICP, here’s kind of a hot take: it shouldn’t be part of this process until you’ve figured out who you are.

Should your ICP, your Ideal Customer Profile, influence who you actually are? Does it change who you are? Think of it like going on a date. Should who the other person is influence who you are as a person? That’s not a recipe for success. You are who you are.

Of course, we’re all multifaceted people, but fundamentally, you are who you are. And because of that, you decide who you should engage with, whether that’s Customer X or Customer Y. Not the other way around.

Final Thoughts

Katie: Let me just add one thing. Let’s say someone is flexible as a brand, or as a dater. Imagine a scenario where someone has aspirations, whether in business or relationships: an inexperienced business owner who wants to target a high-value customer, but doesn’t yet have the experience to offer real value.

In that case, you have two options. One is to accept where you are, get back down into your league, and serve the customers you’re best equipped to serve right now. The other option is to level up. Get educated. Improve yourself. If you’re aiming for a target that’s currently out of your league, there are steps you can take within reason to grow into that.

But that’s a whole other business development conversation. For the purposes of this branding exercise, it’s about authenticity and being realistic. It’s about knowing where you can truly add value. And at the heart of it, it always comes back to: Who are you? Like you said, it ties back to brand development.

Mordy: To kind of end off with a very simple example, again, if you micro-level this, it all becomes much easier to see. Let’s say there are two groups I want to hang out with. Group A likes baseball games. Group B prefers the ballet or symphony. Both groups seem cool, but I love baseball. That’s my thing. So I should hang out with the baseball crowd.

I’m not a fancy person. I don’t enjoy the symphony. If you do, that’s awesome, more power to you. But it’s not me. I’m not going to force myself into that crowd. Instead, I’ll lean into the baseball group. I’ll amplify that aspect of myself. I’ll get the jersey, the gear to show them I’m part of their group. Because I actually am.

I’m not faking it. I’m just trying to amplify what I actually am to show you who I am. That’s the difference. One way, you’re faking it in order to show people, “Oh, here we go, this is who I am,” when that’s not you at all.

The other way is, this is who I am, and I’m going to try to communicate that to you by all the things I’m going to do. And I might purposely and consciously try to do things or signal to you that “I’m part of your group. I fit in. Love me.”

Katie: That’s amazing. And just from a business standpoint, when it comes to SEO and acquiring customers and traffic, it’s so important to focus on your niche. You’re not going to be all things to all people, especially now when AI is answering all the basic questions.

You need to double down on who you are and speak authentically to your niche. Stop trying to appeal to too many people. The days of the open web firehose of traffic are done. So adjust and adapt.

Mordy: If you’re for everyone, you’re for no one.

Katie: Exactly. Alright, Mordy, we’re at time. Thank you so much for sitting down with me today. I’m looking forward to the next one.

For a free consultation with Mordy, head over to unifybrandmarketing.com.

And we’re at searchenginejournal.com for more content and discussions. Mordy is also a contributor at Search Engine Journal. Any final thoughts?

Mordy: Yeah, come check out the free consultation. And check out the SEJ content.

Katie: Awesome. Until next time. Bye.

Mordy: Bye.

Featured Image: Paolo Bobita/Search Engine Journal

Ask A PPC: How Do I Avoid Cannibalization On Similar Products? via @sejournal, @navahf

There’s nothing worse than watching your own products compete against each other.

When your paid media strategy starts pitting your product lines against one another, you’re not just inflating costs; you’re undercutting your own chances at conversion.

That’s the question this month’s “Ask A PPC” will tackle:

“I work for a company that has three brands in the same niche with a high ticket item for house renovation. All companies have high spend on search ads, but we are targeting the same keywords and we are seeing cannibalization.

What can we do with our bidding strategy to try and reduce our CPC and still compete on the same products/keywords, but not cannibalize each other?”

Let’s break down how to avoid keyword cannibalization, particularly when dealing with premium products, and how to structure campaigns in a way that keeps everything working together.

The Hard Truth: You Can’t Avoid All Cannibalization

Let’s start here because this is what no one wants to hear: If you’re targeting the same non-branded keywords, the same geographies, and similar audiences with similar value props, some level of internal competition is inevitable.

Search campaigns don’t know your product lines are siblings. All they see are bids, relevance scores, and conversion data. Some keywords/ads will win. Some won’t.

The goal is to mitigate the internal crossfire and make strategic decisions that give every product its best shot to shine.

Prioritize: Which Products Get Which Keywords?

We don’t like to play favorites with our products, but when it comes to generic, high-volume keywords, you might have to.

Unless you have contractual obligations to spend equally across product lines (try to avoid this), you’ll need to assign certain non-branded queries to one product or another.

Here’s how you can do it:

  • Segment by market: Allocate geographic zones to different products based on performance trends, sales reps, or product-market fit.
  • Use keyword research as a compass: Both Google’s and Microsoft’s keyword planners can show you which search terms have better affinity with which product.
  • Establish thematic lanes: If Product A is more “entry-level” and Product B is the “pro version,” let them own different stages of the funnel.

Use Category Pages, Not Product Pages

One workaround, especially with Dynamic Search Ads (DSA) and Performance Max (PMax), is to avoid pushing people directly to product pages. Instead, drive them to category or collection pages.

Why this works:

  • It gives consumers options without forcing them to pick one.
  • You can still control targeting and ad creative at the campaign or asset group level.
  • It creates a more balanced distribution of visibility without inflating cost-per-click (CPC) by bidding on the same SKUs.

DSAs and PMax campaigns do this particularly well. You’re not bidding on keywords in the traditional sense; you’re letting Google’s (or Microsoft’s) AI determine which queries to match based on content and intent.

On Google, AI Max lets you guide that intent more narrowly through ad group-level settings.

On Microsoft, PMax can do something similar, especially if you feed it clean, structured data and lean into visual creative.

Build A Branded Safety Net

You likely already have branded campaigns in place, and if you don’t, this is an important to-do.

Branded search and Shopping should ensure that anyone looking for a specific product by name sees only that product. This is where you can (and should) be strict about campaign segmentation.

Branded campaigns give you clean performance data, protect your CPCs from cannibalization, and provide the clearest attribution path.

Leverage Visual Differentiation

This is where platforms like Google Demand Gen and Microsoft Audience Ads really shine.

Visual content lets you sidestep keywords altogether and lean into product storytelling. You can target by interest, topic, or custom segments – not search intent – which means you can:

  • Run one campaign per product and assign each a budget.
  • Or run one big campaign and let the creative guide user choice.

You can use PMax here, too, especially on Microsoft, where PMax makes it more likely to secure Copilot placements across mobile and desktop.

Copilot has been shown to have 25% higher relevance than traditional search, according to Microsoft internal data.

The key is to treat these upper-funnel plays as audience builders. Then, once users engage, you can segment them with remarketing across both platforms.

Pro tip: On Microsoft, even just an impression is enough to build an audience. Which means your remarketing and exclusions can get very precise, very quickly.

So long as there’s at least one audience ad campaign among your impression-based remarketing sources, you can let PMax remarket to PMax and Search/Shopping remarket to Search/Shopping; i.e., you can capture intent from Copilot even if users didn’t engage with you there.

Does This Really Solve Cannibalization?

The only surefire way to fully prevent cannibalization would be to run entirely separate ad accounts, one per product. But that opens up a Pandora’s box of compliance risks.

Google and Microsoft are both very aware of efforts to double-serve, and if they perceive your accounts as trying to game the system – even if you’re just trying to stay organized – you could end up suspended.

So instead, your best move is to manage the overlap, not eliminate it. Focus on:

  • Using category pages for non-branded queries.
  • Owning branded queries with tightly segmented campaigns.
  • Differentiating products visually through audience-first formats.
  • Using geographic and thematic separation when assigning generic keywords.

When done right, the consumer makes the final decision, not your CPC strategy. That’s not cannibalization. That’s just a user choosing which of your great products fits their needs best. And either way? You win.

Final Takeaways

To recap:

  • You can’t fully eliminate cannibalization without risking violating platform policies.
  • Smart segmentation of campaigns by geography, theme, and intent helps mitigate overlap.
  • Category pages + visual ads can guide consumers to the right product without inflating CPCs.
  • Branded campaigns are your best friend; keep them clean, tight, and product-specific.
  • Audience-based targeting gives you control without competing on search terms.

At the end of the day, your campaigns should reflect how your users shop: exploring, comparing, deciding. Make that process easier for them, and less expensive for you.

Featured Image: Paulo Bobita/Search Engine Journal

6 AI Marketing Myths That Are Costing You Money [Webinar] via @sejournal, @duchessjenm

Stop letting AI drain your budget. Learn how to make it work for you.

Think AI can fully run your marketing strategy on autopilot? 

Or that AI-generated content should deliver instant results? 

It is time to bust the AI myths that are slowing you down and costing you money.

Join Bailey Beckham, Senior Partner Marketing Manager at CallRail, and Jennifer McDonald, Senior Marketing Manager at Search Engine Journal, on August 21, 2025, for an exclusive webinar. Get the insights you need to stop wasting time and money and start leveraging AI the right way.

Why this session is essential:

AI tools can’t run your strategy on autopilot. You need to make smarter decisions, ask the right questions, and guide your AI tools to work for you, not against you. 

This webinar will help you unlock AI’s full potential and optimize your content to improve your marketing performance.

Register now to learn how to get your content loved by AI, LLMs, and most importantly, your audience. Can’t attend live? Don’t worry, sign up anyway, and we will send you the on-demand recording.

The Download: OpenAI’s open-weight models, and the future of internet search

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI has finally released open-weight language models

The news: OpenAI has finally released its first open-weight large language models since 2019’s GPT-2. Unlike the models available through OpenAI’s web interface, these new open models can be freely downloaded, run, and even modified on laptops and other local devices.

Why it matters: These releases re-establish OpenAI as a presence for users of open models. That’s particularly notable at a time when Meta, which had previously dominated the American open-model landscape with its Llama models, may be reorienting toward closed releases—and when Chinese open models are becoming more popular than their American competitors. Read the full story

—Grace Huckins

MIT Technology Review Narrated: AI means the end of internet search as we’ve known it

The biggest change to the way search engines deliver information to us since the 1990s is happening right now. No more keyword searching. Instead, you can ask questions in natural language. And instead of links, you’ll increasingly be met with answers written by generative AI and based on live information from across the internet, delivered the same way.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Nvidia insists its AI chips don’t have a “kill switch”
After China’s Cyberspace Administration asked for security documentation. (CNBC)
+ The country’s ambitions to consolidate its chip giants aren’t going to plan. (FT $)
+ Two Chinese nationals have been charged with illegally shipping chips. (Reuters)

2 America’s new data centers are driving colossal electricity demand
And a handful of equipment makers are reaping the benefits. (FT $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

3 RFK Jr has cancelled close to $500 million in mRNA vaccine contracts 
Which could leave us dangerously underprepared for a future pandemic. (Politico)
+ We’re losing a key insight into global health. (Vox)
+ How measuring vaccine hesitancy could help health professionals tackle it. (MIT Technology Review)

4 Uber has a sexual assault problem
Newly-unveiled records show it gathered far more sexual assault and misconduct reports than previously revealed. (NYT $)

5 A British politician created an AI clone of himself
And although it provoked a backlash, other MPs may follow his lead. (WP $)
+ A former CNN journalist has interviewed an AI version of a mass-shooting victim. (The Guardian)

6 xAI’s new Grok Imagine tool has a “spicy” mode
Which seems to be code for non-consensual porn images. (The Verge)  
+ It’s already generated fake Taylor Swift nudes without being asked. (Ars Technica)

7 How does ChatGPT fare as a couple’s counselor?
It gets some stuff right. But it also gets some things really wrong. (NPR)
+ The AI relationship revolution is already here. (MIT Technology Review)

8 Syria’s refugees are returning to rebuild its tech industry
But sectarian violence and poor connectivity mean it’s an uphill battle. (Rest of World)

9 Sales of Ozempic have dropped
Rival Mounjaro seems to be more effective. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

10 Google Calendar rules college kids’ lives
They schedule everything from assignments to parties and hook ups. (WSJ $)

Quote of the day

“This is a bad day for science.”

—Scott Hensley, an immunologist at the University of Pennsylvania, criticizes the Department of Health and Human Services’ decision to cancel hundreds of millions of dollars in funding for mRNA vaccine projects, the New York Times reports.

One more thing

Future space food could be made from astronaut breath

The future of space food could be as simple—and weird—as a protein shake made with astronaut breath or a burger made from fungus.

For decades, astronauts have relied mostly on pre-packaged food during their forays off our planet. With missions beyond Earth orbit in sight, a NASA-led competition is hoping to change all that and usher in a new era of sustainable space food.

To solve the problem of feeding astronauts on long-duration missions, NASA asked companies to propose novel ways to develop sustainable foods for future missions. Around 200 rose to the challenge—creating nutritious (and outlandish) culinary creations in the process. Read the full story

—Jonathan O’Callaghan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ There are a lot of funny cat videos out there but honestly, this is top-drawer.
+ Check out this adorable website where people share what they see in clouds.
+ Babe you’re glowing! No seriously, you literally are
+ I loved watching this woman from London’s East End wax lyrical about the dawn of TV.

Five ways that AI is learning to improve itself

Last week, Mark Zuckerberg declared that Meta is aiming to achieve smarter-than-human AI. He seems to have a recipe for achieving that goal, and the first ingredient is human talent: Zuckerberg has reportedly tried to lure top researchers to Meta Superintelligence Labs with nine-figure offers. The second ingredient is AI itself.  Zuckerberg recently said on an earnings call that Meta Superintelligence Labs will be focused on building self-improving AI—systems that can bootstrap themselves to higher and higher levels of performance.

The possibility of self-improvement distinguishes AI from other revolutionary technologies. CRISPR can’t improve its own targeting of DNA sequences, and fusion reactors can’t figure out how to make the technology commercially viable. But LLMs can optimize the computer chips they run on, train other LLMs cheaply and efficiently, and perhaps even come up with original ideas for AI research. And they’ve already made some progress in all these domains.

According to Zuckerberg, AI self-improvement could bring about a world in which humans are liberated from workaday drudgery and can pursue their highest goals with the support of brilliant, hypereffective artificial companions. But self-improvement also creates a fundamental risk, according to Chris Painter, the policy director at the AI research nonprofit METR. If AI accelerates the development of its own capabilities, he says, it could rapidly get better at hacking, designing weapons, and manipulating people. Some researchers even speculate that this positive feedback cycle could lead to an “intelligence explosion,” in which AI rapidly launches itself far beyond the level of human capabilities.

But you don’t have to be a doomer to take the implications of self-improving AI seriously. OpenAI, Anthropic, and Google all include references to automated AI research in their AI safety frameworks, alongside more familiar risk categories such as chemical weapons and cybersecurity. “I think this is the fastest path to powerful AI,” says Jeff Clune, a professor of computer science at the University of British Columbia and senior research advisor at Google DeepMind. “It’s probably the most important thing we should be thinking about.”

By the same token, Clune says, automating AI research and development could have enormous upsides. On our own, we humans might not be able to think up the innovations and improvements that will allow AI to one day tackle prodigious problems like cancer and climate change.

For now, human ingenuity is still the primary engine of AI advancement; otherwise, Meta would hardly have made such exorbitant offers to attract researchers to its superintelligence lab. But AI is already contributing to its own development, and it’s set to take even more of a role in the years to come. Here are five ways that AI is making itself better.

1. Enhancing productivity

Today, the most important contribution that LLMs make to AI development may also be the most banal. “The biggest thing is coding assistance,” says Tom Davidson, a senior research fellow at Forethought, an AI research nonprofit. Tools that help engineers write software more quickly, such as Claude Code and Cursor, appear popular across the AI industry: Google CEO Sundar Pichai claimed in October 2024 that a quarter of the company’s new code was generated by AI, and Anthropic recently documented a wide variety of ways that its employees use Claude Code. If engineers are more productive because of this coding assistance, they will be able to design, test, and deploy new AI systems more quickly.

But the productivity advantage that these tools confer remains uncertain: If engineers are spending large amounts of time correcting errors made by AI systems, they might not be getting any more work done, even if they are spending less of their time writing code manually. A recent study from METR found that developers take about 20% longer to complete tasks when using AI coding assistants, though Nate Rush, a member of METR’s technical staff who co-led the study, notes that it only examined extremely experienced developers working on large code bases. Its conclusions might not apply to AI researchers who write up quick scripts to run experiments.

Conducting a similar study within the frontier labs could help provide a much clearer picture of whether coding assistants are making AI researchers at the cutting edge more productive, Rush says—but that work hasn’t yet been undertaken. In the meantime, just taking software engineers’ word for it isn’t enough: The developers METR studied thought that the AI coding tools had made them work more efficiently, even though the tools had actually slowed them down substantially.

2. Optimizing infrastructure

Writing code quickly isn’t that much of an advantage if you have to wait hours, days, or weeks for it to run. LLM training, in particular, is an agonizingly slow process, and the most sophisticated reasoning models can take many minutes to generate a single response. These delays are major bottlenecks for AI development, says Azalia Mirhoseini, an assistant professor of computer science at Stanford University and senior staff scientist at Google DeepMind. “If we can run AI faster, we can innovate more,” she says.

That’s why Mirhoseini has been using AI to optimize AI chips. Back in 2021, she and her collaborators at Google built a non-LLM AI system that could decide where to place various components on a computer chip to optimize efficiency. Although some other researchers failed to replicate the study’s results, Mirhoseini says that Nature investigated the paper and upheld the work’s validity—and she notes that Google has used the system’s designs for multiple generations of its custom AI chips.

More recently, Mirhoseini has applied LLMs to the problem of writing kernels, low-level functions that control how various operations, like matrix multiplication, are carried out in chips. She’s found that even general-purpose LLMs can, in some cases, write kernels that run faster than the human-designed versions.

Elsewhere at Google, scientists built a system that they used to optimize various parts of the company’s LLM infrastructure. The system, called AlphaEvolve, prompts Google’s Gemini LLM to write algorithms for solving some problem, evaluates those algorithms, and asks Gemini to improve on the most successful—and repeats that process several times. AlphaEvolve designed a new approach for running datacenters that saved 0.7% of Google’s computational resources, made further improvements to Google’s custom chip design, and designed a new kernel that sped up Gemini’s training by 1%.   

That might sound like a small improvement, but at a huge company like Google it equates to enormous savings of time, money, and energy. And Matej Balog, a staff research scientist at Google DeepMind who led the AlphaEvolve project, says that he and his team tested the system on only a small component of Gemini’s overall training pipeline. Applying it more broadly, he says, could lead to more savings.
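The propose-evaluate-refine loop the article describes can be sketched in a few lines. To be clear, this is a toy, not AlphaEvolve itself: the real system prompts Gemini to write and improve code against real benchmarks, while here `propose_variants` and `evaluate` are deterministic stand-ins (hypothetical names) that optimize a single number, so only the loop structure is faithful.

```python
# Toy sketch of a propose -> evaluate -> refine loop in the style
# AlphaEvolve is described as using. Stand-ins replace the LLM calls.

def evaluate(candidate: float) -> float:
    """Stand-in fitness function: how close the candidate is to an
    unknown optimum (a real system would benchmark generated code)."""
    optimum = 3.7
    return -abs(candidate - optimum)

def propose_variants(best: float) -> list[float]:
    """Stand-in for 'ask the LLM to improve the best candidate':
    returns small deterministic mutations around it."""
    return [best + d for d in (-0.5, -0.1, 0.0, 0.1, 0.5)]

def evolve(seed: float, rounds: int = 20) -> float:
    best = seed
    for _ in range(rounds):
        candidates = propose_variants(best)
        best = max(candidates, key=evaluate)  # keep the top scorer
    return best

result = evolve(seed=0.0)
```

Even this toy shows why the approach compounds: each round starts from the best candidate found so far, so small per-round gains accumulate, which is how a 0.7% or 1% improvement becomes worthwhile at scale.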

3. Automating training

LLMs are famously data hungry, and training them is costly at every stage. In some specific domains—unusual programming languages, for example—real-world data is too scarce to train LLMs effectively. Reinforcement learning with human feedback, a technique in which humans score LLM responses to prompts and the LLMs are then trained using those scores, has been key to creating models that behave in line with human standards and preferences, but obtaining human feedback is slow and expensive. 

Increasingly, LLMs are being used to fill in the gaps. If prompted with plenty of examples, LLMs can generate plausible synthetic data in domains in which they haven’t been trained, and that synthetic data can then be used for training. LLMs can also be used effectively for reinforcement learning: In an approach called “LLM as a judge,” LLMs, rather than humans, are used to score the outputs of models that are being trained. That approach is key to the influential “Constitutional AI” framework proposed by Anthropic researchers in 2022, in which one LLM is trained to be less harmful based on feedback from another LLM.

Data scarcity is a particularly acute problem for AI agents. Effective agents need to be able to carry out multistep plans to accomplish particular tasks, but examples of successful step-by-step task completion are scarce online, and using humans to generate new examples would be pricey. To overcome this limitation, Stanford’s Mirhoseini and her colleagues have recently piloted a technique in which an LLM agent generates a possible step-by-step approach to a given problem, an LLM judge evaluates whether each step is valid, and then a new LLM agent is trained on those steps. “You’re not limited by data anymore, because the model can just arbitrarily generate more and more experiences,” Mirhoseini says.
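The pipeline Mirhoseini describes — a generator proposes step-by-step plans, a judge validates each step, and only fully valid trajectories become training data — can be sketched as a filter loop. Both roles would be LLMs in the actual pilot; here `generate_plan` and `judge_step` are deterministic stubs (hypothetical names) so the structure is runnable.

```python
# Toy sketch of judged trajectory generation for agent training:
# keep only plans in which every step passes the judge.

def generate_plan(task: str, attempt: int) -> list[str]:
    """Stand-in generator: returns a candidate multi-step plan.
    Odd-numbered attempts include an invalid step to exercise the filter."""
    plan = [f"step {i}: work on {task}" for i in range(3)]
    if attempt % 2 == 1:
        plan.append("step 3: INVALID")
    return plan

def judge_step(step: str) -> bool:
    """Stand-in judge: flags steps it considers invalid."""
    return "INVALID" not in step

def collect_training_data(task: str, attempts: int = 4) -> list[list[str]]:
    kept = []
    for a in range(attempts):
        plan = generate_plan(task, a)
        if all(judge_step(s) for s in plan):  # every step must pass
            kept.append(plan)
    return kept

data = collect_training_data("fix the build")
```

This is the sense in which "you're not limited by data anymore": the generator can produce arbitrarily many candidate trajectories, and the judge decides which ones are good enough to train on.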

4. Perfecting agent design

One area where LLMs haven’t yet made major contributions is in the design of LLMs themselves. Today’s LLMs are all based on a neural-network structure called a transformer, which was proposed by human researchers in 2017, and the notable improvements that have since been made to the architecture were also human-designed. 

But the rise of LLM agents has created an entirely new design universe to explore. Agents need tools to interact with the outside world and instructions for how to use them, and optimizing those tools and instructions is essential to producing effective agents. “Humans haven’t spent as much time mapping out all these ideas, so there’s a lot more low-hanging fruit,” Clune says. “It’s easier to just create an AI system to go pick it.”

Together with researchers at the startup Sakana AI, Clune created a system called a “Darwin Gödel Machine”: an LLM agent that can iteratively modify its prompts, tools, and other aspects of its code to improve its own task performance. Not only did the Darwin Gödel Machine achieve higher task scores through modifying itself, but as it evolved, it also managed to find new modifications that its original version wouldn’t have been able to discover. It had entered a true self-improvement loop.

5. Advancing research

Although LLMs are speeding up numerous parts of the LLM development pipeline, humans may still remain essential to AI research for quite a while. Many experts point to “research taste,” or the ability that the best scientists have to pick out promising new research questions and directions, as both a particular challenge for AI and a key ingredient in AI development. 

But Clune says research taste might not be as much of a challenge for AI as some researchers think. He and Sakana AI researchers are working on an end-to-end system for AI research that they call the “AI Scientist.” It searches through the scientific literature to determine its own research question, runs experiments to answer that question, and then writes up its results.

One paper it wrote earlier this year devised and tested a new training strategy aimed at making neural networks better at combining examples from their training data. With the consent of the workshop organizers, the paper was anonymously submitted to a workshop at the International Conference on Machine Learning, or ICML, one of the most prestigious conferences in the field. The training strategy didn’t end up working, but the paper was scored highly enough by reviewers to qualify it for acceptance (it is worth noting that ICML workshops have lower standards for acceptance than the main conference). In another instance, Clune says, the AI Scientist came up with a research idea that was later independently proposed by a human researcher on X, where it attracted plenty of interest from other scientists.

“We are looking right now at the GPT-1 moment of the AI Scientist,” Clune says. “In a few short years, it is going to be writing papers that will be accepted at the top peer-reviewed conferences and journals in the world. It will be making novel scientific discoveries.”

Is superintelligence on its way?

With all this enthusiasm for AI self-improvement, it seems likely that in the coming months and years, the contributions AI makes to its own development will only multiply. To hear Mark Zuckerberg tell it, this could mean that superintelligent models, which exceed human capabilities in many domains, are just around the corner. In reality, though, the impact of self-improving AI is far from certain.

It’s notable that AlphaEvolve has sped up the training of its own core LLM system, Gemini—but that 1% speedup may not observably change the pace of Google’s AI advancements. “This is still a feedback loop that’s very slow,” says Balog, the AlphaEvolve researcher. “The training of Gemini takes a significant amount of time. So you can maybe see the exciting beginnings of this virtuous [cycle], but it’s still a very slow process.”

If each subsequent version of Gemini speeds up its own training by an additional 1%, those accelerations will compound. And because each successive generation will be more capable than the previous one, it should be able to achieve even greater training speedups—not to mention all the other ways it might devise to improve itself. Under such circumstances, proponents of superintelligence argue, an eventual intelligence explosion looks inevitable.
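The compounding argument is simple arithmetic. As a back-of-envelope model (my numbers, not Google's), assume a constant 1% speedup per generation and ignore the further claim that more capable generations find larger gains:

```python
# Back-of-envelope model of compounding 1% training speedups.
# Assumes a flat 1% gain per generation, which understates the
# superintelligence argument but shows how the gains stack.
def cumulative_speedup(n_generations: int, per_gen: float = 0.01) -> float:
    """Fraction of training time saved after n compounding generations."""
    return 1 - (1 - per_gen) ** n_generations
```

Ten generations of 1% gains save only about 9.6% of training time, which is why the argument leans on each generation's gains growing, not just compounding.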

This conclusion, however, ignores a key observation: Innovation gets harder over time. In the early days of any scientific field, discoveries come fast and easy. There are plenty of obvious experiments to run and ideas to investigate, and none of them have been tried before. But as the science of deep learning matures, finding each additional improvement might require substantially more effort on the part of both humans and their AI collaborators. It’s possible that by the time AI systems attain human-level research abilities, humans or less-intelligent AI systems will already have plucked all the low-hanging fruit.

Determining the real-world impact of AI self-improvement, then, is a mighty challenge. To make matters worse, the AI systems that matter most for AI development—those being used inside frontier AI companies—are likely more advanced than those that have been released to the general public, so measuring o3’s capabilities might not be a great way to infer what’s happening inside OpenAI.

But external researchers are doing their best—by, for example, tracking the overall pace of AI development to determine whether or not that pace is accelerating. METR is monitoring advancements in AI abilities by measuring how long it takes humans to do tasks that cutting-edge systems can complete themselves. They’ve found that the length of tasks that AI systems can complete independently has, since the release of GPT-2 in 2019, doubled every seven months. 

Since 2024, that doubling time has shortened to four months, which suggests that AI progress is indeed accelerating. There may be unglamorous reasons for that: Frontier AI labs are flush with investor cash, which they can spend on hiring new researchers and purchasing new hardware. But it’s entirely plausible that AI self-improvement could also be playing a role.
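METR's two doubling times can be restated as annual multipliers. If the length of tasks AI can complete autonomously doubles every D months, the multiplier after t months is 2 ** (t / D); the figures below use the 7-month and 4-month doubling times cited above.

```python
# Annualized growth implied by METR's task-horizon doubling times.
def horizon_multiplier(months: float, doubling_months: float) -> float:
    """Multiplier on autonomously completable task length after `months`."""
    return 2 ** (months / doubling_months)

per_year_old = horizon_multiplier(12, 7)  # pre-2024 pace: roughly 3.3x per year
per_year_new = horizon_multiplier(12, 4)  # post-2024 pace: 8x per year
```

Shortening the doubling time from seven months to four more than doubles the annual growth rate, which is why the shift reads as acceleration rather than noise.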

That’s just one indirect piece of evidence. But Davidson, the Forethought researcher, says there’s good reason to expect that AI will supercharge its own advancement, at least for a time. METR’s work suggests that the low-hanging-fruit effect isn’t slowing down human researchers today, or at least that increased investment is effectively counterbalancing any slowdown. If AI notably increases the productivity of those researchers, or even takes on some fraction of the research work itself, that balance will shift in favor of research acceleration.

“You would, I think, strongly expect that there’ll be a period when AI progress speeds up,” Davidson says. “The big question is how long it goes on for.”

Google Ads Unveils RSA Asset Stats

A helpful reporting update is rolling out in Google Ads accounts. Advertisers can now view click and conversion data for each headline and description line of Responsive Search Ads, as well as aggregate RSA performance.

More Control

Advertisers have generally responded positively to RSAs. The ads allow up to 15 headlines and four description lines that rotate interchangeably, producing potentially thousands of combinations. Using smart bidding, artificial intelligence, and personalization signals, Google serves the combination most likely to convert for each searcher.

Until now, however, advertisers could only see the overall RSA performance and total impressions of each asset and combination.

But click and conversion metrics for each asset now appear in the interface. The example below ranks the number of conversions from highest to lowest, along with their conversion rates and cost per conversion. Advertisers can easily identify which assets are meeting goals.

Screenshot of the RSA report

Google Ads now reports click and conversion metrics for each RSA asset. This example ranks the number of conversions from highest to lowest.

With the data, advertisers regain some control, although it’s essential to consider the bigger picture. More data doesn’t necessarily mean more changes.

Google’s AI optimizes for advertisers’ goals. A lower-performing asset could result from Google testing combinations. For instance, a headline could perform poorly for group A but well for group B when combined with description line C. Unfortunately, impressions remain the only metric available to advertisers when viewing RSA combinations.

Using the Data

Nonetheless, advertisers should not entirely defer to Google’s AI. Here are my typical action items.

Remove underperforming assets. I apply a filter to highlight poor performers, such as any asset with at least 100 clicks and zero conversions. It’s a quick rundown of headlines and descriptions to remove, as the message or landing page isn’t resonating with searchers.

Advertisers can view asset-level performance at the ad, ad group, and campaign levels. The ad level provides the most detail, but ad groups and campaigns are sufficient if the assets are identical. Regardless, ensure you have enough data for informed decisions — I aim for at least 50 clicks.
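The "100 clicks, zero conversions" filter above is easy to apply to an asset report exported from Google Ads. The field names in this sketch are assumptions about the export format, not Google's actual column names:

```python
# Flag assets with enough traffic to judge but no conversions to show
# for it, per the filter described above. Field names are hypothetical.
def underperformers(assets: list[dict], min_clicks: int = 100) -> list[dict]:
    return [
        a for a in assets
        if a["clicks"] >= min_clicks and a["conversions"] == 0
    ]

report = [
    {"asset": "Headline A", "clicks": 250, "conversions": 12},
    {"asset": "Headline B", "clicks": 130, "conversions": 0},
    {"asset": "Desc C",     "clicks": 40,  "conversions": 0},  # too few clicks to judge
]
flagged = underperformers(report)  # flags only "Headline B"
```

Note that the minimum-clicks threshold does the real work: "Desc C" also has zero conversions, but 40 clicks isn't enough data for an informed removal.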

Pin the best performers. Conversely, lock in the most productive assets by pinning, which forces specific headlines and descriptions to always appear, such as a headline with a better-than-average conversion rate or a description with a low cost per lead.

Creating a new RSA for the top three to seven assets is another option. For example, if headlines A, D, F, and description lines M and N perform well, create an RSA with only those assets.

Keep in mind that pinning assets will reduce an ad’s strength. To be sure, “ad strength” is a novelty metric, but it roughly aligns with the number of likely impressions. Thus pin assets selectively to ensure consistent traffic.

Find new messaging from AI Max. When AI Max is turned on, its reports reveal performance for the automatically created assets.

Recall that AI Max campaigns create assets from copy on an advertiser’s website, landing page, and other ads. If an automatically created asset performs well, consider creating a new RSA ad or adding it to an existing one.

Screenshot of an AI Max performance report

AI Max’s automatic headlines and descriptions are a source for new or existing RSAs.

Caution

More data can lead to bad decisions, so exercise caution. Google Ads’ AI algorithm weighs many variables to determine the best message for each searcher. Knowing the clicks and conversions for each headline and description is helpful, but it is only part of the bigger picture.

Charts: U.S. Small Business Trends Q3 2025

The U.S. Chamber of Commerce Small Business Index is published quarterly in conjunction with MetLife, the financial services firm, and is based on online interviews with 760 small business owners and operators. The index captures owners’ views on the “economy, hiring, investment, and other key economic indicators.”

The index is a measure of owners’ sentiment across key topics with 0 = extremely negative, 100 = extremely positive, and 50 = neutral.

For Q2 2025, the index rose to 65.2, up from 62.3 in the previous quarter, reflecting growing optimism around business health and cash flow.

The National Small Business Association, a 65,000-member non-profit advocacy organization unaffiliated with the U.S. government, conducts an annual in-depth survey of small businesses nationwide on the state of their companies.

This year’s survey report (PDF), issued in May, is based on approximately 650 interviews conducted in April 2025 with small business owners across all 50 states and a wide range of industries. Economic uncertainty is the most significant challenge facing small businesses today, with 59% identifying it as their primary concern.

Despite the uncertainty, roughly 50% of surveyed owners expect their sales to increase this year.

U.S. Bank surveyed 1,000 small business owners with annual revenues of $25 million or less and between two and 99 employees to examine the main macroeconomic challenges they face and their use of digital tools and AI. The survey was carried out from March 14 to April 4, 2025, and published in the bank’s “2025 Small Business Perspective” report (PDF).

Per the survey, U.S. small business owners are adopting new payment options to serve their customers better. Although cash is still the preferred in-store method, other payment options are becoming increasingly popular, with 42% reporting tap-to-pay as a primary method.

Ecosia & Qwant Launch European Search Infrastructure via @sejournal, @MattGSouthern

Ecosia has begun delivering its own search results for the first time in its 16-year history, starting with users in France who will receive a portion of results from a new European search index developed jointly with Qwant.

The rollout marks the first implementation of the European Search Perspective (EUSP) joint venture, which has created Staan (Search Trusted API Access Network), a privacy-focused search infrastructure designed for Europe.

Current Implementation & Timeline

French users are now receiving search results directly from EUSP’s independent European index. Ecosia aims to serve 30% of French search queries through the new infrastructure by the end of 2025.

In a statement to Tech.eu, Christian Kroll, CEO of Ecosia, said:

“Having our own search infrastructure is a critical step for digital plurality and for building a sovereign European alternative. With more control over our offering, we can better serve users, develop ethical AI, and double down on our mission to build tech that benefits people and the planet.”

Technical Independence

Ecosia and Qwant have historically relied on syndication platforms from major US tech companies. The new infrastructure allows both companies to deliver results independently and make backend improvements without relying on external providers.

The broader goal is to reduce reliance on digital infrastructure controlled by foreign companies.

Open Index, Structured For Growth

EUSP isn’t limited to Ecosia and Qwant. The index is open to other companies building search or generative AI tools.

It is also structured to allow outside investment, unlike Ecosia’s steward-owned model, where 99.99% of shares belong to a foundation.

Kroll said the goal is to create an infrastructure that supports competition and innovation in Europe while maintaining strong privacy protections:

“This isn’t just about better search. It’s about the freedom to build and shape the future of tech in Europe.”

Looking Ahead

Ecosia’s partnership with Qwant could lead to more diversity in how European users access and interact with search.

While the initial rollout is limited to France, the infrastructure is designed to scale and support other companies and markets over time.


Featured Image: George Khelashvili/Shutterstock

Google Says AI Clicks Are Better, What Does Your Data Say? via @sejournal, @MattGSouthern

Google’s latest blog post claims AI is making Search more useful than ever. Google says people are asking new kinds of questions, clicking on more links, and spending more time on the content they visit.

But with no supporting data or clear definitions, the message reads more like reassurance than transparency.

Rather than take Google at its word or assume the worst, you can use your own analytics to understand how AI in Search is affecting your site.

Here’s how to do that.

Google Says: “Quality Clicks” Are Up

In the post, Google says total organic traffic is “relatively stable year over year,” but that quality has improved.

According to the company, “quality clicks” are those where users don’t bounce back immediately, indicating they’re finding value in the destination.

This sounds good in theory, but it raises a few questions:

  • How much is “slightly more” when it comes to quality clicks?
  • Which sites are gaining, and which are losing?
  • And how is click quality being measured?

You won’t find those answers in Google’s post. But you can find clues in your own data.

1. Track Click-Through Rate On High-Volume Queries

If you suspect your site has lost ground due to AI Overviews, your first stop should be Google Search Console.

Try this:

  • Filter for top queries from the past 12 months.
  • Look at CTR changes before and after May 2024 (when AI Overviews began expanding).
  • Pay attention to queries that are longer, question-based, or likely to trigger summaries.

You may find impressions are holding steady or rising while CTR declines. That suggests your content is still being surfaced, but users may be getting their answers directly in Google’s AI-generated response.
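That pattern (impressions holding while CTR falls) is straightforward to check against a Search Console query export. Everything in this sketch is an assumption for illustration: the field names, the two sample rows, and the choice of May 2024 as the cutoff.

```python
from datetime import date

# Before/after CTR check on a (hypothetical) Search Console query
# export. CTR = clicks / impressions over the rows in each window.
CUTOFF = date(2024, 5, 1)  # roughly when AI Overviews began expanding

rows = [
    {"query": "what is x", "day": date(2024, 3, 1), "clicks": 40, "impressions": 1000},
    {"query": "what is x", "day": date(2024, 7, 1), "clicks": 25, "impressions": 1100},
]

def ctr(subset: list[dict]) -> float:
    clicks = sum(r["clicks"] for r in subset)
    imps = sum(r["impressions"] for r in subset)
    return clicks / imps if imps else 0.0

ctr_before = ctr([r for r in rows if r["day"] < CUTOFF])
ctr_after = ctr([r for r in rows if r["day"] >= CUTOFF])
# Impressions held steady here while CTR fell: the pattern described above.
```

In practice you would aggregate many queries per window, but the comparison is the same: stable impressions with a falling CTR points to answers being absorbed into the AI-generated response.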

2. Approximate “Quality Clicks” With Engagement Metrics

To test Google’s claim about higher quality clicks, you’ll need to look beyond Search Console.

In GA4, examine:

  • Engaged sessions (sessions lasting more than 10 seconds or including a conversion or multiple pageviews).
  • Average engagement time per session.
  • Scroll depth or video watch time, if applicable.

Compare these engagement metrics to the same period last year. If they’re improving, you may be getting more motivated visitors, supporting Google’s view.

But if they’re dropping, it could mean that AI Overviews are sending fewer, possibly less interested, visitors your way.
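A rough proxy for Google's "quality clicks" claim is the GA4 engaged-session rate for organic traffic, compared year over year. The session counts below are made up for illustration:

```python
# Year-over-year engaged-session rate as a stand-in for click quality.
# GA4 counts a session as engaged if it lasts 10+ seconds, converts,
# or includes multiple pageviews, per the definition above.
def engaged_rate(engaged: int, total: int) -> float:
    return engaged / total

rate_last_year = engaged_rate(5200, 9000)  # roughly 57.8% engaged
rate_this_year = engaged_rate(4800, 7500)  # 64.0% engaged
fewer_but_better = rate_this_year > rate_last_year
```

In this made-up example, total sessions fell but the engaged share rose, which is exactly the "fewer but more motivated visitors" outcome Google's claim would predict. The inverse pattern would undercut it.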

3. See Which Content Formats Are Gaining Visibility

Google says people are increasingly clicking on forums, videos, podcasts, and posts with “authentic voices.”

That aligns with its integration of Reddit and YouTube content into AI Overviews.

To see how this shift might be playing out for you:

  • Compare the performance of listicles, tutorials, and original reviews to more generic content.
  • If you create video or podcast content, track any uptick in referral traffic from Google.
  • Watch for changes in how your forum threads, product reviews, or community content perform compared to static pages.

You may find that narrative-style content, first-hand experiences, and multimedia formats are gaining traction, even if traditional evergreen pages are flat.

4. Watch For Redistribution, Not Just Declines

Google acknowledges that while overall traffic is stable, traffic is being redistributed.

That means some sites will lose while others gain, based on how well they align with evolving search behavior.

If your traffic has declined, it doesn’t necessarily mean your content isn’t ranking. It may be that the types of questions being asked and answered have changed.

Analyzing your top landing pages can help you spot patterns:

  • Are you seeing fewer entries on pages that used to rank for quick-answer queries?
  • Are in-depth or comparison-style pages gaining traffic?

The patterns you spot could help guide your content strategy.

Looking Ahead

When you rely on Search traffic, you deserve more than vague reassurances. Your analytics can help fill in the blanks.

By keeping an eye on your CTR, engagement, and how your content performs, you’ll get a better sense of whether AI in Search is helping you. This way, you can tweak your strategy to fit what works best for you.


Featured Image: Roman Samborskyi/Shutterstock