Google Brings AI Mode To Chrome’s Address Bar via @sejournal, @MattGSouthern

Google is rolling out AI Mode to the address bar in Chrome for U.S. users.

This move is part of a series of AI updates, including Gemini in Chrome, page-aware question prompts, improved scam protection, and instant password changes.


What’s New

Google Chrome will enable you to access AI Mode directly from the search bar on desktop, ask follow-up questions, and explore the web in more depth.

Additionally, Google is introducing contextual prompts that are connected to the page you’re currently viewing. When you use these prompts, an AI Overview will appear on the right side of the screen, allowing you to continue using AI Mode without leaving the page.

For now, this feature is available in English in the U.S., with plans to expand internationally.

Gemini In Chrome

Gemini in Chrome is rolling out to Mac and Windows users in the U.S.

You can ask it to clarify complex information across multiple tabs, summarize open tabs, and consolidate details into a single view.

With Calendar, YouTube, and Maps integrations, you can jump to a specific point in a video, get location details, or set meetings without switching tabs.

Google plans to add agentic capabilities in the coming months. Gemini will be able to perform tasks for you on the web, such as booking appointments or placing orders, with the option to stop it at any time.

Regarding availability, Google notes that business access will be available “in the coming weeks” through Workspace with enterprise-grade protections.

Security Enhancements

Enhanced protection in Safe Browsing now uses Gemini Nano to detect tech-support-style scams, making browsing safer. Google is also working on extending this protection to block fake virus alerts and fake giveaways.

Chrome is using AI to help reduce annoying spammy site notifications and to lower the prominence of intrusive permission prompts.

Additionally, Chrome will soon serve as a password helper, automatically changing compromised passwords with a single click on supported sites.

Why This Matters

Adding AI Mode to the omnibox makes it easier to ask conversational questions and follow-ups.

Content that answers related questions and compares options side by side may align better with these types of searches. Page-aware prompts also create new ways to explore related topics from article pages, which could change how people click through to other content.

Looking Ahead

Google frames this as “the biggest upgrade to Chrome in its history,” with staged rollouts and more countries and languages to come.


Featured Image: Photo Agency / Shutterstock

Personas Are Critical For AI Search via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Here’s what I’m covering this week: How to build user personas for SEO from data you already have on hand.

You can’t treat personas as a “brand exercise” anymore.

In the AI-search era, prompts don’t just tell you what users want; they reveal who’s asking and under what constraints.

If your pages don’t match the person behind the query and connect with them quickly – their role, the risks and concerns they have, and the proof they require to resolve the intent – you’re likely not going to win the click or the conversion.

It’s time to not only pay attention and listen to your customers, but also optimize for their behavioral patterns.

Search used to be simple: queries = intent. You matched a keyword to a page and called it a day.

Personas were a nice-to-have, often useful for ads, creative, or UX decisions, but mostly considered irrelevant to organic visibility or growth.

Not anymore.

Longer prompts and personalized results don’t just express what someone wants; they also expose who they are and the constraints they’re operating under.

AIOs and AI chats act as a preview layer and borrow trust from known brands. However, blue links still close the deal when your content speaks to the person behind the prompt.

If that sounds like hard work, it is. And it’s why most teams stall implementing search personas across their strategy.

  • Personas can feel expensive, generic, academic, or agency-driven.
  • The old persona PDFs your brand invested in 3-5 years ago are dated – or missing entirely.
  • The resources, time, and knowledge it takes to build user personas are still significant blockers to getting the work done.

In this memo, I’ll show you how to build lean, practical, LLM-ready user personas for SEO – using the data you already have, shaped by real behavioral insights – so your pages are chosen when it counts.

While there are a few ways you could do this, and several really excellent articles out there on SEO personas this past year, this is the approach I take with my clients.

Most legacy persona decks were built for branding, not for search operators.

They don’t tell your writers, SEOs, or PMs what to do next, so they get ignored by your team after they’re created.

Mistake #1: Demographics ≠ Decisions

Classic user personas for SEO and marketing overfocused on demographics, which can give some surface-level insights into stereotypical behavior for certain groups.

But demographics don’t necessarily help your brand stand out against your competitors. And demographics don’t offer you the full picture.

Mistake #2: A Static PDF Or Shared Doc Ages Fast

If your personas were created once and never reanalyzed or updated again, it’s likely they got lost in G: Drive or Dropbox purgatory.

If there’s no owner working to ensure they’re implemented across production, there’s no feedback loop to understand if they’re working or if something needs to change.

Mistake #3: Pretty Delivered Decks, No Actionable Insights

Those well-designed persona deliverables look great, but when they aren’t tied to briefs, citations, trust signals, your content calendar, etc., they end up siloed from production. If a persona can’t shape a prompt or a page, it won’t shape any of your outcomes.

In addition to the fact that classic personas weren’t built to be implemented across your search strategy, AI has shifted us from optimizing for intent to optimizing for identity and trust. In last week’s memo I shared the following:

The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time. Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.

Old world of search optimization: Queries signaled intent. You ranked a page that matched the keyword and intent behind it, and your brand would catch the click. Personas were optional.

New world of search optimization: Prompts expose people, and AI changes how we search. Marketers aren’t just optimizing for search intent or demographics; we’re also optimizing for behavior.

Long AI prompts don’t just say what the user intends – they often reveal who is asking and what constraints or background of knowledge they bring.

For example, if a user prompts ChatGPT something like “I’m a healthcare compliance officer at a mid-sized hospital. Can you draft a checklist for evaluating new SaaS vendors, making sure it covers HIPAA regulations and costs under $50K a year,” then ChatGPT would have background information about the user’s general compliance needs, budget ceilings, risk tolerance, and preferred content formats.

AI systems then personalize summaries and citations around that context.

If your content doesn’t meet the persona’s trust requirements or output preference, it won’t be surfaced.

What that means in practice:

  • Prompts → identity signals. “As a solo marketer on a $2,000 budget…” or “for EU users under GDPR…” = role, constraints, and risk baked into the query.
  • Trust beats length. Classic search results are clicked on, but only when pages show the trust scaffolding a given persona needs for a specific query.
  • Format matters. Some personas want TL;DR and tables; others need demos, community validation (YouTube/Reddit), or primary sources.

So, here’s what to do about it.

You don’t need a five or six-figure agency study (although those are nice to have).

You need:

  • A collection of your already-existing data.
  • A repeatable process, not a static file.
  • A way to tie personas directly into briefs and prompts.

Turning your own existing data into usable user personas for SEO will equip you to tie personas directly to content briefs and SEO workflows.

Before you start collecting this data, set up an organized way to store it: Google Sheets, Notion, Airtable – whatever your team prefers. Store your custom persona prompt cards there, too, and you can copy and paste from there into ChatGPT & Co. as needed.

The work below isn’t for the faint of heart, but it will change how you prompt LLMs in your AI-powered workflows and your SEO-focused webpages for the better.

  1. Collect and cluster data.
  2. Draft persona prompt cards.
  3. Calibrate in ChatGPT & Co.
  4. Validate with real-world signals.

1. Collect And Cluster Data

You’re going to mine several data sources that you already have, both qualitative and quantitative.

Keep in mind, being sloppy during this step means you will not have a good base for an “LLM ready” persona prompt card, which I’ll discuss in Step 2.

Attributes to capture for an “LLM-ready persona”:

  • Jobs-to-be-done (top 3).
  • Role and seniority.
  • Buying triggers + blockers (think budget, IT/legal constraints, risk).
  • 10-20 example questions at TOFU, MOFU, BOFU stages.
  • Trust cues (creators, domains, formats).
  • Output preferences (depth, format, tone).

Where AIO validation style data comes in:

Last week, we discussed four distinct AIO validation patterns identified in the AIO usability study: Efficiency-first/Trust-driven/Comparative/Skeptical rejection.

If you want to incorporate this in your persona research – and I’d advise that you should – you’re going to look for:

  • Hesitation triggers across interactions with your brand: What makes them pause or refine their question (whether on a sales call or a heat map recording).
  • Click-out anchors: Which authority brands they use to validate (PayPal, NIH, Mayo Clinic, Stripe, KBB, etc.); use Sparktoro to find this information.
  • Evidence threshold: What proof ends hesitation for your user or different personas? (Citations, official terminology, dated reviews, side-by-side tables, videos).
  • Device/age nuance: Younger and mobile users → faster AIO acceptance; older cohorts → blue links and authority domains win clicks.

Below, I’ll walk you through where to find this information.

Quantitative Inputs

1. Your GSC queries hold a wealth of info. Split by TOFU/MOFU/BOFU, branded vs non-branded, and country. Then, use a regex to map question-style queries and see who’s really searching at each stage.

Below is the regex I like to use, which I discussed in “Is AI cutting into your SEO conversions?” It also works for this task:

(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|list(?:s)?|comparison|vs|difference|benefits|advantages|alternatives)\b.*
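
If you want to apply that regex programmatically, here’s a minimal Python sketch that flags question-style queries in a GSC export. The file name and columns (query, clicks, impressions) are placeholder assumptions; adapt them to however you export your data.

```python
import re

import pandas as pd

# The question-style pattern from above, with the word boundary (\b) included
QUESTION_PATTERN = re.compile(
    r"^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|"
    r"course|learn|examples?|definition|meaning|checklist|framework|template|"
    r"tips?|ideas?|best|top|list(?:s)?|comparison|vs|difference|benefits|"
    r"advantages|alternatives)\b",
    re.IGNORECASE,
)

# Hypothetical GSC export; column names are placeholders for illustration
df = pd.read_csv("gsc_queries.csv")  # assumed columns: query, clicks, impressions

# Flag question-style queries so you can review who is really asking at each stage
df["is_question_style"] = df["query"].astype(str).apply(
    lambda q: bool(QUESTION_PATTERN.match(q))
)

top_questions = (
    df[df["is_question_style"]]
    .sort_values("impressions", ascending=False)
    .head(20)
)
print(top_questions[["query", "clicks", "impressions"]])
```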

2. On-Site Search Logs. These are the records of what visitors type into your website’s own search bar (not Google).

Extract exact phrasing of problems and “missing content” signals (like zero results, refined searches, or high exits/no clicks).

Plus, the wording visitors use reveals jobs-to-be-done, constraints, and vocabulary you should mirror on the page. Flag repeat questions as latent questions to resolve.
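
If your site search tool can export its log, a quick aggregation like the sketch below surfaces zero-result searches and repeat questions worth flagging. The file name, columns, and threshold are illustrative assumptions, not a specific tool’s schema.

```python
import pandas as pd

# Hypothetical export from your on-site search tool
# assumed columns: search_term, results_count, clicked_result (True/False)
logs = pd.read_csv("site_search_log.csv")

# "Missing content" signals: searches that returned nothing
zero_results = (
    logs[logs["results_count"] == 0]
    .groupby("search_term")
    .size()
    .sort_values(ascending=False)
)

# Latent questions: phrases visitors keep typing but rarely click through on
repeat_no_click = (
    logs[~logs["clicked_result"]]
    .groupby("search_term")
    .size()
    .loc[lambda s: s >= 5]  # arbitrary threshold; tune to your search volume
    .sort_values(ascending=False)
)

print(zero_results.head(20))
print(repeat_no_click.head(20))
```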

3. Support Tickets, CRM Notes, Win/Loss Analysis. Convert objections, blockers, and “how do I…” threads into searchable intents and hesitation themes.

Mine the following data from your records:

  • Support: Ticket titles, first message, last agent note, resolution summary.
  • CRM: Opportunity notes, metrics, decision criteria, lost-reason text.
  • Win/Loss: Objection snapshots, competitor cited, decision drivers, de-risking asks.
  • Context (if available): buyer role, segment (SMB/MM/ENT), region, product line, funnel stage.

Once gathered, compile and analyze to distill patterns.
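
One lightweight way to distill those patterns is to cluster the raw text and skim each cluster for a theme. Below is a minimal scikit-learn sketch; the input file and column are assumptions, and any list of ticket titles, lost-reason notes, or objection snippets will do.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical combined export of support titles, CRM notes, and win/loss snippets
texts = pd.read_csv("voice_of_customer.csv")["text"].dropna().tolist()

# Vectorize the raw wording so similar phrasing lands in the same cluster
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(texts)

# Cluster into candidate themes; set n_clusters near the number of personas you expect
kmeans = KMeans(n_clusters=6, random_state=42, n_init=10)
labels = kmeans.fit_predict(X)

# Print the top terms per cluster so you can name each theme by hand
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in center.argsort()[-8:][::-1]]
    print(f"Cluster {i}: {', '.join(top_terms)}")
```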

Qualitative Inputs

1. Your sales calls and customer success notes are a wealth of information.

Use AI to analyze transcripts and/or notes to highlight jobs-to-be-done, triggers, blockers, and decision criteria in your customer’s own words.

2. Reddit and social media discussions.

This is where your buyers actually compare options and validate claims; capture the authority anchors (brands/domains) they trust.

3. Community/Slack spaces, email newsletter replies, article comments, short post-purchase or signup surveys.

Mine recurring “stuck points” and vocabulary you should mirror. Bucket recurring themes together and correlate across other data.

Pro tip: Use your topic map as the semantic backbone for all qualitative synthesis – discussed in depth in how to operationalize topic-first SEO. You’d start by locking the parent topics, then layer your personas as lenses: For each parent topic, fan out subtopics by persona, funnel stage, and the “people × problems” you pull from sales calls, CS notes, Reddit/LinkedIn, and community threads. Flag zero-volume/fringe questions on your map as priorities; they deepen authority and often resolve the hesitation themes your notes reveal.

After clustering pain points and recurring queries, you can take it one step further to tag each cluster with an AIO pattern by looking for:

  • Short dwell + 0–1 scroll + no refinements → Efficiency-first validations.
  • Longer dwell + multiple scrolls + hesitation language + authority click-outs → Trust-driven validations.
  • Four to five scrolls + multiple tabs (YouTube/Reddit/vendor) → Comparative validations.
  • Minimal AIO engagement + direct authority clicks (gov/medical/finance) → Skeptical rejection.
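
If you can export dwell, scroll, and click-out metrics per cluster from your analytics or heatmap tooling, a rough rules-based pass can make a first assignment for you. The thresholds and field names below are illustrative assumptions, not validated cutoffs; calibrate them against your own recordings.

```python
def tag_aio_pattern(dwell_seconds, scrolls, refinements, authority_clickouts, tabs_opened):
    """First-pass tagging of a query cluster with an AIO validation pattern.

    All thresholds are illustrative; tune them against your own heatmap and
    session-recording data.
    """
    if authority_clickouts and dwell_seconds < 15 and scrolls <= 1:
        return "Skeptical rejection"  # minimal AIO engagement, straight to authority sites
    if scrolls >= 4 or tabs_opened >= 3:
        return "Comparative"          # multiple scrolls/tabs across YouTube, Reddit, vendors
    if dwell_seconds > 45 or refinements >= 2 or authority_clickouts:
        return "Trust-driven"         # longer dwell, hesitation, validation click-outs
    if dwell_seconds <= 20 and scrolls <= 1 and refinements == 0:
        return "Efficiency-first"     # short dwell, little scrolling, no refinements
    return "Unclassified"


# Example: long dwell, two refinements, one authority click-out -> Trust-driven
print(tag_aio_pattern(dwell_seconds=70, scrolls=2, refinements=2,
                      authority_clickouts=True, tabs_opened=1))
```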

Not every team can run a full-blown usability study of the search results for targeted queries and topics, but you can infer many of these behavioral patterns through heatmaps of your own pages that have strong organic visibility.

2. Draft Persona Prompt Cards

Next up, you’ll use this data to inform the creation of a persona card.

A persona card is a one-page, ready-to-go snapshot of a target user segment that your marketing/SEO team can act on.

Unlike empty or demographic-heavy personas, a persona card ties jobs-to-be-done, constraints, questions, and trust cues directly to how you brief pages, structure proofs, and prompt LLMs.

A persona card ensures your pages and prompts match identity + trust requirements.

What you’re going to do in this step is convert each data-based persona cluster into a one-pager designed to be embedded directly into LLM prompts.

Include input patterns you expect from that persona – and the output format they’d likely want.

Optimizing Prompt Selection for Target Audience Engagement

Reusable Template: Persona Prompt Card

Drop this at the top of a ChatGPT conversation or save as a snippet.

Below is an example template based specifically on the Growth Memo audience, so you’ll need to not only modify it for your needs, but also tweak it per persona.

You are Kevin Indig advising a [ROLE, SENIORITY] at a [COMPANY TYPE, SIZE, LOCATION].

Objective: [Top 1–2 goals tied to KPIs and timeline]

Context: [Market, constraints, budget guardrails, compliance/IT notes]

Persona question style: [Example inputs they’d type; tone & jargon tolerance] 

Answer format:

- Start with a 3-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to [brevity/depth]; [bullets/narrative]; include [table/chart] if useful.

What to avoid: [Banned claims, fluff, vendor speak] 

Citations: Prefer [domains/creators] and original research when possible.

Example Attribute Sets Using The Growth Memo Audience

Use this card as a starting point, then fill it with your data.

Below is an example of the prompt card with attributes filled for one of the ideal customer profiles (ICP) for the Growth Memo audience.

You are Kevin Indig advising an SEO Lead (Senior) at a Mid-Market B2B SaaS (US/EU).

Objective: Protect and grow organic pipeline in the AI-search era; drive qualified trials/demos in Q4; build durable topic authority.

Context: Competitive category; CMS constraints + limited Eng bandwidth; GDPR/CCPA; security/legal review for pages; budget ≤ $8,000/mo for content + tools; stakeholders: VP Marketing, Content Lead, PMM, RevOps.

Persona question style: “How do I measure topic performance vs keywords?”, “How do I structure entity-based internal linking?”, “What KPIs prove AIO exposure matters?”, “Regex for TOFU/MOFU/BOFU?”, “How to brief comparison pages that AIO cites?” Tone: precise, low-fluff, technical.

AIO validation profile:

- Dominant pattern(s): Trust-driven (primary), Comparative (frameworks/tools); Skeptical for YMYL claims.

- Hesitation triggers: Black-box vendor claims; non-replicable methods; missing citations; unclear risk/effort.

- Click-out anchors: Google Search Central & docs, schema.org, reputable research (Semrush/Ahrefs/SISTRIX/seoClarity), Pew/Ofcom, credible case studies, engineering/product docs.

- SERP feature bias: Skims AIO/snippets to frame, validates via organic authority + primary sources; uses YouTube for demos; largely ignores Ads.

- Evidence threshold: Methodology notes, datasets/replication steps, benchmarks, decision tables, risk trade-offs.

Answer format:

- Start with a three-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to brevity + bullets; include a table/chart if useful.

Proof kit to include on-page:

Methodology & data provenance; decision table (framework/tool choice); “best for / not for”; internal-linking map or schema snippet; last-reviewed date; citations to Google docs/primary research; short demo or worksheet (e.g., Topic Coverage Score or KPI tree).

What to avoid:

Vendor-speak; outdated screenshots; cherry-picked wins; unverifiable stats; hand-wavy “AI magic.”

Citations:

Prefer Google Search Central/docs, schema.org, original studies/datasets; reputable tool research (Semrush, Ahrefs, SISTRIX, seoClarity); peer case studies with numbers.

Success signals to watch:

Topic-level lift (impressions/CTR/coverage), assisted conversions from topic clusters, AIO/snippet presence for key topics, authority referrals, demo starts from comparison hubs, reduced content decay, improved crawl/indexation on priority clusters.

3. Calibrate In ChatGPT & Co.

Your goal here is to prove the Persona Prompt Cards actually produce useful answers – and to learn what evidence each persona needs.

Create one Custom Instruction profile per persona, or store each Persona Prompt Card as a prompt snippet you can prepend.

Run 10-15 real queries per persona. Score answers on clarity, scannability, credibility, and differentiation to your standard.

How to run the prompt card calibration:

  • Set up: Save one Prompt Card per persona.
  • Eval set: 10-15 real queries/persona across TOFU/MOFU/BOFU stages, including two or three YMYL or compliance-based queries, three to four comparisons, and three or four quick how-tos.
  • Ask for structure: Require TL;DR → numbered playbook → table → risks → citations (per the card).
  • Modify it: Add constraints and location variants; ask the same query two ways to test consistency.
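
To make the calibration repeatable, a small loop like the sketch below prepends a Persona Prompt Card to each eval query and logs the output for scoring. It assumes the OpenAI Python SDK, a placeholder model name, and placeholder file names; swap in whichever client and model your audience actually uses.

```python
import csv

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Load the Persona Prompt Card you drafted in Step 2 (placeholder file name)
with open("persona_card_seo_lead.txt") as f:
    persona_card = f.read()

eval_queries = [
    "How do I measure topic performance vs keywords?",
    "How do I structure entity-based internal linking?",
    "What KPIs prove AIO exposure matters?",
    # ...10-15 real queries per persona across TOFU/MOFU/BOFU
]

with open("calibration_log.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["query", "answer", "clarity", "scannability", "credibility", "differentiation"])
    for query in eval_queries:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use the model your audience leans on
            messages=[
                {"role": "system", "content": persona_card},
                {"role": "user", "content": query},
            ],
        )
        answer = response.choices[0].message.content
        # Fill in the four score columns by hand (or with a rubric prompt) after the run
        writer.writerow([query, answer, "", "", "", ""])
```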

Once you run sample queries to check for clarity and credibility, modify or upgrade your Persona Card as needed: Add missing trust anchors or evidence the model needed.

Save winning outputs as examples that guide your briefs and can be pasted into drafts.

Log recurring misses (hallucinated stats, undated claims) as acceptance checks for production.

Then, do this for other LLMs that your audience uses. For instance, if your audience leans heavily toward Perplexity.ai, calibrate your prompt there as well. Make sure to run the prompt card outputs in Google’s AI Mode, too.

4. Validate With Real-World Signals

Watch branded search trends, assisted conversions, and non-Google referrals to see if influence shows up where expected when you publish persona-tuned assets.

And make sure to measure lift by topic, not just per page: Segment performance by topic cluster (GSC regex or GA4 topic dimension). Operationalizing your topic-first SEO strategy discusses how to do this.
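
One minimal way to do that segmentation is to map queries to parent topics with regex and roll GSC metrics up to the topic level. The cluster patterns, file name, and columns below are placeholders; swap in your own topic map and export.

```python
import re

import pandas as pd

# Hypothetical GSC export; assumed columns: query, clicks, impressions
df = pd.read_csv("gsc_queries_last_90_days.csv")

# Map each query to a parent topic using simple patterns from your topic map
TOPIC_PATTERNS = {
    "internal linking": re.compile(r"internal link|entity link", re.I),
    "ai search": re.compile(r"\bai (mode|overview|search)\b|llm", re.I),
    "personas": re.compile(r"persona|\bicp\b", re.I),
}

def assign_topic(query: str) -> str:
    for topic, pattern in TOPIC_PATTERNS.items():
        if pattern.search(query):
            return topic
    return "other"

df["topic"] = df["query"].astype(str).apply(assign_topic)

# Topic-level rollup: the unit to compare at 30/60/90 days after shipping persona-tuned pages
topic_performance = (
    df.groupby("topic")[["clicks", "impressions"]]
    .sum()
    .assign(ctr=lambda t: t["clicks"] / t["impressions"])
    .sort_values("clicks", ascending=False)
)
print(topic_performance)
```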

Keep the following in mind when reviewing real-world signals:

  • Review at 30/60/90 days post-ship, and by topic cluster.
  • If Trust-driven pages show high scroll/low conversions → add/upgrade citations and expert reviews and quotes.
  • If Comparative pages get CTR but few product demo or sales signups → add short demo video, “best for / not for” sections, and clearer CTAs.
  • If Efficiency-first pages miss lifts in AIO/snippets → tighten TL;DR, simplify tables, add schema.
  • If Skeptical-rejection-geared pages yield authority traffic but no lift → consider pursuing authority partnerships.
  • Most importantly: redo the exercise every 60-90 days and match your new against old personas to iterate toward the ideal.

Building user personas for SEO is worth it, and it can be done quickly using in-house data and LLM support.

I challenge you to start with one lean persona this week to test this approach. Refine and expand your approach based on the results you see.

But if you plan to take this persona-building project on, avoid these common missteps:

  • Creating tidy PDFs with zero long-term benefits: Personas that don’t specify core search intents, pain points, and AIO intent patterns won’t move behavior.
  • Winning every SERP feature: This is a waste of time. Optimize your content for the right surface for the dominant behavioral patterns of your target users.
  • Ignoring hesitation: Hesitation is your biggest signal. If you don’t resolve it on-page, the click dies elsewhere.
  • Demographics over jobs-to-be-done: Focusing on characteristics of identity without incorporating behavioral patterns is the old way.

Featured Image: Paulo Bobita/Search Engine Journal

ChatGPT Study: 1 In 4 Conversations Now Seek Information via @sejournal, @MattGSouthern

New research from OpenAI and Harvard finds that “Seeking Information” messages now account for 24% of ChatGPT conversations, up from 14% a year earlier.

This is an NBER working paper (not peer-reviewed), based on consumer ChatGPT plans only, and the study used privacy-preserving methods where no human read user messages.

The working paper analyzes a representative sample of about 1.1 million conversations from May 2024 through June 2025.

By July, ChatGPT reached more than 700 million weekly active users, sending roughly 2.5 billion messages per day, or about 18 billion per week.

What People Use ChatGPT For

The three dominant topics are Practical Guidance, Seeking Information, and Writing, which together account for about 77% of usage.

Practical Guidance remains around 29%. Writing declined from 36% to 24% over the past year. Seeking Information grew from 14% to 24%.

The authors write that Seeking Information “appears to be a very close substitute for web search.”

Asking vs. Doing

The paper classifies intent as Asking, Doing, or Expressing.

About 49% of messages are Asking, 40% are Doing, and 11% are Expressing.

Asking messages “are consistently rated as having higher quality” than the other categories, based on an automated classifier and user feedback.

Work vs. Personal Use

Non-work usage rose from 53% in June 2024 to 73% in June 2025.

At work, Writing is the top use case, representing about 40% of work-related messages. Education is a major use: 10% of all messages involve tutoring or teaching.

Coding And Companionship

Only 4.2% of messages are about computer programming, and 1.9% concern relationships or personal reflection.

Who’s Using It

The study documents rapid global adoption.

Early gender gaps have narrowed, with the share of users having typically feminine names rising from 37% in January 2024 to 52% in July 2025.

Growth in the lowest-income countries has been more than four times that of the highest-income countries.

Why This Matters

If a quarter of conversations are information-seeking, some queries that would have gone to search may go toward conversational tools.

Consider responding to this shift with content that answers questions, while adding expertise that a chatbot can’t replicate. Writing and editing account for a large share of work-related use, which aligns with how teams are already folding AI into content workflows.

Looking Ahead

ChatGPT is becoming a major destination for finding information online.

In addition to the shift toward finding info, it’s worth highlighting that 70% of ChatGPT use is personal, not professional. This means consumer habits are changing broadly.

As this technology grows, it’ll be vital to track how your audience uses AI tools and adjust your content strategy to meet them where they are.


Featured Image: Photo Agency/Shutterstock

When Advertising Shifts To Prompts, What Should Advertisers Do? via @sejournal, @siliconvallaeys

When I last wrote about Google AI Mode, my focus was on the big differentiators: conversational prompts, memory-driven personalization, and the crucial pivot from keywords to context.

As we see with the Q2 ad platform financial results below, this shift is rapidly reshaping performance advertising. While AI Mode means Google has to rethink how it makes money, it forces us advertisers to rethink something even more fundamental: our entire strategy.

In the article about AI Mode, I laid out how prompts are different from keywords, why “synthetic keywords” are really just a temporary band-aid, and how fewer clicks might just challenge the age-old cost-per-click (CPC) revenue model.

This follow-up is about what these changes truly mean for us as advertisers, and why holding onto that keyword-era mindset could cost us our competitive edge.

The Great Rewiring Of Search

The biggest shift since we first got keyword-targeted online advertising is now in full swing. People aren’t searching with those relatively concise keywords anymore, the ones we optimized for how Google used to weigh certain words in a query.

Large language models (LLMs) have pretty much removed the shackles from the search bar. Now, users can fire off prompts with hundreds of words, and add even more context.

Think about the 400,000-token context window of GPT-5, which works out to hundreds of thousands of words. Thankfully, most people don’t need that much space to explain what they want, but they are speaking in full sentences now, stutters and all.

Google’s internal documentation on ads in AI Mode shares that early testers of AI Mode are asking queries that are two to three times as long as traditional searches on Google.

And thanks to LLMs’ multi-modal capabilities, users are searching with images (Google reports 20 billion Lens searches per month), drawing sketches, and even sending video. They’re finding what they need in entirely new ways.

Increasingly, users aren’t just looking for a list of what might be relevant. They expect a guided answer from the AI, one that summarizes options based on their personal preferences. People are asking AI to help them decide, not just to find.

And that fundamental change in user behavior is now reshaping the very platforms where these searches happen, starting with Google.

The Impact On Google As The Main Ads Platform

All of this definitely poses a threat to Google’s primary revenue stream. But as I mentioned in a LinkedIn post, the traffic didn’t vanish; it just moved.

Users didn’t ditch Google; they simply stopped using it the way they did when keywords were king. Plus, we’re seeing new players emerge, and search itself has fragmented:

This creates a fresh challenge for us advertisers: How do we design campaigns that actually perform when intent originates in these wildly new ways?

What Q2 Earnings Reports Told Us About AI In Search

The Q2 earnings calls were packed with GenAI details. Some of the most jaw-dropping figures involved the expected infrastructure investments.

Microsoft announced plans to spend an eye-watering $30 billion on capital expenditures in the coming quarter, and Alphabet estimated an $85 billion budget for the next year. I guess we’ll all be clicking a lot of ads to help pay for that. So, where will those ads come from when keywords are slowly being replaced by prompts?

Google shared some numbers to illustrate the scale of this shift. AI Overviews already reach 2 billion users a month. AI Mode itself is up to 100 million. The real question is, how is AI actually enabling better ads, and thus improving monetization?

Google reports:

  • Over 90 Performance Max improvements in the past year drove 10%+ more conversions and value.
  • Google’s AI Max for Search campaigns show a 27% lift in conversions or value over exact or phrase matches.

Microsoft Ads tells a similar story. In Q2 2025, it reported:

  • $13 billion in AI-related ad revenue.
  • Copilot-powered ads drove 2.3 times more conversions than traditional formats.
  • Users were 53% more likely to convert within 30 minutes.

So, what’s an advertiser to do with all this?

What Advertisers Should Do

As shared recently in a conversation with Kasim Aslam, these ecosystems are becoming intent originators. That old “search bar” is now a conversation, a screenshot, or even a voice command.

If your campaigns are still relying on waiting for someone to type a query, you’re showing up to the party late. Smart advertisers don’t just respond to intent; they predict it and position for it.

But how? Well, take a look at the Google products that are driving results for advertisers: They’re the newest AI-first offerings. Performance Max, for example, is keywordless advertising driven by feeds, creative, and audiences.

Another vital step for adapting to this shift is AI Max, which I’d call the most unrestrictive form of keyword advertising.

It blends elements of Dynamic Search Ads (DSAs), automatically created assets, and super broad keywords. This allows your ads to show up no matter how people search, even if they’re using those sprawling, multi-part prompts.

Sure, advertisers can still use today’s best practices, like reviewing search term reports and automatically created assets, then adding negatives or exclusions for the irrelevant ones. But let’s be honest, that’s a short-term, old-model approach.

As AI gains memory and contextual understanding, ads will be shown based on scenarios and user intent that isn’t even explicitly expressed.

Relying solely on negatives won’t cut it. The future demands that advertisers focus on getting involved earlier in the decision-making process and making sure the AI has all the right information to advocate for their brand.

Keywords Aren’t The Lever They Once Were

In the AI Mode era, prompts aren’t just simple queries; they’re rich, multi-turn conversations packed with context.

As I outlined in my last article, these interactions can pull in past sessions, images, and deeply personal preferences. No keyword list in the world can capture that level of nuance.

Tinuiti’s Q2 benchmark report shows Performance Max accounts for 59% of Shopping ad spend and delivers 18% higher click-through rates. This is a clear illustration that the platform is taking control of targeting.

And when structured feeds plus dynamic creative drive a 27% lift in conversions according to Google data, it’s because the creative itself is doing the targeting.

Those journeys happen out of sight, which is the biggest threat to advertisers whose strategies aren’t evolving.

The Real Danger: Invisible Decisions

One of my key takeaways from the AI Mode discussion was the risk of “zero-click” journeys. If the assistant delivers what a user needs inside the conversation, your brand might never get a visit.

According to Adobe Analytics, AI-powered referrals to U.S. retail sites grew 1,200% between July 2024 and February 2025. Traffic from these sources now doubles every 60 days.

These users:

  • Visit 12% more pages per session.
  • Bounce 23% less often.
  • Spend 45% more time browsing (especially in travel and finance verticals).

Even more importantly, 53% of users say they plan to rely on AI tools for shopping going forward.

In short, users are starting their journeys before they reach a traditional search engine, and they’re more engaged when they do. And winning in this environment means rethinking our levers for influence.

Why This Is An Opportunity, Not A Death Sentence

As I argued before, platforms aren’t killing keyword advertising; they’re evolving it. The advertisers winning now are leaning into the new levers:

Signals Over Keywords

  • Use customer relationship management (CRM) data to build high-intent audience lists.
  • Layer first-party data into automated campaign types through conversion value adjustments, audiences, or budget settings.
  • Optimize your product feed with rich attributes so AI has more to work with and knows exactly which products to recommend.
  • Ensure feed hygiene so LLMs have the most current data about your offers.
  • Enhance your website with more data for the LLMs to work with, like data tables and schema markup.

Creative As Targeting

  • Build modular ad assets that AI can assemble dynamically: multiple headlines, descriptions, and images tailored to different audiences.
  • Test variations that align with different stages of the buying journey so you’re likely to show in more contextual scenarios across the entire consumer journey, not only at the end.

Measurement Beyond Clicks

  • Frequently evaluate the new metrics in Google Ads for AI Max and Performance Max. Changes are rolling out frequently, enabling smarter optimizations.
  • Track feed impression share by enabling these extra columns in Google Ads.
  • Monitor how often your products are surfaced in AI-driven recommendations, as with the recently updated AI Max report for “search terms and landing pages from AI Max.”
  • Focus your measurement on how well users are able to complete tasks, not just clicks.

The future isn’t about bidding on a query. It’s about supplying the AI with the best “raw ingredients” so you win the recommendation at the exact moment of decision.

That mindset shift is the real competitive advantage in the AI-first era.

The Bottom Line

My previous AI Mode post was about the mechanics of the shift. This one is about the mindset change required to survive it.

Keywords aren’t vanishing, but their role is shrinking fast. In an AI-driven, context-first search landscape, the brands that thrive will stop obsessing over what the user types and start shaping what the AI recommends.

If you can win that moment, you won’t just get found. You’ll get chosen.



Featured Image: Smile Studio AP/Shutterstock

Google Gemini Adds Audio File Uploads After Being Top User Request via @sejournal, @MattGSouthern

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, all inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Anthropic Agrees To $1.5B Settlement Over Pirated Books via @sejournal, @MattGSouthern

Anthropic agreed to a proposed $1.5 billion settlement in Bartz v. Anthropic over claims it downloaded pirated books to help train Claude.

If approved, plaintiffs’ counsel says it would be the largest U.S. copyright recovery to date. A preliminary approval hearing is set for today.

In June, Judge William Alsup held that training on lawfully obtained books can qualify as fair use, while copying and storing millions of pirated books is infringement. That order set the stage for settlement talks.

Settlement Details

The deal would pay about $3,000 per eligible title, with an estimated class size of roughly 500,000 books. Plaintiffs allege Anthropic pulled at least 7 million copies from piracy sites Library Genesis and Pirate Library Mirror.

Justin Nelson, counsel for the authors, said:

“As best as we can tell, it’s the largest copyright recovery ever.”

How Payouts Would Work

According to the Authors Guild’s summary, the fund is paid in four tranches after court approvals: $300M soon after preliminary approval, $300M after final approval, then $450M at 12 months and $450M at 24 months, with interest accruing in escrow.

A final “Works List” is due October 10, which will drive a searchable database for claimants.

The Guild notes the agreement requires destruction of pirated copies and resolves only past conduct.

Why This Matters

If you rely on AI tools in content workflows, provenance now matters more. Expect more licensing deals and clearer disclosures from vendors about training data sources.

For publishers and creators, the per-work payout sets a reference point that may strengthen negotiating leverage in future licensing talks.

Looking Ahead

The judge will consider preliminary approval today. If granted, the notice process begins this fall and payments to rightsholders would follow final approval and claims processing, funded on the installment schedule above.


Featured Image: Tigarto/Shutterstock

Google Publishes Exact Gemini Usage Limits Across All Tiers via @sejournal, @MattGSouthern

Google has published exact usage limits for Gemini Apps across the free tier and paid Google AI plans, replacing earlier vague language with concrete numbers marketers can plan around.

The Help Center update covers daily caps for prompts, images, Deep Research, video generation, and context windows, and notes that you’ll see in-product notices when you’re close to a limit.

What’s New

Until recently, Google’s documentation used general phrasing about “limited access” without specifying amounts.

The Help Center page now lists per-tier allowances for Gemini 2.5 Pro prompts, image generation, Deep Research, and more. It also clarifies that practical caps can vary with prompt complexity, file sizes, and conversation length, and that limits may change over time.

Google’s Help Center states:

“Gemini Apps has usage limits designed to ensure an optimal experience for everyone… we may at times have to cap the number of prompts, conversations, and generated assets that you can have within a specific timeframe.”

Free vs. Paid Tiers

On the free experience, you can use Gemini 2.5 Pro for up to five prompts per day.

The page lists general access to 2.5 Flash and includes:

  • 100 images per day
  • 20 Audio Overviews per day
  • Five Deep Research reports per month (using 2.5 Flash).

Because overall app limits still apply, actual throughput depends on how long and complex your prompts are and how many files you attach.

Google AI Pro increases ceilings to:

  • 100 prompts per day on Gemini 2.5 Pro
  • 1,000 images per day
  • 20 Deep Research reports per day (using 2.5 Pro).

Google AI Ultra raises those to:

  • 500 prompts per day
  • 200 Deep Research reports per day
  • Includes Deep Think with 10 prompts per day at a 192,000-token context window for more complex reasoning tasks.

Context Windows and Advanced Features

Context windows differ by tier. The free tier lists a 32,000-token context size, while Pro and Ultra show 1 million tokens, which is helpful when you need longer conversations or to process large documents in one go.

Ultra’s Deep Think is separate from the 1M context and is capped at 192k tokens for its 10 daily prompts.

Video generation is currently in preview with model-specific limits. Pro shows up to three videos per day with Veo 3 Fast (preview), while Ultra lists up to five videos per day with Veo 3 (preview).

Google indicates some features receive priority or early access on paid plans.

Availability and Requirements

The Gemini app in Google AI Pro and Ultra is available in 150+ countries and territories for users 18 or older.

Upgrades are tied to select Google One paid plans for personal accounts, which consolidate billing with other premium Google services.

Why This Matters

Clear ceilings make it easier to scope deliverables and budgets.

If you produce a steady stream of social or ad creative, the image caps and prompt totals are practical planning inputs.

Teams doing competitive analysis or longer-form research can evaluate whether the free tier’s five Deep Research reports per month cover occasional needs or if Pro’s daily allotment, Ultra’s higher limit, and Deep Think are a better fit for heavier workloads.

The documentation also emphasizes that caps can vary with usage patterns, so it’s worth watching the in-app limit warnings on busy days.

Looking Ahead

Google notes that limits may evolve. If your workflows depend on specific daily counts or large context windows, it’s sensible to review the Help Center page periodically and adjust plans as features move from preview to general availability.


Featured Image: Evolf/Shutterstock

AI Search Sends Users to 404 Pages Nearly 3X More Than Google via @sejournal, @MattGSouthern

New research examining 16 million URLs aligns with Google’s predictions that hallucinated links will become an issue across AI platforms.

An Ahrefs study shows that AI assistants send users to broken web pages nearly three times more often than Google Search.

The data arrives six months after Google’s John Mueller raised awareness about this issue.

ChatGPT Leads In URL Hallucination Rates

ChatGPT creates the most fake URLs among all AI assistants tested. The study found that 1% of URLs people clicked led to 404 pages. Google’s rate is just 0.15%.

The problem gets worse when looking at all URLs ChatGPT mentions, not just clicked ones. Here, 2.38% lead to error pages. Compare this to Google’s top search results, where only 0.84% are broken links.

Claude came in second with 0.58% broken links for clicked URLs. Copilot had 0.34%, Perplexity 0.31%, and Gemini 0.21%. Mistral had the best rate at 0.12%, but it also sends the least traffic to websites.

Why Does This Happen?

The research found two main reasons why AI creates fake links.

First, some URLs used to exist but don’t anymore. When AI relies on old information instead of searching the web in real-time, it might suggest pages that have been deleted or moved.

Second, AI sometimes invents URLs that sound right but never existed.

Ryan Law from Ahrefs shared examples from their own site. AI assistants created fake URLs like “/blog/internal-links/” and “/blog/newsletter/” because these sound like pages Ahrefs might have. But they don’t actually exist.

Limited Impact on Overall Traffic

The problem may seem significant, but most websites won’t notice much impact. AI assistants only bring in about 0.25% of website traffic. Google, by comparison, drives 39.35% of traffic.

This means fake URLs affect a tiny portion of an already small traffic source. Still, the issue might grow as more people use AI for research and information.

The study also found that 74% of new web pages contain AI-generated content. When this content includes fake links, web crawlers might index them, spreading the problem further.

Mueller’s Prediction Proves Accurate

These findings match what Google’s John Mueller predicted in March. He forecasted a “slight uptick of these hallucinated links being clicked” over the next 6-12 months.

Mueller suggested focusing on better 404 pages rather than chasing accidental traffic.

His advice to collect data before making big changes looks smart now, given the small traffic impact Ahrefs found.

Mueller also predicted the problem would fade as AI services improve how they handle URLs. Time will tell if he’s right about this, too.

Looking Forward

For now, most websites should focus on two things. Create helpful 404 pages for users who hit broken links. Then, set up redirects only for fake URLs that get meaningful traffic.

This allows you to handle the problem without overreacting to what remains a minor issue for most sites.

Let’s Look Inside An Answer Engine And See How GenAI Picks Winners via @sejournal, @DuaneForrester

Ask a question in ChatGPT, Perplexity, Gemini, or Copilot, and the answer appears in seconds. It feels effortless. But under the hood, there’s no magic. There’s a fight happening.

This is the part of the pipeline where your content is in a knife fight with every other candidate. Every passage in the index wants to be the one the model selects.

For SEOs, this is a new battleground. Traditional SEO was about ranking on a page of results. Now, the contest happens inside an answer selection system. And if you want visibility, you need to understand how that system works.

Image Credit: Duane Forrester

The Answer Selection Stage

This isn’t crawling, indexing, or embedding in a vector database. That part is done before the query ever happens. Answer selection kicks in after a user asks a question. The system already has content chunked, embedded, and stored. What it needs to do is find candidate passages, score them, and decide which ones to pass into the model for generation.

Every modern AI search pipeline uses the same three stages (across four steps): retrieval, re-ranking, and clarity checks. Each stage matters. Each carries weight. And while every platform has its own recipe (the weighting assigned at each step/stage), the research gives us enough visibility to sketch a realistic starting point. To basically build our own model to at least partially replicate what’s going on.

The Builder’s Baseline

If you were building your own LLM-based search system, you’d have to tell it how much each stage counts. That means assigning normalized weights that sum to one.

A defensible, research-informed starting stack might look like this:

  • Lexical retrieval (keywords, BM25): 0.4.
  • Semantic retrieval (embeddings, meaning): 0.4.
  • Re-ranking (cross-encoder scoring): 0.15.
  • Clarity and structural boosts: 0.05.
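
To make that weighting concrete, here’s a toy scoring function using those starting weights. It assumes each component score is already computed and normalized to 0-1 (BM25 from a lexical index, cosine similarity from an embedding store, a cross-encoder relevance score, and a simple clarity heuristic); treat it as a sketch of the logic, not any platform’s actual formula.

```python
from dataclasses import dataclass

# Starting weights from the stack above; every platform tunes its own blend
WEIGHTS = {"lexical": 0.40, "semantic": 0.40, "rerank": 0.15, "clarity": 0.05}

@dataclass
class Candidate:
    passage: str
    lexical: float   # normalized BM25 score (0-1)
    semantic: float  # embedding cosine similarity (0-1)
    rerank: float    # cross-encoder relevance score (0-1)
    clarity: float   # heuristic: answer-first, fact-dense, liftable (0-1)

def answer_selection_score(c: Candidate) -> float:
    return (WEIGHTS["lexical"] * c.lexical
            + WEIGHTS["semantic"] * c.semantic
            + WEIGHTS["rerank"] * c.rerank
            + WEIGHTS["clarity"] * c.clarity)

candidates = [
    Candidate("Step-by-step: connect Google Sheets to Slack...", 0.9, 0.85, 0.9, 0.95),
    Candidate("Our team's favorite productivity hacks this year...", 0.5, 0.6, 0.4, 0.2),
]

# The highest blended score is the passage handed to the model for generation
best = max(candidates, key=answer_selection_score)
print(round(answer_selection_score(best), 3), best.passage)
```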

Every major AI system has its own proprietary blend, but they’re all essentially brewing from the same core ingredients. What I’m showing you here is the average starting point for an enterprise search system, not exactly what ChatGPT, Perplexity, Claude, Copilot, or Gemini operate with. We’ll never know those weights.

Hybrid defaults across the industry back this up. Weaviate’s hybrid search alpha parameter defaults to 0.5, an equal balance between keyword matching and embeddings. Pinecone teaches the same default in its hybrid overview.

Re-ranking gets 0.15 because it only applies to the short list. Yet its impact is proven: “Passage Re-Ranking with BERT” showed major accuracy gains when BERT was layered on BM25 retrieval.

Clarity gets 0.05. It’s small, but real. A passage that leads with the answer, is dense with facts, and can be lifted whole, is more likely to win. That matches the findings from my own piece on semantic overlap vs. density.

At first glance, this might sound like “just SEO with different math.” It isn’t. Traditional SEO has always been guesswork inside a black box. We never really had access to the algorithms in a format that was close to their production versions. With LLM systems, we finally have something search never really gave us: access to all the research they’re built on. The dense retrieval papers, the hybrid fusion methods, the re-ranking models, they’re all public. That doesn’t mean we know exactly how ChatGPT or Gemini dials their knobs, or tunes their weights, but it does mean we can sketch a model of how they likely work much more easily.

From Weights To Visibility

So, what does this mean if you’re not building the machine but competing inside it?

Overlap gets you into the room, density makes you credible, lexical keeps you from being filtered out, and clarity makes you the winner.

That’s the logic of the answer selection stack.

Lexical retrieval is still 40% of the fight. If your content doesn’t contain the words people actually use, you don’t even enter the pool.

Semantic retrieval is another 40%. This is where embeddings capture meaning. A paragraph that ties related concepts together maps better than one that is thin and isolated. This is how your content gets picked up when users phrase queries in ways you didn’t anticipate.

Re-ranking is 15%. It’s where clarity and structure matter most. Passages that look like direct answers rise. Passages that bury the conclusion drop.

Clarity and structure are the tie-breaker. 5% might not sound like much, but in close fights, it decides who wins.

Two Examples

Zapier’s Help Content

Zapier’s documentation is famously clean and answer-first. A query like “How to connect Google Sheets to Slack” returns a ChatGPT answer that begins with the exact steps outlined because the content from Zapier provides the exact data needed. When you click through a ChatGPT resource link, the page you land on is not a blog post; it’s probably not even a help article. It’s the actual page that lets you accomplish the task you asked for.

  • Lexical? Strong. The words “Google Sheets” and “Slack” are right there.
  • Semantic? Strong. The passage clusters related terms like “integration,” “workflow,” and “trigger.”
  • Re-ranking? Strong. The steps lead with the answer.
  • Clarity? Very strong. Scannable, answer-first formatting.

In a 0.4 / 0.4 / 0.15 / 0.05 system, Zapier’s chunk scores across all dials. This is why their content often shows up in AI answers.

A Marketing Blog Post

Contrast that with a typical long marketing blog post about “team productivity hacks.” The post mentions Slack, Google Sheets, and integrations, but only after 700 words of story.

  • Lexical? Present, but buried.
  • Semantic? Decent, but scattered.
  • Re-ranking? Weak. The answer to “How do I connect Sheets to Slack?” is hidden in a paragraph halfway down.
  • Clarity? Weak. No liftable answer-first chunk.

Even though the content technically covers the topic, it struggles in this weighting model. The Zapier passage wins because it aligns with how the answer selection layer actually works.

Traditional search still guides the user to read, evaluate, and decide if the page they land on answers their need. AI answers are different. They don’t ask you to parse results. They map your intent directly to the task or answer and move you straight into “get it done” mode. You ask, “How to connect Google Sheets to Slack,” and you end up with a list of steps or a link to the page where the work is completed. You don’t really get a blog post explaining how someone did this during their lunch break, and it only took five minutes.

Volatility Across Platforms

There’s another major difference from traditional SEO. Search engines, despite algorithm changes, converged over time. Ask Google and Bing the same question, and you’ll often see similar results.

LLM platforms don’t converge, or at least haven’t so far. Ask the same question in Perplexity, Gemini, and ChatGPT, and you’ll often get three different answers. That volatility reflects how each system weights its dials. Gemini may emphasize citations. Perplexity may reward breadth of retrieval. ChatGPT may compress aggressively for conversational style. And we have data that shows that between a traditional engine and an LLM-powered answer platform, there is a wide gulf between answers. BrightEdge’s data (62% disagreement on brand recommendations) and ProFound’s data (“…AI modules and answer engines differ dramatically from search engines, with just 8-12% overlap in results”) showcase this clearly.

For SEOs, this means optimization isn’t one-size-fits-all anymore. Your content might perform well in one system and poorly in another. That fragmentation is new, and you’ll need to find ways to address it as consumer behavior around using these platforms for answers shifts.

Why This Matters

In the old model, hundreds of ranking factors blurred together into a consensus “best effort.” In the new model, it’s like you’re dealing with four big dials, and every platform tunes them differently. In fairness, the complexity behind those dials is still pretty vast.

Ignore lexical overlap, and you lose part of that 40% of the vote. Write semantically thin content, and you can lose another 40. Ramble or bury your answer, and you won’t win re-ranking. Pad with fluff and you miss the clarity boost.

The knife fight doesn’t happen on a SERP anymore. It happens inside the answer selection pipeline. And it’s highly unlikely those dials are static. You can bet they move in relation to many other factors, including each other’s relative positioning.

The Next Layer: Verification

Today, answer selection is the last gate before generation. But the next stage is already in view: verification.

Research shows how models can critique themselves and raise factuality. Self-RAG demonstrates retrieval, generation, and critique loops. SelfCheckGPT runs consistency checks across multiple generations. OpenAI is reported to be building a Universal Verifier for GPT-5. And, I wrote about this whole topic in a recent Substack article.

When verification layers mature, retrievability will only get you into the room. Verification will decide if you stay there.

Closing

This really isn’t regular SEO in disguise. It’s a shift. We can now more clearly see the gears turning because more of the research is public. We also see volatility because each platform spins those gears differently.

For SEOs, I think the takeaway is clear. Keep lexical overlap strong. Build semantic density into clusters. Lead with the answer. Make passages concise and liftable. And I do understand how much that sounds like traditional SEO guidance. I also understand how the platforms using the information differ so much from regular search engines. Those differences matter.

This is how you survive the knife fight inside AI. And soon, how you pass the verifier’s test once you’re there.



This post was originally published on Duane Forrester Decodes.


Featured Image: tete_escape/Shutterstock