Personas Are Critical For AI search via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Here’s what I’m covering this week: How to build user personas for SEO from data you already have on hand.

You can’t treat personas as a “brand exercise” anymore.

In the AI-search era, prompts don’t just tell you what users want; they reveal who’s asking and under what constraints.

If your pages don’t match the person behind the query and connect with them quickly – their role, their risks and concerns, and the proof they require to resolve the intent – you’re unlikely to win the click or the conversion.

It’s time to not only listen to your customers, but also optimize for their behavioral patterns.

Search used to be simple: queries = intent. You matched a keyword to a page and called it a day.

Personas were a nice-to-have – useful for ads, creative, or UX decisions, but considered by most to be irrelevant to organic visibility or growth.

Not anymore.

Longer prompts and personalized results don’t just express what someone wants; they also expose who they are and the constraints they’re operating under.

AIOs and AI chats act as a preview layer and borrow trust from known brands. However, blue links still close when your content speaks to the person behind the prompt.

If that sounds like hard work, it is. And it’s why most teams stall implementing search personas across their strategy.

  • Personas can feel expensive, generic, academic, or agency-driven.
  • The old persona PDFs your brand invested in 3-5 years ago are dated – or missing entirely.
  • The resources, time, and knowledge it takes to build user personas are still significant blockers to getting the work done.

In this memo, I’ll show you how to build lean, practical, LLM-ready user personas for SEO – using the data you already have, shaped by real behavioral insights – so your pages are chosen when it counts.

While there are a few ways you could do this, and several really excellent articles out there on SEO personas this past year, this is the approach I take with my clients.

Most legacy persona decks were built for branding, not for search operators.

They don’t tell your writers, SEOs, or PMs what to do next, so they get ignored by your team after they’re created.

Mistake #1: Demographics ≠ Decisions

Classic user personas for SEO and marketing overfocus on demographics, which offer only surface-level insights into stereotypical behavior for certain groups.

But demographics don’t necessarily help your brand stand out against your competitors. And demographics don’t offer you the full picture.

Mistake #2: A Static PDF Or Shared Doc Ages Fast

If your personas were created once and never reanalyzed or updated again, it’s likely they got lost in G: Drive or Dropbox purgatory.

If there’s no owner working to ensure they’re implemented across production, there’s no feedback loop to understand if they’re working or if something needs to change.

Mistake #3: Pretty Delivered Decks, No Actionable Insights

Those well-designed persona deliverables look great, but when they aren’t tied to briefs, citations, trust signals, your content calendar, etc., they end up siloed from production. If a persona can’t shape a prompt or a page, it won’t shape any of your outcomes.

In addition to the fact that classic personas weren’t built to be implemented across your search strategy, AI has shifted us from optimizing for intent to optimizing for identity and trust. In last week’s memo, I shared the following:

The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time. Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.

Old world of search optimization: Queries signaled intent. You ranked a page that matched the keyword and intent behind it, and your brand would catch the click. Personas were optional.

New world of search optimization: Prompts expose people, and AI changes how we search. Marketers aren’t just optimizing for search intent or demographics; we’re also optimizing for behavior.

Long AI prompts don’t just say what the user intends – they often reveal who is asking and what constraints and background knowledge they bring.

For example, if a user prompts ChatGPT something like “I’m a healthcare compliance officer at a mid-sized hospital. Can you draft a checklist for evaluating new SaaS vendors, making sure it covers HIPAA regulations and costs under $50K a year,” then ChatGPT would have background information about the user’s general compliance needs, budget ceilings, risk tolerance, and preferred content formats.

AI systems then personalize summaries and citations around that context.

If your content doesn’t meet the persona’s trust requirements or output preference, it won’t be surfaced.

What that means in practice:

  • Prompts → identity signals. “As a solo marketer on a $2,000 budget…” or “for EU users under GDPR…” = role, constraints, and risk baked into the query.
  • Trust beats length. Classic search results are clicked on, but only when pages show the trust scaffolding a given persona needs for a specific query.
  • Format matters. Some personas want TL;DR and tables; others need demos, community validation (YouTube/Reddit), or primary sources.

So, here’s what to do about it.

You don’t need a five- or six-figure agency study (although those are nice to have).

You need:

  • A collection of your already-existing data.
  • A repeatable process, not a static file.
  • A way to tie personas directly into briefs and prompts.

Turning your own existing data into usable user personas for SEO will equip you to tie personas directly to content briefs and SEO workflows.

Before you start collecting this data, set up an organized way to store it: Google Sheets, Notion, Airtable – whatever your team prefers. Store your custom persona prompt cards there, too, and you can copy and paste from there into ChatGPT & Co. as needed.

The work below isn’t for the faint of heart, but it will change how you prompt LLMs in your AI-powered workflows and your SEO-focused webpages for the better.

  1. Collect and cluster data.
  2. Draft persona prompt cards.
  3. Calibrate in ChatGPT & Co.
  4. Validate with real-world signals.

1. Collect And Cluster Data

You’re going to mine several data sources that you already have, both qualitative and quantitative.

Keep in mind, being sloppy during this step means you will not have a good base for an “LLM ready” persona prompt card, which I’ll discuss in Step 2.

Attributes to capture for an “LLM-ready persona”:

  • Jobs-to-be-done (top 3).
  • Role and seniority.
  • Buying triggers + blockers (think budget, IT/legal constraints, risk).
  • 10-20 example questions at TOFU, MOFU, BOFU stages.
  • Trust cues (creators, domains, formats).
  • Output preferences (depth, format, tone).
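One lightweight way to keep these attributes consistent across personas is to store each one as a structured record instead of free-form notes. The sketch below is a minimal example in Python; the field names are my own assumptions mirroring the list above, not a standard schema.

```python
from dataclasses import dataclass, field

# A minimal sketch of an "LLM-ready persona" record.
# Field names are illustrative assumptions, not a fixed standard.
@dataclass
class PersonaRecord:
    name: str
    role_seniority: str                                    # role and seniority
    jobs_to_be_done: list = field(default_factory=list)    # top 3
    triggers_blockers: list = field(default_factory=list)  # budget, IT/legal, risk
    example_questions: dict = field(default_factory=dict)  # keyed TOFU/MOFU/BOFU
    trust_cues: list = field(default_factory=list)         # creators, domains, formats
    output_prefs: dict = field(default_factory=dict)       # depth, format, tone

persona = PersonaRecord(
    name="Senior SEO Lead",
    role_seniority="SEO Lead (Senior) at a mid-market B2B SaaS",
    jobs_to_be_done=["Protect organic pipeline", "Prove AIO exposure matters"],
    example_questions={"TOFU": ["What is entity-based internal linking?"]},
)
print(persona.name)
```

Storing personas this way makes Step 2 (rendering prompt cards) a mechanical fill-in rather than a rewrite each time.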

Where AIO validation-style data comes in:

Last week, we discussed four distinct AIO validation patterns identified in the AIO usability study: Efficiency-first, Trust-driven, Comparative, and Skeptical rejection.

If you want to incorporate this in your persona research – and I’d advise that you should – you’re going to look for:

  • Hesitation triggers across interactions with your brand: What makes them pause or refine their question (whether on a sales call or a heat map recording).
  • Click-out anchors: Which authority brands they use to validate (PayPal, NIH, Mayo Clinic, Stripe, KBB, etc.); use SparkToro to find this information.
  • Evidence threshold: What proof ends hesitation for your user or different personas? (Citations, official terminology, dated reviews, side-by-side tables, videos).
  • Device/age nuance: Younger and mobile users → faster AIO acceptance; older cohorts → blue links and authority domains win clicks.

Below, I’ll walk you through where to find this information.

Quantitative Inputs

1. Your GSC queries hold a wealth of info. Split by TOFU/MOFU/BOFU, branded vs non-branded, and country. Then, use a regex to map question-style queries and see who’s really searching at each stage.

Below is the regex I like to use, which I discussed in “Is AI cutting into your SEO conversions?” It also works for this task:

(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|list(?:s)?|comparison|vs|difference|benefits|advantages|alternatives)\b.*
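As a quick sketch, here’s how you might apply that regex to a GSC query export in Python. Note the word boundary (`\b`) before the trailing `.*`, which is easy to lose when copying; the sample queries are made up for illustration.

```python
import re

# The question-style regex from the memo, as a compiled Python pattern.
QUESTION_RE = re.compile(
    r"(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|"
    r"guide|tutorial|course|learn|examples?|definition|meaning|checklist|"
    r"framework|template|tips?|ideas?|best|top|list(?:s)?|comparison|vs|"
    r"difference|benefits|advantages|alternatives)\b.*"
)

# Illustrative queries standing in for a GSC export.
queries = [
    "how to build seo personas",
    "acme pricing",
    "best crm for smb",
]

# Keep only the question-style queries for stage mapping.
question_style = [q for q in queries if QUESTION_RE.match(q)]
print(question_style)  # ['how to build seo personas', 'best crm for smb']
```

Run the same filter per TOFU/MOFU/BOFU segment to see which stages skew toward question-style searching.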

2. On-Site Search Logs. These are the records of what visitors type into your website’s own search bar (not Google).

Extract exact phrasing of problems and “missing content” signals (like zero results, refined searches, or high exits/no clicks).

Plus, the wording visitors use reveals jobs-to-be-done, constraints, and vocabulary you should mirror on the page. Flag repeat questions as latent questions to resolve.

3. Support Tickets, CRM Notes, Win/Loss Analysis. Convert objections, blockers, and “how do I…” threads into searchable intents and hesitation themes.

Mine the following data from your records:

  • Support: Ticket titles, first message, last agent note, resolution summary.
  • CRM: Opportunity notes, metrics, decision criteria, lost-reason text.
  • Win/Loss: Objection snapshots, competitor cited, decision drivers, de-risking asks.
  • Context (if available): buyer role, segment (SMB/MM/ENT), region, product line, funnel stage.

Once gathered, compile and analyze to distill patterns.

Qualitative Inputs

1. Your sales calls and customer success notes are a wealth of information.

Use AI to analyze transcripts and/or notes to highlight jobs-to-be-done, triggers, blockers, and decision criteria in your customer’s own words.

2. Reddit and social media discussions.

This is where your buyers actually compare options and validate claims; capture the authority anchors (brands/domains) they trust.

3. Community/Slack spaces, email newsletter replies, article comments, short post-purchase or signup surveys.

Mine recurring “stuck points” and vocabulary you should mirror. Bucket recurring themes together and correlate across other data.

Pro tip: Use your topic map as the semantic backbone for all qualitative synthesis – discussed in depth in how to operationalize topic-first SEO. You’d start by locking the parent topics, then layer your personas as lenses: For each parent topic, fan out subtopics by persona, funnel stage, and the “people × problems” you pull from sales calls, CS notes, Reddit/LinkedIn, and community threads. Flag zero-volume/fringe questions on your map as priorities; they deepen authority and often resolve the hesitation themes your notes reveal.

After clustering pain points and recurring queries, you can take it one step further to tag each cluster with an AIO pattern by looking for:

  • Short dwell + 0–1 scroll + no refinements → Efficiency-first validations.
  • Longer dwell + multiple scrolls + hesitation language + authority click-outs → Trust-driven validations.
  • Four to five scrolls + multiple tabs (YouTube/Reddit/vendor) → Comparative validations.
  • Minimal AIO engagement + direct authority clicks (gov/medical/finance) → Skeptical rejection.
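If your analytics or heatmap tool exports session-level dwell, scroll, and click-out data, you can rough-tag sessions against these four patterns. The function below is a sketch; the thresholds and field names are my assumptions to calibrate against your own data, not fixed rules.

```python
# Rough heuristic tagger for the four AIO validation patterns.
# All thresholds are illustrative starting points, not study-verified cutoffs.
def tag_aio_pattern(session: dict) -> str:
    dwell = session.get("dwell_seconds", 0)
    scrolls = session.get("scrolls", 0)
    refinements = session.get("refinements", 0)
    authority_clicks = session.get("authority_clickouts", 0)
    tabs = session.get("tabs_opened", 1)

    # Minimal engagement plus a direct authority click-out -> skeptical rejection.
    if authority_clicks > 0 and scrolls <= 1 and dwell < 15:
        return "skeptical-rejection"
    # Short dwell, 0-1 scroll, no refinements -> efficiency-first.
    if scrolls <= 1 and refinements == 0 and dwell < 30:
        return "efficiency-first"
    # Heavy scrolling across multiple tabs -> comparative.
    if scrolls >= 4 and tabs > 1:
        return "comparative"
    # Longer dwell with multiple scrolls -> trust-driven
    # (authority click-outs strengthen this signal).
    if dwell >= 30 and scrolls > 1:
        return "trust-driven"
    return "unclassified"

print(tag_aio_pattern({"dwell_seconds": 10, "scrolls": 0}))  # efficiency-first
```

Run the tagger over a sample of sessions, then sanity-check a handful of recordings by hand before trusting the distribution.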

Not every team can run a full-blown usability study of the search results for targeted queries and topics, but you can infer many of these behavioral patterns through heatmaps of your own pages that have strong organic visibility.

2. Draft Persona Prompt Cards

Next up, you’ll use this data to create a persona card.

A persona card is a one-page, ready-to-go snapshot of a target user segment that your marketing/SEO team can act on.

Unlike empty or demographic-heavy personas, a persona card ties jobs-to-be-done, constraints, questions, and trust cues directly to how you brief pages, structure proofs, and prompt LLMs.

A persona card ensures your pages and prompts match identity + trust requirements.

What you’re going to do in this step is convert each data-based persona cluster into a one-pager designed to be embedded directly into LLM prompts.

Include input patterns you expect from that persona – and the output format they’d likely want.


Reusable Template: Persona Prompt Card

Drop this at the top of a ChatGPT conversation or save as a snippet.

The example template below is based on the Growth Memo audience specifically, so you’ll need to not only modify it for your needs, but also tweak it per persona.

You are Kevin Indig advising a [ROLE, SENIORITY] at a [COMPANY TYPE, SIZE, LOCATION].

Objective: [Top 1–2 goals tied to KPIs and timeline]

Context: [Market, constraints, budget guardrails, compliance/IT notes]

Persona question style: [Example inputs they’d type; tone & jargon tolerance] 

Answer format:

- Start with a 3-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to [brevity/depth]; [bullets/narrative]; include [table/chart] if useful.

What to avoid: [Banned claims, fluff, vendor speak] 

Citations: Prefer [domains/creators] and original research when possible.
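To make the card reusable, you can store its fields as data and render them into a prompt programmatically before each query. This is a minimal sketch; the field names simply mirror the template above, and the advisor wording is a placeholder for your own framing.

```python
# Render a persona prompt card from stored fields and prepend it to a query.
# Field names are illustrative and mirror the template in the memo.
CARD_TEMPLATE = """\
You are an experienced consultant advising the {role} at a {company}.
Objective: {objective}
Context: {context}
Persona question style: {question_style}
Answer format: {answer_format}
What to avoid: {avoid}
Citations: {citations}
"""

def build_prompt(card_fields: dict, user_query: str) -> str:
    """Fill the card template, then append the user's actual query."""
    card = CARD_TEMPLATE.format(**card_fields)
    return f"{card}\n---\n{user_query}"

prompt = build_prompt(
    {
        "role": "SEO Lead (Senior)",
        "company": "mid-market B2B SaaS (US/EU)",
        "objective": "Grow organic pipeline in the AI-search era",
        "context": "GDPR/CCPA; budget <= $8,000/mo for content + tools",
        "question_style": "Precise, low-fluff, technical",
        "answer_format": "3-bullet TL;DR, then a numbered playbook",
        "avoid": "Vendor-speak, unverifiable stats",
        "citations": "Google Search Central, schema.org, original research",
    },
    "How do I measure topic performance vs keywords?",
)
print(prompt.splitlines()[0])
```

Stored this way, the same card can be prepended in ChatGPT, Perplexity, or any other LLM interface without retyping.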

Example Attribute Sets Using The Growth Memo Audience

Use this card as a starting point, then fill it with your data.

Below is an example of the prompt card with attributes filled for one of the ideal customer profiles (ICP) for the Growth Memo audience.

You are Kevin Indig advising an SEO Lead (Senior) at a Mid-Market B2B SaaS (US/EU).

Objective: Protect and grow organic pipeline in the AI-search era; drive qualified trials/demos in Q4; build durable topic authority.

Context: Competitive category; CMS constraints + limited Eng bandwidth; GDPR/CCPA; security/legal review for pages; budget ≤ $8,000/mo for content + tools; stakeholders: VP Marketing, Content Lead, PMM, RevOps.

Persona question style: “How do I measure topic performance vs keywords?”, “How do I structure entity-based internal linking?”, “What KPIs prove AIO exposure matters?”, “Regex for TOFU/MOFU/BOFU?”, “How to brief comparison pages that AIO cites?” Tone: precise, low-fluff, technical.

AIO validation profile:

- Dominant pattern(s): Trust-driven (primary), Comparative (frameworks/tools); Skeptical for YMYL claims.

- Hesitation triggers: Black-box vendor claims; non-replicable methods; missing citations; unclear risk/effort.

- Click-out anchors: Google Search Central & docs, schema.org, reputable research (Semrush/Ahrefs/SISTRIX/seoClarity), Pew/Ofcom, credible case studies, engineering/product docs.

- SERP feature bias: Skims AIO/snippets to frame, validates via organic authority + primary sources; uses YouTube for demos; largely ignores Ads.

- Evidence threshold: Methodology notes, datasets/replication steps, benchmarks, decision tables, risk trade-offs.

Answer format:

- Start with a three-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to brevity + bullets; include a table/chart if useful.

Proof kit to include on-page:

Methodology & data provenance; decision table (framework/tool choice); “best for / not for”; internal-linking map or schema snippet; last-reviewed date; citations to Google docs/primary research; short demo or worksheet (e.g., Topic Coverage Score or KPI tree).

What to avoid:

Vendor-speak; outdated screenshots; cherry-picked wins; unverifiable stats; hand-wavy “AI magic.”

Citations:

Prefer Google Search Central/docs, schema.org, original studies/datasets; reputable tool research (Semrush, Ahrefs, SISTRIX, seoClarity); peer case studies with numbers.

Success signals to watch:

Topic-level lift (impressions/CTR/coverage), assisted conversions from topic clusters, AIO/snippet presence for key topics, authority referrals, demo starts from comparison hubs, reduced content decay, improved crawl/indexation on priority clusters.

3. Calibrate In ChatGPT & Co.

Your goal here is to prove the Persona Prompt Cards actually produce useful answers – and to learn what evidence each persona needs.

Create one Custom Instruction profile per persona, or store each Persona Prompt Card as a prompt snippet you can prepend.

Run 10-15 real queries per persona. Score answers on clarity, scannability, credibility, and differentiation to your standard.

How to run the prompt card calibration:

  • Set up: Save one Prompt Card per persona.
  • Eval set: 10-15 real queries/persona across TOFU/MOFU/BOFU stages, including two or three YMYL or compliance-based queries, three to four comparisons, and three or four quick how-tos.
  • Ask for structure: Require TL;DR → numbered playbook → table → risks → citations (per the card).
  • Modify it: Add constraints and location variants; ask the same query two ways to test consistency.
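A simple way to keep scoring consistent across the eval set is a small rubric helper. The four criteria come from the memo; the 1-5 scale and equal weighting are my own assumptions to adjust to your standard.

```python
# Rubric criteria named in the memo; scale and weighting are assumptions.
CRITERIA = ["clarity", "scannability", "credibility", "differentiation"]

def score_answer(ratings: dict) -> float:
    """Average the 1-5 ratings across the four rubric criteria."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing ratings: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Accumulate (persona, query, score) rows across the 10-15 query eval set.
run_log = []
score = score_answer(
    {"clarity": 4, "scannability": 5, "credibility": 3, "differentiation": 4}
)
run_log.append(("SEO Lead", "How do I measure topic performance?", score))
print(score)  # 4.0
```

Logging every run this way makes it easy to compare card revisions: rescore the same queries after editing a card and diff the averages.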

Once you run sample queries to check for clarity and credibility, modify or upgrade your Persona Card as needed: Add missing trust anchors or evidence the model needed.

Save winning outputs as brief-guiding examples you can paste into drafts.

Log recurring misses (hallucinated stats, undated claims) as acceptance checks for production.

Then, do this for the other LLMs your audience uses. For instance, if your audience leans heavily toward Perplexity.ai, calibrate your prompt there as well. Run the prompt card outputs in Google’s AI Mode, too.

4. Validate With Real-World Signals

Watch branded search trends, assisted conversions, and non-Google referrals to see if influence shows up where expected when you publish persona-tuned assets.

And make sure to measure lift by topic, not just per page: Segment performance by topic cluster (GSC regex or a GA4 topic dimension). Operationalizing your topic-first SEO strategy discusses how to do this.
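Here’s one way to roll a GSC export up to topic clusters with regex buckets. The cluster patterns below are placeholders for your own topic map, and the row shape is an assumption about a typical query export.

```python
import re
from collections import defaultdict

# Placeholder topic buckets; replace with patterns from your own topic map.
TOPIC_PATTERNS = {
    "personas": re.compile(r"(?i)\bpersona"),
    "internal linking": re.compile(r"(?i)internal link"),
}

# Illustrative GSC export rows.
rows = [
    {"query": "seo persona template", "clicks": 12, "impressions": 400},
    {"query": "build buyer personas", "clicks": 7, "impressions": 210},
    {"query": "internal linking strategy", "clicks": 5, "impressions": 150},
]

# Sum clicks/impressions per topic cluster instead of per query.
totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
for row in rows:
    for topic, pattern in TOPIC_PATTERNS.items():
        if pattern.search(row["query"]):
            totals[topic]["clicks"] += row["clicks"]
            totals[topic]["impressions"] += row["impressions"]

print(dict(totals))  # {'personas': {'clicks': 19, 'impressions': 610}, ...}
```

Comparing these cluster totals at 30/60/90 days post-ship shows topic-level lift that per-page reporting hides.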

Keep the following in mind when reviewing real-world signals:

  • Review at 30/60/90 days post-ship, and by topic cluster.
  • If Trust-driven pages show high scroll/low conversions → add/upgrade citations and expert reviews and quotes.
  • If Comparative pages get CTR but few product demo signups → add a short demo video, “best for / not for” sections, and clearer CTAs.
  • If Efficiency-first pages miss lifts in AIO/snippets → tighten TL;DR, simplify tables, add schema.
  • If Skeptical-rejection-geared pages yield authority traffic but no lift → consider pursuing authority partnerships.
  • Most importantly: Redo the exercise every 60-90 days and match your new personas against the old ones to iterate toward the ideal.

Building user personas for SEO is worth it, and with in-house data and LLM support, it’s doable and fast.

I challenge you to start with one lean persona this week to test this approach. Refine and expand your approach based on the results you see.

But if you plan to take this persona-building project on, avoid these common missteps:

  • Creating tidy PDFs with zero long-term benefits: Personas that don’t specify core search intents, pain points, and AIO intent patterns won’t move behavior.
  • Winning every SERP feature: This is a waste of time. Optimize your content for the right surface for the dominant behavioral patterns of your target users.
  • Ignoring hesitation: Hesitation is your biggest signal. If you don’t resolve it on-page, the click dies elsewhere.
  • Demographics over jobs-to-be-done: Focusing on characteristics of identity without incorporating behavioral patterns is the old way.

Featured Image: Paulo Bobita/Search Engine Journal

Ask An SEO: High Volumes Or High Authority Evergreen Content? via @sejournal, @rollerblader

This week’s Ask an SEO question comes from an anonymous user:

“Should we still publish high volumes of content, or is it better to invest in fewer, higher-authority evergreen pieces?”

Great question! The answer is always higher-authority content, but not always evergreen if your goal is growth and sustainability. If the goal is quick traffic and a churn-and-burn model, high volume makes sense. More content does not mean more SEO. Sustainable SEO traffic via content is providing a proper user experience, which includes making sure the other topics on the site are helpful to a user.

Why High Volumes Of Content Don’t Work Long Term

The idea of creating high volumes of content to get traffic is a strategy where you focus a page on specific keywords and phrases and optimize the page for these phrases. When Google launched BERT and MUM, this strategy (which was already outdated) got its final nail in the coffin. These updates to Google’s systems looked at the associations between the words, hierarchy of the page, and the website to figure out the experience of the page vs. the specific words on the page.

By looking at what the words mean in relation to the headers, the sentences above and below, and the code of the page, like schema, SEO moved away from keywords to what the user will learn from the experience on the page. At the same time, proactive SEOs focused more heavily on vectors and entities; neither of these are new topics.

Back in the mid-2000s, article spinners helped generate hundreds of keyword-focused pages quickly and easily. With them, you created a spintax (similar to prompts for large language models, or LLMs, like ChatGPT and Perplexity) with macros for words to be replaced, and the software would create “original” pieces of content. These could then be launched en masse, similar to “programmatic SEO,” which is not new and never a smart idea.
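To make the mechanism concrete, here is a minimal, illustrative spintax expander. The `{option|option}` syntax is the common convention; the function itself is a sketch, not any particular spinner’s implementation.

```python
import random
import re

# Matches the innermost {a|b|c} group in a spintax string.
SPIN_RE = re.compile(r"\{([^{}]*)\}")

def spin(text: str, rng: random.Random) -> str:
    """Replace each {a|b|c} group with one randomly chosen option."""
    while SPIN_RE.search(text):
        # Substitute one group at a time so nested groups resolve inside-out.
        text = SPIN_RE.sub(
            lambda m: rng.choice(m.group(1).split("|")), text, count=1
        )
    return text

print(spin("{Fast|Quick} {tips|tricks} for SEO", random.Random(0)))
```

Each call yields one of the four combinations, which is exactly why spun pages look “original” to a naive duplicate-content check while carrying no new value.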

Google and other search engines would surface these and rank the sites until they got caught. Panda did a great job finding article spinner pages and starting to devalue and penalize sites using this technique of mass content creation.

Shortly after, website owners began using PHP with merchant data feeds to create shopping pages for specific products and product groups. This is similar to how media companies produce shopping listicles and product comparisons en masse. The content is unique and original (for that site), but is also being produced en masse, which usually means little to no value. This includes human-written content that is then used for comparisons, even when a user selects to compare the two. In this situation, you’ll want to use canonical links and meta robots properly, but that’s for a different post.

Panda and the core algorithms already had a way to detect “thin pages” from content spinning, so although these product pages worked, especially when combined with spun content or machine-created content describing the products, these sites began getting penalized and devalued.

We’re now seeing AI content that is technically unique and “original,” created via ChatGPT, Perplexity, etc., and it is working for fast traffic gains. But these same sites are getting caught, and they lose that traffic when they do. It is the exact same pattern as article spinning and PHP-plus-data-feed shopping pages.

I could see an argument being made for “fan-out” queries and why having pages focused on specific keywords makes sense. Fan-out queries are AI results that automate “People Also Ask,” “things to know,” and other continuation-rich results in a single output, vs. having separate search features.

If an SEO has experience with actual SEO best practices and knows about UX, they’ll know that the fan-out query is using the context and solutions provided on the pages, not multiple pages focused on similar keywords.

This would be the equivalent of building a unique page for each People Also Ask query or adding them as FAQs on the page. This is not a good UX, and Google knows you’re spamming/overoptimizing. It may work, but when you get caught, you’re in a worse position than when you started.

Each page should have a unique solution, not a unique keyword. When the content is focused on the solution, that solution becomes the keyword phrases, and the same page can show up for multiple different phrases, including different variations in the fan-out result.

If the goal is to get traffic quickly, make money, and then abandon or sell the domain, more content is a good strategy. But you won’t have a reliable or long-term income and will always be chasing the next thing.

Evergreen And Non-Evergreen High-Quality Content

Focusing on quality content that provides value to an end user is better for long-term success than high volumes of content. The person will learn from the article, and the content tends to be trustworthy. This type of content is what gets backlinks naturally from high-authority and topically relevant websites.

More importantly, each page on the website will have a clear intent. On sites that favor volume over quality, many posts and pages look similar because they target similar keywords, and users won’t know which article provides the actual solution. That is bad UX. Or the topics jump around: one page is about the best perfumes and another about harnesses for dogs. Trust in the content’s quality diminishes because the site can’t be an expert in everything, and it becomes clear the content is machine-made, i.e., fake.

Not all of the content needs to be evergreen, either. Company news and consumer trends happen, and people want timely information mixed in with evergreen topics. For product releases, an archive and list of all releases can be helpful.

Fashion sites can easily cover the trends of each season. The content is outdated when the next season starts, but the coverage is something people will look back on and use as a reference: fashion students sourcing content for classes, designers looking for inspiration from the past, and media outlets needing a reference point for when things trended.

When evergreen content begins to slide, you can always refresh it. Look back and see what has changed or advanced since the last update, and see how you can improve on it.

  • Look for customer service questions that are not answered.
  • Add updated software features or new colors.
  • See if there are examples that could be made better or clearer.
  • If new regulations are passed at the local, state, or federal level, add them so the content stays accurate.
  • Delete content that is outdated, or label it as no longer relevant with the reasons why.
  • Look for sections that may have seemed relevant to the topic, but actually weren’t, and remove them so the content becomes stronger.

There is no shortage of ways to refresh evergreen content and improve on it. These are the pillar pages that can bring consistent traffic over the long run and keep business strong, while the non-evergreen pages do their part, creating ebbs and flows of traffic. With some projects, we don’t produce new content for a month or two at a time because the pillar pages need to be refreshed, and the clients still do well with traffic.

Creating mass amounts of content is a good strategy for people who want to make money fast and do not plan on keeping the domain for a long time. It is good for churn-and-burn sites, domains you rent (if the owner is ok with it), and testing projects. When your goal is to build a sustainable business, high-authority content that provides value is the way to go.

You don’t need to worry about the amount of content with this strategy; you focus on the user experience. When you do this, most channels can grow, including email/SMS, social media, PR, branding, and SEO.


Featured Image: Paulo Bobita/Search Engine Journal

The Download: computing’s bright young minds, and cleaning up satellite streaks

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet tomorrow’s rising stars of computing

Each year, MIT Technology Review honors 35 outstanding people under the age of 35 who are driving scientific progress and solving tough problems in their fields.

Today we want to introduce you to the computing innovators on the list who are coming up with new AI chips and specialized datasets—along with smart ideas about how to assess advanced systems for safety.

Check out the full list of honorees—including our innovator of the year—here

Job titles of the future: Satellite streak astronomer

Earlier this year, the $800 million Vera Rubin Observatory commenced its decade-long quest to create an extremely detailed time-lapse movie of the universe.

Rubin is capable of capturing many more stars than any other astronomical observatory ever built; it also sees many more satellites. Up to 40% of images captured by the observatory within its first 10 years of operation will be marred by their sunlight-reflecting streaks.

Meredith Rawls, a research scientist at the telescope’s flagship observation project, Vera Rubin’s Legacy Survey of Space and Time, is one of the experts tasked with protecting Rubin’s science mission from the satellite blight. Read the full story.

—Tereza Pultarova

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China has accused Nvidia of violating anti-monopoly laws
As US and Chinese officials head into a second day of tariff negotiations. (Bloomberg $)
+ The investigation dug into Nvidia’s 2020 acquisition of computing firm Mellanox. (CNBC)
+ But China’s antitrust regulator hasn’t confirmed if it will punish it. (WSJ $)

2 The US is getting closer to making a TikTok deal
But it’s still prepared to go ahead with a ban if an agreement can’t be reached. (Reuters)

3 Grok spread misinformation about a far-right rally in London
It falsely claimed that police misrepresented old footage as being from the protest. (The Guardian)
+ Elon Musk called for a new UK government during a video speech. (Politico)

4 Here’s what people are really using ChatGPT for
Users are more likely to use it for personal, rather than work-related queries. (WP $)
+ Anthropic says businesses are using AI to automate, not collaborate. (Bloomberg $)
+ Therapists are secretly using ChatGPT. Clients are triggered. (MIT Technology Review)

5 How China’s Hangzhou became a global AI hub
Spawning not just Alibaba, but DeepSeek too. (WSJ $)
+ China and the US are completely dominating the global AI race. (Rest of World)
+ How DeepSeek ripped up the AI playbook. (MIT Technology Review)

6 Driverless car fleets could plunge US cities into traffic chaos
Are we really prepared? (Vox $)

7 The shipping industry is harnessing AI to fight cargo fires
The risk of deadly fires is rising due to shipments of batteries and other flammable goods. (FT $)

8 Sales of used EVs are sky-rocketing
Buyers are snapping up previously-owned bargains. (NYT $)
+ EV owners won’t be able to drive in carpool lanes any more. (Wired $)

9 A table-top fusion reactor isn’t as crazy as it sounds
This startup is trying to make compact reactors a reality. (Economist $)
+ Inside a fusion energy facility. (MIT Technology Review)

10 How a magnetic field could help clean up space
If we don’t, we could soon lose access to Earth’s low orbit altogether. (IEEE Spectrum)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)

Quote of the day

“If we’re going on a journey, they’re absolutely taking travel sickness tablets immediately. They’re not even considering coming in the car without them.”

—Phil Bellamy, an electric car owner, describes the extreme nausea his daughters experience while riding in his vehicle to the Guardian.

One more thing

Google, Amazon and the problem with Big Tech’s climate claims

Last year, Amazon trumpeted that it had purchased enough clean electricity to cover the energy demands of all its global operations, seven years ahead of its sustainability target.

That news closely followed Google’s acknowledgment that the soaring energy demands of its AI operations helped ratchet up its corporate emissions by 13% last year—and that it had backed away from claims that it was already carbon neutral.

If you were to take the announcements at face value, you’d be forgiven for believing that Google is stumbling while Amazon is speeding ahead in the race to clean up climate pollution.

But while both companies are coming up short in their own ways, Google’s approach to driving down greenhouse-gas emissions is now arguably more defensible. To learn why, read our story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Steven Spielberg was just 26 when he made Jaws? The more you know.
+ This tiny car’s huge racing track journey is completely hypnotic.
+ Easy dinner recipes? Yes please.
+ This archive of thousands of historical children’s books is a real treasure trove—and completely free to read.

Did Google Just Prevent Rank Tracking?

Google’s default search results list 10 organic listings per page. Yet adding &num=100 to the search result URL would show 100 listings instead of 10. It was one of Google’s many specialized search “operators” — until now.

This week, Google dropped support for the &num=100 parameter. It’s a telling move. Many search pros speculate the aim is to restrict AI bots that use the parameter to perform so-called fan-out searches. The collateral damage falls on search engine ranking tools, which have long used the parameter to scrape results for keywords. Many of those tools no longer function, at least for now.
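Mechanically, the change turns one request into ten. Here is a minimal sketch, assuming the standard `q`, `num`, and `start` Google search URL parameters (the URLs are illustrative only; actually scraping Google results is a separate matter governed by its terms of service):

```python
from urllib.parse import urlencode

BASE = "https://www.google.com/search"

def single_request_url(query: str) -> str:
    # Old approach: one request could return 100 listings via &num=100.
    return f"{BASE}?{urlencode({'q': query, 'num': 100})}"

def paginated_urls(query: str, total: int = 100, per_page: int = 10) -> list[str]:
    # New reality: page through results 10 at a time with &start=.
    return [
        f"{BASE}?{urlencode({'q': query, 'start': offset})}"
        for offset in range(0, total, per_page)
    ]

print(single_request_url("seo tools"))   # one request for 100 listings
print(len(paginated_urls("seo tools"))) # 10 requests for the same coverage
```

The tenfold request count, not any single technical hurdle, is what drives the cost increase that tool vendors are now describing.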

Surprisingly, the move affected Performance data in Search Console. Most website owners now see increases in average positions and declines in the number of impressions.

[Screenshot: Search Console Performance report. Source: Search Console]

Google has provided no explanation. Presumably the changes in Performance data stem from traffic generated by third-party rank-tracking bots, not humans. That is the unexpected, huge takeaway: Search Console data has, at least partially, included bot activity.

In other words, the lost “Impressions” were listings shown to bot scrapers, not human searchers. The “Average Position” metric is closely tied to “Impressions,” as Search Console records the topmost position of a URL as seen by searchers. Impressions decline when those “searchers” turn out to be bots.

Thus organic performance data in Search Console now contains more human impressions and fewer bot views. The data better reflects actual consumers viewing the listings.

The data remains skewed for top-ranking URLs because page 1 of search results is still accessible to bots, although I know of no way to quantify bot searches versus those of humans.
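Why would removing bot traffic move both metrics at once? A toy model of an impression-weighted average position makes the direction clear (the numbers are invented for illustration; Search Console’s real aggregation is more involved):

```python
def average_position(impressions: list[int]) -> float:
    # Each entry is the position at which one impression was recorded.
    return sum(impressions) / len(impressions)

# Humans mostly see page 1; bots loading 100 results "see" deep positions too.
human_impressions = [3, 5, 7, 9]
bot_impressions = [38, 52, 71, 94]

before = average_position(human_impressions + bot_impressions)
after = average_position(human_impressions)

print(f"impressions: {len(human_impressions + bot_impressions)} -> {len(human_impressions)}")
print(f"avg position: {before:.1f} -> {after:.1f}")  # 34.9 -> 6.0
```

Dropping the deep, bot-only views cuts the impression count and pulls the average toward position 1, i.e., the reported position improves, consistent with the pattern site owners describe.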

Adios Rank Tracking?

Search result scrapers consume significant computing time and energy. Third-party tools will likely raise their prices because, from now on, their bots must “click” to the next page nine times to reach 100 listings.

Tim Soulo, CMO of Ahrefs, a top SEO platform, hinted today on LinkedIn that the tool would likely report rankings on only the first two pages to remain financially sustainable.

So the future of SEO rank tracking is unclear. Likely, tracking organic search positions will become more expensive and produce fewer results (only the top two pages).

What to Do?

  • Wait for the Performance section in Search Console to stabilize
  • Consider SEO platforms that integrate with Search Console. For example, SEO Testing allows customers to import and archive the Performance data and annotate industry updates (such as Google’s &num=100 move) for traffic or rankings impact.

To be sure, rank tracking as we know it may be becoming obsolete. But monitoring organic search positions remains essential for keyword gap analysis and content ideas, among other SEO tasks.

ChatGPT Study: 1 In 4 Conversations Now Seek Information via @sejournal, @MattGSouthern

New research from OpenAI and Harvard finds that “Seeking Information” messages now account for 24% of ChatGPT conversations, up from 14% a year earlier.

This is an NBER working paper (not peer-reviewed), based on consumer ChatGPT plans only, and the study used privacy-preserving methods where no human read user messages.

The working paper analyzes a representative sample of about 1.1 million conversations from May 2024 through June 2025.

By July, ChatGPT reached more than 700 million weekly active users, sending roughly 2.5 billion messages per day, or about 18 billion per week.
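As a sanity check, the daily and weekly figures line up (simple arithmetic, not a figure from the paper itself):

```python
messages_per_day = 2.5e9                     # "roughly 2.5 billion messages per day"
messages_per_week = messages_per_day * 7
print(f"{messages_per_week / 1e9:.1f} billion per week")  # 17.5, reported as "about 18 billion"
```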

What People Use ChatGPT For

The three dominant topics are Practical Guidance, Seeking Information, and Writing, which together account for about 77% of usage.

Practical Guidance remains around 29%. Writing declined from 36% to 24% over the past year. Seeking Information grew from 14% to 24%.

The authors write that Seeking Information “appears to be a very close substitute for web search.”

Asking vs. Doing

The paper classifies intent as Asking, Doing, or Expressing.

About 49% of messages are Asking, 40% are Doing, and 11% are Expressing.

Asking messages “are consistently rated as having higher quality” than the other categories, based on an automated classifier and user feedback.

Work vs. Personal Use

Non-work usage rose from 53% in June 2024 to 73% in June 2025.

At work, Writing is the top use case, representing about 40% of work-related messages. Education is a major use: 10% of all messages involve tutoring or teaching.

Coding And Companionship

Only 4.2% of messages are about computer programming, and 1.9% concern relationships or personal reflection.

Who’s Using It

The study documents rapid global adoption.

Early gender gaps have narrowed, with the share of users having typically feminine names rising from 37% in January 2024 to 52% in July 2025.

Growth in the lowest-income countries has been more than four times that of the highest-income countries.

Why This Matters

If a quarter of conversations are information-seeking, some queries that would have gone to search may go toward conversational tools.

Consider responding to this shift with content that answers questions, while adding expertise that a chatbot can’t replicate. Writing and editing account for a large share of work-related use, which aligns with how teams are already folding AI into content workflows.

Looking Ahead

ChatGPT is becoming a major destination for finding information online.

In addition to the shift toward finding info, it’s worth highlighting that roughly 70% of ChatGPT use is personal, not professional. This means consumer habits are changing broadly.

As this technology grows, it’ll be vital to track how your audience uses AI tools and adjust your content strategy to meet them where they are.


Featured Image: Photo Agency/Shutterstock

Google Modifies Search Results Parameter, Affecting SEO Tools via @sejournal, @MattGSouthern

Google appears to have disabled or is testing the removal of the &num=100 URL parameter that shows 100 results per page.

Reports of the change began around September 10 and quickly spread through the SEO community as rank-tracking tools showed disruptions.

Google hasn’t yet issued a public statement.

What’s Happening

The &num=100 parameter has long been used to retrieve 100 results in one request.

Over the weekend, practitioners noticed that forcing 100 results often no longer works, and in earlier tests it worked only intermittently, which suggested a rollout or experiment.

Ripple Effects On Rank-Tracking Tools

Clark and others documented tools showing missing rankings or error states as the change landed.

Some platforms’ search engine results page (SERP) screenshots and daily sensors briefly stalled or displayed data gaps.

Multiple SEO professionals saw sharp declines in desktop impressions in Google Search Console starting September 10, with average position increasing accordingly.

Clark’s analysis connects the timing of those drops to the &num=100 change. He proposes that earlier desktop impression spikes were partly inflated by bots from SEO and AI analytics tools loading pages with 100 results, which would register many more impressions than a normal 10-result page.

This is a community theory at this stage, not a confirmed Google explanation.

Re-Examining “The Great Decoupling”

Over the past year, many teams reported rising impressions without matching clicks and associated that pattern with AI Overviews.

Clark argues the &num=100 change, and the resulting tool disruptions, offer an alternate explanation for at least part of that decoupling, especially on desktop where most rank tracking happens.

This remains an interpretation until Google comments or provides new reporting filters.

What People Are Saying

Clark wrote about the shift after observing significant drops in desktop impressions across multiple accounts starting on September 10.

He wrote:

“… I’m seeing a noticeable decline in desktop impressions, resulting in a sharp increase in average position.

“This is across many accounts that I have access to and seems to have started around September 10th when the change first begun.”

Keyword Insights said:

“Google has killed the n=100 SERP parameter. Instead of 1 request for 100 SERP results, it now takes 10 requests (10x the cost). This impacts Keyword Insights’ rankings module. We’re reviewing options and will update the platform soon.”

Ryan Jones suggests:

“All of the AI tools scraping Google are going to result in the shutdown of most SEO tools. People are scraping so much, so aggressively for AI that Google is fighting back, and breaking all the SEO rank checkers and SERP scrapers in the process.”

Considerations For SEO teams

Take a closer look at recent Search Console trends.

If you noticed a spike in desktop impressions in late 2024 or early 2025 without clicks, some of those impressions may have been driven by bots. Use the week-over-week changes since September 10 as a new baseline and note any substantial changes in your reporting.

Check with your rank-tracking provider. Some tools are still working with pagination or alternative methods, while others have had gaps and are now fixing them.

Looking Ahead

Google has been contacted for comment but hasn’t confirmed whether this is a temporary test or a permanent shift.

Tool vendors are already adapting, and the community is reevaluating how much of the ‘great decoupling’ story stemmed from methodology rather than user behavior.

We’ll update if Google provides any guidance or if reporting changes show up in Search Console.


Featured Image: Roman Samborskyi/Shutterstock

The CMO Vs. CGO Dilemma: Why The Right Leader Is Critical For Success  via @sejournal, @dannydenhard

Unless you have been living under a rock, you will have seen or experienced the evolution of marketing in recent years, often centered on the marketing leader and the chief marketing officer (CMO) role.

The CMO role has come under fire for performance, for a lack of big-bang delivery, for not moving away from vanity metrics, and for often being overly defensive at the leadership table.

Marketing Leadership Is Harder Than Ever

In coaching CMOs and equivalent titles, several recurring themes emerge, and one stands out across almost all coachees: Your job as a CMO is to be a company executive first and a department leader second.

You are in the C-Suite to represent the business needs, and business needs will trump your department and team needs, often going against how you are wired.

The business needs and the department needs shouldn’t be different. However, they are often at odds, especially when you, as the leader, haven’t put the right guardrails in place. What often happens is that you have followed poorly-thought-through goals and key performance indicators (KPIs), and enabled disconnected objectives and key results (OKRs).

In other scenarios, the CMO role is removed and not replaced, or the CMO title is swapped for VP, director, or “head of” titles, often resulting in the marketing leader not sitting in the C-Suite and regularly reporting one to two steps removed from the CEO.

Enter The Chief Growth Officer (CGO)

There are often reasons why there is a rebrand or title change within the C-Suite:

  • It is deliberate, changing the internal comms of the role. It demonstrates that, as a business, you are moving from marketing to growth or from old to new.
  • The removal of the previous CMO and legal requirements will dictate a change in title or a shift in the role’s job description.
  • If you work at a startup, it is often evolving the narrative with investors, which often helps frame previous struggles and drives the message that you are concentrating on growth.
  • There is also a showing of intent to the industry, often sending out press releases to show you are moving towards growth.

The Difference Between Marketing & Growth

The truth: The difference between marketing and growth setups is either negligible or a huge gulf.

Many confident marketing leaders would set up their teams in a very similar way and set similar goals, with the departments working and operating differently only in small ways.

The “Huge Gulf” Difference In Operating Includes:

  • Removing siloed teams of specialists.
  • Reducing and reframing the former way of defensive actions (Marketers have the hardest job and everyone thinks they can do marketing. Marketers have had to protect doing things that don’t scale and aren’t easily attributable).
  • Moving from not being connected to a truly cross-functional department.
  • Intentional reporting and proactively marketing more frequently and aggressively internally, which is the lost art in many marketing departments.

Like the best marketing organizations, the best growth departments are hyper-connected. They are intertwined cross-functionally, and they are pushing numbers constantly, reporting on the most important metrics and being able to tell the story of how it’s all connected. Reporting which KPI connects to which goal, how each goal connects up to the business objective, and how the brand brings performance.

Why The CGO Role Is Different

Skill Gaps

There are specific skill sets that differentiate successful CGOs from traditional CMOs – areas that often come up and set marketing and growth apart. These include data fluency and the ability to crunch data themselves, an experimentation-first mindset where testing, learning, and iterating are second nature, and revenue attribution baked into everything CGOs do.

Customer Journey Ownership

Many CGOs are taking ownership of the entire customer lifecycle, and are happy to jump into product analysis and request missing product feature builds. There are many CMOs who struggle with the shift from leads and marketing qualified leads (MQLs) to customer lifetime values (CLVs).

Technology Integration

Often, CGOs have a greater understanding of tech stacks and the investment required in technical tools, and are more than comfortable working directly with product and engineering teams. Often the Achilles’ heel of CMOs.

Measurement Evolution

Growth leaders will often have sophisticated attribution models and real-time performance dashboards, focusing on performance across the board and being on top of numbers. Many CMOs can struggle with getting into the weeds of data and being able to talk confidently with the executive committee members.

External Stakeholder Management

CGOs will often have direct relationships with investors and board members, whereas “traditional CMOs” are regularly disconnected and have limited relationships with important management and investors.

Growth Department Challenges

In coaching CGOs, there are unique pressures that emerge in their sessions. The business requires its growth department to be accountable for every number and drive business performance through (almost all) marketing activities. No easy task.

The growth leader must evolve the former marketing approach into a fresh growth approach, which requires a new culture of performance, a tactical refresh, and a dedicated approach within the department’s teams. Traditional disciplines following historical goals and tactics have to be transformed into the new growth approach. It’s no mean feat, especially in long-serving teams and traditional businesses.

The Long-Term Impact

Having built growth departments, holding both CMO and CGO titles, many long-term impacts are overlooked:

  • Stagnating Careers: Many team members can see their career stagnate if they are not brought onto the growth journey, and can feel because of their discipline, they are not considered a performance channel.
  • Specialist Struggles: In many marketing departments, there is a larger number of specialists and many specialists struggle with more integrated ways of working. It will be important for specialists to attempt to learn other skills and appreciate their generalist colleagues who will rely on them. Specialists are often those impacted most by the “marketing to growth” move.
  • Generalist Growth: Generalists are a crucial part of the move towards growth, often being relied upon to act as the glue and as the bridge. Generalists will need to understand the plan and connect with their specialist department colleagues, and help to shape and reshape.
  • Team Members Lost In The Transition: In any changeover, there will be team members who get lost. They will report to or through new managers, and will drift or will feel lost, and their performance will be hit. It is critical that all team members understand their plan and feel they are brought on the journey. Many middle managers are actually lost first. Ensure you keep checking in and have a plan co-created with the department lead.
  • Minding The Gap: The gap between teams can grow, and many teams can struggle to adapt to the change quickly enough. This also occurs when performance-based CGOs can overlook brand and retention teams.
  • Cultural Issues: Humans are averse to change. Now, opting out is the default, not opting in. It is on the team leads and the department head to bring everyone on the journey and make the hard decisions when members will not opt in.

The Path Forward: Lead Your Marketing Leadership Evolution

The shift from CMO to CGO isn’t just about changing titles or acting differently; it’s about fundamentally reimagining how marketing drives business growth.

For marketing leaders reading this, the question isn’t whether this evolution will happen, but how quickly you can adapt to lead the charge for departmental and business success.

Something I share in coaching is, if you’re a current CMO (or equivalent), you should step back and ask yourself the following questions:

  1. Are you already operating as a “CGO”?
  2. Are you deeply embedded in revenue conversations?
  3. Are you able to connect and drive cross-functional alignment and drive change?
  4. Do you positively obsess over business metrics that matter beyond your department?

If the answer is yes, you’re already on the right path. If not, it’s time to evolve before the decision is made above you or for you.

If this fills you with dread, then I can only be direct: You will have to learn to change your approach or get used to feeling the heat of business evolution.

For organizations considering this transition, remember that the best CGOs don’t just inherit marketing teams; they proactively transform them.

They build a culture where every team member understands their direct impact on business growth, where specialists learn to think and operate as generalists, and where the entire department becomes a revenue-generating engine rather than being considered a cost center.

Smart marketing leaders can also lead this transformation, but being able to prove they can evolve themselves and the people around them to this new way of working is critically important. A word to the wise: Do not put yourself forward without knowing you will be an essential leader in this new operating model, and that when it struggles, you will be the leader they look to to get the new system back on track.

The companies that get this transition right will see marketing finally claim its rightful seat (back) at the strategic table.

Those that don’t will risk relegating their marketing function to tactical execution and will see many of their competitors pull ahead with integrated growth strategies.

The choice now is yours: Evolve your marketing leadership to meet the demands of modern business, or watch your competitors rewrite the rules of growth while you struggle with metrics and with influencing your business cross-functionally.

The future belongs to leaders who can bridge the gap between marketing’s art and growth’s science. The title will change and revert, but the question is: Will you be one of the modern marketing leaders, or could you be left behind?


Featured Image: Anton Vierietin/Shutterstock

WP Engine Vs. Automattic: Rulings Preserve WP Engine’s Lawsuit via @sejournal, @martinibuster

The judge overseeing the legal battle between WP Engine and Automattic/Matt Mullenweg issued a ruling that fully dismissed two of WP Engine’s claims, allowed several to proceed, and gave WP Engine the chance to amend others.

Nine Claims Allowed To Proceed – One Partially Survives

Counts 1 & 2

  • Count 1: Intentional Interference with Contractual Relations
  • Count 2: Intentional Interference with Prospective Economic Advantage

Those two counts survived the motion to dismiss. That means WP Engine can try to prove that Automattic/Mullenweg interfered with its contracts and business opportunities, showing that the judge didn’t throw out WP Engine’s entire “you’re sabotaging our business” approach. If WP Engine wins on these counts, it could be eligible to receive damages.

In total, the judge’s order allowed nine claims to proceed and one to partially survive.

These are the remaining claims that survived and are allowed to proceed:

  • CFAA Unauthorized Access (Count 19):
    Tied to allegations that Automattic and Mullenweg covertly replaced WP Engine’s ACF plugin with their own SCF plugin on customer sites without authorization.
  • Unfair Competition (Count 5)
    Connected to claims that Automattic’s conduct, including unauthorized plugin replacement and trademark issues, amounted to unlawful and unfair business practices under California law.
  • Defamation (Count 9) & Trade Libel (Count 10)
    Statements on WordPress.org alleging WP Engine offered a “cheap knock-off” of WordPress and that WP Engine delivered a “bastardized simulacra of WordPress’s GPL code.”
  • Slander (Count 11):
    Based on public remarks Mullenweg made at WordCamp US and in a livestreamed interview where Mullenweg described WP Engine as “parasitic” and damaging to the open-source community.
  • Lanham Act (Count 17: Unfair Competition) & Lanham Act (Count 18: False Advertising)
    Automattic and Mullenweg filed a motion to partially dismiss these counts but the motion was not granted, so these two counts move forward.

This is the claim that partially survived:

Promissory Estoppel (Count 6)
This is based on specific promises, such as free plugin hosting on wordpress.org, which the court found definite enough to proceed, while broader statements like “everyone is welcome” were too vague to support the claim.

Two Claims Dismissed With Leave To Amend

The judge dismissed two of the claims with “leave to amend,” which means the court found an issue with how WP Engine pleaded their claims. The claims were not legally sufficient, but the judge gave WP Engine the option to update its complaint to fix the problems. If WP Engine amends successfully, those claims can return to the case.

The two claims dismissed with leave to amend are:

1. Antitrust claims of monopolization, attempted monopolization, and illegal tying (Sherman Act & Cartwright Act).

On the antitrust claims, the Court found WP Engine failed to define a relevant market, stating:

“…consumers entering the WordPress ecosystem by electing a WordPress web content management system would know they were locked-in to WordPress aftermarkets. Mullenweg’s purported deception and extortionate acts did not change that fundamental operating principle of the WordPress marketplace.”

2. CFAA extortion claim (Count 3): WP Engine alleged Automattic threatened to block wordpress.org access and demanded licensing fees.

Regarding the extortion claims, WP Engine alleged that Automattic and Mullenweg violated the Computer Fraud and Abuse Act (CFAA) by threatening to block WP Engine’s access to wordpress.org and demanding licensing fees.

The Court dismissed this claim with leave to amend, finding the allegations did not sufficiently establish “extortion” under CFAA standards. The judge noted that merely threatening to block access, even coupled with demands for licensing, did not meet the statutory requirements as pled. However, WP Engine has been given time to amend the complaint (“with leave to amend”).

Two Claims Fully Dismissed

Two of WP Engine’s claims were fully dismissed:

  • Count 4: Attempted Extortion (California Penal Code)
  • Count 16: Trademark Misuse

Count 4
Count 4 was dismissed because the California Penal Code allows government prosecutors to bring criminal charges for attempted extortion, but it does not give private parties like WP Engine the right to sue under that statute. The dismissal was not about whether Automattic’s conduct could be considered extortion but about whether WP Engine had the legal authority to use that law in a civil case.

Count 16
The court dismissed Count 16 because trademark misuse is only recognized as a defense, not as a lawsuit that can be filed on its own. WP Engine may still raise trademark misuse later if Automattic tries to enforce trademarks against it.

The exact wording is:

“With no authority from WPEngine that authorizes pleading declaratory judgment of trademark misuse as a standalone cause of action rather than an affirmative defense, the Court GRANTS Defendants’ motion to dismiss Count 16, without prejudice to WPEngine asserting it as an affirmative defense if appropriate later in this litigation.”

Post By Matt Mullenweg About The Ruling

Automattic CEO and WordPress co-founder Matt Mullenweg posted an upbeat blog post about the court ruling that offered a simplified summary of the court order, which is fine, but simplification can leave out details. He’s right that the decision narrows the case and that the attempted extortion claim is out for good.

He wrote:

“…the court dismissed several of WP Engine and Silver Lake’s most serious claims — antitrust, monopolization, and extortion have been knocked out!”

The attempted extortion under California Penal Code (Count 4) was indeed “knocked out.” But the Computer Fraud and Abuse Act (CFAA) extortion claim (Count 3) was dismissed with leave to amend, meaning WP Engine has the opportunity to try again.

The antitrust and monopolization claims (Counts 12–15) were also dismissed but with leave to amend, meaning they too are not permanently gone.

His post is technically correct.

But the simplification leaves out what the judge allowed to move forward:

Automattic’s motion to dismiss Count 1 (intentional interference with contractual relations) and Count 2 (intentional interference with prospective economic relations) were denied, and both will move forward, potentially making WP Engine eligible to receive damages if they win on these counts.

Then there are the others that are moving forward:

  • CFAA (Count 19): This is significant. It alleges Automattic covertly swapped WP Engine’s widely-used ACF plugin with its own SCF plugin on customer sites without consent. The court found these allegations plausible enough to move forward
  • Unfair Competition (Count 5): Connected to claims that Automattic’s conduct, including unauthorized plugin replacement and trademark issues, amounted to unlawful and unfair business practices under California law. (The court specifically pointed to the surviving CFAA and Lanham Act claims as the legal basis for letting this proceed.)
  • Defamation (Count 9) & Trade Libel (Count 10): Based on statements on WordPress.org alleging WP Engine offered a “cheap knock-off” of WordPress and that WP Engine delivered a “bastardized simulacra of WordPress’s GPL code.”
  • Slander (Count 11): Grounded in public remarks Mullenweg made at WordCamp US and in a livestreamed interview where he described WP Engine as “parasitic” and damaging to the open-source community.
  • Lanham Act (Count 17: Unfair Competition) & Lanham Act (Count 18: False Advertising): Defendants sought partial dismissal, but the court declined. Both counts remain live and move forward.

Featured Image by Shutterstock/Kaspars Grinvalds

When Advertising Shifts To Prompts, What Should Advertisers Do? via @sejournal, @siliconvallaeys

When I last wrote about Google AI Mode, my focus was on the big differentiators: conversational prompts, memory-driven personalization, and the crucial pivot from keywords to context.

As we see with the Q2 ad platform financial results below, this shift is rapidly reshaping performance advertising. While AI Mode means Google has to rethink how it makes money, it forces us advertisers to rethink something even more fundamental: our entire strategy.

In the article about AI Mode, I laid out how prompts are different from keywords, why “synthetic keywords” are really just a temporary band-aid, and how fewer clicks might just challenge the age-old cost-per-click (CPC) revenue model.

This follow-up is about what these changes truly mean for us as advertisers, and why holding onto that keyword-era mindset could cost us our competitive edge.

The Great Rewiring Of Search

The biggest shift since we first got keyword-targeted online advertising is now in full swing. People aren’t searching with those relatively concise keywords anymore, the ones we optimized for how Google used to weigh certain words in a query.

Large language models (LLMs) have pretty much removed the shackles from the search bar. Now, users can fire off prompts with hundreds of words, and add even more context.

Think about the 400,000-token context window of GPT-5, which amounts to hundreds of thousands of words. Thankfully, most people don’t need that much space to explain what they want, but they are speaking in full sentences now, stutters and all.

Google’s internal “ads in AI Mode” document shares that early testers of AI Mode are asking queries that are two to three times as long as traditional searches on Google.

And thanks to LLMs’ multi-modal capabilities, users are searching with images (Google reports 20 billion Lens searches per month), drawing sketches, and even sending video. They’re finding what they need in entirely new ways.

Increasingly, users aren’t just looking for a list of what might be relevant. They expect a guided answer from the AI, one that summarizes options based on their personal preferences. People are asking AI to help them decide, not just to find.

And that fundamental change in user behavior is now reshaping the very platforms where these searches happen, starting with Google.

The Impact On Google As The Main Ads Platform

All of this definitely poses a threat to Google’s primary revenue stream. But as I mentioned in a LinkedIn post, the traffic didn’t vanish; it just moved.

Users didn’t ditch Google; they simply stopped using it the way they did when keywords were king. Plus, we’re seeing new players emerge, and search itself has fragmented.

This creates a fresh challenge for us advertisers: How do we design campaigns that actually perform when intent originates in these wildly new ways?

What Q2 Earnings Reports Told Us About AI In Search

The Q2 earnings calls were packed with GenAI details. Some of the most jaw-dropping figures involved the expected infrastructure investments.

Microsoft announced plans to spend an eye-watering $30 billion on capital expenditures in the coming quarter, and Alphabet estimated an $85 billion budget for the next year. I guess we’ll all be clicking a lot of ads to help pay for that. So, where will those ads come from when keywords are slowly being replaced by prompts?

Google shared some numbers to illustrate the scale of this shift. AI Overviews already reach 2 billion users a month. AI Mode itself is up to 100 million. The real question is, how is AI actually enabling better ads, and thus improving monetization?

Google reports:

  • Over 90 Performance Max improvements in the past year drove 10%+ more conversions and value.
  • Google’s AI Max for Search campaigns show a 27% lift in conversions or conversion value over exact- and phrase-match keywords.

Microsoft Ads tells a similar story. In Q2 2025, it reported:

  • $13 billion in AI-related ad revenue.
  • Copilot-powered ads drove 2.3 times more conversions than traditional formats.
  • Users were 53% more likely to convert within 30 minutes.

So, what’s an advertiser to do with all this?

What Advertisers Should Do

As I shared recently in a conversation with Kasim Aslam, these ecosystems are becoming intent originators. That old “search bar” is now a conversation, a screenshot, or even a voice command.

If your campaigns are still relying on waiting for someone to type a query, you’re showing up to the party late. Smart advertisers don’t just respond to intent; they predict it and position for it.

But how? Well, take a look at the Google products that are driving results for advertisers: They’re the newest AI-first offerings. Performance Max, for example, is keywordless advertising driven by feeds, creative, and audiences.

Another vital step in adapting to this shift is AI Max, which I’d call the least restrictive form of keyword advertising.

It blends elements of Dynamic Search Ads (DSAs), automatically created assets, and super broad keywords. This allows your ads to show up no matter how people search, even if they’re using those sprawling, multi-part prompts.

Sure, advertisers can still use today’s best practices, like reviewing search term reports and automatically created assets, then adding negatives or exclusions for the irrelevant ones. But let’s be honest, that’s a short-term, old-model approach.
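
As a sketch of that triage workflow, here’s a minimal Python pass over a hypothetical search term export that flags zero-conversion terms with meaningful spend as negative-keyword candidates. The column names, sample data, and spend threshold are all illustrative, not a real report format:

```python
import csv
from io import StringIO

# Hypothetical export shape; real search term reports carry similar columns.
report_csv = """search_term,clicks,cost,conversions
buy trail running shoes,40,62.10,5
free shoe repair tips,55,48.30,0
running shoe size chart,12,9.80,0
"""

NEGATIVE_COST_THRESHOLD = 20.0  # illustrative cutoff for "meaningful spend"

# Flag terms that cost money but never converted as candidates to review.
negative_candidates = [
    row["search_term"]
    for row in csv.DictReader(StringIO(report_csv))
    if int(row["conversions"]) == 0 and float(row["cost"]) >= NEGATIVE_COST_THRESHOLD
]
print(negative_candidates)  # terms worth reviewing as negatives
```

Note the low-spend zero-conversion term is left alone: the point is to review candidates, not auto-exclude everything that hasn’t converted yet.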

As AI gains memory and contextual understanding, ads will be shown based on scenarios and user intent that isn’t even explicitly expressed.

Relying solely on negatives won’t cut it. The future demands that advertisers focus on getting involved earlier in the decision-making process and making sure the AI has all the right information to advocate for their brand.

Keywords Aren’t The Lever They Once Were

In the AI Mode era, prompts aren’t just simple queries; they’re rich, multi-turn conversations packed with context.

As I outlined in my last article, these interactions can pull in past sessions, images, and deeply personal preferences. No keyword list in the world can capture that level of nuance.

Tinuiti’s Q2 benchmark report shows Performance Max accounts for 59% of Shopping ad spend and delivers 18% higher click-through rates. This is a clear illustration that the platform is taking control of targeting.

And when structured feeds plus dynamic creative drive a 27% lift in conversions according to Google data, it’s because the creative itself is doing the targeting.

Many of these AI-guided journeys happen out of sight, which is the biggest threat to advertisers whose strategies aren’t evolving.

The Real Danger: Invisible Decisions

One of my key takeaways from the AI Mode discussion was the risk of “zero-click” journeys. If the assistant delivers what a user needs inside the conversation, your brand might never get a visit.

According to Adobe Analytics, AI-powered referrals to U.S. retail sites grew 1,200% between July 2024 and February 2025. Traffic from these sources now doubles every 60 days.

These users:

  • Visit 12% more pages per session.
  • Bounce 23% less often.
  • Spend 45% more time browsing (especially in travel and finance verticals).

Even more importantly, 53% of users say they plan to rely on AI tools for shopping going forward.

In short, users are starting their journeys before they reach a traditional search engine, and they’re more engaged when they do. And winning in this environment means rethinking our levers for influence.

Why This Is An Opportunity, Not A Death Sentence

As I argued before, platforms aren’t killing keyword advertising; they’re evolving it. The advertisers winning now are leaning into the new levers:

Signals Over Keywords

  • Use customer relationship management (CRM) data to build high-intent audience lists.
  • Layer first-party data into automated campaign types through conversion value adjustments, audiences, or budget settings.
  • Optimize your product feed with rich attributes so AI has more to work with and knows exactly which products to recommend.
  • Ensure feed hygiene so LLMs have the most current data about your offers.
  • Enhance your website with more data for the LLMs to work with, like data tables and schema markup.
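
As a concrete example of the first two bullets: Google’s Customer Match guidelines require emails to be normalized (trimmed, lowercased, with dots before @gmail.com removed) and SHA-256 hashed before upload. Here’s a minimal Python sketch; the CRM fields and the recency/value thresholds defining “high intent” are illustrative assumptions:

```python
import hashlib

def normalize_email(email: str) -> str:
    """Normalize an address per Customer Match rules: trim, lowercase,
    and drop dots in the local part of gmail.com / googlemail.com addresses."""
    email = email.strip().lower()
    local, _, domain = email.partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

def hash_for_customer_match(email: str) -> str:
    """SHA-256 hash of the normalized address, hex-encoded, ready for upload."""
    return hashlib.sha256(normalize_email(email).encode("utf-8")).hexdigest()

# Build a high-intent list from CRM rows: recent buyers above a value threshold.
# Field names and cutoffs are hypothetical; use your own definition of intent.
crm_rows = [
    {"email": " Jane.Doe@Gmail.com ", "last_order_days": 12, "ltv": 540.0},
    {"email": "bob@example.com", "last_order_days": 200, "ltv": 90.0},
]
high_intent = [
    hash_for_customer_match(row["email"])
    for row in crm_rows
    if row["last_order_days"] <= 30 and row["ltv"] >= 100
]
```

The hashing step matters because the platform matches on hashes, never raw addresses; a list that skips normalization will silently under-match.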

Creative As Targeting

  • Build modular ad assets that AI can assemble dynamically: multiple headlines, descriptions, and images tailored to different audiences.
  • Test variations that align with different stages of the buying journey so you’re likely to show in more contextual scenarios across the entire consumer journey, not only at the end.
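
Those modular assets operate within real platform limits: responsive search ads accept headlines up to 30 characters and descriptions up to 90, and the system assembles combinations for you. A small Python sketch (with hypothetical ad copy) of validating assets and previewing the combination space before upload:

```python
from itertools import product

HEADLINE_MAX, DESCRIPTION_MAX = 30, 90  # Google Ads RSA character limits

# Hypothetical assets written for different stages of the buying journey.
headlines = [
    "Fast Checkout, Free Returns",    # conversion stage
    "Compare Our Top-Rated Models",   # consideration stage
    "What Is an E-Bike Conversion?",  # awareness stage
]
descriptions = [
    "Order today and get free 2-day shipping on every model.",
    "See side-by-side specs and reviews before you decide.",
]

# Validate each asset against the platform limits before upload.
valid_headlines = [h for h in headlines if len(h) <= HEADLINE_MAX]
valid_descriptions = [d for d in descriptions if len(d) <= DESCRIPTION_MAX]

# Preview every pairing the system could assemble dynamically.
combos = list(product(valid_headlines, valid_descriptions))
print(f"{len(combos)} possible ad variants")
```

Even a handful of assets multiplies quickly, which is exactly why writing each one to stand alone (no headline that only makes sense next to one specific description) pays off.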

Measurement Beyond Clicks

  • Regularly evaluate the new metrics in Google Ads for AI Max and Performance Max; changes are rolling out frequently, enabling smarter optimizations.
  • Track feed impression share by enabling the relevant extra columns in Google Ads.
  • Monitor how often your products are surfaced in AI-driven recommendations, as with the recently updated AI Max report for “search terms and landing pages from AI Max.”
  • Focus your measurement on how well users are able to complete tasks, not just clicks.

The future isn’t about bidding on a query. It’s about supplying the AI with the best “raw ingredients” so you win the recommendation at the exact moment of decision.

That mindset shift is the real competitive advantage in the AI-first era.

The Bottom Line

My previous AI Mode post was about the mechanics of the shift. This one is about the mindset change required to survive it.

Keywords aren’t vanishing, but their role is shrinking fast. In an AI-driven, context-first search landscape, the brands that thrive will stop obsessing over what the user types and start shaping what the AI recommends.

If you can win that moment, you won’t just get found. You’ll get chosen.

Featured Image: Smile Studio AP/Shutterstock