The Download: regulators are coming for AI companions, and meet our Innovator of 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The looming crackdown on AI companionship

As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin. But another threat entirely—that of kids forming unhealthy bonds with AI—is pulling AI safety out of the academic fringe and into regulators’ crosshairs.

This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that their models contributed to the suicides of two teenagers. A study published in July found that 72% of teenagers have used AI for companionship. And stories about “AI psychosis” have highlighted how endless conversations with chatbots can lead people down delusional spirals.

It’s hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect, but harmful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If you’re interested in reading more about AI companionship, why not check out:

+ AI companions are the final stage of digital addiction—and lawmakers are taking aim. Read the full story.

+ Chatbots are rapidly changing how we connect to each other—and ourselves. We’re never going back. Read the full story.

+ Why GPT-4o’s sudden shutdown last month left people grieving. Read the full story.

+ An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it.

+ OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being. But there’s still a lot we don’t know.

Meet the designer of the world’s fastest whole-genome sequencing method

Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method. Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.

Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1pm ET on Tuesday, September 23.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Childhood vaccination rates are falling across the US
Much of the country no longer has the means to stop the spread of deadly disease. (NBC News)
+ Take a look at the factors driving vaccine hesitancy. (WP $)
+ RFK Jr. is appointing more vaccine skeptics to the CDC advisory panel. (Ars Technica)
+ Why US federal health agencies are abandoning mRNA vaccines. (MIT Technology Review)

2 The US and China have reached a TikTok deal 
Beijing says the spin-off version sold to US investors will still use ByteDance’s algorithm. (FT $)
+ But further details are still pretty scarce. (WP $)
+ The deal may have been fueled by China’s desire for Trump to visit the country. (WSJ $)

3 OpenAI is releasing a version of GPT-5 optimized for agentic coding
It’s a direct rival to Anthropic’s Claude Code and Microsoft’s GitHub Copilot. (TechCrunch)
+ OpenAI says it’s been trained on real-world engineering tasks. (VentureBeat)
+ The second wave of AI coding is here. (MIT Technology Review)

4 The FTC is investigating Ticketmaster’s bot-fighting measures 
It’s probing whether the platform is doing enough to prevent illegal automated reselling. (Bloomberg $)

5 Google has created a new privacy-preserving LLM
VaultGemma uses a technique called differential privacy to reduce the amount of data AI holds onto. (Ars Technica)

6 Space tech firms are fighting it out for NATO contracts
Militaries are willing to branch out and strike deals with commercial vendors. (FT $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

7 Facebook users are receiving their Cambridge Analytica payouts
Don’t spend it all at once! (The Verge)

8 The future of supercomputing could hinge on moon mining missions
Companies are rushing to buy the moon’s resources before mining has even begun. (WP $)

9 What it’s like living with an AI toy
Featuring unsettling conversations galore. (The Guardian)

10 Anthropic’s staff are obsessed with an albino alligator 🐊
As luck would have it, he just happens to be called Claude. (WSJ $)

Quote of the day

“It’s going to mean more infections, more hospitalizations, more disability and more death.”

—Demetre Daskalakis, former director of the CDC’s National Center for Immunization and Respiratory Diseases, explains to the BBC the probable outcomes of America’s current vaccine policy jumble.

One more thing

Robots are bringing new life to extinct species

In the last few years, paleontologists have developed a new trick for turning back time and studying prehistoric animals: building experimental robotic models of them.

In the absence of a living specimen, scientists say, an ambling, flying, swimming, or slithering automaton is the next best thing for studying the behavior of extinct organisms. Here are four examples of robots that are shedding light on creatures of yore.

—Shi En Kim

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ New York City is full of natural life, if you know where to look.
+ This photo of Jim Morrison enjoying a beer for breakfast is the epitome of rock ’n’ roll.
+ How to age like a champion athlete.
+ Would you dare drive the world’s most narrow car?

De-risking investment in AI agents

Automation has become a defining force in the customer experience. Between the chatbots that answer our questions and the recommendation systems that shape our choices, AI-driven tools are now embedded in nearly every interaction. But the latest wave of so-called “agentic AI”—systems that can plan, act, and adapt toward a defined goal—promises to push automation even further.

“Every single person that I’ve spoken to has at least spoken to some sort of GenAI bot on their phones. They expect experiences to be not scripted. It’s almost like we’re not improving customer experience, we’re getting to the point of what customers expect customer experience to be,” says Neeraj Verma, vice president of product management at NICE.

For businesses, the potential is transformative: AI agents that can handle complex service interactions, support employees in real time, and scale seamlessly as customer demands shift. But the move from scripted, deterministic flows to non-deterministic, generative systems brings new challenges. How can you test something that doesn’t always respond the same way twice? How can you balance safety and flexibility when giving an AI system access to core infrastructure? And how can you manage cost, transparency, and ethical risk while still pursuing meaningful returns?

The answers to these questions will determine how, and how quickly, companies embrace the next era of customer experience technology.

Verma argues that the story of customer experience automation over the past decade has been one of shifting expectations—from rigid, deterministic flows to flexible, generative systems. Along the way, businesses have had to rethink how they mitigate risk, implement guardrails, and measure success. The future, Verma suggests, belongs to organizations that focus on outcome-oriented design: tools that work transparently, safely, and at scale.

“I believe that the big winners are going to be the use case companies, the applied AI companies,” says Verma.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Can Writing a Book Grow Your Business?

In a Publishers Weekly article last fall, tech entrepreneur and author Uri Levine says “writing, publishing, marketing, and promoting the book… are somewhat similar to building a startup.”

That’s no surprise. Entrepreneurs and authors have a lot in common: They believe in an idea and want to reach people willing to pay for it. Writing a book, like building an ecommerce business, is risky, requiring vision, dedication, management, and attention to detail. Most startups fold within five years, and only about 4% of books sell more than 1,000 copies.

Even so, a book could reach a large audience, have a lasting impact, and create opportunities such as consulting, speaking, teaching, and new partnerships. Bill Morrison, a real estate salesman turned bestselling author, told Forbes, “I’m the same guy with the same tie, but now everyone is paying attention.”

Considerations

If you’ve thought about writing a book, here are a few questions to consider before taking the (time-consuming) plunge.

  • What is your objective? Are you looking to enhance your reputation, make an impact, generate leads, appear on podcasts, launch a speaking career, or establish yourself as an influencer and thought leader? Is your goal realistic?
  • Who are your target prospects? Do you have a clear picture of an “ideal reader” for your book? You can’t write for everyone the same way, or market your book effectively, until you’ve identified the prospects and where to find them.
  • How will your book be different? Are there other successful books on your topic? What will make your book stand out? What makes your perspective unique and valuable?
  • How will your book benefit readers? What problem will your book solve? Will it change how readers think, or empower them to do something they couldn’t before? Will they feel inspired?
  • Is your topic date-sensitive or evergreen? The answer will likely guide your approach to writing, publishing, and marketing. Does success require quick publication while a trend is still hot?
  • How much time and money will you invest? Are your budget and capacity in line with the goals? Expect to spend at least several months and a few thousand dollars for idea development, writing, editing, publishing, and marketing, even for a short, self-published ebook with a niche audience. A more ambitious project can take a year or more to write; freelance editing, design, and publicity (plus printing and distribution services) may require several months and cost $10,000 to $50,000. Traditional publishers can take longer and often require authors to do most of the marketing.

Josh Bernoff is a serial business-book author and consultant. In an April 2025 post, he stated the key reasons business books fail are unclear goals and audience focus, little differentiation from competing titles, and no marketing. To succeed, he says, authors must define their objectives and audience, invest in editorial quality, and market strategically. In other words, treat your book just like a business.

Ashley Bernardi, a media relations specialist for authors, agrees. She told a Forbes writer, “The most successful authors think like business people. There is a strategy behind the book, multiple revenue streams, and the author is the best marketing weapon. Not the publisher, not the PR firm, and not the agent, but the author.”

Author Survey

What could a book do for you?

In 2024, a group of four author-service firms, including Josh Bernoff’s, surveyed “350 authors and prospective authors, of which 301 had published a nonfiction book. Two-thirds of them had published multiple books.”

The results, published as the “Business Book ROI Study” in a PDF, found that 89% of respondents said writing a book was a good decision, and nearly two-thirds reported profitability, despite many having spent more and sold less than they expected.

About a third increased their speaking and consulting earnings, and almost one in five had more than $250,000 in book-related revenues. Other benefits included growth in credibility, personal brands, and social media followings.

Finally, there are many ways to repurpose content from a book into articles, graphics, videos, case studies, or excerpts for use in promoting yourself and your business.

YouTube Monetization Updates Across Long-Form, Shorts, & Live via @sejournal, @MattGSouthern

YouTube has announced a suite of monetization updates designed to help creators diversify their revenue streams.

Key updates include dynamic sponsorship for long-form videos, brand linking in Shorts, AI-powered product tagging for Shopping, and side-by-side live ads.

The updates come as YouTube revealed it paid out over $100 billion to creators, artists, and media companies globally over the past four years.

What’s New

Dynamic Sponsorship

YouTube is introducing a new way for creators to manage sponsorships in their videos.

Creators will soon be able to dynamically add brand segments to their content, rather than having to permanently embed them.

YouTube’s announcement reads:

“This new format enables you to remove the sponsorship when the deal is complete, resell the slot to another brand or eventually sell the same slot to multiple brands in different markets — transforming your videos into living assets to grow your business. Creators can choose the perfect moment to insert the branded segment, and will see detailed performance insights directly in YouTube Studio, which can also be shared with the brand.”

Testing starts with a small group early next year.

Shorts Links

YouTube is adding the ability to link directly to a sponsor’s website from Shorts.

YouTube states:

“For Shorts creators, they’ll soon be able to add a link to a brand’s site specifically for brand deals. This will make it easier for viewers to discover and buy products, while giving creators a powerful way to drive results for brand partners.”

Shopping

YouTube Shopping is getting a series of updates to improve the shopping experience for both creators and viewers.

The platform is adding automatic timestamps that show when products are available in videos, making it simpler for viewers to find and buy featured items.

YouTube is also automating product selection in Merchant Center, which reduces the manual work creators have to do to tag and link products to their content.

YouTube’s announcement reads:

“We know tagging products can be time-consuming, so to make the experience better for creators, we’re leaning on an AI-powered system to identify the optimal moment a product is mentioned and automatically display the product tag at that time, capturing viewer interest when it’s highest. We’ll also begin testing the ability to automatically identify and tag all eligible products mentioned in your video later this year.”

These updates are planned for later this year.

Live Streaming

Live streaming, which draws more than 30 percent of YouTube’s daily logged-in viewers, according to company data, is getting new features to help creators earn more money.

YouTube is rolling out live ads that show up next to streams, rather than interrupting them.

YouTube’s announcement reads:

“The new side-by-side ads are a less intrusive format for viewers, while helping creators get paid without pulling their audience away.”

YouTube is also introducing a feature that lets live streams transition directly to member communities and channel memberships.

The company adds:

“… We’re rolling out a new feature that allows channel membership creators to easily transition from public to members-only livestreams, without disruption. This makes it easy to create premium, members-only content, while strengthening your community and attracting new paid members.”

Why This Matters

These updates are a move toward giving creators more control over how they make money from their content, while also giving brands more ways to partner with them.

By opening up new revenue streams beyond traditional pre-roll and mid-roll ads, YouTube is equipping creators with tools that could make the platform more attractive for full-time publishing.

Personas Are Critical For AI search via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

Here’s what I’m covering this week: How to build user personas for SEO from data you already have on hand.

You can’t treat personas as a “brand exercise” anymore.

In the AI-search era, prompts don’t just tell you what users want; they reveal who’s asking and under what constraints.

If your pages don’t match the person behind the query and connect with them quickly – their role, the risks and concerns they have, and the proof they require to resolve the intent – you’re not likely to win the click or the conversion.

It’s time to not only pay attention and listen to your customers, but also optimize for their behavioral patterns.

Search used to be simple: queries = intent. You matched a keyword to a page and called it a day.

Personas were a nice-to-have, often useful for ads, creative, or UX decisions, but mostly considered irrelevant to organic visibility or growth.

Not anymore.

Longer prompts and personalized results don’t just express what someone wants; they also expose who they are and the constraints they’re operating under.

AIOs and AI chats act as a preview layer and borrow trust from known brands. However, blue links still close the deal when your content speaks to the person behind the prompt.

If that sounds like hard work, it is. And it’s why most teams stall when implementing search personas across their strategy.

  • Personas can feel expensive, generic, academic, or agency-driven.
  • The old persona PDFs your brand invested in 3-5 years ago are dated – or missing entirely.
  • The resources, time, and knowledge it takes to build user personas are still significant blockers to getting the work done.

In this memo, I’ll show you how to build lean, practical, LLM-ready user personas for SEO – using the data you already have, shaped by real behavioral insights – so your pages are chosen when it counts.

While there are a few ways you could do this, and several really excellent articles out there on SEO personas this past year, this is the approach I take with my clients.

Most legacy persona decks were built for branding, not for search operators.

They don’t tell your writers, SEOs, or PMs what to do next, so they get ignored by your team after they’re created.

Mistake #1: Demographics ≠ Decisions

Classic user personas for SEO and marketing overfocused on demographics, which can give some surface-level insights into stereotypical behavior for certain groups.

But demographics don’t necessarily help your brand stand out against your competitors. And demographics don’t offer you the full picture.

Mistake #2: A Static PDF Or Shared Doc Ages Fast

If your personas were created once and never reanalyzed or updated again, it’s likely they got lost in Google Drive or Dropbox purgatory.

If there’s no owner working to ensure they’re implemented across production, there’s no feedback loop to understand if they’re working or if something needs to change.

Mistake #3: Pretty Delivered Decks, No Actionable Insights

Those well-designed persona deliverables look great, but when they aren’t tied to briefs, citations, trust signals, your content calendar, etc., they end up siloed from production. If a persona can’t shape a prompt or a page, it won’t shape any of your outcomes.

In addition to the fact that classic personas weren’t built to be implemented across your search strategy, AI has shifted us from optimizing for intent to optimizing for identity and trust. In last week’s memo, I shared the following:

The most significant, stand-out finding from that study: People use AI Overviews to get oriented and save time. Then, for any search that involves a transaction or high-stakes decision-making, searchers validate outside Google, usually with trusted brands or authority domains.

Old world of search optimization: Queries signaled intent. You ranked a page that matched the keyword and intent behind it, and your brand would catch the click. Personas were optional.

New world of search optimization: Prompts expose people, and AI changes how we search. Marketers aren’t just optimizing for search intent or demographics; we’re also optimizing for behavior.

Long AI prompts don’t just say what the user intends – they often reveal who is asking and what constraints or background of knowledge they bring.

For example, if a user prompts ChatGPT something like “I’m a healthcare compliance officer at a mid-sized hospital. Can you draft a checklist for evaluating new SaaS vendors, making sure it covers HIPAA regulations and costs under $50K a year,” then ChatGPT would have background information about the user’s general compliance needs, budget ceilings, risk tolerance, and preferred content formats.

AI systems then personalize summaries and citations around that context.

If your content doesn’t meet the persona’s trust requirements or output preference, it won’t be surfaced.

What that means in practice:

  • Prompts → identity signals. “As a solo marketer on a $2,000 budget…” or “for EU users under GDPR…” = role, constraints, and risk baked into the query.
  • Trust beats length. Classic search results are clicked on, but only when pages show the trust scaffolding a given persona needs for a specific query.
  • Format matters. Some personas want TL;DR and tables; others need demos, community validation (YouTube/Reddit), or primary sources.

So, here’s what to do about it.

You don’t need a five- or six-figure agency study (although those are nice to have).

You need:

  • A collection of your already-existing data.
  • A repeatable process, not a static file.
  • A way to tie personas directly into briefs and prompts.

Turning your own existing data into usable user personas for SEO will equip you to tie personas directly to content briefs and SEO workflows.

Before you start collecting this data, set up an organized way to store it: Google Sheets, Notion, Airtable – whatever your team prefers. Store your custom persona prompt cards there, too, and you can copy and paste from there into ChatGPT & Co. as needed.

The work below isn’t for the faint of heart, but it will change how you prompt LLMs in your AI-powered workflows and your SEO-focused webpages for the better.

  1. Collect and cluster data.
  2. Draft persona prompt cards.
  3. Calibrate in ChatGPT & Co.
  4. Validate with real-world signals.

You’re going to mine several data sources that you already have, both qualitative and quantitative.

Keep in mind, being sloppy during this step means you will not have a good base for an “LLM ready” persona prompt card, which I’ll discuss in Step 2.

Attributes to capture for an “LLM-ready persona”:

  • Jobs-to-be-done (top 3).
  • Role and seniority.
  • Buying triggers + blockers (think budget, IT/legal constraints, risk).
  • 10-20 example questions at TOFU, MOFU, BOFU stages.
  • Trust cues (creators, domains, formats).
  • Output preferences (depth, format, tone).

Where AIO validation style data comes in:

Last week, we discussed four distinct AIO validation patterns verified within the AIO usability study: Efficiency-first, Trust-driven, Comparative, and Skeptical rejection.

If you want to incorporate this in your persona research – and I’d advise that you should – you’re going to look for:

  • Hesitation triggers across interactions with your brand: What makes them pause or refine their question (whether on a sales call or a heat map recording).
  • Click-out anchors: Which authority brands they use to validate (PayPal, NIH, Mayo Clinic, Stripe, KBB, etc.); use SparkToro to find this information.
  • Evidence threshold: What proof ends hesitation for your user or different personas? (Citations, official terminology, dated reviews, side-by-side tables, videos).
  • Device/age nuance: Younger and mobile users → faster AIO acceptance; older cohorts → blue links and authority domains win clicks.

Below, I’ll walk you through where to find this information.

Quantitative Inputs

1. Your GSC queries hold a wealth of info. Split by TOFU/MOFU/BOFU, branded vs non-branded, and country. Then, use a regex to map question-style queries and see who’s really searching at each stage.

Below is the regex I like to use, which I discussed in Is AI cutting into your SEO conversions?. It also works for this task:

(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|list(?:s)?|comparison|vs|difference|benefits|advantages|alternatives)\b.*
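
To put the regex to work, here’s a minimal Python sketch; the gsc_queries.csv file name and its query column are hypothetical stand-ins for your own GSC export:

import csv
import re

# The question-style pattern from above; (?i) makes it case-insensitive.
QUESTION_RE = re.compile(
    r"(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|"
    r"guide|tutorial|course|learn|examples?|definition|meaning|checklist|"
    r"framework|template|tips?|ideas?|best|top|list(?:s)?|comparison|vs|"
    r"difference|benefits|advantages|alternatives)\b.*"
)

# Hypothetical GSC export with a "query" column.
with open("gsc_queries.csv", newline="", encoding="utf-8") as f:
    queries = [row["query"] for row in csv.DictReader(f)]

question_queries = [q for q in queries if QUESTION_RE.match(q)]
print(f"{len(question_queries)} of {len(queries)} queries are question-style")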

2. On-Site Search Logs. These are the records of what visitors type into your website’s own search bar (not Google).

Extract exact phrasing of problems and “missing content” signals (like zero results, refined searches, or high exits/no clicks).

Plus, the wording visitors use reveals jobs-to-be-done, constraints, and vocabulary you should mirror on the page. Flag repeat questions as latent questions to resolve.

3. Support Tickets, CRM Notes, Win/Loss Analysis. Convert objections, blockers, and “how do I…” threads into searchable intents and hesitation themes.

Mine the following data from your records:

  • Support: Ticket titles, first message, last agent note, resolution summary.
  • CRM: Opportunity notes, metrics, decision criteria, lost-reason text.
  • Win/Loss: Objection snapshots, competitor cited, decision drivers, de-risking asks.
  • Context (if available): buyer role, segment (SMB/MM/ENT), region, product line, funnel stage.

Once gathered, compile and analyze to distill patterns.
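
As one way to distill those patterns, here is a minimal clustering sketch; it assumes scikit-learn is installed, and the sample records are hypothetical placeholders for your ticket titles, CRM notes, and win/loss snippets:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical records pulled from support, CRM, and win/loss sources.
records = [
    "How do I export my data before canceling?",
    "Does the product cover HIPAA requirements?",
    "Lost deal: legal review took too long",
    "Need SSO before IT will approve the purchase",
    "What does implementation cost for a 50-seat team?",
    "Churned: couldn't prove ROI to the CFO",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(records)

# Cluster into rough intent/hesitation themes; tune n_clusters to your data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(matrix)

for label, text in sorted(zip(kmeans.labels_, records)):
    print(label, text)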

Qualitative Inputs

1. Your sales calls and customer success notes are a wealth of information.

Use AI to analyze transcripts and/or notes to highlight jobs-to-be-done, triggers, blockers, and decision criteria in your customer’s own words.

2. Reddit and social media discussions.

This is where your buyers actually compare options and validate claims; capture the authority anchors (brands/domains) they trust.

3. Community/Slack spaces, email newsletter replies, article comments, short post-purchase or signup surveys.

Mine recurring “stuck points” and vocabulary you should mirror. Bucket recurring themes together and correlate across other data.

Pro tip: Use your topic map as the semantic backbone for all qualitative synthesis – discussed in depth in how to operationalize topic-first SEO. You’d start by locking the parent topics, then layer your personas as lenses: For each parent topic, fan out subtopics by persona, funnel stage, and the “people × problems” you pull from sales calls, CS notes, Reddit/LinkedIn, and community threads. Flag zero-volume/fringe questions on your map as priorities; they deepen authority and often resolve the hesitation themes your notes reveal.

After clustering pain points and recurring queries, you can take it one step further to tag each cluster with an AIO pattern by looking for:

  • Short dwell + 0–1 scroll + no refinements → Efficiency-first validations.
  • Longer dwell + multiple scrolls + hesitation language + authority click-outs → Trust-driven validations.
  • Four to five scrolls + multiple tabs (YouTube/Reddit/vendor) → Comparative validations.
  • Minimal AIO engagement + direct authority clicks (gov/medical/finance) → Skeptical rejection.

Not every team can run a full-blown usability study of the search results for targeted queries and topics, but you can infer many of these behavioral patterns through heatmaps of your own pages that have strong organic visibility.
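
To make those mappings concrete, here is a small illustrative Python classifier; the thresholds are assumptions to calibrate against your own heatmap data, not figures from the study:

def classify_aio_pattern(dwell_seconds, scrolls, refinements,
                         authority_clickouts, tabs_opened):
    """Map per-session behavior to one of the four AIO validation patterns.

    All thresholds below are illustrative assumptions.
    """
    if authority_clickouts and dwell_seconds < 10 and scrolls <= 1:
        return "skeptical_rejection"  # skips the AIO, goes straight to authority
    if scrolls >= 4 and tabs_opened >= 2:
        return "comparative"          # multiple tabs: YouTube/Reddit/vendor
    if scrolls >= 2 and (refinements or authority_clickouts):
        return "trust_driven"         # hesitates, then validates elsewhere
    if dwell_seconds < 30 and scrolls <= 1 and not refinements:
        return "efficiency_first"     # accepts the summary and moves on
    return "unclassified"

print(classify_aio_pattern(dwell_seconds=8, scrolls=0, refinements=0,
                           authority_clickouts=1, tabs_opened=0))
# -> skeptical_rejection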

2. Draft Persona Prompt Cards

Next up, you’ll use this data to create a persona card.

A persona card is a one-page, ready-to-go snapshot of a target user segment that your marketing/SEO team can act on.

Unlike empty or demographic-heavy personas, a persona card ties jobs-to-be-done, constraints, questions, and trust cues directly to how you brief pages, structure proofs, and prompt LLMs.

A persona card ensures your pages and prompts match identity + trust requirements.

What you’re going to do in this step is convert each data-based persona cluster into a one-pager designed to be embedded directly into LLM prompts.

Include input patterns you expect from that persona – and the output format they’d likely want.


Reusable Template: Persona Prompt Card

Drop this at the top of a ChatGPT conversation or save as a snippet.

This is an example template below based on the Growth Memo audience specifically, so you’ll need to not only modify it for your needs, but also tweak it per persona.

You are Kevin Indig advising a [ROLE, SENIORITY] at a [COMPANY TYPE, SIZE, LOCATION].

Objective: [Top 1–2 goals tied to KPIs and timeline]

Context: [Market, constraints, budget guardrails, compliance/IT notes]

Persona question style: [Example inputs they’d type; tone & jargon tolerance] 

Answer format:

- Start with a 3-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to [brevity/depth]; [bullets/narrative]; include [table/chart] if useful.

What to avoid: [Banned claims, fluff, vendor speak] 

Citations: Prefer [domains/creators] and original research when possible.

Example Attribute Sets Using The Growth Memo Audience

Use this card as a starting point, then fill it with your data.

Below is an example of the prompt card with attributes filled for one of the ideal customer profiles (ICP) for the Growth Memo audience.

You are Kevin Indig advising an SEO Lead (Senior) at a Mid-Market B2B SaaS (US/EU).

Objective: Protect and grow organic pipeline in the AI-search era; drive qualified trials/demos in Q4; build durable topic authority.

Context: Competitive category; CMS constraints + limited Eng bandwidth; GDPR/CCPA; security/legal review for pages; budget ≤ $8,000/mo for content + tools; stakeholders: VP Marketing, Content Lead, PMM, RevOps.

Persona question style: “How do I measure topic performance vs keywords?”, “How do I structure entity-based internal linking?”, “What KPIs prove AIO exposure matters?”, “Regex for TOFU/MOFU/BOFU?”, “How to brief comparison pages that AIO cites?” Tone: precise, low-fluff, technical.

AIO validation profile:

- Dominant pattern(s): Trust-driven (primary), Comparative (frameworks/tools); Skeptical for YMYL claims.

- Hesitation triggers: Black-box vendor claims; non-replicable methods; missing citations; unclear risk/effort.

- Click-out anchors: Google Search Central & docs, schema.org, reputable research (Semrush/Ahrefs/SISTRIX/seoClarity), Pew/Ofcom, credible case studies, engineering/product docs.

- SERP feature bias: Skims AIO/snippets to frame, validates via organic authority + primary sources; uses YouTube for demos; largely ignores Ads.

- Evidence threshold: Methodology notes, datasets/replication steps, benchmarks, decision tables, risk trade-offs.

Answer format:

- Start with a three-bullet TL;DR.

- Then give a numbered playbook with 5-7 steps.

- Include 2 proof points (benchmarks/case studies) and 1 calculator/template.

- Flag risks and trade-offs explicitly.

- Keep to brevity + bullets; include a table/chart if useful.

Proof kit to include on-page:

Methodology & data provenance; decision table (framework/tool choice); “best for / not for”; internal-linking map or schema snippet; last-reviewed date; citations to Google docs/primary research; short demo or worksheet (e.g., Topic Coverage Score or KPI tree).

What to avoid:

Vendor-speak; outdated screenshots; cherry-picked wins; unverifiable stats; hand-wavy “AI magic.”

Citations:

Prefer Google Search Central/docs, schema.org, original studies/datasets; reputable tool research (Semrush, Ahrefs, SISTRIX, seoClarity); peer case studies with numbers.

Success signals to watch:

Topic-level lift (impressions/CTR/coverage), assisted conversions from topic clusters, AIO/snippet presence for key topics, authority referrals, demo starts from comparison hubs, reduced content decay, improved crawl/indexation on priority clusters.

3. Calibrate In ChatGPT & Co.

Your goal here is to prove the Persona Prompt Cards actually produce useful answers – and to learn what evidence each persona needs.

Create one Custom Instruction profile per persona, or store each Persona Prompt Card as a prompt snippet you can prepend.
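
If you keep each card as structured data, prepending becomes easy to script. A minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY set; the card fields, model name, and query are placeholders:

from openai import OpenAI

# Hypothetical persona card stored as structured data.
card = {
    "role": "SEO Lead (Senior)",
    "company": "Mid-Market B2B SaaS (US/EU)",
    "objective": "Protect and grow organic pipeline in the AI-search era",
    "context": "CMS constraints; GDPR/CCPA; budget <= $8,000/mo",
    "format": "3-bullet TL;DR, then a numbered playbook with 5-7 steps",
    "avoid": "Vendor-speak, unverifiable stats",
}

# Assemble the Persona Prompt Card as a system message.
system_prompt = (
    f"You are advising a {card['role']} at a {card['company']}.\n"
    f"Objective: {card['objective']}\n"
    f"Context: {card['context']}\n"
    f"Answer format: {card['format']}\n"
    f"What to avoid: {card['avoid']}"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use the model your team calibrates on
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I measure topic performance vs keywords?"},
    ],
)
print(response.choices[0].message.content)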

Run 10-15 real queries per persona. Score answers on clarity, scannability, credibility, and differentiation against your standard.

How to run the prompt card calibration:

  • Set up: Save one Prompt Card per persona.
  • Eval set: 10-15 real queries/persona across TOFU/MOFU/BOFU stages, including two or three YMYL or compliance-based queries, three to four comparisons, and three or four quick how-tos.
  • Ask for structure: Require TL;DR → numbered playbook → table → risks → citations (per the card).
  • Modify it: Add constraints and location variants; ask the same query two ways to test consistency.
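
To keep scoring consistent across personas and runs, a simple log like this hypothetical sketch helps; the criteria mirror the ones above, and the 1-5 scale is an assumption:

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CalibrationRun:
    """One scored answer from a persona-card calibration session."""
    persona: str
    query: str
    scores: dict = field(default_factory=dict)  # criterion -> 1-5 rating

    @property
    def overall(self):
        return mean(self.scores.values())

run = CalibrationRun(
    persona="SEO Lead (Senior)",
    query="How do I structure entity-based internal linking?",
    scores={"clarity": 4, "scannability": 5,
            "credibility": 3, "differentiation": 4},
)
print(f"{run.persona} | {run.query} -> {run.overall:.2f}")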

Once you run sample queries to check for clarity and credibility, modify or upgrade your Persona Card as needed: Add missing trust anchors or evidence the model needed.

Save winning outputs as brief-guiding examples you can paste into drafts.

Log recurring misses (hallucinated stats, undated claims) as acceptance checks for production.

Then, do this for other LLMs that your audience uses. For instance, if your audience leans heavily toward using Perplexity.ai, calibrate your prompt there as well. Make sure to run the prompt card outputs in Google’s AI Mode, too.

4. Validate With Real-World Signals

Watch branded search trends, assisted conversions, and non-Google referrals to see if influence shows up where expected when you publish persona-tuned assets.

And make sure to measure lift by topic, not just per page: Segment performance by topic cluster (GSC regex or GA4 topic dimension). Operationalizing your topic-first SEO strategy discusses how to do this.
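
As a hypothetical sketch of that segmentation, you can bucket GSC queries into topic clusters with simple patterns; the cluster names and regexes below are placeholders for your own topic map:

import re

# Hypothetical topic clusters mapped to query regexes.
TOPIC_CLUSTERS = {
    "internal-linking": re.compile(r"internal.?link", re.I),
    "ai-search": re.compile(r"\b(aio|ai overviews?|ai search)\b", re.I),
    "personas": re.compile(r"\bpersonas?\b", re.I),
}

def cluster_for(query):
    """Return the first matching topic cluster for a query, else 'other'."""
    for topic, pattern in TOPIC_CLUSTERS.items():
        if pattern.search(query):
            return topic
    return "other"

print(cluster_for("how to build user personas for seo"))  # -> personas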

Keep the following in mind when reviewing real-world signals:

  • Review at 30/60/90 days post-ship, and by topic cluster.
  • If Trust-driven pages show high scroll/low conversions → add/upgrade citations and expert reviews and quotes.
  • If Comparative pages get CTR but low demo or sales signups → add a short demo video, “best for / not for” sections, and clearer CTAs.
  • If Efficiency-first pages miss lifts in AIO/snippets → tighten TL;DR, simplify tables, add schema.
  • If Skeptical-rejection-geared pages yield authority traffic but no lift → consider pursuing authority partnerships.
  • Most importantly: redo the exercise every 60-90 days and compare your new personas against the old ones to iterate toward the ideal.

Building user personas for SEO is worth it, and it can be done quickly using in-house data and LLM support.

I challenge you to start with one lean persona this week to test this approach. Refine and expand your approach based on the results you see.

But if you plan to take this persona-building project on, avoid these common missteps:

  • Creating tidy PDFs with zero long-term benefits: Personas that don’t specify core search intents, pain points, and AIO intent patterns won’t move behavior.
  • Winning every SERP feature: This is a waste of time. Optimize your content for the right surface for the dominant behavioral patterns of your target users.
  • Ignoring hesitation: Hesitation is your biggest signal. If you don’t resolve it on-page, the click dies elsewhere.
  • Demographics over jobs-to-be-done: Focusing on characteristics of identity without incorporating behavioral patterns is the old way.


Ask An SEO: High Volumes Or High Authority Evergreen Content? via @sejournal, @rollerblader

This week’s Ask an SEO question comes from an anonymous user:

“Should we still publish high volumes of content, or is it better to invest in fewer, higher-authority evergreen pieces?”

Great question! The answer is always higher-authority content, but not always evergreen if your goal is growth and sustainability. If the goal is quick traffic and a churn-and-burn model, high volume makes sense. More content does not mean more SEO. Sustainable SEO traffic via content comes from providing a proper user experience, which includes making sure the other topics on the site are helpful to a user.

Why High Volumes Of Content Don’t Work Long Term

The idea of creating high volumes of content to get traffic is a strategy where you focus a page on specific keywords and phrases and optimize the page for these phrases. When Google launched BERT and MUM, this strategy (which was already outdated) got its final nail in the coffin. These updates to Google’s systems looked at the associations between the words, hierarchy of the page, and the website to figure out the experience of the page vs. the specific words on the page.

By looking at what the words mean in relation to the headers, the sentences above and below, and the code of the page, like schema, SEO moved away from keywords to what the user will learn from the experience on the page. At the same time, proactive SEOs focused more heavily on vectors and entities; neither of these is a new topic.

Back in the mid-2000s, article spinners helped to generate hundreds of keyword-focused pages quickly and easily. With them, you created a spintax (similar to prompts for large language models, or LLMs, like ChatGPT and Perplexity) with macros for words to be replaced, and the software would generate “original” pieces of content. These could then be launched en masse, similar to “programmatic SEO,” which is not new and never a smart idea.
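
For readers who never saw one, here is a minimal, illustrative spintax expander in Python; the {option|option} syntax shown was the common convention, and repeated substitution handles nested groups:

import random
import re

# Match the innermost {a|b|c} group (no braces inside).
SPIN_RE = re.compile(r"\{([^{}]+)\}")

def spin(text):
    """Replace each {a|b|c} group with a random option until none remain."""
    while SPIN_RE.search(text):
        text = SPIN_RE.sub(lambda m: random.choice(m.group(1).split("|")), text)
    return text

template = "{Buy|Purchase|Order} the {best|top|finest} widgets {today|now}!"
for _ in range(3):
    print(spin(template))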

Google and other search engines would surface these and rank the sites until they got caught. Panda did a great job of finding article-spinner pages and began devaluing and penalizing sites using this technique of mass content creation.

Shortly after, website owners began using PHP with merchant data feeds to create shopping pages for specific products and product groups. This is similar to how media companies produce shopping listicles and product comparisons en masse. The content is unique and original (for that site), but is also being produced en masse, which usually means little to no value. This includes human-written content that is then used for comparisons, even when a user chooses to compare two products. In this situation, you’ll want to use canonical links and meta robots properly, but that’s for a different post.

Panda and the core algorithms already had a way to detect “thin pages” from content spinning, so although these product pages worked, especially when combined with spun content or machine-created content describing the products, these sites began getting penalized and devalued.

We’re now seeing AI content being created that is technically unique and “original” via ChatGPT, Perplexity, etc., and it is working for fast traffic gains. But these same sites are getting caught, and they lose that traffic when they do. It is the same exact pattern as article spinning and PHP + data feed shopping lists and pages.

I could see an argument being made for “fan-out” queries and why having pages focused on specific keywords makes sense. Fan-out queries are AI results that automate “People Also Ask,” “things to know,” and other continuation-rich results in a single output, vs. having separate search features.

If an SEO has experience with actual SEO best practices and knows about UX, they’ll know that the fan-out query is using the context and solutions provided on the pages, not multiple pages focused on similar keywords.

This would be the equivalent of building a unique page for each People Also Ask query or adding them as FAQs on the page. This is not a good UX, and Google knows you’re spamming/overoptimizing. It may work, but when you get caught, you’re in a worse position than when you started.

Each page should have a unique solution, not a unique keyword. When the content is focused on the solution, that solution becomes the keyword phrases, and the same page can show up for multiple different phrases, including different variations in the fan-out result.

If the goal is to get traffic and make money quickly, then abandon or sell the domain, more content is a good strategy. But you won’t have a reliable or long-term income and will always be chasing the next thing.

Evergreen And Non-Evergreen High-Quality Content

Focusing on quality content that provides value to an end user is better for long-term success than high volumes of content. The person will learn from the article, and the content tends to be trustworthy. This type of content is what gets backlinks naturally from high-authority and topically relevant websites.

More importantly, each page on the website will have a clear intent. With sites that focus on volume vs. quality, a lot of the posts and pages will look similar as they’re focused on similar keywords, and users won’t know which article provides the actual solution. This is a bad UX. Or the topics jump around, where one page is about the best perfumes and another is about harnesses for dogs. The trust in the quality of the content is diminished because the site can’t be an expert in everything. And it is clear the content is made up by machines, i.e., fake.

Not all of the content needs to be evergreen, either. Company news and consumer trends happen, and people want timely information mixed in with evergreen topics. For product releases, an archive and list of all releases can be helpful.

Fashion sites can easily do the trends from that season. The content is outdated when the next season starts, but the coverage of the trends is something people will look back on and source or use as a reference. This includes fashion students sourcing content for classes, designers looking for inspiration from the past, and mass media covering when things trended and need a reference point.

When evergreen content begins to slide, you can always refresh it. Look back and see what has changed or advanced since the last update, and see how you can improve on it.

  • Look for customer service questions that are not answered.
  • Add updated software features or new colors.
  • See if there are examples that could be made better or clearer.
  • If new regulations are passed at the local, state, or federal level, add them in so the content is accurate.
  • Delete content that is outdated, or label it as no longer relevant with the reasons why.
  • Look for sections that may have seemed relevant to the topic, but actually weren’t, and remove them so the content becomes stronger.

There is no shortage of ways to refresh evergreen content and improve on it. These are the pillar pages that can bring consistent traffic over the long run and keep business strong, while the non-evergreen pages do their part, creating ebbs and flows of traffic. With some projects, we don’t produce new content for a month or two at a time because the pillar pages need to be refreshed, and the clients still do well with traffic.

Creating mass amounts of content is a good strategy for people who want to make money fast and do not plan on keeping the domain for a long time. It is good for churn-and-burn sites, domains you rent (if the owner is ok with it), and testing projects. When your goal is to build a sustainable business, high-authority content that provides value is the way to go.

You don’t need to worry about the amount of content with this strategy; you focus on the user experience. When you do this, most channels can grow, including email/SMS, social media, PR, branding, and SEO.


The Download: computing’s bright young minds, and cleaning up satellite streaks

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet tomorrow’s rising stars of computing

Each year, MIT Technology Review honors 35 outstanding people under the age of 35 who are driving scientific progress and solving tough problems in their fields.

Today we want to introduce you to the computing innovators on the list who are coming up with new AI chips and specialized datasets—along with smart ideas about how to assess advanced systems for safety.

Check out the full list of honorees—including our innovator of the year—here.

Job titles of the future: Satellite streak astronomer

Earlier this year, the $800 million Vera Rubin Observatory commenced its decade-long quest to create an extremely detailed time-lapse movie of the universe.

Rubin is capable of capturing many more stars than any other astronomical observatory ever built; it also sees many more satellites. Up to 40% of images captured by the observatory within its first 10 years of operation will be marred by their sunlight-reflecting streaks.

Meredith Rawls, a research scientist at the telescope’s flagship observation project, Vera Rubin’s Legacy Survey of Space and Time, is one of the experts tasked with protecting Rubin’s science mission from the satellite blight. Read the full story.

—Tereza Pultarova

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China has accused Nvidia of violating anti-monopoly laws
As US and Chinese officials head into a second day of tariff negotiations. (Bloomberg $)
+ The investigation dug into Nvidia’s 2020 acquisition of computing firm Mellanox. (CNBC)
+ But China’s antitrust regulator hasn’t confirmed whether it will punish the company. (WSJ $)

2 The US is getting closer to making a TikTok deal
But it’s still prepared to go ahead with a ban if an agreement can’t be reached. (Reuters)

3 Grok spread misinformation about a far-right rally in London
It falsely claimed that police misrepresented old footage as being from the protest. (The Guardian)
+ Elon Musk called for a new UK government during a video speech. (Politico)

4 Here’s what people are really using ChatGPT for
Users are more likely to use it for personal, rather than work-related queries. (WP $)
+ Anthropic says businesses are using AI to automate, not collaborate. (Bloomberg $)
+ Therapists are secretly using ChatGPT. Clients are triggered. (MIT Technology Review)

5 How China’s Hangzhou became a global AI hub
Spawning not just Alibaba, but DeepSeek too. (WSJ $)
+ China and the US are completely dominating the global AI race. (Rest of World)
+ How DeepSeek ripped up the AI playbook. (MIT Technology Review)

6 Driverless car fleets could plunge US cities into traffic chaos
Are we really prepared? (Vox $)

7 The shipping industry is harnessing AI to fight cargo fires
The risk of deadly fires is rising due to shipments of batteries and other flammable goods. (FT $)

8 Sales of used EVs are sky-rocketing
Buyers are snapping up previously owned bargains. (NYT $)
+ EV owners won’t be able to drive in carpool lanes any more. (Wired $)

9 A table-top fusion reactor isn’t as crazy as it sounds
This startup is trying to make compact reactors a reality. (Economist $)
+ Inside a fusion energy facility. (MIT Technology Review)

10 How a magnetic field could help clean up space
If we don’t, we could soon lose access to Earth’s low orbit altogether. (IEEE Spectrum)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)

Quote of the day

“If we’re going on a journey, they’re absolutely taking travel sickness tablets immediately. They’re not even considering coming in the car without them.”

—Phil Bellamy, an electric car owner, tells the Guardian about the extreme nausea his daughters experience while riding in his vehicle.

One more thing

Google, Amazon and the problem with Big Tech’s climate claims

Last year, Amazon trumpeted that it had purchased enough clean electricity to cover the energy demands of all its global operations, seven years ahead of its sustainability target.

That news closely followed Google’s acknowledgment that the soaring energy demands of its AI operations helped ratchet up its corporate emissions by 13% last year—and that it had backed away from claims that it was already carbon neutral.

If you were to take the announcements at face value, you’d be forgiven for believing that Google is stumbling while Amazon is speeding ahead in the race to clean up climate pollution.

But while both companies are coming up short in their own ways, Google’s approach to driving down greenhouse-gas emissions is now arguably more defensible. To learn why, read our story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Steven Spielberg was just 26 when he made Jaws? The more you know.
+ This tiny car’s huge racing track journey is completely hypnotic.
+ Easy dinner recipes? Yes please.
+ This archive of thousands of historical children’s books is a real treasure trove—and completely free to read.

Did Google Just Prevent Rank Tracking?

Google’s default search results list 10 organic listings per page. Yet adding &num=100 to the search result URL would show 100 listings, not 10. It was one of Google’s many specialized search “operators” — until now.
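
For example, a results URL like this one returned 100 listings in a single page load before the change:

https://www.google.com/search?q=widgets&num=100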

This week, Google dropped support for the &num=100 parameter. It’s a telling move. Many search pros speculate the aim is to restrict AI bots that use the parameter to perform so-called fan-out searches. The collateral damage falls on search engine ranking tools, which have long used the parameter to scrape results for keywords. Many of those tools no longer function, at least for now.

Surprisingly, the move affected Performance data in Search Console. Most website owners now see increases in average positions and declines in the number of impressions.


Search Console

Google has provided no explanation. Presumably the changes in Performance data stem from third-party bots scraping results to track rankings, not from humans. That is the unexpected, huge takeaway: Search Console data at least partially included bot activity.

In other words, the lost “Impressions” were URLs shown to bot scrapers, not human searchers. The “Average Position” metric is closely tied to “Impressions,” as Search Console records the topmost position at which a URL appears to a searcher. With bot “searchers” gone, impressions decline and average positions rise.

Thus organic performance data in Search Console now contains more human impressions and fewer bot-driven ones. The data better reflects actual consumers viewing the listings.

The data remains skewed for top-ranking URLs because page 1 of search results is still accessible to bots, although I know of no way to quantify bot searches versus those of humans.

Adios Rank Tracking?

Scraping search results requires considerable computing time and energy. Third-party tools will likely raise their prices because, from now on, their bots must “click” through nine more pages to reach 100 listings.

Tim Soulo, CMO of Ahrefs, a top SEO platform, hinted today on LinkedIn that the tool would likely report rankings on only the first two pages to remain financially sustainable.

So the future of SEO rank tracking is unclear. Likely, tracking organic search positions will become more expensive and produce fewer results (only the top two pages).

What to Do?

  • Wait for the Performance section in Search Console to stabilize.
  • Consider SEO platforms that integrate with Search Console. For example, SEO Testing allows customers to import and archive the Performance data and annotate industry updates (such as Google’s &num=100 move) for traffic or rankings impact.
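
If you’d rather script the export yourself, here is a minimal sketch using the Search Console API via google-api-python-client; the service-account file, property URL, and dates are placeholders, and the service account must be added as a user on the property:

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials; grant this service account access in Search Console.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

report = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2025-08-01",
        "endDate": "2025-09-14",
        "dimensions": ["query"],
        "rowLimit": 1000,
    },
).execute()

# Archive impressions and average position per query for later comparison.
for row in report.get("rows", []):
    print(row["keys"][0], row["impressions"], round(row["position"], 1))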

To be sure, rank tracking is becoming obsolete. But monitoring organic search positions remains essential for keyword gap analysis and content ideas, among other SEO tasks.