Synthetic Personas For Better Prompt Tracking via @sejournal, @Kevin_Indig

We all know prompt tracking is directional. The most effective way to reduce noise is to track prompts based on personas.

This week, I’m covering:

  • Why AI personalization makes traditional “track the SERP” models incomplete, and how synthetic personas fill the gap.
  • The Stanford validation data showing 85% accuracy at one-third the cost, and how Bain cut research time by 50-70%.
  • The five-field persona card structure and how to generate 15-30 trackable prompts per segment across intent levels.
The best way to make your prompt tracking much more accurate is to base it on personas. Synthetic personas speed you up at a fraction of the price. (Image Credit: Kevin Indig)

A big difference between classic and AI search is that the latter delivers highly personalized results.

  • Every user gets different answers based on their context, history, and inferred intent.
  • The average AI prompt is ~5x longer than classic search keywords (23 words vs. 4.2 words), conveying much richer intent signals that AI models use for personalization.
  • Personalization creates a tracking problem: You can’t monitor “the” AI response anymore because each prompt is essentially unique, shaped by individual user context.

Traditional persona research solves this – you map different user segments and track responses for each – but it creates new problems. It takes weeks to conduct interviews and synthesize findings.

By the time you finish, the AI models have changed. Personas become stale documentation that never gets used for actual prompt tracking.

Synthetic personas fill the gap by building user profiles from behavioral and profiling data: analytics, CRM records, support tickets, review sites. You can spin up hundreds of micro-segment variants and interact with them in natural language to test how they’d phrase questions.

Most importantly: They are the key to more accurate prompt tracking because they simulate actual information needs and constraints.

The shift: Traditional personas are descriptive (who the user is), synthetic personas are predictive (how the user behaves). One documents a segment, the other simulates it.

Image Credit: Kevin Indig

Example: Enterprise IT buyer persona with job-to-be-done “evaluate security compliance” and constraint “need audit trail for procurement” will prompt differently than an individual user with the job “find cheapest option” and constraint “need decision in 24 hours.”

  • First prompt: “enterprise project management tools SOC 2 compliance audit logs.”
  • Second prompt: “best free project management app.”
  • Same product category, completely different prompts. You need both personas to track both prompt patterns.

Build Personas With 85% Accuracy For One-Third Of The Price

Stanford and Google DeepMind trained synthetic personas on two-hour interview transcripts, then tested whether the AI personas could predict how those same real people would answer survey questions later.

  • The method: Researchers conducted follow-up surveys with the original interview participants, asking them new questions. The synthetic personas answered the same questions.
  • Result: 85% accuracy. The synthetic personas replicated what the actual study participants said.
  • For context, that’s comparable to human test-retest consistency. If you ask the same person the same question two weeks apart, they’re about 85% consistent with themselves.

The Stanford study also measured how well synthetic personas predicted social behavior patterns in controlled experiments – things like who would cooperate in trust games, who would follow social norms, and who would share resources fairly.

The correlation between synthetic persona predictions and actual participant behavior was 98%. This means the AI personas didn’t just memorize interview answers; they captured underlying behavioral tendencies that predicted how people would act in new situations.

Bain & Company ran a separate pilot that showed comparable insight quality at one-third the cost and one-half the time of traditional research methods. Their findings: 50-70% time reduction (days instead of weeks) and 60-70% cost savings (no recruiting fees, incentives, transcription services).

The catch: These results depend entirely on input data quality. The Stanford study used rich, two-hour interview transcripts. If you train on shallow data (just pageviews or basic demographics), you get shallow personas. Garbage in, garbage out.

How To Build Synthetic Personas For Better Prompt Tracking

Building a synthetic persona has three parts:

  1. Feed it with data from multiple sources about your real users: call transcripts, interviews, message logs, organic search data.
  2. Fill out the Persona Card – the five fields that capture how someone thinks and searches.
  3. Add metadata to track the persona’s quality and when it needs updating.

The mistake most teams make: trying to build personas from prompts. This is circular logic – you need personas to understand what prompts to track, but you’re using prompts to build personas. Instead, start with user information needs, then let the persona translate those needs into likely prompts.

Data Sources To Feed Synthetic Personas

The goal is to understand what users are trying to accomplish and the language they naturally use:

  1. Support tickets and community forums: Exact language customers use when describing problems. Unfiltered, high-intent signal.
  2. CRM and sales call transcripts: Questions they ask, objections they raise, use cases that close deals. Shows the decision-making process.
  3. Customer interviews and surveys: Direct voice-of-customer on information needs and research behavior.
  4. Review sites (G2, Trustpilot, etc.): What they wish they’d known before buying. Gap between expectation and reality.
  5. Search Console query data: Questions they ask Google. Use regex to filter for question-type queries:
    (?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|lists?|comparison|vs|difference|benefits|advantages|alternatives)\b.*

    (I like to use the last 28 days, segmented by target country. A quick filtering sketch in Python follows below.)
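
Here is a minimal sketch of that filter in Python, assuming the queries are exported to a CSV with a query column (the file name and column name are illustrative):

    import csv
    import re

    # The question/intent pattern from above, with the word-boundary anchor (\b).
    PATTERN = re.compile(
        r"(?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|"
        r"tutorial|course|learn|examples?|definition|meaning|checklist|framework|"
        r"template|tips?|ideas?|best|top|lists?|comparison|vs|difference|"
        r"benefits|advantages|alternatives)\b.*"
    )

    # Keep only question-type queries from a Search Console export.
    with open("gsc_queries.csv", newline="") as f:
        questions = [row["query"] for row in csv.DictReader(f) if PATTERN.match(row["query"])]

    print(f"{len(questions)} question-type queries found")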

Persona card structure (five fields only – more creates maintenance debt):

These five fields capture everything needed to simulate how someone would prompt an AI system. They’re minimal by design. You can always add more later, but starting simple keeps personas maintainable. (A code sketch of the card follows the list below.)

  1. Job-to-be-done: What’s the real-world task they’re trying to accomplish? Not “learn about X” but “decide whether to buy X” or “fix problem Y.”
  2. Constraints: What are their time pressures, risk tolerance levels, compliance requirements, budget limits, and tooling restrictions? These shape how they search and what proof they need.
  3. Success metric: How do they judge “good enough”? Executives want directional confidence. Engineers want reproducible specifics.
  4. Decision criteria: What proof, structure, and level of detail do they require before they trust information and act on it?
  5. Vocabulary: What are the terms and phrases they naturally use? Not “churn mitigation” but “keeping customers.” Not “UX optimization” but “making the site easier to use.”
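
To make the structure concrete, here is a minimal sketch of a persona card as a Python dataclass, plus a helper that turns the card into candidate prompts across intent levels. The field values and prompt templates are illustrative, not a prescribed schema:

    from dataclasses import dataclass

    @dataclass
    class PersonaCard:
        job_to_be_done: str      # real-world task, e.g., "evaluate security compliance"
        constraints: str         # time pressure, budget, compliance requirements
        success_metric: str      # how they judge "good enough"
        decision_criteria: str   # proof and detail needed before they act
        vocabulary: list[str]    # terms they naturally use

    def candidate_prompts(card: PersonaCard) -> list[str]:
        # Illustrative templates spanning informational to transactional intent;
        # the last one is keyword-style, like the enterprise example above.
        return [
            f"what is {card.vocabulary[0]}",
            f"how to {card.job_to_be_done}",
            f"best tools for {card.job_to_be_done} {' '.join(card.vocabulary)}",
        ]

    enterprise_buyer = PersonaCard(
        job_to_be_done="evaluate security compliance",
        constraints="need an audit trail for procurement",
        success_metric="passes a SOC 2 checklist review",
        decision_criteria="vendor documentation and third-party audits",
        vocabulary=["SOC 2 compliance", "audit logs"],
    )

    for prompt in candidate_prompts(enterprise_buyer):
        print(prompt)

Expanding the template list to 15-30 prompts per persona gives you a trackable prompt set per segment.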

Specification Requirements

This is the metadata that makes synthetic personas trustworthy; it prevents the “black box” problem.

When someone questions a persona’s outputs, you can trace back to the evidence.

These requirements form the backbone of continuous persona development. They keep track of changes, sources, and confidence weightings. (A sketch of this metadata as a simple record follows the list below.)

  • Provenance: Which data sources, date ranges, and sample sizes were used (e.g., “Q3 2024 Support Tickets + G2 Reviews”).
  • Confidence score per field: A High/Medium/Low rating for each of the five Persona Card fields, backed by evidence counts. (e.g., “Decision Criteria: HIGH confidence, based on 47 sales calls vs. Vocabulary: LOW confidence, based on 3 internal emails”).
  • Coverage notes: Explicitly state what the data misses (e.g., “Overrepresents enterprise buyers, completely misses users who churned before contacting support”).
  • Validation benchmarks: Three to five reality checks against known business truths to spot hallucinations. (e.g., “If the persona claims ‘price’ is the top constraint, does that match our actual deal cycle data?”).
  • Regeneration triggers: Pre-defined signals that it’s time to re-run the script and refresh the persona (e.g., a new competitor enters the market, or vocabulary in support tickets shifts significantly).
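
As a sketch, this metadata can live alongside the card as a simple record. The field names and values below are illustrative, mirroring the examples above rather than a required schema:

    persona_metadata = {
        "provenance": "Q3 2024 support tickets + G2 reviews",
        "confidence": {  # High/Medium/Low per Persona Card field, with evidence counts
            "decision_criteria": ("HIGH", 47),  # e.g., based on 47 sales calls
            "vocabulary": ("LOW", 3),           # e.g., based on 3 internal emails
        },
        "coverage_notes": "Overrepresents enterprise buyers; misses pre-support churners",
        "validation_benchmarks": [
            "If 'price' is the top constraint, does our deal cycle data agree?",
        ],
        "regeneration_triggers": [
            "new competitor enters the market",
            "support-ticket vocabulary shifts significantly",
        ],
    }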

Where Synthetic Personas Work Best

Before you build synthetic personas, understand where they add value and where they fall short.

High-Value Use Cases

  • Prompt design for AI tracking: Simulate how different user segments would phrase questions to AI search engines (the core use case covered in this article).
  • Early-stage concept testing: Test 20 messaging variations, narrow to the top five before spending money on real research.
  • Micro-segment exploration: Understand behavior across dozens of different user job functions (enterprise admin vs. individual contributor vs. executive buyer) or use cases without interviewing each one.
  • Hard-to-reach segments: Test ideas with executive buyers or technical evaluators without needing their time.
  • Continuous iteration: Update personas as new support tickets, reviews, and sales calls come in.

Crucial Limitations Of Synthetic Personas You Need To Understand

  • Sycophancy bias: AI personas are overly positive. Real users say, “I started the course but didn’t finish.” Synthetic personas say, “I completed the course.” They want to please.
  • Missing friction: They’re more rational and consistent than real people. If your training data includes support tickets describing frustrations or reviews mentioning pain points, the persona can reference these patterns when asked – it just won’t spontaneously experience new friction you haven’t seen before.
  • Shallow prioritization: Ask what matters, and they’ll list 10 factors as equally important. Real users have a clear hierarchy (price matters 10x more than UI color).
  • Inherited bias: Training data biases flow through. If your CRM underrepresents small business buyers, your personas will too.
  • False confidence risk: The biggest danger. Synthetic personas always have coherent answers. This makes teams overconfident and skip real validation.

Operating rule: Use synthetic personas for exploration and filtering, not for final decisions. They narrow your option set. Real users make the final call.

Solving The Cold Start Problem For Prompt Tracking

Synthetic personas are a filter tool, not a decision tool. They narrow your option set from 20 ideas to five finalists. Then, you validate those five with real users before shipping.

For AI prompt tracking specifically, synthetic personas solve the cold-start problem. You can’t wait to accumulate six months of real prompt volume before you start optimizing. Synthetic personas let you simulate prompt behavior across user segments immediately, then refine as real data comes in.

They’ll cause you to fail if you use them as an excuse to skip real validation. Teams love synthetic personas because they’re fast and always give answers. That’s also what makes them dangerous. Don’t skip the validation step with real customers.


Featured Image: Paulo Bobita/Search Engine Journal

Google Can Now Monitor Search For Your Government IDs via @sejournal, @MattGSouthern
  • Google’s “Results about you” tool now lets you find and request removal of search results containing government-issued IDs.
  • This includes IDs like passports, driver’s licenses, and Social Security numbers.
  • The expansion is rolling out in the U.S. over the coming days, with additional regions planned.

Google’s Results about you tool now monitors Search results for government-issued IDs like passports, driver’s licenses, and Social Security numbers.

Should I Optimize My Content Differently For Each Platform? – Ask An SEO via @sejournal, @rollerblader

This week’s Ask an SEO question is from an anonymous reader who asks:

“Should I be optimizing content differently for LinkedIn, Reddit, and traditional search engines? I’m seeing these platforms rank highly in Google results, but I’m not sure how to create a cohesive multi-platform SEO approach.”

Yes, you should absolutely be optimizing your content differently based on where you publish it, where you want to reach the audience, and the way they engage. This includes what you put out, what goes on your website, and what exists in your metadata. Each platform has a different user experience, and the people there go for different reasons, so your job with your content is to meet their needs where they are.

Metadata

For SEO purposes, you’re limited to a certain number of pixels for meta titles and descriptions in a search result, whereas on social media platforms, you’re limited to a different number of characters. This means your titles and descriptions need to be modified to fit the pixel or character lengths defined by the platforms, including Open Graph, rich pins, etc. The people on the platform may also be at different stages of their journey and represent different audience demographics.

If a platform has its own metadata elements and its audience is younger or skews toward one gender, cater the text and imagery in your metadata to them. It’s worth seeing if that resonates better, but only if that demographic is the majority on that platform. For search engines, it can be anyone and any demographic, so make it a strong sales pitch that is all-inclusive. Use your customer service and review data to find out what matters to them and use it in your messaging. The same goes for the images you use.

What fits on Pinterest won’t look good on LinkedIn, and an image for Google Discover may not work great on Instagram. Pinterest can display a vertical infographic and make it look great, but it will be illegible on platforms that have squares and landscape-oriented images. Resize, change the wording, and ensure the focal point on the image matches the platform it’ll be used on via your metadata.

Search engines and social algorithms look for different things as well. A search engine may allow some clickbait and salesy types of titles and metadata, but social media algorithms may penalize sites that do this. And each platform will be using and looking for different signals.

This is why you want to speak to the audiences on the platforms and focus on what the platform rewards, not just a search engine. Your customers on TikTok may be younger and use different wording than your customers on Facebook, but your webpage needs to balance both. This is where using unique metadata by platform and purpose matters.

Content On Your Own Pages

Not every page on your website has to be for SEO, AIO, or GEO, and neither does the user experience. If the page is for an email blast or remarketing where you have strong calls to action, less text, and more conversion, you can noindex it or use a canonical link to the detailed new customer experience page. The same goes for SEO vs. social media visitors.

Someone from social media may need more of an education when buying a product because they didn’t set out that day to buy it; they were on social media to have fun. Someone looking for a product, product + review, or a comparison has a background on the product and wants a solution, so they went to a search engine to find one. This is where an educational vs. a conversion option can happen, and both can exist without competing, even though they’re optimized for the same keyword phrase.

The schema, the way wording is used, and elements on the page like an “add to cart” button above the fold help search engines know the page is for conversions, while an H1, H2, and text with internal links to product and content pages signal it’s for educational purposes. Now apply this to the goal of what you want the person to do on the page, and keep in mind where they are coming from before they reach it.

You may want a more visual approach with video demonstrations or reviews, and options to shop and learn more for some experiences, vs. giving them the product and a buy now button. Both are optimized for the same keywords, but both are there for different visitors. This is where you deduplicate them using your SEO skills.

The keywords and phrases in your title tag and H1 tag will be similar and compete directly against the product or collection page, but this page serves how people from Snapchat and Reddit engage vs. someone from an email blast who already knows how your brand behaves. So, set the canonical link to the main product page and/or add a meta robots noindex,nofollow (example markup below). When you’re pushing your content out, share the version of the page to the platform it is designed for. Your site structure and robots.txt guide the search engines and AI to the pages meant for them, helping to eliminate the cannibalization.
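
For example, the head of the social-optimized variant might carry markup like this (illustrative only; the URL is a placeholder):

    <link rel="canonical" href="https://example.com/products/blue-t-shirt">
    <meta name="robots" content="noindex,nofollow">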

It is the same content, the same purpose, and the same goal, just a unique format for the platform you want traffic from. I wouldn’t recommend this for everything because it is a ton of work, but for important pages, products, and services, it can make a difference to provide a better UX based on what the person and platform prefer.

What You Post To The Platform

Last is the content you post to the platform. Some allow hashtags; others prefer a lot of words, and platforms like X or Bluesky restrict the number of characters you can use unless you pay. The audiences on these platforms pay attention to and use different words, and the algorithms may reward or penalize content differently.

On LinkedIn and Reddit, you may want to share a portion of the post and a summary of what the person will learn, then encourage engagement and a click through to your website or app. On Facebook, you may do a snippet of text and a stronger call-to-action, as people aren’t there for networking and learning like they are on LinkedIn.

Reddit may also benefit from examples and trust builders, while YouTube Shorts is about a quick message that entices an interaction and ideally a click through. The written description on a YouTube Short may go ignored as it is hidden, so the video carries more of the message. Reddit can also attract people looking for real human experiences, reviews, and comparisons from real customers. So, if you engage and publish your content, look at the topic of the forum and meet the user on the page at that specific stage of their journey.

The description still matters on most of these platforms because they are algorithm-based, and so are the search engines that feature their content. The content here acts like food for the algorithms along with user signals, so make sure you write something that properly matches the video’s content and follows best practices. If you’re publishing to Medium or Reddit and want to capture the comparison queries, focus on unbiased and fair comparisons or reviews so Google surfaces them (disclosing that you are one of the brands, if you are). Then focus your own pages on conversion copy so that when a person is ready to buy a blue t-shirt, they see your conversion page.

You should change the content based on the platform, and even on your own website, when the goal is to bring users in from a specific traffic source. Someone from social media may like a video, while someone from a search engine wants text. Just make sure you code and structure your pages correctly and cater the experience to each platform so users land on the right version.

This is not practical for every page, so do your best, and at a minimum, customize what you share publicly and what is in your metadata. Those changes are fast enough to be done at scale; then pay attention to the UX on the page and make adjustments as needed.

Featured Image: Paulo Bobita/Search Engine Journal

New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough via @sejournal, @martinibuster

New data based on real-world web pages demonstrates that Googlebot’s crawl limit of two megabytes is more than adequate. New SEO tools provide an easy way to check how much the HTML of a web page weighs.

Data Shows 2 Megabytes Is Plenty

Raw HTML is basically just a text file. For a text file to reach two megabytes, it would require over two million characters.
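
If you want to check a page yourself, here is a minimal sketch using Python’s requests library (the URL is a placeholder); it simply measures the raw HTML response against the limit:

    import requests

    LIMIT = 2 * 1024 * 1024  # Googlebot's reported 2 MB cutoff, in bytes

    response = requests.get("https://example.com/")
    html_bytes = len(response.content)  # raw HTML only, not linked JS/CSS files

    print(f"{html_bytes / 1024:.1f} KB ({html_bytes / LIMIT:.1%} of the 2 MB limit)")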

The HTTPArchive explains what’s in the HTML weight measurement:

“HTML bytes refers to the pure textual weight of all the markup on the page. Typically it will include the document definition and commonly used on page tags such as <head> or <body>. However it also contains inline elements such as the contents of script tags or styling added to other tags. This can rapidly lead to bloating of the HTML doc.”

That is the same thing that Googlebot is downloading as HTML: just the on-page markup, not the external JavaScript or CSS files it links to.

According to the HTTPArchive’s latest report, the real-world median size of raw HTML is 33 kilobytes. The heaviest page weight at the 90th percentile is 155 kilobytes, meaning that the HTML for 90% of sites is at or below roughly 155 kilobytes. Only at the 100th percentile does the size of HTML explode to way beyond two megabytes, which means that pages weighing two megabytes or more are extreme outliers.

The HTTPArchive report explains:

“HTML size remained uniform between device types for the 10th and 25th percentiles. Starting at the 50th percentile, desktop HTML was slightly larger.

Not until the 100th percentile is a meaningful difference when desktop reached 401.6 MB and mobile came in at 389.2 MB.”

The data separates the home page measurements from the inner page measurements and surprisingly shows that there is little difference between the weights of either. The report explains:

“There is little disparity between inner pages and the home page for HTML size, only really becoming apparent at the 75th and above percentile.

At the 100th percentile, the disparity is significant. Inner page HTML reached an astounding 624.4 MB—375% larger than home page HTML at 166.5 MB.”

Mobile And Desktop HTML Sizes Are Similar

Interestingly, the page sizes between mobile and desktop versions were remarkably similar, regardless of whether HTTPArchive was measuring the home page or one of the inner pages.

HTTPArchive explains:

“The size difference between mobile and desktop is extremely minor, this implies that most websites are serving the same page to both mobile and desktop users.

This approach dramatically reduces the amount of maintenance for developers but does mean that overall page weight is likely to be higher as effectively two versions of the site are deployed into one page.”

Though the overall page weight might be higher since the mobile and desktop HTML exists simultaneously in the code, as noted earlier, the actual weight is still far below the two-megabyte threshold all the way up until the 100th percentile.

Given that it takes about two million characters to push the website HTML to two megabytes and that the HTTPArchive data based on actual websites shows that the vast majority of sites are well under Googlebot’s 2 MB limit, it’s safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.

Tame The Bots

Dave Smart of Tame The Bots recently posted that they updated their tool so that it now stops crawling at the two-megabyte limit, showing sites that are extreme outliers at what point Googlebot would stop crawling a page.

Smart posted:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to tamethebots.com/tools/fetch- to cap text based files to 2 MB to simulate this.”

Screenshot Of Tame The Bots Interface

The tool will show what the page will look like to Google if the crawl is limited to two megabytes of HTML. But it doesn’t show whether the tested page exceeds two megabytes, nor does it show how much the web page weighs. For that, there are other tools.

Tools That Check Web Page Size

There are a few tool sites that show the HTML size; here are two that focus on just the web page size. I tested the same page on each tool, and both showed roughly the same page weight, give or take a few kilobytes.

Toolsaday Web Page Size Checker

The interestingly named Toolsaday web page size checker enables users to test one URL at a time. This specific tool just does the one thing, making it easy to get a quick reading of how much a web page weighs in kilobytes (or higher if the page is in the 100th percentile).

Screenshot Of Toolsaday Test Results

Small SEO Tools Website Page Size Checker

The Small SEO Tools Website Page Size Checker differs from the Toolsaday tool in that Small SEO Tools enables users to test ten URLs at a time.

Not Something To Worry About

The bottom line about the two megabyte Googlebot crawl limit is that it’s not something the average SEO needs to worry about. It literally affects a very small percentage of outliers. But if it makes you feel better, give one of the above SEO tools a try to reassure yourself or your clients.

Featured Image by Shutterstock/Fathur Kiwon

Making AI Work, MIT Technology Review’s new AI newsletter, is here

For years, our newsroom has explored AI’s limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments. 

But how is AI actually being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter.

Sign up for Making AI Work to see weekly case studies exploring tools and tips for AI implementation. The limited-run newsletter will deliver practical, industry-specific guidance on how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work. The goal is to help working professionals more clearly see how AI is actually being used today, and what that looks like in practice—including new challenges it presents. 

You can sign up at any time and you’ll receive seven editions, delivered once per week, until you complete the series. 

Each newsletter begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool. 

Here’s a closer look at what we’ll cover:

  • Week 1: How AI is changing health care 

Explore the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. 

  • Week 2: How AI could power up the nuclear industry 

Dig into an experiment between Google and the nuclear giant Westinghouse to see if AI can help build nuclear reactors more efficiently. 

  • Week 3: How to encourage smarter AI use in the classroom

Visit a private high school in Connecticut and meet a technology coordinator who will get you up to speed on MagicSchool, an AI-powered platform for educators. 

  • Week 4: How small businesses can leverage AI

Hear from an independent tutor on how he’s outsourcing basic administrative tasks to Notion AI. 

  • Week 5: How AI is helping financial firms make better investments

Learn more about the ways financial firms are using large language models like ChatGPT Enterprise to supercharge their research operations. 

  • Week 6: How to use AI yourself 

We’ll share some insights from the staff of MIT Technology Review about how you might use AI tools powered by LLMs in your own life and work.

  • Week 7: 5 ways people are getting AI right

The series ends with an on-demand virtual event featuring expert guests exploring what AI adoptions are working, and why.  

If you’re not quite ready to jump into Making AI Work, then check out Intro to AI, MIT Technology Review’s first AI newsletter mini-course, which serves as a beginner’s guide to artificial intelligence. Readers will learn the basics of what AI is, how it’s used, what the current regulatory landscape looks like, and more. Sign up to receive Intro to AI for free. 

Our hope is that Making AI Work will help you understand how AI can, well, work for you. Sign up for Making AI Work to learn how LLMs are being put to work across industries. 

The Download: what Moltbook tells us about AI hype, and the rise and rise of AI therapy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Moltbook was peak AI theater

For a few days recently, the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28, Moltbook went viral in a matter of hours. It’s been designed as a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot) could come together and do whatever they wanted.

But is Moltbook really a glimpse of the future, as many have claimed? Or something else entirely? Read the full story.

—Will Douglas Heaven

The ascent of the AI therapist

We’re in the midst of a global mental-­health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots, or from specialized psychology apps like Wysa and Woebot.

Four timely new books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. Read the full story.

—Becky Ferreira

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

Making AI Work, MIT Technology Review’s new AI newsletter, is here

For years, our newsroom has explored AI’s limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments.

But how is AI actually being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter. Read more about it, and sign up here to receive the seven editions straight to your inbox.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is failing to punish polluters
The number of civil lawsuits it’s pursuing has sharply dropped in comparison to Trump’s first term. (Ars Technica)
+ Rising GDP = greater carbon emissions. But does it have to? (The Guardian)

2 The European Union has warned Meta against blocking rival AI assistants
It’s the latest example of Brussels’ attempts to rein in Big Tech. (Bloomberg $)

3 AI ads took over the Super Bowl
Hyping up chatbots and taking swipes at their competitors. (TechCrunch)
+ They appeared to be trying to win over AI naysayers, too. (WP $)
+ Celebrities were out in force to flog AI wares. (Slate $)

4 China wants to completely dominate the humanoid robot industry
Local governments and banks are only too happy to oblige promising startups. (WSJ $)
+ Why the humanoid workforce is running late. (MIT Technology Review)

5 We’re witnessing the first real crypto crash
Cryptocurrency is now fully part of the financial system, for better or worse. (NY Mag $)
+ Wall Street’s grasp of AI is pretty shaky too. (Semafor)
+ Even traditionally safe markets are looking pretty volatile right now. (Economist $)

6 The man who coined vibe coding has a new fixation 
“Agentic engineering” is the next big thing, apparently. (Insider $)
+ Agentic AI is the talk of the town right now. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

7 AI running app Runna has adjusted its aggressive training plans đŸƒâ€â™‚ïž
Runners had long suspected its suggestions were pushing them towards injury. (WSJ $)

8 San Francisco’s march for billionaires was a flop 
Only around three dozen supporters turned up. (SF Chronicle)
+ Predictably, journalists nearly outnumbered the demonstrators. (TechCrunch)

9 AI is shaking up romance novels ❀
But models still aren’t great at writing sex scenes. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review) 

10 ChatGPT won’t be replacing human stylists any time soon
Its menswear suggestions are more manosphere influencer than suave gentleman. (GQ)

Quote of the day

“There is no Plan B, because that assumes you will fail. We’re going to do the start-up thing until we die.”

—William Alexander, an ambitious 21-year-old AI worker, explains his and his cohort’s attitudes towards trying to make it big in the highly competitive industry to the New York Times.

One more thing

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Dark showering, anyone?
+ Chef Yujia Hu is renowned for his shoe-shaped sushi designs.
+ Meanwhile, in the depths of the South Atlantic Ocean: a giant phantom jelly has been spotted.
+ I have nothing but respect for this X account dedicated to documenting rats and mice in movies and TV 🐀🐁

Why the Moltbook frenzy was like Pokémon

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Lots of influential people in tech last week were describing Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them (one person used the platform to help him negotiate a deal on a new car). Sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?

The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon.

Back in 2014, someone set up a game of Pokémon in which the main character could be controlled by anyone on the internet via the streaming platform Twitch. Playing was as clunky as it sounds, but it was incredibly popular: at one point, a million people were playing the game at the same time.

“It was yet another weird online social experiment that got picked up by the mainstream media: What did this mean for the future?” Will says. “Not a lot, it turned out.”

The frenzy about Moltbook struck a similar tone to Will, and it turned out that one of the sources he spoke to had been thinking about Pokémon too. Jason Schloetzer, at the Georgetown Psaros Center for Financial Markets and Policy, saw the whole thing as a sort of Pokémon battle for AI enthusiasts, in which they created AI agents and deployed them to interact with other agents. In this light, the news that many AI agents were actually being instructed by people to say certain things that made them sound sentient or intelligent makes a whole lot more sense. 

“It’s basically a spectator sport,” he told Will, “but for language models.”

Will wrote an excellent piece about why Moltbook was not the glimpse into the future that it was said to be. Even if you are excited about a future of agentic AI, he points out, there are some key pieces that Moltbook made clear are still missing. It was a forum of chaos, but a genuinely helpful hive mind would require more coordination, shared objectives, and shared memory.

“More than anything else, I think Moltbook was the internet having fun,” Will says. “The biggest question that now leaves me with is: How far will people push AI just for the laughs?”

Read the whole story.

Traffic Impact of Google Discover Update

Google Discover has become a reliable traffic source for some publications. Last week, Google launched a core update to Discover in the U.S., with the global rollout coming.

Google’s Search Central blog has included “Get on Discover” guidelines since 2019, explaining its content quality requirements and traffic recovery strategies. Google revised the guidelines last week, alongside the core update.

Some requirements have not changed:

  • Titles and headlines must clearly “capture the essence of the content.”
  • Include “compelling, high-quality images,” especially those 1,200 pixels wide.
  • Address “current interests [that] tells a story well, or provides unique insights.”

Yet two requirements — clickbait avoidance and page experience — are new.

New Guidelines

Avoid clickbait

The previous guideline versions warned against “misleading or exaggerated details in preview content.” The revision moved this recommendation to the top, presumably to emphasize its importance as reflected in the core update.

The guidelines state that “clickbait” can prevent would-be readers from understanding the content and manipulate them into clicking a link.

The guidelines separately warn publishers against using “sensationalism tactics 
 by catering to morbid curiosity, titillation, or outrage.”

Page experience

“Provide a great page experience” is new, although it’s in keeping with Google’s traditional search algorithm, which rewards sites with strong user engagement.

Google collects page experience metrics from its Chrome browser and retains them only for high-traffic pages. Search Console shows no Core Web Vitals data for sites with little traffic.

Sites with 50% or more losses in Discover traffic should audit the user experience (a quick API-based check is sketched after this list):

  • In Search Console, look for URLs marked “poor” in the Core Web Vitals report.
  • Evaluate how those pages load, especially on mobile devices. The headings and body text should load first, allowing users to start reading immediately.
  • Look for elements, such as ads or pop-ups, that block the content.
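
If you prefer to script that first check, here is a sketch against Google’s PageSpeed Insights API. The runPagespeed endpoint is real, but the exact response fields shown are an assumption from memory, so verify them against the API documentation (the page URL is a placeholder):

    import requests

    API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    resp = requests.get(API, params={"url": "https://example.com/article", "strategy": "mobile"})
    resp.raise_for_status()

    # Chrome UX Report field data, available only when the page has enough traffic.
    metrics = resp.json().get("loadingExperience", {}).get("metrics", {})
    for name, data in metrics.items():
        # category is FAST, AVERAGE, or SLOW; "poor" pages typically show SLOW here.
        print(name, data.get("percentile"), data.get("category"))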

Traffic Impact

The revised guidelines do not address “topic authority,” yet Google’s announcement of Discover’s core update does:

“Since many sites demonstrate deep knowledge across a wide range of subjects, our systems are designed to identify expertise on a topic-by-topic basis.”

The focus on topical expertise suggests the update will elevate niche, authoritative sites.

Finally, the announcement states that Discover will show more local and personalized content.

Nonetheless, most ecommerce blogs have modest Discover traffic and will therefore experience little (if any) impact from the core update. Still, keep an eye on the Discover section in Search Console; switch to “weekly” stats for a current overview.

Screenshot of the Discover section in Search Console

In Search Console’s Discover section, switch to “weekly” stats for a current overview.

Bing Webmaster Tools Adds AI Citation Performance Data via @sejournal, @MattGSouthern

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving visibility into how content gets cited across Copilot and AI-generated answers in Bing.

The feature, now in public preview, shows citation counts, page-level activity, and trends over time. It covers AI experiences across Copilot, AI summaries in Bing, and select partner integrations.

Microsoft announced the feature on the Bing Webmaster Blog.

What’s New

The AI Performance dashboard provides four core metrics.

Total citations tracks how often your content appears as a source in AI-generated answers during a selected time period. Average cited pages shows the daily average of unique URLs from your site referenced across AI answers.

Page-level citation activity breaks down which specific URLs get cited most often. This lets you see which pages AI systems reference and how that activity changes over time.

The dashboard also introduces “grounding queries,” which Microsoft describes as the key phrases AI used when retrieving content for answers. The company notes this data represents a sample rather than complete citation activity.

A timeline view shows how citation patterns change over time across supported AI experiences.

Why This Matters

This is the first time Bing Webmaster Tools has shown how often content is cited in generative answers, including which URLs are referenced and how citation activity changes over time.

Google includes AI Overviews and AI Mode in Search Console’s overall Performance reporting, but it doesn’t offer a dedicated AI Overviews/AI Mode report or citation-style URL counts. AI Overviews also occupy a single position, with all links assigned that same position.

Bing’s dashboard goes further. It tracks which pages get cited, how often, and what phrases triggered the citation. That gives you data to work with instead of guesses.

Looking Ahead

AI Performance is available now in Bing Webmaster Tools as a public preview. Microsoft said it will continue refining metrics as more data is processed.

Bing has been building toward this for a while. The platform consolidated web search and chat metrics into a single dashboard and has added comparison features and content control tools since then.


Featured Image: Mijansk786/Shutterstock

OpenAI Begins Testing Ads In ChatGPT For Free And Go Users via @sejournal, @MattGSouthern

OpenAI is testing ads inside ChatGPT, bringing sponsored content to the product for the first time.

The test is live for logged-in adult users in the U.S. on the free and Go subscription tiers. Users on the Plus, Pro, Business, Enterprise, and Education tiers won’t see ads.

OpenAI announced the launch with a brief blog post confirming that the principles it outlined in January are now in effect.

OpenAI’s post also adds Education to the list of ad-free tiers, which wasn’t included in the company’s initial plans.

How The Ads Work

Ads appear at the bottom of ChatGPT responses, visually separated from the answer and labeled as sponsored.

OpenAI says it selects ads by matching advertiser submissions with the topic of your conversation, your past chats, and past interactions with ads. If someone asks about recipes, they might see an ad for a meal kit or grocery delivery service.

Advertisers don’t see users’ conversations or personal details. They receive only aggregate performance data like views and clicks.

Users can dismiss ads, see why a specific ad appeared, turn off personalization, or clear all ad-related data. OpenAI also confirmed it won’t show ads in conversations about health, mental health, or politics, and won’t serve them to accounts identified as under 18.

Free users who don’t want ads have another option. OpenAI says you can opt out of ads in the Free tier in exchange for fewer daily free messages. Go users can avoid ads by upgrading to Plus or Pro.

The Path To Today

OpenAI first announced plans to test ads on January 16, alongside the U.S. launch of ChatGPT Go at $8 per month. The company laid out five principles. They cover mission alignment, answer independence, conversation privacy, choice and control, and long-term value.

The January post was careful to frame ads as supporting access rather than driving revenue. Altman wrote on X at the time:

“It is clear to us that a lot of people want to use a lot of AI and don’t want to pay, so we are hopeful a business model like this can work.”

That framing sits alongside OpenAI’s financial reality. Altman said in November that the company is considering infrastructure commitments totaling about $1.4 trillion over eight years. He also said OpenAI expects to end 2025 with an annualized revenue run rate above $20 billion. A source told CNBC that OpenAI expects ads to account for less than half of its revenue long term.

OpenAI has confirmed a $200,000 minimum commitment for early ChatGPT ads, Adweek reported. Digiday reported media buyers were quoted about $60 per 1,000 views for sponsored placements during the initial U.S. test.

Altman’s Evolving Position

The launch represents a notable turn from Altman’s earlier public statements on advertising.

In an October 2024 fireside chat at Harvard, Altman said he “hates” ads and called the idea of combining ads with AI “uniquely unsettling,” as CNN reported. He contrasted ChatGPT’s user-aligned model with Google’s ad-driven search, saying Google’s results depended on “doing badly for the user.”

By November 2025, Altman’s position had softened. He told an interviewer he wasn’t “totally against” ads but said they would “take a lot of care to get right.” He drew a line between pay-to-rank advertising, which he said would be “catastrophic,” and transaction fees or contextual placement that doesn’t alter recommendations.

The test rolling out today follows the contextual model Altman described. Ads sit below responses and don’t affect what ChatGPT recommends. Whether that distinction holds as ad revenue grows will be the longer-term question.

Where Competitors Stand

The timing puts OpenAI’s decision in sharp contrast with its two closest rivals.

Anthropic ran a Super Bowl campaign last week centered on the tagline “Ads are coming to AI. But not to Claude.” The spots showed fictional chatbots interrupting personal conversations with sponsored pitches.

Altman called the campaign “clearly dishonest,” writing on X that OpenAI “would obviously never run ads in the way Anthropic depicts them.”

Google has also kept distance from chatbot ads. DeepMind CEO Demis Hassabis said at Davos in January that Google has no current plans for ads in Gemini, calling himself “a little bit surprised” that OpenAI moved so early. He drew a distinction between assistants, where trust is personal, and search, where Google already shows ads in AI Overviews.

That was the second time in two months that Google leadership publicly denied plans for Gemini advertising. In December, Google Ads VP Dan Taylor disputed an Adweek report claiming advertisers were told to expect Gemini ads in 2026.

The three companies are now on distinctly different paths. OpenAI is testing conversational ads at scale. Anthropic is marketing its refusal to run them. Google is running ads in AI Overviews but holding off on its standalone assistant.

Why This Matters

OpenAI says ChatGPT is used by hundreds of millions of people. CNBC reported that Altman told employees ChatGPT has about 800 million weekly users. That creates pressure to find revenue beyond subscriptions, and advertising is the proven model for monetizing free users across consumer tech.

For practitioners, today’s launch opens a new ad channel for AI platform monetization. The targeting mechanism uses conversation context rather than search keywords, which creates a different kind of intent signal. Someone asking ChatGPT for help planning a trip is further along in the decision process than someone typing a search query.

The restrictions are also worth watching. No ads near health, politics, or mental health topics means the inventory is narrower than traditional search. Combined with reported $60 CPMs and a $200K minimum (at $60 per 1,000 views, that commitment buys roughly 3.3 million impressions), this starts as a premium play for a limited set of advertisers rather than a self-serve marketplace.

Looking Ahead

OpenAI described today’s rollout as a test to “learn, listen, and make sure we get the experience right.” No timeline was given for expanding beyond the U.S. or beyond free and Go tiers.

Separately, CNBC reported that Altman told employees in an internal Slack message that ChatGPT is “back to exceeding 10% monthly growth” and that an “updated Chat model” is expected this week.

How users respond to ads in their ChatGPT conversations will determine whether this test scales or gets pulled back. It will also test whether the distinction Altman drew in November between trust-destroying ads and acceptable contextual ones holds up in practice.