The Halo Effect: Your Paid Media Went Offline, Can You Survive Without It? via @sejournal, @jonkagan

Hello, my fellow digital marketers! This study was born out of a question that gave me a combination of irritation and renewed curiosity: “If I turned off all paid media, would my business actually suffer?”

This is a question that is as old as time (in digital marketing time that is), and just like swallows returning to Capistrano, I am posed this question every Q1, when a brand reviews my annual paid media budget recommendation.

What I thought was going to be a four-week test actually turned out to be a three-month test with a one-month analysis.

The Scenario

The analysis was done for a fast-casual dining restaurant chain, operating 50+ restaurants across 10+ U.S. states, that I was asked to audit. But honestly, this pattern repeats for most brands and verticals (much less so in B2B, I will note).

As alluded to earlier, the brand had a noticeable disconnect: it couldn’t decipher how the media dollars it spent translated into website revenue, and it wondered whether paid media was simply cannibalizing its name recognition and organic efforts. It questioned whether media was contributing at all and wanted to turn it off for a trial period. More to the point, it just didn’t think the paid media was contributing, and it wanted to save some cash. So, we obliged.

Because in-restaurant dining revenue could not be passed back to digital media, we elected to focus on direct online food orders placed on the site to validate the data.

Before and after the test period, it was using search, Performance Max, paid social, digital OOH, and Display. All channels covered both prospecting and retargeting efforts.

It ran limited email efforts to its customers registered for rewards, but it does not have a mobile app, so the customer list is quite small, while its annual digital media investment is around $1.1 million.

For the analysis, we planned to pause media for five weeks in the middle of its low season (which is about four months long), and then compare the overall impact on the site before it was paused and after we brought it back.

Thrilling? Well, let’s just say some folks do get all hot and bothered around a mid-to-low impact media holdout aggregate site activity analysis.

The Important Parameters

So, there are some important things to note around this test:

  • Traditional media was never stopped, but always ran at a low level, mostly billboards and radio.
  • For inexplicable reasons, they never hooked up Search Console.
  • They run a consistent SEO effort.
  • The analysis was done on the same restaurants for all three time periods (they had a couple shut down and one open during this period, so that data was removed from the assessment).
  • Their primary key performance indicator is in-restaurant visits, but they struggle to connect those visits back to the media initiatives that drove them (we use three different foot traffic tracking vendors to measure visits, but they can’t pass in-restaurant sales data back to the visit).
  • Foot traffic leads are actually worth 15-25% more than online order sales, but we do not have true pass-through revenue for them.

Our recommendation to run this test in an isolated market was not taken; they did a full blackout.

Breakout of Online Orders vs. Store Visits 4/14/25 to 5/18/25 (Image from author, January 2026)

Hypothesis

In my typical (and often inappropriate) snarkastic manner, or so I am told, I referred them to my 2021 article, “How Paid Search Incrementality Impacts SEO (Does 1+1=3?),” and told them that this should be their baseline for anticipated impact. For those who don’t want to click the link and read the article, my stance was that removing paid media and running organic only would produce a net loss for the brand in terms of traffic and sales.

To give you a sense of performance, prior to the test, paid media accounted for ~28% of incremental site traffic and ~23% of online orders, which in turn supports the following beliefs:

  • With paid search engine-driven traffic exiting, we expect organic to rise, but not enough to offset what paid drove.
  • With paid social out, we expect a net loss of overall social traffic, in addition to any halo impact driven by social awareness (i.e., direct to site, organic search).
  • With programmatic traffic out, we expect a decrease in aggregate search traffic and direct to site traffic.

Net-net, the loss of paid media will result in a net loss of site traffic, leading to a net loss of online sales that is greater than the media cost that would’ve been used to generate those sales.

Data trends (Image from author, January 2026)

The Pre-Test Data

Having selected a five-week period as our control period, we reviewed the initial data upfront:

Channel              | Spend    | Impressions | Clicks/Site Visits | Online Orders | Revenue
Search               | $30,000  | 395,000+    | 57,000+            | 6,000+        | $250,000+
Performance Max      | $20,000+ | 9 million+  | 27,000+            | 275+          | $11,000+
Social               | $23,000+ | 12 million+ | 38,000+            | 40+           | $500
Programmatic Display | $450     | 19,000+     | 100+               | 1             | $13
DOOH*                | $5,000   | 62,000+     | 0                  | 0             | $0.00
Total                | $80,000  | 21 million+ | 123,000+           | 6,000+        | $262,000+

*Digital Out-of-Home advertising (DOOH)

Additionally, organic search had 131K+ site visits (42% of total), along with 12K+ online orders (46% of total) and $532K+ of revenue (47% of total).

Direct to site traffic, meanwhile, had 78K+ site visits (25% of total), along with 8K+ online orders (29% of total) and $315K+ of revenue (28% of total).

Based on pre-test data, every site visit (from all traffic sources) was equal to $3.61 in online order revenue.
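As a sanity check, the blended revenue-per-visit figure can be roughly reproduced from the aggregates above. A minimal sketch; the totals are the article's rounded numbers, and backing total traffic and revenue out of organic's 42%/47% shares is my assumption, not the author's exact method:

```python
# Back-of-the-envelope check on the blended revenue-per-visit figure.
# Inputs are the article's rounded aggregates for organic search.
organic_visits = 131_000    # 42% of all site visits
organic_revenue = 532_000   # 47% of all online-order revenue

# Infer the site-wide totals from organic's stated shares.
total_visits = round(organic_visits / 0.42)
total_revenue = round(organic_revenue / 0.47)

revenue_per_visit = total_revenue / total_visits
print(f"${revenue_per_visit:.2f} per visit")  # ≈ $3.63, close to the $3.61 quoted
```

The small gap versus the quoted $3.61 comes from rounding in the published aggregates.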

The Test Itself

  • Organic search site visits rose 14% (+18K), orders rose 31% (+4K), and revenue rose 30% (+$161K)
  • Direct to site visits dropped 4% (-3K), orders dropped 3% (-277) and revenue dropped 5% (-$15K)
  • The single largest channel loss of traffic was social (organic+paid), which dropped 98% (-39K) in visits, and dropped 55% in orders (note this was from 80 to 36) and 27% in revenue (a loss of $400)
  • All other site non-paid media traffic channels remained relatively flat
  • Overall site visits dropped 22% (-68K+), orders dropped 9% (-2,500) and revenue dropped 9% (-$105K)
    • Since total site visits decreased by 68K+ rather than the full 123K+ that paid had been driving, the halo effect of paid media on site visits is ~55K
Visits between test periods (Image from author, January 2026)

This means that, even though organic search grew once it was no longer being “cannibalized” by paid media, it could not offset the traffic or sales volume that paid search and Performance Max contributed.

Additionally, the lack of paid awareness media (i.e., display, social, etc.) led to a contraction in total searches related to the brand name, as illustrated by the aggregate drop in total search traffic to the site, along with a drop in direct to site traffic.

“But Jon, they saved on ad spend, that should be helping them come out ahead?”

Wrong.

True, they didn’t spend $80K on ads, so the paid media cost per paid site visit dropped from $0.64 to $0. But they lost an aggregate 68K+ visits. In the pre-test period, the average visit to the site (across all traffic sources) had a revenue value to the brand of $3.61; during the test, that rose to $4.20 (as direct to site and organic search took a bigger piece of the traffic contribution pie).

This means the actual Sales Value Impact = (Avg Revenue per Paid Media Site Visit × Paid Media Visits Lost or Gained) + Ad Spend Saved (or − Ad Spend Spent)

Another way to write that formula is SVI = (ARPMSV × DPMVLG) ± Ad Spend

Meaning on the conservative side it would be:

($2.12 × -123,572) + $79,626.40 = -$182,346.24
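As a sketch, the SVI formula with the conservative inputs above can be expressed as a one-line function (the function and variable names are mine, not part of the article):

```python
def sales_value_impact(avg_rev_per_visit: float, visits_delta: float, ad_spend_delta: float) -> float:
    """SVI = (ARPMSV * visit change) + ad spend delta.

    visits_delta is negative for visits lost; ad_spend_delta is positive
    for spend saved, negative for spend incurred.
    """
    return avg_rev_per_visit * visits_delta + ad_spend_delta

# Conservative inputs from the article: $2.12 per paid media visit,
# 123,572 paid visits lost, $79,626.40 of ad spend saved.
svi = sales_value_impact(2.12, -123_572, 79_626.40)
print(f"{svi:,.2f}")  # -182,346.24
```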

The reality, though, is that organic search rose once it was no longer being cannibalized by paid, but not by enough to offset the loss of paid, so the loss needs to be separated into direct impact and halo impact.

The direct impact is similar to the formula above, but you swap paid media visits out for total site visits change (68,652), which then brings the net loss down to $65,915.84.

Then there is the halo effect of paid media on organic, which is why non-paid visits couldn’t offset the total loss in visits when paid was out. To calculate the halo effect impact, the formula is:

Halo Sales Value Impact = Avg Revenue per Paid Media Site Visit × (Paid Media Traffic Lost or Gained − Total Traffic Lost or Gained in Test Period)

Or, written as HSVI = ARPMSV × (PMTLG − TTLGTP)

Meaning on the conservative side, it would be:

$2.12 × (123,572 − 68,652) = $116,430.40

Combine the two outcomes, and you get your loss of $182,346.24 explained.
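Putting the pieces together, a quick sketch shows the direct and halo components reconciling to the total (the variable names are mine; the dollar figures are the article's conservative inputs):

```python
ARPMSV = 2.12                 # avg revenue per paid media site visit (conservative)
paid_visits_lost = 123_572    # visits paid media had been driving
total_visits_lost = 68_652    # actual net site-visit loss during the test
ad_spend_saved = 79_626.40

# Direct impact: the net visit loss valued at ARPMSV, offset by spend saved.
direct_impact = ARPMSV * -total_visits_lost + ad_spend_saved
# Halo impact: the gap between paid visits lost and total visits lost.
halo_impact = -(ARPMSV * (paid_visits_lost - total_visits_lost))

print(round(direct_impact, 2))                # -65915.84
print(round(halo_impact, 2))                  # -116430.4
print(round(direct_impact + halo_impact, 2))  # -182346.24
```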

This means that, by not running paid media, the full impact was a net missed revenue opportunity of $182,346.24 between direct and halo effects.

This is an extremely conservative method; it does not account for store visit revenue or any shifts in revenue per visit over time, nor for the email/audience list additions that the lost traffic would have generated.

Bringing Paid Media Back

After the five weeks offline, we brought media back; in fact, we increased investment by 48% (no change of channels, but all incremental dollars went to social awareness and display). This generated 29% fewer clicks than pre-test but increased impressions by 107%.

In the post-test period vs. the test period, the return of media, at the increased investment, led to a 21% decrease in organic search traffic. But paid search and Performance Max generated enough gains for a net positive of +6% in site visits across all of search. Overall, search-driven online orders and revenue both saw a 2% increase when paid was reintroduced.

The only true net loss was direct to site (which we believed would rise when media was back in market); it decreased 6% in traffic and 10% in orders. Overall, the site saw a 38% lift in total visits but a 1% drop in online orders (revenue was flat). The loss in online orders came exclusively from direct to site traffic.

What Does This All Mean?

A variety of things.

No matter what way you cut it, the presence of paid media had a halo effect on all activity, most notably, the aggregation of paid and organic search.

Visits (Image from author, January 2026)

But the post-click impact on the site may not follow the same path.

Online orders (Image from author, January 2026)

Which means, a larger view must be taken to examine additional impact (i.e., foot traffic, loyalty club sign-ups, and LTV).

It also reinforces the concept of 1+1=3, the theory of incrementality. While no changes were made beyond removing paid digital media, and the brand remained in low season the whole way through, the actual impact of inbound traffic lost (not covered by organic) was considerable.

It also stands as a reminder: Just because a site visit doesn’t generate immediate sales/revenue, it does not mean it doesn’t serve a purpose (i.e., foot traffic).

The Takeaway

Any brand that has more paid media site traffic than non-paid site traffic, and thinks they can turn off paid and coast equally on just non-paid traffic alone, has the same mindset as any NY Jets fan (the inability to accept a very harsh reality).

But, despite my writing, I am an optimist, and I encourage brands to do a similar study, if for no other reason than to have the data on hand for when the CMO comes in saying they want to turn off paid because they don’t think they should pay for it.

But word to the wise: Don’t do what we did here, do a market holdout, so that if things go south, it isn’t system-wide.


Featured Image: Roman Samborskyi/Shutterstock

The Download: Making AI Work, and why the Moltbook hype is similar to Pokémon

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A first look at Making AI Work, MIT Technology Review’s new AI newsletter

Are you interested in learning more about the ways in which AI is actually being used? We’ve launched a new weekly newsletter series exploring just that: digging into how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work.

Each edition of Making AI Work begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool.

The first edition takes a look at how AI is changing health care, digging into the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. Sign up here to receive the seven editions straight to your inbox, and if you’d like to read more about AI’s impact on health care in the meantime, check out some of our past reporting:

+  This medical startup uses LLMs to run appointments and make diagnoses.

+ How AI is changing how we quantify pain by helping health-care providers better assess their patients’ discomfort. Read the full story.

+ End-of-life decisions are difficult and distressing. Could AI help?

+ Artificial intelligence is infiltrating health care. But we shouldn’t let it make all the decisions unchecked. Read the full story.

Why the Moltbook frenzy was like Pokémon

Lots of influential people in tech recently described Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them—sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?

The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon. Read the full story to find out why.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has begun testing ads in ChatGPT 
But the ads won’t influence the responses it provides, apparently. (The Verge)
+ Users who pay at least $20 a month for the chatbot will be exempt. (Gizmodo)
+ So will users believed to be under 18. (Axios)

2 The White House has a plan to stop data centers from raising electricity prices
It’s going to ask AI companies to voluntarily commit to keeping costs down. (Politico)
+ The US federal government is adopting AI left, right and center. (WP $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

3 Elon Musk wants to colonize the moon
For now at least, his grand ambitions to live on Mars are taking a backseat. (CNN)
+ His full rationale for this U-turn isn’t exactly clear. (Ars Technica)
+ Musk also wants to become the first to launch a working data center in space. (FT $)
+ The case against humans in space. (MIT Technology Review)

4 Cheap AI tools are helping criminals to ramp up their scams
They’re using LLMs to massively scale up their attacks. (Bloomberg $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

5 Iceland could be heading towards becoming one giant glacier
If human-driven warming disrupts a vital ocean current, that is. (WP $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)

6 Amazon is planning to launch an AI content marketplace
It’s reported to have spoken to media publishers to gauge their interest. (The Information $)

7 Doctors can’t agree on how to diagnose Alzheimer’s
They worry that some patients are being misdiagnosed. (WSJ $)

8 The first wave of AI enthusiasts are burning out
A new study has found that AI tools are linked to employees working more, not less. (TechCrunch)

9 We’re finally moving towards better ways to measure body fat
BMI is a flawed metric. Physicians are finally using better measures. (New Scientist $)
+ These are the best ways to measure your body fat. (MIT Technology Review)

10 It’s getting harder to become a social media megastar
Maybe that’s a good thing? (Insider $)
+ The likes of Mr Beast are still raking in serious cash, though. (The Information $)

Quote of the day

“This case is as easy as ABC—addicting, brains, children.”

—Lawyer Mark Lanier lays out his case during the opening statements of a new tech addiction trial in which a woman has accused Meta of deliberately designing their platforms to be addictive, the New York Times reports.

One more thing

China wants to restore the sea with high-tech marine ranches

A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.

Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.

The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Wow, Joel and Ethan Coen’s dark comedic classic Fargo is 30 years old.
+ A new exhibition in New York is rightfully paying tribute to one of the greatest technological inventions: the Walkman ($)
+ This gigantic sleeping dachshund sculpture in South Korea is completely bonkers.
+ A beautiful heart-shaped pendant linked to King Henry VIII has been secured by the British Museum.

A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT

The campaign urged ChatGPT users to cancel their subscriptions, flagging a substantial contribution by OpenAI president Greg Brockman to President Donald Trump’s super PAC MAGA Inc. It also pointed out that US Immigration and Customs Enforcement, or ICE, uses a résumé screening tool powered by ChatGPT-4. The federal agency has become a political flashpoint since its agents fatally shot two people in Minneapolis in January.

For Stephen, who had already been tinkering with other chatbots, learning about Brockman’s donation was the final straw. “That’s really the straw that broke the camel’s back,” he says. When he canceled his ChatGPT subscription, a survey popped up asking what OpenAI could have done to keep his subscription. “Don’t support the fascist regime,” he wrote.

QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. Many lamented the performance of GPT-5.2, the latest model. Others shared memes parodying the chatbot’s sycophancy. Some planned a “Mass Cancellation Party” in San Francisco, a sardonic nod to the GPT-4o funeral that an OpenAI employee had floated, poking fun at users who are mourning the model’s impending retirement. Still, others are protesting against what they see as a deepening entanglement between OpenAI and the Trump administration.

OpenAI did not respond to a request for comment.

As of December 2025, ChatGPT had nearly 900 million weekly active users, according to The Information. While it’s unclear how many users have joined the boycott, QuitGPT is getting attention. A recent Instagram post from the campaign has more than 36 million views and 1.3 million likes. And the organizers say that more than 17,000 people have signed up on the campaign’s website, which asks people whether they canceled their subscriptions, will commit to stop using ChatGPT, or will share the campaign on social media. 

“There are lots of examples of failed campaigns like this, but we have seen a lot of effectiveness,” says Dana Fisher, a sociologist at American University. A wave of canceled subscriptions rarely sways a company’s behavior, unless it reaches a critical mass, she says. “The place where there’s a pressure point that might work is where the consumer behavior is if enough people actually use their … money to express their political opinions.”

MIT Technology Review reached out to three employees at OpenAI, none of whom said they were familiar with the campaign. 

Dozens of left-leaning teens and twentysomethings scattered across the US came together to organize QuitGPT in late January. They range from pro-democracy activists and climate organizers to techies and self-proclaimed cyber libertarians, many of them seasoned grassroots campaigners. They were inspired by a viral video posted by Scott Galloway, a marketing professor at New York University and host of The Prof G Pod. He argued that the best way to stop ICE was to persuade people to cancel their ChatGPT subscriptions. Denting OpenAI’s subscriber base could ripple through the stock market and threaten an economic downturn that would nudge Trump, he said.

“We make a big enough stink for OpenAI that all of the companies in the whole AI industry have to think about whether they’re going to get away with enabling Trump and ICE and authoritarianism,” says an organizer of QuitGPT who requested anonymity because he feared retaliation by OpenAI, citing the company’s recent subpoenas against advocates at nonprofits. OpenAI made for an obvious first target of the movement, he says, but “this is about so much more than just OpenAI.”

Simon Rosenblum-Larson, a labor organizer in Madison, Wisconsin, who organizes movements to regulate the development of data centers, joined the campaign after hearing about it through Signal chats among community activists. “The goal here is to pull away the support pillars of the Trump administration. They’re reliant on many of these tech billionaires for support and for resources,” he says. 

QuitGPT’s website points to new campaign finance reports showing that Greg Brockman and his wife each donated $12.5 million to MAGA Inc., making up nearly a quarter of the roughly $102 million it raised over the second half of 2025. The information that ICE uses a résumé screening tool powered by ChatGPT-4 came from an AI inventory published by the Department of Homeland Security in January.

QuitGPT is in the mold of Galloway’s own recently launched campaign, Resist and Unsubscribe. The movement urges consumers to cancel their subscriptions to Big Tech platforms, including ChatGPT, for the month of February, as a protest against companies “driving the markets and enabling our president.”

“A lot of people are feeling real anxiety,” Galloway told MIT Technology Review. “You take enabling a president, proximity to the president, and an unease around AI,” he says, “and now people are starting to take action with their wallets.” Galloway says his campaign’s website can draw more than 200,000 unique visits in a day and that he receives dozens of DMs every hour showing screenshots of canceled subscriptions.

The consumer boycotts follow a growing wave of pressure from inside the companies themselves. In recent weeks, tech workers have been urging their employers to use their political clout to demand that ICE leave US cities, cancel company contracts with the agency, and speak out against the agency’s actions. CEOs have started responding. OpenAI’s Sam Altman wrote in an internal Slack message to employees that ICE is “going too far.” Apple CEO Tim Cook called for a “deescalation” in an internal memo posted on the company’s website for employees. It was a departure from how Big Tech CEOs have courted President Trump with dinners and donations since his inauguration.

Although spurred by a fatal immigration crackdown, these developments signal that a sprawling anti-AI movement is gaining momentum. The campaigns are tapping into simmering anxieties about AI, says Rosenblum-Larson, including the energy costs of data centers, the plague of deepfake porn, the teen mental-health crisis, the job apocalypse, and slop. “It’s a really strange set of coalitions built around the AI movement,” he says.

“Those are the right conditions for a movement to spring up,” says David Karpf, a professor of media and public affairs at George Washington University. Brockman’s donation to Trump’s super PAC caught many users off guard, he says. “In the longer arc, we are going to see users respond and react to Big Tech, deciding that they’re not okay with this.”

Ask an Expert: Should Merchants Block AI Bots?

“Ask an Expert” is an occasional series where we pose questions to seasoned ecommerce pros. For this installment, we’ve turned to Scot Wingo, a serial ecommerce entrepreneur most recently of ReFiBuy, a generative engine optimization platform, and before that, ChannelAdvisor, the marketplace management firm.

He addresses tactics for managing genAI bots.

Practical Ecommerce: Should ecommerce merchants monitor and even block AI agents that crawl their sites?

Scot Wingo: It’s a nuanced and strategic decision essential to every merchant.

Scot Wingo

The four agentic commerce experiences — ChatGPT (Instant Checkout, Agentic Commerce Protocol), Google Gemini (Universal Commerce Protocol), Microsoft Copilot (Copilot Checkout, ACP), and Perplexity (PayPal, Instant Buy) — have nearly 1 billion combined monthly active users. With Google transitioning from traditional search to AI Mode, that number will dramatically increase.

For merchants, the opportunity is as big as or bigger than Amazon or any other marketplace.

Merchants should embrace AI agents and ensure access to the entire product catalog.

But genAI models need more than access. Agentic commerce thrives not just on extensive attributes but also on the products’ applications and use cases. Merchants should expand attributes beyond what’s shown on product detail pages and provide essential context via a deep and wide question-and-answer section that includes common shopper queries. It enables the models to match consumer prompts with relevant recommendations, driving sales to those merchants.

The time for action is now. Gemini’s shift to AI Mode means zero-click searches will increase, likely producing 20-30% fewer clicks (and revenue) in 2026.

Make 2026 The Year Your Business Thrives On Reddit [Webinar] via @sejournal, @hethr_campbell

Yes, yes, we all know customer behavior is changing, and Reddit conversations showing up in AI search are a big part of that shift. What are you doing about it?

If your Reddit marketing strategy hasn’t evolved since 2024 (or you don’t have one to start with), you’re not just behind. You might be actively harming your brand.

We’re past the point of debating whether brands should be on Reddit. That part’s settled. This session is about how to navigate Reddit the right way.

And, we’re going to show you exactly how to do it! I am super excited to bring back our Reddit expert, and SEJ owner/advisor, Brent Csutoras. Bring your notepad and start flexing those fingers, because you’ll have plenty of inspiration and action items after this one!

Learn From Someone Who’s Navigated It All

On February 24, join Brent for a live presentation showcasing how brands can have success. And, we’ll be doing live Q&A at the end, so bring your specific burning questions.

With nearly 20 years on Reddit and experience building authentic presence for brands like Purple, Asurion, and TikTok, Brent understands what communities respect and what they reject. These are frameworks built from real campaigns, real mistakes, and real results on a platform where communities can smell fakeness from three subreddits away.

What You’ll Learn

How karma and authority actually work now
The mechanics changed, and what used to build credibility can now destroy it. You’ll learn the new rules for establishing authority without triggering Reddit’s detection systems.

Brand representation without the astroturfing accusations
Reddit communities have gotten better at spotting fake engagement, and they’re not shy about calling it out publicly. Discover how brands are building genuine presence, when to be transparent about who you are, and how to walk the line between being a typical salesy brand and being a thought leader that communities want to hear from and engage with.

Why your brand needs its own subreddit and how to run it right
Owned subreddits have become critical infrastructure for Reddit success. Learn what makes brand subreddits work, how to build engagement that communities want to participate in, and the common mistakes that kill momentum in the first 90 days.

Who Should Attend

This webinar is essential for marketing directors, social media managers, and brand strategists who recognize Reddit’s importance and are looking for the playbook to do it successfully.

If you’re at a B2B SaaS company or consumer brand trying to prove Reddit’s value to leadership, this session will give you frameworks for measuring real impact on the customer journey and building authentic presence that communities respect.

Walk Away With Updated Frameworks

This isn’t a session about Reddit basics or generic social media strategy. This is up-to-date, specific guidance on what works right now.

You’ll leave with actionable frameworks you can implement immediately, a clearer understanding of how to measure Reddit’s true influence on your business, and the confidence to build a presence that drives quality traffic.

Register for the webinar and ask your questions live! Learn how to be a part of the community conversations and thrive on Reddit this year.

I can’t wait to see you there!

Synthetic Personas For Better Prompt Tracking via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

We all know prompt tracking is directional. The most effective way to reduce noise is to track prompts based on personas.

This week, I’m covering:

  • Why AI personalization makes traditional “track the SERP” models incomplete, and how synthetic personas fill the gap.
  • The Stanford validation data showing 85% accuracy at one-third the cost, and how Bain cut research time by 50-70%.
  • The five-field persona card structure and how to generate 15-30 trackable prompts per segment across intent levels.
The best way to make your prompt tracking much more accurate is to base it on personas. Synthetic Personas speed you up at a fraction of the price. (Image Credit: Kevin Indig)

A big difference between classic and AI search is that the latter delivers highly personalized results.

  • Every user gets different answers based on their context, history, and inferred intent.
  • The average AI prompt is ~5x longer than classic search keywords (23 words vs. 4.2 words), conveying much richer intent signals that AI models use for personalization.
  • Personalization creates a tracking problem: You can’t monitor “the” AI response anymore because each prompt is essentially unique, shaped by individual user context.

Traditional persona research solves this – you map different user segments and track responses for each – but it creates new problems. It takes weeks to conduct interviews and synthesize findings.

By the time you finish, the AI models have changed. Personas become stale documentation that never gets used for actual prompt tracking.

Synthetic personas fill the gap by building user profiles from behavioral and profiling data: analytics, CRM records, support tickets, review sites. You can spin up hundreds of micro-segment variants and interact with them in natural language to test how they’d phrase questions.

Most importantly: They are the key to more accurate prompt tracking because they simulate actual information needs and constraints.

The shift: Traditional personas are descriptive (who the user is), synthetic personas are predictive (how the user behaves). One documents a segment, the other simulates it.

Image Credit: Kevin Indig

Example: Enterprise IT buyer persona with job-to-be-done “evaluate security compliance” and constraint “need audit trail for procurement” will prompt differently than an individual user with the job “find cheapest option” and constraint “need decision in 24 hours.”

  • First prompt: “enterprise project management tools SOC 2 compliance audit logs.”
  • Second prompt: “best free project management app.”
  • Same product category, completely different prompts. You need both personas to track both prompt patterns.

Build Personas With 85% Accuracy For One-Third Of The Price

Stanford and Google DeepMind trained synthetic personas on two-hour interview transcripts, then tested whether the AI personas could predict how those same real people would answer survey questions later.

  • The method: Researchers conducted follow-up surveys with the original interview participants, asking them new questions. The synthetic personas answered the same questions.
  • Result: 85% accuracy. The synthetic personas replicated what the actual study participants said.
  • For context, that’s comparable to human test-retest consistency. If you ask the same person the same question two weeks apart, they’re about 85% consistent with themselves.

The Stanford study also measured how well synthetic personas predicted social behavior patterns in controlled experiments – things like who would cooperate in trust games, who would follow social norms, and who would share resources fairly.

The correlation between synthetic persona predictions and actual participant behavior was 98%. This means the AI personas didn’t just memorize interview answers; they captured underlying behavioral tendencies that predicted how people would act in new situations.

Bain & Company ran a separate pilot that showed comparable insight quality at one-third the cost and one-half the time of traditional research methods. Their findings: 50-70% time reduction (days instead of weeks) and 60-70% cost savings (no recruiting fees, incentives, transcription services).

The catch: These results depend entirely on input data quality. The Stanford study used rich, two-hour interview transcripts. If you train on shallow data (just pageviews or basic demographics), you get shallow personas. Garbage in, garbage out.

How To Build Synthetic Personas For Better Prompt Tracking

Building a synthetic persona has three parts:

  1. Feed it with data from multiple sources about your real users: call transcripts, interviews, message logs, organic search data.
  2. Fill out the Persona Card – the five fields that capture how someone thinks and searches.
  3. Add metadata to track the persona’s quality and when it needs updating.

The mistake most teams make: trying to build personas from prompts. This is circular logic – you need personas to understand what prompts to track, but you’re using prompts to build personas. Instead, start with user information needs, then let the persona translate those needs into likely prompts.

Data Sources To Feed Synthetic Personas

The goal is to understand what users are trying to accomplish and the language they naturally use:

  1. Support tickets and community forums: Exact language customers use when describing problems. Unfiltered, high-intent signal.
  2. CRM and sales call transcripts: Questions they ask, objections they raise, use cases that close deals. Shows the decision-making process.
  3. Customer interviews and surveys: Direct voice-of-customer on information needs and research behavior.
  4. Review sites (G2, Trustpilot, etc.): What they wish they’d known before buying. Gap between expectation and reality.
  5. Search Console query data: Questions they ask Google. Use regex to filter for question-type queries:
    (?i)^(who|what|why|how|when|where|which|can|does|is|are|should|guide|tutorial|course|learn|examples?|definition|meaning|checklist|framework|template|tips?|ideas?|best|top|lists?|comparison|vs|difference|benefits|advantages|alternatives)\b.*

    (I like to use the last 28 days, segmented by target country.)
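To apply that filter programmatically, here is a minimal Python sketch. It assumes your queries are already loaded as a list of strings from a Search Console export (the loading step is up to you); the pattern mirrors the regex above:

```python
import re

# Question-intent pattern from the article, compiled case-insensitively.
QUESTION_RE = re.compile(
    r"^(who|what|why|how|when|where|which|can|does|is|are|should|guide|"
    r"tutorial|course|learn|examples?|definition|meaning|checklist|framework|"
    r"template|tips?|ideas?|best|top|lists?|comparison|vs|difference|"
    r"benefits|advantages|alternatives)\b.*",
    re.IGNORECASE,
)

def question_queries(queries):
    """Keep only question-type queries from an iterable of query strings."""
    return [q for q in queries if QUESTION_RE.match(q)]
```

Feed it the raw query column and it returns only the question-shaped queries worth mining for persona vocabulary.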

Persona card structure (five fields only – more creates maintenance debt):

These five fields capture everything needed to simulate how someone would prompt an AI system. They’re minimal by design. You can always add more later, but starting simple keeps personas maintainable.

  1. Job-to-be-done: What’s the real-world task they’re trying to accomplish? Not “learn about X” but “decide whether to buy X” or “fix problem Y.”
  2. Constraints: What are their time pressures, risk tolerance levels, compliance requirements, budget limits, and tooling restrictions? These shape how they search and what proof they need.
  3. Success metric: How do they judge “good enough?” Executives want directional confidence. Engineers want reproducible specifics.
  4. Decision criteria: What proof, structure, and level of detail do they require before they trust information and act on it?
  5. Vocabulary: What are the terms and phrases they naturally use? Not “churn mitigation” but “keeping customers.” Not “UX optimization” but “making the site easier to use.”
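The five fields above map naturally onto a small data structure. Here is a hedged Python sketch; the field names mirror the list, but the example values are illustrative only (not from the article):

```python
from dataclasses import dataclass

@dataclass
class PersonaCard:
    """Five-field persona card; names mirror the structure above."""
    job_to_be_done: str       # real-world task, e.g. "decide whether to buy X"
    constraints: list         # time pressure, budget, compliance, tooling
    success_metric: str       # how they judge "good enough"
    decision_criteria: list   # proof required before they trust and act
    vocabulary: list          # terms they actually use, not internal jargon

# Illustrative values for the enterprise IT buyer from the earlier example:
enterprise_it = PersonaCard(
    job_to_be_done="evaluate security compliance",
    constraints=["need audit trail for procurement"],
    success_metric="passes the security review",
    decision_criteria=["SOC 2 report", "audit logs"],
    vocabulary=["SOC 2", "audit logs", "compliance"],
)
```

Keeping the card this small is the point: each field feeds directly into how the persona would phrase a prompt.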

Specification Requirements

This is the metadata that makes synthetic personas trustworthy; it prevents the “black box” problem.

When someone questions a persona’s outputs, you can trace back to the evidence.

These requirements form the backbone of continuous persona development. They keep track of changes, sources, and confidence in the weighting.

  • Provenance: Which data sources, date ranges, and sample sizes were used (e.g., “Q3 2024 Support Tickets + G2 Reviews”).
  • Confidence score per field: A High/Medium/Low rating for each of the five Persona Card fields, backed by evidence counts. (e.g., “Decision Criteria: HIGH confidence, based on 47 sales calls vs. Vocabulary: LOW confidence, based on 3 internal emails”).
  • Coverage notes: Explicitly state what the data misses (e.g., “Overrepresents enterprise buyers, completely misses users who churned before contacting support”).
  • Validation benchmarks: Three to five reality checks against known business truths to spot hallucinations. (e.g., “If the persona claims ‘price’ is the top constraint, does that match our actual deal cycle data?”).
  • Regeneration triggers: Pre-defined signals that it’s time to re-run the script and refresh the persona (e.g., a new competitor enters the market, or vocabulary in support tickets shifts significantly).
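One lightweight way to make that metadata actionable is to store it next to the persona and flag fields whose evidence is thin. A sketch, with illustrative field names and values:

```python
# Hedged sketch: spec metadata attached to a persona, plus a helper that
# surfaces fields needing more evidence. All values are illustrative.
persona_spec = {
    "provenance": "Q3 2024 support tickets + G2 reviews (n=412)",
    "confidence": {  # High/Medium/Low per persona-card field
        "job_to_be_done": "HIGH",
        "constraints": "HIGH",
        "success_metric": "MEDIUM",
        "decision_criteria": "HIGH",
        "vocabulary": "LOW",
    },
    "coverage_notes": "Overrepresents enterprise buyers; misses silent churners",
}

def low_confidence_fields(spec):
    """Return persona-card fields rated LOW, i.e., not yet trustworthy."""
    return [f for f, c in spec["confidence"].items() if c == "LOW"]
```

Running the helper before each tracking cycle gives you a concrete regeneration trigger: any field it returns needs fresh data before the persona's prompts should be trusted.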

Where Synthetic Personas Work Best

Before you build synthetic personas, understand where they add value and where they fall short.

High-Value Use Cases

  • Prompt design for AI tracking: Simulate how different user segments would phrase questions to AI search engines (the core use case covered in this article).
  • Early-stage concept testing: Test 20 messaging variations, narrow to the top five before spending money on real research.
  • Micro-segment exploration: Understand behavior across dozens of different user job functions (enterprise admin vs. individual contributor vs. executive buyer) or use cases without interviewing each one.
  • Hard-to-reach segments: Test ideas with executive buyers or technical evaluators without needing their time.
  • Continuous iteration: Update personas as new support tickets, reviews, and sales calls come in.

Crucial Limitations Of Synthetic Personas You Need To Understand

  • Sycophancy bias: AI personas are overly positive. Real users say, “I started the course but didn’t finish.” Synthetic personas say, “I completed the course.” They want to please.
  • Missing friction: They’re more rational and consistent than real people. If your training data includes support tickets describing frustrations or reviews mentioning pain points, the persona can reference these patterns when asked – it just won’t spontaneously experience new friction you haven’t seen before.
  • Shallow prioritization: Ask what matters, and they’ll list 10 factors as equally important. Real users have a clear hierarchy (price matters 10x more than UI color).
  • Inherited bias: Training data biases flow through. If your CRM underrepresents small business buyers, your personas will too.
  • False confidence risk: The biggest danger. Synthetic personas always have coherent answers. This makes teams overconfident and skip real validation.

Operating rule: Use synthetic personas for exploration and filtering, not for final decisions. They narrow your option set. Real users make the final call.

Solving The Cold Start Problem For Prompt Tracking

Synthetic personas are a filter tool, not a decision tool. They narrow your option set from 20 ideas to five finalists. Then, you validate those five with real users before shipping.

For AI prompt tracking specifically, synthetic personas solve the cold-start problem. You can’t wait to accumulate six months of real prompt volume before you start optimizing. Synthetic personas let you simulate prompt behavior across user segments immediately, then refine as real data comes in.

Where they’ll cause you to fail is if you use them as an excuse to skip real validation. Teams love synthetic personas because they’re fast and always give answers. That’s also what makes them dangerous. Don’t skip the validation step with real customers.


Featured Image: Paulo Bobita/Search Engine Journal

Google Can Now Monitor Search For Your Government IDs via @sejournal, @MattGSouthern
  • Google’s “Results about you” tool now lets you find and request removal of search results containing government-issued IDs.
  • This includes IDs like passports, driver’s licenses, and Social Security numbers.
  • The expansion is rolling out in the U.S. over the coming days, with additional regions planned.

Google’s Results about you tool now monitors Search results for government-issued IDs like passports, driver’s licenses, and Social Security numbers.

Should I Optimize My Content Differently For Each Platform? – Ask An SEO via @sejournal, @rollerblader

This week’s Ask an SEO question is from an anonymous reader who asks:

“Should I be optimizing content differently for LinkedIn, Reddit, and traditional search engines? I’m seeing these platforms rank highly in Google results, but I’m not sure how to create a cohesive multi-platform SEO approach.”

Yes, you should absolutely be optimizing your content differently based on where you publish it, where you want to reach the audience, and the way they engage. This includes what you put out, what goes on your website, and what exists in your metadata. Each platform has a different user experience, and the people there go for different reasons, so your job with your content is to meet their needs where they are.

Metadata

For SEO purposes, you’re limited to a certain number of pixels for meta titles and descriptions in a search result, whereas on social media platforms, you’re limited to a certain number of characters. This means your titles and descriptions need to be modified to fit the pixel or character lengths defined by each platform, including Open Graph, rich pins, etc. The people on each platform may also be at different stages of their journey and represent different audience demographics.
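As a sketch of how you might enforce those per-platform budgets at scale, here is a small Python helper. The character budgets are rough assumptions standing in for real pixel/character limits; check each platform's current specs before relying on them:

```python
# Assumed character budgets per platform (illustrative, not official limits).
TITLE_BUDGET = {"google": 60, "og": 88, "x": 70}

def fit_title(title: str, platform: str) -> str:
    """Trim a title to a platform's budget, ending on a whole word."""
    budget = TITLE_BUDGET[platform]
    if len(title) <= budget:
        return title
    # Cut to budget minus one for the ellipsis, then drop any partial word.
    cut = title[: budget - 1].rsplit(" ", 1)[0]
    return cut + "…"
```

Run every title through it once per platform and you get a per-platform metadata set without hand-editing each variant.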

If a platform has its own metadata elements and its audience is younger or skews toward one gender, cater the text and imagery in your metadata to that audience. It’s worth testing whether that resonates better, but only if that demographic makes up the majority of your traffic from the platform. For search engines, it can be anyone and any demographic, so make it a strong sales pitch that is all-inclusive. Use your customer service and review data to find out what matters to them and use it in your messaging. The same goes for the images you use.

What fits on Pinterest won’t look good on LinkedIn, and an image for Google Discover may not work great on Instagram. Pinterest can display a vertical infographic and make it look great, but it will be illegible on platforms that have squares and landscape-oriented images. Resize, change the wording, and ensure the focal point on the image matches the platform it’ll be used on via your metadata.

Search engines and social algorithms look for different things as well. A search engine may allow some clickbait and salesy types of titles and metadata, but social media algorithms may penalize sites that do this. And each platform will be using and looking for different signals.

This is why you want to speak to the audiences on the platforms and focus on what the platform rewards, not just a search engine. Your customers on TikTok may be younger and use different wording than your customers on Facebook, but your webpage will need to balance the wording for both. This is where using unique metadata by platform and purpose matters.

Content On Your Own Pages

Not every page on your website has to be for SEO, AIO, or GEO, and neither does the user experience. If the page is for an email blast or remarketing where you have strong calls to action, less text, and more conversion, you can noindex it or use a canonical link to the detailed new customer experience page. The same goes for SEO vs. social media visitors.

Someone from social media may need more of an education when buying a product because they didn’t set out that day to buy it; they were on social media to have fun. Someone looking for a product, product + review, or a comparison has a background on the product and wants a solution, so they went to a search engine to find one. This is where an educational vs. a conversion option can happen, and both can exist without competing, even though they’re optimized for the same keyword phrase.

The schema, the way wording is used, and elements on the page like an “add to cart” button above the fold help search engines know the page is for conversions, while an H1, H2, and text with internal links to product and content pages signal it’s for educational purposes. Now apply this to the goal of what you want the person to do on the page, and keep in mind where they are coming from before they reach it.

You may want a more visual approach with video demonstrations or reviews, and options to shop and learn more for some experiences, vs. giving them the product and a buy now button. Both are optimized for the same keywords, but both are there for different visitors. This is where you deduplicate them using your SEO skills.

The keywords and phrases will be similar in your title tag and H1 tag, and the page will compete directly against the product or collection page, but it is built for how people from Snapchat and Reddit engage vs. someone from an email blast who knows how your brand behaves. So, set the canonical link to the main product page and/or add a meta robots noindex,nofollow. When you’re pushing your content out, share the version of the page on the platform it is designed for. Your site structure and robots.txt guide the search engines and AI to the pages meant for them, helping to eliminate the cannibalization.
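As a sketch, the head of the social-specific variant page might carry tags like these (the URLs are hypothetical):

```html
<!-- On the social-specific variant page: point at the main product page -->
<link rel="canonical" href="https://example.com/products/blue-tshirt/">
<!-- And/or keep the variant out of the index entirely -->
<meta name="robots" content="noindex, nofollow">
```

The canonical consolidates ranking signals on the main page, while the noindex option removes the variant from search results entirely; you generally need only one of the two.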

It is the same content, the same purpose, and the same goal, just a unique format for the platform you want traffic from. I wouldn’t recommend this for everything because it is a ton of work, but for important pages, products, and services, it can make a difference to provide a better UX based on what the person and platform prefer.

What You Post To The Platform

Last is the content you post to the platform itself. Some platforms allow hashtags; others prefer longer text; and platforms like X or Bluesky restrict the number of characters you can use unless you pay. The audiences on these platforms pay attention to and use different words, and the algorithms may reward or penalize content differently.

On LinkedIn and Reddit, you may want to share a portion of the post and a summary of what the person will learn, then encourage engagement and a click through to your website or app. On Facebook, you may do a snippet of text and a stronger call-to-action, as people aren’t there for networking and learning like they are on LinkedIn.

Reddit may also benefit from examples and trust builders, whereas YouTube Shorts is about a quick message that entices an interaction and, ideally, a click-through. The written description on a YouTube Short may go ignored since it is hidden, so the video carries the message. Reddit users are also often looking for real human experiences, reviews, and comparisons from real customers. So, if you engage and publish your content there, look at the topic of the forum and meet the user at that specific stage of their journey.

The description still matters on most of these platforms because they are algorithm-based, and so are the search engines that feature their content. The content here acts as food for the algorithms along with user signals, so make sure you write something that properly matches the video’s content and follows best practices. If you’re publishing to Medium or Reddit and want to capture comparison queries, focus on unbiased and fair comparisons or reviews so Google surfaces them (disclosing that you are one of the brands if you are). Then focus your own pages on conversion copy so that when the person is ready to buy a blue t-shirt, they see your conversion page.

You should change the content based on the platform, and even on your own website, when the goal is to bring users in from a specific traffic source. Someone from social media may like a video, while someone from a search engine wants text. Just make sure you code and structure your pages correctly and cater the experience to each platform so users reach the right version.

This is not practical for every page, so do your best, and at a minimum, customize what you share publicly and what is in your metadata. Those are easy enough and fast enough to be able to be done at scale, then pay attention to the UX on the page and make adjustments as needed.


Featured Image: Paulo Bobita/Search Engine Journal

New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough via @sejournal, @martinibuster

New data based on real-world web pages demonstrates that Googlebot’s crawl limit of two megabytes is more than adequate. New SEO tools provide an easy way to check how much the HTML of a web page weighs.

Data Shows 2 Megabytes Is Plenty

Raw HTML is basically just a text file. For a text file to reach two megabytes, it would require over two million characters.
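That arithmetic is easy to verify yourself. Here is a minimal Python sketch that reports an HTML document's byte weight against the two-megabyte cap (the function name is mine, not from any tool in this article):

```python
GOOGLEBOT_CAP = 2 * 1024 * 1024  # 2 MB = 2,097,152 bytes

def html_weight(html: str) -> dict:
    """Return the byte weight of raw HTML and whether it fits under the cap."""
    size = len(html.encode("utf-8"))
    return {
        "bytes": size,
        "kilobytes": round(size / 1024, 1),
        "within_googlebot_cap": size <= GOOGLEBOT_CAP,
    }
```

A page at the 33 KB median clears the cap by roughly a factor of 60, which is the article's point in miniature.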

The HTTPArchive explains what’s in the HTML weight measurement:

“HTML bytes refers to the pure textual weight of all the markup on the page. Typically it will include the document definition and commonly used on page tags such as <head> or <body>. However it also contains inline elements such as the contents of script tags or styling added to other tags. This can rapidly lead to bloating of the HTML doc.”

That is the same thing that Googlebot downloads as HTML: just the on-page markup, not the external JavaScript or CSS files it links to.

According to the HTTPArchive’s latest report, the real-world median size of raw HTML is 33 kilobytes. The page weight at the 90th percentile is 155 kilobytes, meaning that the HTML for 90% of sites is less than or approximately equal to 155 kilobytes. Only at the 100th percentile does the size of HTML explode to well beyond two megabytes, which means that pages weighing two megabytes or more are extreme outliers.

The HTTPArchive report explains:

“HTML size remained uniform between device types for the 10th and 25th percentiles. Starting at the 50th percentile, desktop HTML was slightly larger.

Not until the 100th percentile is a meaningful difference when desktop reached 401.6 MB and mobile came in at 389.2 MB.”

The data separates the home page measurements from the inner page measurements and, surprisingly, shows that there is little difference between the two. The data is explained:

“There is little disparity between inner pages and the home page for HTML size, only really becoming apparent at the 75th and above percentile.

At the 100th percentile, the disparity is significant. Inner page HTML reached an astounding 624.4 MB—375% larger than home page HTML at 166.5 MB.”

Mobile And Desktop HTML Sizes Are Similar

Interestingly, the page sizes between mobile and desktop versions were remarkably similar, regardless of whether HTTPArchive was measuring the home page or one of the inner pages.

HTTPArchive explains:

“The size difference between mobile and desktop is extremely minor, this implies that most websites are serving the same page to both mobile and desktop users.

This approach dramatically reduces the amount of maintenance for developers but does mean that overall page weight is likely to be higher as effectively two versions of the site are deployed into one page.”

Though the overall page weight might be higher since the mobile and desktop HTML exists simultaneously in the code, as noted earlier, the actual weight is still far below the two-megabyte threshold all the way up until the 100th percentile.

Given that it takes about two million characters to push the website HTML to two megabytes and that the HTTPArchive data based on actual websites shows that the vast majority of sites are well under Googlebot’s 2 MB limit, it’s safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.

Tame The Bots

Dave Smart of Tame The Bots recently posted that he updated his tool to stop fetching at the two-megabyte limit, so that sites that are extreme outliers can see at what point Googlebot would stop crawling a page.

Smart posted:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to tamethebots.com/tools/fetch-… to cap text based files to 2 MB to simulate this.”

Screenshot Of Tame The Bots Interface

The tool will show what the page will look like to Google if the crawl is limited to two megabytes of HTML. But it doesn’t show whether the tested page exceeds two megabytes, nor does it show how much the web page weighs. For that, there are other tools.

Tools That Check Web Page Size

There are a few tool sites that show HTML size, but here are two that focus on just the web page size. I tested the same page on each tool, and they both showed roughly the same page weight, give or take a few kilobytes.

Toolsaday Web Page Size Checker

The interestingly named Toolsaday web page size checker enables users to test one URL at a time. This specific tool does just the one thing, making it easy to get a quick reading of how much a web page weighs in kilobytes (or more, if the page is in the 100th percentile).

Screenshot Of Toolsaday Test Results

Small SEO Tools Website Page Size Checker

The Small SEO Tools Website Page Size Checker differs from the Toolsaday tool in that Small SEO Tools enables users to test ten URLs at a time.

Not Something To Worry About

The bottom line about the two-megabyte Googlebot crawl limit is that it’s not something the average SEO needs to worry about. It affects only a tiny percentage of outliers. But if it makes you feel better, give one of the above SEO tools a try to reassure yourself or your clients.

Featured Image by Shutterstock/Fathur Kiwon

Making AI Work, MIT Technology Review’s new AI newsletter, is here

For years, our newsroom has explored AI’s limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments.

But how is AI actually being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter.

Sign up for Making AI Work to see weekly case studies exploring tools and tips for AI implementation. The limited-run newsletter will deliver practical, industry-specific guidance on how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work. The goal is to help working professionals more clearly see how AI is actually being used today, and what that looks like in practice—including new challenges it presents. 

You can sign up at any time and you’ll receive seven editions, delivered once per week, until you complete the series. 

Each newsletter begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool. 

Here’s a closer look at what we’ll cover:

  • Week 1: How AI is changing health care 

Explore the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. 

  • Week 2: How AI could power up the nuclear industry 

Dig into an experiment between Google and the nuclear giant Westinghouse to see if AI can help build nuclear reactors more efficiently. 

  • Week 3: How to encourage smarter AI use in the classroom

Visit a private high school in Connecticut and meet a technology coordinator who will get you up to speed on MagicSchool, an AI-powered platform for educators. 

  • Week 4: How small businesses can leverage AI

Hear from an independent tutor on how he’s outsourcing basic administrative tasks to Notion AI. 

  • Week 5: How AI is helping financial firms make better investments

Learn more about the ways financial firms are using large language models like ChatGPT Enterprise to supercharge their research operations. 

  • Week 6: How to use AI yourself 

We’ll share some insights from the staff of MIT Technology Review about how you might use AI tools powered by LLMs in your own life and work.

  • Week 7: 5 ways people are getting AI right

The series ends with an on-demand virtual event featuring expert guests exploring what AI adoptions are working, and why.  

If you’re not quite ready to jump into Making AI Work, then check out Intro to AI, MIT Technology Review’s first AI newsletter mini-course, which serves as a beginner’s guide to artificial intelligence. Readers will learn the basics of what AI is, how it’s used, what the current regulatory landscape looks like, and more. Sign up to receive Intro to AI for free. 

Our hope is that Making AI Work will help you understand how AI can, well, work for you. Sign up for Making AI Work to learn how LLMs are being put to work across industries.