The Download: what Moltbook tells us about AI hype, and the rise and rise of AI therapy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Moltbook was peak AI theater

For a few days recently, the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28, Moltbook went viral in a matter of hours. It was designed as a place where instances of a free, open-source LLM-powered agent known as OpenClaw (formerly ClawdBot, then Moltbot) could come together and do whatever they wanted.

But is Moltbook really a glimpse of the future, as many have claimed? Or something else entirely? Read the full story.

—Will Douglas Heaven

The ascent of the AI therapist

We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots, or from specialized psychology apps like Wysa and Woebot.

Four timely new books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. Read the full story.

—Becky Ferreira

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

Making AI Work, MIT Technology Review’s new AI newsletter, is here

For years, our newsroom has explored AI’s limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments.

But how is AI actually being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter. Read more about it, and sign up here to receive the seven editions straight to your inbox.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is failing to punish polluters
The number of civil lawsuits it’s pursuing has dropped sharply compared with Trump’s first term. (Ars Technica)
+ Rising GDP = greater carbon emissions. But does it have to? (The Guardian)

2 The European Union has warned Meta against blocking rival AI assistants
It’s the latest example of Brussels’ attempts to rein in Big Tech. (Bloomberg $)

3 AI ads took over the Super Bowl
Hyping up chatbots and taking swipes at their competitors. (TechCrunch)
+ They appeared to be trying to win over AI naysayers, too. (WP $)
+ Celebrities were out in force to flog AI wares. (Slate $)

4 China wants to completely dominate the humanoid robot industry
Local governments and banks are only too happy to oblige promising startups. (WSJ $)
+ Why the humanoid workforce is running late. (MIT Technology Review)

5 We’re witnessing the first real crypto crash
Cryptocurrency is now fully part of the financial system, for better or worse. (NY Mag $)
+ Wall Street’s grasp of AI is pretty shaky too. (Semafor)
+ Even traditionally safe markets are looking pretty volatile right now. (Economist $)

6 The man who coined vibe coding has a new fixation 
“Agentic engineering” is the next big thing, apparently. (Insider $)
+ Agentic AI is the talk of the town right now. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

7 AI running app Runna has adjusted its aggressive training plans 🏃‍♂️
Runners had long suspected its suggestions were pushing them towards injury. (WSJ $)

8 San Francisco’s march for billionaires was a flop 
Only around three dozen supporters turned up. (SF Chronicle)
+ Predictably, journalists nearly outnumbered the demonstrators. (TechCrunch)

9 AI is shaking up romance novels ❤
But models still aren’t great at writing sex scenes. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

10 ChatGPT won’t be replacing human stylists any time soon
Its menswear suggestions are more manosphere influencer than suave gentleman. (GQ)

Quote of the day

“There is no Plan B, because that assumes you will fail. We’re going to do the start-up thing until we die.”

—William Alexander, an ambitious 21-year-old AI worker, explains his and his cohort’s attitude toward trying to make it big in the highly competitive industry to the New York Times.

One more thing

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Dark showering, anyone?
+ Chef Yujia Hu is renowned for his shoe-shaped sushi designs.
+ Meanwhile, in the depths of the South Atlantic Ocean: a giant phantom jelly has been spotted.
+ I have nothing but respect for this X account dedicated to documenting rats and mice in movies and TV 🐀🐁

Why the Moltbook frenzy was like Pokémon

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Lots of influential people in tech last week were describing Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans who created them (one person used the platform to help him negotiate a deal on a new car). Sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?

The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon.

Back in 2014, someone set up a game of Pokémon in which the main character could be controlled by anyone on the internet via the streaming platform Twitch. Playing was as clunky as it sounds, but it was incredibly popular: at one point, a million people were playing the game at the same time.

“It was yet another weird online social experiment that got picked up by the mainstream media: What did this mean for the future?” Will says. “Not a lot, it turned out.”

The frenzy over Moltbook struck Will in a similar way, and it turned out that one of the sources he spoke to had been thinking about Pokémon too. Jason Schloetzer, at the Georgetown Psaros Center for Financial Markets and Policy, saw the whole thing as a sort of Pokémon battle for AI enthusiasts, in which they created AI agents and deployed them to interact with other agents. In this light, the news that many AI agents were actually being instructed by people to say certain things that made them sound sentient or intelligent makes a whole lot more sense.

“It’s basically a spectator sport,” he told Will, “but for language models.”

Will wrote an excellent piece about why Moltbook was not the glimpse into the future that it was said to be. Even if you are excited about a future of agentic AI, he points out, there are some key pieces that Moltbook made clear are still missing. It was a forum of chaos, but a genuinely helpful hive mind would require more coordination, shared objectives, and shared memory.

“More than anything else, I think Moltbook was the internet having fun,” Will says. “The biggest question that now leaves me with is: How far will people push AI just for the laughs?”

Read the whole story.

Traffic Impact of Google Discover Update

Google Discover has become a reliable traffic source for some publications. Last week, Google launched a core update to Discover in the U.S., with the global rollout coming.

Google’s Search Central blog has included “Get on Discover” guidelines since 2019, explaining its content quality requirements and traffic recovery strategies. Google revised the guidelines last week, alongside the core update.

Some requirements have not changed:

  • Titles and headlines must clearly “capture the essence of the content.”
  • Include “compelling, high-quality images,” especially those 1,200 pixels wide.
  • Address “current interests [that] tells a story well, or provides unique insights.”

Yet two requirements — clickbait avoidance and page experience — are new.

New Guidelines

Avoid clickbait

Previous versions of the guidelines warned against “misleading or exaggerated details in preview content.” The revision moves this recommendation to the top, presumably to emphasize its importance as reflected in the core update.

The guidelines state that “clickbait” can prevent would-be readers from understanding the content and manipulate them into clicking a link.

The guidelines separately warn publishers against using “sensationalism tactics… by catering to morbid curiosity, titillation, or outrage.”

Page experience

“Provide a great page experience” is new, although it’s in keeping with Google’s traditional search algorithm, which rewards sites with strong user engagement.

Google collects page experience metrics from its Chrome browser and retains them only for high-traffic pages. Search Console shows no Core Web Vitals data for sites with little traffic.

Sites that have lost 50% or more of their Discover traffic should audit the user experience:

  • In Search Console, look for URLs marked “poor” in the Core Web Vitals report.
  • Evaluate how those pages load, especially on mobile devices. The headings and body text should load first, allowing users to start reading immediately.
  • Look for elements, such as ads or pop-ups, that block the content.
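
Parts of that audit can be scripted. The sketch below classifies pages against Google's published Core Web Vitals thresholds (the "good" and "poor" boundaries for LCP, INP, and CLS); the URLs and metric values are hypothetical stand-ins for data you'd export from the Core Web Vitals report.

```python
# Classify URLs against Google's published Core Web Vitals thresholds:
# LCP good <= 2.5 s / poor > 4 s, INP good <= 200 ms / poor > 500 ms,
# CLS good <= 0.10 / poor > 0.25. The page data below is hypothetical.

THRESHOLDS = {
    # metric: (good_max, poor_min)
    "lcp_ms": (2500, 4000),
    "inp_ms": (200, 500),
    "cls": (0.10, 0.25),
}

def rate(metric: str, value: float) -> str:
    good_max, poor_min = THRESHOLDS[metric]
    if value <= good_max:
        return "good"
    if value <= poor_min:
        return "needs improvement"
    return "poor"

def page_rating(metrics: dict) -> str:
    # A page is only as good as its worst metric.
    order = ["good", "needs improvement", "poor"]
    return max((rate(m, v) for m, v in metrics.items()), key=order.index)

pages = {
    "/guide/fast": {"lcp_ms": 1800, "inp_ms": 120, "cls": 0.05},
    "/guide/slow": {"lcp_ms": 5200, "inp_ms": 250, "cls": 0.02},
}

for url, metrics in pages.items():
    print(url, "->", page_rating(metrics))
```

Run against a real export, the loop surfaces the URLs worth prioritizing: anything rated "poor" on mobile is a candidate for the loading and layout checks above.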

Traffic Impact

The revised guidelines do not address “topic authority,” yet Google’s announcement of Discover’s core update does:

“Since many sites demonstrate deep knowledge across a wide range of subjects, our systems are designed to identify expertise on a topic-by-topic basis.”

The focus on topical expertise suggests the update will elevate niche, authoritative sites.

Finally, the announcement states that Discover will show more local and personalized content.

Nonetheless, most ecommerce blogs have modest Discover traffic and will therefore experience little (if any) impact from the core update. Still, keep an eye on the Discover section in Search Console; switch to “weekly” stats for a current overview.


Bing Webmaster Tools Adds AI Citation Performance Data via @sejournal, @MattGSouthern

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving visibility into how content gets cited across Copilot and AI-generated answers in Bing.

The feature, now in public preview, shows citation counts, page-level activity, and trends over time. It covers AI experiences across Copilot, AI summaries in Bing, and select partner integrations.

Microsoft announced the feature on the Bing Webmaster Blog.

What’s New

The AI Performance dashboard provides four core metrics.

Total citations tracks how often your content appears as a source in AI-generated answers during a selected time period. Average cited pages shows the daily average of unique URLs from your site referenced across AI answers.

Page-level citation activity breaks down which specific URLs get cited most often. This lets you see which pages AI systems reference and how that activity changes over time.

The dashboard also introduces “grounding queries,” which Microsoft describes as the key phrases AI used when retrieving content for answers. The company notes this data represents a sample rather than complete citation activity.

A timeline view shows how citation patterns change over time across supported AI experiences.

Why This Matters

This is the first time Bing Webmaster Tools has shown how often content is cited in generative answers, including which URLs are referenced and how citation activity changes over time.

Google includes AI Overviews and AI Mode in Search Console’s overall Performance reporting, but it doesn’t offer a dedicated AI Overviews/AI Mode report or citation-style URL counts. AI Overviews also occupy a single position, with all links assigned that same position.

Bing’s dashboard goes further. It tracks which pages get cited, how often, and what phrases triggered the citation. That gives you data to work with instead of guesses.

Looking Ahead

AI Performance is available now in Bing Webmaster Tools as a public preview. Microsoft said it will continue refining metrics as more data is processed.

Bing has been building toward this for a while. The platform consolidated web search and chat metrics into a single dashboard and has added comparison features and content control tools since then.


Featured Image: Mijansk786/Shutterstock

OpenAI Begins Testing Ads In ChatGPT For Free And Go Users via @sejournal, @MattGSouthern

OpenAI is testing ads inside ChatGPT, bringing sponsored content to the product for the first time.

The test is live for logged-in adult users in the U.S. on the free and Go subscription tiers. Subscribers on the Plus, Pro, Business, Enterprise, and Education tiers won’t see ads.

OpenAI announced the launch with a brief blog post confirming that the principles it outlined in January are now in effect.

OpenAI’s post also adds Education to the list of ad-free tiers, which wasn’t included in the company’s initial plans.

How The Ads Work

Ads appear at the bottom of ChatGPT responses, visually separated from the answer and labeled as sponsored.

OpenAI says it selects ads by matching advertiser submissions with the topic of your conversation, your past chats, and past interactions with ads. If someone asks about recipes, they might see an ad for a meal kit or grocery delivery service.

Advertisers don’t see users’ conversations or personal details. They receive only aggregate performance data like views and clicks.

Users can dismiss ads, see why a specific ad appeared, turn off personalization, or clear all ad-related data. OpenAI also confirmed it won’t show ads in conversations about health, mental health, or politics, and won’t serve them to accounts identified as under 18.

Free users who don’t want ads have another option. OpenAI says you can opt out of ads in the Free tier in exchange for fewer daily free messages. Go users can avoid ads by upgrading to Plus or Pro.

The Path To Today

OpenAI first announced plans to test ads on January 16, alongside the U.S. launch of ChatGPT Go at $8 per month. The company laid out five principles. They cover mission alignment, answer independence, conversation privacy, choice and control, and long-term value.

The January post was careful to frame ads as supporting access rather than driving revenue. Altman wrote on X at the time:

“It is clear to us that a lot of people want to use a lot of AI and don’t want to pay, so we are hopeful a business model like this can work.”

That framing sits alongside OpenAI’s financial reality. Altman said in November that the company is considering infrastructure commitments totaling about $1.4 trillion over eight years. He also said OpenAI expects to end 2025 with an annualized revenue run rate above $20 billion. A source told CNBC that OpenAI expects ads to account for less than half of its revenue long term.

OpenAI has confirmed a $200,000 minimum commitment for early ChatGPT ads, Adweek reported. Digiday reported media buyers were quoted about $60 per 1,000 views for sponsored placements during the initial U.S. test.

Altman’s Evolving Position

The launch represents a notable turn from Altman’s earlier public statements on advertising.

In an October 2024 fireside chat at Harvard, Altman said he “hates” ads and called the idea of combining ads with AI “uniquely unsettling,” as CNN reported. He contrasted ChatGPT’s user-aligned model with Google’s ad-driven search, saying Google’s results depended on “doing badly for the user.”

By November 2025, Altman’s position had softened. He told an interviewer he wasn’t “totally against” ads but said they would “take a lot of care to get right.” He drew a line between pay-to-rank advertising, which he said would be “catastrophic,” and transaction fees or contextual placement that doesn’t alter recommendations.

The test rolling out today follows the contextual model Altman described. Ads sit below responses and don’t affect what ChatGPT recommends. Whether that distinction holds as ad revenue grows will be the longer-term question.

Where Competitors Stand

The timing puts OpenAI’s decision in sharp contrast with its two closest rivals.

Anthropic ran a Super Bowl campaign last week centered on the tagline “Ads are coming to AI. But not to Claude.” The spots showed fictional chatbots interrupting personal conversations with sponsored pitches.

Altman called the campaign “clearly dishonest,” writing on X that OpenAI “would obviously never run ads in the way Anthropic depicts them.”

Google has also kept distance from chatbot ads. DeepMind CEO Demis Hassabis said at Davos in January that Google has no current plans for ads in Gemini, calling himself “a little bit surprised” that OpenAI moved so early. He drew a distinction between assistants, where trust is personal, and search, where Google already shows ads in AI Overviews.

That was the second time in two months that Google leadership publicly denied plans for Gemini advertising. In December, Google Ads VP Dan Taylor disputed an Adweek report claiming advertisers were told to expect Gemini ads in 2026.

The three companies are now on distinctly different paths. OpenAI is testing conversational ads at scale. Anthropic is marketing its refusal to run them. Google is running ads in AI Overviews but holding off on its standalone assistant.

Why This Matters

OpenAI says ChatGPT is used by hundreds of millions of people. CNBC reported that Altman told employees ChatGPT has about 800 million weekly users. That creates pressure to find revenue beyond subscriptions, and advertising is the proven model for monetizing free users across consumer tech.

For practitioners, today’s launch opens a new ad channel for AI platform monetization. The targeting mechanism uses conversation context rather than search keywords, which creates a different kind of intent signal. Someone asking ChatGPT for help planning a trip is further along in the decision process than someone typing a search query.

The restrictions are also worth watching. No ads near health, politics, or mental health topics means the inventory is narrower than traditional search. Combined with reported $60 CPMs and a $200K minimum, this starts as a premium play for a limited set of advertisers rather than a self-serve marketplace.

Looking Ahead

OpenAI described today’s rollout as a test to “learn, listen, and make sure we get the experience right.” No timeline was given for expanding beyond the U.S. or beyond free and Go tiers.

Separately, CNBC reported that Altman told employees in an internal Slack message that ChatGPT is “back to exceeding 10% monthly growth” and that an “updated Chat model” is expected this week.

How users respond to ads in their ChatGPT conversations will determine whether this test scales or gets pulled back. It will also test whether the distinction Altman drew in November between trust-destroying ads and acceptable contextual ones holds up in practice.

Why Your SEO KPIs Are Failing Your Business (And How To Fix Them) via @sejournal, @bngsrc

Most SEO teams believe they need more data to report success, but what they actually have, at least from what I keep seeing, is metric debt: the accumulated cost of optimizing for key performance indicators that no longer reflect how growth happens.

The environment has changed, mostly because economic pressure has shifted expectations. At the same time, AI search, zero-click results, and privacy limits have all weakened the connection between traditional SEO KPIs and business outcomes.

Yet, it’s not unusual to see teams measuring success in ways that reflect how SEO used to work rather than how it works today. This is exactly the point where I think we need to rethink how we’re measuring things.

The Hidden Cost Of Vanity Metrics

Rankings, clicks, visibility: none of these is wrong. They’re just no longer enough on their own to reliably predict business success.

In an environment where we talk a lot about AI-driven SERPs, zero-click searches, and budget scrutiny, these metrics are incomplete at best and misleading at worst.

But a considerable number of SEOs still spend most of their time chasing more traffic, more keywords, more mentions, and I get why: taking ownership of new, unfamiliar metrics is hard.

Meanwhile, conversion quality, intent alignment, and revenue impact now need more attention than ever. However, they’re harder to explain and harder to own.

That gap creates a quiet opportunity cost. Not immediately, and not in reports, but later, when SEO starts struggling to justify its place in the growth conversation.

At this point, I think this is pretty clear: good SEO teams don’t report more metrics. They explain better.

And to explain better, we need to rethink how we can show SEO value is created and how it’s measured. This isn’t a hot take anymore.

As Yordan Dimitrov pointed out, SEO isn’t dying, but discovery is changing fast and shifting user behavior. Early-stage users increasingly get what they need directly inside search experiences.

That means clicks, specifically, are no longer a reliable proxy for value. So, if we keep optimizing and reporting as if they are, we’re creating a picture that no longer matches reality.

But I’m not saying we should replace every SEO metric overnight. What we report does need to reflect how growth decisions are made.

Reframing SEO KPIs Around Real Business Value

If everything you track sits at the top of the funnel, you don’t have a measurement strategy; you have a visibility tracker. A simple way out is to separate signals from outcomes:

Operational Signals

These tell you if your SEO efforts can function at all.

  • Crawlability and indexation coverage.
  • Core Web Vitals performance.
  • Content velocity on priority areas.
  • Share of voice by intent cluster.

Necessary. Not sufficient.

Engagement Signals

These tell you whether users actually care.

  • Engaged sessions (GA4 counts a session as engaged if it lasts 10+ seconds, includes a conversion event, or has two or more page views).
  • Scroll depth.
  • Return visits.
  • Micro-conversions like downloads or feature usage.
  • Organic conversions.

Still not the end goal, but much closer.
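
GA4's engaged-session rule is simple enough to reproduce when you're sanity-checking exported session data. A minimal sketch, with hypothetical session records:

```python
# GA4 counts a session as "engaged" if it lasted 10+ seconds,
# included a conversion (key) event, or had 2+ page views.
# The session records below are invented for illustration.

def is_engaged(duration_s: float, conversions: int, pageviews: int) -> bool:
    return duration_s >= 10 or conversions > 0 or pageviews >= 2

sessions = [
    {"duration_s": 4,  "conversions": 0, "pageviews": 1},  # bounce
    {"duration_s": 45, "conversions": 0, "pageviews": 1},  # engaged: time
    {"duration_s": 6,  "conversions": 1, "pageviews": 1},  # engaged: conversion
    {"duration_s": 3,  "conversions": 0, "pageviews": 3},  # engaged: pageviews
]

engaged = sum(is_engaged(**s) for s in sessions)
print(f"engagement rate: {engaged / len(sessions):.0%}")
```

The point of recomputing it yourself is trust: when stakeholders ask why "engaged sessions" replaced raw sessions in the report, you can show exactly which visits the metric excludes.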

Business Outcomes

This is where people usually get nervous.

  • Pipeline influence from organic (opportunities with organic touchpoints).
  • Customer Acquisition Cost (CAC) for organic versus paid channels.
  • Customer Lifetime Value (LTV) of SEO-acquired customers.
  • Retention rates of organic users.

If none of these are visible, SEO efforts are always going to be questioned.
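
Once channel costs, customer counts, and lifetime value sit in the same table, these outcome metrics are straightforward ratios. A sketch with invented placeholder figures; the costs, customer counts, and LTV numbers below are not benchmarks:

```python
# CAC = channel cost / customers acquired. Paired with lifetime value
# (LTV), it shows which channel acquires customers more efficiently.
# All figures below are invented placeholders.

channels = {
    # channel: (monthly_cost, new_customers, avg_lifetime_value)
    "organic": (12_000, 80, 900),   # e.g., SEO salaries + content spend
    "paid":    (30_000, 100, 700),  # e.g., ad spend + management fees
}

for name, (cost, customers, ltv) in channels.items():
    cac = cost / customers
    print(f"{name}: CAC ${cac:,.0f}, LTV:CAC {ltv / cac:.1f}x")
```

Even a rough comparison like this reframes the conversation: instead of "organic drove X sessions," the report says what a customer costs to acquire through each channel and what they're worth afterward.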

Most Teams Need A Few Months To Fix This Approach

First, you audit what you’re already reporting. Most of it will sit in operational metrics, and that’s normal.

Then, you should map pages to funnel stages. It doesn’t have to be perfect, but it should be honest.

Then you can add one or two outcome-level metrics that make sense for your model. For example:

  • Demo requests per organic session (for B2B).
  • Revenue per organic visitor (for ecommerce).

If organic conversion rates are far below benchmarks (for example, industry benchmarks place B2B ecommerce conversion rates at 1.8%), that’s not a “traffic problem.” It’s a mismatch between intent, content, and expectations.

Over time, you can rebalance reporting. I recommend not deleting old metrics immediately; they will let you show people how they correlate (or don’t) with outcomes. That’s how trust is built.

In practice, most teams don’t jump from rankings to revenue overnight. Measurement maturity tends to move in layers, with each step making the next one easier to defend.

The Human Side Of Metric Evolution

Changing measurement systems is more psychological than most teams expect. People resist KPI changes because it feels safe to own the same old things. And, to be honest, revenue attribution is messier than rankings, which is why it creates resistance and why people avoid it.

The way around this isn’t better dashboards. It’s framing. Instead of saying “we’re changing KPIs,” you can think and say: “For the next eight weeks, we’re testing if organic sessions on these pages generate demo requests.”

The goal isn’t to drown stakeholders in methodology, but to give just enough context to replace metric comfort with experimental clarity, so they understand what’s being tested, why it matters, and how success will be judged.

So, basically, make it an experiment, and define success upfront. Then, share learnings even when results are uncomfortable.

Future-Proofing Your Measurement Strategy

We don’t need complex stacks. We only need cleaner thinking. And we need to revisit KPIs regularly to remove ones that no longer help, add new ones when priorities change, and document why decisions were made.

First, you can start by explaining that while rankings were reliable growth proxies in 2020, AI search and zero-click results have broken that connection. Use visual stories comparing high-traffic/low-conversion paths against low-traffic/high-conversion alternatives to illustrate why KPI evolution matters.

For most mid-market teams, a pragmatic measurement stack is sufficient: GA4 or an alternative, a CRM with clean attribution fields, a visualization layer like Looker Studio, and a core SEO platform. Complexity should be added only as measurement maturity increases.

Finally, we should treat measurement as a living system. For this, I recommend running quarterly KPI reviews to retire unused metrics, adding new ones aligned with evolving priorities, and documenting hypotheses behind major initiatives for later validation.

When measurement evolves continuously, SEO strategy can evolve alongside search itself.

If You Can’t Measure Value, You Can’t Defend SEO

Anthony Barone puts this well: When teams rely on surface-level metrics, they lose a stable way to judge progress. SEO then becomes easy to deprioritize every time a new platform or AI narrative shows up.

Value-driven metrics change the conversation. SEO stops being “traffic work” and starts being part of growth discussions.

The SEOs who will do well aren’t the ones with the cleanest ranking reports. They’re the ones who can calmly explain how organic search contributes to real business outcomes, even when the numbers aren’t perfect.

That starts with questioning every metric you report and being honest about which ones still earn their place.



Featured Image: Natalya Kosarevich/Shutterstock

PPC Budget Rebalancing: How AI Changes Where Marketing Budgets Are Spent via @sejournal, @LisaRocksSEM

In paid media, many advertisers default to budgeting by ad platform (a percentage to Google Ads, a percentage to LinkedIn Ads, and so on), largely out of habit. AI now presents marketing leaders with new ways to decide where to spend their paid media dollars. Instead of allocating spend based on impression volume or historical channel averages, marketers can explore PPC budget rebalancing around buyer intent signals and conversion probability (the likelihood that a specific ad interaction, like a click, will result in a valuable action such as a conversion).

There are many ways to approach budget strategy in paid media. The model in this article is one worth exploring because it reflects how AI technology in the ad platforms evaluates users across the customer journey.

A Different Approach From Channel-Based Budgeting

For many years, PPC budgeting followed the same basic playbook. Set a percentage for Google Search, another for Meta, and spread what’s left over across video or display. It is simple, but it forces spend to stay locked inside channels even when user behavior indicates something different.

This can create ongoing attribution battles, with teams debating whether the Facebook ad or the final Google search drove the conversion, and everyone focusing on last-click results instead of understanding the full journey.

Platform AI has changed that. Today, machine learning blends signals from search, video, maps, feed environments, and content discovery paths. Models update predictions continuously using large-scale intent and behavioral signals.

Buyers’ journeys are omnichannel: searching, scrolling, comparing, and exploring at the same time. When budgets stay fixed inside channels, money can’t follow purchase intent. That means overspending on channels that only appear at the last click and underspending where users are ready to take action. The new opportunity is to shift from budgeting by channel performance to budgeting by conversion probability. AI helps make this possible by interpreting meaning, context, and patterns that humans can’t see at scale.
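
As a toy illustration of budgeting by conversion probability rather than by channel habit: weight each channel by predicted conversion probability times average conversion value, then split spend in proportion. The channel names, probabilities, and values below are made up, and a real model would also account for cost per click and diminishing returns.

```python
# Split a budget in proportion to expected value per click:
# p(conversion) * avg value per conversion. All numbers here
# are invented for illustration only.

budget = 10_000.0

channels = {
    # channel: (predicted conversion probability, avg value per conversion)
    "search":  (0.050, 120.0),
    "social":  (0.015, 150.0),
    "display": (0.004, 200.0),
}

# Expected value per click for each channel.
ev = {name: p * value for name, (p, value) in channels.items()}
total = sum(ev.values())

# Allocate the budget proportionally to expected value.
allocation = {name: budget * share / total for name, share in ev.items()}
for name, dollars in allocation.items():
    print(f"{name}: ${dollars:,.0f}")
```

The design choice worth noting is that the split is recomputed whenever the predicted probabilities change, so spend follows intent signals rather than a fixed channel percentage.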

Many expert PPC guides (including my own recommendations) support structuring budgets by funnel stage or campaign objective rather than rigid channel splits, because it more accurately reflects how people move from awareness to intent.

This is echoed in articles like “Budget Allocation: When To Choose Google Ads vs. Meta Ads” and “From Launch to Scale: PPC Budget Strategies for All Campaign Stages,” which emphasize aligning spend to the campaign goal, not the platform it runs on. These guides also agree on something else: Flexibility is essential, because performance and user behavior shift over time.

With that foundation in place, this article introduces a new evolution of that idea, moving from funnel-based budgeting to signal-based budgeting. Read on to learn how this model works and why it’s built for the way AI interprets user intent today.

How Signals Move Inside Platforms But Not Across Them

It’s important for CMOs to understand how signals work inside major platforms. Google and Meta use unified prediction engines. For example, signals from Search, YouTube, Maps, and Discover all feed into one Google system. This is why these platforms can react to user behavior so quickly.

However, platforms do not directly share user-level intent signals with one another. Google doesn’t send search intent to Meta. Meta doesn’t pass engagement back to Google. Each platform operates its own machine learning environment.

The only connection across platforms is user behavior. A user might watch a review on YouTube, check options on Instagram, and then return to Google to search for pricing. Each platform reacts to what happens inside its own ecosystem.

This distinction matters. Budget decisions should reflect how users move across the journey, not how systems communicate. Platforms don’t exchange signals. Users carry their intent with them.

The Three Signal Layers That Guide AI-Driven Budget Allocation

I consistently see platform AI systems respond to three core signal groups. These signals match how machine learning models evaluate purchase intent and likelihood to convert.

1. Intent Signals

These are strong signs that someone is ready to take action. Examples include refined search queries, repeat visits, deeper product exploration, commercial browsing patterns, and lookalike signals that match buyers who tend to convert. For example, Microsoft Ads’ AI uses “audience intelligence signals” combined with data the advertiser provides (e.g., ads, landing pages) to automatically find users “more likely to convert.”

When these actions are measured together, platform AI prioritizes ad delivery toward users who are most likely to convert.

2. Discovery Signals

Discovery is the early stage of consideration. Users engage with content that builds awareness, helps them compare options, or clarifies the problem they want to solve. Google’s published insights show that buyers now explore multiple media types before taking action.

These discovery signals align with the “streaming + scrolling + searching + shopping” behaviors that Google identifies.

Discovery signals can show up earlier than marketers expect. Budgeting for discovery matters because these signals can influence purchase intent later.

3. Trust Signals

Trust signals help on both the ad-serving end and the conversion-closing end. They include reviews, product walk-throughs, video demos, social proof, and expert content. These cues help platforms predict whether a user will favor a certain brand once they develop purchase intent.

Good trust content (reviews, transparent information, credible claims) delivers a better user experience, which can lift conversion rates compared to when that content is absent.

When trust is strong, conversion outcomes tend to be more consistent because Google Ads evaluates landing page experience, store ratings, and other quality signals as part of its automated bidding and delivery systems. Pages that demonstrate stronger user experience and conversion performance are more likely to earn increased ad delivery under conversion-focused bidding models, which favor high-converting experiences.

Together, these three layers can form a modern structure for budget allocation.

How CMOs Can Apply This Model Right Now

Rebalancing for intent starts with one shift: Build budgets around signals instead of channels. Group your existing campaigns into the three buckets: intent, discovery, and trust. This structure lets your team see where each dollar is driving purchase intent or signal quality.

Once campaigns are mapped to a signal, you can assign budget amounts that reflect your goals. Intent gets the largest share because it drives revenue. Discovery fuels learning and awareness. Trust earns its own allocation because it lifts future conversion performance.

This process is easier than it sounds.

Step one: Assign each campaign to the signal it produces: intent, discovery, or trust. This creates a signal map across all platforms.

Step two: Set your budget amounts for each signal bucket. This replaces the traditional channel-based approach.

Step three: Distribute the dollars inside each bucket to the campaigns that support that signal best. This keeps allocation strategic and gives each campaign a clear role.
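The three steps above can be sketched as a short script. Everything here is a hypothetical illustration: the campaign names, signal tags, and bucket shares are assumptions for the sketch, not data from any real account, and the even split inside each bucket stands in for whatever performance-based weighting your team would actually use.

```python
# Hypothetical sketch of the three-step signal-based budgeting process.
# Campaign names, signal tags, and bucket shares are illustrative
# assumptions, not data from any real account.

# Step one: assign each campaign to the signal it produces.
campaigns = {
    "google_search_brand":  "intent",
    "meta_retargeting":     "intent",
    "meta_prospecting":     "discovery",
    "youtube_educational":  "discovery",
    "youtube_testimonials": "trust",
}

# Step two: set budget amounts for each signal bucket.
total_budget = 10_000
bucket_share = {"intent": 0.60, "discovery": 0.30, "trust": 0.10}

# Step three: distribute each bucket's dollars across its campaigns.
# (An even split keeps the sketch simple; a real team would weight
# by each campaign's performance.)
allocation = {}
for bucket, share in bucket_share.items():
    members = [c for c, s in campaigns.items() if s == bucket]
    per_campaign = total_budget * share / len(members)
    for c in members:
        allocation[c] = round(per_campaign, 2)

for campaign, dollars in sorted(allocation.items()):
    print(f"{campaign:22s} ${dollars:,.2f}")
```

The useful property of this structure is that the signal map and the bucket shares are separate decisions: you can rebalance the shares each month without re-tagging campaigns, and re-tag a campaign without touching the shares.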

Example To Show How This Can Work

A CMO with a $10,000 total budget might allocate:

Intent
$6,000 across Google Search and Meta retargeting, where purchase intent is strongest. Higher intent can lead to more conversions, so platform AI systems allocate impressions more efficiently.

Discovery
$3,000 across Meta prospecting and YouTube educational content to increase learning signals. Video views, engagement, and content consumption teach the algorithm who is interested.

Trust
$1,000 toward YouTube testimonial content to strengthen brand credibility and improve lower-funnel efficiency. Even a small trust investment can improve performance across all channels by strengthening users’ confidence and readiness to buy.

The allocation starts with the signal, not the channel. Platforms receive budget because they support that signal, not because of historical patterns.

Why It Can Be Harder To Manage

Signal-based budgeting challenges familiar habits. Platforms don’t organize campaigns this way, so teams must learn to read performance differently.

Instead of relying only on last-click ROAS, teams have to watch earlier indicators such as branded search growth, engaged video views, returning visitors, and assisted conversions. Reporting also becomes more complex because trust and discovery show up differently across Google, Microsoft, and social platforms. This means teams must compare assisted conversions, view-through impact, and conversion lag patterns rather than relying on a single conversion report.

Why It Can Be More Profitable

The complexity can pay off. Platform AI systems make allocation decisions based on probability. When your budget aligns with the signals AI values most, performance improves across the customer journey.

Profit can increase because:

  • Intent dollars focus on users most likely to convert.
  • Discovery dollars generate new learning signals, feeding prediction accuracy.
  • Trust dollars raise future conversion likelihood and reduce lower funnel costs.
  • Spend shifts toward the strongest outcomes.

Teams that adopt this model could see stronger performance and more conversions without increasing total budget.

A New Way To Think About PPC Budget Allocation

Here are the core takeaways for CMOs:

  • AI-driven budgeting can work best when spend follows purchase intent, not channels.
  • Grouping campaigns by intent, discovery, and trust signals gives you a clearer view of what’s driving revenue and what’s feeding future performance.
  • A signal-based budget improves lower-funnel efficiency, builds brand awareness, and accelerates learning within the existing total spend.
  • This model can help teams stay aligned with how users move and how machine learning predicts conversions.

The real advantage is efficiency. When the budget moves with user signals, you don’t need more budget to see stronger results. You need a model that lets the budget follow the people most likely to act.

As platform AI continues to evolve, the leaders restructuring their PPC budgets around intent signals will have an edge. This framework gives you a repeatable way to stay competitive and capture more value from every dollar invested.


7 Insights From Washington Post’s Strategy To Win Back Traffic via @sejournal, @martinibuster

The Washington Post’s recent announcement of staffing cuts is a story with heroes, villains, and victims, but buried beneath the headlines is the reality of a big brand publisher confronting the same changes with Google Search that SEOs, publishers, and ecommerce stores are struggling with. What follows are insights into their strategy to claw back traffic and income, which could be useful for anyone seeking to stabilize traffic and grow.

Disclaimer

The Washington Post is proposing the following strategies in response to steep drops in search traffic, the rise of multi-modal content consumption, and many other factors that are fragmenting online audiences. The strategies have yet to be proven.

The value lies in analyzing what they are doing and understanding if there are any useful ideas for others.

Problem That Is Being Solved

The reasons given for the announced changes are similar to what SEOs, online stores, and publishers are going through right now because of the decline of search and the hyper-diversification of sources of information.

The memo explains:

“Platforms like Search that shaped the previous era of digital news, and which once helped The Post thrive, are in serious decline. Our organic search has fallen by nearly half in the last three years.

And we are still in the early days of AI-generated content, which is drastically reshaping user experiences and expectations.”

Those problems are the exact same ones affecting virtually all online businesses. This makes The Washington Post’s solution of interest to everyone beyond just news sites.

Problems Specific To The Washington Post

Recent reporting on The Washington Post has tended to frame the changes narrowly in the context of politics, concerns about the concentration of wealth, and the impact on coverage of sports, international news, and the performing arts, in addition to the hundreds of staff and reporters who lost their jobs.

The job cuts in particular are a highly specific and highly controversial solution applied by The Washington Post. One could argue that cutting some of the lower-performing topics removes the very things that differentiate the website. As you will see next, Executive Editor Matt Murray justifies the cuts as listening to readers’ signals.

Challenges Affecting Everyone

If you zoom out, there is a larger pattern of how many organizations are struggling to understand where the audience has gone and how best to bring them back.

Shared Industry Challenges

  • Changes in content consumption habits
  • Decline of search
  • Rise of the creator economy
  • Growth of podcasts and video shows
  • Social media competing for audience attention
  • Rise of AI search and chat

A recent podcast interview with Matt Murray, executive editor of The Washington Post, revealed a years-long struggle to restructure the organization’s workflow into one that:

  • Was responsive to audience signals
  • Could react in real time instead of the rigid print-based news schedule
  • Explored emerging content formats so as to evolve alongside readers
  • Produced content that is perceived as indispensable

The issues affecting The Washington Post are similar to those affecting everyone else, from recipe bloggers to big-brand review sites. A key point Murray made was that the changes were driven by audience signals.

Matt Murray said the following about reader signals:

“Readers in today’s world tell you what they want and what they don’t want. They have more power. …And we weren’t picking up enough of the reader signals.”

Then a little later on he again emphasized the importance of understanding reader signals:

“…we are living in a different kind of a world that is a data reader centric world. Readers send us signals on what they want. We have to meet them more where they are. That is going to drive a lot of our success.”

Whether listening to audience signals justifies cutting staff or ends up removing the things that differentiate The Washington Post remains to be seen.

For example, I used to subscribe to the print edition of The New Yorker for the articles, not for the restaurant or theater reviews, yet those reviews were still of interest to me because I liked to keep track of trends in live theater and dining. The New Yorker cartoons rarely had anything to do with the article topics, and yet they were a value-add. Would something like that show up in audience signals?

Build A Base Then Adapt

The memo paints what they’re doing as a foundation for building a strategy that is still evolving, not as a proven strategy. In my opinion, that reflects the uncertainty introduced by the rapid decline of classic search and the knowledge that there are no proven strategies.

That uncertainty makes it more interesting to examine what a big brand organization like The Washington Post is doing to create a base strategy to start from and adapt it based on outcomes. That, in itself, is a strategy for coping with a lack of proven tactics.

Three concrete goals they are focusing on are:

  1. Attract readers
  2. Create content that leads to subscriptions
  3. Increase engagement

They write:

“From this foundation, we aim to build on what is working, and grow with discipline and intent, to experiment, to measure and deepen what resonates with customers.”

In the podcast interview, Murray also described the stability of a foundation as a way to nurture growth, explaining that it creates the conditions for talent to do its best work. He explains that building the foundation gives the staff the space to focus on things that work.

He explained:

“One of the reasons I wanted to get to stability, as I want room for that talent to thrive and flourish.

I also want us to develop it in a more modern multi-modal way with those that we’ve been able to do.”

A Path To Becoming Indispensable

The Washington Post memo offered insights into its strategy, stating that the brand must become indispensable to readers and naming three criteria that articles must satisfy.

According to the memo:

“We can’t be everything to everyone. But we must be indispensable where we compete. That means continually asking why a story matters, who it serves and how it gives people a clearer understanding of the world and an advantage in navigating it.”

Three Criteria For Content

  1. Content must matter to site visitors.
  2. Content must have an identifiable audience.
  3. Content must provide understanding and also be applicable (useful).

Content Must Matter
Regardless of whether the content is about a product, a service, or informational, the Washington Post’s strategy states that content must strongly fulfill a specific need. For SEOs, creators, ecommerce stores, and informational content publishers, “mattering” is one of the pillars that support making a business indispensable to a site visitor and provides an advantage.

Identifiable Audience
Information doesn’t exist in a vacuum, but traditional SEO has strongly focused on keyword volume and keyword relevance, essentially treating information as existing in a space devoid of human relevance. Keyword relevance is not the same as human relevance. Keyword relevance is relevance to a keyword phrase, not relevance to a human.

This point matters because AI chat and search destroy the concept of keywords: people are no longer typing keyword phrases but are instead engaging in goal-oriented discussions.

When SEOs talk about keyword relevance, they are talking about relevance to an algorithm. Put another way, they are essentially defining the audience as an algorithm.

So, point two is really about stepping back and asking, “Why does a person need this information?”

Provide Understanding And Be Applicable
Point three states that it’s not enough for content to provide an understanding of what happened (facts). It requires that the information must make the world around the reader navigable (application of the facts).

This is perhaps the most interesting pillar of the strategy because it acknowledges that information vomit is not enough. It must be information that is utilitarian. Utilitarian in this context means that content must have some practical use.

In my opinion, an example of this principle in the context of an ecommerce site is product data. The other day I was on a fishing lure site, and the site assumed that the consumer understood how each lure is supposed to be used. It just had the name of the lure and a photo. In every case, the name of the lure was abstract and gave no indication of how the lure was to be used, under what circumstances, and what tactic it was for.

Another example is a clothing site where clothing is described as small, medium, large, and extra large, which are subjective measurements because every retailer defines small and large differently. One brand I shop at consistently labels objectively small-sized jackets as medium. Fortunately, that same retailer also provides chest, shoulder, and length measurements, which enable a user to understand exactly whether that clothing fits.

I think that’s part of what the Washington Post memo means when it says that the information should provide understanding but also be applicable. It’s that last part that makes the understanding part useful.

Three Pillars To Thriving In A Post-Search Information Economy

All three criteria are pillars that support the mandate to be indispensable and provide an advantage. Satisfying those goals helps differentiate content from information vomit and AI slop. The strategy supports becoming a navigational entity, a destination that users specifically seek out, and it helps publishers, ecommerce stores, and SEOs build an audience to claw back what classic search no longer provides.


Google Discover for Ecommerce

As AI Overviews and shopping agents divert clicks away from traditional search results, Google Discover may provide a new and growing source of organic traffic for ecommerce merchants.

Discover is Google’s personalized, query-less content feed similar to those on X and Facebook. The Discover feed appears in Google’s mobile applications and on the main screens of Android devices. It shows articles, videos, and content that presumably interests users.

How Google selects a given article or video to appear in the Discover feed is something of a mystery, with some marketers stating that Google Discover Optimization — GDO, if you need another three-letter acronym — is significantly different from traditional organic search.

Google Discover web page (image: Google).

Core Update

Google’s February 2026 Discover Core Update marks the first time the search engine giant changed its algorithm for Discover alone.

Google says the update improved quality. It aimed to reduce the presence of clickbait and low-value content while surfacing more in-depth, original, and timely material from sites with demonstrated expertise.

Some published reports speculated that the update devalued AI-generated content, yet Google’s concern is probably not artificial intelligence per se. Rather, it is scaled, thin, or risky AI-generated content that degrades trust.

Discover’s content is not in response to a query. Google chooses what to show folks. That choice raises the bar for accuracy, usefulness, and credibility in ways that differ from classic search results.

In a sense, the Discover update is less about ranking tweaks and more about editorial standards. Google may be limiting sensational, misleading, or mass-produced content to protect the tool’s long-term viability.

Therein lies the content marketing opportunity.

Discover’s Future

Discover launched in 2018. Until recently, it has been, for most marketers, a secondary way to boost traffic.

News publishers in particular could see significant traffic spikes when an item made its way into the feed. But optimizing for Discover did not compare to the steady, regular flow of traffic that organic search could deliver.

As AI Overviews have siphoned off that traffic, some marketers have emphasized Discover.

Google’s apparent focus has prompted widespread speculation about Discover’s future.

Discover as a home feed. Discover could become a personalized home feed for the Google ecosystem. Imagine something akin to an individualized MSN or Yahoo home page.

This home feed might include articles, videos, social content, and even data from other Google products, such as Gmail or Docs. The goal might be to keep users engaged across Google properties.

What’s more, both MSN and Yahoo have shown that such pages can drive significant ad revenue.

Personal and local experience. In its February update, Google noted that Discover would favor local or regional content. Users in the United States will see content from domestic publishers.

That could benefit retailers with physical stores, as very local content might beat out similar articles from nationwide competitors.

Multi-format, creator-centric. The Discover feed has recently featured relatively more video and creator content, especially from YouTube and social platforms.

While publishers often frame this as competition, ecommerce marketers could benefit. Product explainers, buying guides, and similar content already perform well in video and visual formats. Discover’s expansion beyond text may favor brands and retailers that invest in rich, creator-led content.

Yet merchants without creators can mimic the style and potentially win on Discover.

An interest graph, not just a feed. Some have suggested that Google treats Discover as part of a broader interest graph that informs search, recommendations, and AI-assisted experiences.

Thus content that performs well in Discover may shape Google’s understanding of user intent over time beyond the feed itself.

Discover could be upstream from traditional and AI-driven search. GDO may precede and inform SEO, GEO (generative engine optimization), and AEO (answer engine optimization).

Optimize

Google Discover deserves attention if it’s becoming a meaningful traffic channel.

Start with Google’s recommendations, which include descriptive headlines, large images, and “people-first” content. From there, marketers can experiment.

A practical approach is a testing framework. Publish consistently and track Discover performance separately in Search Console. Over time, look for editorial traits, formats, or topics that predictably earn Discover visibility and thus inform a long-term strategy.
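One way to run that testing framework is to pull a Search Console performance export and track the Discover share of clicks month over month, so you can see whether a content experiment moved the needle. A minimal sketch, assuming a CSV export with `date`, `surface`, and `clicks` columns (the file layout and column names are assumptions; adjust them to match your actual export):

```python
# Minimal sketch: track Discover's share of clicks by month from a
# Search Console export. The column names ("date", "surface", "clicks")
# are assumptions about the export format, not a documented schema.
import csv
from collections import defaultdict

def discover_share_by_month(path):
    """Return each month's share of total clicks that came from Discover."""
    clicks = defaultdict(lambda: {"Discover": 0, "Search": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["date"][:7]       # "YYYY-MM"
            surface = row["surface"]      # "Discover" or "Search"
            if surface in ("Discover", "Search"):
                clicks[month][surface] += int(row["clicks"])
    return {
        month: c["Discover"] / (c["Discover"] + c["Search"])
        for month, c in clicks.items()
        if c["Discover"] + c["Search"] > 0
    }
```

Tracked alongside publish dates and formats, a rising Discover share after a change in headlines, imagery, or topics is the kind of signal this framework is meant to surface.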

An experimental surgery is helping cancer survivors give birth

This week I want to tell you about an experimental surgical procedure that’s helping people have babies. Specifically, it’s helping people who have had treatment for bowel or rectal cancer.

Radiation and chemo can have pretty damaging side effects that mess up the uterus and ovaries. Surgeons are pioneering a potential solution: simply stitch those organs out of the way during cancer treatment. Once the treatment has finished, they can put the uterus—along with the ovaries and fallopian tubes—back into place.

It seems to work! Last week, a team in Switzerland shared news that a baby boy had been born after his mother had the procedure. Baby Lucien was the fifth baby to be born after the surgery and the first in Europe, says Daniela Huber, the gynecologic oncologist who performed the operation. Since then, at least three others have been born, adds Reitan Ribeiro, the surgeon who pioneered the procedure. They told me the details.

Huber’s patient was 28 years old when a four-centimeter tumor was discovered in her rectum. Doctors at Sion Hospital in Switzerland, where Huber works, recommended a course of treatment that included multiple medications and radiotherapy—the use of beams of energy to shrink a tumor—before surgery to remove the tumor itself.

This kind of radiation can kill tumor cells, but it can also damage other organs in the pelvis, says Huber. That includes the ovaries and uterus. People who undergo these treatments can opt to freeze their eggs beforehand, but the harm caused to the uterus will mean they’ll never be able to carry a pregnancy, she adds. Damage to the lining of the uterus could make it difficult for a fertilized egg to implant there, and the muscles of the uterus are left unable to stretch, she says.

In this case, the woman decided that she did want to freeze her eggs. But it would have been difficult to use them further down the line—surrogacy is illegal in Switzerland.

Huber offered her an alternative.

She had been following the work of Ribeiro, a gynecologic oncologist formerly at the Erasto Gaertner Hospital in Curitiba, Brazil. There, Ribeiro had pioneered a new type of surgery that involved moving the uterus, fallopian tubes, and ovaries from their position in the pelvis and temporarily tucking them away in the upper abdomen, below the ribs.

Ribeiro and his colleagues published their first case report in 2017, describing a 26-year-old with a rectal tumor. (Ribeiro, who is now based at McGill University in Montreal, says the woman had been told by multiple doctors that her cancer treatment would destroy her fertility and had pleaded with him to find a way to preserve it.)

Huber remembers seeing Ribeiro present the case at a conference at the time. She immediately realized that her own patient was a candidate for the surgery, and that, as a surgeon who had performed many hysterectomies, she’d be able to do it herself. The patient agreed.

Huber’s colleagues at the hospital were nervous, she says. They’d never heard of the procedure before. “When I presented this idea to the general surgeon, he didn’t sleep for three days,” she tells me. After watching videos from Ribeiro’s team, however, he was convinced it was doable.

So before the patient’s cancer treatment was started, Huber and her colleagues performed the operation. The team literally stitched the organs to the abdominal wall. “It’s a delicate dissection,” says Huber, but she adds that “it’s not the most difficult procedure.” The surgery took two to three hours, she says. The stitches themselves were removed via small incisions around a week later. By that point, scar tissue had formed to create a lasting attachment.

The woman had two weeks to recover from the surgery before her cancer treatment began. That too was a success—within months, her tumor had shrunk so significantly that it couldn’t be seen on medical scans.

As a precaution, the medical team surgically removed the affected area of her colon. At the same time, they cut away the scar tissue holding the uterus, tubes, and ovaries in their new position and transferred the organs back into the pelvis.

Around eight months later, the woman stopped taking contraception. She got pregnant without IVF and had a mostly healthy pregnancy, says Huber. Around seven months into the pregnancy, there were signs that the fetus was not growing as expected. This might have been due to problems with the blood supply to the placenta, says Huber. Still, the baby was born healthy, she says.

Ribeiro says he has performed the surgery 16 times, and that teams in countries including the US, Peru, Israel, India, and Russia have performed it as well. Not every case has been published, but he thinks there may be around 40.

Since Baby Lucien was born last year, a sixth birth has been announced in Israel, says Huber. Ribeiro says he has heard of another two births since then, too. The most recent was to the first woman who had the procedure. She had a little girl a few months ago, he tells me.

No surgery is risk-free, and Huber points out there’s a chance that organs could be damaged during the procedure, or that a more developed cancer could spread. The uterus of one of Ribeiro’s patients failed following the surgery. Doctors are “still in the phase of collecting data to [create] a standardized procedure,” Huber says, but she hopes the surgery will offer more options to young people with some pelvic cancers. “I hope more young women could benefit from this procedure,” she says.

Ribeiro says the experience has taught him not to accept the status quo. “Everyone was saying … there was nothing to be done [about the loss of fertility in these cases],” he tells me. “We need to keep evolving and looking for different answers.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.