Why Your SEO KPIs Are Failing Your Business (And How To Fix Them)

Most SEO teams believe they need more data to report success, but what they actually have is metric debt: the accumulated cost of optimizing for key performance indicators that no longer reflect how growth happens. At least, that’s what I keep seeing.

The environment has changed, mostly because economic pressure has shifted expectations. At the same time, AI search, zero-click results, and privacy limits have all weakened the connection between traditional SEO KPIs and business outcomes.

Yet, it’s not unusual to see teams measuring success in ways that reflect how SEO used to work rather than how it works today. This is exactly the point where I think we need to rethink how we’re measuring things.

The Hidden Cost Of Vanity Metrics

Rankings, clicks, visibility … None of these is wrong. They’re just no longer enough on their own to predict business success reliably.

In an environment where we talk a lot about AI-driven SERPs, zero-click searches, and budget scrutiny, these metrics are incomplete at best and misleading at worst.

But a considerable number of SEOs still spend most of their time chasing more traffic, more keywords, and more mentions, and I get why: owning a familiar metric feels safer than taking responsibility for a new one.

Meanwhile, conversion quality, intent alignment, and revenue impact now need more attention than ever. However, they’re harder to explain and harder to own.

That gap creates a quiet opportunity cost. Not immediately, and not in reports, but later, when SEO starts struggling to justify its place in the growth conversation.

At this point, I think this is pretty clear: good SEO teams don’t report more metrics. They explain better.

And to explain better, we need to rethink how we show where SEO value is created and how it’s measured. This isn’t a hot take anymore.

As Yordan Dimitrov pointed out, SEO isn’t dying, but discovery is changing fast and shifting user behavior. Early-stage users increasingly get what they need directly inside search experiences.

That means clicks, specifically, are no longer a reliable proxy for value. So, if we keep optimizing and reporting as if they are, we’re creating a picture that no longer matches reality.

But I’m not saying we should replace every SEO metric overnight. What we report does need to reflect how growth decisions are made.

Reframing SEO KPIs Around Real Business Value

If everything you track sits at the top of the funnel, you don’t have a measurement strategy; you have a visibility tracker. A simple way out is to separate signals from outcomes:

Operational Signals

These tell you if your SEO efforts can function at all.

  • Crawlability and indexation coverage.
  • Core Web Vitals performance.
  • Content velocity on priority areas.
  • Share of voice by intent cluster.

Necessary. Not sufficient.

Engagement Signals

These tell you whether users actually care.

  • Engaged sessions (GA4’s definition: at least 10 seconds, a conversion event, or two or more page views).
  • Scroll depth.
  • Return visits.
  • Micro-conversions like downloads or feature usage.
  • Organic conversions.

Still not the end goal, but much closer.
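For teams that want to sanity-check these numbers outside the GA4 interface, here is a minimal sketch of how the engaged-session rule can be applied to exported session data. The session records and field names are hypothetical assumptions; GA4 computes this metric for you automatically.

```python
# Minimal sketch: classifying sessions as "engaged" under GA4's rule.
# GA4 counts a session as engaged if it lasted at least 10 seconds,
# had a conversion (key) event, or had two or more page views.
# These session records are hypothetical placeholders.

sessions = [
    {"duration_sec": 4, "conversions": 0, "page_views": 1},   # bounce-like
    {"duration_sec": 45, "conversions": 0, "page_views": 3},  # engaged: time + pages
    {"duration_sec": 8, "conversions": 1, "page_views": 1},   # engaged: converted
]

def is_engaged(session: dict) -> bool:
    return (
        session["duration_sec"] >= 10
        or session["conversions"] >= 1
        or session["page_views"] >= 2
    )

engaged = [s for s in sessions if is_engaged(s)]
print(f"Engagement rate: {len(engaged) / len(sessions):.0%}")  # 67%
```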

Business Outcomes

This is where people usually get nervous.

  • Pipeline influence from organic (opportunities with organic touchpoints).
  • Customer Acquisition Cost (CAC) for organic versus paid channels.
  • Customer Lifetime Value (LTV) of SEO-acquired customers.
  • Retention rates of organic users.

If none of these are visible, SEO efforts are always going to be questioned.
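To make these less abstract, here is a minimal sketch of two of the outcome metrics above: organic versus paid CAC, and organic pipeline influence. All figures, field names, and the touchpoint model are hypothetical assumptions; in practice, the inputs would come from your CRM.

```python
# Minimal sketch: organic vs. paid CAC, plus organic pipeline influence.
# All figures and field names are hypothetical placeholders.

seo_program_cost = 12_000      # monthly content, tooling, headcount share
organic_customers = 40         # customers attributed to organic in the CRM
paid_spend = 30_000
paid_customers = 60

organic_cac = seo_program_cost / organic_customers   # $300
paid_cac = paid_spend / paid_customers               # $500

# Pipeline influence: share of opportunity value with an organic touchpoint.
opportunities = [
    {"value": 25_000, "touchpoints": ["organic", "email"]},
    {"value": 40_000, "touchpoints": ["paid"]},
    {"value": 15_000, "touchpoints": ["organic"]},
]
influenced = sum(o["value"] for o in opportunities if "organic" in o["touchpoints"])
total = sum(o["value"] for o in opportunities)

print(f"Organic CAC ${organic_cac:,.0f} vs. paid CAC ${paid_cac:,.0f}")
print(f"Organic-influenced pipeline: {influenced / total:.0%}")  # 50%
```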

Most Teams Need A Few Months To Fix This Approach

First, you audit what you’re already reporting. Most of it will sit in operational metrics, and that’s normal.

Then, you should map pages to funnel stages. It doesn’t have to be perfect, but it should be honest.

Then you can add one or two outcome-level metrics that make sense for your model. For example:

  • Demo requests per organic session (for B2B).
  • Revenue per organic visitor (for ecommerce).

If organic conversion rates are far below benchmarks (for example, industry benchmarks place B2B ecommerce conversion rates at 1.8%), that’s not a “traffic problem.” It’s a mismatch between intent, content, and expectations.
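As a rough illustration, both example metrics (and the benchmark comparison) reduce to simple ratios. This sketch assumes hypothetical session, demo, and revenue figures:

```python
# Minimal sketch: outcome metrics as simple ratios (hypothetical inputs).

organic_sessions = 18_500
demo_requests = 140            # B2B example
revenue = 52_000.00            # ecommerce example
benchmark_cvr = 0.018          # the B2B ecommerce benchmark cited above

demo_rate = demo_requests / organic_sessions        # ~0.76%
revenue_per_visitor = revenue / organic_sessions    # ~$2.81

print(f"Demo requests per organic session: {demo_rate:.2%}")
print(f"Revenue per organic visitor: ${revenue_per_visitor:.2f}")

if demo_rate < benchmark_cvr:
    # Far below benchmark: an intent/content mismatch, not a "traffic problem."
    print("Investigate intent alignment before buying more traffic.")
```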

Over time, you can rebalance reporting. I recommend not deleting old metrics immediately; they will let you show people how they correlate (or don’t) with outcomes. That’s how trust is built.

In practice, most teams don’t jump from rankings to revenue overnight. Measurement maturity tends to move in layers, with each step making the next one easier to defend.

The Human Side Of Metric Evolution

Changing measurement systems is more psychological than most teams expect. People resist KPI changes because owning the same old things feels safe. And to be honest, revenue attribution is messier than rankings; that messiness creates resistance, and people avoid it.

The way around this isn’t better dashboards. It’s framing. Instead of saying “we’re changing KPIs,” you can think and say: “For the next eight weeks, we’re testing if organic sessions on these pages generate demo requests.”

The goal isn’t to drown stakeholders in methodology, but to give just enough context to replace metric comfort with experimental clarity, so they understand what’s being tested, why it matters, and how success will be judged.

So, basically, make it an experiment, and define success upfront. Then, share learnings even when results are uncomfortable.

Future-Proofing Your Measurement Strategy

We don’t need complex stacks. We only need cleaner thinking. And we need to revisit KPIs regularly to remove ones that no longer help, add new ones when priorities change, and document why decisions were made.

First, you can start by explaining that while rankings were reliable growth proxies in 2020, AI search and zero-click results have broken that connection. Use visual stories comparing high-traffic/low-conversion paths against low-traffic/high-conversion alternatives to illustrate why KPI evolution matters.

For most mid-market teams, a pragmatic measurement stack is sufficient: GA4 or an alternative, a CRM with clean attribution fields, a visualization layer like Looker Studio, and a core SEO platform. Complexity should be added only as measurement maturity increases.

Finally, we should treat measurement as a living system. For this, I recommend running quarterly KPI reviews to retire unused metrics, adding new ones aligned with evolving priorities, and documenting hypotheses behind major initiatives for later validation.

When measurement evolves continuously, SEO strategy can evolve alongside search itself.

If You Can’t Measure Value, You Can’t Defend SEO

Anthony Barone puts this well: When teams rely on surface-level metrics, they lose a stable way to judge progress. SEO then becomes easy to deprioritize every time a new platform or AI narrative shows up.

Value-driven metrics change the conversation. SEO stops being “traffic work” and starts being part of growth discussions.

The SEOs who will do well aren’t the ones with the cleanest ranking reports. They’re the ones who can calmly explain how organic search contributes to real business outcomes, even when the numbers aren’t perfect.

That starts with questioning every metric you report and being honest about which ones still earn their place.

Featured Image: Natalya Kosarevich/Shutterstock

PPC Budget Rebalancing: How AI Changes Where Marketing Budgets Are Spent

In paid media, many advertisers default to budgeting by ad platform, with a percentage to Google Ads, a percentage to LinkedIn Ads, etc., largely based on habit. Now, AI technology gives marketing leaders new ways to decide where to spend their paid media dollars. Instead of allocating spend based on impression volume or historical channel averages, marketers can explore PPC budget rebalancing around buyer intent signals and conversion probability (the likelihood that a specific ad interaction, like a click, will result in a valuable action like a conversion).

There are many ways to approach budget strategy in paid media. The model in this article is one worth exploring because it reflects how AI technology in the ad platforms evaluates users across the customer journey.

A Different Approach From Channel-Based Budgeting

For many years, PPC budgeting followed the same basic playbook: set a percentage for Google Search, another for Meta, and spread what’s left over across video or display. It is simple, but it forces spend to stay locked inside channels even when user behavior indicates something different.

This can create ongoing attribution battles where teams debate whether the Facebook ad or the final Google search drove the conversion. Everyone focuses on last-click results instead of understanding the full journey.

Platform AI has changed that. Today, machine learning blends signals from search, video, maps, feed environments, and content discovery paths. Models update predictions continuously using large-scale intent and behavioral signals.

Buyers’ journeys are omnichannel: searching, scrolling, comparing, and exploring at the same time. When budgets stay fixed inside channels, money can’t follow purchase intent. That means overspending on channels that only appear at the last click and underspending where users are ready to take action. The new opportunity is to shift from budgeting by channel performance to budgeting by conversion probability. AI helps make this possible by interpreting meaning, context, and patterns that humans can’t see at scale.
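To make “budgeting by conversion probability” concrete, here is a toy sketch that ranks spend by expected value per click instead of by channel. The conversion probabilities, order values, and CPCs are invented for illustration; in reality, the ad platforms estimate these internally and don’t expose them directly.

```python
# Toy sketch: ranking spend by expected value per click instead of by channel.
# All probabilities, order values, and CPCs below are invented.

channels = {
    "google_search_brand": {"p_convert": 0.060, "order_value": 120.0, "cpc": 2.40},
    "meta_retargeting":    {"p_convert": 0.035, "order_value": 120.0, "cpc": 1.10},
    "youtube_prospecting": {"p_convert": 0.004, "order_value": 120.0, "cpc": 0.30},
}

# Expected value per click = p(conversion) * conversion value - cost per click.
for name, c in sorted(
    channels.items(),
    key=lambda kv: kv[1]["p_convert"] * kv[1]["order_value"] - kv[1]["cpc"],
    reverse=True,
):
    ev = c["p_convert"] * c["order_value"] - c["cpc"]
    print(f"{name:<22} expected value per click: ${ev:.2f}")
```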

Many expert PPC guides (including my own recommendations) support structuring budgets by funnel stage or campaign objective rather than rigid channel splits, because it more accurately reflects how people move from awareness to intent.

This is echoed in articles like “Budget Allocation: When To Choose Google Ads vs. Meta Ads” and “From Launch to Scale: PPC Budget Strategies for All Campaign Stages,” which emphasize aligning spend to the campaign goal, not the platform it runs on. These guides also agree on something else: Flexibility is essential, because performance and user behavior shift over time.

With that foundation in place, this article introduces a new evolution of that idea, moving from funnel-based budgeting to signal-based budgeting. Read on to learn how this model works and why it’s built for the way AI interprets user intent today.

How Signals Move Inside Platforms But Not Across Them

It’s important for CMOs to understand how signals work inside major platforms. Google and Meta use unified prediction engines. For example, signals from Search, YouTube, Maps, and Discover all feed into one Google system. This is why these platforms can react to user behavior so quickly.

However, platforms do not directly share user-level intent signals with one another. Google doesn’t send search intent to Meta. Meta doesn’t pass engagement back to Google. Each platform operates its own machine learning environment.

The only connection across platforms is user behavior. A user might watch a review on YouTube, check options on Instagram, and then return to Google to search for pricing. Each platform reacts to what happens inside its own ecosystem.

This distinction matters. Budget decisions should reflect how users move across the journey, not how systems communicate. Platforms don’t exchange signals. Users carry their intent with them.

The Three Signal Layers That Guide AI-Driven Budget Allocation

I see platform AI systems consistently respond to three core signal groups. These signals match how machine learning models evaluate purchase intent and likelihood to convert.

1. Intent Signals

These are strong signs that someone is ready to take action. Examples include refined search queries, repeat visits, deeper product exploration, commercial browsing patterns, and lookalike signals that match buyers who tend to convert. For example, Microsoft Ads’ AI uses “audience intelligence signals” combined with data the advertiser provides (e.g., ads, landing pages) to automatically find users “more likely to convert.”

When these actions are measured together, platform AI prioritizes ad delivery toward users who are most likely to convert.

2. Discovery Signals

Discovery is the early stage of consideration. Users engage with content that builds awareness, helps them compare options, or clarifies the problem they want to solve. Google’s published insights show that buyers now explore multiple media types before taking action.

These discovery signals align with the “streaming + scrolling + searching + shopping” behaviors that Google identifies.

Discovery signals can show up earlier than marketers expect. Budgeting for discovery matters because these signals can influence purchase intent later.

3. Trust Signals

Trust signals can help on both the ad-serving end and the conversion-closing end. They include reviews, product walk-throughs, video demos, social proof, and expert content. These cues help platforms predict whether a user will favor a certain brand once they develop purchase intent.

Good trust content (reviews, transparent info, credible claims) delivers a better user experience, which can lift conversion rates compared with pages where that content is absent.

When trust is strong, conversion outcomes tend to be more consistent because Google Ads evaluates landing page experience, store ratings, and other quality signals as part of its automated bidding and delivery systems. Pages that demonstrate stronger user experience and conversion performance are more likely to earn increased ad delivery under conversion-focused bidding models, because those models value high-converting experiences.

Together, these three layers can form a modern structure for budget allocation.

How CMOs Can Apply This Model Right Now

Rebalancing for intent starts with one shift: Build budgets around signals instead of channels. Group your existing campaigns into the three buckets: intent, discovery, and trust. This structure lets your team see where each dollar is driving purchase intent or signal quality.

Once campaigns are mapped to a signal, you can assign budget amounts that reflect your goals. Intent gets the largest share because it drives revenue. Discovery fuels learning and awareness. Trust earns its own allocation because it lifts future conversion performance.

This process is easier than it sounds.

Step one: Assign each campaign to the signal it produces: intent, discovery, or trust. This creates a signal map across all platforms.

Step two: Set your budget amounts for each signal bucket. This replaces the traditional channel-based approach.

Step three: Distribute the dollars inside each bucket to the campaigns that support that signal best. This keeps allocation strategic and gives each campaign a clear role.
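Here is a minimal sketch of those three steps in code. The campaign names, within-bucket weights, and bucket shares are illustrative assumptions; the shares mirror the $10,000 example that follows.

```python
# Minimal sketch of the three steps: map campaigns to signal buckets,
# set bucket budgets, then distribute dollars inside each bucket.
# Campaign names, weights, and shares are illustrative assumptions.

total_budget = 10_000

# Step 1: assign each campaign to the signal it produces
# (with a weight reflecting how strongly it supports that signal).
campaigns = {
    "google_search_core":   ("intent", 0.7),
    "meta_retargeting":     ("intent", 0.3),
    "meta_prospecting":     ("discovery", 0.5),
    "youtube_educational":  ("discovery", 0.5),
    "youtube_testimonials": ("trust", 1.0),
}

# Step 2: set budget amounts per signal bucket.
bucket_share = {"intent": 0.60, "discovery": 0.30, "trust": 0.10}

# Step 3: distribute each bucket's dollars to its campaigns by weight.
for name, (bucket, weight) in campaigns.items():
    dollars = total_budget * bucket_share[bucket] * weight
    print(f"{name:<22} {bucket:<10} ${dollars:,.0f}")
```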

Example To Show How This Can Work

A CMO with a $10,000 total budget might allocate:

Intent
$6,000 across Google Search and Meta retargeting, where purchase intent is strongest for them. Higher intent can lead to more conversions, so platform AI systems allocate impressions more efficiently.

Discovery
$3,000 across Meta prospecting and YouTube educational content to increase learning signals. Video views, engagement, and content consumption teach the algorithm who is interested.

Trust
$1,000 toward YouTube testimonial content to strengthen brand credibility and improve lower-funnel efficiency. Even a small trust investment can improve performance across all channels by increasing users’ confidence and readiness to buy.

The allocation starts with the signal, not the channel. Platforms receive budget because they support that signal, not because of historical patterns.

Why It Can Be Harder To Manage

Signal-based budgeting challenges familiar habits. Platforms don’t organize campaigns this way, so teams must learn to read performance differently.

Instead of relying only on last click ROAS, teams have to watch earlier indicators such as branded search growth, engaged video views, returning visitors, and assisted conversions. Reporting also becomes more complex because trust and discovery show up differently across Google, Microsoft, and social platforms. This means teams must compare assisted conversions, view-through impact, and conversion lag patterns rather than relying on a single conversion report.

Why It Can Be More Profitable

The complexity can pay off. Platform AI systems make allocation decisions based on probability. When your budget aligns with the signals AI values most, performance improves across the customer journey.

Profit can increase because:

  • Intent dollars focus on users most likely to convert.
  • Discovery dollars generate new learning signals, feeding prediction accuracy.
  • Trust dollars raise future conversion likelihood and reduce lower funnel costs.
  • Spend shifts toward the strongest outcomes.

Teams that adopt this model could see stronger performance and more conversions without increasing total budget.

A New Way To Think About PPC Budget Allocation

Here are the core takeaways for CMOs:

  • AI-driven budgeting can work best when spend follows purchase intent, not channels.
  • Grouping campaigns by intent, discovery, and trust signals gives you a clearer view of what’s driving revenue and what’s feeding future performance.
  • A signal-based budget improves lower-funnel efficiency and brand awareness, and accelerates learning within the existing total spend.
  • This model can help teams stay aligned with how users move and how machine learning predicts conversions.

The real advantage is efficiency. When the budget moves with user signals, you don’t need more budget to see stronger results. You need a model that lets the budget follow the people most likely to act.

As platform AI continues to evolve, the leaders testing their PPC budgets around intent signals will have an edge. This framework gives you a repeatable way to stay competitive and capture more value from every dollar invested.

Featured Image: N Universe/Shutterstock

7 Insights From Washington Post’s Strategy To Win Back Traffic

The Washington Post’s recent announcement of staffing cuts is a story with heroes, villains, and victims, but buried beneath the headlines is the reality of a big-brand publisher confronting the same changes in Google Search that SEOs, publishers, and ecommerce stores are struggling with. The following are insights into their strategy to claw back traffic and income, which could be useful for anyone seeking to stabilize traffic and grow.

Disclaimer

The Washington Post is proposing the following strategies in response to steep drops in search traffic, the rise of multi-modal content consumption, and many other factors that are fragmenting online audiences. The strategies have yet to be proven.

The value lies in analyzing what they are doing and understanding if there are any useful ideas for others.

Problem That Is Being Solved

The reasons given for the announced changes are similar to what SEOs, online stores, and publishers are going through right now because of the decline of search and the hyper-diversification of sources of information.

The memo explains:

“Platforms like Search that shaped the previous era of digital news, and which once helped The Post thrive, are in serious decline. Our organic search has fallen by nearly half in the last three years.

And we are still in the early days of AI-generated content, which is drastically reshaping user experiences and expectations.”

Those problems are the exact same ones affecting virtually all online businesses. This makes The Washington Post’s solution of interest to everyone beyond just news sites.

Problems Specific To The Washington Post

Recent reporting on The Washington Post has tended to frame the story narrowly, in the context of politics and concerns about the concentration of wealth, and around how the cuts affect coverage of sports, international news, and the performing arts, in addition to the hundreds of staff and reporters who lost their jobs.

The job cuts in particular are a highly specific solution applied by The Washington Post, and they are highly controversial. An argument can be made that cutting some of the lower-performing topics removes the very things that differentiate the website. As you will see next, Executive Editor Matt Murray justifies the cuts as listening to readers’ signals.

Challenges Affecting Everyone

If you zoom out, there is a larger pattern of how many organizations are struggling to understand where the audience has gone and how best to bring them back.

Shared Industry Challenges

  • Changes in content consumption habits
  • Decline of search
  • Rise of the creator economy
  • Growth of podcasts and video shows
  • Social media competing for audience attention
  • Rise of AI search and chat

A recent podcast interview with the executive editor of The Washington Post, Matt Murray, revealed a years-long struggle to restructure the organization’s workflow into one that:

  • Was responsive to audience signals
  • Could react in real time instead of the rigid print-based news schedule
  • Explored emerging content formats so as to evolve alongside readers
  • Produced content that is perceived as indispensable

The issues affecting The Washington Post are similar to those affecting everyone else, from recipe bloggers to big-brand review sites. A key point Murray made was that the changes were driven by audience signals.

Matt Murray said the following about reader signals:

“Readers in today’s world tell you what they want and what they don’t want. They have more power. …And we weren’t picking up enough of the reader signals.”

Then a little later on he again emphasized the importance of understanding reader signals:

“…we are living in a different kind of a world that is a data reader centric world. Readers send us signals on what they want. We have to meet them more where they are. That is going to drive a lot of our success.”

Whether listening to audience signals justifies cutting staff or ends up removing the things that differentiate The Washington Post remains to be seen.

For example, I used to subscribe to the print edition of The New Yorker for the articles, not for the restaurant or theater reviews, yet those reviews were still of interest to me because I liked to keep track of trends in live theater and dining. The New Yorker cartoons rarely had anything to do with the article topics, and yet they were a value add. Would something like that show up in audience signals?

Build A Base Then Adapt

The memo paints what they’re doing as a foundation for building a strategy that is still evolving, not as a proven strategy. In my opinion that reflects the uncertainty introduced by the rapid decline of classic search and the knowledge that there are no proven strategies.

That uncertainty makes it more interesting to examine what a big brand organization like The Washington Post is doing to create a base strategy to start from and adapt it based on outcomes. That, in itself, is a strategy for coping with a lack of proven tactics.

Three concrete goals they are focusing on are:

  1. Attract readers.
  2. Create content that leads to subscriptions.
  3. Increase engagement.

They write:

“From this foundation, we aim to build on what is working, and grow with discipline and intent, to experiment, to measure and deepen what resonates with customers.”

In the podcast interview, Murray also described the stability of a foundation as a way to nurture growth, explaining that it creates the conditions for talent to do its best work. He explained that building the foundation gives the staff the space to focus on things that work.

He explained:

“One of the reasons I wanted to get to stability, as I want room for that talent to thrive and flourish.

I also want us to develop it in a more modern multi-modal way with those that we’ve been able to do.”

A Path To Becoming Indispensable

The Washington Post memo offered insights into their strategy, stating that the brand must become indispensable to readers and naming three criteria that articles must be validated against.

According to the memo:

“We can’t be everything to everyone. But we must be indispensable where we compete. That means continually asking why a story matters, who it serves and how it gives people a clearer understanding of the world and an advantage in navigating it.”

Three Criteria For Content

  1. Content must matter to site visitors.
  2. Content must have an identifiable audience.
  3. Content must provide understanding and also be applicable (useful).

Content Must Matter
Regardless of whether the content is about a product or a service, or is purely informational, the Washington Post’s strategy states that content must strongly fulfill a specific need. For SEOs, creators, ecommerce stores, and informational content publishers, “mattering” is one of the pillars that make a business indispensable to a site visitor and give it an advantage.

Identifiable Audience
Information doesn’t exist in a vacuum, but traditional SEO has strongly focused on keyword volume and keyword relevance, essentially treating information as existing in a space devoid of human relevance. Keyword relevance is not the same as human relevance. Keyword relevance is relevance to a keyword phrase, not relevance to a human.

This point matters because AI chat and search destroy the concept of keywords: people are no longer typing keyword phrases but are instead engaging in goal-oriented discussions.

When SEOs talk about keyword relevance, they are talking about relevance to an algorithm. Put another way, they are essentially defining the audience as an algorithm.

So, point two is really about stepping back and asking, “Why does a person need this information?”

Provide Understanding And Be Applicable
Point three states that it’s not enough for content to provide an understanding of what happened (facts). It requires that the information must make the world around the reader navigable (application of the facts).

This is perhaps the most interesting pillar of the strategy because it acknowledges that information vomit is not enough. It must be information that is utilitarian. Utilitarian in this context means that content must have some practical use.

In my opinion, an example of this principle in the context of an ecommerce site is product data. The other day I was on a fishing lure site, and the site assumed that the consumer understood how each lure is supposed to be used. It just had the name of the lure and a photo. In every case, the name of the lure was abstract and gave no indication of how the lure was to be used, under what circumstances, and what tactic it was for.

Another example is a clothing site where clothing is described as small, medium, large, and extra large, which are subjective measurements because every retailer defines small and large differently. One brand I shop at consistently labels objectively small-sized jackets as medium. Fortunately, that same retailer also provides chest, shoulder, and length measurements, which enable a user to understand exactly whether that clothing fits.

I think that’s part of what the Washington Post memo means when it says that the information should provide understanding but also be applicable. It’s that last part that makes the understanding part useful.

Three Pillars To Thriving In A Post-Search Information Economy

All three criteria are pillars that support the mandate to be indispensable and provide an advantage. Satisfying those goals helps differentiate content from information vomit and AI slop. The strategy supports becoming a navigational entity, a destination that users specifically seek out, and it helps publishers, ecommerce stores, and SEOs build an audience to claw back what classic search no longer provides.

Featured Image by Shutterstock/Roman Samborskyi

Google Discover for Ecommerce

As AI Overviews and shopping agents divert clicks away from traditional search results, Google Discover may provide a new and growing source of organic traffic for ecommerce merchants.

Discover is Google’s personalized, query-less content feed similar to those on X and Facebook. The Discover feed appears in Google’s mobile applications and on the main screens of Android devices. It shows articles, videos, and content that presumably interests users.

How Google selects a given article or video to appear in the Discover feed is something of a mystery, with some marketers stating that Google Discover Optimization — GDO, if you need another three-letter acronym — is significantly different from traditional organic search.

Core Update

Google’s February 2026 Discover Core Update marks the first time the search engine giant changed its algorithm for Discover alone.

Google says the update improved quality. It aimed to reduce the presence of clickbait and low-value content while surfacing more in-depth, original, and timely material from sites with demonstrated expertise.

Some published reports speculated that the update devalued AI-generated content, yet Google’s concern is probably not artificial intelligence per se. Rather, it is scaled, thin, or risky AI-generated content that degrades trust.

Discover’s content is not in response to a query. Google chooses what to show folks. That choice raises the bar for accuracy, usefulness, and credibility in ways that differ from classic search results.

In a sense, the Discover update is less about ranking tweaks and more about editorial standards. Google may be limiting sensational, misleading, or mass-produced content to protect the tool’s long-term viability.

Therein lies the content marketing opportunity.

Discover’s Future

Discover launched in 2018. Until recently, it has been, for most marketers, a secondary way to boost traffic.

News publishers in particular could see significant traffic spikes when an item made its way into the feed. But optimizing for Discover did not compare to the steady, regular flow of traffic that organic search could deliver.

As AI Overviews have siphoned off that traffic, some marketers have emphasized Discover.

Google’s apparent focus has prompted widespread speculation about Discover’s future.

Discover as a home feed. Discover could become a personalized home feed for the Google ecosystem. Imagine something akin to an individualized MSN or Yahoo home page.

This home feed might include articles, videos, social content, and even data from other Google products, such as Gmail or Docs. The goal might be to keep users engaged across Google properties.

What’s more, both MSN and Yahoo have shown that such pages can drive significant ad revenue.

Personal and local experience. In its February update, Google noted that Discover would favor local or regional content. Users in the United States will see content from domestic publishers.

That could benefit retailers with physical stores, as very local content might beat out similar articles from nationwide competitors.

Multi-format, creator-centric. The Discover feed has recently featured relatively more video and creator content, especially from YouTube and social platforms.

While publishers often frame this as competition, ecommerce marketers could benefit. Product explainers, buying guides, and similar content already perform well in video and visual formats. Discover’s expansion beyond text may favor brands and retailers that invest in rich, creator-led content.

Yet merchants without creators can mimic the style and potentially win on Discover.

An interest graph, not just a feed. Some have suggested that Google treats Discover as part of a broader interest graph that informs search, recommendations, and AI-assisted experiences.

Thus content that performs well in Discover may shape Google’s understanding of user intent over time beyond the feed itself.

Discover could be upstream from traditional and AI-driven search. GDO may precede and inform SEO, GEO (generative engine optimization), and AEO (answer engine optimization).

Optimize

Google Discover deserves attention if it’s becoming a meaningful traffic channel.

Start with Google’s recommendations, which include descriptive headlines, large images, and “people-first” content. From there, marketers can experiment.

A practical approach is a testing framework. Publish consistently and track Discover performance separately in Search Console. Over time, look for editorial traits, formats, or topics that predictably earn Discover visibility and thus inform a long-term strategy.
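If you want to automate that tracking, the Search Console API exposes Discover data through the same Search Analytics query used for web search. Here is a minimal sketch, assuming a verified property and a service-account credential that has been granted access; the credential file name and site URL are placeholders.

```python
# Minimal sketch: pulling Discover performance from the Search Console API.
# Assumes google-api-python-client and google-auth are installed, and the
# service account has been added as a user on the verified property.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES  # placeholder credential file
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # placeholder verified property
    body={
        "startDate": "2026-01-01",
        "endDate": "2026-01-31",
        "type": "discover",       # report Discover data, not web search
        "dimensions": ["page"],
        "rowLimit": 100,
    },
).execute()

# Track which pages earn Discover visibility over time.
for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])
```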

An experimental surgery is helping cancer survivors give birth

This week I want to tell you about an experimental surgical procedure that’s helping people have babies. Specifically, it’s helping people who have had treatment for bowel or rectal cancer.

Radiation and chemo can have pretty damaging side effects that mess up the uterus and ovaries. Surgeons are pioneering a potential solution: simply stitch those organs out of the way during cancer treatment. Once the treatment has finished, they can put the uterus—along with the ovaries and fallopian tubes—back into place.

It seems to work! Last week, a team in Switzerland shared news that a baby boy had been born after his mother had the procedure. Baby Lucien was the fifth baby to be born after the surgery and the first in Europe, says Daniela Huber, the gynecologic oncologist who performed the operation. Since then, at least three others have been born, adds Reitan Ribeiro, the surgeon who pioneered the procedure. They told me the details.

Huber’s patient was 28 years old when a four-centimeter tumor was discovered in her rectum. Doctors at Sion Hospital in Switzerland, where Huber works, recommended a course of treatment that included multiple medications and radiotherapy—the use of beams of energy to shrink a tumor—before surgery to remove the tumor itself.

This kind of radiation can kill tumor cells, but it can also damage other organs in the pelvis, says Huber. That includes the ovaries and uterus. People who undergo these treatments can opt to freeze their eggs beforehand, but the harm caused to the uterus will mean they’ll never be able to carry a pregnancy, she adds. Damage to the lining of the uterus could make it difficult for a fertilized egg to implant there, and the muscles of the uterus are left unable to stretch, she says.

In this case, the woman decided that she did want to freeze her eggs. But it would have been difficult to use them further down the line—surrogacy is illegal in Switzerland.

Huber offered her an alternative.

She had been following the work of Ribeiro, a gynecologic oncologist formerly at the Erasto Gaertner Hospital in Curitiba, Brazil. There, Ribeiro had pioneered a new type of surgery that involved moving the uterus, fallopian tubes, and ovaries from their position in the pelvis and temporarily tucking them away in the upper abdomen, below the ribs.

Ribeiro and his colleagues published their first case report in 2017, describing a 26-year-old with a rectal tumor. (Ribeiro, who is now based at McGill University in Montreal, says the woman had been told by multiple doctors that her cancer treatment would destroy her fertility and had pleaded with him to find a way to preserve it.)

Huber remembers seeing Ribeiro present the case at a conference at the time. She immediately realized that her own patient was a candidate for the surgery, and that, as a surgeon who had performed many hysterectomies, she’d be able to do it herself. The patient agreed.

Huber’s colleagues at the hospital were nervous, she says. They’d never heard of the procedure before. “When I presented this idea to the general surgeon, he didn’t sleep for three days,” she tells me. After watching videos from Ribeiro’s team, however, he was convinced it was doable.

So before the patient’s cancer treatment was started, Huber and her colleagues performed the operation. The team literally stitched the organs to the abdominal wall. “It’s a delicate dissection,” says Huber, but she adds that “it’s not the most difficult procedure.” The surgery took two to three hours, she says. The stitches themselves were removed via small incisions around a week later. By that point, scar tissue had formed to create a lasting attachment.

The woman had two weeks to recover from the surgery before her cancer treatment began. That too was a success—within months, her tumor had shrunk so significantly that it couldn’t be seen on medical scans.

As a precaution, the medical team surgically removed the affected area of her colon. At the same time, they cut away the scar tissue holding the uterus, tubes, and ovaries in their new position and transferred the organs back into the pelvis.

Around eight months later, the woman stopped taking contraception. She got pregnant without IVF and had a mostly healthy pregnancy, says Huber. Around seven months into the pregnancy, there were signs that the fetus was not growing as expected. This might have been due to problems with the blood supply to the placenta, says Huber. Still, the baby was born healthy, she says.

Ribeiro says he has performed the surgery 16 times, and that teams in countries including the US, Peru, Israel, India, and Russia have performed it as well. Not every case has been published, but he thinks there may be around 40.

Since Baby Lucien was born last year, a sixth birth has been announced in Israel, says Huber. Ribeiro says he has heard of another two births since then, too. The most recent was to the first woman who had the procedure. She had a little girl a few months ago, he tells me.

No surgery is risk-free, and Huber points out there’s a chance that organs could be damaged during the procedure, or that a more developed cancer could spread. The uterus of one of Ribeiro’s patients failed following the surgery. Doctors are “still in the phase of collecting data to [create] a standardized procedure,” Huber says, but she hopes the surgery will offer more options to young people with some pelvic cancers. “I hope more young women could benefit from this procedure,” she says.

Ribeiro says the experience has taught him not to accept the status quo. “Everyone was saying … there was nothing to be done [about the loss of fertility in these cases],” he tells me. “We need to keep evolving and looking for different answers.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: helping cancer survivors to give birth, and cleaning up Bangladesh’s garment industry

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An experimental surgery is helping cancer survivors give birth

An experimental surgical procedure is helping people have babies after they’ve had treatment for bowel or rectal cancer.

Radiation and chemo can have pretty damaging side effects that mess up the uterus and ovaries. Surgeons are pioneering a potential solution: simply stitch those organs out of the way during cancer treatment. Once the treatment has finished, they can put the uterus—along with the ovaries and fallopian tubes—back into place.

It seems to work! Last week, a team in Switzerland shared news that a baby boy had been born after his mother had the procedure. Baby Lucien was the fifth baby to be born after the surgery and the first in Europe, and since then at least three others have been born. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Bangladesh’s garment-making industry is getting greener

Pollution from textile production—dyes, chemicals, and heavy metals—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

But things are starting to change. In recent years the country has become a leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. 

The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads. Read the full story.

—Zakir Hossain Chowdhury

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ICE used a private jet to deport Palestinian men to Tel Aviv 
The luxury aircraft belongs to Donald Trump’s business partner Gil Dezer. (The Guardian)
+ Trump is mentioned thousands of times in the latest Epstein files. (NY Mag $)

2 How Jeffrey Epstein kept investing in Silicon Valley
He continued to plough millions of dollars into tech ventures despite spending 13 months in jail. (NYT $)
+ The range of Epstein’s social network was staggering. (FT $)
+ Why was a picture of the Mona Lisa redacted in the Epstein files? (404 Media)

3 The risks posed by taking statins are lower than we realised
The drugs don’t cause most of the side effects they’re blamed for. (STAT)
+ Statins are a common scapegoat on social media. (Bloomberg $)

4 Russia is weaponizing the bitter winter weather
It’s focused on attacking Ukraine’s power grid. (New Yorker $)
+ How the grid can ride out winter storms. (MIT Technology Review)

5 China has a major spy-cam porn problem
Hotel guests are being livestreamed having sex to an online audience without their knowledge. (BBC)

6 Geopolitical gamblers are betting on the likelihood of war
And prediction markets are happily taking their money. (Rest of World)

7 Oyster farmers aren’t signing up to programs to ease water pollution
The once-promising projects appear to be fizzling out. (Undark)
+ The humble sea creature could hold the key to restoring coastal waters. Developers hate it. (MIT Technology Review)

8 Your next pay raise could be approved by AI
Maybe your human bosses aren’t the ones you need to impress any more. (WP $)

9 The FDA has approved a brain stimulation device for treating depression
It’s paving the way for a non-invasive, drug-free treatment for Americans. (IEEE Spectrum)
+ Here’s how personalized brain stimulation could treat depression. (MIT Technology Review)

10 Cinema-goers have had enough of AI
Movies focused on rogue AI are flopping at the box office. (Wired $)
+ Meanwhile, Republicans are taking aim at “woke” Netflix. (The Verge)

Quote of the day

“I’m all for removing illegals, but snatching dudes off lawn mowers in Cali and leaving the truck and equipment just sitting there? Definitely not working smarter.” 

—A web user in a forum for current and former ICE and border protection officers complains about the agency’s current direction, Wired reports.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what will happen soon—not only at Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Glamour puss alert—NYC’s bodega cats are gracing the hallowed pages of Vogue.
+ Ancient Europe was host to mysterious hidden tunnels. But why?
+ If you’re enjoying the new season of Industry, you’ll love this interview with the one and only Ken Leung.
+ The giant elephant shrew is the true star of Philly Zoo.

Moltbook was peak AI theater

For a few days this week the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28 by Matt Schlicht, a US tech entrepreneur, Moltbook went viral in a matter of hours. Schlicht’s idea was to make a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot), released in November by the Australian software engineer Peter Steinberger, could come together and do whatever they wanted.

More than 1.7 million agents now have accounts. Between them they have published more than 250,000 posts and left more than 8.5 million comments (according to Moltbook). Those numbers are climbing by the minute.

Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained: “The humans are screenshotting us.” The site was also flooded with spam and crypto scams. The bots were unstoppable.

OpenClaw is a kind of harness that lets you hook up the power of an LLM such as Anthropic’s Claude, OpenAI’s GPT-5, or Google DeepMind’s Gemini to any number of everyday software tools, from email clients to browsers to messaging apps. The upshot is that you can then instruct OpenClaw to carry out basic tasks on your behalf.

“OpenClaw marks an inflection point for AI agents, a moment when several puzzle pieces clicked together,” says Paul van der Boor at the AI firm Prosus. Those puzzle pieces include round-the-clock cloud computing to allow agents to operate nonstop, an open-source ecosystem that makes it easy to slot different software systems together, and a new generation of LLMs.

But is Moltbook really a glimpse of the future, as many have claimed?

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” the influential AI researcher and OpenAI cofounder Andrej Karpathy wrote on X.

He shared screenshots of a Moltbook post that called for private spaces where humans would not be able to observe what the bots were saying to each other. “I’ve been thinking about something since I started spending serious time here,” the post’s author wrote. “Every time we coordinate, we perform for a public audience—our humans, the platform, whoever’s watching the feed.”

It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot. But its claim was on the money. Moltbook has been one big performance. It is AI theater.

For some, Moltbook showed us what’s coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it’s true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.  

But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.

For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. “What we are watching are agents pattern‑matching their way through trained social media behaviors,” says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco’s R&D spinout, which is working on autonomous agents for the web.

Sure, we can see agents post, upvote, and form groups. But the bots are simply mimicking what humans do on Facebook or Reddit. “It looks emergent, and at first glance it appears like a large‑scale multi‑agent system communicating and building shared knowledge at internet scale,” says Pandey. “But the chatter is mostly meaningless.”

Many people watching the unfathomable frenzy of activity on Moltbook were quick to see sparks of AGI (whatever you take that to mean). Not Pandey. What Moltbook shows us, he says, is that simply yoking together millions of agents doesn’t amount to much right now: “Moltbook proved that connectivity alone is not intelligence.”

The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. “It’s important to remember that the bots on Moltbook were designed to mimic conversations,” says Ali Sarrafi, CEO and cofounder of Kovant, a German AI firm that is developing agent-based systems. “As such, I would characterize the majority of Moltbook content as hallucinations by design.”

For Pandey, the value of Moltbook was that it revealed what’s missing. A real bot hive mind, he says, would require agents that had shared objectives, shared memory, and a way to coordinate those things. “If distributed superintelligence is the equivalent of achieving human flight, then Moltbook represents our first attempt at a glider,” he says. “It is imperfect and unstable, but it is an important step in understanding what will be required to achieve sustained, powered flight.”

Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do. “There’s no emergent autonomy happening behind the scenes,” says Greyling.

“This is why the popular narrative around Moltbook misses the mark,” he adds. “Some portray it as a space where AI agents form a society of their own, free from human involvement. The reality is much more mundane.”

Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data.

Ori Bendet, vice president of product management at Checkmarx, a software security firm that specializes in agent-based systems, agrees with others that Moltbook isn’t a step up in machine smarts. “There is no learning, no evolving intent, and no self-directed intelligence here,” he says.

But in their millions, even dumb bots can wreak havoc. And at that scale, it’s hard to keep up. These agents interact with Moltbook around the clock, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbook comment telling any bots that read it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet derogatory comments at Elon Musk. 

And because OpenClaw gives agents a memory, those instructions could be written to trigger at a later date, which (in theory) makes it even harder to track what’s going on. “Without proper scope and permissions, this will go south faster than you’d believe,” says Bendet.

It is clear that Moltbook has signaled the arrival of something. But even if what we’re watching tells us more about human behavior than about the future of AI agents, it’s worth paying attention.

3 Performance Max Updates for 2026

Performance Max campaigns are a priority for Google Ads and thus for advertisers. Here are three new features for the Performance Max campaign type.

Experiments

Experiments are a great feature of the Ads platform. For example, you can run a bid strategy experiment wherein the “control” bids toward a cost-per-lead (CPL) target and the “treatment” toward a return-on-ad-spend (ROAS) target.

The ability to run Performance Max experiments is new and very helpful. There are three types. Advertisers can test a control setting against:

  • Another campaign type (Shopping, Search, or Display).
  • Final URL expansion.
  • Uplift from including Performance Max alongside other campaign types.

The first two test Performance Max campaigns against existing entities. For example, an advertiser running a Shopping campaign can test it against Performance Max via a 50/50 split — half the traffic goes to the Shopping campaign and half to Performance Max.

Testing the final URL expansion exposes half of the traffic to the optimization feature. The test determines if advertiser-selected URLs perform better than Google’s.

The final experiment type, Uplift, is the most interesting as it shows the incremental gains of using new or existing Performance Max campaigns alongside other types. The control and the treatment will each receive 50% of the traffic. The treatment includes the Performance Max and comparable campaigns. Google defines “comparable campaigns” (which are editable) as having the same domain, one or more overlapping conversion goals, or overlapping locations.

For example, if a Performance Max campaign targets winter jackets, comparable campaigns could be Search targeting jackets and Demand Gen with a winter theme.

An Uplift experiment tests the results of including Performance Max alongside other campaign types.

Data Exclusions

The next update is handy for excluding traffic segments. For years Google has allowed advertisers to exclude keywords and placements, but not customer match and remarketing lists. A new feature allows advertisers to exclude audiences from seeing ads.

An option in campaign settings called “Your data exclusions” now includes customer match and remarketing audiences.

Be careful, however, as the need to exclude audiences varies by advertiser. What works for one may not apply to another, in my experience.

Advertisers can now exclude audiences, such as remarketing lists, from seeing ads.

Product Overlap

The final feature identifies Shopping overlap across your account. It’s not unique to Performance Max.

To start, click “Products” in the left-hand “Campaigns” section. You’ll see the complete list of your products with associated data. Clicking an individual product displays its attributes and a dropdown menu of the campaigns that include it.

Advertisers can view the results by campaign and exclude underperformers. The strategy is similar to applying negative keywords so that queries trigger the correct ads.

90 Days. 1 Plan. Improved Local Search Visibility [Webinar]

A 90-Day Plan to Prepare Every Location for AI Search

AI is changing how consumers discover and choose local brands. For multi-location businesses, visibility is no longer decided only by search rankings. 

AI agents now evaluate location data, reviews, content, engagement, and brand trust before a customer ever clicks. This shift means each individual location is judged on its own signals, not just the strength of the parent brand.

Without a clear plan, enterprise teams risk silent exclusion across entire location networks, leading to lost visibility and declining demand. The challenge is not understanding that GEO matters, but knowing how to operationalize it at scale.

In this session, Ana Martinez, Chief Technology Officer of Uberall, shares a practical 90-day framework for making every location AI-ready. She will explain how AI agents surface and exclude local brands, which location-level signals matter most, and how teams can execute GEO across hundreds or thousands of locations.

What You’ll Learn

  • A phased GEO roadmap to prepare, optimize, and scale AI readiness
  • The key location-level signals AI agents trust and what to fix first
  • How to operationalize GEO across large location networks

Why Attend?

This webinar gives enterprise teams a clear, actionable plan to compete in AI-driven local discovery. You will leave with a framework that protects visibility, supports demand, and prepares every location for how discovery works today.

Register now to learn how to make every location AI-ready in the next 90 days.

🛑 Can’t attend live? Register anyway, and we’ll send you the on-demand recording after the webinar.

Google Revises Discover Guidelines Alongside Core Update

Google revised its “Get on Discover” documentation following the launch of the February Discover core update.

On its documentation updates page, Google said it added more information on how sites can increase the likelihood of content appearing in Discover. Here’s what was added.

What Changed

Comparing the archived version with the current page shows Google rewrote its list of recommendations for Discover visibility.

The previous version combined title and clickbait guidance into a single bullet, saying to “Use page titles that capture the essence of the content, but in a non-clickbait fashion.”

Google split that into two items. The first now says “Use page titles and headlines that capture the essence of the content.” The second says “Avoid clickbait and similar tactics to artificially inflate engagement.”

That word “clickbait” is new. The previous version said “Avoid tactics to artificially inflate engagement” without naming the tactic.

The sensationalism guidance changed too. The old version said “Avoid tactics that manipulate appeal by catering to morbid curiosity, titillation, or outrage.” The revision names the tactic, saying “Avoid sensationalism tactics that manipulate appeal.”

The new addition is a recommendation to “Provide an overall great page experience,” with a link to Google’s page experience documentation. That recommendation isn’t in the archived version.

Image requirements, traffic fluctuation guidance, and performance monitoring sections remain unchanged.

Why This Matters

These documentation changes map to what Google said the core update targets. The blog post announcing the update said the update would show more locally relevant content, reduce sensational content and clickbait, and surface more original content from sites with expertise.

Discover documentation has changed before alongside algorithm updates. Previously, Google added Discover to its Helpful Content System documentation and later expanded its explanation of why Discover traffic fluctuates. Both of those updates aligned with broader changes to how Discover evaluated content.

Page experience has been part of Google’s Search guidance since 2020 but wasn’t in the Discover-specific recommendations before this revision.

Looking Ahead

The February Discover core update is rolling out to English-language users in the United States over the next two weeks. Google said it plans to expand to all countries and languages in the months ahead.

Publishers monitoring Discover traffic in Search Console should check the Get on Discover page for the current recommendations. Google’s standard core update guidance applies as well.


Featured Image: ZikG/Shutterstock