The human work behind humanoid robots is being hidden

This story originally appeared in The Algorithm, our weekly newsletter on AI.

In January, Nvidia’s Jensen Huang, the head of the world’s most valuable company, proclaimed that we are entering the era of physical AI, when artificial intelligence will move beyond language and chatbots into physically capable machines. (He also said the same thing the year before, by the way.)

The implication—fueled by new demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate the way humans think, learn, and adapt while they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new forms of work emerging around them.

Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to Black Mirror–esque scenarios. A worker in Shanghai, for example, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, Rest of World reported. In North America, the robotics company Figure appears to be planning something similar: It announced in September it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)

Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already beginning. The roboticist Aaron Prather told me about recent work with a delivery company that had its workers wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubts about it.” 

Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a tricky task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, looking through its cameras to iron clothes or unload the dishwasher.

This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your house through a robot. And if home humanoids are not genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.

We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social media platforms or assembling training data for AI companies often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon train on its own outputs and learn without human input, even the best models require an awful lot of human feedback to work as desired.

These human workforces do not mean that AI is just vaporware. But when they remain invisible, the public consistently overestimates the machines’ actual capabilities.

That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for example, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman (Tesla was ordered to pay $240 million in damages). 

The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics companies remain as opaque about training and tele-operation as AI firms are about their training data. If that does not change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.

Peptides are everywhere. Here’s what you need to know.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next.

Want to lose weight? Get shredded? Stay mentally sharp? A wellness influencer might tell you to take peptides, the latest cure-all in the alternative medicine arsenal. People inject them. They snort them. They combine them into concoctions with superhero names, like the Wolverine stack.  

Matt Kaeberlein, a longevity researcher, first started hearing about peptides a few years ago. “At that point it was mostly functional medicine doctors that were using peptides,” he says, referring to physicians who embrace alternative medicine and supplements. “In the last six months, it’s kind of gone crazy.”

Peptides have gone mainstream. At the health-technology startup Superpower in Los Angeles, employees can get free peptide shots on Fridays. At a health food store in Phoenix, a sidewalk sign reads, “We have peptides!” At a tae kwon do center in South Carolina, a peptide wholesaler hosts an informational evening. On social media, they’re everywhere. And that popularity seems poised to grow; Department of Health and Human Services secretary Robert F. Kennedy Jr. has promised to end the FDA’s “aggressive suppression” of peptides.

The benefits and risks of many of these compounds, however, are largely unknown. Some of the most popular peptides have never been tested in human trials. They are sold for research purposes, not human consumption. Some are illegal knockoffs of wildly successful weight-loss medicines. The vast majority come from China, a fact that has some legislators worried. Last week, Senator Tom Cotton urged the head of the FDA to crack down on illegal shipments of peptides from China. In the absence of regulatory oversight, some people are sending the compounds they purchase off for independent testing just to ensure that the product is legit. 

What is a peptide?

A peptide is simply a short string of amino acids, the building blocks of proteins. “Scientists generally think of peptides as very small protein fragments, but we don’t really have a precise cutoff between a peptide and a protein,” says Paul Knoepfler, a stem-cell researcher at the University of California, Davis. Insulin is a peptide, as is human growth hormone. So are some neurotransmitters, like oxytocin. 

But when wellness influencers talk about peptides, they’re often referring to particular compounds—formulated as injections, pills, or nasal sprays—that have become trendy lately. 

Some of these peptides are FDA-approved prescription medications. GLP-1 medicines, for example, are approved to treat diabetes and obesity but are also easily accessible online to almost anyone who wants to use them. Many sites sell microdoses of GLP-1s with claims that they can “support longevity,” reduce cognitive decline, or curb inflammation. 

Many more peptides are experimental. “The majority fall into the unapproved bucket,” says Kaeberlein, who is chief executive officer of Optispan, a Seattle-based health-care technology company focused on longevity. That bucket includes drugs that promote the release of growth hormones, like TB-500, CJC-1295, and ipamorelin, and compounds said to promote tissue repair and wound healing, like BPC-157 and GHK-Cu. It’s primarily these unapproved compounds that have raised concerns. “Anybody can set up an online shop selling research-grade peptides,” says Tenille Davis, a pharmacist and chief advocacy officer at the Alliance for Pharmacy Compounding, a trade organization representing more than 600 pharmacies. “And nobody knows what’s even in the vials.”  

It’s not just fitness gurus, biohackers, and longevity fanatics who are taking these experimental drugs. Kaeberlein recalls hearing about an acquaintance whose doctor prescribed her unapproved peptides. She was “just a typical upper-middle-class woman,” he says. “That’s when it really hit me that this has sort of gone relatively mainstream.”

What do peptides do?

All kinds of things, purportedly. GHK-Cu is supposed to help with wound healing and collagen production. BPC-157 is said to promote tissue repair and curb inflammation, TB-500 to foster blood vessel formation. Here’s the caveat: The evidence for these benefits comes largely from animal studies and online testimonials, not human trials. “There’s no human clinical evidence to show that they even do what people are claiming that they do,” says Stuart Phillips, a muscle physiologist at McMaster University in Hamilton, Ontario. “So it could be just a giant rip-off.”

Some experimental peptides probably do have beneficial wound healing properties or regenerative effects, Kaeberlein says. For BPC-157, for example, “the animal data is compelling,” he says. But there are still plenty of unknowns: What is the right dosage? How long should you take it? What’s the best way to administer it? Those are questions that can be answered only through rigorous clinical trials. In the absence of those studies, doctors “just make up their own protocols,” he says. Some consumers go the DIY route, reconstituting powdered peptides and injecting their own concoctions at home. 

So why am I seeing ads for these peptide therapies if they’re not approved? 

Federal law prohibits companies from marketing medications that haven’t been approved. That includes most peptides, which are regulated as small molecules, not dietary supplements. (Two notable exceptions are collagen peptides and creatine peptides, often sold as powders.) The law is designed to protect consumers from drugs that haven’t been proved safe and effective.

But it doesn’t stop labs from making peptides for research purposes. “Most of the peptides being consumed in the marketplace now are being sold by these online companies that are selling them labeled for research use only,” Davis says. The vials often bear disclaimers that clearly say as much: “For research use only” or “Not for human consumption.” It’s illegal to market these products for human use, but “the websites make it pretty clear that the buyers are intended to be using these products themselves,” she says.

The practice isn’t legal, but enforcement has been sporadic. “FDA sends warning letters, shuts down companies. But because it’s all online, they have a really hard time keeping up with these entities,” Davis says. And companies have plenty of incentive to keep illegally marketing the products. “They can make millions of dollars without having to spend money and time doing research,” Knoepfler says. “It’s a cash grab.”

Compounding pharmacies, which are legally allowed to create bespoke medications by mixing bulk active ingredients, often get requests to dispense peptides, but most peptides don’t meet the eligibility criteria for compounding. This has always been the case, but in 2023 the FDA explicitly added several common experimental peptides to the list of bulk substances that cannot be compounded because of safety concerns. “It put an exclamation point on policy that was already in place,” Davis says.  

Many GLP-1 medications are available from compounding pharmacies. That used to be accepted because the drugs were in short supply. Now, however, supplies of most of these medications are stable, and sellers are under increasing pressure from regulators to stop mass-marketing these drugs. 

What’s the harm in trying them? 

Peptides sold for research purposes come from labs with little regulatory oversight. “When you buy stuff online intended for research grade, you have no idea what’s in the vial that you’re getting. You have no idea the sterility practices that it was manufactured under, or what sort of impurities might be in the vial,” Davis says.

Phillips has heard some people say they send their peptides for third-party testing to ensure that they’re pure, “like it’s some kind of flex,” he says. “And I’m like, ‘Well, you just proved that this stuff lives in the shadows, for crying out loud.’”

Finnrick Analytics, a peptide-testing startup in Austin, Texas, has analyzed the purity and potency of more than 5,000 samples of 15 different peptides from 173 vendors. The results show that the quality varies substantially from vendor to vendor and even batch to batch. For example, the company tested nearly 450 samples of BPC-157 from 64 vendors. In some cases, the vials sold as BPC-157 didn’t contain the compound at all. In those that did, the purity varied from about 82% to 100%. 

Perhaps more worrying, 8% of all the peptide samples Finnrick tested had measurable levels of endotoxins, bacterial fragments that can cause fever and chills or, in larger doses, septic shock. 

The health risks aren’t just hypothetical. In 2025, two women had to be hospitalized and placed on ventilators after receiving peptide injections at a longevity conference in Las Vegas. Both recovered, and it’s still not clear whether they reacted to the peptides themselves or to some impurity in the vials. 

“The idea that all peptides are safe and all peptides are natural is just nonsense,” Kaeberlein says. “I tend to consider myself fairly libertarian when it comes to what people want to do for their health,” he adds. “If you want to take an experimental drug, that’s up to you.” But the problem with unregulated experimental therapies is that it’s exceedingly difficult to assess benefit and harm. “The relatively small percentage of people that are bad actors will be bad actors, and they will dishonestly market this stuff to people who aren’t equipped to really understand the true risks and rewards,” he says.

And, like any drug, peptides come with a risk of side effects. For approved medications, these are detailed right on the package insert. But for many experimental peptides, there hasn’t been enough research to understand what those side effects might be. Some researchers have warned that peptides that promote growth or blood vessel formation might also foster the growth of cancers.  

For competitive athletes who use peptides, meanwhile, the risks include not just possible health problems but suspension. Some peptides, like BPC-157, are banned by the World Anti-Doping Agency. 

The FDA has undergone a pretty substantial overhaul under the Trump administration. Are the regulations around peptides likely to change? 

I don’t have a crystal ball, but it seems likely. In May 2025, US health secretary Robert F. Kennedy Jr. joined the longevity enthusiast and biohacker Gary Brecka on his podcast The Ultimate Human and promised to “end the war at FDA against alternative medicine—the war on stem cells, the war on chelating drugs, the war on peptides.”

Knoepfler anticipates that Kennedy will force the FDA to allow compounding of some of the most popular peptides, like BPC-157 and GHK-Cu. “Such a step would put public health at great risk, while giving compounders and likely wellness influencers a lot more profit,” he says. 

The FDA seems intent on cracking down on GLP-1 copycats, however. In early February, commissioner Marty Makary posted on X that the agency would take “swift action against companies mass-marketing illegal copycat drugs, claiming they are similar to FDA-approved products.”

AI Prompts Come to Search Console

Google Search Console’s AI analysis went public last week after being available to select users.

The feature, in the Performance > Search results tab, surfaces insights on a site’s organic search performance in response to generative AI-style prompts.

I’ll provide examples in this post.

Search Console’s AI analysis responds to genAI-like prompts.

Query analysis

Instead of using prebuilt filters to discern various types of searches, users can enter prompts, such as:

  • “Show question-like queries.”
  • “Filter queries longer than [xx] characters.”
  • “Show informational queries.”

The feature initially responds to the prompts with regular expressions. For example, for the prompt, “Show queries with informational intent,” it created the following regex:

Queries matching regex: (what|how|why|guide|tutorial|explain|definition|example|best way to|steps to|tips for|reasons why|benefits of|difference between)

I could then click “Apply” to see the filtered results.
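
These generated regexes can also be tested outside of Search Console. Here is a minimal Python sketch that applies the informational-intent pattern above to a handful of made-up queries; the query list is hypothetical, standing in for an export from the Performance report:

import re

# The pattern Search Console's AI analysis generated for informational intent.
INFORMATIONAL = re.compile(
    r"(what|how|why|guide|tutorial|explain|definition|example|best way to|"
    r"steps to|tips for|reasons why|benefits of|difference between)"
)

# Hypothetical queries standing in for a Performance report export.
queries = [
    "how to compost at home",
    "compost bin",
    "benefits of composting",
    "buy compost bin online",
]

# Keep the queries the pattern classifies as informational.
print([q for q in queries if INFORMATIONAL.search(q)])
# ['how to compost at home', 'benefits of composting']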

Similarly, I prompted “Show product research queries.” The feature responded:

Queries matching regex: (best|review|compare|vs|price|buy|alternatives|guide|recommendations|features|specs|cost|discount|deal|sale|where to buy|how to choose|top rated|pros and cons)

Again, I could “Apply” or “Dismiss” that regex.

Brand search

The AI analysis performed surprisingly well on brand-name searches. I prompted, “Show branded queries.” The regex responses included my brand name and a one- and two-word pattern:

Queries matching regex: (brandname|brand name)

Traffic drop

Search Console’s AI analysis can assemble traffic change reports. For example, I prompted:

  • “Show pages that lost the most clicks over the past 30 days.”
  • “Compare clicks last month with the same month of the previous year.”

Country-specific

Users can also evaluate organic search visibility in specific countries:

  • “Show me clicks, Average CTR, and Average Position of my queries in Canada last month.”

Here’s a prompt for traffic changes:

  • “Show pages that lost the most clicks over the past 30 days in Canada.”

Limitations

The new AI feature, while helpful, is not a game-changer. Inexperienced users do not typically know what to ask, while seasoned pros can go directly to the prebuilt filters.

Moreover, the AI integration works only with top filters. It cannot process requests for filters unavailable in the Performance reports. For example, it cannot respond to prompts for queries with an average position greater than 2 (queries indicating room for improvement).

Default Performance reports allow filtering only on position 1.

Google Discover Update: Early Data Shows Fewer Domains In US

NewzDash published an analysis comparing Discover visibility before and after Google’s February 2026 Discover core update, using panel data from millions of US users tracked through its DiscoverPulse tool.

It compared pre-update (Jan 25-31) and post-update (Feb 8-14) windows across the top 1,000 domains and top 1,000 articles in the US, California, and New York.

For transparency, NewzDash is a news SEO tracking platform that sells Discover monitoring tools.

What The Data Shows

Google said the update targeted more locally relevant content, less sensational and clickbait content, and more in-depth, timely content from sites with topic expertise. The NewzDash data has early readings on all three.

NewzDash compared Discover feeds in California, New York, and the US as a whole. The three feeds mostly overlapped, but each state got local stories the others didn’t. New York-local domains appeared roughly five times more often in the New York feed than in the California feed, and vice versa.

In California, local articles in the top 100 placements rose from 10 to 16 in the post-update window. The local layer included content from publishers like SFGate and LA Times that didn’t appear in the national top 100 during the same period.

Clickbait reduction was harder to confirm. NewzDash acknowledged that headline markers alone can’t prove clickbait decreased. It did find that what it called “templated curiosity-gap patterns” appeared to lose visibility. Yahoo’s presence in the US top 1,000 dropped from 11 to 6 articles, with zero items in the top 100 post-update.

Unique content categories grew across all three geographic views, but unique publishers shrank in the US (172 to 158 domains) and California (187 to 177). That combination suggests Discover is covering more topics but concentrating that traffic among a narrower set of publishers.

This pattern aligns with what early December core update analysis showed about specialized sites gaining ground over generalists.

X.com’s Growing Discover Presence

X.com posts from institutional accounts climbed from 3 to 13 items in the US top 100 Discover placements and from 2 to 14 in New York’s top 100.

NewzDash noted it had tracked X.com’s Discover growth since November and said the update appeared to accelerate the trend. Most top-performing X items came from established media brands.

The analysis noted it couldn’t prove or disprove whether X posts are cannibalizing publisher traffic in Discover, calling the data a “directional sanity check.” The open question is whether routing through X adds friction that could reduce click-through to owned pages.

Why This Matters

As we continue to monitor the Discover core update, we now have early data on what it seems to favor. Regional publishers with locally relevant content showed up more often in NewzDash’s post-update top lists.

Discover covered more topics in the post-update window, but fewer sites were getting that traffic in the US and California. Publishers without a clear topic focus could be on the wrong side of that trend.

Looking Ahead

This analysis covers an early window while the rollout is still being completed. The post-update measurement period overlaps with the Super Bowl, Winter Olympics, and ICC Men’s T20 World Cup, any of which could independently inflate News and Sports category visibility.

Google said it plans to expand the Discover core update beyond English-language US users in the months ahead.



SerpApi Challenges Google’s Right To Sue Over SERP Scraping

SerpApi filed a motion to dismiss Google’s federal lawsuit, two months after Google sued the company under the DMCA for allegedly bypassing its SearchGuard anti-scraping system.

The filing goes beyond disputing the technical allegations. SerpApi is challenging whether Google has the legal right to bring the case at all.

The Standing Question

SerpApi’s core argument is that the DMCA protects copyright owners, not companies that display others’ content.

Google’s complaint cited licensed images in Knowledge Panels, merchant-supplied photos in Shopping results, and third-party content in Maps as examples of copyrighted material SerpApi allegedly scraped.

SerpApi CEO Julien Khaleghy wrote that the content in Google’s search results belongs to publishers, authors, and creators, not to Google.

Khaleghy writes:

“Google is a website operator. It is not the copyright holder of the information it surfaces.”

Khaleghy argued that only a copyright holder can authorize access controls under the DMCA. Google, he wrote, is trying to assert those rights without the knowledge or consent of the creators whose work is at issue.

In the 31-page motion, SerpApi invokes the Supreme Court’s 2014 ruling in Lexmark International, Inc. v. Static Control Components, Inc., which established that a plaintiff must show injuries within the “zone of interests” the law was designed to protect. SerpApi argues Google’s alleged injuries, including infrastructure costs and lost ad revenue from automated queries, don’t fall within what the DMCA was built to address.

The Circumvention Question

SerpApi also disputes whether bypassing SearchGuard counts as circumvention under the DMCA.

Google alleged in December that SerpApi solved JavaScript challenges, used rotating IP addresses, and mimicked human browser behavior to get past SearchGuard.

Khaleghy wrote that the DMCA defines “to circumvent a technological measure,” in part, as “to descramble a scrambled work, to decrypt an encrypted work, or otherwise to avoid, bypass, remove, deactivate, or impair a technological measure,” and argued SerpApi does none of those things.

Khaleghy writes:

“We access publicly visible web pages, the same ones accessible to any browser. We do not break encryption. We do not disable authentication systems.”

The motion states Google “does not allege unscrambling or decryption of any work, or the impairment, deactivation, or removal of any access system.” SerpApi calls SearchGuard a bot-management tool, not a copyright access control.

Why This Matters

The outcome could reach beyond SerpApi. Google’s DMCA theory, if accepted, would let any platform displaying licensed third-party content use the statute to block automated access to publicly visible pages.

When we covered Google’s original filing in December, I noted the central question was whether SearchGuard qualifies as a DMCA-protected access control. SerpApi’s motion now adds a layer underneath that. Even if SearchGuard qualifies, SerpApi argues Google isn’t the right party to enforce it.

In a separate case decided on December 15, 2025, U.S. District Judge Sidney Stein dismissed Ziff Davis’s DMCA Section 1201(a) anti-circumvention claim tied to robots.txt against OpenAI, holding Ziff Davis failed to plausibly allege that robots.txt is a technological measure that effectively controls access, or that OpenAI circumvented it.

Google’s SearchGuard is more technically complex than a robots.txt directive, but both cases test whether the DMCA can be used to restrict automated access to publicly available content.

Looking Ahead

The hearing on SerpApi’s motion is scheduled for May 19, 2026. Google will file its opposition before then.

SerpApi also filed a motion to dismiss in a separate lawsuit brought by Reddit in October, which named SerpApi alongside Perplexity, Oxylabs, and AWMProxy. Both cases raise questions about using DMCA anti-circumvention claims to challenge bot evasion and automated access to pages that are viewable in a normal browser.



4 Sites That Recovered From Google’s December 2025 Core Update – What They Changed

The December 2025 core update had a significant impact on a large number of sites. Each of the sites below that has done well is either a long-term client, a past client, or a site that I have done a site review for. While we can never say with certainty why a site moved after a change to Google’s core algorithms and systems, I’ll share some observations on what I think helped these sites improve.

1. Trust Matters Immensely

This first client, a medical eCommerce site, reached out to me in mid-2024, and we started a long-term engagement. A few days into our relationship, they were hit hard by the August 2024 core update. It was devastating.

When you are impacted by a core update, in most cases, you remain suppressed until another core update happens. It usually takes several core updates. And given that these only happen a few times a year, this site remained suppressed for quite some time.

We worked on a lot of things:

  • Improving blog post quality so it was not “commodity content”.
  • Improving page load time.
  • Optimizing images.
  • Improving FAQ content on product pages to help answer customer questions.
  • Creating helpful guides.
  • Improving product descriptions to better answer questions their customers have.
  • Adding more information about the E-E-A-T of authors.
  • Adding more authors with medical E-E-A-T.
  • Getting more reviews from satisfied customers.

While I think that all of the above helped contribute to a better assessment of quality for this site, I actually think that what helped the most had very little to do with SEO; rather, it was the result of the business working hard to truly improve its customer service.

Core updates are tightly connected to E-E-A-T. Google says that trust is the most important aspect of E-E-A-T. The quality rater guidelines, the instructions for the human raters whose feedback helps Google train and evaluate its AI-based ranking systems, mention “trust” 191 times.

For online stores, the raters are told that reliable customer service is vitally important.


A few bad reviews aren’t likely to tank your rankings, but this business had previously had significant logistical problems with shipping. They had been working hard to rectify these. Yet, if I asked AI Mode to tell me about the reputation of this company compared to their competitors, it would always tell me that there were serious concerns.

Here’s an interesting prompt you can use in AI Mode:

Make a chart showing the perceived trust in [url or brand] over time.

You can see that finally in 2025 the overall trust in this brand improved.


My suspicion is that these trust issues were the main driver in their core update suppression. I can’t say whether it was the improvement in customer trust that made a difference, the improvements in quality we made, or perhaps both. But these results were so good to see.


They continue to improve. Google recommends them more often in Popular Products carousels, ranks them more highly for many important terms, and, most importantly, drives far more sales for them now.

2. Original Content Takes A Lot Of Work

The next site was also impacted by a core update.

This site is an affiliate site that writes about a big-ticket product. They have a lot of competition from some big players in their industry. When I reviewed their site, one thing was obvious to me: while they had a lot of content, most of it offered essentially the same value as everyone else’s. This was frustrating considering they actually did purchase and review these products. What they were writing was mostly a collection of known facts about these products rather than their personal experience. And what was experiential was buried in massive walls of text that were difficult for readers to navigate.

Google’s guidance on core updates recommends that if you were impacted, you should consider rewriting or restructuring your content to make it easier for your audience to read and navigate the page.


This site put an incredible amount of work into improving their content quality:

  • They purchased the products they reviewed and took detailed photos of everything they discussed. And videos. Really helpful videos.
  • The blog posts were written by an expert in their field. This was already the case, but we worked on making it clearer what their expertise was and why it was helpful.
  • We brainstormed with AI to help us come up with ideas for adding helpful, unique information born of their experience and not likely to be found on other sites.
  • We used Microsoft Clarity to identify aspects of pages that were frustrating users and worked to improve them.
  • We added interactive quizzes to help readers and drive engagement.
  • We worked on improving freshness for every important post, ensuring they were up to date with the latest information.
  • We worked to really put ourselves in the shoes of a searcher and understand what they wanted to see. We made sure that this information was easy to find even if a reader was skimming.
  • We broke up large walls of text into chunks with good headings that were easy to skim and navigate.
  • We noindexed pages on YMYL topics for which they lacked expertise.
  • We worked on improving core web vitals. (Note: I don’t think this is a huge ranking factor, but in this case the largest contentful paint was taking forever and likely frustrated users.)

Once again, it took many months of tireless work before improvements were seen! Rankings improved to the first page for many important keywords, and some moved from page 4 to positions 1-3.


3. Work To Improve User Experience

This next site was not a long-term client, but rather a site review I did for an eCommerce site in a YMYL niche. The SEO working on this site applied many of my recommendations and made some other smart changes as well, including:

  • Improving site navigation and hierarchy.
  • Improved UX. They have a nicer, more modern font, and the site looks more professional.
  • Improved the checkout flow, which reduced checkout abandonment.
  • Improved their About Us page to add more information to demonstrate the brand’s experience and history. Note: I don’t think this matters immensely to Google’s algorithms as most of their assessment of trust is made from off-site signals, but it may help users feel more comfortable with engaging.
  • Produced content around some topics that were gaining public attention. This did help to truly earn some new links and mentions from authoritative sources.

After making these changes, the site was able to earn a knowledge panel for brand searches. And search traffic is climbing.


4. First Hand Experience Can Really Help

This next site is another one that I did a site review for. It is a city guide that monetizes through affiliate links and sponsors. For every page I looked at, I came to the same conclusion: there was nothing on the page that couldn’t be covered by an AI Overview. Almost every piece of information was essentially paraphrased from somewhere else on the web.

The most recent update to the rater guidelines increased the use of the word “paraphrased” from 3 mentions to 25. I think this applies to a lot of sites!


Yet, when I spoke with the site owner, she shared with me that they had on-site writers who were truly writing from their experience.

While I don’t know specifically what changes this site owner has made, I looked at several pages that had seen nice improvements in conjunction with the core update and noticed the following improvements:

  • They’ve added video to some posts – filmed by their team.
  • There’s original photography from their team – not taken from elsewhere on the web. Not every photo is original, but quite a few of them are.
  • They’ve added information to help readers make their decision, like “This place is best for…” or “Must-try dishes include…”
  • They wrote about their actual experiences. Rather than just sharing what dishes were available at a restaurant, they share which ones they tried and how they felt they stood out compared to other restaurants.
  • They’ve worked to keep content updated and fresh.

This site saw some nice improvements. However, they still have ground to gain as they previously were doing much better in the days before the helpful content updates.


Some Thoughts For Sites That Have Not Done Well

The December 2025 core update had a devastating negative impact on many sites. If you were impacted, your answer is unlikely to lie in technical SEO fixes, disavowing links, or building new links. Google’s ranking systems are a collection of AI systems that work together with one goal in mind – to present searchers with pages that they are likely to find helpful. Many components of the ranking systems are deep learning systems, which means that their recommendations improve over time.

I’d recommend the following for you:

1. Consider Whether The Brand Has Trust Issues

You can try the AI Mode prompt I used above. A few bad reviews are not going to cause a core update suppression. But a prolonged history of repeated customer service frustrations, fraud, or anything else that significantly impacts your reputation can seriously impact your ability to rank. This is especially true if you are writing on YMYL topics.

2. Look At How Your Content Is Structured

It is a helpful exercise to look at which pages Google’s algorithms are ranking for your queries. If they don’t seem to make sense to you, look at how quickly they get people to the answer they are trying to find. I have found that often sites that are impacted make their readers scroll through a lot of fluff or ads to get to the important bits. Improve your headings – not for search engines, but for readers who are skimming. Put the important parts at the top. Or, if that’s not feasible, make it really easy for people to find the “main content”.

Here’s a good exercise: open up the rater guidelines. These are guidelines for the human raters who help Google understand whether its AI systems are producing good, helpful rankings. CTRL-F for “main content” and see what you can learn.

3. Really Ask Yourself Whether Your Content Is Mostly “Commodity Content”

Commodity content is information that is widely available in many places on the web. There was a time when a business could thrive by writing pages that aggregate known information on a topic. Now that Google has AI Overviews and AI Mode, this type of page is much less valuable. You will still see some pages cited in AI Overviews that essentially parrot what is already in the AIO. Usually these are authoritative sites which are helpful for readers who want to see information from an authority rather than an AI answer.

Liz Reid from Google made these interesting comments in an interview with The Wall Street Journal:

“What people click on in AI Overviews is content that is richer and deeper. That surface level AI generated content, people don’t want that, because if they click on that they don’t actually learn that much more than they previously got. They don’t trust the result any more across the web. So what we see with AI Overviews is that we sort of surface these sites and get fewer, what we call bounced clicks. A bounced click is like, you click on this site and you’re like, “Ah, I didn’t want that” and you go back. And so AI Overviews give some content and then we get to surface sort of deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not AI generated.”

Here is a good exercise to try on some of the pages that declined with the core update. Give your URL to, or copy your page’s content into, your favourite LLM and use this prompt:

“What are 10 concepts that are discussed in this page? For each concept tell me whether this topic has been widely written about online. Does this content I am sharing with you add anything truly uniquely interesting and original to the body of knowledge that already exists? Your goal here is to be brutally honest and not just flatter me. I want to know if this page is likely to be considered commodity content or whether it truly is content that is richer and deeper than other pages available on the web.”

You can follow this up with this prompt:

“Give me 10 ideas that I can use to truly create content that goes deeper on these topics? How can I draw from my real world experience to produce this kind of content?”

Concluding Thoughts

I’ve been studying Google updates for a long time – since the early days of the Panda and Penguin updates. I built a business on helping sites recover from Google update hits. However, over the years I have found it increasingly difficult for a site that is impacted by a Google update to recover. This is why today, although I do still love doing site reviews to give you ideas for improving, I generally decline work with sites that have been strongly impacted by Google updates. While recovery is possible, it generally takes a year or more of hard work, and even then, recovery is not guaranteed, as Google’s algorithms and people’s preferences are continually changing.

The sites that saw nice recovery with this Google update were sites that worked on things like:

  • Truly improving the world’s perception of their customer service.
  • Creating original and insightful content that was substantially better than other pages that exist.
  • Using their own imagery and videos in many cases.
  • Working hard to improve user experience.

If you missed it, I recently published a video about what we learned about the role of user satisfaction signals in Google’s algorithms. Traditional ranking factors create an initial pool of results. AI systems rerank them, working to predict what the searcher will find most helpful. And the quality raters, as well as live users in live user tests, help fine-tune these systems.


Ultimately, Google’s systems work to reward content that users are likely to find satisfying. Your goal is to be the most helpful result there is!


Agentic Commerce Optimization: A Technical Guide To Prepare For Google’s UCP

In January, I wrote about the birth of agentic commerce through both the Agentic Commerce Protocol (ACP) and the Universal Commerce Protocol (UCP), and how they could impact us all as consumers, business owners, and SEOs. While we still sit on waitlists for both, that doesn’t mean we can’t prepare.

UCP fixes a real-life problem for many by minimizing the fragmented commerce journey. Instead of building separate integrations for every agent platform, as we have mostly been doing, you can [theoretically] integrate once and work seamlessly with other tools and platforms.

But note that, as opposed to ACP, which focuses on the checkout → fulfillment → payment journey, UCP goes beyond this with six capabilities covering the entire commerce lifecycle.

This, of course, will impact an SEO’s ambit. As we shift from optimizing for clicks to optimizing for selection, we also need to ensure that it’s you/your client that is selected through data integrity, product signals, and AI-readable commerce capabilities. Structured data has always served an important role for the internet as a whole and will continue to be the driving force on how you can serve agents, crawlers, and humans in the best way possible.

I’ll propose a possible new acronym, “ACO” – Agentic Commerce Optimization – and the following could be considered the closest we can get to guidelines on how to undertake it.

UCP Isn’t Coming, It’s Here

UCP was only announced in January, but there’s already confirmation that its capabilities are rolling out. On Feb. 11, 2026, Vidhya Srinivasan (VP/GM of Advertising & Commerce at Google) announced that Wayfair and Etsy now use UCP so that you can purchase directly within AI Mode, a change Brodie Clark observed the next day.

UCP’s Six Layered Capabilities

On the day UCP was released, Google explained its methodology.

From this, I defined six core capabilities:

  1. Product Discovery – how agents find and surface your inventory during research.
  2. Cart Management – multi-item baskets, dynamic pricing, complex basket rules.
  3. Identity Linking – OAuth 2.0 authorization for personalized experiences and loyalty.
  4. Checkout – session creation, tax calculation, payment handling.
  5. Order Management – webhook-based lifecycle and logistical updates (a minimal receiver sketch follows this list).
  6. Vertical Capabilities – extensible modules for specialized use cases like travel booking windows or subscription schedules.
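
To make the Order Management capability concrete, here is a minimal sketch of a merchant-side webhook receiver in Python, using only the standard library. The endpoint path and payload fields (order_id, status) are assumptions for illustration only; the real contract is defined in the UCP documentation:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Hypothetical endpoint path for order lifecycle events.
        if self.path != "/webhooks/ucp/orders":
            self.send_response(404)
            self.end_headers()
            return

        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))

        # Hypothetical fields: an order identifier and its new lifecycle status.
        order_id = event.get("order_id")
        status = event.get("status")  # e.g., "shipped", "delivered", "returned"
        print(f"Order {order_id} is now: {status}")

        # Acknowledge quickly; do heavy work (emails, ERP sync) asynchronously.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), OrderWebhookHandler).serve_forever()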

UCP’s schema authoring guide shows how capabilities are defined through versioned JSON schemas, which act as the foundation of the protocol. For SEOs, the takeaway is that properties such as offers, aggregateRating, and shippingDetails aren’t just for surfacing rich snippets for product discovery; they’re now what agents query during the entire process.

Schema Is, And Will Continue To Be, Essential

UCP’s technical specification uses its own JSON schema-based vocabulary. While it doesn’t build on schema.org directly, schema.org remains critical in the broader ecosystem. As Pascal Fleury said at Google Search Central Live in December, “schema is the glue that binds all these ontologies together.” UCP handles the transaction; schema.org helps agents decide who to transact with.

Stay on top of product schema and populate it as fully as you can. It may seem like SEO 101; regardless, audit all of it now to ensure you’re not missing anything when UCP really rolls out.

This includes checks on the following (a minimal markup sketch follows the list):

  • Product schema (with complete coverage): All core fields: name, description, SKU, GTIN, brand, related images, and offers.
  • Offers must include: price, priceCurrency, availability, url, and seller. Add aggregateRating and review to ensure you have a positive third-party perspective.
  • Ensure all product variants output correctly.
  • Include shippingDetails with delivery estimates.
  • Organization and Brand: Assists with “Merchant of Record” verification. If you’re not an Organization, then fall back to Person.
  • Designated FAQPage: Ensure you have an FAQPage, as these can be incorporated alongside product-level FAQs and used as part of an agent’s decision-making.
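
For reference, here is roughly what that checklist looks like when assembled into Product markup. This is a generic sketch with an invented product, written as a small Python script that emits standard schema.org JSON-LD for embedding in a <script type="application/ld+json"> tag:

import json

# Hypothetical product record; in practice this would come from your catalog.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "description": "Cushioned road-running shoe.",
    "sku": "SHOE-123",
    "gtin13": "0123456789012",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "image": ["https://example.com/images/shoe-123.jpg"],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "214",
    },
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/shoe-123",
        "seller": {"@type": "Organization", "name": "Example Store"},
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "deliveryTime": {
                "@type": "ShippingDeliveryTime",
                "transitTime": {
                    "@type": "QuantitativeValue",
                    "minValue": 2,
                    "maxValue": 5,
                    "unitCode": "DAY",
                },
            },
        },
    },
}

# Emit the JSON-LD block.
print(json.dumps(product, indent=2))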

Prepare Your Merchant Center Feed

UCP will utilize your existing Merchant Center feed as the discovery layer. This means that beyond the normal on-site schema you provide, Merchant Center itself requires more details that you can populate within its platform.

  • Return policies (required to be a Merchant of Record): Complete all return costs, return windows, and policy links. These will be used not just within the checkout and transactional areas, but are again a consideration for being selected at all. Advanced accounts need policies at each sub-account level.
  • Customer support information: Not only would initial information be offered to the customer, but there may be ways in which entry-level customer support queries can be completely managed, increasing customer satisfaction while reducing the load on customer support agents.
  • Agentic checkout eligibility: Add the native_commerce attribute to your feed, as products are only eligible here if this is set up.
  • Product identifiers: Each product must have an ID that correlates to the product ID used with the checkout API.
  • Product consumer warnings: Any product warning should be asserted via the consumer_notice attribute.

Google recommends that this be done through a supplemental data source in Merchant Center rather than modifying your primary feed, which would prevent incorrect formatting or other invalidation.
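
In practice, a supplemental source can be as simple as a tab-delimited file keyed on the product ID. A minimal Python sketch follows; the native_commerce and consumer_notice attribute names come from the guidance above, but the rows and accepted values shown are assumptions, so verify them against Google's documentation before uploading:

import csv

# Hypothetical rows: product ID, agentic-checkout eligibility, consumer warning.
rows = [
    ("SHOE-123", "true", ""),
    ("KNIFE-001", "true", "Sharp blade. Keep away from children."),
]

# A supplemental feed layers these attributes onto the primary feed by ID.
with open("supplemental_feed.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["id", "native_commerce", "consumer_notice"])
    writer.writerows(rows)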

Lastly, double-check that the products you’re selling aren’t included in the product restrictions list; if you do offer restricted items, consider how to manage them alongside the abilities of UCP.

Optimizing Conversational Commerce Attributes

Within the UCP announcement blog post, Srinivasan introduced conversational commerce attributes, intended to bring more clarity:

“…we’re announcing dozens of new data attributes in Merchant Center designed for easy discovery in the conversational commerce era, on surfaces like AI Mode, Gemini and Business Agent. These new attributes complement retailers’ existing data feeds and go beyond traditional keywords to include things like answers to common product questions, compatible accessories or substitutes.”

These provide further clarity (and therefore minimize hallucinations) during the discovery process, helping a product be accurately selected or ruled out.

Not only would this incorporate product- and brand-related FAQs, but you can take it a step further and also consider:

  • Compatibility: Potential up-sell opportunities.
  • Substitution: An opportunity for dealing with out-of-stock items.
  • Related products: Great for cross-sell opportunities.

Furthermore, this can be used to get even more specific, moving beyond basic attributes to agent-parseable details. If a product is “purple” on a basic level, “dark purple” or even something unobvious, such as “Wolf” (real example below), may be more appropriate for finer detail while still falling under “purple.” The same can be considered for sizes, materials (or a mixture of materials), etc.
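
One way to think about that granularity is to keep the fine-grained merchandising value and the basic, filterable value side by side, so an agent can match a request for either “Wolf” or “purple.” A small hypothetical Python sketch (the field names are illustrative, not official Merchant Center attributes):

# Hypothetical variant record pairing a fine-grained color with its basic bucket.
variant = {
    "id": "SHOE-123-WOLF",
    "color_basic": "purple",         # coarse value an agent can filter on
    "color_merchandising": "Wolf",   # fine-grained name shown to shoppers
    "substitutes": ["SHOE-123-ECLIPSE"],          # out-of-stock fallbacks
    "compatible_accessories": ["LACES-PURPLE"],   # cross-sell candidates
}

def matches_color(variant: dict, requested: str) -> bool:
    """Match a shopper's color request at either level of detail."""
    requested = requested.lower()
    return requested in (
        variant["color_basic"].lower(),
        variant["color_merchandising"].lower(),
    )

print(matches_color(variant, "purple"))  # True
print(matches_color(variant, "Wolf"))    # True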

Multi-Modal Fan-Out Selection

When executed well, optimizing for conversational commerce attributes will increase the possibility of selection within fan-out query results. When considering these attributes, it is worth looking at tools such as WordLift’s Visual Fan-Out simulator, which illustrates how a single image decomposes into multiple search intents, revealing which attributes agents may prioritize when performing query fan-out. But how would this look?

As an example, I used one product image and browsed downward three horizons. Using On’s Cloudsurfer Max as an example (used with permission):

Cloudsurfer Max in the colour “Wolf”
Image credit: On

Using just one product image, this is what is presented on the surface:

Screenshot from WordLift’s Visual Fan-Out simulator, February 2026

It immediately noticed that the product was On, and specifically from the Cloudsurfer range. Great start! Now let’s see what it sees over the horizon:

Screenshots from WordLift’s Visual Fan-Out simulator, February 2026

Here, you can draw inspiration or direction on how best to place yourself for potential and likely fan-out queries. With this example, I found it interesting that Horizon 2 surfaced performance running gear as a large category; performing fan-out on that then showed related products around gear in general. This shows how widely LLMs consider selection and how you can present attributes to attract it.

UCP’s Roadmap Is Expanding Into Multi-Verticals

UCP is already planning to go beyond the single purchase, expanding beyond retail into travel, services, and other verticals. Its roadmap details several priorities over the coming year, including:

  • Multi‑item carts and complex baskets: Moving beyond single‑item checkout to native multi‑item carts, bundling, promotions, tax/shipping logic, and more realistic fulfillment handling.
  • Loyalty and account linking: Standardized loyalty program management and account linking so agents can apply points, member pricing, and benefits across merchants.
  • Post‑purchase support: Support for order tracking, returns, and customer‑service handoff so agents can manage customer support post-sale.
  • Personalization signals: Richer signals for cross‑sell/upsell, wishlists, history, and context‑based recommendations.
  • New verticals: Expansion beyond retail into travel, services, digital goods, and food/restaurant use cases via extensions to the protocol.

Each of the points above is worth further reading and consideration if it’s something your brand may offer. Furthermore, the plans to expand beyond retail into travel, services, digital goods, and hospitality mean that, if you’re working within any of these verticals, you need to be even more prepared to ensure eligibility.

Social Proof And Third-Party Perspective

Regardless of how well you may optimize on-site to prepare for UCP, all this data integrity still needs to be validated by trusted third-party sources.

Third-party platforms, such as Trustpilot and G2, appear to be frequently cited and trusted among most of the LLMs, so I’d still advise that you continue to collect those positive brand and product reviews in order to satisfy consensus, resulting in more opportunities to be selected during product discovery.

TL;DR – Prepare Now

If you own or manage any form of ecommerce site, now is the time to ensure you’re preparing for UCP’s rollout as soon as possible. It’s only a matter of time, and with AI Mode spreading into default experiences, getting ahead of the rollout is essential.

  1. Join the UCP waitlist.
  2. Prepare Merchant Center: return policies, native_commerce attribute.
  3. Ensure your developers research and understand the UCP documentation.
  4. Populate conversational attributes: question-answers, compatibility, substitutes.
  5. Audit and improve any schema where applicable.

This is moving faster than most previous commerce shifts, and brands that wait for full rollout signals will already be behind. This isn’t a short-term LLM gimmick; it’s part of the largest change the ecommerce space has seen.


SEO Fundamental: Google Explains Why It May Not Use A Sitemap

Google’s John Mueller answered a question about why Search Console was reporting a sitemap fetch error even though server logs show that Googlebot successfully fetched the sitemap.

The question was asked on Reddit. The person who started the discussion described a comprehensive set of technical checks they performed to confirm that the sitemap returns a 200 response code, uses a valid XML structure, allows indexing, and so on.

The sitemap is technically valid in every way, but Google Search Console keeps displaying an error message about it.

The Redditor explained:

“I’m encountering very tricky issue with sitemap submission immediately resulted `Couldn’t fetch` status and `Sitemap could not be read` error in the detail view. But i have tried everything I can to ensure the sitemap is accessible and also in server logs, can confirm that GoogleBot traffic successfully retrieved sitemap with 200 success code and it is a validated sitemap with URL – loc and lastmod tags.

…The configuration was initially setup and sitemap submitted in Dec 2025 and for many months, there’s no updates to sitemap crawl status – multiple submissions throughout the time all result the same immediate failure. Small # of pages were submitted manually and all were successfully crawled, but none of the rest URLs listed in sitemap.xml were crawled.”
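
For reference, the kind of checks the Redditor describes are easy to script. Here is a minimal Python sketch (the sitemap URL is a placeholder, and the script needs the third-party requests library):

import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder

# 1. Confirm the sitemap itself returns a 200 response code.
response = requests.get(SITEMAP_URL, timeout=10)
print("Sitemap status:", response.status_code)

# 2. Confirm it is well-formed XML and count its <loc> entries.
root = ET.fromstring(response.content)
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in root.findall(".//sm:loc", ns)]
print("URLs listed:", len(urls))

# 3. Spot-check that a listed URL responds and isn't blocked via headers.
if urls:
    head = requests.head(urls[0], timeout=10, allow_redirects=True)
    print("Sample URL status:", head.status_code)
    print("X-Robots-Tag:", head.headers.get("X-Robots-Tag", "not set"))

None of this guarantees Google will use the sitemap, which is exactly the point of the answer that follows.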

Google’s John Mueller answered the question, implying that the error message is triggered by an issue related to the content.

Mueller responded:

“One part of sitemaps is that Google has to be keen on indexing more content from the site. If Google’s not convinced that there’s new & important content to index, it won’t use the sitemap.”

While Mueller did not use the phrase “site quality,” site quality is implied because he says that Google has to be “keen on indexing more content from the site” that is “new and important.”

That implies two things: that maybe the site doesn’t produce much new content, and that the content might not be important. The part about content being important is a very broad description that can mean a lot of things, and not all of those things necessarily mean that the content is low quality.

Sometimes a site is missing an important form of content that ranked sites provide, or a structure that makes it easier for users to understand a topic or come to a decision. It could be an image, it could be a step-by-step guide, it could be a video; it could be a lot of things, but not necessarily all of them. When in doubt, think like a site visitor and try to imagine what would be most helpful for them. Or it could be that the content is trivial because it’s thin or not unique. Mueller was broad, but I think circling back to what makes a site visitor happy is the way to identify ways to improve content.


Email Marketing Awaits True AI

Artificial intelligence has not yet fulfilled its email marketing potential, at least not without human help.

There is a maturation gap between the AI-powered email that marketers can imagine and the campaigns they can actually create. Filling this gap could be a significant opportunity.

Email Endures

Email marketing should seemingly be obsolete. The first “email,” after all, occurred in October 1971, nearly 55 years ago.

Surely, social media platforms, text messaging, and various applications such as WhatsApp and Discord could have supplanted it. And let’s not forget the grim industry concerns when Gmail introduced the “Promotions” tab in 2013. Today, AI inbox summaries are the latest marketing threat.

Nonetheless, for many ecommerce businesses, email continues to produce a disproportionate share of revenue. The channel remains durable not because it is novel, but because it is owned, measurable, and tightly connected to shopper behavior.

The potential of AI-driven email marketing lies in targeting each shopper individually.

Relevant

As an example, self-described email marketing nerd Chase Dimond recently posted his ecommerce email marketing recommendations on X.

Dimond’s “7 Types of Emails Every Ecom Store Should Send” and “4 Must-Send Ecommerce Emails” describe traditional, pre-AI tactics.

His recommendations imply that core campaigns — abandoned cart reminders, urgency-driven promotions, referral requests, content-led engagement sequences — still convert. The underlying psychology of timing, relevance, and motivation has not suddenly expired.

Audience of One

This apparent status quo, however, does not mean that email marketing cannot evolve. We expect artificial intelligence to be disruptive.

To this end, AI at its full potential:

  • Enables segmentation at the audience-of-one level,
  • Delivers the right offer to the right person at the exact moment it will convert,
  • Achieves precise, individualized relevance.

True AI means every subscriber becomes a segment of one, combining behavioral signals, predictive intent, contextual timing, and offer economics. The result is an individualized experience optimized for conversion.

I see four requirements for this sort of AI-powered email; a rough sketch of how they might fit together follows the list.

  • Predictive personalization. The AI automation must evaluate a shopper’s evolving propensity to purchase specific products or respond to particular offers. Where rule-based segments might separate “high value customers” from “lapsed buyers,” a predictive model determines when a specific individual is ready to buy, and what offer is most persuasive.
  • Contextual timing. Pre-AI workflows fire fixed triggers based on cart or browse abandonments. AI should identify not only the event but the best moment for conversion.
  • Offer optimization. Rather than blasting a single discount or incentive to every subscriber, an AI system can adjust its amount or type. Shopper A responds to free shipping, while shopper B requires a small bonus.
  • Scalable to individual behavior. In theory, AI could generate thousands or millions of unique message variations tailored to individual behavior and propensity. It could orchestrate dynamic sequence choices, conditional messaging paths, and offer decisions — all without human interaction.
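As promised above, here is a minimal, hypothetical sketch of how those four pieces might combine for a single subscriber. The profile fields, scoring weights, offer thresholds, and offers are all invented for illustration; a production system would use a trained propensity model rather than hand-tuned rules.

```python
from dataclasses import dataclass, field

@dataclass
class Subscriber:
    """Invented behavioral profile for one subscriber."""
    email: str
    days_since_last_purchase: int
    cart_abandons_30d: int
    opens_by_hour: dict = field(default_factory=dict)  # hour -> open count

def purchase_propensity(s: Subscriber) -> float:
    """Predictive personalization: a toy score in [0, 1]."""
    recency = max(0.0, 1.0 - s.days_since_last_purchase / 90)
    intent = min(1.0, s.cart_abandons_30d / 3)
    return 0.6 * recency + 0.4 * intent

def choose_offer(score: float) -> str:
    """Offer optimization: reserve costly incentives for hesitant shoppers."""
    if score >= 0.7:
        return "plain reminder, no discount"
    if score >= 0.4:
        return "free shipping"
    return "10% off"

def best_send_hour(s: Subscriber) -> int:
    """Contextual timing: send when this subscriber historically opens email."""
    return max(s.opens_by_hour, key=s.opens_by_hour.get, default=10)

sub = Subscriber("a@example.com", days_since_last_purchase=12,
                 cart_abandons_30d=2, opens_by_hour={8: 3, 20: 9})
score = purchase_propensity(sub)
print(f"{sub.email}: score={score:.2f}, offer={choose_offer(score)!r}, "
      f"send at {best_send_hour(sub)}:00")
```

Run across a full list, the same decision logic would produce a different message, offer, and send time for each subscriber, which is the fourth requirement: scale.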

An audience of one redefines email marketing.

The Gap

Unfortunately, that vision is not achievable today. AI is currently more of an email marketing assistant than a personalization engine.

Certainly it can streamline campaign variations, generating, for example, multiple subject lines for testing. But few email platforms include native support for real-time behavioral scoring, predictive intent, individualized offers, and send-time optimization.

Yet the gulf between possibility and practical tooling is why many marketers say “AI isn’t there yet.” The potential remains high, but the infrastructure is still emerging.

In the meantime, creative email marketers can experiment with audience-of-one approximations by stitching together existing technologies (a sketch of the glue pattern follows the list), such as:

  • Workflow automation. Tools such as Zapier, Make, and n8n knit disparate systems into flows that run autonomously once created.
  • Recommenders. Platforms such as Recombee and Luigi’s Box can drive product and offer recommendations. These too can sync with workflow automations.
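Here is a minimal sketch of that stitching pattern, assuming a hypothetical abandoned-cart event. In practice a workflow tool such as Zapier, Make, or n8n would deliver the event, and a call to a recommender such as Recombee or Luigi’s Box would replace the stub function:

```python
from datetime import datetime, timedelta

def fetch_recommendations(customer_id: str) -> list[str]:
    # Stub standing in for a real recommender API call.
    return ["trail shoes", "wool socks"]

def handle_abandoned_cart(event: dict) -> dict:
    """Turn an abandoned-cart event into a scheduled, personalized email."""
    recs = fetch_recommendations(event["customer_id"])
    send_at = datetime.now() + timedelta(hours=2)  # simple fixed delay
    return {
        "to": event["email"],
        "subject": f"Still thinking about your {event['cart_items'][0]}?",
        "recommendations": recs,
        "send_at": send_at.isoformat(timespec="minutes"),
    }

print(handle_abandoned_cart({
    "customer_id": "c-123",
    "email": "shopper@example.com",
    "cart_items": ["running jacket"],
}))
```

The glue is unglamorous, but it is available today.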

Until platforms close the gap, experimentation and integration are likely to shape competitive advantage, enabling some ecommerce email marketers to approximate individualized messaging well before such capabilities become standard features.

Measles cases are rising. Other vaccine-preventable infections could be next.

There’s a measles outbreak happening close to where I live. Since the start of this year, 34 cases have been confirmed in Enfield, a northern borough of London. Most of those affected are children under the age of 11. One in five have needed hospital treatment.

It’s another worrying development for an incredibly contagious and potentially fatal disease. Since October last year, 962 cases of measles have been confirmed in South Carolina. Large outbreaks (with more than 50 confirmed cases) are underway in four US states. Smaller outbreaks are being reported in another 12 states.

The vast majority of these cases have been children who were not fully vaccinated. Vaccine hesitancy is thought to be a significant reason children are missing out on important vaccines—the World Health Organization described it as one of the 10 leading threats to global health in 2019. And if we’re seeing more measles cases now, we might expect to soon see more cases of other vaccine-preventable infections, including some that can cause liver cancer or meningitis.

Some people will always argue that measles is not a big deal—that infections used to be common, and most people survived them and did just fine. It is true that in most cases kids do recover well from the virus. But not always.

Measles symptoms tend to start with a fever and a runny nose. The telltale rash comes later. In some cases, severe complications develop. They can include pneumonia, blindness, and inflammation of the brain. Some people won’t develop complications until years later. In rare cases, the disease can be fatal.

Before the measles vaccine was introduced in 1963, measles epidemics occurred every two to three years, according to the WHO, and around 2.6 million people died from measles every year. The vaccine is thought to have prevented almost 59 million deaths since its introduction.

But vaccination rates have been lagging, says Anne Zink, an emergency medicine physician and clinical fellow at the Yale School of Public Health. “We’ve seen a slow decline in people who are willing to get vaccinated against measles for some time,” she says. “As we get more and more people who are at risk because they’re unvaccinated, the higher the chances that the disease can then spread and take off.”

Vaccination rates need to be at 95% to prevent measles outbreaks. But rates are well below that level in some regions. Across South Carolina, the proportion of kindergartners who received both doses of the MMR vaccine, which protects against measles as well as mumps and rubella, has dropped steadily over the last five years, from 94% in 2020-2021 to 91% in 2024-2025. Some schools in the state have coverage rates as low as 20%, state epidemiologist Linda Bell told reporters last month.
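For context, the 95% figure is roughly what the standard herd-immunity threshold predicts for a pathogen as contagious as measles. With a basic reproduction number $R_0$ commonly estimated at 12 to 18, the share of the population that must be immune to stop sustained spread is

$$p_c = 1 - \frac{1}{R_0}, \qquad 1 - \frac{1}{12} \approx 92\%, \qquad 1 - \frac{1}{18} \approx 94\%,$$

which is rounded up to 95% in practice to allow for imperfect vaccine protection.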

Vaccination rates are low in London, too. Fewer than 70% of children have received both doses of their MMR by the time they turn five, according to the UK Health Security Agency. In some boroughs, vaccination rates are as low as 58%. So perhaps it’s not surprising we’re seeing outbreaks.

The UK is one of six countries to have lost their measles elimination status last month, along with Spain, Austria, Armenia, Azerbaijan, and Uzbekistan. Canada lost its elimination status last year.

Measles, which is highly contagious, could be a bellwether for other vaccine-preventable diseases. Zink is already seeing signs. She points to a case of polio that paralyzed a man in New York in 2022. That happened when rates of polio vaccination were low, she says. “Polio is a great example of … a disease that is primarily asymptomatic, and most people don’t have any symptoms whatsoever, but for the people who do get symptoms, it can be life-threatening.”

Then there’s mumps—another disease the MMR vaccine protects against. It’s another one of those infections that can be symptom-free and harmless in some, especially children, but nasty for others. It can cause a painful swelling of the testes, and other complications include brain swelling and deafness. (From my personal experience of being hospitalized with mumps, I can attest that even “mild” infections are pretty horrible.)

Mumps is less contagious than measles, so we might expect a delay between an uptick in measles cases and the spread of mumps, says Zink. But she says that she’s more concerned about hepatitis B.

“It lives on surfaces for a long period of time, and if you’re not vaccinated against it and you’re exposed to it as a kid, you’re at a really high risk of developing liver cancer and death,” she says.

Zink was formerly chief medical officer of Alaska, a state that in the 1970s had the world’s highest rate of childhood liver cancer caused by hepatitis B. Screening and universal newborn vaccination programs eliminated the virus’s spread.

Public health experts worry that the current US administration’s position on vaccines may contribute to the decline in vaccine uptake. Last month the US Centers for Disease Control and Prevention approved changes to childhood vaccination recommendations. The agency no longer recommends the hepatitis B vaccine for all newborns. The chair of the CDC’s vaccine advisory panel has also questioned broad vaccine recommendations for polio.

Even vitamin injections are being refused by parents, says Zink. A shot of vitamin K at birth can help prevent severe bleeding in some babies. But recent research suggests that parents of 5% of newborns are refusing it (up from 2.9% in 2017).

“I can’t tell you how many of my pediatric [doctor] friends have told me about having to care for a kiddo in the ICU with … bleeding into their brain because the kid didn’t get vitamin K at birth,” says Zink. “And that can kill kids, [or have] lifelong, devastating, stroke-like symptoms.”

All this paints a pretty bleak picture for children’s health. But things can change. Vaccination can still offer protection to plenty of people at risk of infection. South Carolina’s Department of Public Health is offering free MMR vaccinations to residents at mobile clinics.

“It’s easy to think ‘It’s not going to be me,’” says Zink. “Seeing kiddos who don’t have the agency to make decisions [about vaccination] being so sick from vaccine-preventable diseases, to me, is one of the most challenging things of practicing medicine.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.