Still not ready for Black Friday 2025? Here is your last minute rescue plan

Heads up! Black Friday is almost here, and if you still haven’t prepared, it’s time to act fast. The clock is ticking, but you can still make meaningful updates that count. This article covers practical and straightforward last minute Black Friday tips to help you make quick, effective changes to your eCommerce store. Even with just a few days left, there’s still room to attract customers and make the most of the biggest shopping event of the year.

Key takeaways

  • Act quickly to implement last minute Black Friday tips for maximizing eCommerce sales
  • Focus on essentials such as clear offers, optimized checkout processes, and engaging email campaigns to boost conversions
  • Leverage social media to build anticipation, share customer stories, and create urgency with time-sensitive posts
  • Consider quick SEO fixes to enhance visibility, like updating meta titles and refreshing content for Black Friday
  • Utilize tools like Yoast SEO for enhanced performance and structured data to ensure your deals stand out in search results

Did you know?

Numbers show that Black Friday 2024 broke all records, as U.S. shoppers spent a staggering $10.8 billion online, representing a 10.2 percent increase from 2023. These numbers prove one thing: it is never too late to take action and grab your share of the Black Friday rush.

The must-dos (essentials you can’t miss)

The fastest way to get your Black Friday campaign off the ground is by focusing on a few essentials that make an immediate difference. These must-do, last minute Black Friday tips are your quick wins, helping you cover the basics, build momentum, and set up the foundation for a successful marketing campaign.

Make your offers crystal clear

When shoppers land on your website, your Black Friday deals should be impossible to miss. Highlight your best offers right on the homepage or add a static banner so visitors see them immediately. The clearer your offers are, the easier it is for customers to take action.

One of the most effective ways to increase engagement is by using countdown timers. They build urgency, encourage faster decisions, and make shoppers feel like they’re part of something time-sensitive. The Diamond Store saw this in action when they added a live countdown clock to their 24-hour Black Friday email campaign. The result? A 400% higher conversion rate compared to their previous emails.

Forever 21 shows all the offers clearly on the homepage

For WordPress users, OptinMonster is a quick way to get started. It lets you create dynamic floating bars and banners with countdowns, all through a simple drag-and-drop builder.

If you’re using Shopify, the Essential Countdown Timer Bar app works perfectly for creating announcement bars or cart countdowns to drive urgency and prevent cart abandonment.

Check your checkout

Did you know a long or confusing checkout process is one of the biggest reasons shoppers abandon their carts, especially during high-traffic days like Black Friday? That’s the last thing you want when every second counts.

Before the rush begins, take a few minutes to go through your own checkout process on both desktop and mobile. Place a test order just like a customer would. Verify that your discount codes are applied correctly, your payment options load smoothly, and the overall flow feels quick and effortless.

Read more: Boost your checkout page UX: Vital tips for online stores

Ask a few friends, family members, or even teammates to try it too. Fresh eyes often spot friction points you might miss, such as unclear buttons, confusing forms, or slow-loading pages.

Trust also plays a huge role. Ensure your checkout page displays secure payment badges and recognizable gateways, such as PayPal, Apple Pay, or Stripe. When shoppers feel confident their payment is safe, they’re far more likely to hit “Buy now.”

And one last tip: keep it simple. The fewer distractions and clicks, the smoother the path to purchase. That’s precisely what drives conversions during a last minute Black Friday rush.

Send a simple email to your list

Black Friday emails have been shown to generate 33 percent higher conversion rates than regular marketing messages. That alone makes it one of the smartest last minute Black Friday tips to focus on. When time is short, your existing customer base is your best asset. They already trust your brand and are far more likely to act quickly on your offers.

Keep your email focused and straightforward. Start with a subject line that clearly highlights your best deal or most significant discount. For example, in the screenshot below, you can see how the key offer or discount is prominently displayed in the subject line, while the body reinforces the offer with a clear call to action.

Inside the email, make your main offer impossible to miss. Emphasize the key benefits of your product or service, and include a direct call to action that takes users straight to your Black Friday sale page. Make it visually engaging by adding a countdown timer or a short GIF that brings energy and urgency to the message.

Remember, this isn’t about crafting a perfect campaign. It’s about getting the right message to the right people at the right time. A simple, well-timed email can make a real difference in your Black Friday sales.

Promote on social media channels

Social media continues to play a significant role in Black Friday success. It has seen a 7 percent year-over-year increase in traffic, now driving around 10 percent of all global mobile traffic referrals during the holiday season. Your audience is already scrolling, searching, and shopping, so this is your opportunity to be where they are.

In these last few days, your social media strategy should focus on building anticipation and trust. If you have customer review videos, testimonials, or any user-generated content, start sharing them now. Boosting these posts or running quick ad campaigns featuring real customer stories can help you build credibility fast. People are far more likely to buy when they see genuine experiences from others.

You can also collaborate with a micro-influencer or a brand advocate who already has a connection with your target audience. Even a brief post, story, or reel from them can draw attention to your sale and help you gain visibility.

If you are short on time, focus only on your most active platform, whether that is Instagram, Facebook, TikTok, or LinkedIn. Post your best offer as a pinned post or a story highlight and use countdown stickers or short video snippets to create a sense of urgency.

Lastly, remember to engage. Reply to comments, answer questions, and reshare posts from happy customers. Small interactions can make your brand feel more approachable and help you stand out during the Black Friday rush.

Must read: How to handle comments on your blog

Quick SEO fixes for better Black Friday reach

If you haven’t touched your SEO yet, don’t worry. There’s still time to make a few quick updates that can help your store appear in the search results. These last minute Black Friday SEO tweaks can enhance visibility, attract the right audience, and might give your deals a competitive edge.

Start with your meta titles and meta descriptions. Add words like Black Friday 2025, sale, or deal to your titles so searchers know what to expect. For example, instead of ‘Women’s handbags – Classic collection,’ you can try ‘Black Friday 2025 deals on women’s handbags.’ Keep it relevant, natural, and clear.

Next, check your product and landing pages. Make sure they’re up to date with current pricing, stock status, and offers. Highlight the discounts in your product descriptions, and, if possible, include keywords that shoppers might search for, such as ‘best Black Friday deals’ or ‘holiday gift offers.’

Another smart move is to reuse your existing content. If you already have an older Black Friday or holiday gift guide, simply refresh it for 2025 by updating the year, offers, and internal links. It’s a fast way to keep your content relevant without having to start from scratch.

Lastly, take a minute to review your page experience. A fast, mobile-friendly site can make or break your Black Friday sales. Run a quick check using Google’s PageSpeed Insights and fix anything that’s slowing your pages down. Even minor improvements can help increase conversions.
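If you want to script that speed check, here is a minimal sketch using Google’s public PageSpeed Insights API (v5). The `example.com` page URL and the choice of the mobile strategy are placeholders for illustration:

```python
from urllib.parse import urlencode

# Google's public PageSpeed Insights API v5 endpoint.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url, strategy="mobile"):
    """Build the GET URL for a PageSpeed Insights run on one page."""
    query = urlencode({"url": page_url, "strategy": strategy})
    return f"{PSI_ENDPOINT}?{query}"

# Fetch this URL (e.g., with urllib.request) to get the Lighthouse report
# as JSON; the performance score lives under
# lighthouseResult -> categories -> performance.
print(psi_request_url("https://example.com/black-friday-deals"))
```

Running the resulting URL in a browser or script returns the same Lighthouse data the PageSpeed Insights website shows.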

These quick wins may not replace a comprehensive Black Friday SEO strategy. However, they can still make your website more discoverable and help you capture traffic from shoppers actively seeking deals.

The nice-to-dos (if you have a little more time)

Okay, so the must-dos can help you frame a solid last minute marketing campaign. But if you’ve managed to check those off quickly and still have a little time on your hands, don’t stop there. The following few ideas may seem optional, but they can give your campaign the extra boost it needs to capture more attention, convert hesitant shoppers, and capitalize on the Black Friday rush.

Run simple retargeting ads

Don’t let potential buyers slip away after visiting your store. Retargeting ads help remind them of products they viewed or added to their carts, increasing the chances of conversion. Even a short, time-bound campaign with strong visuals and clear CTAs can make a difference during the Black Friday rush.

Bundle products or create quick gift sets

Shoppers love convenience, especially during the holidays. Bundling complementary products or creating quick gift sets can simplify decision-making and increase your average order value. Highlight these as limited-time deals to develop a sense of urgency and drive faster sales.

Add live chat or quick support options

Many customers abandon their carts when questions go unanswered. Adding a live chat feature helps resolve last minute queries instantly and keeps buyers engaged throughout the checkout process. Tools like Tidio and LiveChat integrate seamlessly with both WordPress and Shopify, making setup quick and easy.

Make your Black Friday deals shine with Yoast SEO for free!

Getting your offers in front of the right people starts with how your website appears and performs in search results. That’s where Yoast SEO can be a real game-changer during the Black Friday rush.

Here’s how:

Write SEO-friendly content

With Yoast SEO, you can create content that both readers and search engines understand. Using its real-time feedback, you can:

  • Get instant insights on keyword use, density, and placement
  • Optimize your product titles and descriptions to highlight key offers
  • Ensure your content maintains the right balance between keywords and readability

Improve readability

Shoppers move fast during Black Friday. Keep them engaged with content that is easy to read and skim. Yoast helps you:

  • Simplify long sentences and paragraphs
  • Use better transitions for a smoother flow
  • Maintain a consistent tone and structure throughout your content

Help search engines crawl your site efficiently

Visibility depends on how easily search engines can crawl and index your site. With Yoast SEO, you can:

  • Automatically generate XML sitemaps to guide crawlers
  • Use SEO-friendly breadcrumbs to create a clear site structure
  • Ensure your most important Black Friday pages are indexed correctly

Prepare your website for the future of search

AI-powered search is transforming the way people discover brands and deals online. The llms.txt feature in Yoast SEO helps you:

  • Communicate directly with AI systems, such as ChatGPT
  • Control how your content is accessed and cited by large language models
  • Enhance the likelihood of your offers being accurately represented in AI-driven summaries and recommendations

Install Yoast SEO now

Bonus: Automate structured data for rich results

Want your Black Friday products to stand out in search with details like price, stock status, and ratings? That’s where structured data comes in. It helps search engines understand your products better and display them as rich results.

With the Yoast WooCommerce SEO plugin, this process becomes effortless. It automatically adds product-specific structured data to your pages, so your deals are clearer and more clickable in search results. This gives your listings the best chance to shine when shoppers are scanning for quick, trustworthy deals during the Black Friday rush.
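For illustration, here is the shape of the Product structured data (JSON-LD) that powers those rich results, sketched as a Python dict. The product values are hypothetical, and in practice the plugin generates this markup for you:

```python
import json

# Hypothetical product values; a plugin like Yoast WooCommerce SEO emits
# this JSON-LD automatically inside a <script type="application/ld+json"> tag.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Classic Leather Handbag",
    "offers": {
        "@type": "Offer",
        "price": "59.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": 132,
    },
}

print(json.dumps(product_jsonld, indent=2))
```

The price, availability, and rating fields are exactly the details search engines can surface as rich results next to your listing.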

Buy WooCommerce SEO now!

Unlock powerful features and much more for your online store with Yoast WooCommerce SEO!

Final thoughts: simple moves, big impact

As the countdown begins, remember that success isn’t about doing more but doing what matters most. It’s easy to get caught up in ambitious plans, such as redesigning your website, launching new products, or building influencer partnerships, but those time-intensive ideas rarely deliver quick results when the clock is ticking.

Instead, focus on achievable actions that create immediate impact. Refresh your existing content, refine your offers, and utilize tools like Yoast SEO to optimize your pages efficiently. A few smart tweaks to your product descriptions, meta titles, or site speed can often drive better conversions than a full-scale overhaul.

The key to winning Black Friday isn’t scale, it’s strategy. Work with what you already have, double down on proven tactics, and use every minute wisely. That’s how you turn last minute prep into lasting results.

How Structured Data Shapes AI Snippets And Extends Your Visibility Quota

When conversational AIs like ChatGPT, Perplexity, or Google AI Mode generate snippets or answer summaries, they’re not writing from scratch: they’re picking, compressing, and reassembling what webpages offer. If your content isn’t SEO-friendly and indexable, it won’t make it into generative search at all. Search, as we know it, is now a function of artificial intelligence.

But what if your page doesn’t “offer” itself in a machine-readable form? That’s where structured data comes in, not just as an SEO task, but as a scaffold for AI to reliably pick the “right facts.” There has been some confusion in our community, and in this article, I will:

  1. walk through controlled experiments on 97 webpages showing how structured data improves snippet consistency and contextual relevance,
  2. map those results into our semantic framework.

Many have asked me in recent months if LLMs use structured data, and I’ve been repeating over and over that an LLM doesn’t use structured data as it has no direct access to the world wide web. An LLM uses tools to search the web and fetch webpages. Its tools – in most cases – greatly benefit from indexing structured data.

Image by author, October 2025

In our early results, structured data increases snippet consistency and improves contextual relevance in GPT-5. It also hints at extending the effective wordlim envelope – this is a hidden GPT-5 directive that decides how many words your content gets in a response. Imagine it as a quota on your AI visibility that gets expanded when content is richer and better-typed. You can read more about this concept, which I first outlined on LinkedIn.

Why This Matters Now

  • Wordlim constraints: AI stacks operate with strict token/character budgets. Ambiguity wastes budget; typed facts conserve it.
  • Disambiguation & grounding: Schema.org reduces the model’s search space (“this is a Recipe/Product/Article”), making selection safer.
  • Knowledge graphs (KG): Schema often feeds KGs that AI systems consult when sourcing facts. This is the bridge from web pages to agent reasoning.

My personal thesis is that we want to treat structured data as the instruction layer for AI. It doesn’t “rank for you,” it stabilizes what AI can say about you.

Experiment Design (97 URLs)

While the sample size was small, I wanted to see how ChatGPT’s retrieval layer actually works when used from its own interface, not through the API. To do this, I asked GPT-5 to search and open a batch of URLs from different types of websites and return the raw responses.

You can prompt GPT-5 (or any AI system) to show the verbatim output of its internal tools using a simple meta-prompt. After collecting both the search and fetch responses for each URL, I ran an Agent WordLift workflow [disclaimer, our AI SEO Agent] to analyze every page, checking whether it included structured data and, if so, identifying the specific schema types detected.

These two steps produced a dataset of 97 URLs, annotated with key fields:

  • has_sd → True/False flag for structured data presence.
  • schema_classes → the detected type (e.g., Recipe, Product, Article).
  • search_raw → the “search-style” snippet, representing what the AI search tool showed.
  • open_raw → a fetcher summary, or structural skim of the page by GPT-5.

Using a “LLM-as-a-Judge” approach powered by Gemini 2.5 Pro, I then analyzed the dataset to extract three main metrics:

  • Consistency: distribution of search_raw snippet lengths (box plot).
  • Contextual relevance: keyword and field coverage in open_raw by page type (Recipe, E-comm, Article).
  • Quality score: a conservative 0–1 index combining keyword presence, basic NER cues (for e-commerce), and schema echoes in the search output.
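A toy version of such a 0–1 quality score might look like the sketch below. The field lists and equal weighting are my assumptions for illustration, not the exact rubric used in the study:

```python
# Expected fields per page type (assumed lists, for illustration only).
EXPECTED_FIELDS = {
    "Recipe": ["ingredients", "instructions", "prep time"],
    "Product": ["price", "brand", "rating"],
    "Article": ["author", "headline", "date"],
}

def quality_score(page_type, snippet):
    """Fraction of expected fields that appear in the AI tool's output."""
    fields = EXPECTED_FIELDS.get(page_type, [])
    if not fields:
        return 0.0
    hits = sum(1 for field in fields if field in snippet.lower())
    return hits / len(fields)

# A recipe snippet covering 2 of 3 expected fields scores ~0.67.
score = quality_score("Recipe", "Ingredients: flour, sugar. Instructions: mix well.")
```

A real judge model weighs more signals (NER cues, schema echoes), but the principle is the same: coverage of typed facts, normalized to 0–1.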

The Hidden Quota: Unpacking “wordlim”

While running these tests, I noticed another subtle pattern, one that might explain why structured data leads to more consistent and complete snippets. Inside GPT-5’s retrieval pipeline, there’s an internal directive informally known as wordlim: a dynamic quota determining how much text from a single webpage can make it into a generated answer.

At first glance, it acts like a word limit, but it’s adaptive. The richer and better-typed a page’s content, the more room it earns in the model’s synthesis window.

From my ongoing observations:

  • Unstructured content (e.g., a standard blog post) tends to get about ~200 words.
  • Structured content (e.g., product markup, feeds) extends to ~500 words.
  • Dense, authoritative sources (APIs, research papers) can reach 1,000+ words.
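As a toy model of that adaptive quota, the tiers above can be written as a simple lookup; the thresholds echo the rough figures observed, and the function is a simplification for illustration, not GPT-5’s actual logic:

```python
# Toy model of the adaptive "wordlim" quota (illustrative, not GPT-5's code).
def word_budget(has_schema, is_dense_authoritative=False):
    if is_dense_authoritative:  # APIs, research papers
        return 1000
    return 500 if has_schema else 200  # structured markup vs. plain content

def truncate_to_budget(text, budget):
    """Keep only the first `budget` words of a page's extracted text."""
    words = text.split()
    return " ".join(words[:budget])
```

Under this toy model, a plain blog post is capped at `word_budget(False) == 200` words, while a page with product markup earns 500.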

This isn’t arbitrary. The limit helps AI systems:

  1. Encourage synthesis across sources rather than copy-pasting.
  2. Avoid copyright issues.
  3. Keep answers concise and readable.

Yet it also introduces a new SEO frontier: your structured data effectively raises your visibility quota. If your data isn’t structured, you’re capped at the minimum; if it is, you grant AI more trust and more space to feature your brand.

While the dataset isn’t yet large enough to be statistically significant across every vertical, the early patterns are already clear – and actionable.

Figure 1 – How Structured Data Affects AI Snippet Generation (Image by author, October 2025)

Results

Figure 2 – Distribution of Search Snippet Lengths (Image by author, October 2025)

1) Consistency: Snippets Are More Predictable With Schema

In the box plot of search snippet lengths (with vs. without structured data):

  • Medians are similar → schema doesn’t make snippets longer/shorter on average.
  • Spread (IQR and whiskers) is tighter when has_sd = True → less erratic output, more predictable summaries.

Interpretation: Structured data doesn’t inflate length; it reduces uncertainty. Models default to typed, safe facts instead of guessing from arbitrary HTML.

2) Contextual Relevance: Schema Guides Extraction

  • Recipes: With Recipe schema, fetch summaries are far likelier to include ingredients and steps. Clear, measurable lift.
  • Ecommerce: The search tool often echoes JSON-LD fields (e.g., aggregateRating, offer, brand), evidence that schema is read and surfaced. Fetch summaries skew toward exact product names over generic terms like “price,” but the identity anchoring is stronger with schema.
  • Articles: Small but present gains (author/date/headline more likely to appear).

3) Quality Score (All Pages)

Averaging the 0–1 score across all pages:

  • No schema → ~0.00
  • With schema → positive uplift, driven mostly by recipes and some articles.

Even where means look similar, variance collapses with schema. In an AI world constrained by wordlim and retrieval overhead, low variance is a competitive advantage.

Beyond Consistency: Richer Data Extends The Wordlim Envelope (Early Signal)

While the dataset isn’t yet large enough for significance tests, we observed this emerging pattern:
Pages with richer, multi‑entity structured data tend to yield slightly longer, denser snippets before truncation.

Hypothesis: Typed, interlinked facts (e.g., Product + Offer + Brand + AggregateRating, or Article + author + datePublished) help models prioritize and compress higher‑value information – effectively extending the usable token budget for that page.
Pages without schema more often get prematurely truncated, likely due to uncertainty about relevance.

Next step: We’ll measure the relationship between semantic richness (count of distinct Schema.org entities/attributes) and effective snippet length. If confirmed, structured data not only stabilizes snippets – it increases informational throughput under constant word limits.

From Schema To Strategy: The Playbook

We structure sites as:

  1. Entity Graph (Schema/GS1/Articles/ …): products, offers, categories, compatibility, locations, policies;
  2. Lexical Graph: chunked copy (care instructions, size guides, FAQs) linked back to entities.

Why it works: The entity layer gives AI a safe scaffold; the lexical layer provides reusable, quotable evidence. Together they drive precision under the wordlim constraints.

Here’s how we’re translating these findings into a repeatable SEO playbook for brands working under AI discovery constraints.

  1. Ship JSON‑LD for core templates
    • Recipes → Recipe (ingredients, instructions, yields, times).
    • Products → Product + Offer (brand, GTIN/SKU, price, availability, ratings).
    • Articles → Article/NewsArticle (headline, author, datePublished).
  2. Unify entity + lexical
    Keep specs, FAQs, and policy text chunked and entity‑linked.
  3. Harden snippet surface
    Facts must be consistent across visible HTML and JSON‑LD; keep critical facts above the fold and stable.
  4. Instrument
    Track variance, not just averages. Benchmark keyword/field coverage inside machine summaries by template.
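Step 4 (“Instrument”) can be made concrete: compare the spread of snippet lengths, not just the averages, between pages with and without schema. A short sketch with made-up sample lengths:

```python
import statistics

# Made-up snippet word counts for illustration; in practice these come
# from logging the AI search tool's output per URL, split by has_sd.
with_schema = [180, 195, 188, 201, 192]
without_schema = [90, 260, 150, 310, 175]

for label, lengths in [("with schema", with_schema), ("without schema", without_schema)]:
    print(
        f"{label}: mean={statistics.mean(lengths):.0f} words, "
        f"stdev={statistics.stdev(lengths):.0f}"
    )
```

In numbers like these, the means are close but the standard deviation collapses for the schema group, which is exactly the box-plot pattern reported above.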

Conclusion

Structured data doesn’t change the average size of AI snippets; it changes their certainty. It stabilizes summaries and shapes what they include. In GPT-5, especially under aggressive wordlim conditions, that reliability translates into higher‑quality answers, fewer hallucinations, and greater brand visibility in AI-generated results.

For SEOs and product teams, the takeaway is clear: treat structured data as core infrastructure. If your templates still lack solid HTML semantics, don’t jump straight to JSON-LD: fix the foundations first. Start by cleaning up your markup, then layer structured data on top to build semantic accuracy and long-term discoverability. In AI search, semantics is the new surface area.

Featured Image: TierneyMJ/Shutterstock

Are LLM Visibility Trackers Worth It?

TL;DR

  1. When it comes to LLM visibility, not all brands are created equal. For some, it matters far more than others.
  2. LLMs give different answers to the same question. Trackers combat this by simulating prompts repeatedly to get an average visibility/citation score.
  3. While simulating the same prompts isn’t perfect, secondary benefits like sentiment analysis address problems that are not SEO-specific, which right now is a good thing.
  4. Unless a visibility tracker offers enough scale at a reasonable price, I would be wary. But if the traffic converts well and you need to know more, get tracking.
(Image Credit: Harry Clarkson-Bennett)

A small caveat to start. This really depends on how your business makes money and whether LLMs are a fundamental part of your audience journey. You need to understand how people use LLMs and what it means for your business.

Brands that sell physical products have a different journey from publishers that sell opinion or SaaS companies that rely more deeply on comparison queries than anyone else.

Or a coding company destroyed by one snidey Reddit moderator with a bone to pick…

For example, Ahrefs made public some of its conversion rate data from LLMs. 12.1% of their signups came from LLMs from just 0.5% of their total traffic. Which is huge.

AI search visitors convert 23x better than traditional organic search visitors for Ahrefs. (Image Credit: Harry Clarkson-Bennett)

But for us, LLM traffic converts significantly worse. It is a fraction of a fraction.

Honestly, I think LLM visibility trackers at this scale are a bit here today and gone tomorrow. If you can afford one, great. If not, don’t sweat it. Take it all with a pinch of salt. AI search is just a part of most journeys, and tracking the same prompts day in, day out has obvious flaws.

They’re just aggregating what someone said about you on Reddit while they’re taking a shit in 2016.

What Do They Do?

Trackers like Profound and Brand Radar are designed to show you how your brand is framed and recommended in AI answers. Over time, you can measure your own and your competitors’ visibility across the platforms.

Image Credit: Harry Clarkson-Bennett

But LLM visibility is smoke and mirrors.

Ask a question, get an answer. Ask the same question, to the same machine, from the same computer, and get a different answer. A different answer with different citations and businesses.

It has to be like this, or else we’d never use the boring ones.

To combat the inherent variance determined by their temperature setting, LLM trackers simulate prompts repeatedly throughout the day. In doing so, you get an average visibility and citation score alongside some other genuinely useful add-ons like your sentiment score and some competitor benchmarking.

“Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

OpenAI Documentation

Simulate a prompt 100 times. If your content was used in 70 of the responses and you were cited seven times, you would have a 70% visibility score and a 7% citation score.
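That arithmetic can be sketched in a few lines; the simulated run results below are made-up booleans, not real tracker data:

```python
# Each run records (content_used, brand_cited) for one simulated prompt.
def visibility_scores(runs):
    """Return (visibility %, citation %) over a batch of simulated prompts."""
    total = len(runs)
    used = sum(1 for used_flag, _ in runs if used_flag)
    cited = sum(1 for _, cited_flag in runs if cited_flag)
    return 100 * used / total, 100 * cited / total

# 100 simulations: content used in 70 responses, brand cited in 7.
runs = [(i < 70, i < 7) for i in range(100)]
visibility, citation = visibility_scores(runs)  # 70.0, 7.0
```

Averaging over many runs per day is what smooths out the temperature-driven variance in individual answers.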

Trust me, that’s much better than it sounds… These engines do not want to send you traffic.

In Brian Balfour’s excellent words, they have identified the moat and the gates are open. They will soon shut. As they shut, monetization will be hard and fast. The likelihood of any referral traffic, unless it’s monetized, is low.

Like every tech company ever.

If you aren’t flush with cash, I’d say most businesses just do not need to invest in them right now. They’re a nice-to-have rather than a necessity for most of us.

How Do They Work?

As far as I can tell, there are two primary models.

  1. Pay for a tool that tracks specific synthetic prompts that you add yourself.
  2. Purchase an enterprise-like tool that tracks more of the market at scale.

Some tools, like Profound, offer both. The cheaper model (the price point is not for most businesses) lets you track synthetic prompts under topics and/or tags. The enterprise model gives you a significantly larger scale.

Tools like Ahrefs Brand Radar, by contrast, provide a broader view of the entire market. As the prompts are all synthetic, there are some fairly large holes. But I prefer broad visibility.

I have not used it yet, but I believe Similarweb have launched their own LLM visibility tracker, which includes real user prompts from Clickstream data.

This makes for a far more useful version of these tools IMO and goes some way to answering the synthetic elephant in the room. And it helps you understand the role LLMs play in the user journey. Which is far more valuable.

The Problem

Does doing good SEO improve your chances of improving your LLM visibility?

Certainly looks like it…

GPT-5 no longer needs to train on more information. It is as well-versed as its overlords now want to pay for. It’s bored of ravaging the internet’s detritus, so it reaches out to a search index using RAG to verify any response it doesn’t quite have the confidence to answer on its own.

But I’m sure your strategy will need to be modified somewhat if your primary goal is to increase LLM visibility, with increased expenditure on TOFU content and digital PR campaigns being a notable example.

Image Credit: Harry Clarkson-Bennett

Right now, LLMs have an obvious spam problem. One I don’t expect they’ll be willing to invest in solving anytime soon. The AI bubble and gross valuation of these companies will dictate how they drive revenue. And quickly.

It sure as hell won’t be sorting out their spam problem. When you have a $300 billion contract to pay and revenues of $12 billion, you need some more money. Quickly.

So anyone who pays for best page link inclusions or adds hidden and footer text to their websites will benefit in the short term. But most of us should still build things for actual, breathing, snoring people.

With the new iterations of LLM trackers calling search instead of formulating an answer for prompts based on learned ‘knowledge’, it becomes even harder to create an ‘LLM optimization strategy.’

As a news site, I know that most prompts we would vaguely show up in would trigger the web index. So I just don’t quite see the value. It’s very SEO-led.

If you don’t believe me, Will Reynolds is an inarguably better source of information (Image Credit: Harry Clarkson-Bennett)

How You Can Add Value With Sentiment Analysis

I found almost zero value to be had from tracking prompts in LLMs at a purely answer level. So, let’s forget all that for a second and use them for something else. Let’s start with some sentiment analysis.

These trackers give us access to:

  • A wider online sentiment score.
  • Review sources LLMs called upon (at a prompt level).
  • Sentiment scores by topics.
  • Prompts and links to on and off-site information sources.

You can identify where some of these issues start. Which, to be fair, is basically Trustpilot and Reddit.

I won’t go through everything, but a couple of quick examples:

  1. LLMs may be referencing some not-so-recently defunct podcasts and newsletters as “reasons to subscribe.”
  2. Your cancellation process may be cited as the most serious issue for most customers.

Unless you have explicitly stated that these podcasts and newsletters have finished, it’s all fair game. You need to tighten up your product marketing and communications strategy.

For people first. Then for LLMs.

These are not SEO-specific projects. We’re moving into an era where SEO-only projects will be difficult to get pushed through. A fantastic way of getting buy-in is to highlight projects with benefits outside of search.

Highlighting serious business issues – poor reviews, inaccurate, out-of-date information et al. – can help get C-suite attention and support for some key brand reputation projects.

Profound’s sentiment analysis tab (Image Credit: Harry Clarkson-Bennett)
Here it is broken down by topic. You can see individual prompts and responses to each topic (Image Credit: Harry Clarkson-Bennett)

To me, this has nothing to do with LLMs. Or what our audience might ask an ill-informed answer engine. They are just the vessel.

It is about solving problems. Problems that drive real value to your business. In your case, this could be about increasing the LTV of a customer. Increasing their retention rate, reducing churn, and increasing the chance of a conversion by providing an improved experience.

If you’ve worked in SEO for long enough, someone will have floated the idea of improving your online sentiment and reviews past you.

“But will this improve our SEO?”

Said Jeff, a beleaguered business owner.

Who knows, Jeff. It really depends on what is holding you back compared to your competition. And like it or not, search is not very investible right now.

But that doesn’t matter in this instance. This isn’t a search-first project. It’s an audience-first project. It encompasses everyone. From customer service to SEO and editorial. It’s just the right thing to do for the business.

A quick hark back to the Google Leak shows you just how many review and sentiment-focused metrics may affect how you rank.


There are nine alone that mention review or sentiment in the title (Image Credit: Harry Clarkson-Bennett)

For a long time, search has been about brands and trust. Branded search volume, outperforming expected CTR (a Bayesian type predictive model), direct traffic, and general user engagement and satisfaction.

This isn’t because Google knows better than people. It’s because they have stored how we feel about pages and brands in relation to queries and used that as a feedback loop. Google trusts brands because we do.

Most of us have never had to worry about reviews and sentiment. But this is a great time to fix any issues you may have under the guise of AEO, GEO, SEO, or whatever you want to call it.

Lars Lofgren’s article titled How a Competitor Crippled a $23.5M Bootcamp By Becoming a Reddit Moderator is an incredible look at how Codesmith was nobbled by negative PR. Negative PR started and maintained by one Reddit Mod. One.

So keeping tabs on your reputation and identifying potentially serious issues is never a bad thing.

Could I Just Build My Own?

Yep. For starters, you’d need an estimation of monthly LLM API costs based on the number of monthly tokens required. Let’s use Profound’s lower-end pricing tier as an estimate and our old friend Gemini to figure out some estimated costs.

  • 200 prompts × 10 runs × 12 days (approx.), spread across 3 models = 24,000 monthly runs.
  • 24,000 runs × 1,000 tokens/query (conservative est.) = 24,000,000 tokens.

Based on this, here’s a (hopefully) accurate cost estimate per model from our robot pal.

Image Credit: Harry Clarkson-Bennett
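That estimate is easy to sanity-check yourself. Here's a minimal sketch of the arithmetic; the per-token price is a placeholder, so check each provider's current pricing page before trusting the total.

```python
# Back-of-envelope cost model for a DIY LLM visibility tracker.
# Volumes follow the article's assumptions; the price is a placeholder.
PROMPTS = 200           # tracked prompts
RUNS_PER_PROMPT = 10    # repeat runs to smooth out response variance
DAYS_PER_MONTH = 12     # sampled days per month (approx.)
TOKENS_PER_RUN = 1_000  # conservative tokens per query and response

monthly_runs = PROMPTS * RUNS_PER_PROMPT * DAYS_PER_MONTH  # 24,000
monthly_tokens = monthly_runs * TOKENS_PER_RUN             # 24,000,000

# Hypothetical blended price of $0.50 per million tokens:
PRICE_PER_MILLION = 0.50
monthly_cost = monthly_tokens / 1_000_000 * PRICE_PER_MILLION

print(f"{monthly_runs} runs, {monthly_tokens} tokens, ~${monthly_cost:.2f}/model")
```

Swap in the real per-million-token rates for each model you track and the totals update themselves.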

Right then. You now need some back-end functionality, data storage, and some front-end visualization. I’ll tot up as we go.

$21 per month

Back-End

  • A Scheduler/Runner like Render VPS to execute 800 API calls per day.
  • A data orchestrator. Essentially, some Python code to parse raw JSON and extract relevant citation and visibility data.

$10 per month
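The orchestration step is the simplest part. Here's a minimal sketch, assuming a hypothetical response shape; adapt the field names to whatever your API actually returns.

```python
import json
import re

# Sketch of the "orchestrator" step: parse a raw model response and pull
# out citation URLs and brand mentions. The response shape is invented.
BRAND = "example.com"

def extract_metrics(raw: str) -> dict:
    record = json.loads(raw)
    text = record.get("answer", "")
    # Crude URL extraction; a real pipeline would normalize and dedupe.
    urls = re.findall(r"https?://[^\s\")]+", text)
    return {
        "model": record.get("model", "unknown"),
        "cited_urls": urls,
        "brand_cited": any(BRAND in u for u in urls),
        "brand_mentioned": BRAND in text,
    }

sample = json.dumps({
    "model": "gpt-x",
    "answer": "See https://example.com/pricing and https://other.site/review for details.",
})
print(extract_metrics(sample))
```

Each parsed record then becomes one row in your database, ready for the storage layer below.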

Data Storage

  • A database, like Supabase (which you can integrate directly through Lovable), to store raw responses and structured metrics.
  • Data storage (which should be included as part of your database).

$15 per month

Front-End Visualization

  • A web dashboard to create interactive, shareable dashboards. I unironically love Lovable. It’s easy to connect directly to databases. I have also used Streamlit previously. Lovable looks far sleeker but has its own challenges.
  • You may also need a visualization library to help generate time series charts and graphs. Some dashboards have this built in.

$50 per month
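Before any charting library comes in, the data behind a time-series chart is just a daily roll-up. A sketch with invented field names:

```python
from collections import defaultdict
from datetime import date

# Roll daily run results (did our brand get cited?) up into a citation
# rate per day. The rows and field names are illustrative.
runs = [
    {"day": date(2025, 1, 1), "brand_cited": True},
    {"day": date(2025, 1, 1), "brand_cited": False},
    {"day": date(2025, 1, 2), "brand_cited": True},
    {"day": date(2025, 1, 2), "brand_cited": True},
]

def citation_rate_by_day(rows):
    totals, hits = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["day"]] += 1
        hits[r["day"]] += r["brand_cited"]
    return {d: hits[d] / totals[d] for d in sorted(totals)}

series = citation_rate_by_day(runs)
print(series)  # day -> share of runs citing the brand
```

Feed that dictionary into whichever dashboard you picked and you have your trend line.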

That's $96 all in, though the likelihood is it's closer to $50 than $100, and that's with no scrimping. At the higher end of budgets for tools I use (Lovable) and some estimates from Gemini, we're talking about a tool that will cost under $100 a month to run and function very well.

This isn’t a complicated project or setup. It is, IMO, an excellent project to learn the vibe coding ropes. Which I will say is not all sunshine and rainbows.

So, Should I Buy One?

If you can afford it, I would get one. For at least a month or two. Review your online sentiment. See what people really say about you online. Identify some low-lift wins around product marketing and review/reputation management, and review how your competitors fare.

This might be the most important part of LLM visibility. Set up a tracking dashboard via Google Analytics (or whatever dreadful analytics provider you use) and see a) how much traffic you get and b) whether it’s valuable.

The more valuable it is, the more value there will be in tracking your LLM visibility.

You could also make one. The joy of making one is a) you can learn a new skill and b) you can make other things for the same cost.

Frustrating, yes. Fun? Absolutely.

This post was originally published on Leadership In SEO.


Featured Image: Viktoriia_M/Shutterstock

Google Answers What To Do For AEO/GEO via @sejournal, @martinibuster

Google’s VP of Product, Robby Stein, recently answered the question of what people should think about in terms of AEO/GEO. He provided a multi-part answer that began with how Google’s AI creates answers and ended with guidance on what creators should consider.

Foundations Of Google AI Search

The question asked was about AEO/GEO, which the podcast host characterized as the evolution of SEO. Robby Stein's answer suggested thinking about the context in which AI answers are constructed.

This is the question that was asked:

“What’s your take on this whole rise of AEO, GEO, which is kind of this evolution of SEO?

I’m guessing your answer is going to be just create awesome stuff and don’t worry about it, but you know, there’s a whole skill of getting to show up in these answers. Thoughts on what people should be thinking about here?”

Stein began his answer describing the foundations of how Google’s AI search works:

“Sure. I mean, I can give you a little bit of under the hood, like how this stuff works, because I do think that helps people understand what to do.

When our AI constructs a response, it’s actually trying to, it does something called query fan-out, where the model uses Google search as a tool to do other querying.

So maybe you’re asking about specific shoes. It’ll add and append all of these other queries, like maybe dozens of queries, and start searching basically in the background. And it’ll make requests to our data kind of backend. So if it needs real-time information, it’ll go do that.

And so at the end of the day, actually something’s searching. It’s not a person, but there’s searches happening.”

Robby Stein's answer shows that Google's AI still relies on conventional search-engine retrieval; it's just scaled and automated. The system performs dozens of background searches and evaluates the same quality signals that guide ordinary search rankings.
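Purely as an illustration (the expansion heuristics and `search` function here are stand-ins, not Google's actual systems), the fan-out pattern Stein describes looks something like this:

```python
# Illustrative sketch of query fan-out: expand one user question into
# related sub-queries, "search" each one, and pool the unique results.
def expand(query: str) -> list[str]:
    # A real system would generate these sub-queries with a model.
    return [query, f"{query} reviews", f"{query} vs alternatives", f"best {query} 2025"]

def search(query: str) -> list[str]:
    # Stand-in for retrieval; returns fake document IDs.
    return [f"doc::{query}::{i}" for i in range(2)]

def fan_out(query: str) -> list[str]:
    seen, results = set(), []
    for q in expand(query):            # dozens of queries in practice
        for doc in search(q):          # each one hits the search backend
            if doc not in seen:
                seen.add(doc)
                results.append(doc)
    return results

docs = fan_out("trail running shoes")
print(len(docs))  # pooled documents from 4 background searches
```

The point of the sketch: every sub-query is still an ordinary search, which is why ordinary ranking signals still decide what the AI sees.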

That means that “answer engine optimization” is basically the same as SEO because the underlying indexing, ranking and quality factors inherent to traditional SEO principles still apply to queries that the AI itself issues as part of the query fan-out process.

For SEOs, the insight is that visibility in AI answers depends less on gaming a new algorithm and more on producing content that satisfies intent so thoroughly that Google’s automated searches treat it as the best possible answer. As you’ll see later in this article, originality also plays a role.

Role Of Traditional Search Signals

An interesting part of this discussion is centered on the kinds of quality signals that Google describes in its Quality Raters Guidelines. Stein talks about originality of the content, for example.

Here’s what he said:

“And then each search is paired with content. So if for a given search, your webpage is designed to be extremely helpful.

And then you can look up Google’s human rater guidelines and read… what makes great information? This is something Google has studied more than anyone.

And it’s like:

  • Do you satisfy the user intent of what they’re trying to get?
  • Do you have sources?
  • Do you cite your information?
  • Is it original or is it repeating things that have been repeated 500 times?

And there’s these best practices that I think still do largely apply because it’s going to ultimately come down to an AI is doing research and finding information.

And a lot of the core signals, is this a good piece of information for the question, they’re still valid. They’re still extremely valid and extremely useful. And that will produce a response where you’re more likely to show up in those experiences now.”

Although Stein is describing AI Search results, his answer shows that Google’s AI Search still values the same underlying quality factors found in traditional search. Originality, source citations, and satisfying intent remain the foundation of what makes information “good” in Google’s view. AI has changed the interface of search and encouraged more complex queries, but the ranking factors continue to be the same recognizable signals related to expertise and authoritativeness.

More On How Google’s AI Search Works

The podcast host, Lenny, followed up with another question about how Google’s AI Search might follow a different approach from a strictly chatbot approach.

He asked:

“It’s interesting your point about how it goes in searches. When you use it, it’s like searching a thousand pages or something like that. Is that just a different core mechanic to how other popular chatbots work? Because the others don’t go search a bunch of websites as you’re asking.”

Stein answered with more details about how AI search works, going beyond query fan-out, identifying factors it uses to surface what they feel to be the best answers. For example, he mentions parametric memory. Parametric memory is the knowledge that an AI has as part of its training. It’s essentially the knowledge stored within the model and not fetched from external sources.

Stein explained:

“Yeah, this is something that we’ve done uniquely for our AI. It obviously has the ability to use parametric memory and thinking and reasoning and all the things a model does.

But one of the things that makes it unique for designing it specifically for informational tasks, like we want it to be the best at informational needs. That’s what Google’s all about.

  • And so how does it find information?
  • How does it know if information is right?
  • How does it check its work?

These are all things that we built into the model. And so there is a unique access to Google. Obviously, it’s part of Google search.

So it’s Google search signals, everything from spam, like what’s content that could be spam and we don’t want to probably use in a response, all the way to, this is the most authoritative, helpful piece of information.

We’re going to link to it and we’re going to explain, hey, according to this website, check out that information and you’re going to probably go see that yourself.

So that’s how we’ve thought about designing this.”

Stein’s explanation makes it clear that Google’s AI Search is not designed to mimic the conversational style of general chatbots but to reinforce the company’s core goal of delivering trustworthy information that’s authoritative and helpful.

By relying on signals from Google Search, such as spam detection and helpfulness, Google’s AI Search grounds its AI-generated answers in the same evaluation and ranking framework inherent in regular search ranking.

This approach positions AI Search as less a standalone version of search and more like an extension of Google’s information-retrieval infrastructure, where reasoning and ranking work together to surface factually accurate answers.

Advice For Creators

Stein at one point acknowledges that creators want to know what to do for AI Search. He essentially gives the advice to think about the questions people are asking. In the old days that meant thinking about what keywords searchers are using. He explains that’s no longer the case because people are using long conversational queries now.

He explained:

“I think the only thing I would give advice to would be, think about what people are using AI for.

I mentioned this as an expansionary moment, …that people are asking a lot more questions now, particularly around things like advice or how to, or more complex needs versus maybe more simple things.

And so if I were a creator, I would be thinking, what kind of content is someone using AI for? And then how could my content be the best for that given set of needs now?
And I think that’s a really tangible way of thinking about it.”

Stein’s advice doesn’t add anything new but it does reframe the basics of SEO for the AI Search era. Instead of optimizing for isolated keywords, creators should consider anticipating the fuller intent and informational journey inherent in conversational questions. That means structuring content to directly satisfy complex informational needs, especially “how to” or advice-driven queries that users increasingly pose to AI systems rather than traditional keyword search.

Takeaways

  • AI Search Is Still Built on Traditional SEO Signals
    Google’s AI Search relies on the same core ranking principles as traditional search—intent satisfaction, originality, and citation of sources.
  • How Query Fan-Out Works
    AI Search issues dozens of background searches per query, using Google Search as a tool to fetch real-time data and evaluate quality signals.
  • Integration of Parametric Memory and Search Signals
    The model blends stored knowledge (parametric memory) with live Google Search data, combining reasoning with ranking systems to ensure factual accuracy.
  • Google’s AI Search Is Like An Extension of Traditional Search
    AI Search isn’t a chatbot; it’s a search-based reasoning system that reinforces Google’s informational trust model rather than replacing it.
  • Guidance for Creators in the AI Search Era
    Optimizing for AI means understanding user intent behind long, conversational queries—focusing on advice- and how-to-style content that directly satisfies complex informational needs.

Google’s AI Search builds on the same foundations that have long defined traditional search, using retrieval, ranking, and quality signals to surface information that demonstrates originality and trustworthiness. By combining live search signals with the model’s own stored knowledge, Google has created a system that explains information and cites the websites that provided it. For creators, this means that success now depends on producing content that fully addresses the complex, conversational questions people bring to AI systems.

Watch the podcast segment starting at about the 15:30 minute mark:

Featured Image by Shutterstock/PST Vector

How Leaders Are Using AI Search to Drive Growth [Webinar] via @sejournal, @hethr_campbell

Turn Data Into an Actionable AI Search Strategy

AI search is transforming consumer behavior faster than any shift in the past 20 years. Many teams are chasing visibility, but few understand what the data actually means for their business or how to act on it.

Join Mark Traphagen, VP of Product Marketing and Training at seoClarity, and Tania German, VP of Marketing at seoClarity, for a live webinar designed for SEOs, digital leaders, and executives. You’ll learn how to interpret AI search data and apply it to your strategy to drive real business results.

What You’ll Learn

  • Why consumer discovery is changing so rapidly.
  • How visibility drives revenue with Instant Checkout in ChatGPT.
  • What Google’s AI Overviews and AI Mode mean for your brand’s presence.
  • Tactics to improve mentions, citations, and visibility on AI search engines.

Why Attend

This webinar gives you the clarity and measurement framework needed to confidently answer, “What’s our AI search strategy?” Walk away with a playbook you can use to lead your organization through the AI search shift successfully.

Register now to secure your seat and get a clear, data-backed framework for AI search strategy.

🛑 Can’t attend live? Register anyway, and we’ll send the full recording.

The AI Search Effect: What Agencies Need To Know For Local Search Clients

This post was sponsored by GatherUp. The opinions expressed in this article are the sponsor’s own.

Local Search Has Changed: From “Found” to “Chosen”

Not long ago, showing up in a Google search was enough. A complete Google Business Profile (GBP) and a steady stream of reviews could put your client in front of the right customers.

But today’s local search looks very different. It’s no longer just about being found; it’s about being chosen.

That shift has only accelerated with the rise of AI-powered search. Instead of delivering a list of links, engines like ChatGPT, Google’s Gemini, and Perplexity now generate instant summaries. These summaries change the way consumers interact with search results, and they determine whether your client’s business gets seen at all.

Reality Check: if listings aren’t accurate, consistent, and AI-ready, businesses risk invisibility.

AI Search Is Reshaping Behavior & Brand Visibility

AI search is already reshaping behavior.

Only 8% of users click a traditional link when an AI summary appears. That means the majority of your clients’ potential customers are making decisions without ever leaving the AI-generated response.

So, how does AI decide which businesses to include in its answers? Two categories of signals matter most: accurate, consistent listings and authentic customer reviews.

Put simply, if a client’s listings are messy, incomplete, or outdated, AI is far less likely to surface them in a summary. And that’s a problem, considering more than 4 out of 5 people use search engines to find local businesses.

The Hidden Dangers of Neglected Listings

Agencies know the pain of messy listings firsthand. But your clients may not realize just how damaging it can be:

  • Trust erosion: 80% of consumers lose trust in businesses with incorrect or inconsistent information.
  • Lost visibility: Roughly a third of local organic results now come from business directories. If listings are incomplete, that’s a third of opportunities gone.
  • Negative perception: A GBP with outdated hours or broken URLs communicates neglect, not professionalism.

Consider “Mary,” a marketing director overseeing 150+ locations. Without automation, her team spends hours chasing duplicate profiles, correcting seasonal hours, and fighting suggested edits. Updates lag behind reality. Customers’ trust slips. And every inconsistency is another signal to search engines, and now AI, that the business isn’t reliable.

For many agencies, the result is more than frustrated clients. It’s a high churn risk.

Why This Matters More Than Ever to Consumers

Consumers expect accuracy at every touchpoint, and they’re quick to lose confidence when details don’t add up.

  • 80% of consumers lose trust in a business with incorrect or inconsistent information, like outdated hours, wrong addresses, or broken links.
  • A Google Business Profile with missing fields or duplicate entries signals neglect.
  • When AI engines surface summaries, they pull from this. Inconsistencies make it less likely your client’s business will appear at all.

Reviews still play a critical role, but they work best when paired with clean, consistent listings. 99% of consumers read reviews before choosing a business, and 68% prioritize recent reviews over overall star ratings. If the reviews say “great service” but the business shows the wrong phone number or closed hours, that trust is instantly broken.

In practice, this means agencies must help clients maintain both accurate listings and authentic reviews. Together, they signal credibility to consumers and to AI search engines deciding which businesses make the cut.

Real-World Data: The ROI of Getting Listings Right

Agencies that take listings seriously are already seeing outsized returns:

  • A healthcare agency managing 850+ locations saved 132 hours per month and reduced costs by $21K annually through listings automation, delivering a six-figure annual ROI.
  • A travel brand optimizing global listings recorded a 200% increase in Google visibility and a 30x rise in social engagement.
  • A retail chain improving profile completeness saw a 31% increase in revenue attributed to local SEO improvements.

The proof is clear: accurate, consistent, and scalable listings management is no longer optional. It’s a revenue driver.

Actionable Steps Agencies Can Take Right Now

AI search is moving fast, but agencies don’t have to be caught flat-footed. Here are five practical steps to protect your clients’ visibility and trust.

1.  Audit Listings for Accuracy and Consistency

Start with a full audit of your clients’ GBPs and directory listings. Look for mismatches in hours, addresses, URLs, and categories. Even small discrepancies send negative signals to both consumers and AI search engines.

I know you updated your listings last year, and not much has changed, but unless your business is a time capsule, your customers expect real-time accuracy.

2.  Eliminate Duplicates

Duplicate listings aren’t just confusing to customers; they actively hurt SEO. Suppress duplicates across directories and consolidate data at the source to prevent aggregator overwrites. Google penalized 6.1% of business listings flagged for duplicate or spam entries in Q1 alone, underscoring how seriously platforms are taking accuracy enforcement.

3.  Optimize for Engagement

Encourage clients to respond authentically to reviews. Research shows 73% of consumers will give a business a second chance if they receive a thoughtful response to a negative review. Engagement isn’t just customer service; it’s a ranking signal.

4.  Create AI-Readable Content

AI thrives on structured, educational content. Encourage clients to build out their web presence with FAQs, descriptive product or service pages, and customer-centric content that mirrors natural language. This makes it easier for AI to pull them into summaries.
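To make that concrete: one widely used way to make FAQ content machine-readable is schema.org FAQPage markup in JSON-LD. The questions and answers below are invented examples.

```python
import json

# Build schema.org FAQPage structured data (JSON-LD) for a local
# business FAQ. Embed the output in a <script type="application/ld+json">
# tag on the page. Content here is an invented example.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What are your opening hours?",
        "acceptedAnswer": {"@type": "Answer", "text": "Mon-Fri, 9am-5pm."},
    }],
}
print(json.dumps(faq, indent=2))
```

Structured markup like this gives both classic crawlers and AI systems an unambiguous question-and-answer pairing to pull from.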

5.  Automate at Scale

Manual updates don’t cut it for multi-location brands. Implement automation for bulk publishing, data synchronization, and ongoing updates. This ensures accuracy and saves agencies countless hours of low-value labor.

The AI Opportunity: Agencies as Strategic Partners

For agencies, the rise of AI search is both a threat and an opportunity. Yes, clients who ignore their listings risk becoming invisible. But agencies that lean in can position themselves as strategic partners, helping businesses adapt to a disruptive new era.

That means reframing listings management not as “background work,” but as the foundation of trust and visibility in AI-powered search.

As GatherUp’s research concludes, “In the AI-driven search era, listings are no longer background work; they are the foundation of visibility and trust.”

The Time to Act Is Now

AI search is here, and it’s rewriting the rules of local visibility. Agencies that fail to help their clients adapt risk irrelevance.

But those that act now can deliver measurable growth, stronger client relationships, and defensible ROI.

The path forward is clear: audit listings, eliminate duplicates, optimize for engagement, publish AI-readable content, and automate at scale.

And if you want to see where your clients stand today, GatherUp offers a free listings audit to help identify gaps and opportunities.

👉 Run a free listings audit and see how your business measures up.

Image Credits

Featured Image: Image by GatherUp. Used with permission.

In-Post Images: Image by GatherUp. Used with permission.

How aging clocks can help us understand why we age—and if we can reverse it

Be honest: Have you ever looked up someone from your childhood on social media with the sole intention of seeing how they’ve aged? 

One of my colleagues, who shall remain nameless, certainly has. He recently shared a photo of a former classmate. “Can you believe we’re the same age?” he asked, with a hint of glee in his voice. A relative also delights in this pastime. “Wow, she looks like an old woman,” she’ll say when looking at a picture of someone she has known since childhood. The years certainly are kinder to some of us than others.

But wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging, under the hood. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging (such as elevated cholesterol or markers of inflammation), might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active. 

Doctors have long used functional tests that measure their patients’ strength or the distance they can walk, for example, or simply “eyeball” them to guess whether they look fit enough to survive some treatment regimen, says Tamir Chandra, who studies aging at the Mayo Clinic. 

But over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. What they’ve found is changing our understanding of aging itself. 

“Aging clocks” are new scientific tools that can measure how our organs are wearing out, giving us insight into our mortality and health. They hint at our biological age. While chronological age is simply how many birthdays we’ve had, biological age is meant to reflect something deeper. It measures how our bodies are handling the passing of time and—perhaps—lets us know how much more of it we have left. And while you can’t change your chronological age, you just might be able to influence your biological age.

It’s not just scientists who are using these clocks. Longevity influencers like Bryan Johnson often use them to make the case that they are aging backwards. “My telomeres say I’m 10 years old,” Johnson posted on X in April. The Kardashians have tried them too (Khloé was told on TV that her biological age was 12 years below her chronological age). Even my local health-food store offers biological age testing. Some are pushing the use of clocks even further, using them to sell unproven “anti-aging” supplements.

The science is still new, and few experts in the field—some of whom affectionately refer to it as “clock world”—would argue that an aging clock can definitively reveal an individual’s biological age. 

But their work is revealing that aging clocks can offer so much more than an insta-brag, a snake-oil pitch—or even just an eye-catching number. In fact, they are helping scientists unravel some of the deepest mysteries in biology: Why do we age? How do we age? When does aging begin? What does it even mean to age?

Ultimately, and most importantly, they might soon tell us whether we can reverse the whole process.

Clocks kick off

The way your genes work can change. Molecules called methyl groups can attach to DNA, controlling the way genes make proteins. This process is called methylation, and it can potentially occur at millions of points along the genome. These epigenetic markers, as they are known, can switch genes on or off, or increase or decrease how much protein they make. They’re not part of our DNA, but they influence how it works.

In 2011, Steve Horvath, then a biostatistician at the University of California, Los Angeles, took part in a study that was looking for links between sexual orientation and these epigenetic markers. Steve is straight; he says his twin brother, Markus, who also volunteered, is gay.

That study didn’t find a link between DNA methylation and sexual orientation. But when Horvath looked at the data, he noticed a different trend—a very strong link between age and methylation at around 88 points on the genome. He once told me he fell off his chair when he saw it.

Many of the affected genes had already been linked to age-related brain and cardiovascular diseases, but it wasn’t clear how methylation might be related to those diseases. 


In 2013, Horvath collected methylation data from 8,000 tissue and cell samples to create what he called the Horvath clock—essentially a mathematical model that could estimate age on the basis of DNA methylation at 353 points on the genome. From a tissue sample, it was able to detect a person’s age within a range of 2.9 years.
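To make the idea concrete, here is a toy sketch of a methylation clock. The CpG sites and weights are invented, not Horvath's actual coefficients, but the general shape of such models is the same: a weighted sum of methylation fractions at selected sites, plus an intercept.

```python
# Toy epigenetic clock (illustrative only -- sites and weights invented).
# Methylation values are fractions between 0 and 1 at each CpG site.
WEIGHTS = {"cpg_a": 40.0, "cpg_b": -25.0, "cpg_c": 60.0}  # hypothetical
INTERCEPT = 20.0

def predict_age(methylation: dict) -> float:
    """Estimate age as intercept + weighted sum over CpG sites."""
    return INTERCEPT + sum(w * methylation[site] for site, w in WEIGHTS.items())

profile = {"cpg_a": 0.6, "cpg_b": 0.4, "cpg_c": 0.5}
print(predict_age(profile))  # estimated age in years for this profile
```

Real clocks fit hundreds of such weights (Horvath's used 353 sites) by regressing methylation data against the known chronological ages of thousands of samples.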

That clock changed everything. Its publication in 2013 marked the birth of “clock world.” To some, the possibilities were almost endless. If a model could work out what average aging looks like, it could potentially estimate whether someone was aging unusually fast or slowly. It could transform medicine and fast-track the search for an anti-aging drug. It could help us understand what aging is, and why it happens at all.

The epigenetic clock was a success story in “a field that, frankly, doesn’t have a lot of success stories,” says João Pedro de Magalhães, who researches aging at the University of Birmingham, UK.

It took a few years, but as more aging researchers heard about the clock, they began incorporating it into their research and even developing their own clocks. Horvath became a bit of a celebrity. Scientists started asking for selfies with him at conferences, he says. Some researchers even made T-shirts bearing the front page of his 2013 paper.

Some of the many other aging clocks developed since have become notable in their own right. Examples include the PhenoAge clock, which incorporates health data such as blood cell counts and signs of inflammation along with methylation, and the Dunedin Pace of Aging clock, which tells you how quickly or slowly a person is aging rather than pointing to a specific age. Many of the clocks measure methylation, but some look at other variables, such as proteins in blood or certain carbohydrate molecules that attach to such proteins.

Today, there are hundreds or even thousands of clocks out there, says Chiara Herzog, who researches aging at King’s College London and is a member of the Biomarkers of Aging Consortium. Everyone has a favorite. Horvath himself favors his GrimAge clock, which was named after the Grim Reaper because it is designed to predict time to death.

That clock was trained on data collected from people who were monitored for decades, many of whom died in that period. Horvath won’t use it to tell people when they might die of old age, he stresses, saying that it wouldn’t be ethical. Instead, it can be used to deliver a biological age that hints at how long a person might expect to live. Someone who is 50 but has a GrimAge of 60 can assume that, compared with the average 50-year-old, they might be a bit closer to the end.

GrimAge is not perfect. While it can strongly predict time to death given the health trajectory someone is on, no aging clock can predict if someone will start smoking or get a divorce (which generally speeds aging) or suddenly take up running (which can generally slow it). “People are complicated,” Horvath tells MIT Technology Review. “There’s a huge error bar.”

On the whole, the clocks are pretty good at making predictions about health and lifespan. They’ve been able to predict that people over the age of 105 have lower biological ages, which tracks given how rare it is for people to make it past that age. A higher epigenetic age has been linked to declining cognitive function and signs of Alzheimer’s disease, while better physical and cognitive fitness has been linked to a lower epigenetic age.

Black-box clocks

But accuracy is a challenge for all aging clocks. Part of the problem lies in how they were designed. Most of the clocks were trained to link age with methylation. The best clocks will deliver an estimate that reflects how far a person’s biology deviates from the average. Aging clocks are still judged on how well they can predict a person’s chronological age, but you don’t want them to be too close, says Lucas Paulo de Lima Camillo, head of machine learning at Shift Bioscience, who was awarded $10,000 by the Biomarkers of Aging Consortium for developing a clock that could estimate age within a range of 2.55 years.

Illustration: a cartoon alarm clock shrugging (Image Credit: Leon Edler)

“There’s this paradox,” says Camillo. If a clock is really good at predicting chronological age, that’s all it will tell you—and it probably won’t reveal much about your biological age. No one needs an aging clock to tell them how many birthdays they’ve had. Camillo says he’s noticed that when the clocks get too close to “perfect” age prediction, they actually become less accurate at predicting mortality.
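To make the training setup concrete: most first-generation epigenetic clocks are, at their core, penalized linear regressions that map methylation "beta values" (the fraction of cells methylated at each CpG site) to chronological age. The sketch below is purely illustrative, using invented synthetic data rather than any published clock; the site counts, drift sizes, and noise levels are all assumptions made up for the example.

```python
# Illustrative sketch of a Horvath-style clock: an elastic-net regression
# from methylation beta values (0-1) to age. All data here is synthetic;
# the number of sites, drift rates, and noise are invented for clarity.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_people, n_sites = 300, 1000
ages = rng.uniform(20, 90, n_people)

# Simulate beta values: a small subset of CpG sites drifts slowly with
# age, while the rest are pure noise -- loosely mimicking real data.
drift = np.zeros(n_sites)
drift[:50] = rng.normal(0, 0.004, 50)          # age-associated sites
betas = np.clip(
    0.5 + np.outer(ages, drift) + rng.normal(0, 0.05, (n_people, n_sites)),
    0, 1,
)

# The elastic-net penalty selects a sparse set of informative sites;
# cross-validation chooses how strongly to regularize.
clock = ElasticNetCV(l1_ratio=0.5, n_alphas=20, cv=5, random_state=0)
clock.fit(betas, ages)
predicted = clock.predict(betas)

# "Age acceleration" -- the gap between predicted (epigenetic) age and
# chronological age -- is the quantity studies correlate with mortality.
acceleration = predicted - ages
print(f"{(clock.coef_ != 0).sum()} CpG sites selected")
print(f"mean |error|: {np.abs(acceleration).mean():.1f} years")
```

Camillo's paradox falls out of this setup: the model is scored on how well `predicted` matches chronological age, yet the residual (`acceleration`) is the part researchers actually care about, so a clock tuned to drive that residual to zero erases its own biological signal.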

Therein lies the other central issue for scientists who develop and use aging clocks: What is the thing they are really measuring? It is a difficult question for a field whose members notoriously fail to agree on the basics. (Everything from the definition of aging to how it occurs and why is up for debate among the experts.)

They do agree that aging is incredibly complex. A methylation-based aging clock might tell you about how that collection of chemical markers compares across individuals, but at best, it’s only giving you an idea of their “epigenetic age,” says Chandra. There are probably plenty of other biological markers that might reveal other aspects of aging, he says: “None of the clocks measure everything.” 

We don’t know why some methyl groups appear or disappear with age, either. Are these changes causing damage? Or are they a by-product of it? Are the epigenetic patterns seen in a 90-year-old a sign of deterioration? Or have they been responsible for keeping that person alive into very old age?

To make matters even more complicated, two different clocks can give similar answers by measuring methylation at entirely different regions of the genome. No one knows why, or which regions might be the best ones to focus on.

“The biomarkers have this black-box quality,” says Jesse Poganik at Brigham and Women’s Hospital in Boston. “Some of them are probably causal, some of them may be adaptive … and some of them may just be neutral”: either “there’s no reason for them not to happen” or “they just happen by random chance.”

What we know is that, as things stand, none of the clocks are precise enough to predict the biological age of a single person (sorry, Khloé). Putting the same biological sample through five different clocks will give you five wildly different results.

Even the same clock can give you different answers if you put a sample through it more than once. “They’re not yet individually predictive,” says Herzog. “We don’t know what [a clock result] means for a person, [or if] they’re more or less likely to develop disease.”

And it’s why plenty of aging researchers—even those who regularly use the clocks in their work—haven’t bothered to measure their own epigenetic age. “Let’s say I do a clock and it says that my biological age … is five years older than it should be,” says Magalhães. “So what?” He shrugs. “I don’t see much point in it.”

You might think this lack of clarity would make aging clocks pretty useless in a clinical setting. But plenty of clinics are offering them anyway. Some longevity clinics are more careful, and will regularly test their patients with a range of clocks, noting their results and tracking them over time. Others will simply offer an estimate of biological age as part of a longevity treatment package.

And then there are the people who use aging clocks to sell supplements. While no drug or supplement has been definitively shown to make people live longer, that hasn’t stopped the lightly regulated wellness industry from pushing “treatments” that range from lotions to herbal pills all the way through to stem-cell injections.

Some of these people come to aging meetings. I was in the audience at an event when one CEO took to the stage to claim he had reversed his own biological age by 18 years—thanks to the supplement he was selling. Tom Weldon of Ponce de Leon Health told us his gray hair was turning brown. His biological age was supposedly reversing so rapidly that he had reached “longevity escape velocity.”

But if the people who buy his supplements expect some kind of Benjamin Button effect, they might be disappointed. His company hasn’t yet conducted a randomized controlled trial to demonstrate any anti-aging effects of that supplement, called Rejuvant. Weldon says that such a trial would take years and cost millions of dollars, and that he’d “have to increase the price of our product more than four times” to pay for one. (The company has so far tested the active ingredient in mice and carried out a provisional trial in people.)

More generally, Horvath says he “gets a bad taste in [his] mouth” when people use the clocks to sell products and “make a quick buck.” But he thinks that most of those sellers have genuine faith in both the clocks and their products. “People truly believe their own nonsense,” he says. “They are so passionate about what they discovered, they fall into this trap of believing [their] own prejudices.” 

The accuracy of the clocks is at a level that makes them useful for research, but not for individual predictions. Even if a clock did tell someone they were five years younger than their chronological age, that wouldn’t necessarily mean the person could expect to live five years longer, says Magalhães. “The field of aging has long been a rich ground for snake-oil salesmen and hype,” he says. “It comes with the territory.” (Weldon, for his part, says Rejuvant is the only product that has “clinically meaningful” claims.) 

In any case, Magalhães adds that he thinks any publicity is better than no publicity.

And there’s the rub. Most people in the longevity field seem to have mixed feelings about the trendiness of aging clocks and how they are being used. They’ll agree that the clocks aren’t ready for consumer prime time, but they tend to appreciate the attention. Longevity research is expensive, after all. With a surge in funding and an explosion in the number of biotech companies working on longevity, aging scientists are hopeful that innovation and progress will follow. 

So they want to be sure that the reputation of aging clocks doesn’t end up being tarnished by association. Because while influencers and supplement sellers are using their “biological ages” to garner attention, scientists are now using these clocks to make some remarkable discoveries. Discoveries that are changing the way we think about aging.

How to be young again

Two little mice lie side by side, anesthetized and unconscious, as Jim White prepares his scalpel. The animals are of the same breed but look decidedly different. One is a youthful three-month-old, its fur thick, black, and glossy. By comparison, the second mouse, a 20-month-old, looks a little the worse for wear. Its fur is graying and patchy. Its whiskers are short, and it generally looks kind of frail.

But the two mice are about to have a lot more in common. White, with some help from a colleague, makes incisions along the side of each mouse’s body and into the upper part of an arm and leg on the same side. He then carefully stitches the two animals together—membranes, fascia, and skin. 

The procedure takes around an hour, and the mice are then roused from their anesthesia. At first, the two still-groggy animals pull away from each other. But within a few days, they seem to have accepted that they now share their bodies. Soon their circulatory systems will fuse, and the animals will share a blood flow too.

White, who studies aging at Duke University, has been stitching mice together for years; he has performed this strange procedure, known as heterochronic parabiosis, more than a hundred times. And he’s seen a curious phenomenon occur. The older mice appear to benefit from the arrangement. They seem to get younger.

Experiments with heterochronic parabiosis have been performed for decades, but typically scientists keep the mice attached to each other for only a few weeks, says White. In their experiment, he and his colleagues left the mice attached for three months—equivalent to around 10 human years. The team then carefully separated the animals to assess how each of them had fared. “You’d think that they’d want to separate immediately,” says White. “But when you detach them … they kind of follow each other around.”

The most striking result of that experiment was that the older mice who had been attached to a younger mouse ended up living longer than other mice of a similar age. “[They lived] around 10% longer, but [they] also maintained a lot of [their] function,” says White. They were more active and maintained their strength for longer, he adds.

When his colleagues, including Poganik, applied aging clocks to the mice, they found that their epigenetic ages were lower than expected. “The young circulation slowed aging in the old mice,” says White. The effect seemed to last, too—at least for a little while. “It preserved that youthful state for longer than we expected,” he says.

The young mice went the other way and appeared biologically older, both while they were attached to the old mice and shortly after they were detached. But in their case, the effect seemed to be short-lived, says White: “The young mice went back to being young again.” 

To White, this suggests that something about the “youthful state” might be programmed in some way. That perhaps it is written into our DNA. Maybe we don’t have to go through the biological process of aging. 

This gets at a central debate in the aging field: What is aging, and why does it happen? Some believe it’s simply a result of accumulated damage. Some believe that the aging process is programmed; just as we grow limbs, develop a brain, reach puberty, and experience menopause, we are destined to deteriorate. Others think programs that play an important role in our early development just turn out to be harmful later in life by chance. And there are some scientists who agree with all of the above.

White’s theory is that being old is just “a loss of youth,” he says. If that’s the case, there’s a silver lining: Knowing how youth is lost might point toward a way to somehow regain it, perhaps by restoring those youthful programs in some way. 

Dogs and dolphins

Horvath’s eponymous clock was developed by measuring methylation in DNA samples taken from tissues around the body. It seems to represent aging in all these tissues, which is why Horvath calls it a pan-tissue clock. Given that our organs are thought to age differently, it was remarkable that a single clock could measure aging in so many of them.

But Horvath had ambitious plans for an even more universal clock: a pan-species model that could measure aging in all mammals. He started out in 2017 with an email campaign, asking hundreds of scientists around the world to share samples of tissues from animals they had worked with. He tried zoos, too.

“I learned that people had spent careers collecting [animal] tissues,” he says. “They had freezers full of [them].” Amenable scientists would ship those frozen tissues, or just DNA, to Horvath’s lab in California, where he would use them to train a new model.

Horvath says he initially set out to profile 30 different species. But he ended up receiving around 15,000 samples from 200 scientists, representing 348 species—including everything from dogs to dolphins. Could a single clock really predict age in all of them?

“I truly felt it would fail,” says Horvath. “But it turned out that I was completely wrong.” He and his colleagues developed a clock that assessed methylation at 36,000 locations on the genome. The result, which was published in 2023 as the pan-mammalian clock, can estimate the age of any mammal and even the maximum lifespan of the species. The data set is open to anyone who wants to download it, he adds: “I hope people will mine the data to find the secret of how to extend a healthy lifespan.”

The pan-mammalian clock suggests that there is something universal about aging—not just that all mammals experience it in a similar way, but that a similar set of genetic or epigenetic factors might be responsible for it.

Comparisons between mammals also support the idea that the slower methylation changes occur, the longer the lifespan of the animal, says Nelly Olova, an epigeneticist who researches aging at the University of Edinburgh in the UK. “DNA methylation slowly erodes with age,” she says. “We still have the instructions in place, but they become a little messier.” The research in different mammals suggests that cells can take only so much change before they stop functioning.

“There’s a finite amount of change that the cell can tolerate,” she says. “If the instructions become too messy and noisy … it cannot support life.”

Olova has been investigating exactly when aging clocks first begin to tick—in other words, the point at which aging starts. Clocks are trained on data from volunteers, matching the patterns of methylation on their DNA to their chronological age. The trained clocks are then typically used to estimate the biological age of adults. But they can also be used on samples from children. Or babies. They can be used to work out the biological age of cells that make up embryos.

In her research, Olova used adult skin cells, which—thanks to Nobel Prize–winning research in the 2000s—can be “reprogrammed” back to a state resembling that of the pluripotent stem cells found in embryos. When Olova and her colleagues used a “partial reprogramming” approach to take cells close to that state, they found that the closer they got to the entirely reprogrammed state, the “younger” the cells were. 

It was around 20 days after the cells had been reprogrammed into stem cells that they reached the biological age of zero according to the clock used, says Olova. “It was a bit surreal,” she says. “The pluripotent cells measure as minus 0.5; they’re slightly below zero.”

Vadim Gladyshev, a prominent aging researcher at Harvard University, has since proposed that the same negative level of aging might apply to embryos. After all, some kind of rejuvenation happens during the early stages of embryo formation—an aged egg cell and an aged sperm cell somehow create a brand-new cell. The slate is wiped clean.

Gladyshev calls this point “ground zero.” He posits that it’s reached sometime during the “mid-embryonic state.” At this point, aging begins. And so does “organismal life,” he argues. “It’s interesting how this coincides with philosophical questions about when life starts,” says Olova. 

Some have argued that life begins when sperm meets egg, while others have suggested that the point when embryonic cells start to form some kind of unified structure is what counts. The ground zero point is when the body plan is set out and cells begin to organize accordingly, she says. “Before that, it’s just a bunch of cells.”

This doesn’t mean that life begins at the embryonic state, but it does suggest that this is when aging begins—perhaps as the result of “a generational clearance of damage,” says Poganik.

It is early days—no pun intended—for this research, and the science is far from settled. But knowing when aging begins could help inform attempts to rewind the clock. If scientists can pinpoint an ideal biological age for cells, perhaps they can find ways to get old cells back to that state. There might be a way to slow aging once cells reach a certain biological age, too. 

“Presumably, there may be opportunities for targeting aging before … you’re full of gray hair,” says Poganik. “It could mean that there is an ideal window for intervention which is much earlier than our current geriatrics-based approach.”

When young meets old

When White first started stitching mice together, he would sit and watch them for hours. “I was like, look at them go! They’re together, and they don’t even care!” he says. Since then, he’s learned a few tricks. He tends to work with female mice, for instance—the males tend to bicker and nip at each other, he says. The females, on the other hand, seem to get on well. 

The effect their partnership appears to have on their biological ages, if only temporarily, is among the ways aging clocks are helping us understand that biological age is plastic to some degree. White and his colleagues have also found, for instance, that stress seems to increase biological age, but that the effect can be reversed once the stress stops. Both pregnancy and covid-19 infections have a similar reversible effect.

Poganik wonders if this finding might have applications for human organ transplants. Perhaps there’s a way to measure the biological age of an organ before it is transplanted and somehow rejuvenate organs before surgery. 

But new data from aging clocks suggests that this might be more complicated than it sounds. Poganik and his colleagues have been using methylation clocks to measure the biological age of samples taken from recently transplanted hearts in living people. 

Young hearts do well in older bodies, but the biological age of these organs eventually creeps up to match that of their recipient. The same is true for older hearts in younger bodies, says Poganik, who has not yet published his findings. “After a few months, the tissue may assimilate the biological age of the organism,” he says. 

If that’s the case, the benefits of young organs might be short-lived. It also suggests that scientists working on ways to rejuvenate individual organs may need to focus their anti-aging efforts on more systemic means of rejuvenation—for example, stem cells that repopulate the blood. Reprogramming these cells to a youthful state, perhaps one a little closer to “ground zero,” might be the way to go.

Whole-body rejuvenation might be some way off, but scientists are still hopeful that aging clocks might help them find a way to reverse aging in people.

“We have the machinery to reset our epigenetic clock to a more youthful state,” says White. “That means we have the ability to turn the clock backwards.” 

Can we repair the internet?

From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. Books by three influential figures—the intellect behind “net neutrality,” a former Meta executive, and the web’s own inventor—propose radical approaches to fixing it. But are these luminaries the right people for the job? Though each shows conviction, and even sometimes inventiveness, the solutions they present reveal blind spots.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity
Tim Wu
KNOPF, 2025

In The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity, Tim Wu argues that a few platform companies have too much concentrated power and must be dismantled. Wu, a prominent Columbia professor who popularized the principle that a free internet requires all online traffic to be treated equally, believes that existing legal mechanisms, especially anti-monopoly laws, offer the best way to achieve this goal.

Pairing economic theory with recent digital history, Wu shows how platforms have shifted from giving to users to extracting from them. He argues that our failure to understand their power has only encouraged them to grow, displacing competitors along the way. And he contends that convenience is what platforms most often exploit to keep users entrapped. “The human desire to avoid unnecessary pain and inconvenience,” he writes, may be “the strongest force out there.”

He cites Google’s and Apple’s “ecosystems” as examples, showing how users can become dependent on such services as a result of their all-­encompassing seamlessness. To Wu, this isn’t a bad thing in itself. The ease of using Amazon to stream entertainment, make online purchases, or help organize day-to-day life delivers obvious gains. But when powerhouse companies like Amazon, Apple, and Alphabet win the battle of convenience with so many users—and never let competitors get a foothold—the result is “industry dominance” that must now be reexamined.

The measures Wu advocates—and that appear the most practical, as they draw on existing legal frameworks and economic policies—are federal anti-monopoly laws, utility caps that limit how much companies can charge consumers for service, and “line of business” restrictions that prohibit companies from operating in certain industries.

Anti-monopoly provisions and antitrust laws are effective weapons in our armory, Wu contends, pointing out that they have been successfully used against technology companies in the past. He cites two well-known cases. The first is the 1960s antitrust case brought by the US government against IBM, which helped create competition in the computer software market that enabled companies like Apple and Microsoft to emerge. The 1982 AT&T case that broke the telephone conglomerate up into several smaller companies is another instance. In each, the public benefited from the decoupling of hardware, software, and other services, leading to more competition and choice in a technology market.

But will past performance predict future results? It’s not yet clear whether these laws can be successful in the platform age. The 2025 antitrust case against Google—in which a judge ruled that the company did not have to divest itself of its Chrome browser as the US Justice Department had proposed—reveals the limits of pursuing tech breakups through the law. The 2001 antitrust case brought against Microsoft likewise failed to separate the company from its web browser and mostly kept the conglomerate intact. Wu noticeably doesn’t discuss the Microsoft case when arguing for antitrust action today.

Nick Clegg, until recently Meta’s president of global affairs and a former deputy prime minister of the UK, takes a position very different from Wu’s: that trying to break up the biggest tech companies is misguided and would degrade the experience of internet users. In How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict, Clegg acknowledges Big Tech’s monopoly over the web. But he believes punitive legal measures like antitrust laws are unproductive and can be avoided by means of regulation, such as rules for what content social media can and can’t publish. (It’s worth noting that Meta is facing its own antitrust case, involving whether it should have been allowed to acquire Instagram and WhatsApp.)

How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict
Nick Clegg
BODLEY HEAD, 2025

Clegg also believes Silicon Valley should take the initiative to reform itself. He argues that encouraging social media networks to “open up the books” and share their decision-making power with users is more likely to restore some equilibrium than contemplating legal action as a first resort.

But some may be skeptical of a former Meta exec and politician who worked closely with Mark Zuckerberg yet wasn’t able to usher in such changes while working at one of the biggest social media companies. What will only compound this skepticism is the selective history found in Clegg’s book, which briefly acknowledges some scandals (like the one surrounding Cambridge Analytica’s data harvesting from Facebook users in 2016) but refuses to discuss other pertinent ones. For example, Clegg laments the “fractured” nature of the global internet today but fails to acknowledge Facebook’s own role in this splintering.

Breaking up Big Tech through antitrust laws would hinder innovation, says Clegg, arguing that the idea “completely ignores the benefits users gain from large network effects.” Users stick with these outsize channels because they can find “most of what they’re looking for,” he writes, like friends and content on social media and cheap consumer goods on Amazon and eBay.

Wu might concede this point, but he would disagree with Clegg’s claims that maintaining the status quo is beneficial to users. “The traditional logic of antitrust law doesn’t work,” Clegg insists. Instead, he believes less sweeping regulation can help make Big Tech less dangerous while ensuring a better user experience.

Clegg has seen both sides of the regulatory coin: He worked in David Cameron’s government passing national laws for technology companies to follow and then moved to Meta to help the company navigate those types of nation-specific obligations. He bemoans the hassle and complexity Silicon Valley faces in trying to comply with differing rules across the globe, some set by “American federal agencies” and others by “Indian nationalists.”

But with the resources such companies command, surely they are more than equipped to cope? Given that Meta itself has previously meddled in access to the internet (such as in India, whose telecommunications regulator ultimately blocked its Free Basics internet service for violating net neutrality rules), this complaint seems suspect coming from Clegg. What should be the real priority, he argues, is not any new nation-specific laws but a global “treaty that protects the free flow of data between signatory countries.”

Clegg believes that these nation-specific technology obligations—a recent one is Australia’s ban on social media for people under 16—usually reflect fallacies about the technology’s human impact, a subject that can be fraught with anxiety. Such laws have proved ineffective and tend to taint the public’s understanding of social networks, he says. There is some truth to his argument here, but reading a book in which a former Facebook executive dismisses techno-determinism—that is, the argument that technology makes people do or think certain things—may be cold comfort to those who have seen the harm technology can do.

In any case, Clegg’s defensiveness about social networks may not gain much favor from users themselves. He stresses the need for more personal responsibility, arguing that Meta doesn’t ever intend for users to stay on Facebook or Instagram endlessly: “How long you spend on the app in a single session is not nearly as important as getting you to come back over and over again.” Social media companies want to serve you content that is “meaningful to you,” he claims, not “simply to give you a momentary dopamine spike.” All this feels disingenuous at best.

What Clegg advocates—unsurprisingly—is not a breakup of Big Tech but a push for it to become “radically transparent,” whether on its own or, if necessary, with the help of federal legislators. He also wants platforms to bring users more into their governance processes (by using Facebook’s model of community forums to help improve their apps and products, for example). Finally, Clegg also wants Big Tech to give users more meaningful control of their data and how companies such as Meta can use it.

Here Clegg shares common ground with the inventor of the web, Tim Berners-Lee, whose own proposal for reform advances a technically specific vision for doing just that. In his memoir/manifesto This Is for Everyone: The Unfinished Story of the World Wide Web, Berners-Lee acknowledges that his initial vision—of a technology he hoped would remain open-source, collaborative, and completely decentralized—is a far cry from the web that we know today.

This Is for Everyone: The Unfinished Story of the World Wide Web
Tim Berners-Lee
FARRAR, STRAUS & GIROUX, 2025

If there’s any surviving manifestation of his original project, he says, it’s Wikipedia, which remains “probably the best single example of what I wanted the web to be.” His best idea for moving power from Silicon Valley platforms into the hands of users is to give them more data control. He pushes for a universal data “pod” he helped develop, known as “Solid” (an abbreviation of “social linked data”).

The system—which was originally developed at MIT—would offer a central site where people could manage data ranging from credit card information to health records to social media comment history. “Rather than have all this stuff siloed off with different providers across the web, you’d be able to store your entire digital information trail in a single private repository,” Berners-Lee writes.

The Solid product may look like a kind of silver bullet in an age when data harvesting is familiar and data breaches are rampant. Placing greater control with users and enabling them to see “what data [i]s being generated about them” does sound like a tantalizing prospect.

But some people may have concerns about, for example, merging their confidential health records with data from personal devices (like heart rate info from a smart watch). No matter how much user control and decentralization Berners-Lee may promise, recent data scandals (such as cases in which period-tracking apps misused clients’ data) may be on people’s minds.

Berners-Lee believes that centralizing user data in a product like Solid could save people time and improve daily life on the internet. “An alien coming to Earth would think it was very strange that I had to tell my phone the same things again and again,” he complains about the experience of using different airline apps today.

With Solid, everything from vaccination records to credit card transactions could be kept within the digital vault and plugged into different apps. Berners-Lee believes that AI could also help people make more use of this data—for example, by linking meal plans to grocery bills. Still, if he’s optimistic about how AI and Solid could coordinate to improve users’ lives, he is vague on how to make sure that chatbots manage such personal data sensitively and safely.

Berners-Lee generally opposes regulation of the web (except in the case of teenagers and social media algorithms, where he sees a genuine need). He believes in internet users’ individual right to control their own data; he is confident that a product like Solid could “course-correct” the web from its current “exploitative” and extractive direction.

Of the three writers’ approaches to reform, it is Wu’s that has shown some effectiveness of late. Companies like Google have been forced to give competitors some advantage through data sharing, and they have now seen limits on how their systems can be used in new products and technologies. But in the current US political climate, will antitrust laws continue to be enforced against Big Tech?

Clegg may get his way on one issue: limiting new nation-specific laws. President Donald Trump has confirmed that he will use tariffs to penalize countries that ratify their own national laws targeting US tech companies. And given the posture of the Trump administration, it doesn’t seem likely that Big Tech will see more regulation in the US. Indeed, social networks have seemed emboldened (Meta, for example, removed fact-checkers and relaxed content moderation rules after Trump’s election win). In any case, the US hasn’t passed a major piece of federal internet legislation since 1996.

If using anti-monopoly laws through the courts isn’t possible, Clegg’s push for a US-led omnibus deal—setting consensual rules for data and acceptable standards of human rights—may be the only way to make some more immediate improvements.

In the end, there is not likely to be any single fix for what ails the internet today. But the ideas the three writers agree on—greater user control, more data privacy, and increased accountability from Silicon Valley—are surely the outcomes we should all fight for.

Nathan Smith is a writer whose work has appeared in the Washington Post, the Economist, and the Los Angeles Times.

The Download: aging clocks, and repairing the internet

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How aging clocks can help us understand why we age—and if we can reverse it

Wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging, might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active.

Over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. And what they’ve found is changing our understanding of aging itself. Read the full story.

—Jessica Hamzelou

Can we repair the internet?

From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. New books by three influential figures—the intellect behind “net neutrality,” a former Meta executive, and the web’s own inventor—propose radical approaches to fixing it. But are these luminaries the right people for the job? Read the full story.

—Nathan Smith

Both these stories are from our forthcoming print issue, which is all about the body. If you haven’t already, subscribe now to receive future issues once they land. Plus, you’ll also receive a free digital report on nuclear power.

2025 climate tech companies to watch: Cyclic Materials and its rare earth recycling tech

Rare earth magnets are essential for clean energy, but only a tiny fraction of the metals inside them are ever recycled. Cyclic Materials aims to change that by opening one of the largest rare earth magnet recycling operations outside of China next year. 

By collecting a wide range of devices and recycling multiple metals, the company seeks to overcome the economic challenges that have long held back such efforts. Read the full story.

—Maddie Stone

Cyclic Materials is one of our 10 climate tech companies to watch—our annual list of some of the most promising climate tech firms on the planet. Check out the rest of the list here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 California’s AI safety bill has been signed into law   
It holds AI companies legally accountable if their chatbots fail to protect users. (TechCrunch)
+ It also requires chatbots to remind young users that they’re not human. (The Verge)
+ Gavin Newsom also green-lit measures for social media warning labels. (The Hill)

2 Satellites are leaking unencrypted data
Including civilian text messages, plus military and law enforcement communications. (Wired $)
+ It’s getting mighty crowded up there too. (Space)

3 Defense startups are reviving manufacturing in quiet US towns
The weapons of the future are being built in Delaware, Michigan and Ohio. (NYT $)
+ Phase two of military AI has arrived. (MIT Technology Review)

4 Europe is worried about becoming an AI “colony”
The bloc is too dependent on US tech, experts fear. (FT $)
+ The US is locked in a bind with China. (Rest of World)

5 Vast chunks of human knowledge are missing from the web 
And AI is poised to make the problem even worse. (Aeon)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 How mega batteries are unlocking an energy revolution
Vast battery units are helping to shore up grids and extend the use of clean power. (FT $)
+ This startup wants to use the Earth as a massive battery. (MIT Technology Review)

7 A new chemical detection technique reveals what’s making wildlife ill
It’s a small step toward a healthier future for all animals—including humans. (Knowable Magazine)
+ We’re inhaling, eating, and drinking toxic chemicals. Now we need to figure out how they’re affecting us. (MIT Technology Review)

8 The world is growing more food crops than ever before
But hunger still hasn’t been eradicated. (Vox)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

9 Google is starting to hide sponsored search results
Only after you’ve seen them first. (The Verge)
+ Is Google playing catchup on search with OpenAI? (MIT Technology Review)

10 Indonesia’s film industry is embracing AI
To the detriment of artists and storyboarders. (Rest of World)

Quote of the day

“It is attempting to solve a problem that wasn’t a problem before AI showed up, or before big tech showed up.”

—Greg Loudon, a certified beer judge and brewery sales manager, tells 404 Media why he’s so unimpressed by a prominent competition using AI to judge the quality of beer.

One more thing

The lucky break behind the first CRISPR treatment

The world's first commercial gene-editing treatment is set to start changing the lives of people with sickle-cell disease. It's called Casgevy, and it was approved in November 2023 in the UK.

The treatment, which will be sold in the US by Vertex Pharmaceuticals, employs CRISPR, which can be easily programmed by scientists to cut DNA at precise locations they choose.

But where do you aim CRISPR, and how did the researchers know what DNA to change? That’s the lesser-known story of the sickle-cell breakthrough. Read more about it.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why you should consider adopting a “coffee name.”
+ Where does your favorite Star Wars character rank in this ultimate list? (Number one is correct.)
+ Steve McQueen, you will always be cool.
+ The compelling argument for adopting an ethical diet.

5 SEO Tactics to Be Seen & Trusted on AI Search [Webinar]

Is your brand ready for AI-driven SERPs?

Search is evolving faster than ever. AI-driven engines like ChatGPT, Google SGE, and Bing Copilot are changing how users discover and trust brands. Traditional SEO tactics alone may no longer guarantee visibility or authority in Answer Engines.

Discover five proven tactics to protect your SERP presence and maintain trust in AI search.

What You’ll Learn

Craig Smith, Chief Strategy Officer at Outerbox, will show you exactly how to adapt your SEO strategy for generative search and answer engines.

You’ll walk away with actionable steps to:

Register now to get the SEO playbook your competitors wish they had.

Why You Can’t Miss This Webinar

AI Overviews are already impacting traffic. Brands that adapt now will dominate visibility and authority while others fall behind.

🛑 Can’t attend live? Register anyway and we’ll send you the recording so you can watch at your convenience.