How Referral Traffic Undermines Long-Term Brand Growth

Mordy Oberstein, a search marketing professional whom I hold in high esteem, recently shared the provocative idea that referral traffic is not a brand’s friend and that every brand, as it matures, should wean itself from it. Referrals from other websites are generally considered a sign of a high-performing business, but relying on them is not a long-term strategy because it depends on sources that cannot be controlled.

Referral Traffic Is Necessary But…

Mordy Oberstein (LinkedIn profile), formerly of Wix, asserted in a Facebook post that relying on a traffic source, whether that’s another website or a search engine, introduces a degree of vulnerability into maintaining steady traffic and performance.

He broke it down as a two-fold weakness:

  • Relying on the other site to keep featuring your brand.
  • Relying on Google to keep ranking that other site which in turn sends visitors to your brand.

The flow of traffic can stop at either of those two points, which is a hidden weakness that can affect the long-term sustainability of healthy traffic and sales.

Mordy explained:

“It’s a double vulnerability…

1) Relying on being featured by the website (the traffic source)
2) Relying on Google to give that website …traffic (the channel)

There are two levels of exposure & vulnerability.

As your brand matures, you want to own your own narrative.

More referral traffic is not your friend. It’s why, as a brand matures, it should wean off of it.

Full disclosure, this is my opinion. I am sure a lot of people will disagree.”

Becoming A Destination

I’ve always favored promoting a site in a way that helps it become synonymous with a given topic because that’s how to make it a default destination and encourage the kinds of signals that Google interprets as authoritative. I’ve done things like creating hats with logos to give away, running annual product giveaways, and other promotional activities, both online and offline. While my competition was doing SEO busy work, I created fans. Promoting a site is basically just getting it in front of people.

Brand Authority Is An Excuse, Not A Goal

Some SEOs believe in a concept called Brand Authority, which is a misleading explanation for why a website ranks. The term Brand Authority is not about branding, and it’s not about authoritativeness, either. It’s just an excuse for why a site is top-ranked.

The phrase Brand Authority has its roots in PageRank. Big brand websites used to have a PageRank of 9 out of 10, or even a perfect 10/10, which enabled them to rank for virtually any keyword they wanted. A link from one of those sites practically guaranteed a top ten ranking. But around 2004, Google ended the outsized influence of PageRank because it resulted in less relevant results. That was about the time Google started using Navboost, a ranking signal that essentially measures how people feel about a site, which is what PageRank does, too.

This insight, that Google uses signals about how people feel about a site, is important because the feelings people have for a business are what being a brand is all about.

Marty Neumeier, a thought leader on how to promote companies and the author of The Brand Gap, explained what being a brand is all about:

“Instead of creating the brand first, the company creates customers (through products and social media), the customers build the brand (through purchases and advocacy), and the customer-built brand sustains the company (through “tribal” loyalty). This model takes into account a profound and counterintuitive truth: a brand is not owned by the company, but by the customers who draw meaning from it. Your brand isn’t what you say it is. It’s what they say it is.”

Neumeier also explains how a brand is about customer feelings:

“The best brands are vivid. They create clear mental pictures and powerful feelings in the minds and hearts of customers. They’re brought to life through their touchpoints, the places where customers experience them, from the first exposure to a brand’s name, to buying the product, to eventually making it part of who they are.”

That “tribal loyalty” is the kind of thing Google tries to measure. So when Danny Sullivan talks about differentiating your site to make it like a brand, he is not referring to so-called “brand authority.” He is talking about doing the kinds of things that influence people to feel positive about a site.

Getting Back To Mordy Oberstein

It seems to me that what he’s saying is that referral traffic is a stepping stone, a means to an end. It’s not the goal; it’s a step toward the goal of becoming a destination.

On the other side of that process, I think it’s important to maintain relevance with potential site visitors and customers, especially today with the rapid pace of innovation, generational change, new inventions, and new product models. Relevance to people has been a Google ranking signal for a long time, beginning with PageRank, then with additional signals like Navboost.

The SEO factor that the industry has largely missed is the part about getting people to think positive thoughts about your site and your business, enough to share with other people.

Mordy’s insight about traffic is beautiful and elegant.

Read Mordy’s entire post on Facebook.

Featured Image by Shutterstock/Yunus Praditya

What is Site Kit by Google? A guide for WordPress users

Site Kit by Google is a free WordPress plugin that connects your site to important tools like Analytics, Search Console, and Ads. After you install it, verifying your accounts is easy, and the data then shows up in your dashboard. That data is nice to have, but it has limits, especially if you need detailed reports.

What is Site Kit by Google and why use it?

Site Kit by Google is a basic analytics tool that helps you answer fundamental questions like:

  • How many people are visiting your site?
  • What page do they land on first?
  • Which keywords did they search to find you?
  • Are your ads earning clicks?

With Site Kit, Google puts the data right into WordPress, so you don’t need to go digging around different platforms to find your data. The tool gets its data straight from each service and shows the most important numbers in clear graphs, tables, and a flexible, customizable Key Metrics widget.

Who is it for? (and when it’s not enough)

But Site Kit is not the analytics tool to rule them all in WordPress land; it won’t suit everyone’s goals. What it does do is make it incredibly easy to set up and run the various Google services it supports.

Site Kit by Google works well for:

  • WordPress users who want to track basic performance
  • People who prefer not to use extra plugins or code
  • Site owners who manage everything themselves

But it may feel limited if you:

  • Run ads at scale and need conversion-level insight
  • Use custom events or eCommerce tracking
  • Want to control every aspect of your website’s scripts and tags

It covers the basics well, but it’s not built for advanced setups.

What does it look like?

After installing and connecting Site Kit, you’ll find a new menu item in your WordPress dashboard. Clicking it leads you to the dashboard where most of the statistics and settings live. You’ll also notice a new drop-down menu when you visit posts on your site. Thanks to this drop-down, you can quickly see statistics for that specific post without having to open Analytics.

Overview dashboard

The dashboard gives you an overview of how your site is performing. Depending on which services you connect your site to, you might see something like this:

  • Traffic and engagement insights from Google Analytics 
  • Clicks and impressions from search traffic provided by Search Console
  • An overview of the top-performing pages
  • Earnings from Ads or AdSense, if you run ads, that is
  • Site speed performance powered by PageSpeed Insights 
  • An overview of how different groups compare, for instance, new vs. returning visitors

Some sections also show trend indicators like arrows or percentage changes compared to the previous period. This will help you spot trends and act upon them. Click on any source to open a more detailed view in the corresponding Google tool.

Part of the Site Kit dashboard showing various stats and the Key Metrics widget at the top

Key Metrics widget

You can set up the Key Metrics section the way you want. Site Kit will ask you a couple of questions about your site’s goals and what you want to focus on. Then, it will suggest metrics to show at the top of the dashboard. You can choose which blocks you want to see, such as top converting traffic sources, new visitors, recent trending pages, and much more. 

Admin bar stats

After Site Kit is active, you’ll also see a small dropdown at the top of your WordPress admin bar when you’re viewing your site. Click it, and you’ll get a mini-report showing page-specific stats, including search impressions, clicks, and traffic over time.

Site Kit will help you quickly find out how your content is doing, straight from the WordPress admin bar

What Google services can you connect?

Once installed, you can connect the following tools; two of them — Search Console and Google Analytics 4 — are enabled during the initial setup:

  • Google Analytics 4
  • Search Console
  • AdSense 
  • Reader Revenue Manager
  • Google Ads
  • Tag Manager

Google Analytics 4 (GA4)

Site Kit will add your GA4 tag automatically, after which it shows data such as:

  • The number of visitors
  • Sources of sessions (organic search, direct, referral)
  • Average engagement rate
  • Session durations

The data shown is summarized, so if you want custom reports or event tracking, you need to open GA4. 

Visitor grouping is the newest addition to Site Kit by Google

Google Search Console

After installing and connecting, you’ll get some key data from Search Console right inside your WordPress dashboard:

  • The queries people searched to find your site
  • Number of clicks and impressions
  • Unique visitors from search
  • Page-level performance in search

This kind of data is very helpful for optimizing content and informing your SEO strategy.

AdSense/Ads (monetization)

If you use Google’s systems to run ads, Site Kit can show data on ad impressions, top-earning pages, and estimated revenue from auto ads, for instance. Simply connect the services to see the data. Remember that it doesn’t replace the AdSense dashboards, but it does give you quick insights.

Reader Revenue Manager

Reader Revenue Manager is a Google tool for adding subscription and contribution options to your website. It’s designed for publishers and content creators who want to monetize their content through reader support, such as recurring memberships or one-time donations.

With Site Kit, you can connect Reader Revenue Manager to your WordPress site in just a few clicks. Once linked, it adds the necessary code to your site automatically, so you don’t need to add tags or install it manually. This feature is optional in Site Kit and is mostly used by publishers offering paywalled or premium content.

PageSpeed Insights

Site Kit runs a PageSpeed test directly inside WordPress. In the PageSpeed Insights section, you’ll see both lab data and field data. Lab data is based on simulated testing in a controlled environment and helps you identify performance issues during development. Field data, on the other hand, reflects how real users experience your site across different devices and network conditions. Together, they provide a balanced view of how your pages perform.

The report shows load performance scores and data on Core Web Vitals (like LCP and CLS), and it gives suggestions for improving speed. But it only tests your homepage and doesn’t include custom settings. For full reports, you can still visit PageSpeed Insights separately.
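Site Kit doesn’t expose this, but if you’re comfortable with a little code, the public PageSpeed Insights v5 API returns both lab and field data for any URL, not just your homepage. Here’s a minimal sketch in TypeScript; the page URL and the two metrics read out at the end are illustrative, and heavier use requires an API key:

```typescript
// Minimal sketch: query the PageSpeed Insights v5 API for any page,
// not just the homepage that Site Kit tests. The target URL is a placeholder.
async function pageSpeedReport(pageUrl: string): Promise<void> {
  const endpoint =
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
    `?url=${encodeURIComponent(pageUrl)}&strategy=mobile`;

  const response = await fetch(endpoint);
  const report = await response.json();

  // Lab data: Lighthouse's simulated performance score, from 0 to 1.
  console.log("Lab score:", report.lighthouseResult?.categories?.performance?.score);

  // Field data: real-user LCP at the 75th percentile, in milliseconds.
  console.log(
    "Field LCP (ms):",
    report.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile
  );
}

pageSpeedReport("https://example.com/a-page-to-test");
```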

Tag Manager

You can link a Google Tag Manager container through Site Kit. This lets you manage third-party scripts (like Facebook Pixel or custom tracking tags) from one place. The plugin doesn’t give you a full interface for editing tags — you’ll do that inside the Tag Manager platform.
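To give a feel for the division of labor: your site (or a plugin) only pushes events onto the page’s dataLayer, and everything else, which tags fire and when, is configured inside Tag Manager itself. A minimal sketch in TypeScript, where the event name and form ID are hypothetical:

```typescript
// Minimal sketch: push a custom event to Google Tag Manager's dataLayer.
// Assumes the GTM container snippet is already on the page (Site Kit can place it).
// The event name and formId value are hypothetical examples.
declare global {
  interface Window {
    dataLayer?: Record<string, unknown>[];
  }
}

window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: "newsletter_signup", // a GTM trigger can listen for this name
  formId: "footer-form",
});

export {}; // keeps this file a module so the global declaration is valid
```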

Managing Analytics in Site Kit by Google

For most site owners or managers, Analytics and Search Console are the most important Google tools. Site Kit makes it easy to set those two services up properly. Of course, you can also use existing accounts.

Enhanced measurement support

GA4 also has Enhanced Measurement, which tracks scrolls, outbound links, file downloads, and other actions automatically. If you activate these in your GA4 property, Site Kit can track them. Unfortunately, it’s not possible to choose which ones to turn on from inside WordPress; you need to go into your GA4 settings for that. 

Event tracking and tag insertion

Site Kit doesn’t support event setup or tracking reports inside the WordPress dashboard. If you need full control over events, you have to use GA4 directly or use Tag Manager to set up the custom events.
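For context, this is roughly what a custom event looks like when sent straight through gtag.js, which is the route Site Kit leaves to you. A sketch assuming the GA4 tag is already on the page; the event name and parameters are hypothetical:

```typescript
// Minimal sketch: send a custom GA4 event via gtag.js.
// Assumes the gtag snippet is already loaded (Site Kit adds it when you
// connect GA4). The event name and parameters are hypothetical examples.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

gtag("event", "file_download", {
  file_name: "pricing-sheet.pdf",
  link_url: "/downloads/pricing-sheet.pdf",
});
```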

Limitations of Analytics in Site Kit

You’ll probably understand by now that Site Kit is not a replacement for GA4 — it’s a neat tool that gives quick insights and nothing more. You don’t get access to funnel reports, attribution models, or filters. You can’t edit events or see predictive metrics, and there’s no support for GA4 audiences or Google Analytics 360.

What’s Enhanced Conversion tracking?

With Enhanced Conversions, you can connect Google Ads clicks to leads or form submissions. This improves the reporting of these events when users are on different devices or block cookies. After setting this up, Site Kit will detect form submissions and pass the data to Google Ads.

Site Kit currently supports some of the most popular WordPress contact form plugins, such as Contact Form 7, WPForms, and Ninja Forms. However, if you use an unsupported custom form, Site Kit can’t automatically add enhanced conversions. 
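If you do need enhanced conversions on an unsupported form, the fallback is Google’s documented gtag route: supply the user data yourself, then fire the conversion event. A rough sketch, with a placeholder email and conversion label:

```typescript
// Minimal sketch: manually supply user data for Google Ads Enhanced
// Conversions when Site Kit can't detect your form. Assumes the Google tag
// is already on the page; the email and the send_to label are placeholders.
declare function gtag(command: string, ...args: unknown[]): void;

function reportLead(email: string): void {
  // gtag normalizes and hashes this first-party data before sending it.
  gtag("set", "user_data", { email });

  // Fire the conversion event tied to your Google Ads conversion action.
  gtag("event", "conversion", {
    send_to: "AW-XXXXXXXXX/placeholder-label",
  });
}

reportLead("visitor@example.com");
```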

Again, Site Kit has many limitations in this area. For instance, it doesn’t support purchase-based eCommerce conversions or offline conversions. It also doesn’t support pixel-level tracking, third-party forms, popups, or embedded forms. So, it’s specifically designed for simple lead form submissions.

Key Metrics widget for quick performance insights

Key Metrics are a very valuable addition as they give quick insights into data of your choosing. They’re quick to understand but not very in-depth. For key strategy decisions, you’re going to need more data.

This widget pulls together important GA4 and Search Console data into a block on your dashboard. You can choose which metrics to show and reorder them. To change your selection, click the Change metrics button in the corner of the Key Metrics section. You can also rerun the setup questions from the Site Kit admin settings.

Each metric includes a figure and a trend comparison against the previous period. For example, you may see engagement is “up 6%” compared to the previous 28 days. Click any of them to open the full source report in GA4 or Search Console.

The widget has limitations. It doesn’t show custom events or real-time reporting, campaign attribution breakdowns, or GA4-specific collections like audiences or conversions. The widget and Site Kit, in general, are for broad insights, not advanced analytics. 

The Site Kit Key Metrics widget shows various data that you can tailor to your needs and goals

Is Site Kit by Google enough for your goals?

Site Kit is a good starting point for most WordPress users. It brings together valuable Google data without requiring much work from you. But whether it’s enough depends on what you need to get from your analytics and tracking tools.

SEO and content insights

Site Kit is not an SEO plugin like Yoast SEO. However, it does give you data from Search Console that will help you understand how people find your website in the search results. With this, you’ll form an understanding of which content works well and how your site is performing in search.

However, as mentioned, it’s not an SEO plugin, so you need to install a tool like Yoast SEO to do much of the heavy lifting. Plugins like these help with most SEO tasks, like fixing technical issues, adding structured data, and improving your content. 

Monetization

If you’re running ads, Site Kit shows basic ad metrics like impressions, estimated earnings, and top-earning pages. It helps you monitor your ads without having to log into another app. 

It doesn’t support advanced ad setups, and you can’t manually place ads. It’s also not possible to optimize layouts based on behavior or run A/B tests to find the best ad format. If you’re working with multiple ad networks, you’ll need a tool that can do a lot more than Site Kit.

Marketing analytics

For reporting basics, Site Kit will do just fine. You can see trends in users, sessions, referral sources, and engagement time — all brought to you by Google Analytics 4. 

However, Site Kit doesn’t give access to campaign statistics, UTM tracking, or event-based funnels. It also doesn’t offer the option to set goals or segment traffic by behavior. For these kinds of insights, you need to dive straight into GA4 or use a more in-depth reporting tool. If you run marketing campaigns, track conversions, or use CRM tools, Site Kit won’t provide enough data. 

eCommerce and advanced use cases

For eCommerce, Site Kit won’t cut it. It doesn’t integrate with WooCommerce and doesn’t offer a revenue tracking option. It also doesn’t have access to carts, products, transactions, or customer behavior. There’s no way to measure things like average order value or conversion rates. 

For advanced eCommerce tracking, you need to set this up in GA4 directly or use other methods to access this data. Site Kit doesn’t support this at all. 

Should you use Site Kit by Google?

Site Kit is a good option if you want a free tool to view traffic, search, and performance statistics without having to set up a bunch of tools. It’s very easy to use and useful enough for small websites. 

If you’re running a huge publication or an online store, need to track custom campaigns, or manage a large number of ad accounts, Site Kit won’t cut it. That’s not to say it’s useless for those cases. One of its biggest draws is that it makes setting up GA4, Search Console, Ads, and Tag Manager accounts incredibly easy. It’s a great starting point to build your analytics toolkit upon.

A US court just put ownership of CRISPR back in play

The CRISPR patents are back in play.

On Monday, the US Court of Appeals for the Federal Circuit said scientists Jennifer Doudna and Emmanuelle Charpentier will get another chance to show they ought to own the key patents on what many consider the defining biotechnology invention of the 21st century.

The pair shared a 2020 Nobel Prize for developing the versatile gene-editing system, which is already being used to treat various genetic disorders, including sickle cell disease.

But when key US patent rights were granted in 2014 to researcher Feng Zhang of the Broad Institute of MIT and Harvard, the decision set off a bitter dispute in which hundreds of millions of dollars—as well as scientific bragging rights—are at stake.

The new decision is a boost for the Nobelists, who had previously faced a string of demoralizing reversals over the patent rights in both the US and Europe.

“This goes to who was the first to invent, who has priority, and who is entitled to the broadest patents,” says Jacob Sherkow, a law professor at the University of Illinois. 

He says there is now at least a chance that Doudna and Charpentier “could walk away as the clear winner.”

The CRISPR patent battle is among the most byzantine ever, putting the technology alongside the steam engine, the telephone, the lightbulb, and the laser among the most hotly contested inventions in history.

In 2012, Doudna and Charpentier were first to publish a description of a CRISPR gene editor that could be programmed to precisely cut DNA in a test tube. There’s no dispute about that.

However, the patent fight relates to the use of CRISPR to edit inside animal cells—like those of human beings. That’s considered a distinct invention, and one both sides say they were first to come up with that very same year. 

In patent law, this moment is known as conception—the instant a lightbulb appears over an inventor’s head, revealing a definite and workable plan for how an invention is going to function.

In 2022, a specialized body called the Patent Trial and Appeal Board, or PTAB, decided that Doudna and Charpentier hadn’t fully conceived the invention because they initially encountered trouble getting their editor to work in fish and other species. Indeed, they had so much trouble that Zhang scooped them with a 2013 publication demonstrating he could use CRISPR to edit human cells.

The Nobelists appealed the finding, and yesterday the appeals court vacated it, saying the patent board applied the wrong standard and needs to reconsider the case. 

According to the court, Doudna and Charpentier didn’t have to “know their invention would work” to get credit for conceiving it. What could matter more, the court said, is that it actually did work in the end. 

In a statement, the University of California, Berkeley, applauded the call for a do-over.  

“Today’s decision creates an opportunity for the PTAB to reevaluate the evidence under the correct legal standard and confirm what the rest of the world has recognized: that the Doudna and Charpentier team were the first to develop this groundbreaking technology for the world to share,” Jeff Lamken, one of Berkeley’s attorneys, said in the statement.

The Broad Institute posted a statement saying it is “confident” the appeals board “will again confirm Broad’s patents, because the underlying facts have not changed.”

The decision is likely to reopen the investigation into what was written in 13-year-old lab notebooks and whether Zhang based his research, in part, on what he learned from Doudna and Charpentier’s publications. 

The case will now return to the patent board for a further look, although Sherkow says the court finding can also be appealed directly to the US Supreme Court. 

Police tech can sidestep facial recognition bans now

Six months ago I attended the largest gathering of chiefs of police in the US to see how they’re using AI. I found some big developments, like officers getting AI to write their police reports. Today, I published a new story that shows just how far AI for police has developed since then. 

It’s about a new method police departments and federal agencies have found to track people: an AI tool that uses attributes like body size, gender, hair color and style, clothing, and accessories instead of faces. It offers a way around laws curbing the use of facial recognition, which are on the rise. 

Advocates from the ACLU, after learning of the tool through MIT Technology Review, said it was the first instance they’d seen of such a tracking system used at scale in the US, and they say it has a high potential for abuse by federal agencies. They say the prospect that AI will enable more powerful surveillance is especially alarming at a time when the Trump administration is pushing for more monitoring of protesters, immigrants, and students. 

I hope you read the full story for the details, and to watch a demo video of how the system works. But first, let’s talk for a moment about what this tells us about the development of police tech and what rules, if any, these departments are subject to in the age of AI.

As I pointed out in my story six months ago, police departments in the US have extraordinary independence. There are more than 18,000 departments in the country, and they generally have lots of discretion over what technology they spend their budgets on. In recent years, that technology has increasingly become AI-centric. 

Companies like Flock and Axon sell suites of sensors—cameras, license plate readers, gunshot detectors, drones—and then offer AI tools to make sense of that ocean of data (at last year’s conference I saw schmoozing between countless AI-for-police startups and the chiefs they sell to on the expo floor). Departments say these technologies save time, ease officer shortages, and help cut down on response times. 

Those sound like fine goals, but this pace of adoption raises an obvious question: Who makes the rules here? When does the use of AI cross over from efficiency into surveillance, and what type of transparency is owed to the public?

In some cases, AI-powered police tech is already driving a wedge between departments and the communities they serve. When the police in Chula Vista, California, were the first in the country to get special waivers from the Federal Aviation Administration to fly their drones farther than normal, they said the drones would be deployed to solve crimes and get people help sooner in emergencies. They’ve had some successes.

But the department has also been sued by a local media outlet alleging it has reneged on its promise to make drone footage public, and residents have said the drones buzzing overhead feel like an invasion of privacy. An investigation found that these drones were deployed more often in poor neighborhoods, and for minor issues like loud music. 

Jay Stanley, a senior policy analyst at the ACLU, says there’s no overarching federal law that governs how local police departments adopt technologies like the tracking software I wrote about. Departments usually have the leeway to try it first and see how their communities react after the fact. (Veritone, which makes the tool I wrote about, said it couldn’t name or connect me with departments using it, so the details of how it’s being deployed by police are not yet clear.)

Sometimes communities take a firm stand; local laws against police use of facial recognition have been passed around the country. But departments—or the police tech companies they buy from—can find workarounds. Stanley says the new tracking software I wrote about poses lots of the same issues as facial recognition while escaping scrutiny because it doesn’t technically use biometric data.

“The community should be very skeptical of this kind of tech and, at a minimum, ask a lot of questions,” he says. He laid out a road map of what police departments should do before they adopt AI technologies: have hearings with the public, get community permission, and make promises about how the systems will and will not be used. He added that the companies making this tech should also allow it to be tested by independent parties. 

“This is all coming down the pike,” he says—and so quickly that policymakers and the public have little time to keep up. He adds, “Are these powers we want the police—the authorities that serve us—to have, and if so, under what conditions?”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The Download: CRISPR in court, and the police’s ban-skirting AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A US court just put ownership of CRISPR back in play

The CRISPR patents are back in play.

Yesterday, the US Court of Appeals for the Federal Circuit said scientists Jennifer Doudna and Emmanuelle Charpentier will get another chance to show they ought to own the key patents on what many consider the defining biotechnology invention of the 21st century.

The pair shared a 2020 Nobel Prize for developing the gene-editing system, which is already being used to treat various disorders.

But when US patent rights were granted in 2014 to Feng Zhang of the Broad Institute of MIT and Harvard, the decision set off a bitter dispute in which hundreds of millions of dollars—as well as scientific bragging rights—are at stake. Read the full story.

—Antonio Regalado

To read more about CRISPR, why not take a look at:

+ Charpentier and Doudna announced they wanted to cancel their own CRISPR patents in Europe last year. Read the full story.

+ How CRISPR will help the world cope with climate change. Read the full story.

+ The US has approved CRISPR pigs for food. Pigs whose DNA makes them resistant to a virus could be the first big consumer product using gene editing. Read the full story.

+ CRISPR will get easier and easier to administer. What does that mean for the future of our species?

Police tech can sidestep facial recognition bans now

—James O’Donnell

Six months ago I attended the largest gathering of chiefs of police in the US to see how they’re using AI. I found some big developments, like officers getting AI to write their reports. Now, I’ve published a new story that shows just how far AI for police has developed since then.

It’s about a new method police are using to track people: an AI tool that uses attributes like body size, gender, hair color and style, clothing, and accessories instead of faces. It offers a way around laws curbing the use of facial recognition, which are on the rise.

Here’s what this tells us about the development of police tech and what rules, if any, these departments are subject to in the age of AI. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Two Trump officials were denied access to the US Copyright Office 
Their visit came days after the administration fired the office’s head. (Wired $)
+ Shira Perlmutter oversaw a report raising concerns about training AI with copyrighted materials. (WP $)

2 Google knew it couldn’t monitor how Israel might use its cloud technology
But it went ahead with Project Nimbus anyway. (The Intercept)

3 Spain still doesn’t know what caused its massive power blackout
Investigators are examining generators’ cyber defences for weaknesses. (FT $)
+ Could solar power be to blame? (MIT Technology Review)

4 Apple is considering hiking the price of iPhones
The company doesn’t want to blame tariffs, though. (WSJ $)
+ Apple boss Tim Cook had a call with Trump following the tariff rollback news. (CNBC)
+ It’s reportedly developing an AI tool to extend phones’ battery life. (Bloomberg $)

5 Venture capitalists aren’t 100% sure what an AI agent is
That isn’t stopping companies from sinking millions into them. (TechCrunch)
+ Google is working on its own agent ahead of its I/O conference. (The Information $)
+ What AI assistants can—and can’t—do. (Vox)
+ Check out our AI agent explainer. (MIT Technology Review)

6 Scammers are stealing the identities of death row inmates
And prisoners are unlikely to see correspondence alerting them to the fraud. (NBC News)

7 Weight-loss drugs aren’t always enough
You need long-term changes in health, not just weight. (The Atlantic $)
+ How is Trump planning to lower drug costs, exactly? (NY Mag $)
+ Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)

8 China’s e-commerce giants are racing to deliver goods within an hour
As competition has intensified, companies are fighting to be the quickest. (Reuters)

9 This spacecraft will police satellites’ orbits 🛰
And hunt them down where necessary. (IEEE Spectrum)
+ The world’s biggest space-based radar will measure Earth’s forests from orbit. (MIT Technology Review)

10 Is your beard trimmer broken? Simply 3D-print a new part.
Philips is experimenting with letting its customers create their own replacements. (The Verge)

Quote of the day

“We usually set it up so that our team doesn’t get to creep in.”

—Angie Saltman, founder and president of tech company Saltmedia, explains how her company helps store Indigenous data securely away from the Trump administration, The Verge reports.

One more thing

Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Drones have come to define the brutal conflict in Ukraine that has now dragged on for more than three years. And most rely on radio communications—a technology that Serhii “Flash” Beskrestnov has obsessed over since childhood.

While Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio. Once a month, he studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

In this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that. Read the full story.

—Charlie Metcalfe

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Tune in at any time to the Coral City Camera, an underwater camera streaming live from an urban coral reef in Miami 🐠
+ Inhuman Resources, which mixes gaming, reading, and listening, sounds nuts.
+ This compilation of 331 film clips to recreate Eminem’s Lose Yourself is spectacular.
+ Questions I never thought I’d ask: what if Bigfoot were British?

Why climate researchers are taking the temperature of mountain snow

On a crisp morning in early April, Dan McEvoy and Bjoern Bingham cut clean lines down a wide run at the Heavenly Ski Resort in South Lake Tahoe, then ducked under a rope line cordoning off a patch of untouched snow. 

They side-stepped up a small incline, poled past a row of Jeffrey pines, then dropped their packs. 

The pair of climate researchers from the Desert Research Institute (DRI) in Reno, Nevada, skied down to this research plot in the middle of the resort to test out a new way to take the temperature of the Sierra Nevada snowpack. They were equipped with an experimental infrared device that can take readings as it’s lowered down a hole in the snow to the ground.

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. As it melts through the spring and summer, dam operators, water agencies, and communities have to manage the flow of billions of gallons of runoff, storing up enough to get through the inevitable dry summer months without allowing reservoirs and canals to flood.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods. 

In the past, it has been arduous work to gather such snowpack observations. Now, a new generation of tools, techniques, and models promises to ease that process, improve water forecasts, and help California and other states safely manage one of their largest sources of water in the face of increasingly severe droughts and flooding.

Observers, however, fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies, including the one that oversees federal snowpack monitoring and survey work. That could jeopardize ongoing efforts to produce the water data and forecasts on which Western communities rely.

“If we don’t have those measurements, it’s like driving your car around without a fuel gauge,” says Larry O’Neill, Oregon’s state climatologist. “We won’t know how much water is up in the mountains, and whether there’s enough to last through the summer.”

The birth of snow surveys

The snow survey program in the US was born near Lake Tahoe, the largest alpine lake in North America, around the turn of the 20th century. 

Without any reliable way of knowing how much water would flow down the mountain each spring, lakefront home and business owners, fearing floods, implored dam operators to release water early in the spring. Downstream communities and farmers pushed back, however, demanding that the dam be used to hold on to as much water as possible to avoid shortages later in the year.

In 1908, James Church, a classics professor at the University of Nevada, Reno, whose passion for hiking around the mountains sparked an interest in the science of snow, invented a device that helped resolve the so-called Lake Tahoe Water Wars: the Mt. Rose snow sampler, named after the peak of a Sierra spur that juts into Nevada.

James Church, a professor of classics at the University of Nevada, Reno, became a pioneer in the field of snow surveys.
COURTESY OF UNIVERSITY OF NEVADA, RENO

It’s a simple enough device, with sections of tube that screw together, a sharpened end, and measurement ticks along the side. Snow surveyors measure the depth of the snow by plunging the sampler down to the ground. They then weigh the filled tube on a specialized scale to calculate the water content of the snow. 
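The arithmetic behind the sampler is simple: because the scale effectively weighs the water locked in the core, the snow water equivalent (the depth of water the snow would yield if melted) falls straight out of the core’s mass. As a rough sketch, with symbols of my own choosing rather than the program’s:

$$ \mathrm{SWE} = \frac{m}{\rho_w \, A} $$

where $m$ is the mass of the snow core, $\rho_w$ is the density of water (1,000 kg/m³), and $A$ is the tube’s cross-sectional area. A 1-kilogram core from a tube with a 10 cm² opening, for example, works out to about a meter of water.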

Church used the device to take measurements at various points across the range, and calibrated his water forecasts by comparing his readings against the rising and falling levels of Lake Tahoe. 

It worked so well that the US began a federal snow survey program in the mid-1930s, which evolved into the one carried on today by the Department of Agriculture’s Natural Resources Conservation Service (NRCS). Throughout the winter, hundreds of snow surveyors across the American West head up to established locations on snowshoes, backcountry skis, or snowmobiles to deploy their Mt. Rose samplers, which have barely changed over more than a century. 

In the 1960s, the US government also began setting up a network of permanent monitoring sites across the mountains, now known as the SNOTEL network. There are more than 900 stations continuously transmitting readings from across Western states and Alaska. They’re equipped with sensors that measure air temperature, snow depth, and soil moisture, and include pressure-sensitive “snow pillows” that weigh the snow to determine the water content. 

The data from the snow surveys and SNOTEL sites all flows into snow depth and snow water content reports that the NRCS publishes, along with forecasts of the amount of water that will fill the streams and reservoirs through the spring and summer.

Taking the temperature

None of these survey and monitoring programs, however, provide the temperature throughout the snowpack. 

The Sierra Nevada snowpack can reach more than 6 meters (20 feet), and the temperature within it may vary widely, especially toward the top. Readings taken at increments throughout can determine what’s known as the cold content, or the amount of energy required to shift the snowpack to a uniform temperature of 32˚F. 

Knowing the cold content of the snowpack helps researchers understand the conditions under which it will begin to rapidly melt, particularly as it warms up in the spring or after rain falls on top of the snow.
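A rough sketch of the standard formulation, with illustrative numbers (the variable names are my labels, not the researchers’):

$$ \mathrm{CC} = \rho_s \, c_i \, d \, (T_m - T_s) $$

where $\rho_s$ is the snowpack’s density, $c_i \approx 2{,}102$ J/(kg·K) is the specific heat of ice, $d$ is the snow depth, $T_m$ is the melting point, and $T_s$ is the average snowpack temperature. By that estimate, two meters of 300 kg/m³ snow sitting at −5 °C holds about 6.3 megajoules of cold content per square meter, energy that must arrive before the pack can begin melting in earnest.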

If the temperature of the snow, for example, is close to 32˚F even at several feet deep, a few warm days could easily set it melting. If, on the other hand, the temperature measurements show a colder profile throughout the middle, the snowpack is more stable and will hold up longer as the weather warms.

Bjoern Bingham, a research scientist at the Desert Research Institute, digs a snow pit at a research plot within the Heavenly Ski Resort, near South Lake Tahoe, California.
JAMES TEMPLE

The problem is that taking the temperature of the entire snowpack has been, until now, tough and time-consuming work. When researchers do it at all, they mainly do so by digging snow pits down to the ground and then taking readings with probe thermometers along an inside wall.

There have been a variety of efforts to take continuous remote readings from sensors attached to fences, wires, or towers, which the snowpack eventually buries. But the movement and weight of the dense shifting snow tends to break the devices or snap the structures they’re assembled upon.

“They rarely last a season,” McEvoy says.

Anne Heggli, a professor of mountain hydrometeorology at DRI, happened upon the idea of using an infrared device to solve this problem during a tour of the institute’s campus in 2019, when she learned that researchers there were using an infrared meat thermometer to take contactless readings of the snow surface.

In 2021, Heggli began collaborating with RPM Systems, a gadget manufacturing company, to design an infrared device optimized for snowpack field conditions. The resulting snow temperature profiler is skinny enough to fit down a hole dug by snow surveyors and dangles on a cord marked off at 10-centimeter (4-inch) increments.

Bingham and Daniel McEvoy, an associate research professor at the Desert Research Institute, work together to take temperature readings from inside the snowpit as well as from within the hole left behind by a snow sampler.
JAMES TEMPLE

At Heavenly on that April morning, Bingham, a staff scientist at DRI, slowly fed the device down a snow sampler hole, calling out temperature readings at each marking. McEvoy scribbled them down on a worksheet fastened to his clipboard as he used a probe thermometer to take readings of his own from within a snow pit the pair had dug down to the ground.

They were comparing the measurements to assess the reliability of the infrared device in the field, but the eventual aim is to eliminate the need to dig snow pits. The hope is that state and federal surveyors could simply carry along a snow temperature profiler and drop it into the snowpack survey holes they’re creating anyway, to gather regular snowpack temperature readings from across the mountains.

In 2023, the US Bureau of Reclamation, the federal agency that operates many of the nation’s dams, funded a three-year research project to explore the use of the infrared gadgets in determining snowpack temperatures. Through it, the DRI research team has now handed devices out to 20 snow survey teams across California, Colorado, Idaho, Montana, Nevada, and Utah to test their use in the field and supplement the snowpack data they’re collecting.

The Snow Lab

The DRI research project is one piece of a wider effort to obtain snowpack temperature data across the mountains of the West.

By early May, the snow depth had dropped from an April peak of 114 inches to 24 inches (2.9 meters to 0.6 meters) at the UC Berkeley Central Sierra Snow Lab, an aging wooden structure perched in the high mountains northwest of Lake Tahoe.

Megan Mason, a research scientist at the lab, used a backcountry ski shovel to dig out a trio of instruments from what was left of the pitted snowpack behind the building. Each one featured different types of temperature sensors, arrayed along a strong polymer beam meant to hold up under the weight and movement of the Sierra snowpack.  

She was pulling up the devices after running the last set of observations for the season, as part of an effort to develop a resilient system that can survive the winter and transmit hourly temperature readings.

The lab is working on the project, dubbed the California Cold Content Initiative, in collaboration with the state’s Department of Water Resources. California is the only western state that opted to maintain its own snow survey program and run its own permanent monitoring stations, all of which are managed by the water department. 

The plan is to determine which instruments held up and functioned best this winter. Then, they can begin testing the most promising approaches at several additional sites next season. Eventually, the goal is to attach the devices at more than 100 of California’s snow monitoring stations, says Andrew Schwartz, the director of the lab.

The NRCS is conducting a similar research effort at select SNOTEL sites equipped with a beaded temperature cable. One such cable is visible at the Heavenly SNOTEL station, next to where McEvoy and Bingham dug their snow pit, strung vertically between an arm extended from the main tower and the snow-covered ground. 

DRI’s Bjoern Bingham feeds the snow temperature profiler, an infrared device, down a hole in the Sierra snowpack.
JAMES TEMPLE

Schwartz said that the different research groups are communicating and collaborating openly on the projects, all of which promise to provide complementary information, expanding the database of snowpack temperature readings across the West.

For decades, agencies and researchers generally produced water forecasts using relatively simple regression models that translated the amount of water in the snowpack into the amount of water that will flow down the mountain, based largely on the historic relationships between those variables. 

But these models are becoming less reliable as climate change alters temperatures, snow levels, melt rates, and evaporation, and otherwise pushes alpine weather outside of historic patterns.

“As we have years that scatter further and more frequently from the norm, our models aren’t prepared,” Heggli says.

Plugging direct temperature observations into more sophisticated models that have emerged in recent years, Schwartz says, promises to significantly improve the accuracy of water forecasts. That, in turn, should help communities manage through droughts and prevent dams from overtopping even as climate change fuels alternately wetter, drier, warmer, and weirder weather.

About a quarter of the world’s population relies on water stored in mountain snow and glaciers, and climate change is disrupting the hydrological cycles that sustain these natural frozen reservoirs in many parts of the world. So any advances in observations and modeling could deliver broader global benefits.

Ominous weather

There’s an obvious threat to this progress, though.

Even if these projects work as well as hoped, it’s not clear how widely these tools and techniques will be deployed at a time when the White House is gutting staff across federal agencies, terminating thousands of scientific grants, and striving to eliminate tens of billions of dollars in funding at research departments. 

The Trump administration has fired or put on administrative leave nearly 6,000 employees across the USDA, or 6% of the department’s workforce. Those cutbacks have reached regional NRCS offices, according to reporting by local and trade outlets.

That includes more than half of the roles at the Portland office, according to O’Neill, the state climatologist. Those reductions prompted a bipartisan group of legislators to call on the Secretary of Agriculture to restore the positions, warning the losses could impair water data and analyses that are crucial for the state’s “agriculture, wildland fire, hydropower, timber, and tourism sectors,” as the Statesman Journal reported.

There are more than 80 active SNOTEL stations in Oregon.

The fear is there won’t be enough people left to reach all the sites this summer to replace batteries, solar panels, and drifting or broken sensors, which could quickly undermine the reliability of the data or cut off the flow of information. 

“Staff and budget reductions at NRCS will make it impossible to maintain SNOTEL instruments and conduct routine manual observations, leading to inoperability of the network within a year,” the lawmakers warned.

The USDA and NRCS didn’t respond to inquiries from MIT Technology Review.

DRI’s Daniel McEvoy scribbles down temperature readings at the Heavenly site.
JAMES TEMPLE

If the federal cutbacks deplete the data coming back from SNOTEL stations or federal snow survey work, the DRI infrared method could at least “still offer a simplistic way of measuring the snowpack temperatures” in places where state and regional agencies continue to carry out surveys, McEvoy says.

But most researchers stress the field needs more surveys, stations, sensors, and readings to understand how the climate and water cycles are changing from month to month and season to season. Heggli stresses that there should be broad bipartisan support for programs that collect snowpack data and provide the water forecasts that farmers and communities rely on. 

“This is how we account for one of, if not the, most valuable resource we have,” she says. “In the West, we go into a seasonal drought every summer; our snowpack is what trickles down and gets us through that drought. We need to know how much we have.”

The Download: taking the temperature of snow, and the future of privacy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why climate researchers are taking the temperature of mountain snow

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. As it melts through the spring and summer, dam operators, water agencies, and communities have to manage the flow of billions of gallons of runoff, storing up enough to get through the dry summer months without allowing reservoirs and canals to flood.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods.

In the past, it was hard work to gather this data. Now, a new generation of tools, techniques, and models promises to ease that process, improve water forecasts, and help California and other states manage in the face of increasingly severe droughts and flooding. However, observers fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies. Read the full story.

—James Temple

MIT Technology Review Narrated: What’s next for our privacy?

The US still has no federal privacy law. But recent enforcement actions against data brokers may offer some new protections for Americans’ personal information.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is warning other countries not to use Huawei’s chips
If they do, they may face criminal penalties for breaching US export controls. (FT $)
+ The Trump administration has axed the ‘AI Diffusion Rule’ for chips. (WSJ $)
+ It may move towards negotiating deals with countries directly. (Bloomberg $)

2 US tech firms are inking AI deals with the Middle East
Among the biggest of which is Nvidia. (The Guardian)
+ Tech leaders accompanied Trump on his trip. (WP $)

3 A new treatment for inherited breast cancer was trialed successfully
The drug olaparib can help to significantly improve survival rates. (BBC)

4 TikTok workers fear a new messaging feature could be exploited
But the company is pressing ahead with it anyway. (The Information $)

5 Apple is working on brain-computer interfaces for its products
People with brain implants could one day use them to control their devices. (WSJ $)
+ Brain-computer interfaces face a critical test. (MIT Technology Review)

6 What’s next for NASA?
The agency is poised for its most radical shakeup in decades. (Ars Technica)
+ NASA has made an air traffic control system for drones. (MIT Technology Review)

7 Finland is harvesting heat from its data centers
While no data center is good for the environment, this helps lessen their footprint. (Bloomberg $)
+ The next data center hub? India. (FT $)
+ These four charts sum up the state of AI and energy. (MIT Technology Review)

8 Airbnb wants to become the next everything-app
It wants to expand beyond vacations and into a community platform, apparently. (Wired $)
+ Hotel-like services, anyone? (NYT $)
+ Its host features have been overhauled, too. (The Verge)

9 The FBI is buying new tech to help it see through walls
Thanks to radar. (New Scientist $)

10 Baidu is planning to launch its robotaxis in Europe 🚗
In a bid to extend its competitive advantage overseas. (WSJ $)
+ How Wayve’s driverless cars will meet one of their biggest challenges yet. (MIT Technology Review)

Quote of the day

“It’s literally Einstein’s proverbial definition of insanity: doing the same thing over and over again and expecting a different result.”

—Entrepreneur Arnaud Bertrand reflects on America’s latest attempt to rein in Huawei in a post on X.

One more thing

How DeepSeek ripped up the AI playbook—and why everyone’s going to follow its lead

When the Chinese firm DeepSeek dropped a large language model called R1 at the start of this year, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost—and given away for free.

DeepSeek has now suddenly become the company to beat. What exactly did it do to rattle the tech world so fully? Is the hype justified? And what can we learn from the buzz about what’s coming next? Here’s what you need to know.  

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Happy birthday to the one, the only Stevie Wonder—75 years young this week.
+ A scooped bagel? Not on my watch. 🥯
+ A few tips on how to navigate some of life’s trickier conversations with ease.
+ Everyone’s got a random junk drawer. Here’s how to get it under control.

The first US hub for experimental medical treatments is coming

A bill that allows medical clinics to sell unproven treatments has been passed in Montana. 

Under the legislation, doctors can apply for a license to open an experimental treatment clinic and recommend and sell therapies not approved by the Food and Drug Administration (FDA) to their patients. Once it’s signed by the governor, the law will be the most expansive in the country in allowing access to drugs that have not been fully tested. 

The bill allows for any drug produced in the state to be sold in it, providing it has been through phase I clinical trials—the initial, generally small, first-in-human studies that are designed to check that a new treatment is not harmful. These trials do not determine if the drug is effective.

The bill, which was passed by the state legislature on April 29 and is expected to be signed by Governor Greg Gianforte, essentially expands on existing Right to Try legislation in the state. But while that law was originally designed to allow terminally ill people to access experimental drugs, the new bill was drafted and lobbied for by people interested in extending human lifespans—a group of longevity enthusiasts that includes scientists, libertarians, and influencers.  

These longevity enthusiasts are hoping Montana will serve as a test bed for opening up access to experimental drugs. “I see no reason why it couldn’t be adopted by most of the other states,” said Todd White, speaking to an audience of policymakers and others interested in longevity at an event late last month in Washington, DC. White, who helped develop the bill and directs a research organization focused on aging, added that “there are some things that can be done at the federal level to allow Right to Try laws to proliferate more readily.” 

Supporters of the bill say it gives individuals the freedom to make choices about their own bodies. At the same event, bioethicist Jessica Flanigan of the University of Richmond said she was “optimistic” about the measure, because “it’s great any time anybody is trying to give people back their medical autonomy.” 

Ultimately, they hope that the new law will enable people to try unproven drugs that might help them live longer, make it easier for Americans to try experimental treatments without having to travel abroad, and potentially turn Montana into a medical tourism hub.

But ethicists and legal scholars aren’t as optimistic. “I hate it,” bioethicist Alison Bateman-House of New York University says of the bill. She and others are worried about the ethics of promoting and selling unproven treatments—and the risks of harm should something go wrong.

Easy access?

No drugs have been approved to treat human aging. Some in the longevity field believe that regulation has held back the development of such drugs. In the US, federal law requires that drugs be shown to be both safe and effective before they can be sold. That requirement was made law in the 1960s following the thalidomide tragedy, in which women who took the drug for morning sickness had babies with sometimes severe disabilities. Since then, the FDA has been responsible for the approval of new drugs.  

Typically, new drugs are put through a series of human trials. Phase I trials generally involve between 20 and 100 volunteers and are designed to check that the drug is safe for humans. If it is, the drug is then tested in larger groups of hundreds, and then thousands, of volunteers to assess the dose and whether it actually works. Once a drug is approved, people who are prescribed it are monitored for side effects. The entire process is slow, and it can last more than a decade—a particular pain point for people who are acutely aware of their own aging. 

But some exceptions have been made for people who are terminally ill under Right to Try laws. Those laws allow certain individuals to apply for access to experimental treatments that have been through phase I clinical trials but have not received FDA approval.

Montana first passed a Right to Try law in 2015 (a federal law was passed around three years later). Then in 2023, the state expanded the law to include all patients there, not just those with terminal illnesses—meaning that any person in Montana could, in theory, take a drug that had been through only a phase I trial.

At the time, this was cheered by many longevity enthusiasts—some of whom had helped craft the expanded measure.

But practically, the change hasn’t worked out as they envisioned. “There was no licensing, no processing, no registration” for clinics that might want to offer those drugs, says White. “There needed to be another bill that provided regulatory clarity for service providers.” 

So the new legislation addresses “how clinics can set up shop in Montana,” says Dylan Livingston, founder and CEO of the Alliance for Longevity Initiatives, which hosted the DC event. Livingston built A4LI, as it’s known, a few years ago as a lobbying group for the science of human aging and longevity.

Livingston, who is exploring multiple approaches both to improve funding for scientific research and to change drug regulation, helped develop and push the 2023 bill in Montana with the support of State Senator Kenneth Bogner. “I gave [Bogner] a menu of things that could be done at the state level … and he loved the idea” of turning Montana into a medical tourism hub, he says. 

After all, as things stand, plenty of Americans travel abroad to receive experimental treatments that cannot legally be sold in the US, including expensive, unproven stem cell and gene therapies, says Livingston. 

“If you’re going to go and get an experimental gene therapy, you might as well keep it in the country,” he says. Livingston has suggested that others might be interested in trying a novel drug designed to clear aged “senescent” cells from the body, which is currently entering phase II trials for an eye condition caused by diabetes. “One: let’s keep the money in the country, and two: if I was a millionaire getting an experimental gene therapy, I’d rather be in Montana than Honduras.”

“Los Alamos for longevity”

Honduras, in particular, has become something of a home base for longevity experiments. The island of Roatán is home to the Global Alliance for Regenerative Medicine clinic, which, along with various stem cell products, sells a controversial, unproven “anti-aging” gene therapy for around $20,000 to customers including the wealthy longevity influencer Bryan Johnson.

Tech entrepreneur and longevity enthusiast Niklas Anzinger has also founded the city of Infinita in Próspera, the region’s special economic zone, a privately run city where residents are able to make their own suggestions for medical regulations. It’s his second community there, part of an effort to build a “Los Alamos for longevity” on the island, a place where biotech companies can develop therapies that slow or reverse human aging “at warp speed,” and where individuals are free to take those experimental treatments. (The first community, Vitalia, featured a biohacking lab but came to an end following a disagreement between its two founders.) 

Anzinger collaborated with White, the longevity enthusiast who spoke at the A4LI event (and is an advisor to Infinita VC, Anzinger’s investment company), to help put together the new Montana bill. “He asked if I would help him try to advance the new bill, so that’s what we did for the last few months,” says White, who trained as an electrical engineer but left his career in telecommunications to work with an organization that uses blockchain to fund research into extending human lifespans. 

“Right to Try has always been this thing [for people] who are terminal[ly ill] and trying a Hail Mary approach to solving these things; now Right to Try laws are being used to allow you to access treatments earlier,” White told the audience at the A4LI event. “Making it so that people can use longevity medicines earlier is, I think, a very important thing.”

The new bill largely sets out the “infrastructure” for clinics that want to sell experimental treatments, says White. It states that clinics will need to have a license, for example, and that this must be renewed on an annual basis. 

“Now somebody who actually wants to deliver drugs under the Right to Try law will be able to do so,” he says. The new legislation also protects prescribing doctors from disciplinary action.

And it sets out requirements for informed consent that go further than those of existing Right to Try laws. Before a person takes an experimental drug under the new law, they will be required to provide written consent that includes a list of approved alternative drugs and a description of the worst potential outcome.

On the safe side

“In the Montana law, we explicitly enhanced the requirements for informed consent,” Anzinger told an audience at the same A4LI event. This, along with the fact that the treatments will have been through phase I clinical trials, will help to keep people safe, he argued. “We have to treat this with a very large degree of responsibility,” he added.

“We obviously don’t want to be killing people,” says Livingston. 

But he also adds that he, personally, won’t be signing up for any experimental treatments. “I want to be the 10 millionth, or even the 50 millionth, person to get the gene therapy,” he says. “I’m not that adventurous … I’ll let other people go first.”

Others are indeed concerned that, for the “adventurous” people, these experimental treatments won’t necessarily be safe. Phase I trials are tiny, often involving fewer than 50 people, all of whom are typically in good health. A trial like that won’t tell you much about side effects that show up in only 5% of people, for example, or about interactions the drug might have with other medicines.

Around 90% of drug candidates in clinical trials fail. And around 17% of drugs fail late-stage clinical trials because of safety concerns. Even those that make it all the way through clinical trials and get approved by the FDA can still end up being withdrawn from the market when rare but serious side effects show up. Between 1992 and 2023, 23 drugs that were given accelerated approval for cancer indications were later withdrawn from the market. And between 1950 and 2013, 95 drugs were withdrawn for a reason recorded simply as “death.”

“It’s disturbing that they want to make drugs available after phase I testing,” says Sharona Hoffman, professor of law and bioethics at Case Western Reserve University in Cleveland, Ohio. “This could endanger patients.”

“Famously, the doctor’s first obligation is to first do no harm,” says Bateman-House. “If [a drug] has not been through clinical trials, how do you have any standing on which to think it isn’t going to do any harm?”

But supporters of the bill argue that individuals can make their own decisions about risk. When speaking at the A4LI event, Flanigan introduced herself as a bioethicist before adding “but don’t hold it against me; we’re not all so bad.” She argued that current drug regulations impose a “massive amount of restrictions on your bodily rights and your medical freedom.” Why should public officials be the ones making decisions about what’s safe for people? Individuals, she argued, should be empowered to make those judgments themselves.

Other ethicists counter that this isn’t an issue of people’s rights. There are lots of generally accepted laws about when we can access drugs, says Hoffman; people aren’t allowed to drink and drive because they might kill someone. “So, no, you don’t have a right to ingest everything you want if there are risks associated with it.”

The idea that individuals have a right to access experimental treatments has in fact failed in US courts in the past, says Carl Coleman, a bioethicist and legal scholar at Seton Hall in New Jersey. 

He points to a case from 20 years ago: In the early 2000s, Frank Burroughs founded the Abigail Alliance for Better Access to Developmental Drugs. His daughter, Abigail Burroughs, had head and neck cancer, and she had tried and failed to access experimental drugs. In 2003, about two years after Abigail’s death, the group sued the FDA, arguing that people with terminal cancer have a constitutionally protected right to access experimental, unapproved treatments, once those treatments have been through phase I trials. In 2007, however, a court rejected that argument, determining that terminally ill individuals do not have a constitutional right to experimental drugs.

Bateman-House also questions a provision in the Montana bill that claims to make treatments more equitable. It states that “experimental treatment centers” should allocate 2% of their net annual profits “to support access to experimental treatments and healthcare for qualifying Montana residents.” Bateman-House says she’s never seen that kind of language in a bill before. It may sound positive, but it could in practice introduce even more risk to the local community. “On the one hand, I like equity,” she says. “On the other hand, I don’t like equity to snake oil.”

After all, the doctors prescribing these drugs won’t know if they will work. It is never ethical to make somebody pay for a treatment when you don’t have any idea whether it will work, Bateman-House adds. “That’s how the US system has been structured: There’s no profit without evidence of safety and efficacy.”

The clinics are coming

Any clinics that offer experimental treatments in Montana will be allowed to sell only drugs that have been made within the state, says Coleman. “Federal law requires any drug that is going to be distributed in interstate commerce to have FDA approval,” he says.

White isn’t too worried about that. Montana already has manufacturing facilities for biotech and pharmaceutical companies, including Pfizer. “That was one of the specific advantages [of focusing] on Montana, because everything can be done in state,” he says. He also believes that the current administration is “predisposed” to change federal laws around interstate drug manufacturing. (FDA commissioner Marty Makary has been a vocal critic of the agency and the pace at which it approves new drugs.)

At any rate, the clinics are coming to Montana, says Livingston. “We have half a dozen that are interested, and maybe two or three that are definitively going to set up shop out there.” He won’t name names, but he says some of the interested clinicians already have clinics in the US, while others are abroad. 

Mac Davis—founder and CEO of Minicircle, the company that developed the controversial “anti-aging” gene therapy—told MIT Technology Review he was “looking into it.”

“I think this can be an opportunity for America and Montana to really kind of corner the market when it comes to medical tourism,” says Livingston. “There is no other place in the world with this sort of regulatory environment.”

Google DeepMind’s new AI agent uses large language models to crack real-world problems

Google DeepMind has once again used large language models to discover new solutions to long-standing problems in math and computer science. This time the firm has shown that its approach can not only tackle unsolved theoretical puzzles, but improve a range of important real-world processes as well.

Google DeepMind’s new tool, called AlphaEvolve, uses the Gemini 2.0 family of large language models (LLMs) to produce code for a wide range of different tasks. LLMs are known to be hit-and-miss at coding. The twist here is that AlphaEvolve scores each of Gemini’s suggestions, throwing out the bad and tweaking the good in an iterative process, until it has produced the best algorithm it can. In many cases, the results are more efficient or more accurate than the best existing (human-written) solutions.

“You can see it as a sort of super coding agent,” says Pushmeet Kohli, a vice president at Google DeepMind who leads its AI for Science teams. “It doesn’t just propose a piece of code or an edit, it actually produces a result that maybe nobody was aware of.”

In particular, AlphaEvolve came up with a way to improve the software Google uses to allocate jobs to its many millions of servers around the world. Google DeepMind claims the company has been using this new software across all of its data centers for more than a year, freeing up 0.7% of Google’s total computing resources. That might not sound like much, but at Google’s scale it’s huge.

Jakob Moosbauer, a mathematician at the University of Warwick in the UK, is impressed. He says the way AlphaEvolve searches for algorithms that produce specific solutions—rather than searching for the solutions themselves—makes it especially powerful. “It makes the approach applicable to such a wide range of problems,” he says. “AI is becoming a tool that will be essential in mathematics and computer science.”

AlphaEvolve continues a line of work that Google DeepMind has been pursuing for years. Its vision is that AI can help to advance human knowledge across math and science. In 2022, it developed AlphaTensor, a model that found a faster way to solve matrix multiplications—a fundamental problem in computer science—beating a record that had stood for more than 50 years. In 2023, it revealed AlphaDev, which discovered faster ways to perform a number of basic calculations performed by computers trillions of times a day. AlphaTensor and AlphaDev both turn math problems into a kind of game, then search for a winning series of moves.

FunSearch, which arrived in late 2023, swapped the game-playing AI for LLMs that can generate code. Because LLMs can carry out a range of tasks, FunSearch can take on a wider variety of problems than its predecessors, which were trained to play just one type of game. The tool was used to crack a famous unsolved problem in pure mathematics.

AlphaEvolve is the next generation of FunSearch. Instead of coming up with short snippets of code to solve a specific problem, as FunSearch did, it can produce programs that are hundreds of lines long. This makes it applicable to a much wider variety of problems.    

In theory, AlphaEvolve could be applied to any problem that can be described in code and that has solutions that can be evaluated by a computer. “Algorithms run the world around us, so the impact of that is huge,” says Matej Balog, a researcher at Google DeepMind who leads the algorithm discovery team.

Survival of the fittest

Here’s how it works: AlphaEvolve can be prompted like any LLM. Give it a description of the problem and any extra hints you want, such as previous solutions, and AlphaEvolve will get Gemini 2.0 Flash (the smallest, fastest version of Google DeepMind’s flagship LLM) to generate multiple blocks of code to solve the problem.

It then takes these candidate solutions, runs them to see how accurate or efficient they are, and scores them according to a range of relevant metrics. Does this code produce the correct result? Does it run faster than previous solutions? And so on.

AlphaEvolve then takes the best of the current batch of solutions and asks Gemini to improve them. Sometimes AlphaEvolve will throw a previous solution back into the mix to prevent Gemini from hitting a dead end.

When it gets stuck, AlphaEvolve can also call on Gemini 2.0 Pro, the most powerful of Google DeepMind’s LLMs. The idea is to generate many solutions with the faster Flash but add solutions from the slower Pro when needed.

These rounds of generation, scoring, and regeneration continue until Gemini fails to come up with anything better than what it already has.
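To make that loop concrete, here is a minimal Python sketch of a generate-score-regenerate search of the kind described above. The callables `llm_generate` and `run_and_score` are illustrative stand-ins, not AlphaEvolve’s actual API, and the demo at the bottom replaces real LLM calls with random guesses.

```python
import random

def evolve(task_prompt, llm_generate, run_and_score,
           rounds=10, population=20, keep=5):
    """Toy generate-score-regenerate loop in the spirit of the process
    described above. `llm_generate` stands in for a call to a fast LLM
    (the article describes Gemini Flash doing bulk generation, with a
    stronger model as fallback); `run_and_score` executes a candidate
    and returns a numeric fitness. Both are illustrative stand-ins."""
    best = []                                   # (score, candidate) pairs
    prompt = task_prompt
    for _ in range(rounds):
        candidates = llm_generate(prompt, n=population)
        scored = sorted(((run_and_score(c), c) for c in candidates),
                        reverse=True)[:keep]
        # Occasionally throw an earlier solution back into the mix so
        # the generator doesn't converge on a dead end.
        if best and random.random() < 0.25:
            scored.append(random.choice(best))
        merged = sorted(set(best + scored), reverse=True)[:keep]
        if best and merged[0][0] <= best[0][0]:
            break                               # no improvement: stop
        best = merged
        # Feed the current best candidates back into the next prompt.
        prompt = f"{task_prompt}\nImprove on:\n" + "\n".join(c for _, c in best)
    return best[0]

# Demo with dummy stand-ins: candidates are strings encoding numbers,
# and "fitness" is closeness to 42. A real run would generate code.
def dummy_generate(prompt, n):
    return [str(random.uniform(0, 100)) for _ in range(n)]

def dummy_score(candidate):
    return -abs(float(candidate) - 42)

print(evolve("find 42", dummy_generate, dummy_score))
```

The design mirrors the description above: keep the best candidates from each round, occasionally reinject an older solution, and stop once a round fails to improve on the incumbent.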

Number games

The team tested AlphaEvolve on a range of different problems. For example, they looked at matrix multiplication again to see how a general-purpose tool like AlphaEvolve compared to the specialized AlphaTensor. Matrices are grids of numbers. Matrix multiplication is a basic computation that underpins many applications, from AI to computer graphics, yet nobody knows the fastest way to do it. “It’s kind of unbelievable that it’s still an open question,” says Balog.
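For context on what “faster” means here: the race is over how many scalar multiplications an algorithm needs. The textbook method sketched below, a baseline rather than any record-setting algorithm, uses n³ of them for two n-by-n matrices (64 in the 4-by-4 case); Strassen-style methods save time by trading some of those multiplications for cheaper additions.

```python
def matmul(A, B):
    """Textbook matrix multiplication. For two n x n matrices this
    performs n**3 scalar multiplications (64 for 4x4); the records set
    by AlphaTensor and AlphaEvolve are about needing fewer of them."""
    n, inner, p = len(A), len(B), len(B[0])
    assert all(len(row) == inner for row in A), "dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]   # one scalar multiply
    return C

# 4x4 example: the loops above perform 4*4*4 = 64 scalar multiplications.
A = [[i + j for j in range(4)] for i in range(4)]
B = [[i * j for j in range(4)] for i in range(4)]
print(matmul(A, B))
```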

The team gave AlphaEvolve a description of the problem and an example of a standard algorithm for solving it. The tool not only produced new algorithms that could calculate 14 different sizes of matrix faster than any existing approach, it also improved on AlphaTensor’s record-beating result for multiplying two four-by-four matrices.

AlphaEvolve scored 16,000 candidates suggested by Gemini to find the winning solution, but that’s still more efficient than AlphaTensor, says Balog. AlphaTensor’s solution also only worked when a matrix was filled with 0s and 1s. AlphaEvolve solves the problem with other numbers too.

“The result on matrix multiplication is very impressive,” says Moosbauer. “This new algorithm has the potential to speed up computations in practice.”

Manuel Kauers, a mathematician at Johannes Kepler University in Linz, Austria, agrees: “The improvement for matrices is likely to have practical relevance.”

By coincidence, Kauers and a colleague have just used a different computational technique to find some of the speedups AlphaEvolve came up with. The pair posted a paper online reporting their results last week.

“It is great to see that we are moving forward with the understanding of matrix multiplication,” says Kauers. “Every technique that helps is a welcome contribution to this effort.”

Real-world problems

Matrix multiplication was just one breakthrough. In total, Google DeepMind tested AlphaEvolve on more than 50 different types of well-known math puzzles, including problems in Fourier analysis (the math behind data compression, essential to applications such as video streaming), the minimum overlap problem (an open problem in number theory proposed by mathematician Paul Erdős in 1955), and kissing numbers (a problem introduced by Isaac Newton that has applications in materials science, chemistry, and cryptography). AlphaEvolve matched the best existing solutions in 75% of cases and found better solutions in 20% of cases.  

Google DeepMind then applied AlphaEvolve to a handful of real-world problems. As well as coming up with a more efficient algorithm for managing computational resources across data centers, the tool found a way to reduce the power consumption of Google’s specialized tensor processing unit chips.

AlphaEvolve even found a way to speed up the training of Gemini itself, by producing a more efficient algorithm for managing a certain type of computation used in the training process.

Google DeepMind plans to continue exploring potential applications of its tool. One limitation is that AlphaEvolve can’t be used for problems with solutions that need to be scored by a person, such as lab experiments that are subject to interpretation.   

Moosbauer also points out that while AlphaEvolve may produce impressive new results across a wide range of problems, it gives little theoretical insight into how it arrived at those solutions. That’s a drawback when it comes to advancing human understanding.  

Even so, tools like AlphaEvolve are set to change the way researchers work. “I don’t think we are finished,” says Kohli. “There is much further that we can go in terms of how powerful this type of approach is.”

Charts: Retail Media Trends Worldwide

Retail media is the practice of publishing third-party advertisements on branded retail sites, as pioneered by Amazon Ads. To date, U.S. merchants have dominated retail media, capturing over half of global retail media ad spend, according to a new survey and report from the Boston Consulting Group, a global advisory firm.

The BCG survey focused on retail media outside the U.S., querying 100 retailers and advertiser-brands across Europe, Africa, the Middle East, and South America.

Per the survey, brands that advertised on retail sites in those regions achieved a higher return on investment than they did on other marketing channels, mainly due to the better targeting enabled by retailers’ first-party data.

Surveyed retailers cited two primary benefits of publishing ads: expanding revenue and enhancing partnerships with suppliers and brands.

Most retailers offer basic targeting, but brand advertisers seek more advanced capabilities.