Rethink Your Email Sunset Policy

Experienced marketers know an unengaged email list tanks deliverability and clouds campaign data. But what’s often missing is nuance: how to design a sunset policy that works with the brand’s funnel.

This post isn’t a reminder to clean your list with abandon. It’s a framework for doing it smarter, especially when attribution is messy, inbox competition is fierce, and engagement signals are fuzzier than ever.

The alternative, keeping everyone on the list, results in:

  • Declining inbox placement rates,
  • Falling open rates (despite Apple’s Mail Privacy Protection inflating them),
  • Spam traps and increased bounces from low-quality emails.

Yet experienced marketers still hesitate to sunset aggressively — and for good reason. No one wants to cut customers who intend to reorder, especially in categories with long consideration cycles (big-ticket purchases such as furniture or luxury goods) or inconsistent seasonal buying patterns. That’s why a nuanced, tailored, data-informed sunset policy is essential.


A nuanced sunset policy recognizes messy attribution and fuzzy engagement signals.

Beyond Open Rates

Post-iOS 15, opens are inflated and increasingly unreliable. Relying solely on opens — even in Klaviyo, Iterable, or Braze — means you’re likely retaining ghost contacts, especially if you’re not filtering out Apple’s MPP-generated opens.

Instead, look at subscriber intent across channels, not just inbox activity. Build a sunset policy around blended engagement signals:

  • Clicks,
  • Site visits (via UTMs or pixel-based tracking),
  • Purchase behavior,
  • Session time or scroll depth (via Google Analytics 4 or a customer data platform such as Segment or Klaviyo).

Subscribers Not Equal

Not every subscriber behaves the same way with a brand. A strategic sunset policy accounts for where different users sit in the marketing lifecycle. To preserve value, tier your list:

  • High-value, recently active buyers. Suppress after longer windows (e.g., 180 days), but retarget through paid media or SMS.
  • Repeat but lapsed buyers. Shorter sunset window (90–120 days) with re-engagement flows before suppressing.
  • Non-purchasers with high email interaction. Consider retargeting with educational, customer-driven, or product-led content.
  • Low-value, low-engagement contacts. Flag for suppression or archival.

Tiered Sunset Flow

Step 1. Identify drop-off points by cohort. Map when different subscriber types stop engaging, not just how long they’ve been inactive.

Step 2. Create re-engagement flows by segment. Instead of one-size-fits-all messaging, design two or three touch sequences per tier. For example:

  • A buyer-focused winback campaign with product restock alerts or loyalty perks,
  • A lurker-focused re-engagement with social proof or an updated brand story,
  • A light-touch “Do you still want to hear from us?” campaign for low-engagement users.

Step 3. Offer a soft exit. Before removing, offer preference management, frequency options, or SMS sign-up. This preserves the relationship on subscribers’ terms.

Strategic Suppression

Most email service providers can suppress contacts from campaigns without fully removing them, retaining purchase history and behavioral insights. Add those contacts to exclusion audiences in Meta or Google Ads. Suppress them from flows but reactivate for seasonal sends, “last chance” campaigns, and high-impact sales.

Non-email touchpoints are valuable, too. Customers who no longer open emails might still follow the brand on Instagram or engage via SMS. A full-funnel approach means meeting them where they show up.

Automate with Context

Most marketers have automated sunset flows, but “setting and forgetting” could reduce revenue. A quarterly review is essential to ensure automations:

  • Align with the promo and product calendar,
  • Reflect seasonality spikes and new retention goals,
  • Use personalized content blocks or product feeds for relevancy.

A well-crafted sunset policy respects customers’ time and the brand’s long-term health. A mindset of curation rather than deletion leaves room for customers to re-engage when the time is right.

TikTok Launches Footnotes: Its Answer To X’s Community Notes via @sejournal, @MattGSouthern

TikTok is testing a new feature called “Footnotes” that adds extra information to videos on the platform.

The test will start today in the United States.

What Are TikTok Footnotes?

Footnotes let approved TikTok users add information to videos. This feature aims to make content more trustworthy.

TikTok calls this a “community-based approach” where many users help improve information quality.

Who Can Contribute Footnotes?

TikTok has rules for who can add footnotes. US users can apply now, and TikTok will also invite eligible users.

To qualify, you must:

  • Have used TikTok for more than six months
  • Be at least 18 years old
  • Have a clean record with no recent Community Guidelines violations

TikTok will slowly give more people access over the coming months. Approved users can both add footnotes and rate others’ contributions.

How The System Works

TikTok’s announcement explains that Footnotes uses a special ranking system to help people with different viewpoints find common ground.

The system lets contributors add footnotes and vote on how helpful others’ additions are. Only footnotes that enough people find helpful will be shown to everyone.

As more people write and rate footnotes on different topics, the system will get better at displaying the most valuable information.

Similar to X’s Community Notes

TikTok’s Footnotes is similar to Community Notes on X. TikTok mentions that Footnotes is “inspired by the open-sourced system that other platforms use,” which appears to reference Community Notes.

Both systems:

  • Let users add context to posts
  • Use a rating system where people with different viewpoints need to agree
  • Require contributors to meet specific standards
  • Only show notes that many users find helpful
  • Aim to improve content quality through community input rather than just relying on platform moderators

This approach to content checking is becoming popular across social media as platforms look for better ways to handle misinformation without being accused of bias.

Part of a Broader Industry Shift

TikTok’s Footnotes launch comes amid a trend in social media content moderation. Following X’s Community Notes system, Meta announced in March that it would replace its third-party fact-checking program with its own Community Notes feature.

This shift toward community-based moderation represents a major change in how platforms handle potentially misleading content. Rather than relying on centralized fact-checkers, these platforms now empower users to provide context.

The timing of these changes is notable, as they follow President Trump’s return to office and come amid ongoing regulatory scrutiny. For TikTok specifically, this move comes at a sensitive time. The company faces a June 19 deadline for its parent company, ByteDance, to divest its U.S. operations, following a 75-day extension granted by the Trump administration.

Looking Ahead

TikTok says Footnotes is still in testing. The company will gather feedback from users, contributors, and creators to improve the feature. Marketers should watch how this develops before making big strategy changes.


Featured Image: ShutterStockies/Shutterstock

Google’s New Domain Structure: What’s Next For Hreflang? via @sejournal, @MattGSouthern

Google is making a big change to its domain structure. Soon, all country-specific Google domains will redirect to Google.com.

This change ties into earlier hints that Google may rely less on hreflang markup, showing how Google is changing its approach to international search.

Google Consolidates Domain Structure

Google announced plans to phase out country-specific domains like google.fr (France), google.ca (Canada), and google.co.jp (Japan). All these will eventually redirect to Google.com.

Google says in its announcement:

“Over the years, our ability to provide a local experience has improved. In 2017, we began providing the same experience with local results for everyone using Search, whether they were using google.com or their country’s ccTLD.”

Google explained that country-level domains are no longer needed because they can now deliver locally relevant results no matter which domain you use.

Implementation Timeline

Google will roll out this change slowly over the coming months, giving users time to adjust to the new system.

While the URL in your browser will change, Google says search will still work the same way.

Google stressed that the update “won’t affect the way Search works, nor will it change how we handle obligations under national laws.”

Connection to Hreflang Evolution

This domain change seems to be part of a bigger shift in how Google handles international content.

In July, Google’s Gary Illyes hinted that they might rely less on manual hreflang tags and more on automatic language detection.

Illyes stated in a podcast:

“Ultimately, I would want less and less annotations, site annotations, and more automatically learned things.”

SEO professional Montse Cano pointed out this connection in a social media post, noting that “hreflang might actually change too due to improvements in AI.”

While no changes are confirmed, it’s something to watch for in the future.

Implications For SEO Professionals

This change affects search marketers in several ways, especially those working on international SEO:

  • Your analytics will show different referral patterns as traffic moves from country-specific domains to Google.com.
  • Along with less reliance on hreflang, website managers may have fewer technical tasks for international targeting.
  • Google seems more confident in automatically detecting the right content versions for users.
  • Users should get a more uniform experience across regions while still seeing localized results.

Next Steps

While Google is getting better at automatic detection, SEO pros should still:

  • Keep using hreflang tags until Google officially says otherwise
  • Make sure your site clearly signals language and regional targeting
  • Watch your analytics for traffic pattern changes during the transition
  • Think about how this affects SEO strategies that relied on country-specific domains
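As a refresher, the hreflang annotations in question are simple link elements in the page head. The URLs below are illustrative:

```html
<link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
<link rel="alternate" hreflang="fr-fr" href="https://example.com/fr/" />
<!-- Fallback for users who match no listed locale -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Each language version should list all alternates, including itself; missing return links are the most common hreflang mistake.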

Key Takeaway

This change shows Google is more confident in understanding context, language, and user intent without needing explicit signals like separate domains.

Combined with discussions about automatic language detection, Google’s AI seems ready to handle work that once required manual setup.

SEO professionals should see this as part of search technology’s natural evolution. Stay alert to how these changes affect your international search visibility and traffic.


Featured Image: JHVEPhoto/Shutterstock

WordPress 6.8: The highlights you should know about

On April 15, 2025, WordPress 6.8 was released. This release, named Cecil, is aimed at giving you more control over the design of your website. It also comes with features that improve performance and enhance security for users and developers. Let’s dive into a few of the highlights in this latest release.

The Style Book for Classic themes

The Style Book, previously exclusive to block-based themes, is now accessible for Classic themes that have editor styles or a theme.json file. It also comes with a new look and a few new settings to play around with. You can find this feature (or check whether you have access to it) under Appearance > Design > Styles in the left-hand navigation.

The Style Book shows the design elements of your site.

This tool gives you an overview of your site’s design elements, including colors, typography, and block styles. Just scroll down to have a look at all the design elements and whether you’re happy with everything design-wise.

Increasing performance with speculative loading

WordPress 6.8 introduces speculative loading, a feature that leverages the Speculation Rules API to prefetch or prerender pages based on user interactions. When a visitor shows intent to visit a link — hovering over it or starting to click — the browser can fetch that page in the background, so it appears almost instantly once the click lands. This anticipatory loading results in faster page transitions and a smoother browsing experience.

Of course, WordPress won’t speculatively load everything, as doing so would itself hurt performance. The feature balances speed and efficiency in choosing which pages to load, and developers can customize the behavior with a plugin or code if they want it to act differently. It’s also good to know that this feature only works in newer browsers that support the Speculation Rules API.
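Under the hood, this relies on the browser-native Speculation Rules API: a small JSON block that tells the browser which links it may fetch ahead of time. The snippet below illustrates the mechanism; the exact rules WordPress outputs may differ:

```html
<script type="speculationrules">
{
  "prefetch": [
    {
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "href_matches": "/wp-admin/*" } }
        ]
      },
      "eagerness": "conservative"
    }
  ]
}
</script>
```

With conservative eagerness, the browser waits for a strong signal such as pointer-down before fetching; moderate would start on hover.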

This feature is part of a longer list of enhancements focused on performance and speed, for both editing and browsing. Without going into too much detail, this update comes with improvements to the block editor, query caching, and snappier interactions thanks to the Interactivity API.

WordPress 6.8 brings a few new enhancements to the design and editing experience. We’ll go into a few of them, such as the improved global styles panel, the option to turn image blocks into featured images, and new density settings in table layouts.

Global styles panel

The Global Styles Panel in the full site editor has had an update, making it easier than ever to give your website a cohesive and polished look. Now, users can tweak typography, colors, and layout settings for their entire site — all from one convenient place. Whether you’re adjusting heading sizes, setting your brand colors, or fine-tuning spacing, this central hub helps you manage your site’s design without having to dive into custom CSS or theme files.

Set any image as the featured image

One of the most welcome little updates in WordPress 6.8 is the ability to turn any image block directly into a featured image with a single click. No more uploading the same image twice — once in the content and once for the featured image. If you’ve already added a perfect image to your post, you can now designate it as the featured image straight from the block editor, which simplifies your workflow and saves time.

Simply select any image in your post and click ‘Set as featured image’

Density options for your tables

WordPress 6.8 introduces new density settings for table-based layouts, particularly useful when you’re managing data-heavy content like tables in the admin or Data Views. With these new controls, you can choose how compact or spacious you want your table rows and cells to appear, depending on your personal preference or the type of content you’re handling. Whether you prefer a minimal, airy look or a dense, info-packed table, WordPress now gives you the flexibility to adjust it to suit your needs.

Under-the-hood improvements

Beyond the user-facing features, WordPress 6.8 includes a lot of enhancements under the hood. Let’s check out a few highlights.

Higher security with bcrypt

Although less visible, this feature is one to get excited about as well, as it brings a significant boost to security. WordPress is moving away from MD5-based password hashing and transitioning to bcrypt. This change improves overall password security and requires no action from you or your users. It’s applied automatically and makes passwords much harder to crack.

Efficient block type registration

The new release introduces a new function that makes registering multiple block types much easier for developers. Instead of writing separate code for each block, plugin and theme creators can now register a whole collection of blocks in one go. This not only reduces repetitive code but also helps keep projects organized and easier to maintain, making life simpler for developers and creating fewer opportunities for bugs.
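A minimal sketch of that one-call registration using the new wp_register_block_types_from_metadata_collection() function; the directory and manifest paths are assumptions based on a typical @wordpress/scripts build, so adjust them to your own setup:

```php
<?php
// Sketch: registering a whole collection of blocks in one call (WordPress 6.8+).
add_action( 'init', function () {
	wp_register_block_types_from_metadata_collection(
		__DIR__ . '/build',                    // directory containing one folder per block
		__DIR__ . '/build/blocks-manifest.php' // manifest generated at build time
	);
} );
```

Previously, each block needed its own register_block_type() call, so this mainly saves boilerplate in plugins that ship many blocks.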

Internationalization improvements

With WordPress powering sites across the globe, internationalization is always a priority. This release ensures that error messages generated by PHPMailer (WordPress’s email handling system) are properly localized, so users see helpful and understandable messages in their language. Additionally, any plugin update notifications sent via email will now respect the site administrator’s selected language, making routine updates more user-friendly for non-English speakers.

Accessibility enhancements

WordPress 6.8 continues the platform’s commitment to making websites more inclusive by improving accessibility across the board. One of the key changes in this release is the removal of redundant title attributes, which were often unnecessary and could cause confusion for users relying on screen readers.

This small but meaningful update helps streamline the browsing experience for people using assistive technology, ensuring that WordPress websites are more welcoming and accessible to all visitors. Beyond that, there are more than 100 accessibility fixes and enhancements in this release.

Update your site to WordPress 6.8

Those are a few of the features that you’ll find in this latest release. There’s a lot of focus on enhancing the overall user experience and refining existing functionalities. Whether you’re a developer or website manager, this update is meant to make your life a bit easier and your website more secure and performant. Read the official WordPress release post for more information on this newest version of WordPress. Or, if you’re curious to see these features in action, update to the newest version and try them out yourself!

Read more: Learn how to use WordPress without touching any code »

How Do You Resolve A WordPress Plugin Conflict?

One of the scariest things that can happen to a WordPress user is installing a plugin and, upon activation, getting a white screen of death.

This screen, where your beautifully crafted website once lived, is now plain white or produces a line or two of unformatted text.

A plugin conflict occurs when two installed plugins each work fine on their own, but running them together breaks the site.

It usually happens when plugins run in tandem, and they both come packaged with the same or similar library functions. There’s a naming conflict, and PHP produces an error.

This article will discuss how to fix them.

Plugin Conflicts Are Becoming More Rare

First of all, plugin conflicts — where somebody installs a plugin that clashes with another plugin — are becoming rarer.

In the last few years, WordPress has introduced protections that mean if an error does occur, rather than fully activating the plugin, it will automatically backtrack, display an error, and leave the plugin deactivated.

For the majority of users, this is what they see.

The plugin “Broken Plugin” cannot be activated within WordPress because it generates a fatal error. (Image from author, March 2025)

At this point, an investigation should be done in a staging environment with this plugin, but unless it’s a unique plugin, an alternative may need to be found that doesn’t conflict with your setup.

Plugin conflicts tend to occur when you install a Must Use (MU) plugin via a service like FTP, an update to one or more plugins takes place, or you have a custom plugin activated and changes are pushed to the server.

I’ll walk you through my process of resolving plugin conflicts.

Do You Have Access To WordPress?

To begin, the first question you should ask is if you have access to WordPress.

If you do, conventional wisdom dictates that the course of action to take is to deactivate all plugins and switch to a default theme, to try and troubleshoot where the problem occurs.

If you are doing this on a live site, this isn’t ideal, as the site may still have a lot of functionality.

A different approach is to install the Health Check and Troubleshooting plugin. Its troubleshooting mode runs the site with a default theme and no plugins active for your session only, while visitors continue to see the normal site.

Simply activate each plugin in turn until you identify the one that is causing the issue and then leave that one deactivated.

Make sure the theme is the last thing activated, as custom themes could use functionality in plugins that could bring down the site.

If You Don’t Have Access To WordPress

If you don’t have access to WordPress, then there could be a bit of a process in order to diagnose and fix the problem.

This approach is what I take as best as I can when diagnosing plugin conflicts. It can be done in any order, depending on your knowledge and what you have access to.

Have Access To The Administrative Email? You May Get An Email

If you have access to the administrator email with WordPress (set in Settings > General), you may receive an email.

This will allow you to put the site into Recovery Mode. From there, you can log in, and it will identify the plugin that has the issue, and you can deactivate it.

WordPress recovery mode highlighting Hello Dolly as where the conflict lies. (Image from author, March 2025)

Check The Hosts’ Log File

The first step would be to check the host’s log file.

Depending on the host, the log may be easily visible in your host’s dashboard or within cPanel. If you only have a file browser, logs tend to sit outside the publicly available /public_html/ or /www/ directories — usually one level up, in a folder called /logs/.

Should you find the file (it should have a name like error_log), download it and search for any Fatal Error within the document, maybe towards the bottom.

Within the error message, you should have a couple of file locations that will dictate where the file issues occur.

No Logs? You May Need To Activate Them

If you have FTP/SFTP access to the site but no logs, you may need to activate them.

Within the root directory of WordPress, add the following lines to the wp-config.php file.

define( 'WP_DEBUG', true );          // Turn on debugging.
define( 'WP_DEBUG_LOG', true );      // Write errors to wp-content/debug.log.
define( 'WP_DEBUG_DISPLAY', false ); // Keep errors off public pages.
@ini_set( 'display_errors', 0 );     // Belt and braces for the line above.

This will create a file debug.log within the wp-content/ folder. From there, you can see the errors in this file.

Security Tip: The debug.log will be publicly visible, so once you’ve fixed the issue, remove these lines from wp-config.php and delete the debug.log file.

Resolving These Plugin Conflicts

Whichever method you use, your logs should produce lines like this:

Fatal error: Cannot redeclare hello_dolly_get_lyric() (previously declared in /wp-content/plugins/broken-plugin/index.php:17) in /wp-content/plugins/hello-dolly/hello.php on line 46

Each element means:

  • “Fatal Error” determines the error. A fatal error in PHP means that the site immediately stops working. You can get other errors or warnings.
  • “Cannot redeclare hello_dolly_get_lyric()” is the fatal error. In this case, there are two PHP functions with the same name (hello_dolly_get_lyric()). This is the basis of the plugin conflict.
  • “/wp-content/plugins/hello-dolly/hello.php on line 46” tells you where the error occurs. While the line number matters only if you’re debugging the code yourself, it does tell you which plugin the error occurs in – in this case, “hello-dolly”.

The next step is to manually change the plugin.

In your chosen FTP programme or file manager, go to the plugin folder within WordPress – /wp-content/plugins/ in this case – and rename the plugin folder (in this case, change “hello-dolly” to “broken-hello-dolly”). This will deactivate the plugin when you next log into WordPress.

The plugin “Hello Dolly” has been deactivated due to it not existing. In reality, it’s been renamed so WordPress can’t find it. (Image from author, March 2025)

It’s best not to delete the plugin if you can avoid it; renaming the folder forces deactivation while preserving the files for investigation.

From there, you can investigate the two plugins and identify why the same function is declared twice.

For Developers: Good Practice Can Prevent Plugin Conflicts

If you are a developer building WordPress sites, following good practice can prevent plugin conflicts.

Here are some tips for preventing your plugin or WordPress sites from having plugin conflicts with other plugins out there:

  1. If you are not using PHP Namespaces, then I’d recommend naming your classes or functions with a prefix. Something like plugin_name_function_name can prevent similar functionality from having the same function name. Try to make them unique (so don’t use wp_ as a prefix).
  2. Wrap your functions in function_exists checks to prevent them from being redeclared if a function with the same name already exists.
  3. If you are importing functionality, using class_exists can check to see if the class has already been loaded.
  4. Load your functionality late. Plugins load in alphabetical order of their folder names, so a folder name later in the alphabet loads after most others. Not every developer follows the same approach as you!
  5. If you are building on one site, make sure your server setup is the same (or as close to being the same) as the live environment.
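Tips 1 through 3 can be sketched in a few lines; the “acmeplugin” prefix and the function and class names here are hypothetical:

```php
<?php
// Tip 1: a unique prefix (not wp_) keeps names from colliding.
// Tip 2: function_exists prevents a fatal "cannot redeclare" error.
if ( ! function_exists( 'acmeplugin_get_settings' ) ) {
	function acmeplugin_get_settings() {
		return get_option( 'acmeplugin_settings', array() );
	}
}

// Tip 3: guard bundled libraries the same way with class_exists.
if ( ! class_exists( 'AcmePlugin_Http_Client' ) ) {
	class AcmePlugin_Http_Client {
		// ... bundled functionality ...
	}
}
```

PHP namespaces achieve the same isolation more cleanly, but prefixes plus guards remain the most widely compatible approach.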

You’re never going to completely guarantee your plugin or theme doesn’t conflict with the millions of plugins that exist in the WordPress space.

However, by following the above steps, you can minimize conflict as much as possible, and simple changes to your code writing can prevent a world of debugging hell later.



Featured Image: Whiskerz/Shutterstock

PPC Unlocked: Fast Wins For Smarter Ad Strategies via @sejournal, @CallRail

Click fraud in lead generation can drain your marketing budget and corrupt your data, leading to misguided strategic decisions.

While automated detection tools serve as a first line of defense, relying solely on them is not enough.

This guide presents practical, hands-on approaches to identify and combat click fraud in your lead generation campaigns in Google Ads.

Understanding Modern Click Fraud Patterns

Click fraud isn’t just about basic bots anymore. The people running these scams have gotten much smarter, and they’re using tricks that your regular fraud tools might miss.

It’s a big business, and if you think you are not affected, you are wrong.

Here’s what’s really happening to your ad budget: Real people in click farms are getting paid to click on ads all day long.

They use VPNs to hide where they’re really coming from, making them look just like normal customers. And they’re good at it.

The bots have gotten better, too. They now copy exactly how real people use websites: They move the mouse naturally, fill out forms like humans, and even make typing mistakes on purpose.

When these smart bots team up with real people, they become really hard to spot.

The scammers are also messing with your tracking in clever ways. They can trick your website into thinking they’re new visitors every time.

They can make their phones seem like they’re in your target city when they’re actually on the other side of the world.

If you’re counting on basic click fraud protection to catch all this, you’re in trouble. These aren’t the obvious fake clicks from years ago – they’re smart attacks that need smart solutions.

That being said, the good old competitor clicking your ad 50 times still exists and isn’t going away anytime soon.

Luckily, it is safe to say that Google can spot and detect those obvious fraud clicks in many cases.

Google’s Click Fraud Dilemma: Walking The Revenue Tightrope

Google faces a tricky problem with click fraud.

Every fake click puts money in Google’s pocket right now, but too many fake clicks will drive advertisers away. This creates a conflict of interest.

Google needs to show that it’s fighting click fraud to keep advertisers happy and the ad platform and all of its networks healthy, but it can’t afford to catch every single fake click.

If it did, its ad revenue would drop sharply in the short term because it also runs the risk of blocking valid clicks if it goes in too aggressively.

But if it doesn’t catch enough fraud, advertisers will lose trust and move their budgets elsewhere.

Some advertisers say this explains why Google’s fraud detection isn’t as strict as it could be.

They argue Google has found a sweet spot where it catches just enough fraud to keep advertisers from leaving, but not so much that it seriously hurts its revenue.

This balance gets even harder as fraudsters get better at making fake clicks look real.

This is also why many advertisers don’t fully trust Google’s own click fraud detection and prefer to use third-party tools.

These tools tend to flag more clicks as fraudulent than Google does, suggesting Google might be more conservative in what it considers fraud.

The Over-Blocking Problem Of Third-Party Tools

Third-party click fraud tools have their own business problem: They need to prove they’re worth paying for every month.

This creates pressure to show lots of “blocked fraud” to justify their subscription costs. The result? Many of these tools are too aggressive and often block real customers by mistake.

Other tactics are to show lots of suspicious traffic or activities.

Think about it. If a click fraud tool shows zero fraud for a few weeks, clients might think they don’t need it anymore and cancel.

So, these tools tend to set very strict detection rules, marking anything slightly suspicious as fraud. This means they might block a real person who:

  • Uses a VPN for privacy.
  • Shares an IP address with others (like in an office).
  • Browses with privacy tools.
  • Has unusual but legitimate clicking patterns.

This over-blocking can actually hurt businesses more than the fraud these tools claim to stop.

It’s like a store security guard who’s so worried about shoplifters that they start turning away honest customers, too.

Why Click Fraud Tools Are Still Valuable

Despite these issues, click fraud tools are still really useful as a first line of defense.

They’re like security cameras for your ad traffic. They might not catch everything perfectly, but they give you a good picture of what’s happening.

Here’s what makes them worth using:

  • They quickly show you patterns in your traffic that humans would take weeks to spot.
  • Even if they’re sometimes wrong about individual clicks, they’re good at finding unusual patterns, like lots of clicks from the same place or at odd hours.
  • They give you data you can use to make your own decisions – you don’t have to block everything they flag as suspicious.

The key is to use these tools as a starting point, not a final answer. Look at their reports, but think about them carefully.

Are the “suspicious” clicks actually hurting your business? Do blocked users fit your customer profile?

Use the tool’s data along with your own knowledge about your customers to make smarter decisions about what’s really fraud and what’s not.

In terms of functionality, most third-party click fraud detection tools are somewhat similar to each other.

A simple Google search for “click fraud tool” shows the market leaders; the main differences between them are usually pricing and contract duration.

Tackling Click Fraud With Custom Solutions

After getting a first impression with third-party click fraud tools, it’s best to build a collection of custom solutions to tackle your individual scenario.

Every business has a different situation with different software environments, website systems, and monitoring.

For custom solutions, it’s recommended to work closely with your IT department or developer, as many solutions require some modification on your website.

The Basics: Selecting An Identifier

There are a handful of solutions to cover 80% of the basics.

The first way to do something against click fraud is to find a unique identifier to work with.

In most cases, this will be the IP address since you can exclude certain IP addresses from Google Ads, thus making it a good identifier to work with.

Other identifiers, like browser fingerprints, are also possible options. Once an identifier is chosen, make sure your server logs or internal tracking record users and their identifiers for further analysis.
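As a minimal sketch of that analysis step, the snippet below counts clicks per IP address from a server log and surfaces IPs that exceed a threshold, so they can be reviewed and, if warranted, added to your Google Ads IP exclusion list. The threshold value and log format are illustrative assumptions, not a standard.

```python
from collections import Counter

def suspicious_ips(click_log, threshold=10):
    """Return IPs whose click count exceeds a threshold.

    click_log: iterable of (ip, timestamp) tuples from your server logs
    or internal tracking. The threshold of 10 clicks is an arbitrary
    example value -- tune it to your own traffic volume.
    """
    counts = Counter(ip for ip, _ in click_log)
    return {ip: n for ip, n in counts.items() if n > threshold}

# 25 clicks from one IP, a single click from another.
log = [("203.0.113.5", t) for t in range(25)] + [("198.51.100.7", 0)]
flagged = suspicious_ips(log)
```

The output is a review list, not an auto-block list: combine it with your own knowledge of customer behavior before excluding anything.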

The Basics: CAPTCHAs

Another basic tool, which is often forgotten, is CAPTCHAs.

CAPTCHAs can detect bots or fraudulent traffic. Google offers a free and simple-to-implement solution with reCAPTCHA.

CAPTCHAs might seem like an easy answer to bot traffic, but they come with serious downsides.

Every time you add a CAPTCHA, you’re basically telling your real users, “Prove you’re human before I trust you.” This creates friction, and friction kills conversions.

Most websites see a drop in form completions after adding CAPTCHAs if they are set too aggressively.

Smart CAPTCHAs can limit the frequency, but not all CAPTCHA providers allow that option, so choose your provider or solution wisely.

The Basics: Honeypot Fields

Honeypot fields are hidden form fields that act as traps for bots.

The trick is simple but effective: Add extra fields to your form that real people can’t see, but bots will try to fill out.

Only bots reading the raw HTML will find these fields; regular users won’t even know they’re there. The key is to make these fields look real to bots.

Use names that bots love to fill in, like “url,” “website,” or “email2.” If any of these hidden fields get filled out, you know it’s probably a bot. Real people won’t see them, so they can’t fill them out.

Pro tip: Don’t just add “honeypot” or “trap” to your field names. Bots are getting smarter and often check for obvious trap names. Instead, use names that look like regular form fields.
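A server-side check for this pattern can be a few lines. The sketch below assumes the hidden fields are named “url”, “website”, and “email2” as suggested above; adapt the names to your own form.

```python
HONEYPOT_FIELDS = ("url", "website", "email2")  # innocuous-looking trap names

def is_probable_bot(form_data: dict) -> bool:
    """Flag a submission as bot-like if any hidden trap field was filled.

    form_data maps field names to submitted values. Real users never see
    the hidden fields (hide them with CSS rather than type="hidden",
    since many bots skip type="hidden"), so any non-empty value is
    suspicious.
    """
    return any(form_data.get(name, "").strip() for name in HONEYPOT_FIELDS)

# A human leaves the traps empty; a naive bot fills everything.
human = {"name": "Dana", "email": "dana@example.com", "url": ""}
bot = {"name": "x", "email": "x@x.com", "url": "http://spam.example"}
```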

Advanced Validation Methods

Smart Form Validation: Email

Most businesses only check if an email address has an “@” symbol and looks roughly correct.

This basic approach leaves the door wide open for fake leads and spam submissions.

Modern email validation needs to go much deeper. Start by examining the email’s basic structure, but don’t stop there.

Look at the domain itself: Is it real? How long has it existed? Does it have proper mail server records?

These checks can happen in real time while your user fills out the form. It should be noted, however, that smart form validation usually requires some sort of third-party provider to check the details, which means you need to rely on external services.

A common mistake is blocking all free email providers like Gmail or Yahoo. This might seem logical, but it’s a costly error.

Many legitimate business users rely on Gmail for their day-to-day operations, especially small business owners.

Instead of blanket blocks, look for unusual patterns within these email addresses. A Gmail address with a normal name pattern is probably fine; one with a random string of characters should raise red flags.

For enterprise B2B sales, you expect bigger companies to sign up with their company domain email address, so blocking free mail providers might work.
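To make the pattern-based approach concrete, here is a small sketch that returns soft warning flags rather than hard rejections. The regex and thresholds are illustrative; real deployments usually add MX-record and domain-age checks through an external validation service, as noted above.

```python
import re

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def email_flags(address: str) -> list:
    """Return a list of soft warning flags for an email address.

    These are review signals, not rejections. Thresholds below
    (half digits, 12+ vowel-free characters) are example values.
    """
    if not EMAIL_RE.match(address):
        return ["malformed"]
    flags = []
    local = address.split("@")[0]
    if sum(c.isdigit() for c in local) > len(local) / 2:
        flags.append("mostly-digits")  # e.g. x84729384712@...
    if len(local) >= 12 and not re.search(r"[aeiou]", local, re.I):
        flags.append("random-looking")  # long string with no vowels
    return flags
```

A normal address like `jane.doe@gmail.com` comes back clean, while a random-string Gmail address accumulates flags for human review.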

Smart Form Validation: Phone

Phone validation goes far beyond just counting digits. Think about the logic of location first.

When someone enters a phone number with a New York area code but lists their address in California, that’s worth investigating.

But be careful with this approach – people move, they travel, and they keep their old numbers. The key is to use these mismatches as flags for further verification, not as automatic rejections.
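A minimal version of that mismatch check, using a deliberately tiny area-code table (a real system would use a full NANP mapping), could look like this:

```python
# Tiny illustrative mapping -- a real system would use a full NANP table.
AREA_CODE_STATE = {"212": "NY", "213": "CA", "312": "IL"}

def phone_mismatch_flag(phone: str, state: str):
    """Return a review flag when area code and stated state disagree.

    A mismatch is a signal for further verification, never an automatic
    rejection -- people move and keep their old numbers.
    """
    digits = "".join(c for c in phone if c.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # strip US country code
    if len(digits) != 10:
        return "invalid-length"
    expected = AREA_CODE_STATE.get(digits[:3])
    if expected and expected != state.upper():
        return "area-code-mismatch"
    return None
```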

The Art Of Smart Data Formatting

Data formatting isn’t just about making your database look neat. It’s about catching mistakes and fraud while making the form easy to complete for legitimate users.

Name fields are a perfect example.

While you want to catch obviously fake names like “asdfgh” or repeated characters, remember that real names come in an incredible variety of formats and styles.

Some cultures use single names, others have very long names, and some include characters that might look unusual to your system.
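That balance — catching keyboard mashes without rejecting legitimate name variety — can be sketched as a permissive flagging function. The two patterns below are illustrative, not exhaustive.

```python
import re

def name_flags(name: str) -> list:
    """Soft flags for obviously fake names; permissive by design.

    Single names, very long names, and non-ASCII characters are all
    valid, so only heavy repetition and home-row mashes are flagged.
    """
    stripped = name.strip()
    if not stripped:
        return ["empty"]
    flags = []
    if re.search(r"(.)\1{3,}", stripped):  # e.g. "aaaaa"
        flags.append("repeated-characters")
    if re.fullmatch(r"[asdfghjkl]{5,}", stripped.lower()):  # home-row mash
        flags.append("keyboard-mash")
    return flags
```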

Modify Your Google Ads Campaign Settings To Tackle Click Fraud

Google offers multiple campaign options to increase reach; on the downside, most of those options come with an increase in click fraud activity.

App Placements

Performance Max campaigns can place your ads across Google’s entire network, including in apps. While this broad reach can be powerful, it also opens the door to potential fraud.

The challenge is that you have limited control over where your ads appear, and some of these automatic placements can lead to wasted ad spend.

Kids’ games are often a major source of accidental and fraudulent clicks. These apps frequently have buttons placed near ad spaces, and children playing games can accidentally tap ads while trying to play.

What looks like engagement in your analytics is actually just frustrated kids trying to hit the “play” button.

Another issue comes from apps that use deceptive design to generate clicks. They might place clickable elements right where ads appear, or design their interface so users naturally tap where ads are located.

This isn’t always intentional fraud. Sometimes, it’s just poor app design, but it costs you money either way.

Unlike traditional campaigns, where you can easily exclude specific placements, Performance Max’s automation makes this more challenging.

The system optimizes for conversions, but it might not recognize that clicks from certain apps never lead to quality leads. By the time you spot the pattern, you’ve already spent money on these low-quality clicks.

For almost all advertisers, excluding app placements is a must-have; very few advertisers benefit from app placements at all.

Partner And Display Network

Lead generation businesses face a unique challenge with Performance Max campaigns that ecommerce stores can largely avoid.

While ecommerce businesses can simply run Shopping-only campaigns and tap into high-intent product searches, lead gen businesses are stuck dealing with the full Performance Max package, including the often problematic Display Network.

The Display Network opens up your ads to a mass of websites, many of which might not be the quality placements you’d want for your business.

While Google tries to filter out bad actors, the display network still includes sites that exist primarily to generate ad clicks.

These sites might look legitimate at first glance, but they’re designed to encourage accidental clicks or attract bot traffic.

Some are specifically designed for server bot farms, as they run on expired domains and have no content besides ads.

Lead generation businesses don’t have this luxury. Their Performance Max campaigns typically run on all networks except shopping. This creates several problems:

  • The quality of clicks varies wildly. Someone might click your medical practice ad while trying to close a pop-up on a gaming site. They’ll never become a patient, but you still pay for that click.
  • Display placements can appear on sites that don’t match your brand’s professional image. Imagine a law firm’s ad showing up on a site full of questionable content – not ideal for building trust with potential clients.
  • Bot traffic and click farms often target display ads because they’re easier to interact with than shopping ads. You might see high click-through rates that look great until you realize none of these clicks are turning into leads.

All of these are reasons to question PMax campaigns for lead gen, but that’s a decision every marketer has to make for themselves.

Advanced Google Ads Settings To Tackle Click Fraud

If the basics are implemented but there is still a higher amount of suspected click fraud, advanced solutions need to be implemented.

Besides excluding suspicious IP addresses, you can also build negative audiences.

The idea is to have a second success page for your lead generation form and only forward potential bots or fake sign-ups to this page.

To achieve that, your website needs to evaluate potential bots live during the sign-up process.

You can then set up a dedicated “bot pixel” on the second success page in order to send data about this audience to Google.

Once enough data is retrieved, you can exclude this audience from your campaigns. This approach is a little trickier to implement but is worth the effort as those audience signals are of high quality if enough data is supplied.

Make sure the “bot pixel” fires on the special success page, and only there; otherwise, you risk mixing your audiences, which would render the system useless.
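The routing half of this setup reduces to one server-side decision per submission. A minimal sketch, assuming a honeypot field named “website” and hypothetical page paths:

```python
def success_page_for(form_data: dict) -> str:
    """Route suspected bots to a separate success page.

    The dedicated "bot pixel" fires only on /thank-you-b, building a
    negative audience in Google Ads without the bot noticing anything.
    The honeypot field name and page paths are illustrative.
    """
    looks_like_bot = bool(form_data.get("website", "").strip())  # honeypot
    return "/thank-you-b" if looks_like_bot else "/thank-you"
```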

Filtering Fake Leads With Conditional Triggers

Another tracking-based strategy is to set up condition-based conversion tracking. Combined with hidden form fields, you can modify the conversion trigger not to send data if the hidden field was filled.

In that scenario, you filter bots out of conversion tracking, sending only real conversions back to your campaign and, therefore, training the Google algorithm and bidding strategy only on real data.

You eliminate a majority of fake leads and traffic with this setup.
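Conceptually, the conditional trigger is a gate on which tracking events fire. The sketch below models that decision server-side; the event names mirror common GA4 conventions, and the honeypot field name is an assumption.

```python
def conversion_events(form_data: dict) -> list:
    """Return the tracking events to fire for one form submission.

    The conversion event is suppressed when the hidden honeypot field
    ("website", illustrative name) was filled, so bidding algorithms
    train only on real signups. A plain page-view event still fires
    either way, keeping analytics intact.
    """
    events = ["page_view"]
    if not form_data.get("website", "").strip():
        events.append("generate_lead")  # the conversion Google Ads imports
    return events
```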

Making Sign-Ups More Challenging To Improve Lead Quality

Another advanced strategy is to make the sign-up process a lot harder.

Tests have shown that much longer forms are not finished by bots, because bots are usually trained on simpler, shorter forms that require only email, name, phone, and address.

Asking specific questions and working with dropdowns can dramatically increase lead quality. It should be noted, however, that longer forms can also hurt the valid signup rate, a risk worth taking if you have to deal with bot and fraud traffic.

A fitting case was a car dealer I worked with. They had a form where people could offer their cars for sale and receive a price estimate.

The short form had almost three times the signup rate of its predecessor, but it turned out later that many of those signups were spam or very low-quality leads.

A shorter form leads to more spam because it’s easy to sign up. After switching to a longer form, the signups dropped, but quality increased drastically.

The new form was almost 20 fields long, and potential clients had to upload pictures of their car.

It took a few minutes to finish the signup, but those who did were committed to doing business and open to discussing the sale, which also made it easier for the salespeople to follow up properly.

A Hard Truth About Lead Fraud

Let’s be honest: You can’t completely stop lead fraud. It’s like shoplifting in retail – you can reduce it, you can catch it faster, but you can’t eliminate it entirely.

The fraudsters are always getting smarter, and for every security measure we create, they’ll eventually find a way around it.

But here’s the good news: You don’t need perfect protection. What you need is a balanced approach that catches most of the bad leads while letting good ones through easily.

Think of it like running a store: You want security, but not so much that it scares away real customers.

The key is to layer your defenses. Use click fraud tools as your first line of defense, add smart form validation as your second, and keep a human eye on patterns as your final check.

Will some fake leads still get through? Yes. But if you can stop 90% of the fraud, you’re winning the battle.

Remember: Perfect is the enemy of good. Focus on making fraud expensive and difficult for the bad actors, while keeping your lead generation process smooth and simple for real prospects. That’s how you win in the long run.

Featured Image: BestForBest/Shutterstock

How AI Is Changing The Way We Measure Success In Digital Advertising via @sejournal, @LisaRocksSEM

Success in PPC has historically been measured using performance indicators like click-through rates (CTR), cost per acquisition (CPA), and return on ad spend (ROAS).

However, with the rise of AI, new technologies are reshaping how we approach and measure performance and success, alongside a major change in customer behavior.

From Click-Based Metrics To Predictive Performance Modeling

PPC has relied heavily on click-based metrics; it’s even in the name, “pay-per-click.” This has always provided immediate but narrow insights.

AI changes this by integrating predictive performance modeling: Machine learning algorithms analyze historical data to predict which campaigns will drive conversions.

Predictive modeling in AI-powered marketing is revolutionizing how advertisers allocate their precious resources by identifying high-converting audience segments before campaigns even launch.

Instead of reacting to past performance, AI-driven predictive analytics helps businesses forecast:

  • Future customer behaviors based on past interactions.
  • The likelihood of conversion for different audience segments.
  • The optimal bid adjustments for different times of day or geographies.

This allows a more in-depth and detailed budget allocation and performance optimizations beyond simple impressions or clicks.
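At its simplest, forecasting conversion likelihood per segment starts with historical frequency estimates. The sketch below is a bare-bones stand-in for what an ML model does with far more features and smoothing; the segment names and data are invented for illustration.

```python
from collections import defaultdict

def segment_conversion_rates(history):
    """Estimate per-segment conversion probability from historical data.

    history: iterable of (segment, converted) pairs. This is a plain
    frequency estimate; production predictive models add many more
    features, smoothing, and time decay.
    """
    clicks = defaultdict(int)
    conversions = defaultdict(int)
    for segment, converted in history:
        clicks[segment] += 1
        conversions[segment] += int(converted)
    return {s: conversions[s] / clicks[s] for s in clicks}

history = [("returning", True), ("returning", True), ("returning", False),
           ("new", False), ("new", False), ("new", True), ("new", False)]
rates = segment_conversion_rates(history)
```

Segments with higher estimated rates are the ones a predictive bidder would prioritize before the campaign even launches.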

Quality Score 2.0 – AI-Driven Relevance Metrics

Google’s long-standing Quality Score is based on expected CTR, ad relevance, and landing page experience.

With the current tech advancements, it no longer provides a complete picture of user intent or engagement. AI provides a more advanced approach that some in the industry refer to as “Quality Score 2.0.”

AI-powered relevance metrics now analyze:

  • Deeper contextual signals beyond keywords, including sentiment analysis and user intent.
  • Engagement and behavior patterns to determine the likelihood of conversions.
  • Automated creative testing and adaptive learning to refine ad messaging in real-time.

Google’s AI-driven Performance Max campaigns now use advanced machine learning techniques to optimize ad relevance, suggesting that the traditional Quality Score may soon be obsolete.

Automated Bidding & AI-Driven KPIs

Automated “smart” bidding has changed the way advertisers manage campaign performance.

Manual bid strategies have always required constant monitoring; now, AI dynamically adjusts bids based on real-time data signals such as:

  • User device, location, and browsing behavior.
  • Time-of-day performance variations.
  • Probability of conversion based on previous engagement.

Automated bidding strategies like Maximize Conversion Value and Target ROAS are outperforming manual CPC approaches, increasing account efficiencies.

AI-driven key performance indicators (KPIs) are helping advertisers shift to goal-based strategies tied directly to revenue.

Campaigns hitting the revenue goals can be easily scaled, which is a big step in maximizing PPC investments.

The Rise Of New AI-Generated PPC Metrics

Beyond improving existing measurement models, AI is introducing entirely new ways to assess digital ad performance.

These AI-driven PPC metrics offer more holistic insights into customer engagement and lifetime value.

AI Attribution Modeling

Attribution has always been a challenge in PPC.

Traditional models like last-click and linear attribution often miss the full picture by giving all the credit to a single touchpoint, making it hard to understand how different interactions actually contribute to conversions.

AI-powered attribution models solve this by using machine learning to distribute credit across multiple interactions, including clicks, video views, offline actions, and cross-device conversions.

This approach captures the complete customer journey rather than just focusing on the last click interaction.

AI attribution models typically include:

  • Data-Driven Attribution: Measures the true impact of each interaction, whether it’s a click, view, or engagement.
  • Dynamic Adaptation: Continuously adjusts as new data comes in to keep the model accurate and up-to-date.
  • Cross-Channel Integration: Combines online and offline data to reduce gaps and blind spots in tracking.
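To illustrate what “distributing credit across multiple interactions” means mechanically, here is a simple position-based split (40/20/40). This is a fixed-weight stand-in for the learned weights a data-driven model would produce, and it assumes unique touchpoint names in a journey.

```python
def distribute_credit(touchpoints, first_last_weight=0.4):
    """Split conversion credit across a journey's touchpoints.

    Position-based model: first and last touch each get
    first_last_weight; the remainder is shared evenly by the middle
    touches. With one or two touchpoints, credit splits evenly.
    Assumes touchpoint names within a journey are unique.
    """
    n = len(touchpoints)
    if n == 0:
        return {}
    if n <= 2:
        return {t: 1 / n for t in touchpoints}
    middle = (1 - 2 * first_last_weight) / (n - 2)
    credit = {t: middle for t in touchpoints[1:-1]}
    credit[touchpoints[0]] = first_last_weight
    credit[touchpoints[-1]] = first_last_weight
    return credit

journey = ["display_view", "email_click", "search_click"]
credit = distribute_credit(journey)
```

A data-driven model replaces the fixed 40/20/40 weights with weights learned from converting versus non-converting paths, but the output shape — a credit share per touchpoint summing to 1 — is the same.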

AI attribution modeling is more than a measurement tool; it provides a comprehensive view of how interactions contribute to long-term value.

It is also a strategic approach that connects both Engagement Value Score (EVS) and Customer Lifetime Value (CLV).

EVS measures the depth and quality of interactions rather than just clicks, while CLV focuses on the long-term worth of a customer.

By combining AI attribution with EVS and CLV, marketers gain a deeper understanding of the customer journey and can optimize campaigns for both meaningful engagement and sustainable growth rather than just short-term conversions.

Let’s dive into these two more specific metrics.

Engagement Value Score (EVS)

A growing alternative to CTR, the EVS measures how meaningful an interaction is rather than just whether a click occurred.

Unlike CTR, which assumes all clicks are valuable, EVS pinpoints users who genuinely engage with your content.

To measure EVS, combine different engagement signals into one score. Start with your key engagement actions, like:

  • Time Spent on Site: How long users stay on your pages.
  • Multi-Touch Interactions: Video views, chatbot conversations, or content consumption.
  • Behavioral Indicators of Intent: Scroll depth or repeat visits.

After assigning points to each action, create a custom metric in Google Analytics 4 that calculates the total EVS from these individual actions, then import it into your Google Ads account.

Implementation Steps:

  1. Create Events: Set up custom engagement events with conditions that match high EVS behaviors.
  2. Mark as Key Events: After creating these custom events, mark them as key events in GA4.
  3. Import to Google Ads: Once the custom conversion is set up in GA4, import it into Google Ads.
  4. Align Bidding Strategies: Use automated bidding strategies that optimize for conversions rather than just clicks.

By using this EVS methodology, Google Ads can optimize campaigns not just for clicks, but for meaningful interactions that drive high value.
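The scoring step itself is just a weighted sum over the engagement events a session fired. The event names and point weights below are illustrative assumptions — tune them to your own funnel.

```python
# Illustrative point weights -- tune to your own funnel.
EVS_WEIGHTS = {
    "time_on_site_60s": 10,
    "video_view": 15,
    "chat_interaction": 20,
    "scroll_depth_75": 5,
    "repeat_visit": 10,
}

def engagement_value_score(events):
    """Sum weighted engagement events into a single EVS number.

    events: list of event names fired for one session (e.g. GA4
    custom events). Unknown events contribute zero.
    """
    return sum(EVS_WEIGHTS.get(e, 0) for e in events)

session = ["time_on_site_60s", "video_view", "scroll_depth_75"]
```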

Customer Lifetime Value (CLV)

Rather than optimizing for one-time conversions, Customer Lifetime Value (CLV) focuses on the long-term value of a customer.

AI-driven CLV measurement looks beyond quick wins and digs into the total worth of a customer over their entire relationship with your brand.

It’s similar to using EVS in that it focuses on meaningful interactions rather than quick clicks.

To measure CLV accurately, AI models analyze key data points like:

  • Past Purchase Behavior: Predicts future spend based on historical transactions.
  • Churn Risk and Retention Probability: Identifies how likely a customer is to leave or stay.
  • Cross-Channel Interactions: Tracks engagement across social media, email, and customer support.

Just like EVS, CLV requires combining multiple signals into one clear metric. After gathering these data points, create a custom metric in GA4 that calculates the total CLV from individual interactions.

Implementation Steps:

  1. Create Events: Set up custom engagement events for key behaviors (like repeat purchases or social interactions).
  2. Mark as Key Events: Once created, mark these events as key events in GA4.
  3. Import to Google Ads: Bring the custom conversion data into Google Ads to guide bidding strategies.
  4. Optimize with AI: Use automated bidding and predictive analytics to prioritize high-CLV customers.
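For intuition, a back-of-the-envelope CLV can be computed from those same signals with a simple geometric retention model. This closed-form sketch is a deliberate simplification of what an AI-driven estimate would produce; the inputs are illustrative.

```python
def estimated_clv(avg_order_value, orders_per_year, retention_rate,
                  discount_rate=0.1):
    """Rough CLV with a geometric retention model.

    Sums expected yearly value over an infinite horizon, discounting
    future years: CLV = AOV * freq * r / (1 + d - r). A simplified
    stand-in for model-driven estimates that also weigh churn risk
    and cross-channel engagement.
    """
    yearly_value = avg_order_value * orders_per_year
    return yearly_value * retention_rate / (1 + discount_rate - retention_rate)
```

As expected, raising the retention rate raises the estimate, which is why churn-risk signals matter so much to the model.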

AI-powered CLV analysis is gaining traction as businesses move toward sustainable, long-term growth strategies rather than chasing short-term conversions.

Take a scientific deep dive into this topic, including risk-adjusted CLV, here.

Challenges And Considerations

While AI-driven measurement is transforming PPC advertising, it is not without its challenges. Decision-makers need to consider the following:

Data Privacy & Compliance

AI’s ability to collect and analyze large amounts of user data raises concerns about privacy and compliance.

General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) are data privacy laws that regulate how businesses collect, store, and use personal information from consumers.

With these regulations, advertisers must balance data-driven insights with ethical and legal responsibilities. AI-powered models should prioritize anonymized data and ensure transparency in data usage.

AI Accuracy

Machine learning models rely on historical data, which can sometimes lead to inaccuracies.

If an AI model is trained on outdated or incomplete data, it can result in poor decision-making. Human oversight is needed to reduce these risks.

Algorithmic Bias

AI models can sometimes reflect biases present in the data they are trained on.

If left unchecked, this can lead to skewed campaign recommendations that favor certain demographics over others. Businesses must check that AI tools are built with fairness and inclusivity in mind.

Interpreting AI-Generated Insights

AI provides highly complex data outputs, which can be difficult for marketing teams to interpret.

Businesses should invest in AI literacy training for decision-makers and teams to ensure that insights are actionable and interpreted correctly.

Key Takeaways

AI is fundamentally changing how we measure success in PPC and digital advertising.

From predictive performance modeling to AI-driven attribution, CLV, and EVS, these advanced metrics are helping marketers move beyond basic clicks and short-term conversions.

Instead, they focus on deeper insights that drive sustainable growth and long-term value.

However, leveraging AI responsibly requires navigating challenges like data privacy, accuracy, algorithmic bias, and the complexity of interpreting insights.

Marketers must prioritize transparency, fairness, and continuous learning to make the most of these powerful tools.

The future of digital advertising lies in bringing together data insights and thoughtful strategy and sustaining that success over time.

Featured Image: metamorworks/Shutterstock

AI Overviews: We Reverse-Engineered Them So You Don’t Have To [+ What You Need To Do Next]

This post was sponsored by DAC. The opinions expressed in this article are the sponsor’s own. Authors: Dan Lauer & Michael Goodman

Is the classic funnel model (TOFU-MOFU-BOFU) still relevant in an AI-driven SERP?

What kinds of queries trigger Google’s AI Overviews?

How can you structure content so that AI pulls your site into the response?

Do you really need to change your SEO strategy?

For years, SEO teams followed a familiar SEO playbook:

  1. Optimize upper-funnel content to capture awareness,
  2. mid-funnel content to drive consideration,
  3. lower-funnel content to convert.

One page, one keyword, one intent.

But with the rise of ChatGPT, Perplexity, Copilot, Gemini, and now Google’s AI Mode, that linear model is increasingly outdated.

So, how do you move forward and keep your visibility high in modern search engine results pages (SERPs)?

We’ve reverse-engineered AI Overviews, so you don’t have to. Let’s dive in.

What We’ve Discovered Through Reverse Engineering Google’s AI Overviews (AIO)

From what we’re seeing across client industries and in how AI-driven results behave, the traditional funnel model – the idea of users moving cleanly from awareness to consideration to conversion – feels increasingly out of step with how people actually search.

How Today’s Search Users Actually Search

Today’s users jump between channels, devices, and questions.

They skim, abandon, revisit, and decide faster than ever.

AI Overviews don’t follow a tidy funnel because most people don’t either.

They surface multiple types of information at once, not because it’s smarter SEO, but because it’s closer to how real decisions get made.

AIOs & AI Mode Aren’t Just Answering Queries – They’re Expanding Them

Traditionally, SEO strategy followed a structured framework. Take a travel-related topic, for example:

  • Informational (Upper-Funnel) – “How to plan a cruise?”
  • Commercial (Mid-Funnel) – “Best cruise lines for families”
  • Transactional (Lower-Funnel) – “Find Best Alaska Cruise Deals”

However, AI Overviews don’t stick to that structure.

Instead, they blend multiple layers of intent into a single, comprehensive response.

How AI Overviews Answer & Expand Search Queries

Let’s stay with the travel theme. A search for “Mediterranean cruise” might return an AI Overview that includes:

  • Best Time to go (Informational).
  • Booking Your Cruise (Commercial).
  • Cruise Lines (Navigational).

AI Mode Example for ‘Mediterranean Cruise’

What’s Happening Here?

In this case, Google isn’t just answering the query.

It anticipates what the user will want to know next, acting more like a digital concierge than a traditional search engine.

The AI Overview Test & Parameters

  • Source: Semrush & Google
  • Tested Data: 200 cruise-related informational queries

We started noticing this behavior showing up more often, so we wanted to see how common it actually is.

To get a clearer picture, we pulled 200 cruise-related informational queries from Semrush and ran them through our custom-built AI SERP scraper. The goal was to see how often these queries triggered AI Overviews, and what kind of intent those Overviews covered.

The patterns were hard to miss:

  • 88% of those queries triggered an AI Overview.
  • More than half (52%) didn’t just answer the initial question; they mixed in other layers of intent, like brand suggestions, booking options, or comparisons, right alongside the basic information someone might’ve been looking for.

Using a different query related to Mediterranean Cruises, the AIO response acts as a travel agent, guiding the user on topics like:

  • How to fly,
  • Destinations with region,
  • Cruise prices,
  • Cruise lines that sail to that destination.

While it’s an informational, non-brand search query, the AIO response is lower-funnel as well.

Again, fewer than half of the queries received a response that matched only their original intent.

Here are some examples of queries that were identified as Informational and provided only the top-of-funnel response without driving the user further down the funnel.

The Verdict

Even when someone asks a simple, top-of-funnel question, AI is already steering them toward what to do next, whether that’s comparing prices, picking a provider, or booking a trip.

What Does This Mean for SEO Strategies Moving Forward?

If AI Overviews and AI Mode are blending intent types, content and SEO strategies need to catch up:

  1. It’s no longer enough to rank for high-volume informational keywords. If your content doesn’t address multiple layers of intent, AI will fill the gaps with someone else’s content.
  2. SEO teams need to analyze how AI handles their most important queries. What related questions is it pulling in? Are those answers coming from your site or your competitors?
  3. Think beyond keyword volume. Long-tail queries may have lower search traffic, but they often align better with AI-cited content. Structure your pages with clear headings, bullets, and concise, helpful language—that’s what AI models prefer to surface.

The Future of SEO in an AI World: Hybrid Intent Optimization

The fundamentals of technical and on-page SEO still matter. But if your content is still built around single keywords and single intent types, you’re likely to lose visibility as AI continues to reshape the SERP.

The brands that adapt to this shift by creating content that mirrors the blended, fast-moving behavior of actual users are the ones that will continue to own key moments across the funnel, even as the funnel itself evolves.

As AI transforms search behavior, it’s crucial to adapt your SEO strategies accordingly. At DAC, we specialize in aligning your content with the latest search trends to enhance visibility and engagement. Reach out to us today to future-proof your strategy with our award-winning TotalSERP approach and stay ahead in the evolving digital landscape.

Optimize Your SEO For AI Search, Now: https://www.dacgroup.com/

Image Credits

Featured Image: Image by DAC. Used with permission.

In-Post Image: Images by DAC. Used with permission.

Meet the researchers testing the “Armageddon” approach to asteroid defense

One day, in the near or far future, an asteroid about the length of a football stadium will find itself on a collision course with Earth. If we are lucky, it will land in the middle of the vast ocean, creating a good-size but innocuous tsunami, or in an uninhabited patch of desert. But if it has a city in its crosshairs, one of the worst natural disasters in modern times will unfold. As the asteroid steams through the atmosphere, it will begin to fragment—but the bulk of it will likely make it to the ground in just a few seconds, instantly turning anything solid into a fluid and excavating a huge impact crater in a heartbeat. A colossal blast wave, akin to one unleashed by a large nuclear weapon, will explode from the impact site in every direction. Homes dozens of miles away will fold like cardboard. Millions of people could die.

Fortunately for all 8 billion of us, planetary defense—the science of preventing asteroid impacts—is a highly active field of research. Astronomers are watching the skies, constantly on the hunt for new near-Earth objects that might pose a threat. And others are actively working on developing ways to prevent a collision should we find an asteroid that seems likely to hit us.

We already know that at least one method works: ramming the rock with an uncrewed spacecraft to push it away from Earth. In September 2022, NASA’s Double Asteroid Redirection Test, or DART, showed it could be done when a semiautonomous spacecraft the size of a small car, with solar panel wings, was smashed into an (innocuous) asteroid named Dimorphos at 14,000 miles per hour, successfully changing its orbit around a larger asteroid named Didymos. 

But there are circumstances in which giving an asteroid a physical shove might not be enough to protect the planet. If that’s the case, we could need another method, one that is notoriously difficult to test in real life: a nuclear explosion. 

Scientists have used computer simulations to explore this potential method of planetary defense. But in an ideal world, researchers would ground their models with cold, hard, practical data. Therein lies a challenge. Sending a nuclear weapon into space would violate international laws and risk inflaming political tensions. What’s more, it could do damage to Earth: A rocket malfunction could send radioactive debris into the atmosphere. 

Over the last few years, however, scientists have started to devise some creative ways around this experimental limitation. The effort began in 2023, with a team of scientists led by Nathan Moore, a physicist and chemical engineer at the Sandia National Laboratories in Albuquerque, New Mexico. Sandia is a semi-secretive site that serves as the engineering arm of America’s nuclear weapons program. And within that complex lies the Z Pulsed Power Facility, or Z machine, a cylindrical metallic labyrinth of warning signs and wiring. It’s capable of summoning enough energy to melt diamond. 

The researchers reckoned they could use the Z machine to re-create the x-ray blast of a nuclear weapon—the radiation that would be used to knock back an asteroid—on a very small and safe scale.

It took a while to sort out the details. But by July 2023, Moore and his team were ready. They waited anxiously inside a control room, monitoring the thrumming contraption from afar. Inside the machine’s heart were two small pieces of rock, stand-ins for asteroids, and at the press of a button, a maelstrom of x-rays would thunder toward them. If they were knocked back by those x-rays, it would prove something that, until now, was purely theoretical: You can deflect an asteroid from Earth using a nuke.

This experiment “had never been done before,” says Moore. But if it succeeded, its data would contribute to the safety of everyone on the planet. Would it work?

Monoliths and rubble piles

Asteroid impacts are a natural disaster like any other. You shouldn’t lose sleep over the prospect, but if we get unlucky, an errant space rock may rudely ring Earth’s doorbell. “The probability of an asteroid striking Earth during my lifetime is very small. But what if one did? What would we do about it?” says Moore. “I think that’s worth being curious about.”

Forget about the gigantic asteroids you know from Hollywood blockbusters. Space rocks over two-thirds of a mile (about one kilometer) in diameter—those capable of imperiling civilization—are certainly out there, and some hew close to Earth’s own orbit. But because these asteroids are so elephantine, astronomers have found almost all of them already, and none pose an impact threat. 

Rather, it’s asteroids a size range down—those upwards of 460 feet (140 meters) long—that are of paramount concern. About 25,000 of those are thought to exist close to our planet, and just under half have been found. The day-to-day odds of an impact are extremely low, but even one of the smaller ones in that size range could do significant damage if it found Earth and hit a populated area—a capacity that has led astronomers to dub such midsize asteroids “city killers.”

If we find a city killer that looks likely to hit Earth, we’ll need a way to stop it. That could be technology to break or “disrupt” the asteroid into fragments that will either miss the planet entirely or harmlessly ignite in the atmosphere. Or it could be something that can deflect the asteroid, pushing it onto a path that will no longer intersect with our blue marble. 

Because disruption could accidentally turn a big asteroid into multiple smaller, but still deadly, shards bound for Earth, it’s often considered to be a strategy of last resort. Deflection is seen as safer and more elegant. One way to achieve it is to deploy a spacecraft known as a kinetic impactor—a battering ram that collides with an asteroid and transfers its momentum to the rocky interloper, nudging it away from Earth. NASA’s DART mission demonstrated that this can work, but there are some important caveats: You need to deflect the asteroid years in advance to make sure it completely misses Earth, and asteroids that we spot too late—or that are too big—can’t be swatted away by just one DART-like mission. Instead, you’d need several kinetic impactors—maybe many of them—to hit one side of the asteroid perfectly each time in order to push it far enough to save our planet. That’s a tall order for orbital mechanics, and not something space agencies may be willing to gamble on. 
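The scale of that "years in advance" requirement is easy to sketch. As a hedged back-of-envelope calculation (the function and numbers below are illustrative, not from the article, and real orbital mechanics typically amplifies the drift further): even a nudge of one centimeter per second, applied a decade before a predicted impact, displaces an asteroid by thousands of kilometers.

```python
# Back-of-envelope: how far a small velocity change shifts an asteroid
# over time. This ignores orbital mechanics (which usually amplifies the
# along-track drift), so treat it as a lower-bound illustration.

SECONDS_PER_YEAR = 3.156e7  # approximate

def drift_km(delta_v_m_s: float, years: float) -> float:
    """Linear along-track drift, in kilometers, from a velocity change."""
    return delta_v_m_s * years * SECONDS_PER_YEAR / 1000.0

# A 1 cm/s nudge, a decade out:
print(round(drift_km(0.01, 10)))  # ~3156 km, about half Earth's radius
```

With only months of warning instead of years, the same nudge moves the rock a few hundred kilometers at most, which is why late detection pushes planners toward more forceful options.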

In that case, the best option might instead be to detonate a nuclear weapon next to the asteroid. This would irradiate one hemisphere of the asteroid in x-rays, which in a few millionths of a second would violently shatter and vaporize the rocky surface. The stream of debris spewing out of that surface and into space would act like a rocket, pushing the asteroid in the opposite direction. “There are scenarios where kinetic impact is insufficient, and we’d have to use a nuclear explosive device,” says Moore.
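The "acts like a rocket" step is plain conservation of momentum: the mass of vaporized rock times its ejection speed equals the asteroid's mass times its velocity change. A minimal sketch, using illustrative numbers chosen here (the ejecta mass, ejecta speed, and asteroid mass are assumptions, not figures from the article):

```python
# Momentum balance for ablation-driven deflection: vaporized surface
# material leaving at speed v_ejecta pushes the asteroid the other way,
# so m_ejecta * v_ejecta = m_asteroid * delta_v.

def ablation_delta_v(m_ejecta_kg: float, v_ejecta_m_s: float,
                     m_asteroid_kg: float) -> float:
    """Asteroid velocity change (m/s) from blowing off surface material."""
    return m_ejecta_kg * v_ejecta_m_s / m_asteroid_kg

# Illustrative: 10 tonnes of rock vaporized at 2 km/s from a roughly
# 140-meter asteroid of about 3.6 billion kg (rock-like density).
dv = ablation_delta_v(1.0e4, 2000.0, 3.6e9)
print(f"{dv * 100:.2f} cm/s")  # ~0.56 cm/s
```

Small as that looks, given the years of lead time discussed above, changes of a fraction of a centimeter per second are the kind of push that compounds into a clean miss.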

IKEA-style diagram of an asteroid trailed by a cloud of particles with an inset of an explosion

MCKIBILLO

This idea isn’t new. Several decades ago, Peter Schultz, a planetary geologist and impacts expert at Brown University, was giving a planetary defense talk at the Lawrence Livermore National Laboratory in California, another American lab focused on nuclear deterrence and nuclear physics research. Afterwards, he recalls, none other than Edward Teller, the father of the hydrogen bomb and a key member of the Manhattan Project, invited him into his office for a chat. “He wanted to do one of these near-Earth-asteroid flybys and wanted to test the nukes,” Schultz says. What, he wondered, would happen if you blasted an asteroid with a nuclear weapon’s x-rays? Could you forestall a spaceborne disaster using weapons of mass destruction?

But Teller’s dream wasn’t fulfilled—and it’s unlikely to become a reality anytime soon. The United Nations’ 1967 Outer Space Treaty states that no nation can deploy or use nuclear weapons off-world (even if it’s not clear how long certain spacefaring nations will continue to adhere to that rule).

Even raising the possibility of using nukes to defend the planet can be tricky. “There’re still many folks that don’t want to talk about it at all … even if that were the only option to prevent an impact,” says Megan Bruck Syal, a physicist and planetary defense researcher at Lawrence Livermore. Nuclear weapons have long been a sensitive subject, and with relations between several nuclear nations currently at a new nadir, anxiety over the subject is understandable. 

But in the US, there are groups of scientists who “recognize that we have a special responsibility as a spacefaring nation and as a nuclear-capable nation to look at this,” Syal says. “It isn’t our preference to use a nuclear explosive, of course. But we are still looking at it, in case it’s needed.”

But how? 

Mostly, researchers have turned to the virtual world, using supercomputers at various US laboratories to simulate the asteroid-agitating physics of a nuclear blast. To put it mildly, “this is very hard,” says Mary Burkey, a physicist and planetary defense researcher at Lawrence Livermore. You cannot simply flick a switch on a computer and get immediate answers. “When a nuke goes off in space, there’s just x-ray light that’s coming out of it. It’s shining on the surface of your asteroid, and you’re tracking those little photons penetrating maybe a tiny little bit into the surface, and then somehow you have to take that micrometer worth of resolution and then propagate it out onto something that might be on the order of hundreds of meters wide, watching that shock wave propagate and then watching fragments spin off into space. That’s four different problems.”

Mimicking the physics of x-ray rock annihilation with as much verisimilitude as possible is difficult work. But recent research using these high-fidelity simulations does suggest that nukes are an effective planetary defense tool for both disruption and deflection. The thing is, though, no two asteroids are alike; each is mechanically and geologically unique, meaning huge uncertainties remain. A more monolithic asteroid might respond in a straightforward way to a nuclear deflection campaign, whereas a rubble pile asteroid—a weakly bound fleet of boulders barely held together by their own gravity—might respond in a chaotic, uncontrollable way. Can you be sure the explosion wouldn’t accidentally shatter the asteroid, turning a cannonball into a hail of bullets still headed for Earth? 

Simulations can go a long way toward answering these questions, but they remain virtual re-creations of reality, with built-in assumptions. “Our models are only as good as the physics that we understand and that we put into them,” says Angela Stickle, a hypervelocity impact physicist at the Johns Hopkins University Applied Physics Laboratory in Maryland. To make sure the simulations are reproducing the correct physics and delivering realistic data, physical experiments are needed to ground them.

Researchers studying kinetic impactors can get that sort of real-world data. Along with DART, they can use specialized cannons—like the Vertical Gun Range at NASA’s Ames Research Center in California—to fire all sorts of projectiles at meteorites. In doing so, they can find out how tough or fragile asteroid shards can be, effectively reproducing a kinetic impact mission on a small scale. 

Battle-testing nuke-based asteroid defense simulations is another matter. Re-creating the physics of these confrontations on a small scale was long considered to be exceedingly difficult. Fortunately, those keen on fighting asteroids are as persistent as they are creative—and several teams, including Moore’s at Sandia, think they have come up with a solution.

X-ray scissors

The prime mission of Sandia, like that of Lawrence Livermore, is to help maintain the nation’s nuclear weapons arsenal. “It’s a national security laboratory,” says Moore. “Planetary defense affects the entire planet,” he adds—making it, by default, a national security issue as well. And that logic, in part, persuaded the powers that be in July 2022 to try a brand-new kind of experiment. Moore took charge of the project in January 2023—and with the shot scheduled for the summer, he had only a few months to come up with the specific plan for the experiment. There was “lots of scribbling on my whiteboard, running computer simulations, and getting data to our engineers to design the test fixture for the several months it would take to get all the parts machined and assembled,” he says.

Although there were previous and ongoing experiments that showered asteroid-like targets with x-rays, Moore and his team were frustrated by one aspect of them. Unlike actual asteroids floating freely in space, the micro-asteroids on Earth were fixed in place. To truly test whether x-rays could deflect asteroids, targets would have to be suspended in a vacuum—and it wasn’t immediately clear how that could be achieved.

Generating the nuke-like x-rays was the easy part, because Sandia had the Z machine, a hulking mass of diodes, pipes, and wires interwoven with an assortment of walkways that circumnavigate a vacuum chamber at its core. When it’s powered up, electrical currents are channeled into capacitors—and, when commanded, blast that energy at a target or substance to create radiation and intense magnetic pressures. 

Flanked by klaxons and flashing lights, it’s an intimidating sight. “It’s the size of a building—about three stories tall,” says Moore. Every firing of the Z machine carries the energy of more than 1,000 lightning bolts, and each shot lasts a few millionths of a second: “You can’t even blink that fast.” The Z machine is named for the axis along which its energetic particles cascade, but the Z could easily stand for “Zeus.”

The Z Pulsed Power Facility, or Z machine, at Sandia National Laboratories in Albuquerque, New Mexico, concentrates electricity into short bursts of intense energy that can be used to create x-rays and gamma rays and compress matter to high densities.
RANDY MONTOYA/SANDIA NATIONAL LABORATORY

The original purpose of the Z machine, whose first form was built half a century ago, was nuclear fusion research. But over time, it’s been tinkered with, upgraded, and used for all kinds of science. “The Z machine has been used to compress matter to the same densities [you’d find at] the centers of planets. And we can do experiments like that to better understand how planets form,” Moore says, as an example. And the machine’s preternatural energies could easily be used to generate x-rays—in this case, by electrifying and collapsing a cloud of argon gas.

“The idea of studying asteroid deflection is completely different for us,” says Moore. And the machine “fires just once a day,” he adds, “so all the experiments are planned more than a year in advance.” In other words, the researchers had to be near certain their one experiment would work, or they would be in for a long wait to try again—if they were permitted a second attempt. 

For some time, they could not figure out how to suspend their micro-asteroids. But eventually, they found a solution: Two incredibly thin bits of aluminum foil would hold their targets in place within the Z machine’s vacuum chamber. When the x-ray blast hit them and the targets, the pieces of foil would be instantly vaporized, briefly leaving the targets suspended in the chamber and allowing them to be pushed back as if they were in space. “It’s like you wave your magic wand and it’s gone,” Moore says of the foil. He dubbed this technique “x-ray scissors.” 

In July 2023, after considerable planning, the team was ready. Within the Z machine’s vacuum chamber were two fingernail-size targets—a bit of quartz and some fused silica, both frequently found on real asteroids. Nearby, a pocket of argon gas swirled away. Satisfied that the gigantic gizmo was ready, everyone left and went to stand in the control room. For a moment, it was deathly quiet.

Stand by.

Fire.

It was over before their ears could even register a metallic bang. A tempest of electricity shocked the argon gas cloud, causing it to implode; as it did, it transformed into a plasma and x-rays screamed out of it, racing toward the two targets in the chamber. The foil vanished, the surfaces of both targets erupted outward as supersonic sprays of debris, and the targets flew backward, away from the x-rays, at 160 miles per hour.

Moore wasn’t there. “I was in Spain when the experiment was run, because I was celebrating my anniversary with my wife, and there was no way I was going to miss that,” he says. But just after the Z machine was fired, one of his colleagues sent him a very concise text: IT WORKED.

“We knew right away it was a huge success,” says Moore. The implications were immediately clear. The experimental setup was complex, but they were trying to achieve something extremely fundamental: a real-world demonstration that a nuclear blast could make an object in space move. 

Patrick King, a physicist at the Johns Hopkins University Applied Physics Laboratory, was impressed. Previously, pushing back objects using x-ray vaporization had been extremely difficult to demonstrate in the lab. “They were able to get a direct measurement of that momentum transfer,” he says, calling the x-ray scissors an “elegant” technique.

Sandia’s work took many in the community by surprise. “The Z machine experiment was a bit of a newcomer for the planetary defense field,” says Burkey. But she notes that we can’t overinterpret the results. It isn’t clear, from the deflection of the very small and rudimentary asteroid-like targets, how much a genuine nuclear explosion would deflect an actual asteroid. As ever, more work is needed. 

King leads a team that is also working on this question. His NASA-funded project involves the Omega Laser Facility, a complex based at the University of Rochester in upstate New York. Omega can generate x-rays by firing powerful lasers at a target within a specialized chamber. Upon being irradiated, the target generates an x-ray flash, similar to the one produced during a nuclear explosion in space, which can then be used to bombard various objects—in this case, some Earth rocks acting as asteroid mimics, and (crucially) some bona fide meteoritic material too. 

King’s Omega experiments have tried to answer a basic question: “How much material actually gets removed from the surface?” says King. The amount of material that flies off the pseudo-asteroids, and the vigor with which it’s removed, will differ from target to target. The hope is that these results—which the team is still considering—will hint at how different types of asteroids will react to being nuked. Although experiments with Omega cannot produce the kickback seen in the Z machine, King’s team has used a more realistic and diverse series of targets and blasted them with x-rays hundreds of times. That, in turn, should clue us in to how effectively, or not, actual asteroids would be deflected by a nuclear explosion.

“I wouldn’t say one [experiment] has definitive advantages over the other,” says King. “Like many things in science, each approach can yield insight along different ‘axes,’ if you will, and no experimental setup gives you the whole picture.”

IKEA-style diagram of the Earth with a chat bubble inset of two figures high-fiving.

MCKIBILLO

Experiments like Moore’s and King’s may sound technologically baroque—a bit like lightning-fast Rube Goldberg machines overseen by wizards. But they are likely the first in a long line of increasingly sophisticated tests. “We’ve just scratched the surface of what we can do,” Moore says. As with King’s experiments, Moore hopes to place a variety of materials in the Z machine, including targets that can stand in for the wetter, more fragile carbon-rich asteroids that astronomers commonly see in near-Earth space. “If we could get our hands on real asteroid material, we’d do it,” he says. And it’s expected that all this experimental data will be fed back into those nuke-versus-asteroid computer simulations, helping to verify the virtual results.

Although these experiments are perfectly safe, planetary defenders remain fully cognizant of the taboo around merely discussing the use of nukes for any reason—even if that reason is potentially saving the world. “We’re genuinely looking at this from the standpoint of ‘This is a technology that could save lives,’” King says.

Inevitably, Earth will be imperiled by a dangerous asteroid. And the hope is that when that day arrives, it can be dealt with using something other than a nuke. But comfort should be taken from the fact that scientists are researching this scenario, just in case it’s our only protection against the firmament. “We are your taxpayer dollars at work,” says Burkey. 

There’s still some way to go before they can be near certain that this asteroid-stopping technique will succeed. Their progress, though, belongs to everyone. “Ultimately,” says Moore, “we all win if we solve this problem.” 

Robin George Andrews is an award-winning science journalist based in London and the author, most recently, of How to Kill an Asteroid: The Real Science of Planetary Defense.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year. 

But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky. 

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  

Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10–30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis.

The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of opioid samples they collected. 

This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine.

What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.
B. HAYES/NIST

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs.

Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: heroin, fentanyl, xylazine, and everything else.

“It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week.

The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without.

In summer 2024, RaDAR analysis detected something really unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide. In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure, and mixed with xylazine or medetomidine, can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it continues to be found in fentanyl samples at a sustained level since it was initially detected. “This was an example of a compound we would have never thought to look for,” says Sisco. 

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine. 

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions.

“It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.” 

Adam Bluestein writes about innovation in business, science, and technology.