New Ecommerce Tools: October 1, 2025

Our handpicked list this week of new products and services for ecommerce merchants includes updates on sustainable packaging, website builders, agent-based commerce, social commerce, pay-later purchases, B2B marketplaces, B2C CRMs, and more.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

ChatGPT launches Instant Checkout. OpenAI announced that ChatGPT Plus, Pro, and Free users can now buy directly from Etsy sellers in chat, with Shopify integration coming soon. Instant Checkout supports single-item purchases, with multi-item carts to follow. OpenAI is also exposing the tech that powers Instant Checkout (i.e., the Agentic Commerce Protocol), so that more merchants and developers can build integrations. Co-developed with Stripe, the Agentic Commerce Protocol enables AI agents, people, and businesses to collaborate on purchases.

ChatGPT’s Instant Checkout

Klaviyo launches B2C CRM with AI agents for marketing and customer service. Klaviyo has unveiled Marketing Agent and Customer Agent for its B2C customer relationship management tool, built on its data platform and unifying data, marketing, service, and analytics. Marketing Agent autonomously plans and launches campaigns, creates on-brand content, personalizes each send, and learns without prompting. Customer Agent delivers personalized assistance to consumers by resolving common questions, recommending products, and escalating when necessary to a human agent with full context.

Ordoro and Cartology partner to empower ecommerce merchants on Amazon. Ordoro, an ecommerce logistics and multichannel fulfillment platform, has collaborated with Cartology, an Amazon agency specializing in brand strategy and account growth. Together, the companies aim to provide Amazon sellers with a streamlined path to scale, combining front-end optimization with backend fulfillment. The partnership combines Cartology’s expertise in marketplace strategy with Ordoro’s capabilities in inventory management and shipping automation, enabling sellers to grow smarter and more sustainably.

PayPal Honey turns queries into shopping. PayPal Honey is turning AI-centric shopping queries into buying experiences, transforming its coupon finder into a value-focused commerce intelligence platform. Honey’s extension will display products that its chatbot recommends, with real-time pricing, merchant options, and exclusive offers. Honey draws from the company’s SKU-level product catalog, spanning hundreds of millions of items, to match AI-recommended products. These features will be available by Black Friday at no cost to Honey users, per PayPal.

Mercado Libre expands into B2B with launch of Libre Negocios. Mercado Libre, the leading ecommerce platform in Latin America, is entering the B2B market with the launch of Mercado Libre Negocios (loosely, “Free Market Businesses”). Negocios aims to streamline wholesale buying and selling across the region. Businesses can create accounts linked to a tax ID number to unlock purchasing options and exclusive benefits. Buyers gain access to competitive pricing, volume discounts, fast deliveries, approved invoices, and flexible financing through Mercado Pago, the payment platform.

Mercado Libre Negocios

Pinterest introduces Top of Search ads. Pinterest is previewing in beta Top of Search ads, which appear in the top 10 slots of search results and Related Pins. Per Pinterest, Top of Search ads ensure products show where shopping journeys typically begin. Also, a brand-exclusive ad unit will highlight advertiser catalogs.

Zoovu launches enhanced AI shopping assistant. Zoovu, an AI search and product discovery platform, has announced enhanced capabilities and increased availability of Zoe, its generative-AI shopping assistant. According to Zoovu, Zoe is composable, modular, and natively integrated into an entire shopping journey, providing a conversational AI expert on product detail pages, search results, category pages, and self-service portals. Zoe syndicates the same AI expert to retail partner sites and in-store kiosks.

PayPal to sell BNPL loans to Blue Owl Capital. PayPal and Blue Owl Capital, a lender and investor, have announced a two-year agreement wherein Blue Owl will purchase approximately $7 billion of PayPal’s buy-now, pay-later receivables. PayPal will remain responsible for all customer-facing activities, including underwriting and servicing, associated with its U.S. “Pay in 4” product.

Recommendation engine Novi launches Shopping Optimizer. Novi, an AI-powered recommendation engine, has unveiled Shopping Optimizer, designed to increase sales by helping merchants surface products backed by verified info from AI shopping assistants. Novi says its proprietary optimization models leverage trust signals, such as badges, labels, certifications, and endorsements, as proof points of credibility.

Novi

NameSilo acquires CommerceHQ, a drag-and-drop website builder. NameSilo, a domain registrar, has announced the acquisition and integration of CommerceHQ, a website builder with ecommerce capabilities. The acquisition brings a drag-and-drop builder into NameSilo’s ecosystem, enabling customers to build and launch ecommerce-enabled websites alongside their domain registrations. As part of the integration, NameSilo introduced bundled offerings that combine domain, website, and email. Customers can choose self-serve or concierge-style services.

PAC Worldwide releases sustainable packaging innovations. PAC Worldwide, a provider of protective packaging and part of ProAmpac, has introduced (i) Post Consumer Recycled Bubble Roll and (ii) fixed release liner for its wicketed paper mailer, helping packers streamline workflows, reduce ergonomic strain, and maintain safer packing areas. According to PAC Worldwide, the new offerings underscore its commitment to delivering more sustainable, high-performance solutions for today’s ecommerce and retail markets.

Acadia, a product data app, integrates with BigCommerce. Distributor Data Solutions has integrated its “Acadia by DDS” app with BigCommerce. The app expedites product data management for B2B distributors and manufacturers by enabling real-time synchronization of product content from DDS Acadia accounts directly to BigCommerce stores. According to DDS, Acadia instantly updates product details, leverages advanced AI to categorize new products, enhances searchability, and supports multiple storefronts.

AI-native ecommerce platform Genstore secures $10 million in seed funding. Genstore, an AI-native store builder, has completed a $10 million seed funding round, led by Weimob with participation from Lighthouse Founders’ Fund. Genstore provides online merchants with a suite of intelligent assistant agents, automating operations such as product listing, copy, customer service, and marketing. Merchants can launch a store through AI conversation, requiring no coding or design skills, per Genstore, which states the funding will accelerate product development and market expansion.

Genstore

How People Really Use LLMs And What That Means For Publishers

OpenAI released the largest study to date on how users really use ChatGPT. I have painstakingly synthesized the insights you and I should pay heed to, so you don’t have to wade through the plethora of useful and pointless ones.

TL;DR

  1. LLMs are not replacing search. But they are shifting how people access and consume information.
  2. Asking (49%) and Doing (40%) queries dominate the market and are increasing in quality.
  3. The top three use cases – Practical Guidance, Seeking Information, and Writing – account for 80% of all conversations.
  4. Publishers need to build linkable assets that add value. It can’t just be about chasing traffic from articles anymore.
Image Credit: Harry Clarkson-Bennett

Chatbot 101

A chatbot is a statistical model trained to generate a text response given some text input. Monkey see, monkey do.

The more advanced chatbots have a training process with two or more stages. In stage one (less colloquially known as “pre-training”), LLMs are trained to predict the next word in a string.

Like the world’s best accountant, they are both predictable and boring. And that’s not necessarily a bad thing. I want my chefs fat, my pilots sober, and my money men so boring they’re next in line to lead the Green Party.

Stage two is where things get a little fancier. In the “post-training” phase, models are trained to generate “quality” responses to a prompt. They are fine-tuned with strategies like reinforcement learning, in which graded responses steer the model toward better outputs.

Over time, the LLMs, like Pavlov’s dog, are either rewarded or reprimanded based on the quality of their responses.

In phase one, the model “understands” (definitely in inverted commas) a latent representation of the world. In phase two, its knowledge is honed to generate the best quality response.

With the temperature at zero, LLMs will generate exactly the same response time after time, as long as the training process is the same.

Higher temperatures (closer to 1.0) increase randomness and creativity. Lower temperatures (closer to 0) make the model(s) far more predictable and precise.

So, your use case determines the appropriate temperature settings. Coding should be set closer to zero. Creative, more content-focused tasks should be closer to one.
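
To make that concrete, here’s a minimal sketch of setting the temperature per task, assuming the OpenAI Python SDK (the model name and prompts are illustrative; most LLM APIs expose an equivalent parameter):

```python
# Minimal sketch: the same client, two temperatures for two kinds of tasks.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Closer to 0: predictable and precise - suited to coding tasks.
print(ask("Write a Python function that validates an email address.", 0.0))

# Closer to 1: more random and creative - suited to content-focused tasks.
print(ask("Draft a playful headline about autumn coffee drinks.", 0.9))
```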

I have already talked about this in my article on how to build a brand post AI. But I highly recommend reading this very good guide on how temperature scales work with LLMs and how they impact the user base.

What Does The Data Tell Us?

That LLMs are not a direct replacement for search. Not even close, IMO. This Semrush study highlighted that LLM super users increased the number of traditional searches they were doing. The expansion theory seems to hold true.

But they have brought about a fundamental shift in how people access and interact with information. Conversational interfaces have incredible value, particularly in a workplace setting.

Who knew we were so lazy?

1. Guidance, Seeking Information, And Writing Dominate

These top three use cases account for 80% of all human-robot conversations. Practical guidance, seeking information, and please help me write something bland and lacking any kind of passion or insight, wondrous robot.

I will concede that the majority of Writing queries are for editing existing work. Still. If I read something written by AI, I will feel duped. And deception is not an attractive quality.

2. Non-Work-Related Usage Is Increasing

  • Non-work-related messages grew from 53% of all usage to more than 70% by July 2025.
  • LLMs have become habitual. Particularly when it comes to helping us make the right decisions. Both in and out of work.

3. Writing Is The Most Common Workplace Application

  • Writing is the most common work use case, accounting for 40% of work-related messages on average in June 2025.
  • About two-thirds of all Writing messages are requests to modify existing user text rather than create new text from scratch.

I know enough people who just use LLMs to help them write better emails. I almost feel sorry for the tech bros that the primary use cases for these tools are so lacking in creativity.

4. Less So Coding

  • Computer coding queries are a relatively small share, at only 4.2% of all messages.*
  • This feels very counterintuitive, but specialist bots like Claude or tools like Lovable are better alternatives.
  • This is a point of note. Specialist LLM usage will grow and will likely dominate specific industries because they will be able to develop better quality outputs. The specialized stage two style training makes for a far superior product.

*Compared to 33% of work-related Claude conversations.

It’s important to note that other studies have some very different takes on what people use LLMs for. So this isn’t as cut and dried as we think. I’m sure things will continue to change.

5. Men No Longer Dominate

  • Early adopters were disproportionately male (around 80% with typically masculine names).
  • That number declined to 48% by June 2025, with active users now slightly more likely to have typically feminine names.

Sure, us men have our flaws. Throughout history maybe we’ve been a tad quick to battle and a little dominating. But good to see parity.

6. Asking And Doing Dominate

  • 89% of all queries are Asking and Doing related.
  • 49% Asking and 40% Doing, with just 11% for Expressing.
  • Asking messages have grown faster than Doing messages over the last year, and are rated higher quality.
A ChatGPT-built table with examples of each query type – Asking, Doing, and Expressing (Image Credit: Harry Clarkson-Bennett)

7. Relationships And Personal Reflection Are Not Prominent

  • There have been a number of studies that state that LLMs have become personal therapists for people (see above).
  • However, relationships and personal reflection only account for 1.9% of total messages according to OpenAI.

8. The Bloody Youth (*Shakes Fist*)

Usage skews young, and younger users tend to be more trusting of LLM outputs – a point I’ll come back to in the takeaways.

Takeaways

I don’t think LLMs are a disaster for publishers. Sure, they don’t send any referral traffic and have started to remove citations outside of paid users (classic). But none of these tech-heads are going to give us anything.

It’s a race to the moon, and we’re the dog they sent on the test flight.

But if you’re a publisher with an opinion, an audience, and – hopefully – some brand depth and assets to hand, you’ll be ok. Although their crawling behavior is getting out of hand.

Shit-quality traffic and not a lot of it (Image Credit: Harry Clarkson-Bennett)

One of the most practical outcomes we as publishers can take from this data is the apparent change in intents. For eons, we’ve been lumbered with navigational, informational, commercial, and transactional.

Now we have Doing. Or Generating. And it’s huge.

Even simple tools can still drive fantastic traffic and revenue (Image Credit: Harry Clarkson-Bennett)

SEO isn’t dead for publishers. But we do need to do more than just keep publishing content. There’s a lot to be said for espousing the values of AI, while keeping it at arm’s length.

Think BBC Verify. Content that can’t be synthesized by machines because it adds so much value. Tools and linkable assets. Real opinions from experts pushed to the fore.

But it’s hard to scale that quality. Programmatic SEO can drive amazing value. As can tools. Tools that answer users’ “Doing” queries time after time. We have to build things that add value outside of the existing corpus.

And if your audience is generally younger and more trusting, you’re going to have to lean into this more.

This post was originally published on Leadership in SEO.


Featured Image: Roman Samborskyi/Shutterstock

How AI Really Weighs Your Links (Analysis Of 35,000 Datapoints)

Before we jump in:

  • I hate to brag, but I will say I’m extremely proud to have placed 4th in the G50 SEO World Championships this past week.
  • I’m speaking at NESS, the global News & Editorial SEO Summit, on October 22. Growth Memo readers get 20% off with the code “kevin2025.”

Backlinks have long been one of the most reliable currencies of visibility in search results.

We know links matter for visibility in AI-based search, but how they work inside LLMs – including AI Overviews, Gemini, and ChatGPT & Co. – is still somewhat of a black box.

The rise of AI search models changes the rules of organic visibility and the competition for share of voice in LLM results.

So the question is, do backlinks still earn visibility in AI-based modalities of search… and if so, which ones?

If backlinks were the currency of the pre-LLM web, this week’s analysis is a first look at whether they’re still legal tender in the new AI search economy.

Together with Semrush, I analyzed 1,000 domains and their AI mentions against core backlink metrics.

Image Credit: Kevin Indig

The data surfaced four clear takeaways:

  1. Backlink-earned authority helps, but it’s not everything.
  2. Link quality outweighs volume.
  3. Most surprisingly, nofollow links pull real weight.
  4. Image links can move the needle on authority.

These findings help us all understand how AI models surface sites, along with exposing what backlink levers marketers can pull to influence visibility.

Below, you’ll find the methodology, deeper data takeaways, and, for premium subscribers, recommendations (with benchmarks) to put these findings into action.

Methodology

For this analysis, I looked at the relationship between AI mentions and backlink metrics for 1,000 randomly selected web domains. All data is from the Semrush AI SEO Toolkit, Semrush’s AI visibility & search analytics platform.

Along with the Semrush team, I examined the number of mentions across:

  • ChatGPT.
  • ChatGPT with Search activated.
  • Gemini.
  • Google’s AI Overviews.
  • Perplexity.

(If you’re wondering where Claude.ai fits in this analysis, we didn’t include it at this time as its user base is generally less focused on web search and more on generative tasks.)

For the platforms above, we measured Share of Voice and the number of AI mentions against the following backlink metrics:

  • Total backlinks.
  • Unique linking domains.
  • Follow links.
  • Nofollow links.
  • Authority Score (a Semrush metric referred to as Ascore below).
  • Text links.
  • Image links.

In this analysis, I used two different ways of measuring correlation across the data: a Pearson correlation and a Spearman correlation.

If you are familiar with these concepts, skip to the next section where we dive into the results.

For everyone else, I’ll break these down so you have a better understanding of the findings below.

Both Pearson and Spearman are correlation coefficients – numbers between -1 and +1 that measure how strongly two different variables are related.

The closer the coefficient is to +1 or -1, the more likely and stronger the correlation. (Near 0 means weak or no correlation at all.)

  • Pearson’s r measures the strength and direction of a linear relationship between two variables, using the raw values. This makes it sensitive to outliers, and if the relationship curves or has thresholds, Pearson under-measures it.
  • Spearman’s ρ (rho) measures the strength and direction of a monotonic relationship – whether values consistently move in the same (or opposite) direction, not necessarily in a straight line. It works on ranks rather than raw values, asking: “When one thing increases, does the other usually increase too?” That makes it more robust to outliers and able to capture non-linear, monotonic patterns.

A gap between Pearson and Spearman correlation coefficients can mean the gains are non-linear.

In other words: There’s a threshold to cross. And that means the effect of X on Y doesn’t kick in right away.

Examining both the Pearson and Spearman coefficients can tell us if nothing (or very little) happens until you pass a certain point – and then once you exceed that point, the relationship shows up strongly.

Here’s a quick example of what an analysis that involves both coefficients can reveal:

Spending $500 (action X) on ads might not move the needle on sales growth (outcome Y). But once you cross, say, $5,000/month (action X), sales start growing steadily (outcome Y).
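
To see that gap emerge in numbers, here’s a small illustrative sketch (synthetic data, not the study’s) using scipy: a monotonic but non-linear relationship scores higher on Spearman than on Pearson:

```python
# Illustrative sketch: a monotonic, non-linear relationship yields a
# Pearson/Spearman gap. Synthetic data, not the study's dataset.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 1_000)                  # e.g., a backlink metric
y = np.exp(x) * rng.lognormal(0, 0.2, 1_000)   # outcome grows non-linearly

r, _ = pearsonr(x, y)     # penalized by the curvature
rho, _ = spearmanr(x, y)  # only cares that higher x brings higher y
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
# Typical output: Pearson around 0.7, Spearman near 1.0 - the gap
# signals that gains exist but don't kick in linearly.
```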

And that’s the end of your statistics lesson for today.

Image Credit: Kevin Indig

The first signal we examined was the strength of the relationship between the number of backlinks a site gets versus its AI Share of Voice.

Here’s what the data showed:

  • Authority Score has a moderate link to Share of Voice (SoV): Pearson ~0.23, Spearman ~0.36.
  • Higher authority means higher SoV, but the gains are uneven. There’s a threshold you need to cross.
  • Authority supports visibility, yet it does not explain most of the variance. What this means is that backlinks do have an impact on AI visibility, but there is more to the story, like your content, brand perceptions, etc.

Also, the number of unique linking domains matters more than the total number of backlinks.

In plain terms, your site is more likely to have a larger SoV when you have links from many different websites than a huge number of links from just a few sites.

Image Credit: Kevin Indig

Across all models, the strongest relationship occurred between Authority Score (0.65 Pearson, 0.57 Spearman) and the number of mentions.

Here’s how Semrush defines the Authority Score measurement:

Authority Score is our compound metric that grades the overall quality of a website or a webpage. The higher the score, the more assumed weight a domain’s or webpage’s outbound links to another site could have.

It takes into account the number and quality of backlinks, organic traffic to link source pages, and the spamminess of the link profile.

Of course, Ascore is just a proxy for quality. LLMs have their own way of arriving at backlink quality. But the data shows that we can use Semrush’s Ascore as a good representative.

Most models value this metric equally for mentions, but ChatGPT Search and Perplexity value it the least compared to the average.

Surprisingly, regular ChatGPT (without search activated) weighs Ascore the most out of all models.

Critical to know: Median mentions jump from ~21.5 in decile 8 to ~79.0 in decile 9. The relationship is non-linear. In other words, the biggest gains come when you hit the upper boundaries of authority, or Ascore in this case.

(For context, a decile is a way of splitting a dataset into 10 equal parts. Each segment, or decile, contains 10% of the data points when they’re sorted in order.)
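
For anyone who wants to replicate this style of analysis, here’s a quick sketch of bucketing a metric into deciles and comparing median mentions per decile with pandas (synthetic numbers, purely illustrative):

```python
# Sketch: bucket sites into Ascore deciles and compare median mentions.
# Synthetic data for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "ascore": rng.uniform(0, 100, 1_000),
    "mentions": rng.poisson(20, 1_000),
})
# qcut sorts the values and splits them into 10 equal-sized buckets.
df["decile"] = pd.qcut(df["ascore"], 10, labels=range(1, 11))
print(df.groupby("decile", observed=True)["mentions"].median())
```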

Image Credit: Kevin Indig

Perhaps the most significant finding from this analysis is that it doesn’t matter much if the links are set to nofollow or not!

And this has huge implications.

Confirmation of the value of nofollow links is so important because these types of links tend to be easier to build than follow links.

This is where LLMs are distinctly different from search engines: We’ve known for a while that Google also counts nofollow links, but not how much and for what (crawling, ranking, etc.).

Once again, you won’t see big gains until you’re in the top 3 deciles, or the top 30% of the data points.

Follow links → Mentions:

  • Pearson 0.334, Spearman 0.504

Nofollow links → Mentions:

  • Pearson 0.340, Spearman 0.509

Conversely, Google’s AI Overviews and Perplexity weighed regular links the highest and nofollow links the least.

And interestingly, Gemini and ChatGPT weigh nofollow links the highest (over regular follow links).

Here’s my own theory as to why Gemini and ChatGPT weigh nofollow more:

With Gemini, I’m curious whether Google weighs nofollow links higher than we’ve believed in the past. And with ChatGPT, my hypothesis is that Bing also weighs nofollow links higher (once Google started doing it, too). But this is just a theory, and I don’t have the data to support it at this time.

Image Credit: Kevin Indig

Beyond text-based backlinks, we also tested if image-based backlinks carry the same weight.

And in some cases, they had a stronger relationship to mentions than text-based links.

But how strong?

  • Images vs mentions: Pearson 0.415, Spearman 0.538
  • Text links vs mentions: Pearson 0.334, Spearman 0.472

Image links really start to pay off once you already have some authority.

  • From mid decile tiers up, the relationship turns positive, then strengthens, and is strongest in the top deciles.
  • In low-Ascore deciles (deciles 1 and 2), the images → mentions tie is weak or negative.

If you are targeting mention growth on Perplexity or Search-GPT, image links are especially productive.

  • Images correlate with mentions most on Perplexity and Search-GPT (Spearman ≈ 0.55 and 0.53), then ChatGPT/Gemini (≈ 0.49 – 0.52), then Google-AI (≈ 0.46).

Featured Image: Paulo Bobita/Search Engine Journal

GA4 Five Years Later: The Current State Of Marketing Analytics

As a marketing specialist who has gone through the transition from Universal Analytics to Google Analytics 4 on countless projects, I can confidently say that no platform migration has divided the marketing community quite like GA4.

Five years after the initial launch of GA4 in October 2020, and more than a year since the complete Universal Analytics shutdown, it’s time for an honest review of where we stand with Google’s flagship analytics platform.

The Great Migration: A Bumpy Road To The Future

When Google announced in March 2022 that Universal Analytics would stop processing data by July 2023, the marketing world was in shock. The short window between the announcement and the sunset date caught many marketers off guard, causing mild panic among companies and website owners.

What followed was one of the most contentious platform migrations in digital marketing history.

Starting July 1, 2023, standard Universal Analytics properties stopped processing hits, with Universal Analytics 360 properties receiving a one-time processing extension ending on July 1, 2024.

For many of us who had spent over a decade mastering Universal Analytics, this wasn’t just a platform change; it was the end of an era.

The fundamental shift from UA’s session-based model to GA4’s event-based architecture represented more than a technical upgrade. It was a complete reimagining of how we measure and understand user behavior.

While Google positioned this as future-proofing for a privacy-first, cross-device world, the reality on the ground was far more challenging.

The Promise Vs. The Reality

Google’s marketing pitch for GA4 was compelling: enhanced user journey tracking, privacy-compliant measurement, advanced machine learning, and more intuitive reporting.

As someone who eagerly adopted GA4 early, I was excited about these possibilities. However, the execution has been a mixed bag at best.

The User Experience Crisis

Perhaps the most important criticism of GA4 has been its user interface, with widespread negative feedback from the marketing community.

The interface complaints aren’t just about aesthetics; they’re about productivity. Tasks that took two clicks in Universal Analytics now require six or more steps in GA4. Filtering for a single page, something marketers do dozens of times daily, has become an exercise in frustration.

Data Reliability Concerns

Beyond usability issues, GA4 has struggled with data reliability problems that strike at the heart of marketing decision-making.

According to Piwik PRO’s analysis, conversion tracking discrepancies, inaccurate traffic reports, integration problems with Google Ads, and discrepancies between GA4 data and BigQuery exports have been persistent issues since launch.

These aren’t minor technical glitches; they’re fundamental problems that affect how we measure campaign performance and allocate marketing budgets.

The shift from UA’s goal-based conversion tracking to GA4’s event-based system has created confusion around what we’re actually measuring, particularly when comparing year-over-year performance.

Signs Of Progress: Recent Improvements

To Google’s credit, it hasn’t ignored the criticism. The past year has seen several meaningful updates that address some of the most pressing concerns.

Google Analytics has introduced a Generated Insights feature that summarizes trends and changes in data, helping users make quicker decisions. These insights are displayed at the top of detail reports and include action buttons for report modifications. This AI-powered analysis is genuinely helpful for identifying patterns that might otherwise be missed.

The addition of Anomaly Detection in detail reports automatically flags any unexpected spikes or dips in your data, represented as circles on your charts. For busy marketers juggling multiple campaigns, this proactive approach to data monitoring is a welcome improvement.

Perhaps most significantly for agencies and enterprises, as of March 2025, GA4 finally supports the ability to copy reports and explorations from one property to another. If you’ve ever had to manually rebuild the same custom reports across multiple client accounts, you’ll appreciate how much time this saves.

The Broader Impact On Marketing Analytics

The GA4 transition has forced the entire marketing analytics landscape to evolve. Current data shows that over 15 million websites use GA4, making it the de facto standard for web analytics regardless of individual opinions about the platform.

Screenshot from trends.builtwith.com, August 2025

Looking at historical adoption, more than 21 million websites used Universal Analytics, which leaves a gap to be filled. So, despite GA4 leading the analytics industry, it still has a long way to go to reach its predecessor’s adoption rate, leaving something of a vacuum.

This shift has had several unintended consequences. Many organizations have diversified their analytics stack, supplementing GA4 with specialized tools that fill specific gaps. There is an increased interest in alternatives like Matomo for privacy-focused measurement and more sophisticated attribution modeling platforms for enterprise users.

The emphasis on first-party data collection has also intensified. With the end of third-party cookies and stricter consent rules, website data coverage will decrease, limiting your leverage.

First-party data will become more important than ever. This has pushed marketing teams to become more strategic about data collection and customer relationship building.

Practical Recommendations For Marketing Teams

After five years of working with GA4, here’s my advice for marketing teams struggling with the transition:

Invest In Education

The learning curve has been steep, but unavoidable.

As former Google Analytics team member Krista Seiden wisely noted:

“The only way to learn a new tool is to dive in and actually get your feet wet.”

Budget time and resources for proper training.

Focus On Trends, Not Absolutes

When comparing year-over-year performance, focus on trends and seasonality rather than absolute numbers. GA4’s different measurement methodology means exact numerical comparisons with UA data are largely meaningless.

Supplement Strategically

Don’t try to make GA4 do everything. Identify specific gaps in your analytics needs and fill them with specialized tools.

Many successful marketing teams now use GA4 as their foundation while leveraging additional platforms for detailed attribution, customer journey mapping, or real-time optimization.

Embrace The Event-Based Model

Rather than fighting GA4’s event-based structure, lean into it. Google recommends implementing new logic that makes sense in the event-based context rather than simply copying over existing event logic from UA. This approach will yield better insights in the long run.
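
As one concrete illustration of event-based thinking, here’s a minimal sketch that sends a custom event to GA4 server-side via the Measurement Protocol; the measurement ID, API secret, and event payload are placeholders, so treat this as a starting point rather than a drop-in implementation:

```python
# Sketch: send a custom GA4 event via the Measurement Protocol.
# MEASUREMENT_ID, API_SECRET, and the event payload are placeholders.
import requests

MEASUREMENT_ID = "G-XXXXXXXXXX"
API_SECRET = "your-api-secret"

payload = {
    "client_id": "555.1234567890",  # identifies the browser/device
    "events": [{
        "name": "newsletter_signup",          # event-based, not goal-based
        "params": {"method": "footer_form"},  # context travels as params
    }],
}

response = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
response.raise_for_status()  # GA4 returns 2xx even for ignored payloads
```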

Looking Forward

Cookie deprecation and enhanced privacy regulations mean that features like enhanced conversions, consent mode V2, and offline conversion tracking are now necessary rather than nice-to-haves. GA4, despite its flaws, is better positioned for this privacy-first future than Universal Analytics ever was.

The platform will undoubtedly continue improving. Google has shown responsiveness to user feedback, and the recent updates demonstrate a commitment to addressing the most pressing usability concerns. However, marketers should expect GA4 to remain more complex and technical than its predecessor.

The Bottom Line

Five years after its launch, GA4 represents both the promise and peril of modern marketing analytics. It offers capabilities that Universal Analytics couldn’t match: cross-platform tracking, privacy compliance, and AI-powered insights. Yet, it also demands a level of technical sophistication that many marketing teams struggle to achieve.

The forced migration was undoubtedly painful, and the criticism of GA4’s usability is largely justified. However, the platform is here to stay, and fighting that reality serves no one. The organizations that will thrive are those that invest in proper GA4 implementation, supplement it strategically with other tools, and adapt their processes to work with rather than against its event-based philosophy.

As marketers, we’ve weathered platform changes before, and we’ll weather this one, too. The key is approaching GA4 not as a replacement for Universal Analytics, but as a fundamentally different tool for a fundamentally different digital landscape. Once we make that mental shift, GA4 becomes less frustrating and more powerful.

The future of marketing analytics is privacy-first, cross-platform, and AI-enhanced. GA4, for all its current limitations, is our best free gateway to that future. It’s time to stop mourning Universal Analytics and start mastering what comes next.

Featured Image: kenchiro168/Shutterstock

Introducing a new AI-powered package: Track your brand in AI search 

We’re excited to announce the beta release of Yoast AI Brand Insights, available as part of the Yoast SEO AI+ package. This new tool helps you understand how your brand appears in AI-powered answers, and where you can improve your visibility. Ideal for bloggers, marketers, and brand managers, Yoast AI Brand Insights gives you an overview of your brand presence across tools like ChatGPT, Perplexity, and Gemini.

For years, Yoast has helped you get found in search engines. Recently though, search is changing. People aren’t just using Google anymore, they’re turning to AI tools like ChatGPT for answers. Those answers often mention brand names as recommendations. So here’s the big question: when AI tools answer questions in your niche, does your brand show up? Our new tool, Yoast AI Brand Insights (beta), helps you find out. 

Yoast AI Brand Insights lets you see when and how your brand appears in AI-generated answers and helps you understand where you need to focus your effort to improve your visibility. 

Why Yoast AI Brand Insights matters now

AI-powered answers are shaping customer decisions faster than ever. Visitors from AI search are often more likely to convert than those from regular search. It’s no surprise, because asking an AI-powered chatbot can feel like getting a personal recommendation. After all, word of mouth remains one of the most powerful ways to build trust and spark interest.

Most analytics tools can’t tell you how your brand appears in AI answers, or if it’s mentioned at all. With more people turning to tools like ChatGPT, Perplexity, and Gemini for advice, that’s a big blind spot if you are trying to get your name out there. 

Yoast AI Brand Insights aims to close that gap. You’ll see when and how your brand appears, what’s being said, and where the information comes from, so you can take action to ensure your brand is part of the conversation. 

See how you stack up against other brands mentioned in your prompts

With just a few clicks, you can: 

  • Check if your brand is mentioned in AI-generated answers for relevant search queries 
  • Benchmark against competitors: see how often your brand comes up 
  • Understand the sentiment connected to your brand: positive, neutral, or negative 
  • Find the sources AI tools use when they mention you 
  • Track your progress over time so you can respond to changes quickly 

Pricing & getting started 

Yoast SEO AI+ is priced at $29.90/month, billed annually ($358.80 plus VAT). The plan includes one automated brand analysis per week per brand, so you can track and compare how your brand is showing up in AI-powered search over time. With each purchase of Yoast SEO AI+ you receive one extra brand.

With this package you also get the full value of Yoast WooCommerce SEO, which includes everything from Yoast SEO Premium, News SEO, Local SEO, and Video SEO, in addition to one free seat of the Yoast SEO Google Docs add-on.  

For marketers, this means you no longer need to patch together separate solutions for on-page SEO, ecommerce optimization, content creation, or LLM visibility. Everything you need to analyze, optimize, and grow your brand presence is included in one complete package. 

How to get started

  1. Log in with MyYoast: secure, single sign-on for all your Yoast tools and products.
  2. Open Yoast AI Brand Insights: you can find it near the Yoast SEO Academy.
  3. Set up your brand: add your brand’s name and a short introduction to your business.
  4. Run your scan: we’ll find relevant AI search queries for you; use them or tweak them to your liking.
  5. Review your results: see relevant mentions and their sources, your brand sentiment, and the AI Visibility Index in an easy-to-read dashboard.

Want more details? Check out the full guide to getting started. 

Launching in beta

Yoast AI Brand Insights is now available in beta as part of Yoast SEO AI+. This is your chance to be among the first to explore how your brand shows up in AI-powered search. We’d love your thoughts as we refine the tool; share them here.

See how your brand appears in AI search today 

Get Yoast SEO AI+ today to start your first brand scan. See if and how AI tools are talking about you. 

Google AI Overviews Overlaps Organic Search By 54%

New research from BrightEdge offers insights into how Google’s AI Overviews ranks websites across different verticals, with implications for what SEOs and publishers should be focusing on.

AIO And Organic Search

The data shows that 54% of the AI Overviews citations matched the web pages ranked in the organic search results. This means that 46% of citations do not overlap with organic search results. Could this be an artifact of Google’s FastSearch algorithm?

Google’s FastSearch is based on ranking signals generated by the RankEmbed deep-learning model that is trained on search logs and third-party quality raters. The search logs consist of user behavior data, what Google terms “click and query data.” Click data teaches the RankEmbed model about what users mean when they search.

Click behavior is feedback about queries and relevant documents, similar to how the ratings submitted by the quality raters teach RankEmbed about quality. User clicks are a behavioral signal of which documents are relevant. So, as a hypothetical example, if people who search for “How to” tend to click on videos and tutorials, this teaches the model that videos and tutorials tend to satisfy those kinds of queries. RankEmbed “learns” that documents that are semantically similar to a tutorial are good matches for that kind of query. The models aren’t learning in a human sense; they are identifying patterns in the click data.

This doesn’t mean that the 54% of AIO-ranked sites are there because of traditional ranking factors. It could be that the FastSearch algorithm retrieves results that are similar to the regular search results 54% of the time.

Insight About Ranking Factors

BrightEdge’s data could be reflecting the complexity of Google’s FastSearch algorithm, which prioritizes speed and semantic matching of queries to documents without the use of traditional ranking signals like links. This is something that SEOs and publishers should stop and consider because it highlights the importance of content and also the importance of matching the type of content that users prefer to see.

So, if users are querying about a product, they don’t expect to see a page with an essay about the product; they expect to see a page with the product.

Organic And AIO Overlap Evolved Over Time

When AIO launched, there was only about a 32% overlap between AIO and the classic organic search results. BrightEdge’s data shows that the overlap has grown over the sixteen months between the debut of AI Overviews and today.

Organic And AIO Match Depends On The Vertical

The 54/46 percentage split isn’t across the board. The percentage of AIO-ranked sites that match the organic search results varies according to the vertical.

Your Money Or Your Life (YMYL) content showed a higher rate of overlap between organic and AIO.

BrightEdge’s data shows:

  • Healthcare has a strong overlap: 75.3% overlap (began at 63.3%).
  • Education overlap has increased significantly: 72.6% overlap between organic and AIO, showing +53.2 percentage points growth, from 19.4% to 72.6%.
  • Insurance also experienced increased overlap: 68.6%. That’s a +47.7 percentage points growth from the 20.9% overlap when AIO was first introduced.
  • E-commerce has very little overlap with the organic search results: 22.9% overlap (only +0.6 percentage points change).

I’m going to speculate here and say that Healthcare, Education, and Insurance search results may have a strong overlap because the pool of authoritative sites that users expect to see may be smaller. This may mean that websites in these verticals may have to work hard to be the kind of site that users expect to see. A broad and simplified explanation is that FastSearch does not use traditional organic search ranking factors. It’s ranking the kinds of web pages that match user expectations, meet certain quality standards, and are semantically relevant to the query.

What Is Going On With E-Commerce?

E-commerce is the one area where overlap between organic and AIO remained relatively steady with very little change. BrightEdge notes that AIO coverage actually decreased by 7.6%. AIO may be a good fit for research but is not a good format for users who are ready to make a purchase.

Final Takeaways

Although BrightEdge recommends focusing on traditional SEO for sites in verticals that have over 60% overlap with organic search, it’s a good idea for all sites, regardless of vertical, to focus on traditional SEO, on precision, and on matching user expectations for each query, and to pay attention to what users are saying so they can react swiftly to changing trends.

BrightEdge offers the following advice:

“Step 1: Identify Your Overlap Profile. Measure what percentage of your AI Overview citations also rank organically and benchmark against the 54% average to understand where you stand.

Step 2: Match Strategy to Intent. High overlap (>60%) means focus on SEO; low overlap (<30%) …

Step 3: Monitor the Convergence. Track your overlap percentage monthly as it has grown +22% industry-wide in 16 months, watching for shifts like September 2024’s +5.4% jump.”

Read BrightEdge’s report:

AI Overview Citations Now 54% from Organic Rankings

Why is summarizing essential for modern content?

Content summarization isn’t a new idea. It goes back to the 1950s when Hans Peter Luhn at IBM introduced one of the first algorithms to summarize text. Back then, the goal was straightforward: identify the most important words in a piece of writing and create a shorter version. What began as a technical experiment has now evolved into a fundamental aspect of how we read, learn, and share information. Summarization allows us to cut through overwhelming amounts of text and focus on what really matters, shaping everything from research and education to marketing and SEO.

In this article, we’ll explore why summarizing is essential for modern content and how both humans and AI-driven tools are making information more accessible, trustworthy, and impactful.

What is content summarization?

Content summarization is the process of condensing a large piece of high-quality content into a shorter version while keeping the essential points intact. The aim is straightforward: to produce a clear and concise summary that accurately represents the meaning of the original text without overwhelming the reader.

Summarization makes information easier to process. Imagine reading a lengthy report or book but only needing the key takeaways for a meeting. It also helps individuals and businesses grasp the core message quickly, saving time and effort.

There are two main approaches to summarizing modern content:

Manual or human-driven content summarization

Think back to the last time you turned a long article into a short brief for a colleague; that’s a perfect example of manual content summarization. In this approach, a human reads, weighs what matters, and rewrites the core points so the information is easy to digest.

Manual content summarization requires critical thinking to spot what matters and language skills to explain important information clearly and concisely.

Clear advantages of human-driven content summarization are:

  • The ability to notice nuance and implied meaning
  • Flexibility to shape tone and level of detail for a specific audience
  • The creativity to link ideas or highlight unexpected relevance
  • Judgment to keep or discard details based on purpose

This human-led method complements content summarization AI, giving summaries a thoughtful, audience-aware edge.

AI-driven content summarization

The other approach is powered by technology. AI-driven content summarization utilizes natural language processing and machine learning to rapidly scan through text and generate summaries in seconds. It typically works in two ways:

  • Extractive summarization, where the AI selects the most important sentences directly from the content
  • Abstractive summarization, where the AI generates new sentences that capture the main ideas in a more natural way
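
To ground the extractive approach, here’s a tiny sketch in the spirit of Luhn’s frequency-based method mentioned earlier – scoring sentences by the frequency of their significant words (illustrative only; production systems are far more sophisticated):

```python
# Toy extractive summarizer in the spirit of Luhn's frequency method.
# Illustrative only - real systems handle stemming, stopwords, and more.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "that", "this"}

def tokenize(text: str) -> list[str]:
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(tokenize(text))
    # Score each sentence by the total frequency of its significant words.
    top = sorted(sentences,
                 key=lambda s: sum(freq[w] for w in tokenize(s)),
                 reverse=True)[:max_sentences]
    # Reassemble in original order so the summary reads naturally.
    return " ".join(s for s in sentences if s in top)

print(summarize("Summarization saves time. It keeps key points. Cats are nice."))
```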

The benefits are clear: speed, consistency, and scalability. AI can summarize website content, reports, or articles far faster than a human team. However, it has limits. Context can be missed, and nuances like sarcasm or cultural references may be overlooked. The quality also depends on the AI model and the original text.

Both manual and AI-driven summarization play a crucial role today. Humans bring nuance and creativity, while AI delivers efficiency and scale. Together, they make summarization an essential tool for modern communication.

What are some of the core benefits of content summarization?

Turning lengthy information into clear takeaways is more than convenient. It makes content meaningful, easier to use, and far more effective in learning and communication. Whether done manually or supported by AI tools, summarization offers key benefits:

Enhances learning and study preparation

Summarizing strengthens comprehension and critical thinking by distilling main ideas and separating them from supporting details. Students and professionals can also rely on concise notes that save time when revising or preparing presentations.

Improves focus and communication

Condensing text sharpens concentration on what matters most. It also trains you to express ideas in a precise and structured way, which enhances both writing and verbal skills.

Saves time and scales with AI tools

Summaries allow readers to absorb essential points without having to read hours of content. With AI tools, this process scales further, reducing large volumes of text into clear insights within minutes.

Boosts accessibility and approachability

Summarization makes complex or lengthy content approachable and accessible for diverse audiences. Multilingual AI tools extend this further, breaking down language barriers and ensuring knowledge reaches a global audience.

Why summarization matters in the modern content landscape

We live in an age of too much information and too little time. Every day, there is more content than anyone can read, which means people make split-second choices about what to open, skim, or ignore. This makes it more important that your content presents clear takeaways upfront before readers move on. Content summarization is how you win that first, critical moment of attention.

Information overload

Digital work and life produce an enormous flood of text, messages, reports, and notifications. This makes it challenging for readers to find the right signal in the noise. Therefore, text summaries act as a filter, surfacing the most relevant facts so readers and teams can act faster and with less cognitive friction.

People scan and skim, so clarity wins

Web reading behavior has been stable for years: most users scan pages rather than read every word. Good summaries present the core idea in a scannable form, increasing the chance your content is understood and used. That scannability also improves the odds that search engines and AI LLMs will surface your content as a quick response to user queries.

Trust and clarity for readers and systems

A clear and crisp text summary signals that the author understands their topic and values the reader’s time. That builds trust. On the search side, concise and well-structured summaries are what engines and AI systems prefer when generating featured snippets or AI overviews. Being chosen for a snippet or overview can boost visibility and credibility in search results.

Faster decision-making

When stakeholders, readers, or customers need to act quickly, summaries provide the necessary context to make informed decisions. Whether it is an executive skimming a report or a user checking if an article answers their question, summaries reduce the time to relevance and accelerate outcomes. This is also why structured summaries can increase the chance of being surfaced by search features that prioritize immediate answers.

Prominent use cases of content summarization

Content summarization is not a nice-to-have. It is one of the main reasons modern content continues to work for busy humans and businesses. Below are the most practical and high-impact ways in which the summarization of modern content is currently being used.

Business reports

Executives and teams rely on concise summaries to make informed decisions quickly and effectively. Executive summaries and one-page briefs transform dense reports into actionable insights, enabling stakeholders to determine what requires attention and what can be deferred. Effective summaries reduce meeting time, expedite approvals, and enhance alignment across teams.

Educational content

Students and educators use summaries to focus on core concepts and to prepare study notes. AI-driven summarization tools can generate revision guides, extract exam-relevant points, and turn long lectures or papers into study-friendly formats. These tools can support personalized learning and speed up content creation for instructors.

Marketing strategies and reporting

Marketers rely on summaries to present campaign performance, highlight key KPIs, and share learnings without overwhelming stakeholders. Condensed campaign briefs and executive summaries enable teams to iterate faster, align on priorities, and uncover insights for strategic changes. Summaries also make it easier to compare campaigns and track trends over time.

Everyday consumption: news digests, newsletters, podcast notes

Readers and listeners increasingly prefer bite-sized overviews. Newsrooms use short summaries and AI-powered digests to connect busy audiences with high-quality reporting. Podcasts and newsletters pair episode or article summaries with timestamps and highlights to improve discoverability and retention. Summaries help users decide what to read, listen to, or save for later.

Content Summarization & SEO: Does It Boost Organic Visibility?

Did you know that content summarization can help your SEO strategy? Search engines prioritize clarity, relevance, and user engagement, and concise summaries play a role in meeting those criteria. They not only shape a smoother user experience but also help search engines quickly grasp the core themes of your content.

Boosting click-through rates

Summaries also support higher CTRs in search results. A clear and compelling meta description written as a summary can serve as a strong preview of the page. For example, a blog on “10 Healthy Recipes” with a summary that highlights “quick breakfasts, vegetarian lunches, and easy weeknight dinners” is more likely to attract clicks than a generic description.

Improving indexing and relevance

From a technical standpoint, summarization helps search engines with indexing and relevance. Algorithms rely on context and keywords, and well-written summaries bring focus to the essence of your content. This is especially important for long-form blogs, case studies, or reports where the main ideas may otherwise get buried.

Another growing benefit is visibility in featured snippets and People Also Ask sections. Summaries that clearly answer a query or highlight structured takeaways increase the chances of being pulled into these high-visibility SERP features, directly boosting organic reach.

Extending multi-channel visibility

Content summarization also creates multi-channel opportunities. The same summaries can be repurposed as social media captions, newsletter highlights, or even adapted for voice search, where users want concise and direct answers.

Supporting AI and LLMs

Lastly, in the age of AI, summaries provide context for LLMs (large language models). Clean, structured summaries make it easier for AI to process and reference your content, which extends your reach beyond search engines into how content is surfaced across AI-powered tools.

How to write SEO-friendly content summaries with Yoast

The basics of an effective summary are simple: keep it clear, concise, and focused on the main points while signalling relevance to both readers and search engines.

This is exactly where Yoast can make your life easier. With AI Summarize, you can generate instant, editable bullet-point takeaways that boost scannability for readers and improve how search engines interpret your content.

Want to take it further? Yoast SEO Premium unlocks extended AI features, smarter keyword optimization, and advanced SEO tools that save you time while improving your visibility in search.

A smarter analysis in Yoast SEO Premium

Yoast SEO Premium has a smart content analysis that helps you take your content to the next level!

What is AI text summarization?

AI text summarization uses artificial intelligence to condense text, audio, or video content into shorter, more digestible content. Rather than just cutting words, it preserves key ideas and context, making information easier to absorb.

Today, summarization relies on large language models (LLMs), which not only extract sentences but also interpret nuance and generate concise, natural-sounding summaries.
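
In practice, that can be as simple as a prompt. Here’s a minimal abstractive sketch assuming the OpenAI Python SDK (the model name, word limit, and prompt wording are illustrative choices):

```python
# Sketch: abstractive summarization with an LLM (OpenAI Python SDK).
# Model name, word limit, and prompt wording are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article: str, max_words: int = 60) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.3,  # keep the summary precise and stable
        messages=[
            {"role": "system",
             "content": f"Summarize the user's text in at most {max_words} "
                        "words. Preserve key facts; do not add new claims."},
            {"role": "user", "content": article},
        ],
    )
    return response.choices[0].message.content
```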

How does AI text summarization work?

AI text summarization relies on a combination of sophisticated systems that help a large language model deeply understand the content, decipher patterns, and generate content summaries without losing important facts.

Here’s a brief overview of the process of AI-powered content summarization:

  • Understanding context: AI models analyze entire documents, identifying relationships, sentiment, and flow rather than just looking at keywords, allowing the AI models to understand at a deeper level
  • Generating abstractive summaries: Unlike extractive methods, which simply copy existing sentences, abstractive summarization paraphrases or rephrases content to convey the essence in fresh, coherent language
  • Fine-tuning for accuracy: LLMs can be trained on specific domains such as news, legal, or scientific content, so the summaries reflect the right tone, terminology, and level of detail

Benefits of AI text summarization

The true power of AI summarization lies in the value it creates. By blending scale with accuracy, it turns information overload into actionable knowledge.

  • Scales content summarization: Handles hundreds of pages or documents in minutes, which would otherwise require hours of manual effort
  • Ensures consistency: Produces summaries in a uniform style and structure, making information easier to compare and use
  • Saves time and costs: Frees up teams, researchers, and analysts to focus on insights instead of spending time reading
  • Improves accessibility: Makes complex content digestible for wider audiences, including those unfamiliar with technical details
  • Supports accuracy with human oversight: Editors can refine summaries quickly while still benefiting from automation

Practical use cases of AI summarization

AI summarization is not just theoretical. It has already become part of how businesses, teams, and individuals manage daily information flow. Here are some of the common applications of AI summarization that have become part of our lives:

  • Meetings: Automatically captures key points, decisions, and action items in real time
  • Onboarding: Condenses company or project documentation so new team members can understand essentials quickly
  • Daily recaps: Summarizes Slack, Teams, or email threads into clear, concise updates
  • Surfacing information: Extracts relevant context from long reports, technical documents, or customer feedback, ensuring that critical insights are never overlooked

In fact, AI agents are already being used in professional settings to summarize key provisions in documents, with 38% of professionals relying on these tools to expedite the review process. This demonstrates that AI summarization is not just a future possibility, but an integral part of how modern teams manage complex information.

In summary, don’t skip the summary!

Summarization is no longer a sidekick in your content strategy; it is the main character. It fuels faster human learning, strengthens SEO by making your pages clearer to search engines, and ensures AI systems don’t misrepresent your brand. When your content is easy to scan, you reduce bounce rates, improve trust, and increase visibility across platforms where attention spans are short.

This is exactly where a tool like Yoast SEO Premium becomes invaluable. With features like AI Summarize, you can instantly generate key takeaways that work for readers, search engines, and AI overviews alike. Instead of manually condensing every piece of content, you achieve clarity at scale while maintaining editorial control. Summarization is not just about making content shorter; it is about making it smarter, and Yoast helps you do it with ease.

So, to summarize the summary: invest in doing this right, because the future of content depends on it.

The US may be heading toward a drone-filled future

On Thursday, I published a story about the police-tech giant Flock Safety selling its drones to the private sector to track shoplifters. Keith Kauffman, a former police chief who now leads Flock’s drone efforts, described the ideal scenario: A security team at a Home Depot, say, launches a drone from the roof that follows shoplifting suspects to their car. The drone tracks their car through the streets, transmitting its live video feed directly to the police. 

It’s a vision that, unsurprisingly, alarms civil liberties advocates. They say it will expand the surveillance state created by police drones, license-plate readers, and other crime tech, which has allowed law enforcement to collect massive amounts of private data without warrants. Flock is in the middle of a federal lawsuit in Norfolk, Virginia, that alleges just that. Read the full story to learn more.

But the peculiar thing about the world of drones is that its fate in the US—whether the skies above your home in the coming years will be quiet, or abuzz with drones dropping off pizzas, inspecting potholes, or chasing shoplifting suspects—pretty much comes down to one rule. It’s a Federal Aviation Administration (FAA) regulation that stipulates where and how drones can be flown, and it is about to change.

Currently, you need a waiver from the FAA to fly a drone farther than you can see it. This is meant to protect the public and property from in-air collisions and accidents. In 2018, the FAA began granting these waivers for various scenarios, like search-and-rescue missions, insurance inspections, or police investigations. With Flock’s help, police departments can get waivers approved in just two weeks. The company’s private-sector customers generally have to wait 60 to 90 days.

For years, industries with a stake in drones—whether e-commerce companies promising doorstep delivery or medical transporters racing to move organs—have pushed the government to scrap the waiver system in favor of easier approval to fly beyond visual line of sight. In June, President Donald Trump echoed that call in an executive order for “American drone dominance,” and in August, the FAA released a new proposed rule.

The proposed rule lays out some broad categories for which drone operators are permitted to fly drones beyond their line of sight, including package delivery, agriculture, aerial surveying, and civic interest, which includes policing. Getting approval to fly beyond sight would become easier for operators from these categories, and would generally expand their range. 

Drone companies, and amateur drone pilots, see it as a win. But it’s a win that comes at the expense of privacy for the rest of us, says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy and Technology Project who served on the rule-making commission for the FAA.

“The FAA is about to open up the skies enormously, to a lot more [beyond visual line of sight] flights without any privacy protections,” he says. The ACLU has said that fleets of drones enable persistent surveillance, including of protests and gatherings, and impinge on the public’s expectations of privacy.

If you’ve got something to say about the FAA’s proposed rule, you can leave a public comment (comments are accepted until October 6). Trump’s executive order directs the FAA to release the final rule by spring 2026.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Scientists can see Earth’s permafrost thawing from space

Something is rotten in the city of Nunapitchuk. In recent years, a crack has formed in the middle of a house. Sewage has leached into the earth. Soil has eroded around buildings, leaving them perched atop precarious lumps of dirt. There are eternal puddles. And mold. The ground can feel squishy, sodden. 

This small town in western Alaska is experiencing a sometimes overlooked consequence of climate change: thawing permafrost. And Nunapitchuk is far from the only Arctic town to find itself in such a predicament.

Permafrost, which lies beneath about 15% of the land in the Northern Hemisphere, is defined as ground that has remained frozen for at least two years. Historically, much of the world’s permafrost has remained solid and stable for far longer, allowing people to build whole towns atop it. But as the planet warms, a process that is happening more rapidly near the poles than at more temperate latitudes, permafrost is thawing and causing a host of infrastructural and environmental problems.

Now scientists think they may be able to use satellite data to delve deep beneath the ground’s surface and get a better understanding of how the permafrost thaws, and which areas might be most severely affected because they had more ice to start with. Clues from the short-term behavior of those especially icy areas, seen from space, could portend future problems.

Using information gathered both from space and on the ground, they are working with affected communities to anticipate whether a house’s foundation will crack—and whether it is worth mending that crack or better to start over in a new house on a stable hilltop. These scientists’ permafrost predictions are already helping communities like Nunapitchuk make those tough calls.

But it’s not just civilian homes that are at risk. One of the top US intelligence agencies, the National Geospatial-Intelligence Agency (NGA), is also interested in understanding permafrost better. That’s because the same problems that plague civilians in the high north also plague military infrastructure, at home and abroad. The NGA is, essentially, an organization full of space spies—people who analyze data from surveillance satellites and make sense of it for the country’s national security apparatus. 

Understanding the potential instabilities of the Alaskan military infrastructure—which includes radar stations that watch for intercontinental ballistic missiles, as well as military bases and National Guard posts—is key to keeping those facilities in good working order and planning for their strengthened future. Understanding the potential permafrost weaknesses that could affect the infrastructure of countries like Russia and China, meanwhile, affords what insiders might call “situational awareness” about competitors. 

The work to understand this thawing will only become more relevant, for civilians and their governments alike, as the world continues to warm. 

The ground beneath

If you live much below the Arctic Circle, you probably don’t think a lot about permafrost. But it affects you no matter where you call home.

In addition to the infrastructural consequences for towns like Nunapitchuk, permafrost holds sequestered carbon—twice as much as is currently in the atmosphere. As the ground thaws, long-dormant microbes begin digesting that once-frozen organic matter, releasing greenhouse gases. That release can cause a feedback loop: Warmer temperatures thaw permafrost, which releases greenhouse gases, which warm the air further, which then—you get it.

The microbes themselves, along with previously trapped heavy metals, are also set dangerously free.

For many years, researchers’ primary options for understanding some of these freeze-thaw changes involved hands-on, on-the-ground surveys. But in the late 2000s, Kevin Schaefer, currently a senior scientist at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, started to investigate a less labor-intensive idea: using radar systems aboard satellites to survey the ground beneath. 

This idea implanted itself in his brain in 2009, when he traveled to a place called Toolik Lake, southwest of the oilfields of Prudhoe Bay in Alaska. One day, after hours of drilling sample cores out of the ground to study permafrost, he was relaxing in the Quonset hut, chatting with colleagues. They began to discuss how space-based radar could potentially detect how the land sinks and heaves back up as temperatures change.

Huh, he thought. Yes, radar probably could do that.

Scientists call the ground right above permafrost the active layer. The water in this layer of soil contracts and expands with the seasons: during the summer, the ice suffusing the soil melts and the resulting decrease in volume causes the ground to dip. During the winter, the water freezes and expands, bulking the active layer back up. Radar can help measure that height difference, which is usually around one to five centimeters. 

Schaefer realized that he could use radar to measure the ground elevation at the start and end of the thaw. The electromagnetic waves that bounce back at those two times would have traveled slightly different distances. That difference would reveal the tiny shift in elevation over the seasons and would allow him to estimate how much water had thawed and refrozen in the active layer and how far below the surface the thaw had extended.
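
To make the measurement concrete, here is a minimal sketch of the standard interferometric conversion from a radar phase difference to vertical ground motion. The wavelength, incidence angle, and sign convention are illustrative assumptions, not details of Schaefer's actual processing:

    import numpy as np

    # Assumed C-band wavelength (~5.55 cm), typical of Sentinel-1-class radars
    WAVELENGTH_M = 0.0555

    def los_displacement(phase_diff_rad, wavelength_m=WAVELENGTH_M):
        # Unwrapped interferometric phase maps to line-of-sight motion;
        # sign conventions vary by processor (here, positive = subsidence).
        return phase_diff_rad * wavelength_m / (4 * np.pi)

    def vertical_displacement(d_los_m, incidence_deg):
        # Project line-of-sight motion to vertical, assuming the ground
        # moves purely up and down, as seasonal heave and subsidence do.
        return d_los_m / np.cos(np.radians(incidence_deg))

    # Example: a 2.3-radian phase shift seen at a 35-degree incidence angle
    d_vert = vertical_displacement(los_displacement(2.3), 35.0)
    print(f"{d_vert * 100:.1f} cm of seasonal subsidence")  # ~1.2 cm

The example result lands within the one-to-five-centimeter seasonal range described above.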

With radar, Schaefer realized, scientists could cover a lot more literal ground, with less effort and at lower cost.

“It took us two years to figure out how to write a paper on it,” he says; no one had ever made those measurements before. He and colleagues presented the idea at the 2010 meeting of the American Geophysical Union and published a paper in 2012 detailing the method, using it to estimate the thickness of the active layer on Alaska’s North Slope.

When they did, they helped start a new subfield that grew as large-scale data sets started to become available around 5 to 10 years ago, says Roger Michaelides, a geophysicist at Washington University in St. Louis and a collaborator of Schaefer’s. Researchers’ efforts were aided by the growth in space radar systems and smaller, cheaper satellites. 

With the availability of global data sets (sometimes for free, from government-run satellites like the European Space Agency’s Sentinel) and targeted observations from commercial companies like Iceye, permafrost studies are moving from bespoke regional analyses to more automated, large-scale monitoring and prediction.

The remote view

Simon Zwieback, a geospatial and environmental expert at the University of Alaska Fairbanks, sees the consequences of thawing permafrost firsthand every day. His office overlooks a university parking lot, a corner of which is fenced off to keep cars and pedestrians from falling into a brand-new sinkhole. That area of asphalt had been slowly sagging for more than a year, but over a week or two this spring, it finally started to collapse inward. 

Kevin Schaefer stands on top of a melting layer of ice near the Alaskan pipeline on the North Slope of Alaska.
COURTESY OF KEVIN SCHAEFER

The new remote research methods are a large-scale version of Zwieback taking in the view from his window. Researchers look at the ground and measure how its height changes as ice thaws and refreezes. The approach can cover wide swaths of land, but it involves making assumptions about what’s going on below the surface—namely, how much ice suffuses the soil in the active layer and permafrost. A thick thawed layer with relatively little ice can produce the same surface signal as a thinner layer holding more ice. And it’s important to differentiate the two, since more ice in the permafrost means more potential instability.

To check that they’re on the right track, scientists have historically had to go out into the field. But a few years ago, Zwieback started to explore a way to make better and deeper estimates of ice content using the available remote sensing data. Finding a way to make those kinds of measurements on a large scale was more than an academic exercise: Areas of what he calls “excess ice” are most liable to cause instability at the surface. “In order to plan in these environments, we really need to know how much ice there is, or where those locations are that are rich in ice,” he says.

Zwieback, who did his undergraduate and graduate studies in Switzerland and Austria, wasn’t always so interested in permafrost, or so deeply affected by it. But in 2014, when he was a doctoral student in environmental engineering, he joined an environmental field campaign in Siberia, at the Lena River Delta, which resembles a gigantic piece of coral fanning out into the Arctic Ocean. Zwieback was near a town called Tiksi, one of the world’s northernmost settlements. It’s a military outpost and starting point for expeditions to the North Pole, featuring an abandoned plane near the ocean. Its Soviet-era concrete buildings sometimes bring it to the front page of the r/UrbanHell subreddit. 

Here, Zwieback saw part of the coastline collapse, exposing almost pure ice. It looked like a subterranean glacier, but it was permafrost. “That really had an indelible impact on me,” he says. 

Later, as a doctoral student in Zurich and postdoc in Canada, he used his radar skills to understand the rapid changes that the activity of permafrost impressed upon the landscape. 

And now, with his job in Fairbanks and his ideas about the use of radar sensing, he has done work funded by the NGA, which has an open Arctic data portal. 

In his Arctic research, Zwieback started with the approach underlying most radar permafrost studies: looking at the ground’s seasonal subsidence and heave. “But that’s something that happens very close to the surface,” he says. “It doesn’t really tell us about these long-term destabilizing effects,” he adds.

In warmer summers, he thought, subtle clues would emerge that could indicate how much ice is buried deeper down.

For example, he expected those warmer-than-average periods to exaggerate the amount of change seen on the surface, making it easier to tell which areas are ice-rich. Land that was particularly dense with ice would dip more than it “should”—a precursor of bigger dips to come.

The first step, then, was to measure subsidence directly, as usual. But from there, Zwieback developed an algorithm to ingest data about the subsidence over time—as measured by radar—and other environmental information, like the temperatures at each measurement. He then created a digital model of the land that allowed him to adjust the simulated amount of ground ice and determine when it matched the subsidence seen in the real world. With that, researchers could infer the amount of ice beneath.
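
In spirit, that inversion is a model-fitting loop: simulate subsidence for candidate ice contents and keep whichever best reproduces the radar time series. The sketch below is a toy version with an invented linear forward model; Zwieback's actual physics of heat flow and thaw is far richer:

    import numpy as np

    def simulate_subsidence(ice_fraction, thaw_depth_m):
        # Toy forward model: subsidence grows with the excess ice melted
        # in the thawed column (ice loses ~9% of its volume as it melts).
        return 0.09 * ice_fraction * thaw_depth_m

    def invert_ice_content(observed_m, thaw_depth_m):
        # Grid-search the ground-ice fraction whose simulated subsidence
        # best matches the radar observations, in a least-squares sense.
        candidates = np.linspace(0.0, 1.0, 101)
        misfit = [np.sum((simulate_subsidence(f, thaw_depth_m) - observed_m) ** 2)
                  for f in candidates]
        return candidates[int(np.argmin(misfit))]

    # Hypothetical warm summer: thaw deepens while radar watches the surface sink
    thaw_depth = np.array([0.1, 0.3, 0.5, 0.7])        # meters, from a thaw model
    observed = np.array([0.004, 0.011, 0.018, 0.025])  # meters of subsidence
    print(f"inferred ice fraction: {invert_ice_content(observed, thaw_depth):.2f}")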

Next, he made maps of that ice that could potentially be useful to engineers—whether they were planning a new subdivision or, as his funders might be, keeping watch on a military airfield.

“What was new in my work was to look at these much shorter periods and use them to understand specific aspects of this whole system, and specifically how much ice there is deep down,” Zwieback says. 

The NGA, which has also funded Schaefer’s work, did not respond to an initial request for comment but did later provide feedback for fact-checking. It removed an article on its website about Zwieback’s grant and its application to agency interests around the time that the current presidential administration began to ban mention of climate change in federal research. But the thawing earth is of keen concern. 

To start, the US has significant military infrastructure in Alaska: It’s home to six military bases and 49 National Guard posts, as well as 21 missile-detecting radar sites. Most are vulnerable to thaw now or in the near future, given that 85% of the state is on permafrost. 

Beyond American borders, the broader north is in a state of tension. Russia’s relations with Northern Europe are icy. Its invasion of Ukraine has left those countries fearing that they too could be invaded, prompting Sweden and Finland, for instance, to join NATO. The US has threatened takeovers of Greenland and Canada. And China—which has shipping and resource ambitions for the region—is jockeying to surpass the US as the premier superpower. 

Permafrost plays a role in the situation. “As knowledge has expanded, so has the understanding that thawing permafrost can affect things NGA cares about, including the stability of infrastructure in Russia and China,” read the NGA article. Permafrost covers 60% of Russia, and thaws have affected more than 40% of buildings in northern Russia already, according to statements from the country’s minister of natural resources in 2021. Experts say critical infrastructure like roads and pipelines is at risk, along with military installations. That could weaken both Russia’s strategic position and the security of its residents. In China, meanwhile, according to a report from the Council on Strategic Risks, key infrastructure such as the Qinghai-Tibet Railway, “which allows Beijing to more quickly move military personnel near contested areas of the Indian border,” is susceptible to ground thaw—as are oil and gas pipelines linking Russia and China.

In the field

Any permafrost analysis that relies on data from space requires verification on Earth. The hope is that remote methods will become reliable enough to use on their own, but while they’re being developed, researchers must still get their hands muddy with more straightforward, longer-tested physical methods. Some use a network called Circumpolar Active Layer Monitoring, which has existed since 1991, incorporating active-layer data from hundreds of measurement sites across the Northern Hemisphere.

Sometimes, that data comes from people physically probing an area; other sites use tubes permanently inserted into the ground, filled with a liquid that indicates freezing; still others use underground cables that measure soil temperature. Some researchers, like Schaefer, lug ground-penetrating radar systems around the tundra. He’s taken his system to around 50 sites and made more than 200,000 measurements of the active layer.

The field-ready ground-penetrating radar comes in a big box—the size of a steamer trunk—that emits radio pulses. These pulses bounce off the bottom of the active layer, or the top of the permafrost. In this case, the timing of that reflection reveals how thick the active layer is. With handles designed for humans, Schaefer’s team drags this box around the Arctic’s boggier areas. 
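
The conversion from echo timing to thickness is simple geometry: the pulse travels down and back at the speed radio waves move through wet soil. A rough sketch, with the soil's relative permittivity as an assumed value that in practice has to be calibrated against moisture conditions:

    C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
    EPS_R = 16           # assumed relative permittivity of thawed, wet soil

    def active_layer_thickness(two_way_time_s, eps_r=EPS_R):
        # Depth to the reflector (the frozen permafrost table):
        # depth = wave speed in soil * two-way travel time / 2.
        velocity = C_VACUUM / eps_r ** 0.5
        return velocity * two_way_time_s / 2

    # A ~13-nanosecond echo implies roughly half a meter of active layer
    print(f"{active_layer_thickness(13e-9):.2f} m")  # 0.49 m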

The box floats. “I do not,” he says. He has vivid memories of tromping through wetlands, his legs pushing straight down through the muck, his body sinking up to his hips.

Andy Parsekian and Kevin Schaefer haul a ground penetrating radar unit through the tundra near Utqiagvik.
COURTESY OF KEVIN SCHAEFER

Zwieback also needs to verify what he infers from his space data. And so in 2022, he went to the Toolik Field station, a National Science Foundation–funded ecology research facility along the Dalton Highway and adjacent to Schaefer’s Toolik Lake. This road, which goes from Fairbanks up to the Arctic Ocean, is colloquially called the Haul Road; it was made famous in the TV show Ice Road Truckers. From this access point, Zwieback’s team needed to get deep samples of soil whose ice content could be analyzed in the lab.

Every day, two teams would drive along the Dalton Highway to get close to their field sites. Slamming their car doors, they would unload and hop on snow machines to travel the final distance. Often they would see musk oxen, looking like bison that never cut their hair. The grizzlies were also interested in these oxen, and in the nearby caribou. 

At the sites they could reach, they took out a corer, a long, tubular piece of equipment driven by a gas engine, meant to drill deep into the ground. Zwieback or a teammate pressed it into the earth. The barrel’s two blades rotated, slicing a cylinder about five feet down to ensure that their samples went deep enough to generate data that could be compared with the measurements made from space. Then they pulled up and extracted the cylinder, a sausage of earth and ice.

All day every day for a week, they gathered cores that matched up with the pixels in radar images taken from space. In those cores, the ice was apparent to the eye. But Zwieback didn’t want anecdata. “We want to get a number,” he says.

So he and his team would pack their soil cylinders back to the lab. There they sliced them into segments and measured their volume, in both their frozen and their thawed form, to see how well the measured ice content matched estimates from the space-based algorithm. 

The initial validation, which took months, demonstrated the value of using satellites for permafrost work. The ice profiles that Zwieback’s algorithm inferred from the satellite data matched measurements in the lab down to about 1.1 feet, and farther in a warm year, with some uncertainty near the surface and deeper into the permafrost. 

Fieldwork is expensive: flying in on a helicopter, driving, then switching to a snowmobile to sample a small area by hand costs tens of thousands of dollars, and the analysis still has to continue back home. Running the algorithm on free, publicly available satellite data cost the team just a few hundred dollars.

Michaelides, who is familiar with Zwieback’s work, agrees that estimating excess ice content is key to making infrastructural decisions, and that historical methods of sussing it out have been costly in all senses. Zwieback’s method of using late-summer clues to infer what’s going on at that depth “is a very exciting idea,” he says, and the results “demonstrate that there is considerable promise for this approach.” 

He notes, though, that using space-based radar to understand the thawing ground is complicated: Ground ice content, soil moisture, and vegetation can differ even within a single pixel that a satellite can pick out. “To be clear, this limitation is not unique to Simon’s work,” Michaelides says; it affects all space-radar methods. There is also excess ice below even where Zwieback’s algorithm can probe—something the labor-intensive on-ground methods can pick up that still can’t be seen from space. 

Mapping out the future

After Zwieback did his fieldwork, NGA decided to do its own. The agency’s attempt to independently validate his work—in Prudhoe Bay, Utqiagvik, and Fairbanks—was part of a project it called Frostbyte. 

Its partners in that project—the Army’s Cold Regions Research and Engineering Laboratory and Los Alamos National Laboratory—declined requests for interviews. As far as Zwieback knows, they’re still analyzing data.

But the intelligence community isn’t the only group interested in research like Zwieback’s. He also works with Arctic residents, reaching out to rural Alaskan communities where people are trying to make decisions about whether to relocate or where to build safely. “They typically can’t afford to do expensive coring,” he says. “So the idea is to make these data available to them.” 

Zwieback and his team haul their gear out to gather data from drilled core samples, a process which can be arduous and costly.
ANDREW JOHNSON

Schaefer is also trying to bridge the gap between his science and the people it affects. Through a company called Weather Stream, he is helping communities identify risks to infrastructure before anything collapses, so they can take preventative action.

Making such connections has always been a key concern for Erin Trochim, a geospatial scientist at the University of Alaska Fairbanks. As a researcher who works not just on permafrost but also on policy, she’s seen radar science progress massively in recent years—without commensurate advances on the ground.

For instance, it’s still hard for residents in her town of Fairbanks—or anywhere—to know if there’s permafrost on their property at all, unless they’re willing to do expensive drilling. She’s encountered this problem, still unsolved, on property she owns. And if an expert can’t figure it out, non-experts hardly stand a chance. “It’s just frustrating when a lot of this information that we know from the science side, and [that’s] trickled through the engineering side, hasn’t really translated into the on-the-ground construction,” she says. 

There is a group, though, trying to turn that trickle into a flood: Permafrost Pathways, a venture that launched with a $41 million grant through the TED Audacious Project. In concert with affected communities, including Nunapitchuk, it is building a data-gathering network on the ground, and combining information from that network with satellite data and local knowledge to help understand permafrost thaw and develop adaptation strategies. 

“I think about it often as if you got a diagnosis of a disease,” says Sue Natali, the head of the project. “It’s terrible, but it’s also really great, because when you know what your problem is and what you’re dealing with, it’s only then that you can actually make a plan to address it.” 

And the communities Permafrost Pathways works with are making plans. Nunapitchuk has decided to relocate, and the town and the research group have collaboratively surveyed the proposed new location: a higher spot on hardpacked sand. Permafrost Pathways scientists were able to help validate the stability of the new site—and prove to policymakers that this stability would extend into the future. 

Radar helps with that in part, Natali says, because unlike other satellite detectors, it penetrates clouds. “In Alaska, it’s extremely cloudy,” she says. “So other data sets have been very, very challenging. Sometimes we get one image per year.”

And so radar data, and algorithms like Zwieback’s that help scientists and communities make sense of that data, dig up deeper insight into what’s going on beneath northerners’ feet—and how to step forward on firmer ground. 

Sarah Scoles is a freelance science journalist based in southern Colorado and the author, most recently, of the book Countdown: The Blinding Future of Nuclear Weapons.