Google CEO Addresses Concerns Over AI’s Impact On Search Traffic via @sejournal, @MattGSouthern

In a recent interview, Google CEO Sundar Pichai discussed the company’s implementation of AI in search results and addressed concerns from publishers and website owners about its potential impact on web traffic.

Background On AI In Google Search

Google has been gradually incorporating AI-generated overviews and summaries into its search results.

These AI overviews aim to provide users with quick answers and context upfront on the search page. However, publishers fear this could dramatically reduce website click-through rates.

Pichai Claims AI Drives Traffic

Despite concerns, Pichai maintained an optimistic outlook on how AI will affect the web ecosystem in the long run.

He told The Verge:

“I remain optimistic. Empirically, what we are seeing throughout these years is that human curiosity is boundless.”

The Google CEO claimed that the company’s internal data shows increased user engagement with AI overviews, including higher click-through rates on links within these previews compared to regular search results.

Pichai stated:

“When you give the context, it also exposes people to various branching off, jumping off, points, and so they engage more. So, actually, this is what drives growth over time.”

Unfortunately, Pichai didn’t provide specific metrics to support this assertion.

Balancing User Experience & Publisher Interests

Pichai claims that Google is attempting to balance meeting user expectations with sending traffic to websites, stating:

“I look at our journey, even the last year through the Search Generative Experience, and I constantly found us prioritizing approaches that would send more traffic while meeting user expectations.

… what’s positively surprising us is that people engage more, and that will lead to more growth over time for high-quality content.”

When pressed on anecdotal evidence of some websites losing significant traffic, Pichai cautioned against drawing broad conclusions from individual cases.

He argued that Google has provided more traffic to the web ecosystem over the past decade.

Pichai believes the sites losing traffic are the “aggregators in the middle.”

He stated:

“From our standpoint, when I look historically, even over the past decade, we have provided more traffic to the ecosystem, and we’ve driven that growth.

Ironically, there are times when we have made changes to actually send more traffic to the smaller sites. Some of those sites that complain a lot are the aggregators in the middle.

So should the traffic go to the restaurant that has created a website with their menus and stuff or people writing about these restaurants? These are deep questions. I’m not saying there’s a right answer.”

Takeaways For Website Owners & SEO Professionals

For those in the SEO community, Pichai’s comments offer insight into Google’s strategy and perspective but should be viewed with a degree of skepticism.

While the CEO painted a rosy picture of AI’s impact, he offered no concrete data to support his claims. Website owners must monitor their analytics closely to assess the real-world effects of AI overviews on their traffic.

As Google continues to roll out AI features in search, the dust is far from settled on this issue.

Pichai’s optimism aside, the true impact of AI on the web ecosystem remains to be seen. For now, publishers and SEOs must stay vigilant, adaptable, and vocal about their concerns in this rapidly shifting landscape.


Featured Image: Muhammad Alimaki/Shutterstock

38% Of Webpages From 2013 Have Vanished, Pew Study Finds via @sejournal, @MattGSouthern

A new study by Pew Research Center reveals the fleeting nature of online information: 38% of webpages from 2013 are no longer accessible a decade later.

The analysis, conducted in October, examined broken links on government and news websites and in the “References” section of Wikipedia pages.

The findings reveal that:

  • 23% of news webpages and 21% of government webpages contain at least one broken link
  • Local-level government webpages, particularly those belonging to city governments, are especially prone to broken links
  • 54% of Wikipedia pages have at least one link in their “References” section pointing to a non-existent page

Social Media Not Immune To Content Disappearance

To investigate the impact of digital decay on social media, Pew Research collected a real-time sample of tweets on X and monitored them for three months.

The study discovered that “nearly one-in-five tweets are no longer publicly visible on the site just months after being posted.”

In 60% of these cases, the original posting account was made private, suspended, or deleted.

In the remaining 40%, the account holder deleted the tweet, but the account still existed.

Certain types of tweets are more likely to disappear than others, with more than 40% of tweets written in Turkish or Arabic no longer visible within three months of posting.

Additionally, tweets from accounts with default profile settings are particularly susceptible to vanishing from public view.

Defining “Inaccessible” Links & Webpages

For the purpose of this report, Pew Research Center focused on pages that no longer exist when defining inaccessibility.

Other definitions, such as changed content or accessibility issues for visually impaired users, were beyond the scope of the research.

The study used a conservative approach, counting pages as inaccessible if they returned one of nine error codes indicating that the page or its host server no longer exists or has become nonfunctional.

Why SEJ Cares

Digital decay raises important questions about the preservation and accessibility of online content for future generations.

Pew Research Center’s study sheds light on the extent of this problem across various online spaces, from government and news websites to social media platforms.

The high rate of link rot and disappearing webpages has implications for anyone who relies on the internet as a reliable source of information.

It poses challenges for citing online sources, as the original content may no longer be accessible in the future.

What This Means For SEO Professionals

This study underscores the need to regularly audit and update old content, as well as consistently monitor broken links and resolve them promptly.
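A broken-link audit like this can be automated with a simple status-code check. Below is a minimal sketch using only the Python standard library; the User-Agent string and the “any 4xx/5xx or network failure counts as inaccessible” rule are illustrative assumptions, not Pew’s exact nine-error-code methodology:

```python
# Minimal broken-link audit sketch (illustrative, not Pew's methodology).
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify(status):
    """Map an HTTP status code to 'ok' or 'inaccessible' (any 4xx/5xx)."""
    return "ok" if status < 400 else "inaccessible"

def check_links(urls, timeout=10):
    """Return {url: 'ok' | 'inaccessible (<reason>)'} for each URL."""
    results = {}
    for url in urls:
        # HEAD avoids downloading the full page body during the audit.
        req = Request(url, method="HEAD", headers={"User-Agent": "link-audit/0.1"})
        try:
            with urlopen(req, timeout=timeout) as resp:
                results[url] = classify(resp.status)
        except HTTPError as e:   # server answered with an error code (404, 410, 500, ...)
            results[url] = f"inaccessible ({e.code})"
        except URLError as e:    # DNS failure, refused connection, timeout, etc.
            results[url] = f"inaccessible ({e.reason})"
    return results
```

In practice, an audit should also retry transient failures and handle servers that reject HEAD requests before concluding a link is dead.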

SEO professionals should also consider the impact of digital decay on backlink profiles.

As external links to a website become inaccessible, it can affect the site’s link equity and authority in the eyes of search engines.

Monitoring and diversifying backlink sources can help mitigate the risk of losing valuable links to digital decay.

Lastly, the study’s findings on social media content underscore that SEO efforts should focus on driving users back to more stable, owned channels like websites and email lists.


Featured Image: apghedia/Shutterstock

Mastering The Content Maze: Strategies For Multi-Touchpoint Success via @sejournal, @hethr_campbell

As digital platforms multiply and user engagement becomes increasingly fragmented, understanding how to effectively distribute your content across various touchpoints is more crucial than ever.

So how do you tailor your strategy to meet users wherever they are in their journey? 

How do you create quality content that resonates and connects at every stage? 

Join us live on May 29, as we show you how to successfully navigate the ever-evolving landscape of content strategy. 

In this insightful session with iQuanti, we’ll explore innovative strategies to map out and meet the diverse needs of your audience at multiple touchpoints. 

Key Takeaways From This Webinar

  • Today’s users interact with content in a myriad of ways, spanning across a variety of channels. Whether they’re browsing on their smartphones or exploring on their laptops, understanding these user behaviors is essential for crafting content that captures attention and drives engagement.
  • Mastering the latest methodologies for identifying and addressing content gaps helps keep your content strategy ahead of the curve. Learn how to uncover areas where your content may be lacking or where new opportunities lie. This proactive approach ensures that your content remains relevant and resonates with your audience’s evolving needs.
  • Discover the ins and outs of digital shelf space and how to pivot your content strategy. By strategically positioning your content where your audience is most likely to discover it, you can maximize your digital shelf space and drive better results for your brand.

Wayne Cichanski, Vice President of Search and Site Experience at iQuanti, will walk you through ways to unlock the full potential of your content strategy and outperform your competition. 

Whether you’re a seasoned professional or just starting out, this webinar will provide you with the tools and strategies you need to elevate your content marketing this year.

Don’t miss out on these valuable expert insights. Sign up now and learn how to master the content maze. 

At the end of the presentation, you’ll get a chance to ask Wayne your most pressing content questions in our live Q&A. 

And if you can’t make it to the live event, register here and we’ll send you a recording following the webinar.

Google’s AI Vision Driven By Panic, Not User Needs: Former Product Manager via @sejournal, @MattGSouthern

A 16-year Google veteran is raising concerns about the company’s current focus on AI, labeling it a “panic reaction” driven by fear of falling behind competitors.

Scott Jenson, who left Google last month, took to LinkedIn to critique the tech giant’s AI projects as “poorly motivated and driven by this mindless panic that as long as it had ‘AI’ in it, it would be great.”

Veteran’s Criticism Of Google’s AI Focus

Jenson stated that Google’s vision of creating an AI assistant for its ecosystem is “pure catnip” fueled by the fear of letting someone else get there first.

He draws a parallel to the ill-fated Google+ product, which he calls a “similar hysterical reaction” to Facebook’s rise.

Jenson wrote:

“This exact thing happened 13 years ago with Google+ (I was there for that fiasco as well). That was a similar hysterical reaction but to Facebook.”

Lack Of User-Driven Motivation

Jenson argues that Google’s strategy lacks motivation driven by genuine user needs, a sentiment echoed by a recent Gizmodo article that described this year’s Google I/O developer conference as “the most boring ever.”

The article, which Jenson linked to in his post, criticized Google for failing to clarify how Gemini’s new AI technology would integrate into its existing products and enhance the user experience.

See Jenson’s full post below:

Can You Turn Off Google’s AI Overviews?

One prime example of Google’s AI overreach is the AI overviews feature, which generates summaries to directly answer search queries by ingesting information from across the web.

This controversial move has sparked legal battles, with publishers accusing Google of violating intellectual property rights and unfairly profiting from their content without permission.

Turning Off AI Overviews

While Google doesn’t provide an official setting to turn off AI overviews, a viral article from Tom’s Hardware suggests using browser extensions.

Alternatively, you can configure Chrome to go directly to web search results, bypassing the AI-generated overviews.

Here are the steps:

  • Open Chrome settings by clicking the three dots in the top-right corner and selecting “Settings” from the menu.
  • In the Settings window, click on the “Search Engine” tab on the left side.
  • Under the “Search Engine” section, click “Manage search engines and site search.”
  • Scroll down to the “Site search” area and click “Add” to create a new entry.

In the new entry, enter the following details:

  • Name: Google (Web)
  • Shortcut: www.google.com
  • URL: {google:baseURL}/search?udm=14&q=%s
  • Click “Add”

Screenshot from: chrome://settings/searchEngines, May 2024.

Lastly, click the three dots next to the new “Google (Web)” entry and select “Make default.”

Screenshot from: chrome://settings/searchEngines, May 2024.

After following these steps, Chrome will now default to showing regular web search results instead of the AI overview summaries when you perform searches from the address bar.
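For illustration, here is a rough sketch of the substitution Chrome performs with that template: `{google:baseURL}` becomes the search engine’s base URL and `%s` becomes the URL-encoded query (the exact encoding rules are Chrome’s; this Python approximation uses `quote_plus`):

```python
from urllib.parse import quote_plus

def expand_template(template, query, base_url="https://www.google.com"):
    """Approximate Chrome's site-search template expansion."""
    url = template.replace("{google:baseURL}", base_url)
    return url.replace("%s", quote_plus(query))

print(expand_template("{google:baseURL}/search?udm=14&q=%s", "ai overviews"))
# https://www.google.com/search?udm=14&q=ai+overviews
```

The `udm=14` parameter is what requests the web-only results view.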

Tensions Over Data Usage

The controversy surrounding AI overviews creates tension between tech companies and content creators over using online data for AI training.

Publishers argue that Google’s AI summaries could siphon website traffic, threatening independent creators’ revenue streams, which rely on search referrals.

The debate reflects the need for updated frameworks to balance innovation and fair compensation for content creators, maintaining a sustainable open internet ecosystem.


FAQ

What concerns has Scott Jenson raised about Google’s AI focus?

Scott Jenson, a former Google product manager, has expressed concerns that Google’s current AI focus is more of a “panic reaction” to stay ahead of competitors rather than addressing user needs. He critiques Google’s AI initiatives as poorly motivated and driven by a fear of letting others get ahead.

How does Scott Jenson compare Google’s AI strategy to past projects?

Jenson compares Google’s current AI focus to the company’s response to Facebook years ago with Google+. He describes both as “hysterical reactions” driven by competition, which, in the case of Google+, resulted in a product that failed to meet its objectives.

Why are content creators concerned about Google’s AI overviews?

Content creators worry that Google’s AI overviews, which generate summaries by ingesting web content, could reduce site traffic. They argue that this practice is unfair as it uses their content without permission and impacts their revenue streams that rely on search referrals.

How can users turn off Google’s AI overviews in Chrome?

Although no official setting exists to disable AI overviews, users can work around them by adding a custom search engine entry in Chrome that returns web-only results.

Here are the steps:

  • Open Chrome settings by clicking the three dots in the top-right corner and selecting “Settings” from the menu.
  • In the Settings window, click on the “Search Engine” tab on the left side.
  • Under the “Search Engine” section, click “Manage search engines and site search.”
  • Scroll down to the “Site search” area and click “Add” to create a new entry.

In the new entry, enter the following details:

    • Name: Google (Web)
    • Shortcut: www.google.com
    • URL: {google:baseURL}/search?udm=14&q=%s
    • Click “Add”

This will force Chrome to skip AI-generated overviews and show the classic list of web links.


Featured Image: Sira Anamwong/Shutterstock

Google Helpfulness Signals Might Change – Why It’s Not Enough via @sejournal, @martinibuster

Google’s John Mueller indicated the possibility of changes to sitewide helpful content signals so that new pages may be allowed to rank. But there is reason to believe that, even if that change goes through, it may not be enough to help.

Helpful Content Signals

Google’s Helpful Content Signals (aka the Helpful Content Update, or HCU) was originally a site-wide signal when launched in 2022. That meant an entire site could be classified as unhelpful and become unable to rank, regardless of whether some of its pages were helpful.

Recently the signals associated with the Helpful Content System were absorbed into Google’s core ranking algorithm, generally changing them to page-level signals, with a caveat.

Google’s documentation advises:

“Our core ranking systems are primarily designed to work on the page level, using a variety of signals and systems to understand the helpfulness of individual pages. We do have some site-wide signals that are also considered.”

There are two important takeaways:

  1. There is no longer a single system for helpfulness. It’s now a collection of signals within the core ranking algorithm.
  2. The signals are page-level but there are site-wide signals that can impact the overall rankings.

Some publishers have tweeted that the site-wide effect is preventing new helpful pages from ranking, and John Mueller offered some hope.

If Google follows through with loosening the helpfulness signals so that individual pages are able to rank, there is reason to believe the change may not help many of the websites that publishers and SEOs believe are suffering from sitewide helpfulness signals.

Publishers Express Frustration With Sitewide Algorithm Effects

Someone on X (formerly Twitter) shared:

“It’s frustrating when new content is also being penalized without having a chance to gather positive user signals. I publish something it goes straight to page 4 and stays there, regardless of if there are any articles out on the location.”

Someone else brought up the point that if helpfulness signals are page-level, then in theory the better (helpful) pages should begin ranking, yet that’s not happening.

John Mueller Offers Hope

Google’s John Mueller responded to a query about sitewide helpfulness signals suppressing the rankings of new pages created to be helpful and later indicated there may be a change to the way helpfulness signals are applied sitewide.

Mueller tweeted:

“Yes, and I imagine for most sites strongly affected, the effects will be site-wide for the time being, and it will take until the next update to see similar strong effects (assuming the new state of the site is significantly better than before).”

Possible Change To Helpfulness Signals

Mueller followed up his tweet by saying that the search ranking team is working on a way to surface high-quality pages from sites that carry strong negative sitewide signals indicative of unhelpful content, which would provide relief to sites burdened by those signals.

He tweeted:

“I can’t make any promises, but the team working on this is explicitly evaluating how sites can / will improve in Search for the next update. It would be great to show more users the content that folks have worked hard on, and where sites have taken helpfulness to heart.”

Why Changes To Sitewide Signal May Not Be Enough

Google Search Console tells publishers when they’ve received a manual action, but it doesn’t tell them when their sites lost rankings due to algorithmic issues like helpfulness signals. Publishers and SEOs don’t and cannot “know” whether their sites are affected by helpfulness signals. The core ranking algorithm alone contains hundreds of signals, so it’s important to keep an open mind about what may be affecting search visibility after an update.

Here are five examples of changes during a broad core update that can affect rankings:

  1. The way a query is understood could have changed which affects what kinds of sites are able to rank
  2. Quality signals changed
  3. Rankings may change to respond to search trends
  4. A site may lose rankings because a competitor improved their site
  5. Infrastructure may have changed to accommodate more AI on the back end

A lot of things can influence rankings before, during, and after a core algorithm update. If rankings don’t improve, it may be time to consider that a knowledge gap is standing in the way of a solution.

Examples Of Getting It Wrong

For example, a publisher who recently lost rankings correlated the date of their rankings collapse with the announcement of the Site Reputation Abuse update. It’s a reasonable assumption that if rankings drop on the same date as an update, the update is the cause.

Here’s the tweet:

“@searchliaison feeling a bit lost here. Judging by the timing, we got hit by the Reputation Abuse algorithm. We don’t do coupons, or sell links, or anything else.

Very, very confused. We’ve been stable through all this and continue to re-work/remove older content that is poor.”

They posted a screenshot of the rankings collapse.

Screenshot Showing Search Visibility Collapse

SearchLiaison responded to that tweet by noting that Google is currently only doing manual actions. It’s reasonable to assume that an update correlating with a ranking issue is related to it.

But one cannot ever be 100% sure about the cause of a rankings drop, especially if there’s a knowledge gap about other possible reasons (like the five I listed above). This bears repeating: one cannot be certain that a specific signal is the reason for a rankings drop.

In another tweet SearchLiaison remarked about how some publishers mistakenly assumed they had an algorithmic spam action or were suffering from negative Helpful Content Signals.

SearchLiaison tweeted:

“I’ve looked at many sites where people have complained about losing rankings and decide they have a algorithmic spam action against them, but they don’t.

…we do have various systems that try to determine how helpful, useful and reliable individual content and sites are (and they’re not perfect, as I’ve said many times before, anticipating a chorus of “whatabouts…..” Some people who think they are impacted by this, I’ve looked at the same data they can see in Search Console and … not really.”

SearchLiaison, in the same tweet, addressed a person who remarked that getting a manual action is more fair than receiving an algorithmic action, pointing out the inherent knowledge gap that would lead someone to surmise such a thing.

He tweeted:

“…you don’t really want to think “Oh, I just wish I had a manual action, that would be so much easier.” You really don’t want your individual site coming the attention of our spam analysts. First, it’s not like manual actions are somehow instantly processed.”

The point I’m trying to make (and I have 25 years of hands-on SEO experience, so I know what I’m talking about) is to keep an open mind that maybe something else is going on that is undetected. Yes, there are such things as false positives, but it’s not always the case that Google is making a mistake; it could be a knowledge gap. That’s why I suspect that many people will not experience a lift in rankings if Google makes it easier for new pages to rank. If that happens, keep an open mind that maybe something else is going on.

Featured Image by Shutterstock/Sundry Photography

Using AI Ethically In SEO via @sejournal, @wburton27

AI can help brands and marketers be more efficient and productive and do things quicker, but it is not perfect and does have some drawbacks.

With the rise and adoption of AI into SEO workflows, processes, and tools, SEO pros must take an ethical approach to artificial intelligence.

What exactly does an ethical approach to AI mean?

An ethical approach involves using AI technologies transparently, fairly, and responsibly while respecting user privacy and ensuring the accuracy and integrity of information.

We are all aware that AI is imperfect and can produce inaccurate, biased, or fluffy information, which can cause many problems for agencies and marketers that rely on AI to create content.

With the March core update, sites that used AI content that was not edited, original, or helpful lost a substantial portion of organic traffic.

Here are some ways we can use AI to be more ethical.

Be Transparent And Provide Disclosure

Do not use generative AI to create content for publishing. If you use generative AI in parts of your process, you should be fully transparent to the brands you work with about how you use AI in your SEO practices.

Maintain Accuracy And Integrity

If you’re going to use AI, you should take a human-led approach to writing long-form content. Humans should always do the content creation, but AI can be helpful for brainstorming, organizing, rewording, transcription, and reworking content. In each case, outputs must be checked for originality using Copyscape or the tool of your choice.

Additionally, the information must be trustworthy and accurate. With the HCU being incorporated into the March core update, it’s more important than ever to focus on people-first content rather than content that is not helpful, useful, or satisfying the end user’s intent.

Be Original And Useful

With Google focusing on a good user and people-first content experience, we should not rely on AI content because of inadequacies in training data and a lack of originality. AI could be great for compiling a list of notes from people with first-hand experience and pulling them into a cohesive article, for example, but not for producing the list and facts, even with fact-checking.

Follow Compliance With Search Engine Guidelines

It’s imperative that we follow search engine guidelines and ethical standards.

AI should not be used to engage in practices like keyword stuffing, cloaking, or creating doorway pages. Instead, it should support the creation of high-quality and useful content.

Take a look at Google AI Principles – Google AI.

Promote Positive Impact

Ethically using AI in SEO also means considering the broader impact on society. This entails promoting trustworthy, useful content that contributes positively to users’ knowledge and well-being.

Develop Safely & Respect Privacy

If you build your own tools and platforms with AI, ensure you have strong security protocols and practices to prevent causing any harm.

Always assess your technologies before launching them into the production environment to ensure they are safe and secure, and continue monitoring them after release to the general public.

LLMs are not secure. It may be necessary to get legal advice before implementing certain types of AI, like generative AI, in processes that include user/customer information. Updating a privacy policy may not be enough.

Never put proprietary or confidential information into a generative AI chatbot like ChatGPT. Most LLMs save all user inputs, and the information could be used to generate responses to other users.

Respect Intellectual Property & Originality

One of the biggest issues with AI is intellectual property (IP). If I create some content using ChatGPT, who owns it?

We need to ensure that when AI recommends content, it is original and not taken from anywhere else. This can be problematic because some AI platforms don’t list the source of the information unless you instruct them to do so.

ChatGPT can tell you where the content sources are coming from if you list them in your prompt. For example, I asked ChatGPT to write me a 750-word blog post on the top things to do in NY and list the sources, and it did.

Screenshot from ChatGPT, April 2024

If you’re getting some information from ChatGPT, you need to credit the source and ensure they’re not copying other people’s content. Also, setting clear rules for using AI in making content can help avoid legal problems and ensure you’re fair and honest.

I checked the content that I created in ChatGPT, and according to Copyscape, it is full of similar text.

Screenshot from Copyscape, April 2024

Note: Please keep in mind that asking LLMs to cite sources doesn’t guarantee you’re citing the right content or that the content is original. The best and safest way to avoid accidental plagiarism is for humans to do the research and write the content.

Google Is Not About Content That Is Artificial And Lacking In Originality

With the rapid growth of AI-based tools entering the market, and with AI being incorporated into many platforms and used in daily SEO tasks, it is extremely important to adhere to ethical AI principles to ensure that the use of AI in SEO supports a fair, equitable, and user-focused search ecosystem.

Google has always been about quality, original content that offers value to end users, not content that is fully artificial, offers no value, lacks trust, and is thin, duplicated, or unoriginal.

In order to compete in today’s competitive and ever-changing SERPs, focusing on improving E-E-A-T is more important than ever before because it is a quality signal that shows Google and end users that you’re the subject matter expert and authority in your niche.

It’s highly recommended to have thought leaders and experts in your niche create your content and show their expertise on your site.

Additionally, it’s important to focus on user experience and ensure that your site loads quickly, is easy to navigate, and helps users find exactly what they came to your site for.



Featured Image: aniqpixel/Shutterstock

Securing Email Addresses for Abandoned Carts

Ecommerce shopping cart abandonment remains high, roughly 70% in 2024. Yet some in the industry — Dynamic Yield, for example — have estimated a good cart recovery email series will convert about a third of the time.

That is an impressive rate, but it assumes a merchant has the shopper’s email address — frequently not the case.

What follows are five tactics for securing an email address before a would-be buyer leaves the cart.

Ask Often

While it is important not to place speedbumps along a buyer’s journey, an online shop should repeatedly ask for a visitor’s email address during each session.

The easiest method is a simple signup form. Follow-up is easy if a shopper subscribes and later leaves items in a cart. The form might be for product updates, a newsletter signup, or even a purchase discount.

Newsletter signup form for Science Fiction Classics.

Offering a newsletter is a good way to collect email addresses for use in a shopping cart abandonment series.

Ask Early

Soliciting an email address during checkout is customary and unsurprising to a shopper. Take advantage of this expectation and ask in the first field of the cart. If the shopper leaves, the merchant can initiate a recovery email series.

Checkout form for Science Fiction Classics showing the email address field.

The first field in the checkout process should be for an email address.

Invest in Newsletter Ads

Editorial newsletters are growing in popularity as the creator economy surges.

Sponsoring newsletters has several advantages, such as not being cookie-dependent. A lesser-known benefit is collecting email addresses, in two ways.

The first of these methods is to advertise a store’s own email newsletter. Sparklook, for example, is a pay-per-subscriber service that works with many creator newsletters. Merchants can advertise in those newsletters and pay a dollar or two per confirmed subscriber.

Next, newsletter ads can include links that pass the shopper’s email address to a form.

For example, the ad might offer a 20% discount on the first purchase. When a shopper clicks a “Subscribe and Save 20%” button, the newsletter’s link automatically subscribes the shopper in one click, passing the email address directly to the merchant.
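As a sketch, such a one-click link can simply carry the subscriber’s address to the merchant’s signup endpoint as query parameters. The endpoint path and parameter names below are hypothetical; the real format depends on the newsletter platform and the merchant’s form handler:

```python
from urllib.parse import urlencode

def build_subscribe_link(base, email, discount_code=None):
    """Build a prefilled one-click subscribe URL (hypothetical endpoint and params)."""
    params = {"email": email}
    if discount_code:
        params["code"] = discount_code
    # urlencode handles percent-encoding, e.g. "@" becomes "%40".
    return f"{base}?{urlencode(params)}"

print(build_subscribe_link("https://shop.example.com/subscribe",
                           "reader@example.com", "SAVE20"))
# https://shop.example.com/subscribe?email=reader%40example.com&code=SAVE20
```

Because the address travels in the URL, the merchant’s endpoint should still confirm the subscription (e.g., double opt-in) before adding the shopper to a list.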

Use an Exit Pop-up

An exit-intent pop-up launches when a shopper’s mouse movements or scrolling patterns foretell an imminent exit.

The pop-up offers a minimal, relevant, and compelling alternative to leaving the site immediately. The offer can include an email capture, enabling the merchant to send the all-important shopping cart recovery email series.

Match Visitors to Email Addresses

All of the tactics discussed so far aim to increase the number of known email addresses, but according to Edward Upton, founder and CEO of Littledata, an ecommerce analytics firm, cart abandonment presents another challenge: recognizing a returning subscriber.

Some visitors may have already provided an email address and even followed a link from the store’s own email marketing. But something has gotten in the way: The Arc browser’s ad blocker cleans identifiers from the link.

The effect for many ecommerce shops is losing the connection and thus eliminating subsequent cart recovery efforts.

Littledata and similar tools boost the site’s ability to recognize returning subscribers despite ad blockers. Littledata claims it can increase subscriber identification by upwards of 40% for merchants using Shopify and Klaviyo.

More matching solutions are likely coming, thanks to artificial intelligence and machine learning tools.

Last summer was the hottest in 2,000 years. Here’s how we know.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I’m ready for summer, but if this year is anything like last year, it’s going to be a doozy. In fact, the summer of 2023 in the Northern Hemisphere was the hottest in over 2,000 years, according to a new study released this week. 

If you’ve been following the headlines, you probably already know that last year was a hot one. But I was gobsmacked by this paper’s title when it came across my desk. The warmest in 2,000 years—how do we even know that?

There weren’t exactly thermometers around in the year 1, so scientists have to get creative when it comes to comparing our climate today with that of centuries, or even millennia, ago. Here’s how our world stacks up against the climate of the past, how we know, and why it matters for our future. 

Today, there are thousands and thousands of weather stations around the globe, tracking the temperature from Death Valley to Mount Everest. So there’s plenty of data to show that 2023 was, in a word, a scorcher. 

Daily global ocean temperatures were the warmest ever recorded for over a year straight. Levels of sea ice hit new lows. And of course, the year saw the highest global average temperatures since record-keeping began in 1850.  

But scientists decided to look even further back into the past for a year that could compare to our current temperatures. To do so, they turned to trees, which can act as low-tech weather stations.

The concentric rings inside a tree are evidence of the plant’s yearly growth cycles. Lighter colors correspond to quick growth over the spring and summer, while the darker rings correspond to the fall and winter. Count the pairs of light and dark rings, and you can tell how many years a tree has lived. 

Trees tend to grow faster during warm, wet years and slower during colder ones. So scientists can not only count the rings but measure their thickness, and use that as a gauge for how warm any particular year was. They also look at factors like density and track different chemical signatures found inside the wood. You don’t even need to cut down a tree to get its help with climatic studies—you can just drill out a small cylinder from the tree’s center, called a core, and study the patterns.
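The calibration step described above — ring width as a temperature gauge — is essentially a regression: fit ring widths against thermometer readings for the years where both exist, then apply the fit to rings from before the instrumental record. A toy sketch with invented numbers (real reconstructions pool thousands of trees, detrend for age effects, and fold in density and chemical signatures):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

# Calibration period: ring widths (mm) from years with thermometer readings (°C).
widths = [1.0, 1.4, 1.8, 2.2]
temps = [14.0, 14.5, 15.0, 15.5]
slope, intercept = fit_line(widths, temps)

# Apply the fit to a ring laid down before the instrumental record began.
old_ring_width = 1.2
print(round(slope * old_ring_width + intercept, 2))  # → 14.25
```

The wider the ring, the warmer the estimated growing season — which is exactly the relationship the paragraph above describes.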

The oldest living trees allow us to peek a few centuries into the past. Beyond that, it’s a matter of cross-referencing the patterns on dead trees with living ones, extending the record back in time like putting a puzzle together. 

It’s taken several decades of work and hundreds of scientists to develop the records that researchers used for this new paper, said Max Torbenson, one of the authors of the study, on a press call. There are over 10,000 trees from nine regions across the Northern Hemisphere represented, allowing the researchers to draw conclusions about individual years over the past two millennia. The year 246 CE once held the crown for the warmest summer in the Northern Hemisphere in the last 2,000 years. But 25 of the last 28 years have beaten that record, Torbenson says, and 2023’s summer tops them all.

These conclusions are limited to the Northern Hemisphere, since there are only a few tree ring records from the Southern Hemisphere, says Jan Esper, lead author of the new study. And using tree rings doesn’t work very well for the tropics because seasons look different there, he adds. Since there’s no winter, there’s usually not as reliable an alternating pattern in tropical tree rings, though some trees do have annual rings that track the wet and dry periods of the year. 

Paleoclimatologists, who study ancient climates, can use other methods to get a general idea of what the climate looked like even earlier—tens of thousands to millions of years ago. 

The biggest difference between the new study using tree rings and methods of looking back further into the past is the precision. Scientists can, with reasonable certainty, use tree rings to draw conclusions about individual years in the Northern Hemisphere (536 CE was the coldest, for instance, likely because of volcanic activity). Any information from further back than the past couple of thousand years will be more of a general trend than a specific data point representing a single year. But those records can still be very useful. 

The oldest glaciers on the planet are at least a million years old, and scientists can drill down into the ice for samples. By examining the ratio of gases like oxygen, carbon dioxide, and nitrogen inside these ice cores, researchers can figure out the temperature of the time corresponding to the layers in the glacier. The oldest continuous ice-core record, which was collected in Antarctica, goes back about 800,000 years. 

Researchers can use fossils to look even further back into Earth’s temperature record. For one 2020 study, researchers drilled into the seabed and looked at the sediment and tiny preserved shells of ancient organisms. From the chemical signatures in those samples, they found that the temperatures we might be on track to record may be hotter than anything the planet has experienced on a global scale in tens of millions of years. 

It’s a bit sobering to know that we’re changing the planet in such a dramatic way. 

The good news is, we know what we need to do to turn things around: cut emissions of planet-warming gases like carbon dioxide and methane. The longer we wait, the more expensive and difficult it will be to stop warming and reverse it, as Esper said on the press call: “We should do as much as possible, as soon as possible.” 


Now read the rest of The Spark

Related reading

Last year broke all sorts of climate records, from emissions to ocean temperatures. For more on the data, check out this story from December.

How hot is too hot for the human body? I tackled that very question in a 2021 story.  

Two engineers in lab coats monitor the thermal battery powering a conveyor belt of bottles

SIMON LANDREIN

Another thing

Readers chose thermal batteries as the 11th Breakthrough Technology of 2024. If you want to hear more about what thermal batteries are, how they work, and why this all matters, join us for the latest in our Roundtables series of online events, where I’ll be getting into the nitty-gritty details and answering some audience questions.

This event is exclusively for subscribers, so subscribe if you haven’t already, and then register here to join us tomorrow, May 16, at noon Eastern time. Hope to see you there! 

Keeping up with climate  

Scientists just recorded the largest ever annual leap in the amount of carbon dioxide in the atmosphere. The concentration of the planet-warming gas in March 2024 was 4.7 parts per million higher than it was a year before. (The Guardian)

Tesla has reportedly begun rehiring some of the workers who were laid off from its charging team in recent weeks. (Bloomberg)

→ To catch up on what’s going on at Tesla, and what it means for the future of EV charging and climate tech more broadly, check out the newsletter from last week if you missed it. (MIT Technology Review)

A new rule could spur thousands of miles of new power lines, making it easier to add renewables to the grid in the US. The Federal Energy Regulatory Commission will require grid operators to plan 20 years ahead, considering things like the speed of wind and solar installations. (New York Times)

Where does carbon dioxide go after it’s been vacuumed out of the atmosphere? Here are 10 options. (Latitude Media)

Ocean temperatures have been extremely high, shattering records over the past year. All that heat could help fuel a particularly busy upcoming hurricane season. (E&E News)

New tariffs in the US will tack on additional costs to a wide range of Chinese imports, including batteries and solar cells. The tariff on EVs will take a particularly drastic jump, going from 27.5% to 102.5%. (Associated Press)

A reporter took a trip to the Beijing Auto Show and drove dozens of EVs. His conclusion? Chinese EVs are advancing much faster than Western automakers can keep up with. (InsideEVs)

Harnessing solar power via satellites in space and beaming it down to Earth is a tempting dream. But the reality, as you might expect, is probably not so rosy. (IEEE Spectrum)

This grim but revolutionary DNA technology is changing how we respond to mass disasters

Seven days

No matter who he called—his mother, his father, his brother, his cousins—the phone would just go to voicemail. Cell service was out around Maui as devastating wildfires swept through the Hawaiian island. But while Raven Imperial kept hoping for someone to answer, he couldn’t keep a terrifying thought from sneaking into his mind: What if his family members had perished in the blaze? What if all of them were gone?

Hours passed; then days. All Raven knew at that point was this: there had been a wildfire on August 8, 2023, in Lahaina, where his multigenerational, tight-knit family lived. But from where he was currently based in Northern California, Raven was in the dark. Had his family evacuated? Were they hurt? He watched from afar as horrifying video clips of Front Street burning circulated online.

Much of the area around Lahaina’s Pioneer Mill Smokestack was totally destroyed by wildfire.
ALAMY

The list of missing residents meanwhile climbed into the hundreds.

Raven remembers how frightened he felt: “I thought I had lost them.”

Raven had spent his youth in a four-bedroom, two-bathroom, cream-colored home on Kopili Street that had long housed not just his immediate family but also around 10 to 12 renters, since home prices were so high on Maui. When he and his brother, Raphael Jr., were kids, their dad put up a basketball hoop outside where they’d shoot hoops with neighbors. Raphael Jr.’s high school sweetheart, Christine Mariano, later moved in, and when the couple had a son in 2021, they raised him there too.

From the initial news reports and posts, it seemed as if the fire had destroyed the Imperials’ entire neighborhood near the Pioneer Mill Smokestack—a 225-foot-high structure left over from the days of Maui’s sugar plantations, which Raven’s grandfather had worked on as an immigrant from the Philippines in the mid-1900s.

Then, finally, on August 11, a call to Raven’s brother went through. He’d managed to get a cell signal while standing on the beach.

“Is everyone okay?” Raven asked.

“We’re just trying to find Dad,” Raphael Jr. told his brother.

Raven Imperial sitting in the grass
From his current home in Northern California, Raven Imperial spent days not knowing what had happened to his family in Maui.
WINNI WINTERMEYER

In the three days following the fire, the rest of the family members had slowly found their way back to each other. Raven would learn that most of his immediate family had been separated for 72 hours: Raphael Jr. had been marooned in Kaanapali, four miles north of Lahaina; Christine had been stuck in Wailuku, more than 20 miles away; both young parents had been separated from their son, who escaped with Christine’s parents. Raven’s mother, Evelyn, had also been in Kaanapali, though not where Raphael Jr. had been.

But no one was in contact with Rafael Sr. Evelyn had left their home around noon on the day of the fire and headed to work. That was the last time she had seen him. The last time they had spoken was when she called him just after 3 p.m. and asked: “Are you working?” He replied “No,” before the phone abruptly cut off.

“Everybody was found,” Raven says. “Except for my father.”

Within the week, Raven boarded a plane and flew back to Maui. He would keep looking for him, he told himself, for as long as it took.


That same week, Kim Gin was also on a plane to Maui. It would take half a day to get there from Alabama, where she had moved after retiring from the Sacramento County Coroner’s Office in California a year earlier. But Gin, now an independent consultant on death investigations, knew she had something to offer the response teams in Lahaina. Of all the forensic investigators in the country, she was one of the few who had experience in the immediate aftermath of a wildfire on the vast scale of Maui’s. She was also one of the rare investigators well versed in employing rapid DNA analysis—an emerging but increasingly vital scientific tool used to identify victims in unfolding mass-casualty events.

Gin started her career in Sacramento in 2001 and was working as the coroner 17 years later when Butte County, California, close to 90 miles north, erupted in flames. She had worked fire investigations before, but nothing like the Camp Fire, which burned more than 150,000 acres—an area larger than the city of Chicago. The tiny town of Paradise, the epicenter of the blaze, didn’t have the capacity to handle the rising death toll. Gin’s office had a refrigerated box truck and a 52-foot semitrailer, as well as a morgue that could handle a couple of hundred bodies.

Kim Gin
Kim Gin, the former Sacramento County coroner, had worked fire investigations in her career, but nothing prepared her for the 2018 Camp Fire.
BRYAN TARNOWSKI

“Even though I knew it was a fire, I expected more identifications by fingerprints or dental [records]. But that was just me being naïve,” she says. She quickly realized that putting names to the dead, many burned beyond recognition, would rely heavily on DNA.

“The problem then became how long it takes to do the traditional DNA [analysis],” Gin explains, speaking to a significant and long-standing challenge in the field—and the reason DNA identification has long been something of a last resort following large-scale disasters.

While more conventional identification methods—think fingerprints, dental information, or matching something like a knee replacement to medical records—can be a long, tedious process, they don’t take nearly as long as traditional DNA testing.

Historically, the process of making genetic identifications would often stretch on for months, even years. In fires and other situations that result in badly degraded bone or tissue, it can become even more challenging and time consuming to process DNA, which traditionally involves reading the 3 billion base pairs of the human genome and comparing samples found in the field against samples from a family member. Meanwhile, investigators frequently need equipment from the US Department of Justice or the county crime lab to test the samples, so backlogs often pile up.

A supply kit with swabs, gloves, and other items needed to take a DNA sample in the field.
A demo chip for ANDE’s rapid DNA box.

This creates a wait that can be horrendous for family members. Death certificates, federal assistance, insurance money—“all that hinges on that ID,” Gin says. Not to mention the emotional toll of not knowing if their loved ones are alive or dead.

But over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed. The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace more swiftly than ever before.

The key innovation driving this progress has been rapid DNA analysis, a methodology that focuses on just over two dozen regions of the genome. The 2018 Camp Fire was the first time the technology was used in a large, live disaster setting, and the first time it was used as the primary way to identify victims. The technology—deployed in small high-tech field devices developed by companies like industry leader ANDE, or in a lab with other rapid DNA techniques developed by Thermo Fisher—is increasingly being used by the US military on the battlefield, and by the FBI and local police departments after sexual assaults and in instances where confirming an ID is challenging, like cases of missing or murdered Indigenous people or migrants. Yet arguably the most effective way to use rapid DNA is in incidents of mass death. In the Camp Fire, 22 victims were identified using traditional methods, while rapid DNA analysis helped with 62 of the remaining 63 victims; it has also been used in recent years following hurricanes and floods, and in the war in Ukraine.

“These families are going to have to wait a long period of time to get identification. How do we make this go faster?”

Tiffany Roy, a forensic DNA expert with consulting company ForensicAid, says she’d be concerned about deploying the technology in a crime scene, where quality evidence is limited and can be quickly “exhausted” by well-meaning investigators who are “not trained DNA analysts.” But, on the whole, Roy and other experts see rapid DNA as a major net positive for the field. “It is definitely a game-changer,” adds Sarah Kerrigan, a professor of forensic science at Sam Houston State University and the director of its Institute for Forensic Research, Training, and Innovation.

But back in those early days after the Camp Fire, all Gin knew was that nearly 1,000 people had been listed as missing, and she was tasked with helping to identify the dead. “Oh my goodness,” she remembers thinking. “These families are going to have to wait a long period of time to get identification. How do we make this go faster?”


Ten days

One flier pleading for information about “Uncle Raffy,” as people in the community knew Rafael Sr., was posted on a brick-red stairwell outside Paradise Supermart, a Filipino store and restaurant in Kahului, 25 miles away from the destruction. In it, just below the words “MISSING Lahaina Victim,” the 63-year-old grandfather smiled with closed lips, wearing a blue Hawaiian shirt, his right hand curled in the shaka sign, thumb and pinky pointing out.

Rafael Imperial Sr.
Raven remembers how hard his dad, Rafael, worked. His three jobs took him all over town and earned him the nickname “Mr. Aloha.”
COURTESY OF RAVEN IMPERIAL

“Everybody knew him from restaurant businesses,” Raven says. “He was all over Lahaina, very friendly to everybody.” Raven remembers how hard his dad worked, juggling three jobs: as a draft tech for Anheuser-Busch, setting up services and delivering beer all across town; as a security officer at Allied Universal security services; and as a parking booth attendant at the Sheraton Maui. He connected with so many people that coworkers, friends, and other locals gave him another nickname: “Mr. Aloha.”

Raven also remembers how his dad had always loved karaoke, where he would sing “My Way,” by Frank Sinatra. “That’s the only song that he would sing,” Raven says. “Like, on repeat.” 

Since their home had burned down, the Imperials ran their search out of a rental unit in Kihei, which was owned by a local woman one of them knew through her job. The woman had opened her rental to three families in all. It quickly grew crowded with side-by-side beds and piles of donations.

Each day, Evelyn waited for her husband to call.

She managed to catch up with one of their former tenants, who recalled asking Rafael Sr. to leave the house on the day of the fires. But she did not know if he actually did. Evelyn spoke to other neighbors who also remembered seeing Rafael Sr. that day; they told her that they had seen him go back into the house. But they too did not know what happened to him after.

A friend of Raven’s who got into the largely restricted burn zone told him he’d spotted Rafael Sr.’s Toyota Tacoma on the street, not far from their house. He sent a photo. The pickup was burned out, but a passenger-side door was open. The family wondered: Could he have escaped?

Evelyn called the Red Cross. She called the police. Nothing. They waited and hoped.


Back in Paradise in 2018, as Gin worried about the scores of waiting families, she learned there might in fact be a better way to get a positive ID—and a much quicker one. A company called ANDE Rapid DNA had already volunteered its services to the Butte County sheriff and promised that its technology could process DNA and get a match in less than two hours.

“I’ll try anything at this point,” Gin remembers telling the sheriff. “Let’s see this magic box and what it’s going to do.”

In truth, Gin did not think it would work, and certainly not in two hours. When the device arrived, it was “not something huge and fantastical,” she recalls thinking. A little bigger than a microwave, it looked “like an ordinary box that beeps, and you put stuff in, and out comes a result.”

The “stuff,” more specifically, was a cheek or bloodstain swab, or a piece of muscle, or a fragment of bone that had been crushed and demineralized. Instead of reading 3 billion base pairs in this sample, the machine examined just 27 genome regions characterized by particular repeating sequences. It would be nearly impossible for two unrelated people to have the same repeating sequence in those regions. But a parent and child, or siblings, would match, meaning you could compare DNA found in human remains with DNA samples taken from potential victims’ family members. Making it even more efficient for a coroner like Gin, the machine could run up to five tests at a time and could be operated by anyone with just a little basic training.
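The matching principle here — count the repeats at each region, and expect a parent and child to share at least one allele at every locus — can be sketched in a few lines. The three loci below are real STR markers used in forensic panels, but the repeat counts and profiles are invented for illustration; actual rapid DNA panels examine about 27 regions.

```python
def is_possible_parent_child(profile_a, profile_b):
    """True if the two STR profiles share at least one allele at every locus.

    Each profile maps a locus name to its pair of repeat counts
    (one allele inherited from each parent).
    """
    return all(
        bool(set(profile_a[locus]) & set(profile_b[locus]))
        for locus in profile_a
    )

# Invented three-locus profiles for illustration.
child = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9)}
parent = {"D8S1179": (14, 15), "D21S11": (30, 31), "TH01": (7, 9)}
stranger = {"D8S1179": (10, 11), "D21S11": (27, 29), "TH01": (7, 8)}

print(is_possible_parent_child(child, parent))    # → True
print(is_possible_parent_child(child, stranger))  # → False
```

With ~27 such regions, an unrelated person is overwhelmingly likely to mismatch at one or more loci, which is why the panel can separate relatives from strangers so reliably.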

ANDE’s chief scientific officer, Richard Selden, a pediatrician who has a PhD in genetics from Harvard, didn’t come up with the idea to focus on a smaller, more manageable number of base pairs to speed up DNA analysis. But it did become something of an obsession for him after he watched the O.J. Simpson trial in the mid-1990s and began to grasp just how long it took for DNA samples to get processed in crime cases. By this point, the FBI had already set up a system for identifying DNA by looking at just 13 regions of the genome; it would later add seven more. Researchers in other countries had also identified other sets of regions to analyze. Drawing on these various methodologies, Selden homed in on the 27 specific areas of DNA he thought would be most effective to examine, and he launched ANDE in 2004.

But he had to build a device to do the analysis. Selden wanted it to be small, portable, and easily used by anyone in the field. In a conventional lab, he says, “from the moment you take that cheek swab to the moment that you have the answer, there are hundreds of laboratory steps.” Traditionally, a human is holding test tubes and iPads and sorting through or processing paperwork. Selden compares it all to using a “conventional typewriter.” He effectively created the more efficient laptop version of DNA analysis by figuring out how to speed up that same process.

No longer would a human have to “open up this bottle and put [the sample] in a pipette and figure out how much, then move it into a tube here.” It is all automated, and the process is confined to a single device.

gloved hands load a chip cartridge into the ANDE machine
The rapid DNA analysis boxes from ANDE can be used in the field by anyone with just a bit of training.
ANDE

Once a sample is placed in the box, the DNA binds to a filter in water and the rest of the sample is washed away. Air pressure propels the purified DNA to a reconstitution chamber and then flattens it into a sheet less than a millimeter thick, which is subjected to about 6,000 volts of electricity. It’s “kind of an obstacle course for the DNA,” he explains.

The machine then interprets the donor’s DNA and produces an allele table, with a graph showing the peak for each region and its size. This data is then compared with samples from potential relatives, and the machine reports when it has a match.

Rapid DNA analysis first received approval for use by the US military in 2014 and by the FBI two years later. Then the Rapid DNA Act of 2017 enabled all US law enforcement agencies to use the technology on site and in real time as an alternative to sending samples off to labs and waiting for results.

But by the time of the Camp Fire the following year, most coroners and local police officers still had no familiarity or experience with it. Neither did Gin. So she decided to put the “magic box” through a test: she gave Selden, who had arrived at the scene to help with the technology, a DNA sample from a victim whose identity she’d already confirmed via fingerprint. The box took about 90 minutes to come back with a result. And to Gin’s surprise, it was the same identification she had already made. Just to make sure, she ran several more samples through the box, also from victims she had already identified. Again, results were returned swiftly, and they confirmed hers.

“I was a believer,” she says.

The next year, Gin helped investigators use rapid DNA technology in the 2019 Conception disaster, when a dive boat caught fire off the Channel Islands in Santa Barbara. “We ID’d 34 victims in 10 days,” Gin says. “Completely done.” Gin now works independently to assist other investigators in mass-fatality events and helps them learn to use the ANDE system.

Its speed made the box a groundbreaking innovation. Death investigations, Gin learned long ago, are not as much about the dead as about giving peace of mind, justice, and closure to the living.


Fourteen days

Many of the people who were initially on the Lahaina missing persons list turned up in the days following the fire. Tearful reunions ensued.

Two weeks after the fire, the Imperials hoped they’d have the same outcome as they loaded into a truck to check out some exciting news: someone had reported seeing Rafael Sr. at a local church. He’d been eating and had burns on his hands and looked disoriented. The caller said the sighting had occurred three days after the fire. Could he still be in the vicinity?

When the family arrived, they couldn’t confirm the lead.

“We were getting a lot of calls,” Raven says. “There were a lot of rumors saying that they found him.”

None of them panned out. They kept looking.


The scenes following large-scale destructive events like the fires in Paradise and Lahaina can be sprawling and dangerous, with victims sometimes dispersed across a large swath of land if many people died trying to escape. Teams need to meticulously and tediously search mountains of mixed, melted, or burned debris just to find bits of human remains that might otherwise be mistaken for a piece of plastic or drywall. Compounding the challenge is the commingling of remains—from people who died huddled together, or in the same location, or alongside pets or other animals.

This is when the work of forensic anthropologists is essential: they have the skills to differentiate between human and animal bones and to find the critical samples that are needed by DNA specialists, fire and arson investigators, forensic pathologists and dentists, and other experts. Rapid DNA analysis “works best in tandem with forensic anthropologists, particularly in wildfires,” Gin explains.

“The first step is determining, is it a bone?” says Robert Mann, a forensic anthropologist at the University of Hawaii John A. Burns School of Medicine on Oahu. Then, is it a human bone? And if so, which one?

Robert Mann in a lab coat with a human skeleton on the table in front of him
Forensic anthropologist Robert Mann has spent his career identifying human remains.
AP PHOTO/LUCY PEMONI

Mann has served on teams that have helped identify the remains of victims after the terrorist attacks of September 11, 2001, and the 2004 Indian Ocean tsunami, among other mass-casualty events. He remembers how in one investigation he received an object believed to be a human bone; it turned out to be a plastic replica. In another case, he was looking through the wreckage of a car accident and spotted what appeared to be a human rib fragment. Upon closer examination, he identified it as a piece of rubber weather stripping from the rear window. “We examine every bone and tooth, no matter how small, fragmented, or burned it might be,” he says. “It’s a time-consuming but critical process because we can’t afford to make a mistake or overlook anything that might help us establish the identity of a person.”

For Mann, the Maui disaster felt particularly immediate. It was right near his home. He was deployed to Lahaina about a week after the fire, as one of more than a dozen forensic anthropologists on scene from universities in places including Oregon, California, and Hawaii.

While some anthropologists searched the recovery zone—looking through what was left of homes, cars, buildings, and streets, and preserving fragmented and burned bone, body parts, and teeth—Mann was stationed in the morgue, where samples were sent for processing.

It used to be much harder to find samples that scientists believed could provide DNA for analysis, but that’s also changed recently as researchers have learned more about what kind of DNA can survive disasters. Two kinds are used in forensic identity testing: nuclear DNA (found within the nuclei of eukaryotic cells) and mitochondrial DNA (found in the mitochondria, organelles located outside the nucleus). Both, it turns out, have survived plane crashes, wars, floods, volcanic eruptions, and fires.

Theories have also been evolving over the past few decades about how to preserve and recover DNA specifically after intense heat exposure. One 2018 study found that a majority of the samples actually survived high heat. Researchers are also learning more about how bone characteristics change depending on the degree of heat exposure. “Different temperatures and how long a body or bone has been exposed to high temperatures affect the likelihood that it will or will not yield usable DNA,” Mann says.

Typically, forensic anthropologists help select which bone or tooth to use for DNA testing, says Mann. Until recently, he explains, scientists believed “you cannot get usable DNA out of burned bone.” But thanks to these new developments, researchers are realizing that with some bone that has been charred, “they’re able to get usable, good DNA out of it,” Mann says. “And that’s new.” Indeed, Selden explains that “in a typical bad fire, what I would expect is 80% to 90% of the samples are going to have enough intact DNA” to get a result from rapid analysis. The rest, he says, may require deeper sequencing.

The aftermath of large-scale destructive events like the fire in Lahaina can be sprawling and dangerous. Teams need to meticulously search through mountains of mixed, melted, or burned debris to find bits of human remains.
GLENN FAWCETT VIA ALAMY

Anthropologists can often tell “simply by looking” if a sample will be good enough to help create an ID. If it’s been burned and blackened, “it might be a good candidate for DNA testing,” Mann says. But if it’s calcined (white and “china-like”), he says, the DNA has probably been destroyed.

On Maui, Mann adds, rapid DNA analysis made the entire process more efficient, with tests coming back in just two hours. “That means while you’re doing the examination of this individual right here on the table, you may be able to get results back on who this person is,” he says. From inside the lab, he watched the science unfold as the number of missing on Maui quickly began to go down.

Within three days, 42 people’s remains were recovered inside Maui homes or buildings and another 39 outside, along with 15 inside vehicles and one in the water. The first confirmed identification of a victim on the island occurred four days after the fire—this one via fingerprint. The ANDE rapid DNA team arrived two days after the fire and deployed four boxes to analyze multiple samples of DNA simultaneously. The first rapid DNA identification happened within that first week.


Sixteen days

More than two weeks after the fire, the list of missing and unaccounted-for individuals was dwindling, but it still had 388 people on it. Rafael Sr. was one of them.

Raven and Rafael Jr. raced to another location: Cupies café in Kahului, more than 20 miles from Lahaina. Someone had reported seeing him there.

Rafael’s family hung posters around the island, desperately hoping for reliable information. (Phone number redacted by MIT Technology Review.)
ERIKA HAYASAKI

The tip was another false lead.

As family and friends continued to search, they stopped by support hubs that had sprouted up around the island, receiving information about Red Cross and FEMA assistance or donation programs as volunteers distributed meals and clothes. These hubs also sometimes offered DNA testing.

Raven still had a “50-50” feeling that his dad might be out there somewhere. But he was beginning to lose some of that hope.


Gin was stationed at one of the support hubs, which offered food, shelter, clothes, and support. “You could also go in and give biological samples,” she says. “We actually moved one of the rapid DNA instruments into the family assistance center, and we were running the family samples there.” Eliminating the need to transport samples from a site to a testing center cut lag time even further.

Selden had once believed that the biggest hurdle for his technology would be building the actual device, which took about eight years to design and another four years to perfect. But at least in Lahaina, it was something else: persuading distraught and traumatized family members to offer samples for the test.

Nationally, there are serious privacy concerns when it comes to rapid DNA technology. Organizations like the ACLU warn that as police departments and governments begin deploying it more often, there must be more oversight, monitoring, and training in place to ensure that it is always used responsibly, even if that adds some time and expense. But the space is still largely unregulated, and the ACLU fears it could give rise to rogue DNA databases “with far fewer quality, privacy, and security controls than federal databases.”

Family support centers popped up around Maui to offer clothing, food, and other assistance, and sometimes to take DNA samples to help find missing family members.

In a place like Hawaii, these fears are even more palpable. The islands have a long history of US colonialism, military dominance, and exploitation of the Native population and of the large immigrant working-class population employed in the tourism industry.

Native Hawaiians in particular have a fraught relationship with DNA testing. Under a US law signed in 1921, thousands have a right to live, almost for free, on 200,000 designated acres of trust land. It was a kind of reparations measure put in place to assist Native Hawaiians whose land had been stolen. Back in 1893, a small group of American sugar plantation owners and descendants of Christian missionaries, backed by US Marines, held Hawaii’s Queen Lili‘uokalani in her palace at gunpoint and forced her to sign over 1.8 million acres to the US, which ultimately seized the islands in 1898.

Hawaii’s Queen Lili‘uokalani was forced to sign over 1.8 million acres to the US.
PUBLIC DOMAIN VIA WIKIMEDIA COMMONS

To lay their claim to the designated land and property, individuals first must prove via DNA tests how much Hawaiian blood they have. But many residents who have submitted their DNA and qualified for the land have died on waiting lists before ever receiving it. Today, Native Hawaiians are struggling to stay on the islands amid skyrocketing housing prices, while others have been forced to move away.

Meanwhile, after the fires, Filipino families faced particularly stark barriers to getting information about financial support, government assistance, housing, and DNA testing. Filipinos make up about 25% of Hawaii’s population and 40% of its workers in the tourism industry. They also make up 46% of undocumented residents in Hawaii—more than any other group. Some encountered language barriers, since they primarily spoke Tagalog or Ilocano. Some worried that people would try to take over their burned land and develop it for themselves. For many, being asked for DNA samples only added to the confusion and suspicion.

Selden says he hears the overall concerns about DNA testing: “If you ask people about DNA in general, they think of Brave New World and [fear] the information is going to be used to somehow harm or control people.” But just like regular DNA analysis, he explains, rapid DNA analysis “has no information on the person’s appearance, their ethnicity, their health, their behavior either in the past, present, or future.” He describes it as a more accurate fingerprint.

Gin tried to help the Lahaina family members understand that their DNA “isn’t going to go anywhere else.” She told them their sample would ultimately be destroyed, something programmed to occur inside ANDE’s machine. (Selden says the boxes were designed to do this for privacy purposes.) But sometimes, Gin realizes, these promises are not enough.

“You still have a large population of people that, in my experience, don’t want to give up their DNA to a government entity,” she says. “They just don’t.”

Gin understands that family members are often nervous to give their DNA samples. She promises the process of rapid DNA analysis respects their privacy, but she knows sometimes promises aren’t enough.
BRYAN TARNOWSKI

The immediate aftermath of a disaster, when people are suffering from shock, PTSD, and displacement, is the worst possible moment to try to educate them about DNA tests and explain the technology and privacy policies. “A lot of them don’t have anything,” Gin says. “They’re just wondering where they’re going to lay their heads down, and how they’re going to get food and shelter and transportation.”

Unfortunately, Lahaina’s survivors won’t be the last people in this position. Particularly given the world’s current climate trajectory, the risk of deadly events in just about every neighborhood and community will rise. And figuring out who survived and who didn’t will be increasingly difficult. Mann recalls his work on the Indian Ocean tsunami, when over 227,000 people died. “The bodies would float off, and they ended up 100 miles away,” he says. Investigators were at times left with remains that had been consumed by sea creatures or degraded by water and weather. He remembers how they struggled to determine: “Who is the person?”

Mann has spent his own career identifying people including “missing soldiers, sailors, airmen, Marines, from all past wars,” as well as people who have died recently. That closure is meaningful for family members, some of them decades, or even lifetimes, removed.

In the end, distrust and conspiracy theories did in fact hinder DNA-identification efforts on Maui, according to a police department report.


33 days

By the time Raven went to a family resource center to submit a swab, some four weeks had gone by. He remembers the quick rub inside his cheek.

Some of his family had already offered their own samples before Raven provided his. For them, the wait wasn’t so much a matter of mistrusting the testing as of the confusion and chaos of the weeks after the fire. They believed Uncle Raffy was still alive, and they still held hope of finding him. Offering DNA was a final step in their search.

“I did it for my mom,” Raven says. She still wanted to believe he was alive, but Raven says: “I just had this feeling.” His father, he told himself, must be gone.

Just a day after he gave his sample—on September 11, more than a month after the fire—he was at the temporary house in Kihei when he got the call: “It was,” Raven says, “an automatic match.”

Raven gave a cheek swab about a month after the disappearance of his father. It didn’t take long for him to get a phone call: “It was an automatic match.”
WINNI WINTERMEYER

The investigators let the family know the address where the remains of Rafael Sr. had been found, several blocks away from their home. They put it into Google Maps and realized it was where some family friends lived. The mother and son of that family had been listed as missing too. Rafael Sr., it seemed, had been with or near them in the end.

By October, investigators in Lahaina had obtained and analyzed 215 DNA samples from family members of the missing. By December, DNA analysis had confirmed the identities of 63 of the 101 victims in the most recent count. Seventeen more had been identified by fingerprint, 14 via dental records, and two through medical devices, along with three who died in the hospital. While some of the most damaged remains would still be undergoing DNA testing months after the fires, the pace marks a drastic improvement over the identification processes for 9/11 victims, for instance—today, over 20 years later, some are still being identified by DNA.

Raven remembers how much his father loved karaoke. His favorite song was “My Way,” by Frank Sinatra. 
COURTESY OF RAVEN IMPERIAL

Rafael Sr. was born on October 22, 1959, in Naga City, the Philippines. The family held his funeral on his birthday last year. His relatives flew in from Michigan, the Philippines, and California.

Raven says in those weeks of waiting—after all the false tips, the searches, the prayers, the glimmers of hope—deep down the family had already known he was gone. But for Evelyn, Rafael Jr., and the rest of their family, DNA tests were necessary—and, ultimately, a relief, Raven says. “They just needed that closure.”

Erika Hayasaki is an independent journalist based in Southern California.

How cuddly robots could change dementia care

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, I scoured the internet in search of a robotic dog. I wanted a belated birthday present for my aunt, who was recently diagnosed with Alzheimer’s disease. Studies suggest that having a companion animal can stave off some of the loneliness, anxiety, and agitation that come with Alzheimer’s. My aunt would love a real dog, but she can’t have one.

That’s how I discovered the Golden Pup from Joy for All. It cocks its head. It sports a jaunty red bandana. It barks when you talk. It wags when you touch it. It has a realistic heartbeat. And it’s just one of the many, many robots designed for people with Alzheimer’s and dementia.

This week on The Checkup, join me as I go down a rabbit hole. Let’s look at the prospect of using robots to change dementia care.


As robots go, Golden Pup is decidedly low tech. It retails for $140. For around $6,000 you can opt for Paro, a fluffy robotic baby seal developed in Japan, which can sense touch, light, sound, temperature, and posture. Its manufacturer says it develops its own character, remembering behaviors that led its owner to give it attention.

Golden Pup and Paro are available now. But researchers are working on much more sophisticated robots for people with cognitive disorders—devices that leverage AI to converse and play games. Researchers from Indiana University Bloomington are tweaking a commercially available robot system called QT to serve people with dementia and Alzheimer’s. The researchers’ two-foot-tall robot looks a little like a toddler in an astronaut suit. Its round white head holds a screen that displays two eyebrows, two eyes, and a mouth that together form a variety of expressions. The robot engages people in conversation, asking AI-generated questions to keep them talking.

The AI model they’re using isn’t perfect, and neither are the robot’s responses. In one awkward conversation, a study participant told the robot that she has a sister. “I’m sorry to hear that,” the robot responded. “How are you doing?”

But as large language models improve—which is happening already—so will the quality of the conversations. When the QT robot made that awkward comment, it was running OpenAI’s GPT-3, which was released in 2020. The company’s latest model, GPT-4o, which was released this week, is faster and allows for more seamless conversations. You can interrupt the conversation, and the model will adjust.

The idea of using robots to keep dementia patients engaged and connected isn’t always an easy sell. Some people see it as an abdication of our social responsibilities. And then there are privacy concerns. The best robotic companions are personalized. They collect information about people’s lives, learn their likes and dislikes, and figure out when to approach them. That kind of data collection can be unnerving, not just for patients but also for medical staff. Lillian Hung, creator of the Innovation in Dementia care and Aging (IDEA) lab at the University of British Columbia in Vancouver, Canada, told one reporter about an incident that happened during a focus group at a care facility. She and her colleagues popped out for lunch. When they returned, they found that staff had unplugged the robot and placed a bag over its head. “They were worried it was secretly recording them,” she said.

On the other hand, robots have some advantages over humans in talking to people with dementia. Their attention doesn’t flag. They don’t get annoyed or angry when they have to repeat themselves. They can’t get stressed. 

What’s more, there are increasing numbers of people with dementia, and too few people to care for them. According to the latest report from the Alzheimer’s Association, we’re going to need more than a million additional care workers to meet the needs of people living with dementia between 2021 and 2031. That is the largest gap between labor supply and demand for any single occupation in the United States.

Have you been in an understaffed or poorly staffed memory care facility? I have. Patients are often sedated to make them easier to deal with. They get strapped into wheelchairs and parked in hallways. We barely have enough care workers to take care of the physical needs of people with dementia, let alone provide them with social connection and an enriching environment.

“Caregiving is not just about tending to someone’s bodily concerns; it also means caring for the spirit,” writes Kat McGowan in this beautiful Wired story about her parents’ dementia and the promise of social robots. “The needs of adults with and without dementia are not so different: We all search for a sense of belonging, for meaning, for self-actualization.”

If robots can enrich the lives of people with dementia even in the smallest way, and if they can provide companionship where none exists, that’s a win.

“We are currently at an inflection point, where it is becoming relatively easy and inexpensive to develop and deploy [cognitively assistive robots] to deliver personalized interventions to people with dementia, and many companies are vying to capitalize on this trend,” write a team of researchers from the University of California, San Diego, in a 2021 article in Proceedings of We Robot. “However, it is important to carefully consider the ramifications.”

Many of the more advanced social robots may not be ready for prime time, but the low-tech Golden Pup is readily available. My aunt’s illness has been progressing rapidly, and she occasionally gets frustrated and agitated. I’m hoping that Golden Pup might provide a welcome (and calming) distraction. Maybe it will spark joy during a time that has been incredibly confusing and painful for my aunt and uncle. Or maybe not. Certainly a robotic pup isn’t for everyone. Golden Pup may not be a dog. But I’m hoping it can be a friendly companion.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Robots are cool, and with new advances in AI they might also finally be useful around the house, writes Melissa Heikkilä. 

Social robots could help make personalized therapy more affordable and accessible to kids with autism. Karen Hao has the story.

Japan is already using robots to help with elder care, but in many cases they require as much work as they save. And reactions among the older people they’re meant to serve are mixed. James Wright wonders whether the robots are “a shiny, expensive distraction from tough choices about how we value people and allocate resources in our societies.” 

From around the web

A tiny probe can work its way through arteries in the brain to help doctors spot clots and other problems. The new tool could help surgeons make diagnoses, decide on treatment strategies, and provide assurance that clots have been removed. (Stat)

Richard Slayman, the first recipient of a pig kidney transplant, has died, although the hospital that performed the transplant says the death doesn’t seem to be linked to the kidney. (Washington Post)

EcoHealth, the virus-hunting nonprofit at the center of covid lab-leak theories, has been banned from receiving federal funding. (NYT)

In a first, scientists report that they can translate brain signals into speech without any vocalization or mouth movements, at least for a handful of words. (Nature)