This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
The study didn’t garner as much media attention as previous, much smaller studies that focused on helping people with paralysis walk. Tech that allows people to type slightly faster or put their hair in a ponytail unaided just doesn’t have the same allure. “The image of a paralyzed person getting up and walking is almost biblical,” Charles Liu, director of the Neurorestoration Center at the University of Southern California, once told a reporter.
For the people who have spinal cord injuries, however, incremental gains can have a huge impact on quality of life.
So today in The Checkup, let’s talk about this tech and who it serves.
In 2004, Kim Anderson-Erisman, a researcher at Case Western Reserve University, who also happens to be paralyzed, surveyed more than 600 people with spinal cord injuries. Wanting to better understand their priorities, she asked them to consider seven different functions—everything from hand and arm mobility to bowel and bladder function to sexual function. She asked respondents to rank these functions according to how big an impact recovery would have on their quality of life.
Walking was one of the functions, but it wasn’t the top priority for most people. Most quadriplegics put hand and arm function at the top of the list. For paraplegics, meanwhile, the top priority was sexual function. I interviewed Anderson-Erisman for a story I wrote in 2019 about research on implantable stimulators as a way to help people with spinal cord injuries walk. For many people, “not being able to walk is the easy part of spinal cord injury,” she told me. “[If] you don’t have enough upper-extremity strength or ability to take care of yourself independently, that’s a bigger problem than not being able to walk.”
One of the research groups I focused on was at the University of Louisville. When I visited in 2019, the team had recently made the news because two people with spinal cord injuries in one of their studies had regained the ability to walk, thanks to an implanted stimulator. “Experimental device helps paralyzed man walk the length of four football fields,” one headline had trumpeted.
But when I visited one of those participants, Jeff Marquis, in his condo in Louisville, I learned that walking was something he could only do in the lab. To walk he needed to hold onto parallel bars supported by other people and wear a harness to catch him if he fell. Even if he had extra help at home, there wasn’t enough room for the apparatus. Instead, he gets around his condo the same way he gets around outside his condo: in a wheelchair. Marquis does stand at home, but even that requires a bulky frame. And the standing he does is only for therapy. “I mostly just watch TV while I’m doing that,” he said.
That’s not to say the tech has been useless. The implant helped Marquis gain some balance, stamina, and trunk stability. “Trunk stability is kind of underrated in how much easier that makes every other activity I do,” he told me. “That’s the biggest thing that stays with me when I have [the stimulator] turned off.”
What’s exciting to me about this latest study is that the tech gave the participants skills they could use beyond the lab. And because the stimulator is external, it is likely to be more accessible and vastly cheaper. Yes, the newly enabled movements are small, but if you listen to the palpable excitement of one study participant as he demonstrates how he can move a small ball into a cup, you’ll appreciate that incremental gains are far from insignificant. As Melanie Reid, one of the participants in the latest trial, put it at a press conference last week: “There [are] no miracles in spinal injury, but tiny gains can be life-changing.”
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
In 2017, we hailed as a breakthrough technology electronic interfaces designed to reverse paralysis by reconnecting the brain and body. Antonio Regalado has the story.
An implanted stimulator changed John Mumford’s life, allowing him to once again grasp objects after a spinal cord injury left him paralyzed. But when the company that made the device folded, Mumford was left with few options for keeping the device running. “Limp limbs can be reanimated by technology, but they can be quieted again by basic market economics,” wrote Brian Bergstein in 2015.
In 2014, Courtney Humphries covered some of the rat research that laid the foundation for the technological developments that have allowed paralyzed people to walk.
From around the web
Lots of bird flu news this week. A second person in the US has tested positive for the illness after working with infected livestock. (NBC)
The livestock industry, which depends on shipping tens of millions of live animals, provides some ideal conditions for the spread of pathogens, including bird flu. (NYT)
Long read: How the death of a nine-year-old boy in Cambodia triggered a global H5N1 alert. (NYT)
You’ve heard about tracking viruses via wastewater. H5N1 is the first one we’re tracking via store-bought milk. (STAT)
The first organ transplants from pigs to humans have not ended well, but scientists are learning valuable lessons about what they need to do better. (Nature)
Another long read that’s worth your time: an inside look at just how long 3M knew about the pervasiveness of “forever chemicals.” (New Yorker)
Modern life is noisy. If you don’t like it, noise-canceling headphones can reduce the sounds in your environment. But they muffle sounds indiscriminately, so you can easily end up missing something you actually want to hear.
A new prototype AI system for such headphones aims to solve this. Called Target Speech Hearing, the system gives users the ability to select a person whose voice will remain audible even when all other sounds are canceled out.
Although the technology is currently a proof of concept, its creators say they are in talks to embed it in popular brands of noise-canceling earbuds and are also working to make it available for hearing aids.
“Listening to specific people is such a fundamental aspect of how we communicate and how we interact in the world with other humans,” says Shyam Gollakota, a professor at the University of Washington, who worked on the project. “But it can get really challenging, even if you don’t have any hearing loss issues, to focus on specific people when it comes to noisy situations.”
The same researchers previously managed to train a neural network to recognize and filter out certain sounds, such as babies crying, birds tweeting, or alarms ringing. But separating out human voices is a tougher challenge, requiring much more complex neural networks.
That complexity is a problem when AI models need to work in real time in a pair of headphones with limited computing power and battery life. To meet such constraints, the neural networks needed to be small and energy efficient. So the team used an AI compression technique called knowledge distillation. This meant taking a huge AI model that had been trained on millions of voices (the “teacher”) and having it train a much smaller model (the “student”) to imitate its behavior and performance to the same standard.
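Knowledge distillation itself can be sketched in miniature. The snippet below is a toy illustration, not the team’s actual training setup: the “teacher” here is just a fixed linear function standing in for a large trained model, and the “student” is a tiny linear model fitted to the teacher’s outputs by stochastic gradient descent.

```python
import random

# Hypothetical stand-in for a large pretrained "teacher" model:
# here, just a fixed function mapping a feature vector to a score.
def teacher(x):
    return 0.7 * x[0] - 0.3 * x[1] + 0.1

# A much smaller "student": two weights and a bias, trained by
# gradient descent to imitate the teacher's outputs.
random.seed(0)
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for step in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    target = teacher(x)                    # "soft" label from the teacher
    pred = w[0] * x[0] + w[1] * x[1] + b   # student's prediction
    err = pred - target
    # Gradient of the squared error with respect to each parameter
    w[0] -= lr * err * x[0]
    w[1] -= lr * err * x[1]
    b -= lr * err

# The student's parameters converge toward the teacher's coefficients,
# so the small model now mimics the big one on new inputs.
```

In a real system the student would be a compact neural network trained to match the teacher’s outputs on millions of voice samples, but the teacher-imitates-student loop is the same idea.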
The student was then taught to extract the vocal patterns of specific voices from the surrounding noise captured by microphones attached to a pair of commercially available noise-canceling headphones.
To activate the Target Speech Hearing system, the wearer holds down a button on the headphones for several seconds while facing the person to be focused on. During this “enrollment” process, the system captures an audio sample from both headphones and uses this recording to extract the speaker’s vocal characteristics, even when there are other speakers and noises in the vicinity.
These characteristics are fed into a second neural network running on a microcontroller computer connected to the headphones via USB cable. This network runs continuously, keeping the chosen voice separate from those of other people and playing it back to the listener. Once the system has locked onto a speaker, it keeps prioritizing that person’s voice, even if the wearer turns away. The more training data the system gains by focusing on a speaker’s voice, the better its ability to isolate it becomes.
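The enroll-then-filter pipeline described above can be caricatured in a few lines of Python. This is a toy sketch under invented assumptions, not the researchers’ neural network: `embed`, `enroll`, and `filter_stream` and the two-dimensional “audio features” are all made up for illustration, whereas the real system extracts learned speaker embeddings from microphone audio.

```python
import math

def embed(frame):
    # Hypothetical voice "fingerprint": in the real system a neural
    # network computes this; here we simply normalize the feature vector.
    norm = math.sqrt(sum(v * v for v in frame)) or 1.0
    return [v / norm for v in frame]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def enroll(frames):
    # "Enrollment": average the embeddings captured while the wearer
    # faces the target speaker for a few seconds.
    embs = [embed(f) for f in frames]
    return [sum(e[i] for e in embs) / len(embs) for i in range(len(embs[0]))]

def filter_stream(stream, target, threshold=0.9):
    # Keep frames whose embedding matches the enrolled voice; mute the rest.
    return [f if cosine(embed(f), target) >= threshold else [0.0] * len(f)
            for f in stream]

# Toy two-dimensional "audio features" for two speakers.
alice = [[1.0, 0.1], [0.9, 0.2]]
bob = [[0.1, 1.0], [0.2, 0.9]]

profile = enroll(alice)                  # lock onto Alice's voice
out = filter_stream(alice + bob, profile)  # Bob's frames come back muted
```

The design point this illustrates is the split the article describes: one expensive step (enrollment) runs once, and a cheap per-frame comparison then runs continuously on the headset’s limited hardware.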
For now, the system is only able to successfully enroll a targeted speaker whose voice is the only loud one present, but the team aims to make it work even when the loudest voice in a particular direction is not the target speaker.
Isolating a single voice in a loud environment is very tough, says Sefik Emre Eskimez, a senior researcher at Microsoft who works on speech and AI but was not involved in the research. “I know that companies want to do this,” he says. “If they can achieve it, it opens up lots of applications, particularly in a meeting scenario.”
While speech separation research tends to be more theoretical than practical, this work has clear real-world applications, says Samuele Cornell, a researcher at Carnegie Mellon University’s Language Technologies Institute, who did not work on the research. “I think it’s a step in the right direction,” Cornell says. “It’s a breath of fresh air.”
An animated video posted this week has a voice-over that sounds like a late-night TV ad, but the pitch is straight out of the far future. The arms of an octopus-like robotic surgeon swirl, swiftly removing the head of a dying man and placing it onto a young, healthy body.
This is BrainBridge, the animated video claims—“the world’s first revolutionary concept for a head transplant machine, which uses state-of-the-art robotics and artificial intelligence to conduct complete head and face transplantation.”
First posted on Tuesday, the video has millions of views, more than 24,000 comments on Facebook, and a content warning on TikTok for its grisly depictions of severed heads. A slick BrainBridge website has several job postings, including one for a “neuroscience team leader” and another for a “government relations adviser.” It is all convincing enough for the New York Post to announce that BrainBridge is “a biomedical engineering startup” and that “the company” plans a surgery within eight years.
We can report that BrainBridge is not a real company—it’s not incorporated anywhere. The video was made by Hashem Al-Ghaili, a Yemeni science communicator and film director who in 2022 made a viral video called “EctoLife,” about artificial wombs, that also left journalists scrambling to determine if it was real or not.
Yet BrainBridge is not merely a provocative work of art. This video is better understood as the first public billboard for a hugely controversial scheme to defeat death that’s recently been gaining attention among some life-extension proponents and entrepreneurs.
“It’s about recruiting newcomers to join the project,” says Al-Ghaili.
This morning, Al-Ghaili, who lives in Dubai, was up at 5 a.m., tracking the video as its viewership ballooned around social media. “I am monitoring its progress,” he says, but he insists he didn’t make the film for clicks: “Being viral is not the goal. I can be viral anytime. It’s pushing boundaries and testing feasibility.”
The video project was bankrolled in part by Alex Zhavoronkov, the founder of Insilico Medicine, a large AI drug discovery company, who is also a prominent figure in anti-aging research. After Zhavoronkov posted the video on his LinkedIn account, commenters noticed that it is his face on the two bodies shown in the video.
“I can confirm I helped design and fund a few things,” Zhavoronkov told MIT Technology Review in a WhatsApp message, in which he also claimed that “some important and famous people are supporting [it] financially.”
Zhavoronkov declined to name these individuals. He also didn’t respond when asked if the job ads—whose cookie-cutter descriptions of qualifications and responsibilities appear to have been written by an AI—are real roles or make-believe positions.
Aging bypass
What is certain is that head transplantation—or body transplant, as some prefer to call it—is a subject of growing, if speculative, interest in longevity circles: the kind inhabited by biohackers, techno-anarchists, and others on the fringes of biotechnology and the startup scene, who form the most dedicated cadre of extreme life-extensionists.
Many proponents of longer life spans will admit things don’t look good. Anti-aging medicine so far hasn’t achieved any breakthroughs. In fact, as research advances into the molecular details, the problem of death only looks more and more complicated. As we age, our billions of cells gradually succumb to the irreversible effects of entropy. Fixing that may never be possible.
By comparison, putting your head on a young body looks easy—a way to bypass aging in a single stroke, at least as long as your brain holds out. The idea was strongly endorsed in a technical road map put forward this year by the Longevity Biotech Fellowship, a group espousing radical life extension, which rated “body replacement” as the cheapest, fastest pathway to “solve aging.”
Will head transplants work? In a crude way, they already have. In the early 1970s, the American neurosurgeon Robert White performed a “cephalic exchange,” cutting off the head of a monkey, placing it on the body of another, and sewing together their circulatory systems. Reports suggest the head remained conscious, and able to see, for a few days before it died.
Most likely, a human head transplant would also be fatal. But even if you lived, you’d be a mind atop a paralyzed body, since exchanging heads means severing the spinal cord.
Yet head-swapping proponents can point to plausible solutions for that, too—a number of which appear in the BrainBridge video. In Europe, for instance, some paralyzed people have walked again after doctors bridged their spinal injuries with electronics. Other scientists in China are studying growth factors to regrow nerves.
Joined at the neck
As shocking as the video is, BrainBridge is in some ways overly conventional in its thinking. If you want to keep your brain going, why must it be on a human body? You might instead keep the head alive on a heart-lung machine—with an Elon Musk neural implant to let it surf the internet, for as long as it lives. Or consider how doctors hoping to solve the organ shortage have started putting hearts and kidneys from genetically engineered pigs into patients. If you don’t mind having a tail and four legs, maybe your head could be placed onto a pig’s body.
Let’s take it a step further. Why does the body “donor” have to be dead at all? Anatomically, it’s possible to have two heads. There are conjoined twins who share one body. If your spouse were diagnosed with a fatal cancer, you would surely welcome his or her head next to yours, if it allowed their mind to live on. After all, the concept of a “living donor” is widely accepted in transplant medicine already, and married couples are often said to be joined at the hip. Why not at the neck, too?
If the video is an attempt to take the public’s temperature and gauge reactions, it’s been successful. Since it was posted, thousands of commenters have explored the moral dilemmas posed by the procedure. For instance, if someone is left brain dead—say, in a motorcycle accident—surgeons can use their heart, liver, and kidneys to save multiple other people. Would it be ethical to use a body to help only one person?
“The most common question is ‘Where do you get the bodies from?’” says Al-Ghaili. The BrainBridge website answers this question by stating it will source “ethically grown” unconscious bodies from EctoLife, the fictional artificial-womb company from Al-Ghaili’s previous video. He also suggests that people undergoing euthanasia because of chronic pain, or even psychiatric problems, could provide an additional supply.
For the most part, the public seems to hate the idea. On Facebook, a pastor, Matthew W. Tucker, called the concept “disgusting, immoral, unnecessary, pagan, demonic and outright idiotic,” adding that “they have no idea what they are doing.” A poster from the Middle East apologized for the video, joking that its creator “is one of our psychiatric patients who escaped last night.” “We urge the public to go about [their] business as everything is under control,” this person said.
Al-Ghaili is monitoring the feedback with interest and some concern. “The negativity is huge, to be honest,” he says. “But behind that are the ones who are sending emails. These are people who want to invest, or who are expressing their personal health challenges. These are the ones who matter.”
He says if suitable job applicants appear, the backers of BrainBridge are prepared to fund a small technical feasibility study to see if their idea has legs.
Meta has seen strikingly little AI-generated misinformation around the 2024 elections despite major votes in countries such as Indonesia, Taiwan, and Bangladesh, said the company’s president of global affairs, Nick Clegg, on Wednesday.
“It is there; it is discernible. It’s really not happening on … a volume or a systemic level,” he said. Clegg said Meta has seen attempts at interference in, for example, the Taiwanese election, but that the scale of that interference is at a “manageable amount.”
As voters head to the polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and the prospect that malicious actors will use generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections—for example, when it failed to prevent the January 6 rioters from organizing on its platforms.
Clegg defended the company’s efforts at preventing violent groups from organizing, but he also stressed the difficulty of keeping up. “This is a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said.
Clegg argued that compared with 2016, the company is now “utterly different” when it comes to moderating election content. Since then, it has removed over 200 “networks of coordinated inauthentic behavior,” he said. The company now relies on fact checkers and AI technology to identify unwanted groups on its platforms.
Earlier this year, Meta announced it would label AI-generated images on Facebook, Instagram, and Threads. Meta has started adding visible markers to such images, as well as invisible watermarks and metadata in the image file. The watermarks will be added to images created using Meta’s generative AI systems or ones that carry invisible industry-standard markers. The company says its measures are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.
But at the same time, Clegg admitted that tools to detect AI-generated content are still imperfect and immature. Watermarks in AI systems are not adopted industry-wide, and they are easy to tamper with. They are also hard to implement robustly in AI-generated text, audio, and video.
Ultimately that should not matter, Clegg said, because Meta’s systems should be able to catch and detect mis- and disinformation regardless of its origins.
“AI is a sword and a shield in this,” he said.
Clegg also defended the company’s decision to allow ads claiming that the 2020 US election was stolen, noting that these kinds of claims are common throughout the world and saying it’s “not feasible” for Meta to relitigate past elections. Just this month, eight state secretaries of state wrote a letter to Meta CEO Mark Zuckerberg arguing that the ads could still be dangerous, and that they have the potential to further threaten public trust in elections and the safety of individual election workers.
You can watch the full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum below.
Artificial intelligence has brought a big boost in productivity—to the criminal underworld.
Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro.
Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”
Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably.
That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using.
Here are five ways criminals are using AI now.
Phishing
The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes, says Mislav Balunović, an AI security researcher at ETH Zurich. Researchers have found that the rise of ChatGPT has been accompanied by a huge spike in the number of phishing emails.
Spam-generating services, such as GoMail Pro, have ChatGPT integrated into them, which allows criminal users to translate or improve the messages sent to victims, says Ciancaglini. OpenAI’s policies restrict people from using its products for illegal activities, but that is difficult to police in practice, because many innocent-sounding prompts could be used for malicious purposes too.
OpenAI says it uses a mix of human reviewers and automated systems to identify and enforce against misuse of its models, and issues warnings, temporary suspensions and bans if users violate the company’s policies.
“We take the safety of our products seriously and are continually improving our safety measures based on how people use our products,” a spokesperson for OpenAI told us. “We are constantly working to make our models safer and more robust against abuse and jailbreaks, while also maintaining the models’ usefulness and task performance,” they added.
In a report from February, OpenAI said it had closed five accounts associated with state-affiliated malicious actors.
In the past, so-called Nigerian prince scams, in which someone promises the victim a large sum of money in exchange for a small up-front payment, were relatively easy to spot because the English in the messages was clumsy and riddled with grammatical errors, Ciancaglini says. Language models allow scammers to generate messages that sound like something a native speaker would have written.
“English speakers used to be relatively safe from non-English-speaking [criminals] because you could spot their messages,” Ciancaglini says. That’s not the case anymore.
Thanks to better AI translation, different criminal groups around the world can also communicate better with each other. The risk is that they could coordinate large-scale operations that span beyond their nations and target victims in other countries, says Ciancaglini.
Deepfake audio scams
Generative AI has allowed deepfake development to take a big leap forward, with synthetic images, videos, and audio looking and sounding more realistic than ever. This has not gone unnoticed by the criminal underworld.
Earlier this year, an employee in Hong Kong was reportedly scammed out of $25 million after cybercriminals used a deepfake of the company’s chief financial officer to convince the employee to transfer the money to the scammer’s account. “We’ve seen deepfakes finally being marketed in the underground,” says Ciancaglini. His team found people on platforms such as Telegram showing off their “portfolio” of deepfakes and selling their services for as little as $10 per image or $500 per minute of video. One of the most popular people for criminals to deepfake is Elon Musk, says Ciancaglini.
And while deepfake videos remain complicated to make and easier for humans to spot, that is not the case for audio deepfakes. They are cheap to make and require only a couple of seconds of someone’s voice—taken, for example, from social media—to generate something scarily convincing.
In the US, there have been high-profile cases where people have received distressing calls from loved ones saying they’ve been kidnapped and asking for money to be freed, only for the caller to turn out to be a scammer using a deepfake voice recording.
“People need to be aware that now these things are possible, and people need to be aware that now the Nigerian king doesn’t speak in broken English anymore,” says Ciancaglini. “People can call you with another voice, and they can put you in a very stressful situation,” he adds.
There are some ways for people to protect themselves, he says. Ciancaglini recommends that loved ones agree on a regularly changing secret safe word that can help confirm the identity of the person on the other end of the line.
“I password-protected my grandma,” he says.
Bypassing identity checks
Another way criminals are using deepfakes is to bypass “know your customer” verification systems. Banks and cryptocurrency exchanges use these systems to verify that their customers are real people. They require new users to take a photo of themselves holding a physical identification document in front of a camera. But criminals have started selling apps on platforms such as Telegram that allow people to get around the requirement.
They work by offering a fake or stolen ID and superimposing a deepfake image on top of a real person’s face to trick the verification system on an Android phone’s camera. Ciancaglini has found examples where people are offering these services for the cryptocurrency exchange Binance for as little as $70.
“They are still fairly basic,” Ciancaglini says. The techniques they use are similar to Instagram filters, where someone else’s face is swapped for your own.
“What we can expect in the future is that [criminals] will use actual deepfakes … so that you can do more complex authentication,” he says.
An example of a stolen ID and a criminal using face swapping technology to bypass identity verification systems.
Jailbreak-as-a-service
If you ask most AI systems how to make a bomb, you won’t get a useful response.
That’s because AI companies have put in place various safeguards to prevent their models from spewing harmful or dangerous information. Instead of building their own AI models without these safeguards, which is expensive, time-consuming, and difficult, cybercriminals have begun to embrace a new trend: jailbreak-as-a-service.
Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails.
Services such as EscapeGPT and BlackhatGPT offer anonymized access to language-model APIs and jailbreaking prompts that update frequently. To fight back against this growing cottage industry, AI companies such as OpenAI and Google frequently have to plug security holes that could allow their models to be abused.
Jailbreaking services use different tricks to break through safety mechanisms, such as posing hypothetical questions or asking questions in foreign languages. There is a constant cat-and-mouse game between AI companies trying to prevent their models from misbehaving and malicious actors coming up with ever more creative jailbreaking prompts.
These services are hitting the sweet spot for criminals, says Ciancaglini.
“Keeping up with jailbreaks is a tedious activity. You come up with a new one, then you need to test it, then it’s going to work for a couple of weeks, and then OpenAI updates their model,” he adds. “Jailbreaking is a super-interesting service for criminals.”
Doxxing and surveillance
AI language models are a perfect tool not only for phishing but also for doxxing (revealing private, identifying information about someone online), says Balunović. This is because AI language models are trained on vast amounts of internet data, including personal data, and can deduce where, for example, someone might be located.
As an example of how this works, you could ask a chatbot to pretend to be a private investigator with experience in profiling. Then you could ask it to analyze text the victim has written, and infer personal information from small clues in that text—for example, their age based on when they went to high school, or where they live based on landmarks they mention on their commute. The more information there is about them on the internet, the more vulnerable they are to being identified.
Balunović was part of a team of researchers that found late last year that large language models, such as GPT-4, Llama 2, and Claude, are able to infer sensitive information such as people’s ethnicity, location, and occupation purely from mundane conversations with a chatbot. In theory, anyone with access to these models could use them this way.
Since their paper came out, new services that exploit this feature of language models have emerged.
While the existence of these services doesn’t indicate criminal activity, it points to the new capabilities malicious actors could get their hands on. And if regular people can build surveillance tools like this, state actors probably have far better systems, Balunović says.
“The only way for us to prevent these things is to work on defenses,” he says.
Companies should invest in data protection and security, he adds.
For individuals, increased awareness is key. People should think twice about what they share online and decide whether they are comfortable with their personal details being used in language models, Balunović says.
Humans are complicated beings. The ways we communicate are multilayered, and psychologists have devised many kinds of tests to measure our ability to infer meaning and understanding from interactions with each other.
AI models are getting better at these tests. New research published today in Nature Human Behaviour found that some large language models (LLMs) perform as well as, and in some cases better than, humans when presented with tasks designed to test the ability to track people’s mental states, known as “theory of mind.”
This doesn’t mean AI systems are actually able to work out how we’re feeling. But it does demonstrate that these models are performing better and better in experiments designed to assess abilities that psychologists believe are unique to humans. To learn more about the processes behind LLMs’ successes and failures in these tasks, the researchers wanted to apply the same systematic approach they use to test theory of mind in humans.
In theory, the better AI models are at mimicking humans, the more useful and empathetic they can seem in their interactions with us. Both OpenAI and Google announced supercharged AI assistants last week; GPT-4o and Astra are designed to deliver much smoother, more naturalistic responses than their predecessors. But we must avoid falling into the trap of believing that their abilities are humanlike, even if they appear that way.
“We have a natural tendency to attribute mental states and mind and intentionality to entities that do not have a mind,” says Cristina Becchio, a professor of neuroscience at the University Medical Center Hamburg-Eppendorf, who worked on the research. “The risk of attributing a theory of mind to large language models is there.”
Theory of mind is a hallmark of emotional and social intelligence that allows us to infer people’s intentions and engage and empathize with one another. Most children pick up these kinds of skills between three and five years of age.
The researchers tested two families of large language models, OpenAI’s GPT (GPT-3.5 and GPT-4) and Meta’s Llama 2 (three versions), on tasks designed to test theory of mind in humans, including identifying false beliefs, recognizing faux pas, and understanding what is being implied rather than said directly. They also tested 1,907 human participants in order to compare the sets of scores.
The team conducted five types of tests. The first, the hinting task, is designed to measure someone’s ability to infer someone else’s real intentions through indirect comments. The second, the false-belief task, assesses whether someone can infer that someone else might reasonably be expected to believe something they happen to know isn’t the case. Another test measured the ability to recognize when someone is making a faux pas, while a fourth test consisted of telling strange stories, in which a protagonist does something unusual, in order to assess whether someone can explain the contrast between what was said and what was meant. They also included a test of whether people can comprehend irony.
The AI models were given each test 15 times in separate chats, so that they would treat each request independently, and their responses were scored in the same manner used for humans. The researchers then tested the human volunteers, and the two sets of scores were compared.
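The protocol described above can be sketched in a few lines. This is a hypothetical illustration, not the study’s actual evaluation harness: `ask_model`, `score_response`, and the baseline numbers are all stand-ins I have invented to show the shape of the procedure (repeated independent runs, human-style scoring, comparison against a human baseline).

```python
import statistics

def run_protocol(ask_model, score_response, items, n_runs=15):
    """Pose each test item n_runs times in independent chats and
    return the mean score per test."""
    results = {}
    for test_name, prompt in items.items():
        scores = []
        for _ in range(n_runs):
            reply = ask_model(prompt)  # fresh chat: no shared history
            scores.append(score_response(test_name, reply))
        results[test_name] = statistics.mean(scores)
    return results

# Toy stand-ins (the real study used GPT and Llama models and human scorers):
items = {"hinting": "...", "false_belief": "...", "irony": "..."}
model_scores = run_protocol(lambda p: "stub reply",
                            lambda t, r: 1.0,  # rubric stub: full marks
                            items)

# Invented human baseline, for illustration only
human_baseline = {"hinting": 0.82, "false_belief": 0.91, "irony": 0.75}
above_human = [t for t, s in model_scores.items() if s > human_baseline[t]]
```

The key design point the article describes is the fresh chat per run: posing each item in a new conversation keeps the 15 repetitions statistically independent, so the per-test average is not contaminated by earlier answers in the same context window.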
Both versions of GPT performed at, or sometimes above, human averages in tasks that involved indirect requests, misdirection, and false beliefs, while GPT-4 outperformed humans in the irony, hinting, and strange stories tests. Llama 2’s three models performed below the human average.
However, Llama 2, the biggest of the three Meta models tested, outperformed humans when it came to recognizing faux pas scenarios, whereas GPT consistently provided incorrect responses. The authors believe this is due to GPT’s general aversion to generating conclusions about opinions, because the models largely responded that there wasn’t enough information for them to answer one way or another.
“These models aren’t demonstrating the theory of mind of a human, for sure,” says James Strachan, the study’s lead author and a researcher at the University Medical Center Hamburg-Eppendorf. “But what we do show is that there’s a competence here for arriving at mentalistic inferences and reasoning about characters’ or people’s minds.”
One reason the LLMs may have performed as well as they did was that these psychological tests are so well established, and were therefore likely to have been included in their training data, says Maarten Sap, an assistant professor at Carnegie Mellon University, who did not work on the research. “It’s really important to acknowledge that when you administer a false-belief test to a child, they have probably never seen that exact test before, but language models might,” he says.
Ultimately, we still don’t understand how LLMs work. Research like this can help deepen our understanding of what these kinds of models can and cannot do, says Tomer Ullman, a cognitive scientist at Harvard University, who did not work on the project. But it’s important to bear in mind what we’re really measuring when we set LLMs tests like these. If an AI outperforms a human on a test designed to measure theory of mind, it does not mean that AI has theory of mind. “I’m not anti-benchmark, but I am part of a group of people who are concerned that we’re currently reaching the end of usefulness in the way that we’ve been using benchmarks,” Ullman says. “However this thing learned to pass the benchmark, it’s not— I don’t think—in a human-like way.”
Fourteen years ago, a journalist named Melanie Reid attempted a jump on horseback and fell. The accident left her mostly paralyzed from the chest down. Eventually she regained control of her right hand, but her left remained “useless,” she told reporters at a press conference last week.
Now, thanks to a new noninvasive device that delivers electrical stimulation to the spinal cord, she has regained some control of her left hand. She can use it to sweep her hair into a ponytail, scroll on a tablet, and even squeeze hard enough to release a seatbelt latch. These may seem like small wins, but they’re crucial, Reid says.
“Everyone thinks that [after] spinal injury, all you want to do is be able to walk again. But if you’re a tetraplegic or a quadriplegic, what matters most is working hands,” she said.
Reid received the device, called ARCex, as part of a 60-person clinical trial. She and the other participants completed two months of physical therapy, followed by two months of physical therapy combined with stimulation. The results, published today in Nature Medicine, show that the vast majority of participants benefited. By the end of the four-month trial, 72% experienced some improvement in both strength and function of their hands or arms when the stimulator was turned off. Ninety percent had improvement in at least one of those measures. And 87% reported an improvement in their quality of life.
This isn’t the first study to test whether noninvasive stimulation of the spine can help people who are paralyzed regain function in their upper body, but it is the largest to date, spanning more rehabilitation centers and enrolling more participants than any previous trial, says Igor Lavrov, a neuroscientist at the Mayo Clinic in Minnesota, who was not involved in the study. He points out, however, that the therapy seems to work best in people who have some ability to move below the site of their injury.
The trial was the last hurdle before the researchers behind the device could request regulatory approval, and they hope it might be approved in the US by the end of the year.
ARCex consists of a small stimulator connected by wires to electrodes placed on the spine—in this case, in the area responsible for hand and arm control, just below the neck. It was developed by Onward Medical, a company cofounded by Grégoire Courtine, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne and now chief scientific officer at the company.
The stimulation won’t work in the small percentage of people who have no remaining connection between the brain and spine below their injury. But for people who still have a connection, the stimulation appears to make voluntary movements easier by making the nerves more likely to transmit a signal. Studies over the past couple of decades in animals suggest that the stimulation activates remaining nerve fibers and, over time, helps new nerves grow. That’s why the benefits persist even when the stimulator is turned off.
The big advantage of an external stimulation system over an implant is that it doesn’t require surgery, which makes using the device less of a commitment. “There are many, many people who are not interested in invasive technologies,” said Edelle Field-Fote, director of research on spinal cord injury at the Shepherd Center, at the press conference. An external device is also likely to be cheaper than any surgical options, although the company hasn’t yet set a price on ARCex.
“What we’re looking at here is a device that integrates really seamlessly with the physical therapy and occupational therapy that’s already offered in the clinic,” said Chet Moritz, an engineer and neuroscientist at the University of Washington in Seattle, at the press conference. The rehab that happens soon after the injury is crucial, because that’s when the opportunity for recovery is greatest. “Being able to bring that function back without requiring a surgery could be life-changing for the majority of people with spinal cord injury,” he adds.
Reid wishes she could have used the device soon after her injury, but she is astonished by the amount of function she was able to regain after all this time. “After 14 years, you think, well, I am where I am and nothing’s going to change,” she says. So to suddenly find she had strength and power in her left hand—“It was extraordinary,” she says.
Onward is also developing implantable devices, which can deliver stronger, more targeted stimulation and thus could be effective even in people with complete paralysis. The company hopes to launch a trial of those next year.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
I’m ready for summer, but if this year is anything like last year, it’s going to be a doozy. In fact, the summer of 2023 in the Northern Hemisphere was the hottest in over 2,000 years, according to a new study released this week.
If you’ve been following the headlines, you probably already know that last year was a hot one. But I was gobsmacked by this paper’s title when it came across my desk. The warmest in 2,000 years—how do we even know that?
There weren’t exactly thermometers around in the year 1, so scientists have to get creative when it comes to comparing our climate today with that of centuries, or even millennia, ago. Here’s how our world stacks up against the climate of the past, how we know, and why it matters for our future.
Today, there are thousands and thousands of weather stations around the globe, tracking the temperature from Death Valley to Mount Everest. So there’s plenty of data to show that 2023 was, in a word, a scorcher.
But scientists decided to look even further back into the past for a year that could compare to our current temperatures. To do so, they turned to trees, which can act as low-tech weather stations.
The concentric rings inside a tree are evidence of the plant’s yearly growth cycles. Lighter rings correspond to quick growth over the spring and summer, while darker rings mark slower growth in the fall and winter. Count the pairs of light and dark rings, and you can tell how many years a tree has lived.
Trees tend to grow faster during warm, wet years and slower during colder ones. So scientists can not only count the rings but measure their thickness, and use that as a gauge for how warm any particular year was. They also look at factors like density and track different chemical signatures found inside the wood. You don’t even need to cut down a tree to get its help with climatic studies—you can just drill out a small cylinder from the tree’s center, called a core, and study the patterns.
The oldest living trees allow us to peek a few centuries into the past. Beyond that, it’s a matter of cross-referencing the patterns on dead trees with living ones, extending the record back in time like putting a puzzle together.
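That cross-referencing step, known to dendrochronologists as cross-dating, can be sketched as a simple pattern-matching problem: slide an undated ring-width series from a dead tree along a dated “master” series built from living trees, and accept the offset where the two overlap most closely. This is a toy sketch with invented numbers, not the statistical procedure the study’s authors actually used.

```python
def correlation(a, b):
    """Pearson correlation of two equal-length ring-width series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def best_offset(master, undated):
    """Slide the undated series along the dated master series and
    return (offset, correlation) for the closest match."""
    best = (None, -2.0)
    for off in range(len(master) - len(undated) + 1):
        window = master[off:off + len(undated)]
        r = correlation(window, undated)
        if r > best[1]:
            best = (off, r)
    return best

# Invented ring widths (mm): a dated master series and rings from a dead tree
master = [1.2, 0.8, 1.5, 0.9, 1.1, 1.8, 0.7, 1.3, 1.0, 1.6]
dead_tree = [1.5, 0.9, 1.1, 1.8, 0.7]

offset, r = best_offset(master, dead_tree)
```

Real cross-dating works with thousands of series, detrends them to remove age effects, and demands long overlaps before accepting a match, but the core idea is this alignment by correlation.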
It’s taken several decades of work and hundreds of scientists to develop the records that researchers used for this new paper, said Max Torbenson, one of the authors of the study, on a press call. There are over 10,000 trees from nine regions across the Northern Hemisphere represented, allowing the researchers to draw conclusions about individual years over the past two millennia. The year 246 CE once held the crown for the warmest summer in the Northern Hemisphere in the last 2,000 years. But 25 of the last 28 years have beaten that record, Torbenson says, and 2023’s summer tops them all.
These conclusions are limited to the Northern Hemisphere, since there are only a few tree ring records from the Southern Hemisphere, says Jan Esper, lead author of the new study. And using tree rings doesn’t work very well for the tropics because seasons look different there, he adds. Since there’s no winter, there’s usually not as reliable an alternating pattern in tropical tree rings, though some trees do have annual rings that track the wet and dry periods of the year.
Paleoclimatologists, who study ancient climates, can use other methods to get a general idea of what the climate looked like even earlier—tens of thousands to millions of years ago.
The biggest difference between the new study using tree rings and methods of looking back further into the past is the precision. Scientists can, with reasonable certainty, use tree rings to draw conclusions about individual years in the Northern Hemisphere (536 CE was the coldest, for instance, likely because of volcanic activity). Any information from further back than the past couple of thousand years will be more of a general trend than a specific data point representing a single year. But those records can still be very useful.
The oldest glaciers on the planet are at least a million years old, and scientists can drill down into the ice for samples. By examining the ratio of gases like oxygen, carbon dioxide, and nitrogen inside these ice cores, researchers can figure out the temperature of the time corresponding to the layers in the glacier. The oldest continuous ice-core record, which was collected in Antarctica, goes back about 800,000 years.
Researchers can use fossils to look even further back into Earth’s temperature record. For one 2020 study, researchers drilled into the seabed and looked at the sediment and tiny preserved shells of ancient organisms. From the chemical signatures in those samples, they found that the temperatures we might be on track to record may be hotter than anything the planet has experienced on a global scale in tens of millions of years.
It’s a bit sobering to know that we’re changing the planet in such a dramatic way.
The good news is, we know what we need to do to turn things around: cut emissions of planet-warming gases like carbon dioxide and methane. The longer we wait, the more expensive and difficult it will be to stop warming and reverse it, as Esper said on the press call: “We should do as much as possible, as soon as possible.”
Now read the rest of The Spark
Related reading
Last year broke all sorts of climate records, from emissions to ocean temperatures. For more on the data, check out this story from December.
Readers chose thermal batteries as the 11th Breakthrough Technology of 2024. If you want to hear more about what thermal batteries are, how they work, and why this all matters, join us for the latest in our Roundtables series of online events, where I’ll be getting into the nitty-gritty details and answering some audience questions.
This event is exclusively for subscribers, so subscribe if you haven’t already, and then register here to join us tomorrow, May 16, at noon Eastern time. Hope to see you there!
Keeping up with climate
Scientists just recorded the largest ever annual leap in the amount of carbon dioxide in the atmosphere. The concentration of the planet-warming gas in March 2024 was 4.7 parts per million higher than it was a year before. (The Guardian)
Tesla has reportedly begun rehiring some of the workers who were laid off from its charging team in recent weeks. (Bloomberg)
→ To catch up on what’s going on at Tesla, and what it means for the future of EV charging and climate tech more broadly, check out the newsletter from last week if you missed it. (MIT Technology Review)
A new rule could spur thousands of miles of new power lines, making it easier to add renewables to the grid in the US. The Federal Energy Regulatory Commission will require grid operators to plan 20 years ahead, considering things like the speed of wind and solar installations. (New York Times)
Where does carbon dioxide go after it’s been vacuumed out of the atmosphere? Here are 10 options. (Latitude Media)
Ocean temperatures have been extremely high, shattering records over the past year. All that heat could help fuel a particularly busy upcoming hurricane season. (E&E News)
New tariffs in the US will tack on additional costs to a wide range of Chinese imports, including batteries and solar cells. The tariff on EVs will take a particularly drastic jump, going from 27.5% to 102.5%. (Associated Press)
A reporter took a trip to the Beijing Auto Show and drove dozens of EVs. His conclusion? Chinese EVs are advancing much faster than Western automakers can keep up with. (InsideEVs)
Harnessing solar power via satellites in space and beaming it down to Earth is a tempting dream. But the reality, as you might expect, is probably not so rosy. (IEEE Spectrum)
No matter who he called—his mother, his father, his brother, his cousins—the phone would just go to voicemail. Cell service was out around Maui as devastating wildfires swept through the Hawaiian island. But while Raven Imperial kept hoping for someone to answer, he couldn’t keep a terrifying thought from sneaking into his mind: What if his family members had perished in the blaze? What if all of them were gone?
Hours passed; then days. All Raven knew at that point was this: there had been a wildfire on August 8, 2023, in Lahaina, where his multigenerational, tight-knit family lived. But from where he was currently based in Northern California, Raven was in the dark. Had his family evacuated? Were they hurt? He watched from afar as horrifying video clips of Front Street burning circulated online.
Much of the area around Lahaina’s Pioneer Mill Smokestack was totally destroyed by wildfire.
ALAMY
The list of missing residents meanwhile climbed into the hundreds.
Raven remembers how frightened he felt: “I thought I had lost them.”
Raven had spent his youth in a four-bedroom, two-bathroom, cream-colored home on Kopili Street that had long housed not just his immediate family but also around 10 to 12 renters, since home prices were so high on Maui. When he and his brother, Raphael Jr., were kids, their dad put up a basketball hoop outside where they’d shoot hoops with neighbors. Raphael Jr.’s high school sweetheart, Christine Mariano, later moved in, and when the couple had a son in 2021, they raised him there too.
From the initial news reports and posts, it seemed as if the fire had destroyed the Imperials’ entire neighborhood near the Pioneer Mill Smokestack—a 225-foot-high structure left over from the days of Maui’s sugar plantations, where Raven’s grandfather had worked as an immigrant from the Philippines in the mid-1900s.
Then, finally, on August 11, a call to Raven’s brother went through. He’d managed to get a cell signal while standing on the beach.
“Is everyone okay?” Raven asked.
“We’re just trying to find Dad,” Raphael Jr. told his brother.
From his current home in Northern California, Raven Imperial spent days not knowing what had happened to his family in Maui.
WINNI WINTERMEYER
In the three days following the fire, the rest of the family members had slowly found their way back to each other. Raven would learn that most of his immediate family had been separated for 72 hours: Raphael Jr. had been marooned in Kaanapali, four miles north of Lahaina; Christine had been stuck in Wailuku, more than 20 miles away; both young parents had been separated from their son, who escaped with Christine’s parents. Raven’s mother, Evelyn, had also been in Kaanapali, though not where Raphael Jr. had been.
But no one was in contact with Rafael Sr. Evelyn had left their home around noon on the day of the fire and headed to work. That was the last time she had seen him. The last time they had spoken was when she called him just after 3 p.m. and asked: “Are you working?” He replied “No,” before the phone abruptly cut off.
“Everybody was found,” Raven says. “Except for my father.”
Within the week, Raven boarded a plane and flew back to Maui. He would keep looking for him, he told himself, for as long as it took.
That same week, Kim Gin was also on a plane to Maui. It would take half a day to get there from Alabama, where she had moved after retiring from the Sacramento County Coroner’s Office in California a year earlier. But Gin, now an independent consultant on death investigations, knew she had something to offer the response teams in Lahaina. Of all the forensic investigators in the country, she was one of the few who had experience in the immediate aftermath of a wildfire on the vast scale of Maui’s. She was also one of the rare investigators well versed in employing rapid DNA analysis—an emerging but increasingly vital scientific tool used to identify victims in unfolding mass-casualty events.
Gin started her career in Sacramento in 2001 and was working as the coroner 17 years later when Butte County, California, close to 90 miles north, erupted in flames. She had worked fire investigations before, but nothing like the Camp Fire, which burned more than 150,000 acres—an area larger than the city of Chicago. The tiny town of Paradise, the epicenter of the blaze, didn’t have the capacity to handle the rising death toll. Gin’s office had a refrigerated box truck and a 52-foot semitrailer, as well as a morgue that could handle a couple of hundred bodies.
Kim Gin, the former Sacramento County coroner, had worked fire investigations in her career, but nothing prepared her for the 2018 Camp Fire.
BRYAN TARNOWSKI
“Even though I knew it was a fire, I expected more identifications by fingerprints or dental [records]. But that was just me being naïve,” she says. She quickly realized that putting names to the dead, many burned beyond recognition, would rely heavily on DNA.
“The problem then became how long it takes to do the traditional DNA [analysis],” Gin explains, speaking to a significant and long-standing challenge in the field—and the reason DNA identification has long been something of a last resort following large-scale disasters.
While more conventional identification methods—think fingerprints, dental information, or matching something like a knee replacement to medical records—can be a long, tedious process, they don’t take nearly as long as traditional DNA testing.
Historically, the process of making genetic identifications would often stretch on for months, even years. In fires and other situations that result in badly degraded bone or tissue, it can become even more challenging and time consuming to process DNA, which traditionally involves reading the 3 billion base pairs of the human genome and comparing samples found in the field against samples from a family member. Meanwhile, investigators frequently need equipment from the US Department of Justice or the county crime lab to test the samples, so backlogs often pile up.
A supply kit with swabs, gloves, and other items needed to take a DNA sample in the field.
A demo chip for ANDE’s rapid DNA box.
This creates a wait that can be horrendous for family members. Death certificates, federal assistance, insurance money—“all that hinges on that ID,” Gin says. Not to mention the emotional toll of not knowing if their loved ones are alive or dead.
But over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed. The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace more swiftly than ever before.
The key innovation driving this progress has been rapid DNA analysis, a methodology that focuses on just over two dozen regions of the genome. The 2018 Camp Fire was the first time the technology was used in a large, live disaster setting, and the first time it was used as the primary way to identify victims. The technology—deployed in small high-tech field devices developed by companies like industry leader ANDE, or in a lab with other rapid DNA techniques developed by Thermo Fisher—is increasingly being used by the US military on the battlefield, and by the FBI and local police departments after sexual assaults and in instances where confirming an ID is challenging, like cases of missing or murdered Indigenous people or migrants. Yet arguably the most effective way to use rapid DNA is in incidents of mass death. In the Camp Fire, 22 victims were identified using traditional methods, while rapid DNA analysis helped with 62 of the remaining 63 victims; it has also been used in recent years following hurricanes and floods, and in the war in Ukraine.
Tiffany Roy, a forensic DNA expert with consulting company ForensicAid, says she’d be concerned about deploying the technology in a crime scene, where quality evidence is limited and can be quickly “exhausted” by well-meaning investigators who are “not trained DNA analysts.” But, on the whole, Roy and other experts see rapid DNA as a major net positive for the field. “It is definitely a game-changer,” adds Sarah Kerrigan, a professor of forensic science at Sam Houston State University and the director of its Institute for Forensic Research, Training, and Innovation.
But back in those early days after the Camp Fire, all Gin knew was that nearly 1,000 people had been listed as missing, and she was tasked with helping to identify the dead. “Oh my goodness,” she remembers thinking. “These families are going to have to wait a long period of time to get identification. How do we make this go faster?”
Ten days
One flier pleading for information about “Uncle Raffy,” as people in the community knew Rafael Sr., was posted on a brick-red stairwell outside Paradise Supermart, a Filipino store and restaurant in Kahului, 25 miles away from the destruction. In it, just below the words “MISSING Lahaina Victim,” the 63-year-old grandfather smiled with closed lips, wearing a blue Hawaiian shirt, his right hand curled in the shaka sign, thumb and pinky pointing out.
Raven remembers how hard his dad, Rafael, worked. His three jobs took him all over town and earned him the nickname “Mr. Aloha.”
COURTESY OF RAVEN IMPERIAL
“Everybody knew him from restaurant businesses,” Raven says. “He was all over Lahaina, very friendly to everybody.” Raven remembers how hard his dad worked, juggling three jobs: as a draft tech for Anheuser-Busch, setting up services and delivering beer all across town; as a security officer at Allied Universal security services; and as a parking booth attendant at the Sheraton Maui. He connected with so many people that coworkers, friends, and other locals gave him another nickname: “Mr. Aloha.”
Raven also remembers how his dad had always loved karaoke, where he would sing “My Way,” by Frank Sinatra. “That’s the only song that he would sing,” Raven says. “Like, on repeat.”
Since their home had burned down, the Imperials ran their search out of a rental unit in Kihei, which was owned by a local woman one of them knew through her job. The woman had opened her rental to three families in all. It quickly grew crowded with side-by-side beds and piles of donations.
Each day, Evelyn waited for her husband to call.
She managed to catch up with one of their former tenants, who recalled asking Rafael Sr. to leave the house on the day of the fires. But she did not know if he actually did. Evelyn spoke to other neighbors who also remembered seeing Rafael Sr. that day; they told her that they had seen him go back into the house. But they too did not know what happened to him after.
A friend of Raven’s who got into the largely restricted burn zone told him he’d spotted Rafael Sr.’s Toyota Tacoma on the street, not far from their house. He sent a photo. The pickup was burned out, but a passenger-side door was open. The family wondered: Could he have escaped?
Evelyn called the Red Cross. She called the police. Nothing. They waited and hoped.
Back in Paradise in 2018, as Gin worried about the scores of waiting families, she learned there might in fact be a better way to get a positive ID—and a much quicker one. A company called ANDE Rapid DNA had already volunteered its services to the Butte County sheriff and promised that its technology could process DNA and get a match in less than two hours.
“I’ll try anything at this point,” Gin remembers telling the sheriff. “Let’s see this magic box and what it’s going to do.”
In truth, Gin did not think it would work, and certainly not in two hours. When the device arrived, it was “not something huge and fantastical,” she recalls thinking. A little bigger than a microwave, it looked “like an ordinary box that beeps, and you put stuff in, and out comes a result.”
The “stuff,” more specifically, was a cheek or bloodstain swab, or a piece of muscle, or a fragment of bone that had been crushed and demineralized. Instead of reading 3 billion base pairs in this sample, the machine examined just 27 genome regions characterized by particular repeating sequences. It would be nearly impossible for two unrelated people to have the same repeating sequence in those regions. But a parent and child, or siblings, would match, meaning you could compare DNA found in human remains with DNA samples taken from potential victims’ family members. Making it even more efficient for a coroner like Gin, the machine could run up to five tests at a time and could be operated by anyone with just a little basic training.
ANDE’s chief scientific officer, Richard Selden, a pediatrician who has a PhD in genetics from Harvard, didn’t come up with the idea to focus on a smaller, more manageable number of base pairs to speed up DNA analysis. But it did become something of an obsession for him after he watched the O.J. Simpson trial in the mid-1990s and began to grasp just how long it took for DNA samples to get processed in crime cases. By this point, the FBI had already set up a system for identifying DNA by looking at just 13 regions of the genome; it would later add seven more. Researchers in other countries had also identified other sets of regions to analyze. Drawing on these various methodologies, Selden homed in on the 27 specific areas of DNA he thought would be most effective to examine, and he launched ANDE in 2004.
But he had to build a device to do the analysis. Selden wanted it to be small, portable, and easily used by anyone in the field. In a conventional lab, he says, “from the moment you take that cheek swab to the moment that you have the answer, there are hundreds of laboratory steps.” Traditionally, a human is holding test tubes and iPads and sorting through or processing paperwork. Selden compares it all to using a “conventional typewriter.” He effectively created the more efficient laptop version of DNA analysis by figuring out how to speed up that same process.
No longer would a human have to “open up this bottle and put [the sample] in a pipette and figure out how much, then move it into a tube here.” It is all automated, and the process is confined to a single device.
The rapid DNA analysis boxes from ANDE can be used in the field by anyone with just a bit of training.
ANDE
Once a sample is placed in the box, the DNA binds to a filter in water and the rest of the sample is washed away. Air pressure propels the purified DNA to a reconstitution chamber and then flattens it into a sheet less than a millimeter thick, which is subjected to about 6,000 volts of electricity. It’s “kind of an obstacle course for the DNA,” he explains.
The machine then interprets the donor’s genome and provides an allele table, with a graph showing the peaks for each region and their sizes. This data is then compared with samples from potential relatives, and the machine reports when it has a match.
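The matching logic the article describes can be sketched in miniature. This is an illustrative toy, not ANDE’s actual algorithm: at each analyzed region a person carries two repeat-count “alleles,” one inherited from each parent, so a parent and child share at least one allele at every region while unrelated people almost never do. The profiles below are invented, and use five regions instead of the 27 the real system examines.

```python
def shares_allele_everywhere(profile_a, profile_b):
    """True if the two profiles share at least one allele at every
    region, as expected for a parent-child pair."""
    return all(set(profile_a[locus]) & set(profile_b[locus])
               for locus in profile_a)

# Invented repeat counts at five hypothetical regions R1..R5
remains  = {"R1": (11, 14), "R2": (8, 9),  "R3": (15, 17),
            "R4": (6, 10),  "R5": (12, 12)}
child    = {"R1": (14, 16), "R2": (9, 9),  "R3": (17, 20),
            "R4": (10, 11), "R5": (12, 13)}
stranger = {"R1": (10, 12), "R2": (7, 11), "R3": (14, 16),
            "R4": (8, 9),   "R5": (10, 11)}

is_child = shares_allele_everywhere(remains, child)            # shares one everywhere
is_stranger_kin = shares_allele_everywhere(remains, stranger)  # fails at R1
```

Checking every region is what makes the method so discriminating: a chance match at one region is common, but the probability of chance matches at all 27 regions at once is vanishingly small.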
Rapid DNA analysis as a technology first received approval for use by the US military in 2014, and by the FBI two years later. Then the Rapid DNA Act of 2017 enabled all US law enforcement agencies to use the technology on site and in real time as an alternative to sending samples off to labs and waiting for results.
But by the time of the Camp Fire the following year, most coroners and local police officers still had no familiarity or experience with it. Neither did Gin. So she decided to put the “magic box” through a test: she gave Selden, who had arrived at the scene to help with the technology, a DNA sample from a victim whose identity she’d already confirmed via fingerprint. The box took about 90 minutes to come back with a result. And to Gin’s surprise, it was the same identification she had already made. Just to make sure, she ran several more samples through the box, also from victims she had already identified. Again, results were returned swiftly, and they confirmed hers.
“I was a believer,” she says.
The next year, Gin helped investigators use rapid DNA technology in the 2019 Conception disaster, when a dive boat caught fire off the Channel Islands in Santa Barbara. “We ID’d 34 victims in 10 days,” Gin says. “Completely done.” Gin now works independently to assist other investigators in mass-fatality events and helps them learn to use the ANDE system.
Its speed made the box a groundbreaking innovation. Death investigations, Gin learned long ago, are not as much about the dead as about giving peace of mind, justice, and closure to the living.
Fourteen days
Many of the people who were initially on the Lahaina missing persons list turned up in the days following the fire. Tearful reunions ensued.
Two weeks after the fire, the Imperials hoped they’d have the same outcome as they loaded into a truck to check out some exciting news: someone had reported seeing Rafael Sr. at a local church. He’d been eating and had burns on his hands and looked disoriented. The caller said the sighting had occurred three days after the fire. Could he still be in the vicinity?
When the family arrived, they couldn’t confirm the lead.
“We were getting a lot of calls,” Raven says. “There were a lot of rumors saying that they found him.”
None of them panned out. They kept looking.
The scenes following large-scale destructive events like the fires in Paradise and Lahaina can be sprawling and dangerous, with victims sometimes dispersed across a large swath of land if many people died trying to escape. Teams need to meticulously and tediously search mountains of mixed, melted, or burned debris just to find bits of human remains that might otherwise be mistaken for a piece of plastic or drywall. Compounding the challenge is the commingling of remains—from people who died huddled together, or in the same location, or alongside pets or other animals.
This is when the work of forensic anthropologists is essential: they have the skills to differentiate between human and animal bones and to find the critical samples that are needed by DNA specialists, fire and arson investigators, forensic pathologists and dentists, and other experts. Rapid DNA analysis “works best in tandem with forensic anthropologists, particularly in wildfires,” Gin explains.
“The first step is determining, is it a bone?” says Robert Mann, a forensic anthropologist at the University of Hawaii John A. Burns School of Medicine on Oahu. Then, is it a human bone? And if so, which one?
Forensic anthropologist Robert Mann has spent his career identifying human remains.
AP PHOTO/LUCY PEMONI
Mann has served on teams that have helped identify the remains of victims after the terrorist attacks of September 11, 2001, and the 2004 Indian Ocean tsunami, among other mass-casualty events. He remembers how in one investigation he received an object believed to be a human bone; it turned out to be a plastic replica. In another case, he was looking through the wreckage of a car accident and spotted what appeared to be a human rib fragment. Upon closer examination, he identified it as a piece of rubber weather stripping from the rear window. “We examine every bone and tooth, no matter how small, fragmented, or burned it might be,” he says. “It’s a time-consuming but critical process because we can’t afford to make a mistake or overlook anything that might help us establish the identity of a person.”
For Mann, the Maui disaster felt particularly immediate. It was right near his home. He was deployed to Lahaina about a week after the fire, as one of more than a dozen forensic anthropologists on scene from universities in places including Oregon, California, and Hawaii.
While some anthropologists searched the recovery zone—looking through what was left of homes, cars, buildings, and streets, and preserving fragmented and burned bone, body parts, and teeth—Mann was stationed in the morgue, where samples were sent for processing.
It used to be much harder to find samples that scientists believed could provide DNA for analysis, but that’s also changed recently as researchers have learned more about what kind of DNA can survive disasters. Two kinds are used in forensic identity testing: nuclear DNA (found within the nuclei of eukaryotic cells) and mitochondrial DNA (found in the mitochondria, organelles located outside the nucleus). Both, it turns out, have survived plane crashes, wars, floods, volcanic eruptions, and fires.
Theories have also been evolving over the past few decades about how to preserve and recover DNA specifically after intense heat exposure. One 2018 study found that DNA in a majority of the samples actually survived high heat. Researchers are also learning more about how bone characteristics change with temperature. “Different temperatures and how long a body or bone has been exposed to high temperatures affect the likelihood that it will or will not yield usable DNA,” Mann says.
Typically, forensic anthropologists help select which bone or tooth to use for DNA testing, says Mann. Until recently, he explains, scientists believed “you cannot get usable DNA out of burned bone.” But thanks to these new developments, researchers are realizing that with some bone that has been charred, “they’re able to get usable, good DNA out of it,” Mann says. “And that’s new.” Indeed, Selden explains that “in a typical bad fire, what I would expect is 80% to 90% of the samples are going to have enough intact DNA” to get a result from rapid analysis. The rest, he says, may require deeper sequencing.
The aftermath of large-scale destructive events like the fire in Lahaina can be sprawling and dangerous. Teams need to meticulously search through mountains of mixed, melted, or burned debris to find bits of human remains.
GLENN FAWCETT VIA ALAMY
Anthropologists can often tell “simply by looking” if a sample will be good enough to help create an ID. If it’s been burned and blackened, “it might be a good candidate for DNA testing,” Mann says. But if it’s calcined (white and “china-like”), he says, the DNA has probably been destroyed.
On Maui, Mann adds, rapid DNA analysis made the entire process more efficient, with tests coming back in just two hours. “That means while you’re doing the examination of this individual right here on the table, you may be able to get results back on who this person is,” he says. From inside the lab, he watched the science unfold as the number of missing on Maui quickly began to go down.
Within three days, 42 people’s remains were recovered inside Maui homes or buildings and another 39 outside, along with 15 inside vehicles and one in the water. The first confirmed identification of a victim on the island occurred four days after the fire—this one via fingerprint. The ANDE rapid DNA team arrived two days after the fire and deployed four boxes to analyze multiple samples of DNA simultaneously. The first rapid DNA identification happened within that first week.
Sixteen days
More than two weeks after the fire, the list of missing and unaccounted-for individuals was dwindling, but it still had 388 people on it. Rafael Sr. was one of them.
Raven and Raphael Jr. raced to another location: Cupies café in Kahului, more than 20 miles from Lahaina. Someone had reported seeing him there.
Rafael’s family hung posters around the island, desperately hoping for reliable information. (Phone number redacted by MIT Technology Review.)
ERIKA HAYASAKI
The tip was another false lead.
As family and friends continued to search, they stopped by support hubs that had sprouted up around the island, receiving information about Red Cross and FEMA assistance or donation programs as volunteers distributed meals and clothes. These hubs also sometimes offered DNA testing.
Raven still had a “50-50” feeling that his dad might be out there somewhere. But he was beginning to lose some of that hope.
Gin was stationed at one of the support hubs, which offered food, shelter, and clothing. “You could also go in and give biological samples,” she says. “We actually moved one of the rapid DNA instruments into the family assistance center, and we were running the family samples there.” Eliminating the need to transport samples from a site to a testing center further cut down any lag time.
Selden had once believed that the biggest hurdle for his technology would be building the actual device, which took about eight years to design and another four years to perfect. But at least in Lahaina, it was something else: persuading distraught and traumatized family members to offer samples for the test.
Nationally, there are serious privacy concerns when it comes to rapid DNA technology. Organizations like the ACLU warn that as police departments and governments begin deploying it more often, there must be more oversight, monitoring, and training in place to ensure that it is always used responsibly, even if that adds some time and expense. But the space is still largely unregulated, and the ACLU fears it could give rise to rogue DNA databases “with far fewer quality, privacy, and security controls than federal databases.”
Family support centers popped up around Maui to offer clothing, food, and other assistance, and sometimes to take DNA samples to help find missing family members.
In a place like Hawaii, these fears are even more palpable. The islands have a long history of US colonialism, military dominance, and exploitation of the Native population and of the large immigrant working-class population employed in the tourism industry.
Native Hawaiians in particular have a fraught relationship with DNA testing. Under a US law signed in 1921, thousands have a right to live on 200,000 designated acres of land trust, almost for free. It was a kind of reparations measure put in place to assist Native Hawaiians whose land had been stolen. Back in 1893, a small group of American sugar plantation owners and descendants of Christian missionaries, backed by US Marines, held Hawaii’s Queen Lili‘uokalani in her palace at gunpoint and forced her to sign over 1.8 million acres to the US, which ultimately seized the islands in 1898.
Hawaii’s Queen Lili‘uokalani was forced to sign over 1.8 million acres to the US.
PUBLIC DOMAIN VIA WIKIMEDIA COMMONS
To lay their claim to the designated land and property, individuals first must prove via DNA tests how much Hawaiian blood they have. But many residents who have submitted their DNA and qualified for the land have died on waiting lists before ever receiving it. Today, Native Hawaiians are struggling to stay on the islands amid skyrocketing housing prices, while others have been forced to move away.
Meanwhile, after the fires, Filipino families faced particularly stark barriers to getting information about financial support, government assistance, housing, and DNA testing. Filipinos make up about 25% of Hawaii’s population and 40% of its workers in the tourism industry. They also make up 46% of undocumented residents in Hawaii—more than any other group. Some encountered language barriers, since they primarily spoke Tagalog or Ilocano. Some worried that people would try to take over their burned land and develop it for themselves. For many, being asked for DNA samples only added to the confusion and suspicion.
Selden says he hears the overall concerns about DNA testing: “If you ask people about DNA in general, they think of Brave New World and [fear] the information is going to be used to somehow harm or control people.” But just like regular DNA analysis, he explains, rapid DNA analysis “has no information on the person’s appearance, their ethnicity, their health, their behavior either in the past, present, or future.” He describes it as a more accurate fingerprint.
Gin tried to help the Lahaina family members understand that their DNA “isn’t going to go anywhere else.” She told them their sample would ultimately be destroyed, something programmed to occur inside ANDE’s machine. (Selden says the boxes were designed to do this for privacy purposes.) But sometimes, Gin realizes, these promises are not enough.
“You still have a large population of people that, in my experience, don’t want to give up their DNA to a government entity,” she says. “They just don’t.”
Gin understands that family members are often nervous to give their DNA samples. She promises the process of rapid DNA analysis respects their privacy, but she knows sometimes promises aren’t enough.
BRYAN TARNOWSKI
The immediate aftermath of a disaster, when people are suffering from shock, PTSD, and displacement, is the worst possible moment to try to educate them about DNA tests and explain the technology and privacy policies. “A lot of them don’t have anything,” Gin says. “They’re just wondering where they’re going to lay their heads down, and how they’re going to get food and shelter and transportation.”
Unfortunately, Lahaina’s survivors won’t be the last people in this position. Particularly given the world’s current climate trajectory, the risk of deadly events in just about every neighborhood and community will rise. And figuring out who survived and who didn’t will be increasingly difficult. Mann recalls his work on the Indian Ocean tsunami, when over 227,000 people died. “The bodies would float off, and they ended up 100 miles away,” he says. Investigators were at times left with remains that had been consumed by sea creatures or degraded by water and weather. He remembers how they struggled to determine: “Who is the person?”
Mann has spent his own career identifying people including “missing soldiers, sailors, airmen, Marines, from all past wars,” as well as people who have died recently. That closure is meaningful for family members, some of them decades, or even lifetimes, removed.
In the end, distrust and conspiracy theories did in fact hinder DNA-identification efforts on Maui, according to a police department report.
33 days
By the time Raven went to a family resource center to submit a swab, some four weeks had gone by. He remembers the quick rub inside his cheek.
Some of his family had already offered their own samples before Raven provided his. For them, waiting wasn’t an issue of mistrusting the testing as much as experiencing confusion and chaos in the weeks after the fire. They believed Uncle Raffy was still alive, and they still held hope of finding him. Offering DNA was a final step in their search.
“I did it for my mom,” Raven says. She still wanted to believe he was alive, but Raven says: “I just had this feeling.” His father, he told himself, must be gone.
Just a day after he gave his sample—on September 11, more than a month after the fire—he was at the temporary house in Kihei when he got the call: “It was,” Raven says, “an automatic match.”
Raven gave a cheek swab about a month after the disappearance of his father. It didn’t take long for him to get a phone call: “It was an automatic match.”
WINNI WINTERMEYER
The investigators let the family know the address where the remains of Rafael Sr. had been found, several blocks away from their home. They put it into Google Maps and realized it was where some family friends lived. The mother and son of that family had been listed as missing too. Rafael Sr., it seemed, had been with or near them in the end.
By October, investigators in Lahaina had obtained and analyzed 215 DNA samples from family members of the missing. By December, DNA analysis had confirmed the identities of 63 of the most recent count of 101 victims. Seventeen more had been identified by fingerprint, 14 via dental records, and two through medical devices, along with three who died in the hospital. While some of the most damaged remains would still be undergoing DNA testing months after the fires, the pace marks a drastic improvement over the identification process for 9/11 victims—today, over 20 years later, some of those victims are still being identified by DNA.
Raven remembers how much his father loved karaoke. His favorite song was “My Way,” by Frank Sinatra.
COURTESY OF RAVEN IMPERIAL
Rafael Sr. was born on October 22, 1959, in Naga City, the Philippines. The family held his funeral on his birthday last year. His relatives flew in from Michigan, the Philippines, and California.
Raven says in those weeks of waiting—after all the false tips, the searches, the prayers, the glimmers of hope—deep down the family had already known he was gone. But for Evelyn, Raphael Jr., and the rest of their family, DNA tests were necessary—and, ultimately, a relief, Raven says. “They just needed that closure.”
Erika Hayasaki is an independent journalist based in Southern California.
Last week, I scoured the internet in search of a robotic dog. I wanted a belated birthday present for my aunt, who was recently diagnosed with Alzheimer’s disease. Studies suggest that having a companion animal can stave off some of the loneliness, anxiety, and agitation that come with Alzheimer’s. My aunt would love a real dog, but she can’t have one.
That’s how I discovered the Golden Pup from Joy for All. It cocks its head. It sports a jaunty red bandana. It barks when you talk. It wags when you touch it. It has a realistic heartbeat. And it’s just one of the many, many robots designed for people with Alzheimer’s and dementia.
This week on The Checkup, join me as I go down a rabbit hole. Let’s look at the prospect of using robots to change dementia care.
As robots go, Golden Pup is decidedly low tech. It retails for $140. For around $6,000 you can opt for Paro, a fluffy robotic baby seal developed in Japan, which can sense touch, light, sound, temperature, and posture. Its manufacturer says it develops its own character, remembering behaviors that led its owner to give it attention.
Golden Pup and Paro are available now. But researchers are working on much more sophisticated robots for people with cognitive disorders—devices that leverage AI to converse and play games. Researchers from Indiana University Bloomington are tweaking a commercially available robot system called QT to serve people with dementia and Alzheimer’s. The researchers’ two-foot-tall robot looks a little like a toddler in an astronaut suit. Its round white head holds a screen that displays two eyebrows, two eyes, and a mouth that together form a variety of expressions. The robot engages people in conversation, asking AI-generated questions to keep them talking.
The AI model they’re using isn’t perfect, and neither are the robot’s responses. In one awkward conversation, a study participant told the robot that she has a sister. “I’m sorry to hear that,” the robot responded. “How are you doing?”
But as large language models improve—which is happening already—so will the quality of the conversations. When the QT robot made that awkward comment, it was running OpenAI’s GPT-3, which was released in 2020. The latest version of that model, GPT-4o, which was released this week, is faster and allows for more seamless conversations. You can interrupt the conversation, and the model will adjust.
The idea of using robots to keep dementia patients engaged and connected isn’t always an easy sell. Some people see it as an abdication of our social responsibilities. And then there are privacy concerns. The best robotic companions are personalized. They collect information about people’s lives, learn their likes and dislikes, and figure out when to approach them. That kind of data collection can be unnerving, not just for patients but also for medical staff. Lillian Hung, creator of the Innovation in Dementia care and Aging (IDEA) lab at the University of British Columbia in Vancouver, Canada, told one reporter about an incident that happened during a focus group at a care facility. She and her colleagues popped out for lunch. When they returned, they found that staff had unplugged the robot and placed a bag over its head. “They were worried it was secretly recording them,” she said.
On the other hand, robots have some advantages over humans in talking to people with dementia. Their attention doesn’t flag. They don’t get annoyed or angry when they have to repeat themselves. They can’t get stressed.
What’s more, there are increasing numbers of people with dementia, and too few people to care for them. According to the latest report from the Alzheimer’s Association, we’re going to need more than a million additional care workers to meet the needs of people living with dementia between 2021 and 2031. That is the largest gap between labor supply and demand for any single occupation in the United States.
Have you been in an understaffed or poorly staffed memory care facility? I have. Patients are often sedated to make them easier to deal with. They get strapped into wheelchairs and parked in hallways. We barely have enough care workers to take care of the physical needs of people with dementia, let alone provide them with social connection and an enriching environment.
“Caregiving is not just about tending to someone’s bodily concerns; it also means caring for the spirit,” writes Kat McGowan in this beautiful Wired story about her parents’ dementia and the promise of social robots. “The needs of adults with and without dementia are not so different: We all search for a sense of belonging, for meaning, for self-actualization.”
If robots can enrich the lives of people with dementia even in the smallest way, and if they can provide companionship where none exists, that’s a win.
“We are currently at an inflection point, where it is becoming relatively easy and inexpensive to develop and deploy [cognitively assistive robots] to deliver personalized interventions to people with dementia, and many companies are vying to capitalize on this trend,” write a team of researchers from the University of California, San Diego, in a 2021 article in Proceedings of We Robot. “However, it is important to carefully consider the ramifications.”
Many of the more advanced social robots may not be ready for prime time, but the low-tech Golden Pup is readily available. My aunt’s illness has been progressing rapidly, and she occasionally gets frustrated and agitated. I’m hoping that Golden Pup might provide a welcome (and calming) distraction. Maybe it will spark joy during a time that has been incredibly confusing and painful for my aunt and uncle. Or maybe not. Certainly a robotic pup isn’t for everyone. Golden Pup may not be a dog. But I’m hoping it can be a friendly companion.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
Robots are cool, and with new advances in AI they might also finally be useful around the house, writes Melissa Heikkilä.
Social robots could help make personalized therapy more affordable and accessible to kids with autism. Karen Hao has the story.
Japan is already using robots to help with elder care, but in many cases they require as much work as they save. And reactions among the older people they’re meant to serve are mixed. James Wright wonders whether the robots are “a shiny, expensive distraction from tough choices about how we value people and allocate resources in our societies.”
From around the web
A tiny probe can work its way through arteries in the brain to help doctors spot clots and other problems. The new tool could help surgeons make diagnoses, decide on treatment strategies, and provide assurance that clots have been removed. (Stat)
Richard Slayman, the first recipient of a pig kidney transplant, has died, although the hospital that performed the transplant says the death doesn’t seem to be linked to the kidney. (Washington Post)
EcoHealth, the virus-hunting nonprofit at the center of covid lab-leak theories, has been banned from receiving federal funding. (NYT)
In a first, scientists report that they can translate brain signals into speech without any vocalization or mouth movements, at least for a handful of words. (Nature)