This grim but revolutionary DNA technology is changing how we respond to mass disasters

Seven days

No matter who he called—his mother, his father, his brother, his cousins—the phone would just go to voicemail. Cell service was out around Maui as devastating wildfires swept through the Hawaiian island. But while Raven Imperial kept hoping for someone to answer, he couldn’t keep a terrifying thought from sneaking into his mind: What if his family members had perished in the blaze? What if all of them were gone?

Hours passed; then days. All Raven knew at that point was this: there had been a wildfire on August 8, 2023, in Lahaina, where his multigenerational, tight-knit family lived. But from where he was currently based in Northern California, Raven was in the dark. Had his family evacuated? Were they hurt? He watched from afar as horrifying video clips of Front Street burning circulated online.

Much of the area around Lahaina’s Pioneer Mill Smokestack was totally destroyed by wildfire.
ALAMY

The list of missing residents meanwhile climbed into the hundreds.

Raven remembers how frightened he felt: “I thought I had lost them.”

Raven had spent his youth in a four-bedroom, two-bathroom, cream-colored home on Kopili Street that had long housed not just his immediate family but also around 10 to 12 renters, since home prices were so high on Maui. When he and his brother, Raphael Jr., were kids, their dad put up a basketball hoop outside, where the boys would shoot around with neighbors. Raphael Jr.’s high school sweetheart, Christine Mariano, later moved in, and when the couple had a son in 2021, they raised him there too.

From the initial news reports and posts, it seemed as if the fire had destroyed the Imperials’ entire neighborhood near the Pioneer Mill Smokestack—a 225-foot-high structure left over from the days of Maui’s sugar plantations, where Raven’s grandfather had worked after immigrating from the Philippines in the mid-1900s.

Then, finally, on August 11, a call to Raven’s brother went through. He’d managed to get a cell signal while standing on the beach.

“Is everyone okay?” Raven asked.

“We’re just trying to find Dad,” Raphael Jr. told his brother.

Raven Imperial sitting in the grass
From his current home in Northern California, Raven Imperial spent days not knowing what had happened to his family in Maui.
WINNI WINTERMEYER

In the three days following the fire, the rest of the family members had slowly found their way back to each other. Raven would learn that most of his immediate family had been separated for 72 hours: Raphael Jr. had been marooned in Kaanapali, four miles north of Lahaina; Christine had been stuck in Wailuku, more than 20 miles away; both young parents had been separated from their son, who escaped with Christine’s parents. Raven’s mother, Evelyn, had also been in Kaanapali, though not where Raphael Jr. had been.

But no one was in contact with Rafael Sr. Evelyn had left their home around noon on the day of the fire and headed to work. That was the last time she had seen him. The last time they had spoken was when she called him just after 3 p.m. and asked: “Are you working?” He replied “No,” before the phone abruptly cut off.

“Everybody was found,” Raven says. “Except for my father.”

Within the week, Raven boarded a plane and flew back to Maui. He would keep looking for him, he told himself, for as long as it took.


That same week, Kim Gin was also on a plane to Maui. It would take half a day to get there from Alabama, where she had moved after retiring from the Sacramento County Coroner’s Office in California a year earlier. But Gin, now an independent consultant on death investigations, knew she had something to offer the response teams in Lahaina. Of all the forensic investigators in the country, she was one of the few who had experience in the immediate aftermath of a wildfire on the vast scale of Maui’s. She was also one of the rare investigators well versed in employing rapid DNA analysis—an emerging but increasingly vital scientific tool used to identify victims in unfolding mass-casualty events.

Gin started her career in Sacramento in 2001 and was working as the coroner 17 years later when Butte County, California, close to 90 miles north, erupted in flames. She had worked fire investigations before, but nothing like the Camp Fire, which burned more than 150,000 acres—an area larger than the city of Chicago. The tiny town of Paradise, the epicenter of the blaze, didn’t have the capacity to handle the rising death toll. Gin’s office had a refrigerated box truck and a 52-foot semitrailer, as well as a morgue that could handle a couple of hundred bodies.

Kim Gin
Kim Gin, the former Sacramento County coroner, had worked fire investigations in her career, but nothing prepared her for the 2018 Camp Fire.
BRYAN TARNOWSKI

“Even though I knew it was a fire, I expected more identifications by fingerprints or dental [records]. But that was just me being naïve,” she says. She quickly realized that putting names to the dead, many burned beyond recognition, would rely heavily on DNA.

“The problem then became how long it takes to do the traditional DNA [analysis],” Gin explains, speaking to a significant and long-standing challenge in the field—and the reason DNA identification has long been something of a last resort following large-scale disasters.

While more conventional identification methods—think fingerprints, dental information, or matching something like a knee replacement to medical records—can be a long, tedious process, they don’t take nearly as long as traditional DNA testing.

Historically, the process of making genetic identifications would often stretch on for months, even years. In fires and other situations that result in badly degraded bone or tissue, it can become even more challenging and time consuming to process DNA, which traditionally involves reading the 3 billion base pairs of the human genome and comparing samples found in the field against samples from a family member. Meanwhile, investigators frequently need equipment from the US Department of Justice or the county crime lab to test the samples, so backlogs often pile up.

A supply kit with swabs, gloves, and other items needed to take a DNA sample in the field.
A demo chip for ANDE’s rapid DNA box.

This creates a wait that can be horrendous for family members. Death certificates, federal assistance, insurance money—“all that hinges on that ID,” Gin says. Not to mention the emotional toll of not knowing if their loved ones are alive or dead.

But over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed. The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace more swiftly than ever before.

The key innovation driving this progress has been rapid DNA analysis, a methodology that focuses on just over two dozen regions of the genome. The 2018 Camp Fire was the first time the technology was used in a large, live disaster setting, and the first time it was used as the primary way to identify victims. The technology—deployed in small high-tech field devices developed by companies like industry leader ANDE, or in a lab with other rapid DNA techniques developed by Thermo Fisher—is increasingly being used by the US military on the battlefield, and by the FBI and local police departments after sexual assaults and in instances where confirming an ID is challenging, like cases of missing or murdered Indigenous people or migrants. Yet arguably the most effective way to use rapid DNA is in incidents of mass death. In the Camp Fire, 22 victims were identified using traditional methods, while rapid DNA analysis helped with 62 of the remaining 63 victims; it has also been used in recent years following hurricanes and floods, and in the war in Ukraine.

“These families are going to have to wait a long period of time to get identification. How do we make this go faster?”

Tiffany Roy, a forensic DNA expert with consulting company ForensicAid, says she’d be concerned about deploying the technology in a crime scene, where quality evidence is limited and can be quickly “exhausted” by well-meaning investigators who are “not trained DNA analysts.” But, on the whole, Roy and other experts see rapid DNA as a major net positive for the field. “It is definitely a game-changer,” adds Sarah Kerrigan, a professor of forensic science at Sam Houston State University and the director of its Institute for Forensic Research, Training, and Innovation.

But back in those early days after the Camp Fire, all Gin knew was that nearly 1,000 people had been listed as missing, and she was tasked with helping to identify the dead. “Oh my goodness,” she remembers thinking. “These families are going to have to wait a long period of time to get identification. How do we make this go faster?”


Ten days

One flier pleading for information about “Uncle Raffy,” as people in the community knew Rafael Sr., was posted on a brick-red stairwell outside Paradise Supermart, a Filipino store and restaurant in Kahului, 25 miles away from the destruction. In it, just below the words “MISSING Lahaina Victim,” the 63-year-old grandfather smiled with closed lips, wearing a blue Hawaiian shirt, his right hand curled in the shaka sign, thumb and pinky pointing out.

Rafael Imperial Sr.
Raven remembers how hard his dad, Rafael, worked. His three jobs took him all over town and earned him the nickname “Mr. Aloha.”
COURTESY OF RAVEN IMPERIAL

“Everybody knew him from restaurant businesses,” Raven says. “He was all over Lahaina, very friendly to everybody.” Raven remembers how hard his dad worked, juggling three jobs: as a draft tech for Anheuser-Busch, setting up services and delivering beer all across town; as a security officer at Allied Universal security services; and as a parking booth attendant at the Sheraton Maui. He connected with so many people that coworkers, friends, and other locals gave him another nickname: “Mr. Aloha.”

Raven also remembers how his dad had always loved karaoke, where he would sing “My Way,” by Frank Sinatra. “That’s the only song that he would sing,” Raven says. “Like, on repeat.” 

Since their home had burned down, the Imperials ran their search out of a rental unit in Kihei, which was owned by a local woman one of them knew through her job. The woman had opened her rental to three families in all. It quickly grew crowded with side-by-side beds and piles of donations.

Each day, Evelyn waited for her husband to call.

She managed to catch up with one of their former tenants, who recalled asking Rafael Sr. to leave the house on the day of the fires. But she did not know if he actually did. Evelyn spoke to other neighbors who also remembered seeing Rafael Sr. that day; they told her that they had seen him go back into the house. But they too did not know what happened to him after.

A friend of Raven’s who got into the largely restricted burn zone told him he’d spotted Rafael Sr.’s Toyota Tacoma on the street, not far from their house. He sent a photo. The pickup was burned out, but a passenger-side door was open. The family wondered: Could he have escaped?

Evelyn called the Red Cross. She called the police. Nothing. They waited and hoped.


Back in Paradise in 2018, as Gin worried about the scores of waiting families, she learned there might in fact be a better way to get a positive ID—and a much quicker one. A company called ANDE Rapid DNA had already volunteered its services to the Butte County sheriff and promised that its technology could process DNA and get a match in less than two hours.

“I’ll try anything at this point,” Gin remembers telling the sheriff. “Let’s see this magic box and what it’s going to do.”

In truth, Gin did not think it would work, and certainly not in two hours. When the device arrived, it was “not something huge and fantastical,” she recalls thinking. A little bigger than a microwave, it looked “like an ordinary box that beeps, and you put stuff in, and out comes a result.”

The “stuff,” more specifically, was a cheek or bloodstain swab, or a piece of muscle, or a fragment of bone that had been crushed and demineralized. Instead of reading 3 billion base pairs in this sample, ANDE’s machine examined just 27 genome regions characterized by particular repeating sequences. It would be nearly impossible for two unrelated people to have the same repeating sequences in those regions. But a parent and child, or siblings, would match, meaning you could compare DNA found in human remains with DNA samples taken from potential victims’ family members. Making it even more efficient for a coroner like Gin, the machine could run up to five tests at a time and could be operated by anyone with just a little basic training.

ANDE’s chief scientific officer, Richard Selden, a pediatrician who has a PhD in genetics from Harvard, didn’t come up with the idea to focus on a smaller, more manageable number of base pairs to speed up DNA analysis. But it did become something of an obsession for him after he watched the O.J. Simpson trial in the mid-1990s and began to grasp just how long it took for DNA samples to get processed in crime cases. By this point, the FBI had already set up a system for identifying DNA by looking at just 13 regions of the genome; it would later add seven more. Researchers in other countries had also identified other sets of regions to analyze. Drawing on these various methodologies, Selden homed in on the 27 specific areas of DNA he thought would be most effective to examine, and he launched ANDE in 2004.

But he had to build a device to do the analysis. Selden wanted it to be small, portable, and easily used by anyone in the field. In a conventional lab, he says, “from the moment you take that cheek swab to the moment that you have the answer, there are hundreds of laboratory steps.” Traditionally, a human is holding test tubes and iPads and sorting through or processing paperwork. Selden compares it all to using a “conventional typewriter.” He effectively created the more efficient laptop version of DNA analysis by figuring out how to speed up that same process.

No longer would a human have to “open up this bottle and put [the sample] in a pipette and figure out how much, then move it into a tube here.” It is all automated, and the process is confined to a single device.

gloved hands load a chip cartridge into the ANDE machine
The rapid DNA analysis boxes from ANDE can be used in the field by anyone with just a bit of training.
ANDE

Once a sample is placed in the box, the DNA binds to a filter in water and the rest of the sample is washed away. Air pressure propels the purified DNA to a reconstitution chamber and then flattens it into a sheet less than a millimeter thick, which is subjected to about 6,000 volts of electricity. It’s “kind of an obstacle course for the DNA,” he explains.

The machine then interprets the donor’s genome and provides an allele table with a graph showing the peak for each region and its size. This data is then compared with samples from potential relatives, and the machine reports when it has a match.
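The kinship comparison underlying this step can be illustrated with a toy sketch. This is not ANDE’s actual algorithm—real systems weigh statistical likelihoods across all 27 regions—but it shows the core idea: at each region, a person carries two repeat counts (alleles), one inherited from each parent, so a parent and child should share at least one allele at every region, barring mutation. The locus names and numbers below are illustrative.

```python
# Toy model of STR (short tandem repeat) kinship comparison.
# Real rapid DNA systems compute likelihood ratios across ~27 loci;
# this sketch only checks the simple parent-child sharing rule.

def shares_allele_at_every_locus(profile_a, profile_b):
    """Return True if the profiles share at least one allele (repeat
    count) at every locus -- the pattern expected, barring mutation,
    between a parent and child."""
    return all(
        set(profile_a[locus]) & set(profile_b[locus])
        for locus in profile_a
    )

# Each profile maps a locus name to the two inherited repeat counts.
parent   = {"D3S1358": (15, 16), "vWA": (17, 18), "FGA": (21, 24)}
child    = {"D3S1358": (16, 17), "vWA": (14, 18), "FGA": (21, 22)}
stranger = {"D3S1358": (12, 13), "vWA": (15, 16), "FGA": (19, 20)}

print(shares_allele_at_every_locus(parent, child))     # True
print(shares_allele_at_every_locus(parent, stranger))  # False
```

An unrelated person will almost always fail this test at one or more loci, which is why a couple dozen regions are enough to distinguish relatives from strangers with high confidence.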

Rapid DNA analysis as a technology first received approval for use by the US military in 2014, and by the FBI two years later. Then the Rapid DNA Act of 2017 enabled all US law enforcement agencies to use the technology on site and in real time as an alternative to sending samples off to labs and waiting for results.

But by the time of the Camp Fire the following year, most coroners and local police officers still had no familiarity or experience with it. Neither did Gin. So she decided to put the “magic box” through a test: she gave Selden, who had arrived at the scene to help with the technology, a DNA sample from a victim whose identity she’d already confirmed via fingerprint. The box took about 90 minutes to come back with a result. And to Gin’s surprise, it was the same identification she had already made. Just to make sure, she ran several more samples through the box, also from victims she had already identified. Again, results were returned swiftly, and they confirmed hers.

“I was a believer,” she says.

The next year, Gin helped investigators use rapid DNA technology in the 2019 Conception disaster, when a dive boat caught fire off the Channel Islands in Santa Barbara. “We ID’d 34 victims in 10 days,” Gin says. “Completely done.” Gin now works independently to assist other investigators in mass-fatality events and helps them learn to use the ANDE system.

Its speed made the box a groundbreaking innovation. Death investigations, Gin learned long ago, are not as much about the dead as about giving peace of mind, justice, and closure to the living.


Fourteen days

Many of the people who were initially on the Lahaina missing persons list turned up in the days following the fire. Tearful reunions ensued.

Two weeks after the fire, the Imperials hoped they’d have the same outcome as they loaded into a truck to check out some exciting news: someone had reported seeing Rafael Sr. at a local church. He’d been eating and had burns on his hands and looked disoriented. The caller said the sighting had occurred three days after the fire. Could he still be in the vicinity?

When the family arrived, they couldn’t confirm the lead.

“We were getting a lot of calls,” Raven says. “There were a lot of rumors saying that they found him.”

None of them panned out. They kept looking.


The scenes following large-scale destructive events like the fires in Paradise and Lahaina can be sprawling and dangerous, with victims sometimes dispersed across a large swath of land if many people died trying to escape. Teams need to meticulously and tediously search mountains of mixed, melted, or burned debris just to find bits of human remains that might otherwise be mistaken for a piece of plastic or drywall. Compounding the challenge is the commingling of remains—from people who died huddled together, or in the same location, or alongside pets or other animals.

This is when the work of forensic anthropologists is essential: they have the skills to differentiate between human and animal bones and to find the critical samples that are needed by DNA specialists, fire and arson investigators, forensic pathologists and dentists, and other experts. Rapid DNA analysis “works best in tandem with forensic anthropologists, particularly in wildfires,” Gin explains.

“The first step is determining, is it a bone?” says Robert Mann, a forensic anthropologist at the University of Hawaii John A. Burns School of Medicine on Oahu. Then, is it a human bone? And if so, which one?

Robert Mann in a lab coat with a human skeleton on the table in front of him
Forensic anthropologist Robert Mann has spent his career identifying human remains.
AP PHOTO/LUCY PEMONI

Mann has served on teams that have helped identify the remains of victims after the terrorist attacks of September 11, 2001, and the 2004 Indian Ocean tsunami, among other mass-casualty events. He remembers how in one investigation he received an object believed to be a human bone; it turned out to be a plastic replica. In another case, he was looking through the wreckage of a car accident and spotted what appeared to be a human rib fragment. Upon closer examination, he identified it as a piece of rubber weather stripping from the rear window. “We examine every bone and tooth, no matter how small, fragmented, or burned it might be,” he says. “It’s a time-consuming but critical process because we can’t afford to make a mistake or overlook anything that might help us establish the identity of a person.”

For Mann, the Maui disaster felt particularly immediate. It was right near his home. He was deployed to Lahaina about a week after the fire, as one of more than a dozen forensic anthropologists on scene from universities in places including Oregon, California, and Hawaii.

While some anthropologists searched the recovery zone—looking through what was left of homes, cars, buildings, and streets, and preserving fragmented and burned bone, body parts, and teeth—Mann was stationed in the morgue, where samples were sent for processing.

It used to be much harder to find samples that scientists believed could provide DNA for analysis, but that’s also changed recently as researchers have learned more about what kind of DNA can survive disasters. Two kinds are used in forensic identity testing: nuclear DNA (found within the nuclei of eukaryotic cells) and mitochondrial DNA (found in the mitochondria, organelles located outside the nucleus). Both, it turns out, have survived plane crashes, wars, floods, volcanic eruptions, and fires.

Theories have also been evolving over the past few decades about how to preserve and recover DNA specifically after intense heat exposure. One 2018 study found that DNA in a majority of the samples tested actually survived high heat. Researchers are also learning more about how bone characteristics change depending on the degree of heat exposure. “Different temperatures and how long a body or bone has been exposed to high temperatures affect the likelihood that it will or will not yield usable DNA,” Mann says.

Typically, forensic anthropologists help select which bone or tooth to use for DNA testing, says Mann. Until recently, he explains, scientists believed “you cannot get usable DNA out of burned bone.” But thanks to these new developments, researchers are realizing that with some bone that has been charred, “they’re able to get usable, good DNA out of it,” Mann says. “And that’s new.” Indeed, Selden explains that “in a typical bad fire, what I would expect is 80% to 90% of the samples are going to have enough intact DNA” to get a result from rapid analysis. The rest, he says, may require deeper sequencing.

The aftermath of large-scale destructive events like the fire in Lahaina can be sprawling and dangerous. Teams need to meticulously search through mountains of mixed, melted, or burned debris to find bits of human remains.
GLENN FAWCETT VIA ALAMY

Anthropologists can often tell “simply by looking” if a sample will be good enough to help create an ID. If it’s been burned and blackened, “it might be a good candidate for DNA testing,” Mann says. But if it’s calcined (white and “china-like”), he says, the DNA has probably been destroyed.

On Maui, Mann adds, rapid DNA analysis made the entire process more efficient, with tests coming back in just two hours. “That means while you’re doing the examination of this individual right here on the table, you may be able to get results back on who this person is,” he says. From inside the lab, he watched the science unfold as the number of missing on Maui quickly began to go down.

Within three days, 42 people’s remains were recovered inside Maui homes or buildings and another 39 outside, along with 15 inside vehicles and one in the water. The first confirmed identification of a victim on the island occurred four days after the fire—this one via fingerprint. The ANDE rapid DNA team arrived two days after the fire and deployed four boxes to analyze multiple samples of DNA simultaneously. The first rapid DNA identification happened within that first week.


Sixteen days

More than two weeks after the fire, the list of missing and unaccounted-for individuals was dwindling, but it still had 388 people on it. Rafael Sr. was one of them.

Raven and Raphael Jr. raced to another location: Cupies café in Kahului, more than 20 miles from Lahaina. Someone had reported seeing him there.

Rafael’s family hung posters around the island, desperately hoping for reliable information. (Phone number redacted by MIT Technology Review.)
ERIKA HAYASAKI

The tip was another false lead.

As family and friends continued to search, they stopped by support hubs that had sprouted up around the island, receiving information about Red Cross and FEMA assistance or donation programs as volunteers distributed meals and clothes. These hubs also sometimes offered DNA testing.

Raven still had a “50-50” feeling that his dad might be out there somewhere. But he was beginning to lose some of that hope.


Gin was stationed at one of the support hubs, which offered food, shelter, clothing, and other services. “You could also go in and give biological samples,” she says. “We actually moved one of the rapid DNA instruments into the family assistance center, and we were running the family samples there.” Eliminating the need to transport samples from a site to a testing center further cut down any lag time.

Selden had once believed that the biggest hurdle for his technology would be building the actual device, which took about eight years to design and another four years to perfect. But at least in Lahaina, it was something else: persuading distraught and traumatized family members to offer samples for the test.

Nationally, there are serious privacy concerns when it comes to rapid DNA technology. Organizations like the ACLU warn that as police departments and governments begin deploying it more often, there must be more oversight, monitoring, and training in place to ensure that it is always used responsibly, even if that adds some time and expense. But the space is still largely unregulated, and the ACLU fears it could give rise to rogue DNA databases “with far fewer quality, privacy, and security controls than federal databases.”

Family support centers popped up around Maui to offer clothing, food, and other assistance, and sometimes to take DNA samples to help find missing family members.

In a place like Hawaii, these fears are even more palpable. The islands have a long history of US colonialism, military dominance, and exploitation of the Native population and of the large immigrant working-class population employed in the tourism industry.

Native Hawaiians in particular have a fraught relationship with DNA testing. Under a US law signed in 1921, thousands have a right to live on 200,000 designated acres of land trust, almost for free. It was a kind of reparations measure put in place to assist Native Hawaiians whose land had been stolen. Back in 1893, a small group of American sugar plantation owners and descendants of Christian missionaries, backed by US Marines, held Hawaii’s Queen Lili‘uokalani in her palace at gunpoint and forced her to sign over 1.8 million acres to the US, which ultimately seized the islands in 1898.

Queen Liliuokalani in a formal seated portrait
Hawaii’s Queen Lili‘uokalani was forced to sign over 1.8 million acres to the US.
PUBLIC DOMAIN VIA WIKIMEDIA COMMONS

To lay their claim to the designated land and property, individuals first must prove via DNA tests how much Hawaiian blood they have. But many residents who have submitted their DNA and qualified for the land have died on waiting lists before ever receiving it. Today, Native Hawaiians are struggling to stay on the islands amid skyrocketing housing prices, while others have been forced to move away.

Meanwhile, after the fires, Filipino families faced particularly stark barriers to getting information about financial support, government assistance, housing, and DNA testing. Filipinos make up about 25% of Hawaii’s population and 40% of its workers in the tourism industry. They also make up 46% of undocumented residents in Hawaii—more than any other group. Some encountered language barriers, since they primarily spoke Tagalog or Ilocano. Some worried that people would try to take over their burned land and develop it for themselves. For many, being asked for DNA samples only added to the confusion and suspicion.

Selden says he hears the overall concerns about DNA testing: “If you ask people about DNA in general, they think of Brave New World and [fear] the information is going to be used to somehow harm or control people.” But just like regular DNA analysis, he explains, rapid DNA analysis “has no information on the person’s appearance, their ethnicity, their health, their behavior either in the past, present, or future.” He describes it as a more accurate fingerprint.

Gin tried to help the Lahaina family members understand that their DNA “isn’t going to go anywhere else.” She told them their sample would ultimately be destroyed, something programmed to occur inside ANDE’s machine. (Selden says the boxes were designed to do this for privacy purposes.) But sometimes, Gin realizes, these promises are not enough.

“You still have a large population of people that, in my experience, don’t want to give up their DNA to a government entity,” she says. “They just don’t.”

Kim Gin
Gin understands that family members are often nervous to give their DNA samples. She promises the process of rapid DNA analysis respects their privacy, but she knows sometimes promises aren’t enough.
BRYAN TARNOWSKI

The immediate aftermath of a disaster, when people are suffering from shock, PTSD, and displacement, is the worst possible moment to try to educate them about DNA tests and explain the technology and privacy policies. “A lot of them don’t have anything,” Gin says. “They’re just wondering where they’re going to lay their heads down, and how they’re going to get food and shelter and transportation.”

Unfortunately, Lahaina’s survivors won’t be the last people in this position. Particularly given the world’s current climate trajectory, the risk of deadly events in just about every neighborhood and community will rise. And figuring out who survived and who didn’t will be increasingly difficult. Mann recalls his work on the Indian Ocean tsunami, when over 227,000 people died. “The bodies would float off, and they ended up 100 miles away,” he says. Investigators were at times left with remains that had been consumed by sea creatures or degraded by water and weather. He remembers how they struggled to determine: “Who is the person?”

Mann has spent his own career identifying people including “missing soldiers, sailors, airmen, Marines, from all past wars,” as well as people who have died recently. That closure is meaningful for family members, some of them decades, or even lifetimes, removed.

In the end, distrust and conspiracy theories did in fact hinder DNA-identification efforts on Maui, according to a police department report.


Thirty-three days

By the time Raven went to a family resource center to submit a swab, some four weeks had gone by. He remembers the quick rub inside his cheek.

Some of his family had already offered their own samples before Raven provided his. For them, waiting wasn’t an issue of mistrusting the testing as much as experiencing confusion and chaos in the weeks after the fire. They believed Uncle Raffy was still alive, and they still held hope of finding him. Offering DNA was a final step in their search.

“I did it for my mom,” Raven says. She still wanted to believe he was alive, but Raven says: “I just had this feeling.” His father, he told himself, must be gone.

Just a day after he gave his sample—on September 11, more than a month after the fire—he was at the temporary house in Kihei when he got the call: “It was,” Raven says, “an automatic match.”

Raven gave a cheek swab about a month after the disappearance of his father. It didn’t take long for him to get a phone call: “It was an automatic match.”
WINNI WINTERMEYER

The investigators let the family know the address where the remains of Rafael Sr. had been found, several blocks away from their home. They put it into Google Maps and realized it was where some family friends lived. The mother and son of that family had been listed as missing too. Rafael Sr., it seemed, had been with or near them in the end.

By October, investigators in Lahaina had obtained and analyzed 215 DNA samples from family members of the missing. By December, DNA analysis had confirmed the identities of 63 of the most recent count of 101 victims. Seventeen more had been identified by fingerprint, 14 via dental records, and two through medical devices, along with three who died in the hospital. While some of the most damaged remains would still be undergoing DNA testing months after the fires, that timeline is a drastic improvement over the identification processes for 9/11 victims, for instance—today, over 20 years later, some are still being identified by DNA.

Rafael Imperial Sr.
Raven remembers how much his father loved karaoke. His favorite song was “My Way,” by Frank Sinatra. 
COURTESY OF RAVEN IMPERIAL

Rafael Sr. was born on October 22, 1959, in Naga City, the Philippines. The family held his funeral on his birthday last year. His relatives flew in from Michigan, the Philippines, and California.

Raven says in those weeks of waiting—after all the false tips, the searches, the prayers, the glimmers of hope—deep down the family had already known he was gone. But for Evelyn, Raphael Jr., and the rest of their family, DNA tests were necessary—and, ultimately, a relief, Raven says. “They just needed that closure.”

Erika Hayasaki is an independent journalist based in Southern California.

Want less mining? Switch to clean energy.

Political fights over mining and minerals are heating up, and there are growing environmental and sociological concerns about how to source the materials the world needs to build new energy technologies. 

But low-emissions energy sources, including wind, solar, and nuclear power, have a smaller mining footprint than coal and natural gas, according to a new report from the Breakthrough Institute released today.

The report’s findings add to a growing body of evidence that technologies used to address climate change will likely lead to a future with less mining than a world powered by fossil fuels. However, experts point out that oversight will be necessary to minimize harm from the mining needed to transition to lower-emission energy sources. 

“In many ways, we talk so much about the mining of clean energy technologies, and we forget about the dirtiness of our current system,” says Seaver Wang, an author of the report and co-director of Climate and Energy at the Breakthrough Institute, an environmental research center.  

In the new analysis, Wang and his colleagues considered the total mining footprint of different energy technologies, including the amount of material needed for these energy sources and the total amount of rock that needs to be moved to extract that material.

Many minerals appear in small concentrations in source rock, so the process of extracting them has a large footprint relative to the amount of final product. A mining operation would need to move about seven kilograms of rock to get one kilogram of aluminum, for instance. For copper, the ratio is much higher, at over 500 to one. Taking these ratios into account allows for a more direct comparison of the total mining required for different energy sources. 

With this adjustment, it becomes clear that the energy source with the highest mining burden is coal. Generating one gigawatt-hour of electricity with coal requires 20 times as much mining as generating the same electricity with low-carbon power sources like wind and solar. Producing that electricity with natural gas requires moving about twice as much rock.
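As a toy illustration of how the report's adjustment works, here is a short Python sketch. The two strip ratios are the ones quoted above; the per-gigawatt-hour material figures are made-up placeholders for illustration, not numbers from the Breakthrough Institute report.

```python
# Illustrative sketch of the "rock moved" adjustment described above.
# Strip ratios: kg of rock that must be moved per kg of usable material.
strip_ratio = {
    "aluminum": 7,    # quoted in the article
    "copper": 500,    # quoted in the article ("over 500 to one")
}

# Hypothetical material needs (kg per GWh generated), for illustration only.
material_per_gwh = {
    "solar": {"aluminum": 1000, "copper": 200},
}

def rock_moved_per_gwh(tech):
    """Total kg of rock moved per GWh, summing each material's strip ratio."""
    return sum(kg * strip_ratio[mineral]
               for mineral, kg in material_per_gwh[tech].items())

print(rock_moved_per_gwh("solar"))  # 1000*7 + 200*500 = 107000
```

Even with modest material needs, a high-strip-ratio mineral like copper dominates the total, which is why the adjustment changes the comparison so much.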

Tallying up the amount of rock moved is an imperfect approximation of the potential environmental and sociological impact of mining related to different technologies, Wang says, but the report’s results allow researchers to draw some broad conclusions. One is that we’re on track for less mining in the future. 

Other researchers have projected a decrease in mining accompanying a move to low-emissions energy sources. “We mine so many fossil fuels today that the sum of mining activities decreases even when we assume an incredibly rapid expansion of clean energy technologies,” Joey Nijnens, a consultant at Monitor Deloitte and author of another recent study on mining demand, said in an email.

That being said, potentially moving less rock around in the future “hardly means that society shouldn’t look for further opportunities to reduce mining impacts throughout the energy transition,” Wang says.

There’s already been progress in cutting down on the material required for technologies like wind and solar. Solar modules have gotten more efficient, so the same amount of material can yield more electricity generation. Recycling can help further cut material demand in the future, and it will be especially crucial to reduce the mining needed to build batteries.  

Resource extraction may decrease overall, but it’s also likely to increase in some places as our demands change, researchers pointed out in a 2021 study. Between 32% and 40% of the mining increase in the future could occur in countries with weak, poor, or failing resource governance, where mining is more likely to harm the environment and may fail to benefit people living near the mining projects. 

“We need to ensure that the energy transition is accompanied by responsible mining that benefits local communities,” Takuma Watari, a researcher at the National Institute for Environmental Studies and an author of the study, said via email. Otherwise, the shift to lower-emissions energy sources could lead to a reduction of carbon emissions in the Global North “at the expense of increasing socio-environmental risks in local mining areas, often in the Global South.” 

Strong oversight and accountability are crucial to make sure that we can source minerals in a responsible way, Wang says: “We want a rapid energy transition, but we also want an energy transition that’s equitable.”

How to build a thermal battery

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The votes have been tallied, and the results are in. The winner of the 11th Breakthrough Technology, 2024 edition, is … drumroll please … thermal batteries! 

While the editors of MIT Technology Review choose the annual list of 10 Breakthrough Technologies, in 2022 we started having readers weigh in on an 11th technology. And I don’t mean to flatter you, but I think you picked a fascinating one this year. 

Thermal energy storage is a convenient way to stockpile energy for later. This could be crucial in connecting cheap but inconsistent renewable energy with industrial facilities, which often require a constant supply of heat. 

I wrote about why this technology is having a moment, and where it might wind up being used, in a story published Monday. For the newsletter this week, let’s take a deeper look at the different kinds of thermal batteries out there, because there’s a wide world of possibilities. 

Step 1: Choose your energy source

In the journey to build a thermal battery, the crucial first step is to choose where your heat comes from. Most of the companies I’ve come across are building some sort of power-to-heat system, meaning electricity goes in and heat comes out. Heat often gets generated by running a current through a resistive material in a process similar to what happens when you turn on a toaster.

Some projects may take electricity directly from sources like wind turbines or solar panels that aren’t hooked up to the grid. That could reduce energy costs, since you don’t have to pay surcharges built into grid electricity rates, explains Jeffrey Rissman, senior director of industry at Energy Innovation, a policy and research firm specializing in energy and climate. 

Otherwise, thermal batteries can be hooked up to the grid directly. These systems could allow a facility to charge up when electricity prices are low or when there’s a lot of renewable energy on the grid. 

Some thermal storage systems are soaking up waste heat rather than relying on electricity. Brenmiller Energy, for example, is building thermal batteries that can be charged up with heat or electricity, depending on the customer’s needs. 

Depending on the heat source, systems using waste heat may not be able to reach temperatures as high as their electricity-powered counterparts, but they could help increase the efficiency of facilities that would otherwise waste that energy. There’s especially high potential for high-temperature processes, like cement and steel production. 

Step 2: Choose your storage material

Next up: pick out a heat storage medium. These materials should probably be inexpensive and able to reach and withstand high temperatures. 

Bricks and carbon blocks are popular choices, as they can be packed together and, depending on the material, reach temperatures well over 1,000 °C (1,800 °F). Rondo Energy, Antora Energy, and Electrified Thermal Solutions are among the companies using blocks and bricks to store heat at these high temperatures. 

Crushed-up rocks are another option, and the storage medium of choice for Brenmiller Energy. Caldera is using a mixture of aluminum and crushed rock. 

Molten materials can offer even more options for delivering thermal energy later, since they can be pumped around (though this can also add more complexity to the system). Malta is building thermal storage systems that use molten salt, and companies like Fourth Power are using systems that rely in part on molten metals. 

Step 3: Choose your delivery method

Last, and perhaps most important, is deciding how to get energy back out of your storage system. Generally, thermal storage systems can deliver heat, use it to generate electricity, or go with some combination of the two. 

Delivering heat is the most straightforward option. Typically, air or another gas gets blown over the hot thermal storage material, and that heated gas can be used to warm up equipment or to generate steam. 

Some companies are working to use heat storage to deliver electricity instead. This could allow thermal storage systems to play a role not only in industry but potentially on the electrical grid as an electricity storage solution. One downside? These systems generally take a hit on efficiency: less of the stored energy makes it back out as electricity. But they may be right for some situations, such as facilities that need both heat and electricity on demand. Antora Energy is aiming to use thermophotovoltaic materials to turn heat stored in its carbon blocks back into electricity.

Some companies plan to offer a middle path, delivering a combination of heat and electricity, depending on what a facility needs. Rondo Energy’s heat batteries can deliver high-pressure steam that can be used either for heating alone or to generate some electricity using cogeneration units. 

The possibilities are seemingly endless for thermal batteries, and I’m seeing new players with new ideas all the time. Stay tuned for much more coverage of this hot technology (sorry, I had to). 


Now read the rest of The Spark

Related reading

Read more about why thermal batteries won the title of 11th breakthrough technology in my story from Monday.

I first wrote about heat as energy storage in this piece last year. As I put it then: the hottest new climate technology is bricks. 

Companies have made some progress in scaling up thermal batteries—our former fellow June Kim wrote about one new manufacturing facility in October.

VIRGINIA HANUSIK

Another thing

The state of Louisiana in the southeast US has lost over a million acres of its coast to erosion. A pilot project aims to save some homes in the state by raising them up to avoid the worst of flooding. 

It’s an ambitious attempt to build a solution to a crisis, and the effort could help keep communities together. But some experts worry that elevation projects offer too rosy an outlook and think we need to focus on relocation instead. Read more in this fascinating feature story from Xander Peters.

Keeping up with climate  

It can be easy to forget, but we’ve actually already made a lot of progress on addressing climate change. A decade ago, the world was on track for about 3.7 °C of warming over preindustrial levels. Today, it’s 2.7 °C with current actions and policies—higher than it should be but lower than it might have been. (Cipher News)

We’re probably going to have more batteries than we actually need for a while. Today, China alone makes enough batteries to satisfy global demand, which could make things tough for new players in the battery game. (Bloomberg)

2023 was a record year for wind power. The world installed 117 gigawatts of new capacity last year, 50% more than the year before. (Associated Press)

Here’s what’s coming next for offshore wind. (MIT Technology Review)

Coal power grew in 2023, driven by a surge of new plants coming online in China and a slowdown of retirements in Europe and the US. (New York Times)

People who live near solar farms generally have positive feelings about their electricity-producing neighbors. There’s more negative sentiment among people who live very close to the biggest projects, though. (Inside Climate News)

E-scooters have been zipping through city streets for eight years, but they haven’t exactly ushered in the zero-emissions micro-mobility future that some had hoped for. Shared scooters can cut emissions, but it all depends on rider behavior and company practices. (Grist)

The grid could use a renovation. Replacing existing power lines with new materials could double grid capacity in many parts of the US, clearing the way for more renewables. (New York Times)

The first all-electric tugboat in the US is about to launch in San Diego. The small boats are crucial to help larger vessels in and around ports, and the fossil-fuel-powered ones are a climate nightmare. (Canary Media)

How to reopen a nuclear power plant

A shut-down nuclear power plant in Michigan could get a second life thanks to a $1.52 billion loan from the US Department of Energy. If successful, it would be the first shuttered nuclear power plant in the US to reopen.

Palisades Power Plant shut down on May 20, 2022, after 50 years of generating low-carbon electricity. But the plant’s new owner thinks economic conditions have improved in the past few years and plans to reopen by the end of 2025.

A successful restart would be a major milestone for the US nuclear fleet, and the reactor’s 800 megawatts of capacity could help inch the country closer to climate goals. But reopening isn’t as simple as flipping on a light switch—there are technical, administrative, and regulatory hurdles ahead before Palisades can start operating again. Here’s what it takes to reopen a nuclear power plant.

Step 1: Stay ready

One of the major reasons Palisades has any shot of restarting is that the site’s new owner has been planning on this for years. “Technically, the stars had all aligned for the plant to stay operating,” says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank.

Holtec International supplies equipment for nuclear reactors and waste and provides services like decommissioning nuclear plants. Holtec originally purchased Palisades with the intention of shutting it down, taking apart the facilities, and cleaning up the site. The company has decommissioned other recently shuttered nuclear plants, including Indian Point Energy Center in New York. 

Changing economic conditions have made continued operation too expensive to justify for many nuclear power plants, especially smaller ones. Those with a single, relatively small reactor, like Palisades, have been the most vulnerable.  

Once a nuclear power plant shuts down, it can quickly become difficult to start it back up. As with a car left out in the yard, White says, “you expect some degradation.” Maintenance and testing of critical support systems might slow down or stop. Backup diesel generators, for example, would need to be checked and tested regularly while a reactor is online, but they likely wouldn’t be treated the same way after a plant’s shutdown, White says.

Holtec took possession of Palisades in 2022 after the reactor shut down and the fuel was removed. Even then, there were already calls to keep the plant’s low-carbon power on the grid, says Nick Culp, senior manager for government affairs and communications at Holtec.

The company quickly pivoted and decided to try to keep the plant open, so records and maintenance work largely continued. “It looks like it shut down yesterday,” Culp says.

Because of the continued investment of time and resources, starting the plant back up will be more akin to restarting after a regular refueling or maintenance outage than starting a fully defunct plant. After maintenance is finished and fresh fuel loaded in, the Palisades reactor could restart and provide enough electricity for roughly 800,000 homes.

Step 2: Line up money and permission

Support has poured in for Palisades, with the state of Michigan setting aside $300 million in funding for the plant’s restart in the last two years. And now, the Department of Energy has issued a conditional loan commitment for $1.52 billion.

Holtec will need to meet certain technical and legal conditions to get the loan money, which will eventually be repaid with interest. (Holtec and the DOE Loan Programs Office declined to give more information about the loan’s conditions or timeline.)

The state funding and federal loan will help support the fixes and upgrades needed for the plant’s equipment and continue paying the approximately 200 workers who have stayed on since its shutdown. The plant employed about 700 people while it was operating, and the company is now working on rehiring additional workers to help with the restart, Culp says.  

One of the major remaining steps in a possible Palisades restart is getting authorization from regulators, as no plant in the US has restarted operations after shutting down. “We’re breaking new ground here,” says Jacopo Buongiorno, a professor of nuclear engineering at MIT. 

The Nuclear Regulatory Commission oversees nuclear power plants in the US, but the agency doesn’t have a specific regulatory framework for restarting operations at a nuclear power plant that has shut down and entered decommissioning, White says. The NRC created a panel that will oversee reopening efforts.

Palisades effectively gave up the legal right to operate when it shut down and took the fuel out of the reactor. Holtec will need to submit detailed plans to the NRC with information about how it plans to reopen and operate the plant safely. Holtec formally began the process of reauthorizing operations with the NRC in October 2023 and plans to submit the rest of its materials this year.

Step 3: Profit?

If regulators sign off, the plan is to have Palisades up and running again by the end of 2025. The fuel supply is already lined up, and the company has long-term buyers committed for the plant’s full power output, Culp says.

If all goes well, the plant could be generating power until at least 2051, 80 years after it originally began operations.

Expanded support for low-carbon electricity sources, and nuclear in particular, has helped make it possible to extend the life of older plants across the US. “This restart of a nuclear plant represents a sea change in support for clean firm power,” says Julie Kozeracki, a senior advisor for the US Department of Energy’s Loan Programs Office.

As of last year, a majority of Americans (57%) support more nuclear power in the country, up from 43% in 2016, according to a poll from the Pew Research Center. There’s growing funding available for the technology as well, including billions of dollars in tax credits for nuclear and other low-carbon energy included in the Inflation Reduction Act.

Growing support and funding, alongside rising electricity prices, contribute to making existing nuclear plants much more valuable than they were just a few years ago, says MIT’s Buongiorno. “Everything has changed,” he adds.   

But even a successful Palisades restart wouldn’t mean that we’ll see a wave of other shuttered nuclear plants reopening around the US. “This is a really rare case where you had someone doing a lot of forward thinking,” White says. For other plants that are nearing decommissioning, it would be cheaper, simpler, and more efficient to extend their operations rather than allowing them to shut down in the first place. 

Update: This story has been updated with additional details regarding how the NRC may reauthorize Palisades Nuclear Plant.

Why the lifetime of nuclear plants is getting longer

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Aging can be scary. As you get older, you might not be able to do everything you used to, and it can be hard to keep up with the changing times. Just ask nuclear reactors.

The average age of reactors in nuclear power plants around the world is creeping up. In the US, which has more operating reactors than any other country, the average reactor is 42 years old, as of 2023. Nearly 90% of reactors in Europe have been around for 30 years or more.

Older reactors, especially smaller ones, have been shut down in droves due to economic pressures, particularly in areas with other inexpensive sources of electricity, like cheap natural gas. But there could still be a lot of life left in older nuclear reactors. 

The new owner of a plant in Michigan that was shut down in 2022 is now working to reopen it, as I reported in my latest story. If the restart is successful, the plant could operate for a total of 80 years. Others are seeing 20-year extensions to their reactors’ licenses. Extending the lifetime of existing nuclear plants could help cut emissions and is generally cheaper than building new ones. So just how long can we expect nuclear power plants to last? 

In the US, the Nuclear Regulatory Commission (NRC) licenses nuclear reactors for 40-year operating lifespans. But plants can certainly operate longer than that, and many do. 

The 40-year timeline wasn’t designed to put an endpoint on a plant’s life, says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank. Rather, it was meant to ensure that plants would be able to operate long enough to make back the money invested in building them, he says. 

The NRC has granted 20-year license extensions to much of the existing US nuclear fleet, allowing those plants to operate for 60 years. Now some operators are applying for an additional extension. A handful of reactors have already been approved to operate for a total of 80 years, including two units at Turkey Point in Florida. Getting those extensions has been bumpy, though. The NRC has since partially walked back some of its approvals and is requiring several of the previously approved sites to go through additional environmental reviews using more recent data. 

And while the oldest operating reactors in the world today are only 54 years old, there’s already early research investigating extending lifetimes to 100 years, White says. 

The reality is that a nuclear power plant has very few truly life-limiting components. Equipment like pumps, valves, and heat exchangers in the water cooling system and support infrastructure can all be maintained, repaired, or replaced. They might even get upgraded as technology improves to help a plant generate electricity more efficiently. 

Two main components determine a plant’s lifetime: the reactor pressure vessel and the containment structure, says Jacopo Buongiorno, a professor of nuclear engineering at MIT. 

  • The reactor pressure vessel is the heart of a nuclear power plant, containing the reactor core as well as the associated cooling system. The structure must keep the reactor core at a high temperature and pressure without leaking. 
  • The containment structure is a shell around the nuclear reactor. It is designed to be airtight and to keep any radioactive material contained in an emergency. 

Both components are crucial to the safe operation of a nuclear power plant and are generally too expensive or too difficult to replace. So as regulators examine applications for extending plant lifetimes, they are most concerned about the condition and lifespan of those components, Buongiorno says. 

Researchers are searching for new ways to tackle issues that have threatened to take some plants offline, like the corrosion that chewed through reactor components in one Ohio plant, causing it to be closed for two years. New ways of monitoring the materials inside nuclear power plants, as well as new materials that resist degradation, could help reactors operate more safely, for longer. 

Extending the lifetime of nuclear plants could help the world meet clean energy and climate goals. 

In some places, shutting down nuclear power plants can result in more carbon pollution as fossil fuels are brought in to fill the gap. When New York shut down its Indian Point nuclear plant in 2021, natural gas use spiked and greenhouse gas emissions rose.

Germany shut down the last of its nuclear reactors in 2023, and the country’s emissions have fallen to a record low, though some experts say most of that drop has more to do with an economic slowdown than increasing use of renewables like wind and solar. 

Extending the global nuclear fleet’s lifetime by 10 years would add 26,000 terawatt-hours of low-carbon electricity to the grid over the coming decades, according to a report from the International Atomic Energy Agency. That adds up to roughly a year’s worth of current global electricity demand. That could help cut emissions while the world expands low-carbon power capacity. 
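That equivalence is easy to sanity-check with ballpark public figures. The two inputs below are rough assumptions on my part, not numbers from the IAEA report: the world's reactors currently generate on the order of 2,600 TWh a year, and the world consumes roughly 27,000 TWh annually.

```python
# Rough sanity check of the IAEA figure, using ballpark public numbers.
nuclear_output_twh_per_year = 2_600   # approximate current global nuclear generation
extension_years = 10

extra_twh = nuclear_output_twh_per_year * extension_years
print(extra_twh)  # 26000, matching the report's 26,000 TWh

global_demand_twh_per_year = 27_000   # approximate current global electricity demand
print(round(extra_twh / global_demand_twh_per_year, 2))  # 0.96, roughly one year's demand
```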

So when it comes to cleaning up the power grid, there’s value in respecting your elders, including nuclear reactors. 


Now read the rest of The Spark

Related reading

A nuclear power plant in Michigan could be the first reactor in the US to reenter operation after shutting down, as I wrote in my latest story.

Germany shut down the last of its nuclear reactors in 2023 after years of controversy in the country. Read more in our newsletter from last April.  

The next generation of nuclear reactors is getting more advanced. Kairos Power is working on cooling its reactors with salt instead of pressurized water, as I reported in January.

Another thing

A total solar eclipse will sweep across the US on Monday, April 8. Yes, it will affect solar power, especially in states like Texas that have installed a lot of solar capacity since the 2017 eclipse. No, it probably won’t be a big issue for utilities, which are able to plan far in advance for the short dip in solar capacity. Read more in this story from Business Insider. 

Keeping up with climate  

Tesla’s EV sales slipped in the first quarter compared to last year. The automaker still outsold Chinese EV giant BYD, which briefly held the crown for EV sales in late 2023. (New York Times)

A startup is making cleaner steel in a commercial prototype. Electra wants to help tackle the 7% of global emissions that come from producing the material. (Bloomberg)

Burying plant waste can help remove carbon dioxide from the atmosphere. But there are problems with biomass burial, a growing trend in carbon removal. (Canary Media)

Shareholders are voting on whether recycling labels on Kraft Heinz products are deceptive. It’s part of a growing pushback against companies overselling the recyclability of their packaging. (Inside Climate News)

→ Think your plastic is being recycled? Think again. (MIT Technology Review)

Soil in Australia is shaping up to be a major climate problem. While soil is often pitched as a way to soak up carbon emissions, agricultural practices and changing weather conditions are turning things around. (The Guardian)

Two climate journalists attempted to ditch natural gas in their home. But electrification turned into quite the saga, illustrating some of the problems with efforts to decarbonize buildings. (Grist)

Solar panels are getting so cheap, some homes in Europe are sticking them on fences. With costs now driven more by installation than by the panels themselves, we could see them going up in increasingly quirky places. (Financial Times)

Why methane emissions are still a mystery

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you follow papers in climate and energy for long enough, you’re bound to recognize some patterns. 

There are a few things I’ll basically always see when I’m sifting through the latest climate and energy research: one study finding that perovskite solar cells are getting even more efficient; another showing that climate change is damaging an ecosystem in some strange and unexpected way. And there’s always some new paper finding that we’re still underestimating methane emissions. 

That last one is what I’ve been thinking about this week, as I’ve been reporting on a new survey of methane leaks from oil and gas operations in the US. (Yes, there are more emissions than we thought there were—get the details in my story here.) But what I find even more interesting than the consistent underestimation of methane is why this gas is so tricky to track down. 

Methane is the second most abundant greenhouse gas in the atmosphere, and it’s responsible for around 30% of global warming so far. The good news is that methane breaks down quickly in the atmosphere. The bad news is that while it’s floating around, it’s a super-powerful greenhouse gas, way more potent than carbon dioxide. (Just how much more potent is a complicated question that depends on what time scale you’re talking about—read more in this Q&A.)

The problem is, it’s difficult to figure out where all this methane is coming from. We can measure the total concentration in the atmosphere, but there are methane emissions from human activities, there are natural methane sources, and there are ecosystems that soak up a portion of all those emissions (these are called methane sinks). 

Narrowing down specific sources can be a challenge, especially in the oil and gas industry, which is responsible for a huge range of methane leaks. Some are small and come from old equipment in remote areas. Other sources are larger, spewing huge amounts of the greenhouse gas into the atmosphere but only for short times. 

A lot of stories about tracking methane have been in the news recently, mostly because of a methane-hunting satellite launched earlier this month. It’s designed to track down methane using tools called spectrometers, which measure how light is reflected and absorbed. 

This is just one of a growing number of satellites that are keeping an eye on the planet for methane emissions. Some take a wide view, spotting which regions have high emissions. Other satellites are hunting for specific sources and can see within a few dozen meters where a leak is coming from. (If you want to read more about why there are so many methane satellites, I recommend this story from Emily Pontecorvo at Heatmap.)

But methane tracking isn’t just a space game. In a new study published in Nature, researchers used nearly a million measurements taken from airplanes flown over oil- and gas-producing regions to estimate total emissions. 

The results are pretty staggering: researchers found that, on average, roughly 3% of oil and gas production at the sites they examined winds up as methane emissions. That’s about three times the official government estimates used by the US Environmental Protection Agency. 
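Working backward from those two rounded figures gives the official estimate the measurements are being compared against. This is a quick derivation from the numbers quoted in this article, not a figure taken from the study itself.

```python
# Derive the implied official leak-rate estimate from the article's numbers.
measured_leak_rate = 0.03   # ~3% of production escaping as methane
ratio_vs_official = 3       # "about three times" the official estimate

implied_official_rate = measured_leak_rate / ratio_vs_official
print(f"{implied_official_rate:.0%}")  # 1%
```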

I spoke with one of the authors of the study, Evan Sherwin, who completed the research as a postdoc at Stanford. He compared the challenge of understanding methane leaks to the parable of the blind men and the elephant: there are many pieces of the puzzle (satellites, planes, ground-based detection), and getting the complete story requires fitting them all together. 

“I think we’re really starting to see an elephant,” Sherwin told me. 

That picture will continue to get clearer as MethaneSAT and other surveillance satellites come online and researchers get to sift through the data. And that understanding will be crucial as governments around the world race to keep promises about slashing methane emissions. 


Now read the rest of The Spark

Related reading

For more on how researchers are working to understand methane emissions, give my latest story a read

If you’ve missed the news on methane-hunting satellites, check out this story about MethaneSAT from last month

Pulling methane out of the atmosphere could be a major boost for climate action. Some startups hope that spraying iron particles above the ocean could help, as my colleague James Temple wrote in December

PHOTO ILLUSTRATION | GETTY IMAGES

Another thing

Making minor changes to airplane routes could put a significant dent in aviation's climate impact, and a new study found that these changes could be cheap to implement. 

The key is contrails, thin clouds that planes produce when they fly. Minimizing contrails means less warming, and changing flight paths can reduce the amount of contrail formation. Read more about how in the latest from my colleague James Temple

Keeping up with climate  

New rules from the US Securities and Exchange Commission were watered down, cutting off the best chance we’ve had at forcing companies to reckon with the dangers of climate change, as Dara O’Rourke writes in a new opinion piece. (MIT Technology Review)

Yes, heat pumps slash emissions, even if they’re hooked up to a pretty dirty grid. Switching to a heat pump is better than heating with fossil fuels basically everywhere in the US. (Canary Media)

Rivian announced its new R2, a small SUV set to go on sale in 2026. The reveal signals the brand's shift toward mass-market vehicles. (Heatmap)

Toyota has focused on selling hybrid vehicles instead of fully electric ones, and it’s paying off financially. (New York Times)

→ Here’s why I wrote in December 2022 that EVs wouldn’t be fully replacing hybrids anytime soon. (MIT Technology Review)

Some scientists think we should all pay more attention to tiny aquatic plants called azolla. They can fix their own nitrogen and capture a lot of carbon, making them a good candidate for crops and even biofuels. (Wired)

New York is suing the world’s largest meat company. The company has said it’ll produce meat with no emissions by 2040, a claim that is false and misleading, according to the New York attorney general’s office. (Vox)

A massive fire in Texas has destroyed hundreds of homes. Climate change has fueled dry conditions, and power equipment sparked an intense fire that firefighters struggled to contain. (Grist)

→ Many of the homes destroyed in the blaze are uninsured, creating a tough path ahead for recovery. (Texas Tribune)

Methane leaks in the US are worse than we thought

Methane emissions in the US are worse than scientists previously estimated, a new study has found.

The study, published today in Nature, represents one of the most comprehensive surveys yet of methane emissions from US oil- and gas-producing regions. Using measurements taken from planes, the researchers found that emissions from many of the targeted areas were significantly higher than official government estimates. The undercounting highlights the urgent need for new and better ways of tracking the powerful greenhouse gas.

Methane emissions are responsible for nearly a third of the total warming the planet has experienced so far. While there are natural sources of the greenhouse gas, including wetlands, human activities like agriculture and fossil-fuel production have dumped millions of metric tons of additional methane into the atmosphere. The concentration of methane has more than doubled over the past 200 years. But there are still large uncertainties about where, exactly, emissions are coming from.

Answering these questions is a challenging but crucial first step to cutting emissions and addressing climate change. To do so, researchers are using tools ranging from satellites like the recently launched MethaneSAT to ground and aerial surveys. 

The US Environmental Protection Agency estimates that roughly 1% of oil and gas produced winds up leaking into the atmosphere as methane pollution. But survey after survey has suggested that the official numbers underestimate the true extent of the methane problem.  

For the sites examined in the new study, “methane emissions appear to be higher than government estimates, on average,” says Evan Sherwin, a research scientist at Lawrence Berkeley National Laboratory, who conducted the analysis as a postdoctoral fellow at Stanford University.  

The data Sherwin used comes from one of the largest surveys of US fossil-fuel production sites to date. Starting in 2018, Kairos Aerospace and the Carbon Mapper Project mapped six major oil- and gas-producing regions, which together account for about 50% of onshore oil production and about 30% of gas production. Planes flying overhead gathered nearly 1 million measurements of well sites using spectrometers, which can detect methane using specific wavelengths of light. 

Sherwin et al., Nature

Here’s where things get complicated. Methane sources in oil and gas production come in all shapes and sizes. Some small wells slowly leak the gas at a rate of roughly one kilogram of methane an hour. Other sources are significantly bigger, emitting hundreds or even thousands of kilograms per hour, but these leaks may last for only a short period.

The planes used in these surveys detect mostly the largest leaks, above roughly 100 kilograms per hour (though they catch smaller ones sometimes, down to around one-tenth that size, Sherwin says). Combining measurements of these large leak sites with modeling to estimate smaller sources, researchers estimated that the larger leaks account for an outsize proportion of emissions. In many cases, around 1% of well sites can make up over half the total methane emissions, Sherwin says.
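The outsize role of a few big leaks falls out of the heavy-tailed way leak sizes are distributed. As a rough illustration (this is not the study's actual model, and the distribution parameters below are assumptions chosen only to produce a heavy tail), drawing per-site leak rates from a lognormal distribution shows how the top 1% of sites can dominate the total:

```python
import random

random.seed(0)

# Hypothetical sketch: per-site leak rates (kg CH4/hour) drawn from a
# heavy-tailed lognormal distribution. The mu/sigma values are illustrative
# assumptions, not fitted to the survey data.
N_SITES = 100_000
rates = [random.lognormvariate(-1.0, 2.5) for _ in range(N_SITES)]

# What share of total emissions comes from the biggest 1% of sites?
rates.sort(reverse=True)
share = sum(rates[: N_SITES // 100]) / sum(rates)

print(f"Top 1% of sites account for {share:.0%} of total emissions")
```

With a sufficiently skewed distribution, the top 1% of sites ends up responsible for roughly half of all emissions, mirroring the pattern the surveys report.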

But some scientists say that this and other studies are still limited by the measurement tools available. “This is an indication of the current technology limits,” says Ritesh Gautam, a lead senior scientist at the Environmental Defense Fund.

Because the researchers used aerial measurements to detect large methane leaks and modeled smaller sources, it's possible that the study may be overestimating the importance of the larger leaks, Gautam says. He pointed to several other recent studies that found smaller wells contributing a larger fraction of methane emissions than the new analysis suggests.

The problem is, it’s basically impossible to use just one instrument to measure all these different methane sources. We’ll need all the measurement technologies available to get a clearer picture, Gautam explains.

Ground-based tools attached to towers can keep constant watch over an area and detect small emissions sources, though they generally can't survey large regions. Aerial surveys using planes can cover more ground but tend to detect only larger leaks. They also represent a snapshot in time, so they can miss sources that leak methane only intermittently.

And then there are the satellites. Earlier this month, Google and EDF launched MethaneSAT, which joined the growing constellation of methane-detecting satellites orbiting the planet. Some of the existing satellites map huge areas, getting detail only on the order of kilometers. Others have much higher resolution, with the ability to pin methane emissions down to within a few dozen meters. 

Satellites will be especially helpful in finding out more about the many countries around the world that haven't been as closely measured and mapped as the US has, Gautam says. 

Understanding methane emissions is one thing; actually addressing them is another matter. After identifying a leak, companies then need to take actions like patching faulty pipelines or other equipment, or closing up the vents and flares that routinely release methane into the atmosphere. Roughly 40% of methane emissions from oil and gas production could be avoided at no net cost, since the money saved by not losing the methane is more than enough to cover the cost of the abatement, according to estimates from the International Energy Agency.
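The "no net cost" logic comes down to a break-even calculation: captured methane can be sold as natural gas, offsetting the price of the repair. A back-of-envelope sketch, where the energy content, gas price, and repair cost are all illustrative assumptions rather than figures from the IEA analysis:

```python
# Illustrative assumptions, not IEA figures.
ENERGY_MJ_PER_KG = 50.0      # approximate energy content of methane
MJ_PER_MMBTU = 1055.0        # unit conversion
GAS_PRICE_PER_MMBTU = 3.0    # assumed wholesale gas price, $/MMBtu

def value_of_captured_methane(tonnes: float) -> float:
    """Market value ($) of methane kept in the pipeline instead of leaked."""
    mmbtu = tonnes * 1000 * ENERGY_MJ_PER_KG / MJ_PER_MMBTU
    return mmbtu * GAS_PRICE_PER_MMBTU

# A repair is "no net cost" when the recovered gas is worth more than the fix.
abatement_cost = 100.0  # assumed repair cost, $/tonne of methane abated
value = value_of_captured_methane(1.0)
print(f"Recovered gas: ~${value:.0f}/tonne vs repair cost ${abatement_cost:.0f}/tonne")
print("No net cost" if value > abatement_cost else "Net cost")
```

Under these assumed numbers the recovered gas is worth more than the repair, which is the situation the IEA estimates applies to roughly 40% of the sector's emissions.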

Over 100 countries joined the Global Methane Pledge in 2021, taking on a goal of cutting methane emissions 30% from 2020 levels by the end of the decade. New rules for oil and gas producers announced by the Biden administration could help the US meet those targets. Earlier this year, the EPA released details of a proposed methane fee for fossil-fuel companies, to be calculated on the basis of excess methane released into the atmosphere.

While researchers are slowly getting a better picture of methane emissions, addressing them will be a challenge, as Sherwin notes: “There’s a long way to go.”

Emissions hit a record high in 2023. Blame hydropower.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Hydropower is a staple of clean energy—the modern version has been around for over a century, and it’s one of the world’s largest sources of renewable electricity.

But last year, weather conditions caused hydropower to fall short in a major way, with generation dropping by a record amount. In fact, the decrease was significant enough to have a measurable effect on global emissions. Total energy-related emissions rose by about 1.1% in 2023, and a shortfall of hydroelectric power accounts for 40% of that rise, according to a new report from the International Energy Agency.
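The 1.1% rise and the 40% share come from the IEA report; pairing them with an assumed baseline for global energy-related CO2 emissions (roughly 37 billion tonnes, an illustrative figure) shows the scale of the hydro shortfall in absolute terms:

```python
# Rough arithmetic behind the IEA figures above. The baseline is an
# assumption for illustration; the percentages are from the report.
baseline_gt = 37.0          # assumed global energy-related CO2 emissions, Gt
rise_fraction = 0.011       # ~1.1% increase in 2023
hydro_share_of_rise = 0.40  # hydro shortfall's share of the increase

rise_gt = baseline_gt * rise_fraction
hydro_gt = rise_gt * hydro_share_of_rise

print(f"Total rise: ~{rise_gt * 1000:.0f} Mt CO2")
print(f"Attributable to the hydro shortfall: ~{hydro_gt * 1000:.0f} Mt CO2")
```

That works out to a shortfall on the order of 150-170 million tonnes of CO2 under these assumptions, comparable to the annual emissions of a midsize industrialized country.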

Between year-to-year weather variability and climate change, there could be rocky times ahead for hydropower. Here’s what we can expect from the power source and what it might mean for climate goals. 

Drying up

Hydroelectric power plants use moving water to generate electricity. The majority of plants today use dams to hold back water, creating reservoirs. Operators can allow water to flow through the power plant as needed, creating an energy source that can be turned on and off on demand. 

This dispatchability is a godsend for the grid, especially because some renewables, like wind and solar, aren’t quite so easy to control. (If anyone figures out how to send more sunshine my way, please let me know—I could use more of it.) 

But while most hydroelectric plants do have some level of dispatchability, the power source is still reliant on the weather, since rain and snow are generally what fills up reservoirs. That’s been a problem for the past few years, when many regions around the world have faced major droughts. 

The world actually added about 20 gigawatts of hydropower capacity in 2023, but because of weather conditions, the amount of electricity generated from hydropower fell overall.

The shortfall was especially bad in China, with generation falling by 4.9% there. North America also faced droughts that contributed to hydro’s troubles, partly because El Niño brought warmer and drier conditions. Europe was one of the few places where conditions improved in 2023—mostly because 2022 was an even worse year for drought on the continent.

As hydroelectric plants fell short, fossil fuels like coal and natural gas stepped in to fill the gap, contributing to a rise in global emissions. In total, changes in hydropower output had more of an effect on global emissions than the post-pandemic aviation industry’s growth from 2022 to 2023. 

A trickle

Some of the changes in the weather that caused falling hydropower output last year can be chalked up to expected yearly variation. But in a changing climate, a question looms: Is hydropower in trouble?

The effects of climate change on rainfall patterns can be complicated and not entirely clear. But there are a few key mechanisms by which hydropower is likely to be affected, as one 2022 review paper outlined:

  • Rising temperatures will mean more droughts, since warmer air sucks up more moisture, causing rivers, soil, and plants to dry out more quickly. 
  • Winters will generally be warmer, meaning less snowpack and ice, which often fills up reservoirs in the early spring in places like the western US. 
  • There’s going to be more variability in precipitation, with periods of more extreme rainfall that can cause flooding (meaning water isn’t stored neatly in reservoirs for later use in a power plant).

What all this will mean for electricity generation depends on the region of the world in question. One global study from 2021 found that around half of countries with hydropower capacity could expect to see a 20% reduction in generation once per decade. Another report focused on China found that in more extreme emissions scenarios, nearly a quarter of power plants in the country could see that level of reduced generation consistently. 

It’s not likely that hydropower will slow to a mere trickle, even during dry years. But the grid of the future will need to be prepared for variations in the weather. Having a wide range of electricity sources and tying them together with transmission infrastructure over wide geographic areas will help keep the grid robust and ready for our changing climate. 

Related reading

Droughts across the western US have been cutting into hydropower for years. Here’s how changing weather could affect climate goals in California.

While adaptation can help people avoid the worst impacts of climate change, there’s a limit to how much adapting can really help, as I found when I traveled to El Paso, Texas, famously called the “drought-proof city.”

Drought is creating new challenges for herders, who have to handle a litany of threats to their animals and way of life. Access to data could be key in helping them navigate a changing world.

STEPHANIE ARNETT/MITTR | ENVATO

Another thing

Chinese EVs have entered center stage in the ongoing tensions between the US and China. The vehicles could help address climate change, but the Biden administration is wary of allowing them into the market. There are two major motivations: security and the economy. Read more in my colleague Zeyi Yang’s latest newsletter here

Keeping up with climate  

A new satellite that launched this week will be keeping an eye on methane emissions. Tracking leaks of the powerful greenhouse gas could be key in addressing climate change. (New York Times)

→ This isn’t our first attempt at tracking greenhouse gases from space—but here’s how MethaneSAT is different from other methane-detecting satellites. (Heatmap)

Smarter charging of EVs could be essential to the grid of the future, and California is working on a new program to test it out. (Canary Media)

The magnets that power wind turbines nearly always wind up in a landfill. A new program aims to change that by supporting new methods of recycling. (Grist)

→ One company wants to do without the rare earth metals that are used in today’s powerful magnets. (MIT Technology Review)

Data centers burn through water to keep machinery cool. As more of the facilities pop up, in part to support AI tools like ChatGPT, they could stretch water supplies thin in some places. (The Atlantic)

No US state has been more enthusiastic about heat pumps than Maine. While it might seem an unlikely match—the appliances can lose some of their efficiency in the cold—the state is a success story for the technology. (New York Times)

New rules from the US Securities and Exchange Commission would require companies to report their emissions and expected climate risks. The final version is watered down from an earlier proposal, which would have included a wider variety of emissions. (Associated Press)

The SEC’s new climate rules were a missed opportunity to accelerate corporate action

This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

They also must disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat. 

But crucially, companies will have to do so only when they determine that the information is financially “material,” providing companies considerable latitude over whether they do or don’t provide those details.

The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  

The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that emissions are material.

And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

But the SEC could and should have gone much further. 

After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, what we will end up with are corporate reports of the previous year’s partial emissions, without any way to know what a company actually did to reduce its carbon pollution.

Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

All of which is to say that a new set of SEC carbon accounting and reporting rules that largely replicate the problems with voluntary corporate action, by failing to require consistent and actionable disclosures, isn’t going to drive the changes we need, at the speed we need. 

Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

It is now just good business to look for carbon reductions that actually save money.

The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.

Why concerns over the sustainability of carbon removal are growing

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

There’s a looming problem in the carbon removal space.

By one count, nearly 800 companies around the world are exploring a wide variety of methods for drawing planet-warming greenhouse gas out of the atmosphere and storing it away or putting it to use, a gigantic leap from the five startups I could have named in 2019. Globally, venture investors poured more than $4 billion into this sector between 2020 and the end of last year, according to data provided by PitchBook. 

The trouble is, carbon dioxide removal (CDR) is a very expensive product that, strictly speaking, no one needs right now. It’s not a widget; it’s waste management for invisible garbage, a public good that nobody is eager to pay for.

“CDR is a pure cost, and we’re trying to force it to be something that’s profitable—and the only way you can do that is with public money or through voluntary markets,” says Emily Grubert, an associate professor at Notre Dame, who previously served as deputy assistant secretary in the US Energy Department’s Office of Carbon Management.

Both of those are playing a part to certain degrees. So far, the main markets for carbon removal come from government procurement, which is limited; government subsidies, which don’t cover the cost; and voluntary purchases by corporations and individuals, which are restricted to those willing to pay the true cost of high-quality, reliable removal. You can also use the CO2 as a feedstock in other products, but then you’re generally starting with a high-cost version of a cheap commodity.

Given these market challenges, some investors are scratching their heads as they witness the huge sums flowing into the space.

In a report last summer, the venture capital firm DCVC said that all of the approaches it evaluated faced “multiple feasibility constraints.” It noted that carbon-sucking direct-air-capture factories are particularly expensive, charging customers hundreds of dollars per ton.

“That will still likely be the case in five, seven, even 10 years—which is why we at DCVC are somewhat surprised to see hundreds of millions of dollars in capital flowing into early-stage direct air capture companies,” the authors wrote.

Rachel Slaybaugh, a DCVC partner, said of direct-air capture in the report: “I’m not saying we won’t need it. And I’m not saying there won’t eventually be good businesses here. I’m saying right now the markets are very nascent, and I don’t see how you can possibly make a venture return.” 

In background conversations, several industry insiders I’ve spoken with acknowledge that the number of carbon removal companies is simply unsustainable, and that a sizable share will flame out at some point.

The sector has taken off, in part, because a growing body of studies has found that a huge amount of carbon removal will be needed to keep rising temperatures in check. By some estimates, nations may have to remove 10 billion tons of carbon dioxide a year by midcentury to keep the planet from blowing past 2 °C of warming, or to pull it back into safer terrain.

On top of that, companies are looking for ways to meet their net-zero commitments. For now, some businesses are willing to pay the really high current costs for carbon removal, in part to help the sector scale up. These include Microsoft and companies participating in the $1 billion Frontier program.

At the moment, I’m told, corporate demand is outstripping the availability of reliable forms of carbon removal. There are only a handful of direct-air-capture plants, which take years to construct, and companies are still testing out or scaling up other approaches, like burying biochar and pumping bio-oil deep underground.

Costs are sure to come down, but it’s always going to be relatively expensive to do this well, and there are only so many corporate customers that will be willing to pay the true cost, observers say. So as carbon removal capacity catches up with that corporate demand, the fate of the industry will increasingly depend on how much more help governments are willing to provide—and on how thoughtfully they craft any accompanying rules.

Countries may support the emerging industry through carbon trading markets, direct purchases, mandates on polluters, fuel standards, or other measures. 

It seems safe to assume that nations will continue to dangle more carrots or wield bigger sticks to help the sector along. Notably, the European Commission is developing a framework for certifying carbon dioxide removal, which could allow countries to eventually use various approaches to work toward the EU goal of climate neutrality by 2050. But it’s far from clear that such government support will grow as much and as quickly as investors hope or as entrepreneurs need.

Indeed, some observers argue it’s a “fantasy” that nations will ever fund high-quality carbon removal—on the scale of billions of tons a year—just because climate scientists said they should (see: our decades of inaction on climate change). To put it in perspective, the DCVC report notes that removing 100 billion tons at $100 a ton would add up to $10 trillion—“more than a tenth of global GDP.”
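The DCVC arithmetic quoted above checks out with a one-line calculation. The $100-per-ton price and 100-billion-ton quantity come from the report; the roughly $100-trillion figure for global GDP is an assumption used only to sanity-check the "tenth of global GDP" comparison:

```python
# Sanity-checking the DCVC figures quoted in the text.
price_per_ton = 100    # $ per ton of CO2 removed (from the report)
tons = 100e9           # 100 billion tons (from the report)
global_gdp = 100e12    # assumed global GDP, ~$100 trillion

total_cost = price_per_ton * tons
print(f"Total: ${total_cost / 1e12:.0f} trillion "
      f"({total_cost / global_gdp:.0%} of assumed global GDP)")
```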

Growing financial pressures in the sector could play out in a variety of worrisome ways. 

“One possibility is there’s a bubble and it pops and a lot of investors lose their shirts,” says Danny Cullenward, a climate economist and research fellow with the Institute for Responsible Carbon Removal at American University. 

If so, that could shut down the development of otherwise promising carbon removal methods before we’ve learned how well and affordably they work (or not). 

The other danger is that when an especially frothy sector fizzles, it can turn public or political sentiment against the space and kill the appetite for further investment. This, after all, is precisely what played out after the cleantech 1.0 bubble burst. Conservatives assailed government lending to green startups, and VCs, feeling burned, backed away for the better part of a decade.

But Cullenward fears another possibility even more. As funding runs dry, startups eager to bring in revenue and expand the market may resort to selling cheaper, but less reliable, forms of carbon removal—and lobbying for looser standards to allow them.

He sees a scenario where the sector replicates the sort of widespread credibility problems that have occurred with voluntary carbon offsets, building up big marketplaces that move a lot of money around but don’t achieve all that much for the atmosphere.


Now read the rest of The Spark

Related reading

In December, I highlighted an essay by Grubert and another former DOE staffer, in which they warned that sucking down greenhouse gas to cancel out corporate emissions could come at the expense of more pressing public needs.

In an earlier piece, I explored how the energy, attention, and money flowing into carbon removal could feed unrealistic expectations about how much we can rely on it—and thus how much we can carry on emitting.

My colleague and former editor David Rotman recently dug into the hard lessons of the cleantech 1.0 boom and bust—and the high stakes of the current investment wave.

Keeping up with climate 

In a story out today, Tech Review’s Casey Crownhart explains why hydrogen vehicles may be lurching toward a dead end, as vehicle sales stagnate and fueling stations shut down. (MIT Technology Review)

A Trump victory would be bad news for climate change. In particular, I took a hard look at what it might mean for Joe Biden’s landmark law, the Inflation Reduction Act. (Short answer: nothing good.) (MIT Technology Review)

The Inflation Reduction Act includes a little-known methane fee, which kicks into effect for excess emissions in 2024. Grist reports that the US’s largest oil and gas companies could be on the hook for more than $1 billion, based on recent emissions patterns—marking another reason why, as I reported, Trump would likely try to rescind the provision. (Grist)

The US Securities and Exchange Commission could release long-awaited climate rules as soon as next week, requiring companies to disclose their corporate emissions and exposure to climate risks. Heatmap explores why the SEC is doing this and what it may mean for businesses, climate progress, and the cottage industry forming to conduct emissions accounting.  (Heatmap)