This grim but revolutionary DNA technology is changing how we respond to mass disasters

Seven days

No matter who he called—his mother, his father, his brother, his cousins—the phone would just go to voicemail. Cell service was out around Maui as devastating wildfires swept through the Hawaiian island. But while Raven Imperial kept hoping for someone to answer, he couldn’t keep a terrifying thought from sneaking into his mind: What if his family members had perished in the blaze? What if all of them were gone?

Hours passed; then days. All Raven knew at that point was this: there had been a wildfire on August 8, 2023, in Lahaina, where his multigenerational, tight-knit family lived. But from where he was currently based in Northern California, Raven was in the dark. Had his family evacuated? Were they hurt? He watched from afar as horrifying video clips of Front Street burning circulated online.

Much of the area around Lahaina’s Pioneer Mill Smokestack was totally destroyed by wildfire.
ALAMY

The list of missing residents meanwhile climbed into the hundreds.

Raven remembers how frightened he felt: “I thought I had lost them.”

Raven had spent his youth in a four-bedroom, two-bathroom, cream-colored home on Kopili Street that had long housed not just his immediate family but also around 10 to 12 renters, since home prices were so high on Maui. When he and his brother, Raphael Jr., were kids, their dad put up a basketball hoop outside where they’d shoot hoops with neighbors. Raphael Jr.’s high school sweetheart, Christine Mariano, later moved in, and when the couple had a son in 2021, they raised him there too.

From the initial news reports and posts, it seemed as if the fire had destroyed the Imperials’ entire neighborhood near the Pioneer Mill Smokestack—a 225-foot-high structure left over from the days of Maui’s sugar plantations, which Raven’s grandfather had worked on as an immigrant from the Philippines in the mid-1900s.

Then, finally, on August 11, a call to Raven’s brother went through. He’d managed to get a cell signal while standing on the beach.

“Is everyone okay?” Raven asked.

“We’re just trying to find Dad,” Raphael Jr. told his brother.

Raven Imperial sitting in the grass
From his current home in Northern California, Raven Imperial spent days not knowing what had happened to his family in Maui.
WINNI WINTERMEYER

In the three days following the fire, the rest of the family members had slowly found their way back to each other. Raven would learn that most of his immediate family had been separated for 72 hours: Raphael Jr. had been marooned in Kaanapali, four miles north of Lahaina; Christine had been stuck in Wailuku, more than 20 miles away; both young parents had been separated from their son, who escaped with Christine’s parents. Raven’s mother, Evelyn, had also been in Kaanapali, though not where Raphael Jr. had been.

But no one was in contact with Rafael Sr. Evelyn had left their home around noon on the day of the fire and headed to work. That was the last time she had seen him. The last time they had spoken was when she called him just after 3 p.m. and asked: “Are you working?” He replied “No,” before the phone abruptly cut off.

“Everybody was found,” Raven says. “Except for my father.”

Within the week, Raven boarded a plane and flew back to Maui. He would keep looking for him, he told himself, for as long as it took.


That same week, Kim Gin was also on a plane to Maui. It would take half a day to get there from Alabama, where she had moved after retiring from the Sacramento County Coroner’s Office in California a year earlier. But Gin, now an independent consultant on death investigations, knew she had something to offer the response teams in Lahaina. Of all the forensic investigators in the country, she was one of the few who had experience in the immediate aftermath of a wildfire on the vast scale of Maui’s. She was also one of the rare investigators well versed in employing rapid DNA analysis—an emerging but increasingly vital scientific tool used to identify victims in unfolding mass-casualty events.

Gin started her career in Sacramento in 2001 and was working as the coroner 17 years later when Butte County, California, close to 90 miles north, erupted in flames. She had worked fire investigations before, but nothing like the Camp Fire, which burned more than 150,000 acres—an area larger than the city of Chicago. The tiny town of Paradise, the epicenter of the blaze, didn’t have the capacity to handle the rising death toll. Gin’s office had a refrigerated box truck and a 52-foot semitrailer, as well as a morgue that could handle a couple of hundred bodies.

Kim Gin
Kim Gin, the former Sacramento County coroner, had worked fire investigations in her career, but nothing prepared her for the 2018 Camp Fire.
BRYAN TARNOWSKI

“Even though I knew it was a fire, I expected more identifications by fingerprints or dental [records]. But that was just me being naïve,” she says. She quickly realized that putting names to the dead, many burned beyond recognition, would rely heavily on DNA.

“The problem then became how long it takes to do the traditional DNA [analysis],” Gin explains, speaking to a significant and long-standing challenge in the field—and the reason DNA identification has long been something of a last resort following large-scale disasters.

While more conventional identification methods—think fingerprints, dental information, or matching something like a knee replacement to medical records—can be a long, tedious process, they don’t take nearly as long as traditional DNA testing.

Historically, the process of making genetic identifications would often stretch on for months, even years. In fires and other situations that result in badly degraded bone or tissue, it can become even more challenging and time consuming to process DNA, which traditionally involves reading the 3 billion base pairs of the human genome and comparing samples found in the field against samples from a family member. Meanwhile, investigators frequently need equipment from the US Department of Justice or the county crime lab to test the samples, so backlogs often pile up.

A supply kit with swabs, gloves, and other items needed to take a DNA sample in the field.
A demo chip for ANDE’s rapid DNA box.

This creates a wait that can be horrendous for family members. Death certificates, federal assistance, insurance money—“all that hinges on that ID,” Gin says. Not to mention the emotional toll of not knowing if their loved ones are alive or dead.

But over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed. The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace more swiftly than ever before.

The key innovation driving this progress has been rapid DNA analysis, a methodology that focuses on just over two dozen regions of the genome. The 2018 Camp Fire was the first time the technology was used in a large, live disaster setting, and the first time it was used as the primary way to identify victims. The technology—deployed in small high-tech field devices developed by companies like industry leader ANDE, or in a lab with other rapid DNA techniques developed by Thermo Fisher—is increasingly being used by the US military on the battlefield, and by the FBI and local police departments after sexual assaults and in instances where confirming an ID is challenging, like cases of missing or murdered Indigenous people or migrants. Yet arguably the most effective way to use rapid DNA is in incidents of mass death. In the Camp Fire, 22 victims were identified using traditional methods, while rapid DNA analysis helped with 62 of the remaining 63 victims; it has also been used in recent years following hurricanes and floods, and in the war in Ukraine.

“These families are going to have to wait a long period of time to get identification. How do we make this go faster?”

Tiffany Roy, a forensic DNA expert with consulting company ForensicAid, says she’d be concerned about deploying the technology in a crime scene, where quality evidence is limited and can be quickly “exhausted” by well-meaning investigators who are “not trained DNA analysts.” But, on the whole, Roy and other experts see rapid DNA as a major net positive for the field. “It is definitely a game-changer,” adds Sarah Kerrigan, a professor of forensic science at Sam Houston State University and the director of its Institute for Forensic Research, Training, and Innovation.

But back in those early days after the Camp Fire, all Gin knew was that nearly 1,000 people had been listed as missing, and she was tasked with helping to identify the dead. “Oh my goodness,” she remembers thinking. “These families are going to have to wait a long period of time to get identification. How do we make this go faster?”


Ten days

One flier pleading for information about “Uncle Raffy,” as people in the community knew Rafael Sr., was posted on a brick-red stairwell outside Paradise Supermart, a Filipino store and restaurant in Kahului, 25 miles away from the destruction. In it, just below the words “MISSING Lahaina Victim,” the 63-year-old grandfather smiled with closed lips, wearing a blue Hawaiian shirt, his right hand curled in the shaka sign, thumb and pinky pointing out.

Rafael Imperial Sr.
Raven remembers how hard his dad, Rafael, worked. His three jobs took him all over town and earned him the nickname “Mr. Aloha.”
COURTESY OF RAVEN IMPERIAL

“Everybody knew him from restaurant businesses,” Raven says. “He was all over Lahaina, very friendly to everybody.” Raven remembers how hard his dad worked, juggling three jobs: as a draft tech for Anheuser-Busch, setting up services and delivering beer all across town; as a security officer at Allied Universal security services; and as a parking booth attendant at the Sheraton Maui. He connected with so many people that coworkers, friends, and other locals gave him another nickname: “Mr. Aloha.”

Raven also remembers how his dad had always loved karaoke and how he would sing “My Way,” by Frank Sinatra. “That’s the only song that he would sing,” Raven says. “Like, on repeat.”

Since their home had burned down, the Imperials ran their search out of a rental unit in Kihei, which was owned by a local woman one of them knew through her job. The woman had opened her rental to three families in all. It quickly grew crowded with side-by-side beds and piles of donations.

Each day, Evelyn waited for her husband to call.

She managed to catch up with one of their former tenants, who recalled asking Rafael Sr. to leave the house on the day of the fires. But she did not know if he actually did. Evelyn spoke to other neighbors who also remembered seeing Rafael Sr. that day; they told her that they had seen him go back into the house. But they too did not know what happened to him after.

A friend of Raven’s who got into the largely restricted burn zone told him he’d spotted Rafael Sr.’s Toyota Tacoma on the street, not far from their house. He sent a photo. The pickup was burned out, but a passenger-side door was open. The family wondered: Could he have escaped?

Evelyn called the Red Cross. She called the police. Nothing. They waited and hoped.


Back in Paradise in 2018, as Gin worried about the scores of waiting families, she learned there might in fact be a better way to get a positive ID—and a much quicker one. A company called ANDE Rapid DNA had already volunteered its services to the Butte County sheriff and promised that its technology could process DNA and get a match in less than two hours.

“I’ll try anything at this point,” Gin remembers telling the sheriff. “Let’s see this magic box and what it’s going to do.”

In truth, Gin did not think it would work, and certainly not in two hours. When the device arrived, it was “not something huge and fantastical,” she recalls thinking. A little bigger than a microwave, it looked “like an ordinary box that beeps, and you put stuff in, and out comes a result.”

The “stuff,” more specifically, was a cheek or bloodstain swab, or a piece of muscle, or a fragment of bone that had been crushed and demineralized. Instead of reading 3 billion base pairs in this sample, the ANDE machine examined just 27 genome regions characterized by particular repeating sequences. It would be nearly impossible for two unrelated people to have the same repeating sequences in all those regions. But a parent and child, or siblings, would match, meaning you could compare DNA found in human remains with DNA samples taken from potential victims’ family members. Making it even more efficient for a coroner like Gin, the machine could run up to five tests at a time and could be operated by anyone with just a little basic training.

ANDE’s chief scientific officer, Richard Selden, a pediatrician who has a PhD in genetics from Harvard, didn’t come up with the idea to focus on a smaller, more manageable number of base pairs to speed up DNA analysis. But it did become something of an obsession for him after he watched the O.J. Simpson trial in the mid-1990s and began to grasp just how long it took for DNA samples to get processed in crime cases. By this point, the FBI had already set up a system for identifying DNA by looking at just 13 regions of the genome; it would later add seven more. Researchers in other countries had also identified other sets of regions to analyze. Drawing on these various methodologies, Selden homed in on the 27 specific areas of DNA he thought would be most effective to examine, and he launched ANDE in 2004.

But he had to build a device to do the analysis. Selden wanted it to be small, portable, and easily used by anyone in the field. In a conventional lab, he says, “from the moment you take that cheek swab to the moment that you have the answer, there are hundreds of laboratory steps.” Traditionally, a human is holding test tubes and iPads and sorting through or processing paperwork. Selden compares it all to using a “conventional typewriter.” He effectively created the more efficient laptop version of DNA analysis by figuring out how to speed up that same process.

No longer would a human have to “open up this bottle and put [the sample] in a pipette and figure out how much, then move it into a tube here.” It is all automated, and the process is confined to a single device.

gloved hands load a chip cartridge into the ANDE machine
The rapid DNA analysis boxes from ANDE can be used in the field by anyone with just a bit of training.
ANDE

Once a sample is placed in the box, the DNA binds to a filter in water and the rest of the sample is washed away. Air pressure propels the purified DNA to a reconstitution chamber and then flattens it into a sheet less than a millimeter thick, which is subjected to about 6,000 volts of electricity. It’s “kind of an obstacle course for the DNA,” Selden explains.

The machine then interprets the donor’s genome and provides an allele table with a graph showing the peaks for each region and their sizes. This data is then compared with samples from potential relatives, and the machine reports when it has a match.
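To make that comparison step concrete, here is a minimal sketch, in Python, of the shared-allele screening that the kinship check relies on. It is illustrative only: the locus names are standard forensic markers, but the profiles are invented, and real systems score potential matches with statistical kinship software rather than a simple yes-or-no test.

```python
# Illustrative sketch only: rapid DNA instruments type ~27 loci and use
# statistical kinship analysis, not this simplified shared-allele check.

remains_profile = {          # hypothetical profile from recovered remains
    "D3S1358": (15, 17),
    "vWA":     (16, 19),
    "FGA":     (21, 24),
    "D8S1179": (12, 13),
}

relative_profile = {         # hypothetical cheek-swab profile from a child
    "D3S1358": (15, 16),
    "vWA":     (17, 19),
    "FGA":     (22, 24),
    "D8S1179": (13, 14),
}

def shares_allele_at_every_locus(a, b):
    """A biological parent and child share at least one allele at every
    autosomal STR locus (barring rare mutation), so a single locus with
    no shared allele is enough to rule out a parent-child relationship."""
    for locus in a:
        if locus in b and not set(a[locus]) & set(b[locus]):
            return False
    return True

if shares_allele_at_every_locus(remains_profile, relative_profile):
    print("Consistent with parent and child; run a formal kinship calculation.")
else:
    print("Parent-child relationship excluded at one or more loci.")
```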

Rapid DNA analysis as a technology first received approval for use by the US military in 2014, and by the FBI two years later. Then the Rapid DNA Act of 2017 enabled all US law enforcement agencies to use the technology on site and in real time as an alternative to sending samples off to labs and waiting for results.

But by the time of the Camp Fire the following year, most coroners and local police officers still had no familiarity or experience with it. Neither did Gin. So she decided to put the “magic box” through a test: she gave Selden, who had arrived at the scene to help with the technology, a DNA sample from a victim whose identity she’d already confirmed via fingerprint. The box took about 90 minutes to come back with a result. And to Gin’s surprise, it was the same identification she had already made. Just to make sure, she ran several more samples through the box, also from victims she had already identified. Again, results were returned swiftly, and they confirmed hers.

“I was a believer,” she says.

The next year, Gin helped investigators use rapid DNA technology in the 2019 Conception disaster, when a dive boat caught fire off the Channel Islands near Santa Barbara, California. “We ID’d 34 victims in 10 days,” Gin says. “Completely done.” Gin now works independently to assist other investigators in mass-fatality events and helps them learn to use the ANDE system.

Its speed made the box a groundbreaking innovation. Death investigations, Gin learned long ago, are not as much about the dead as about giving peace of mind, justice, and closure to the living.


Fourteen days

Many of the people who were initially on the Lahaina missing persons list turned up in the days following the fire. Tearful reunions ensued.

Two weeks after the fire, the Imperials hoped they’d have the same outcome as they loaded into a truck to check out some exciting news: someone had reported seeing Rafael Sr. at a local church. He’d been eating and had burns on his hands and looked disoriented. The caller said the sighting had occurred three days after the fire. Could he still be in the vicinity?

When the family arrived, they couldn’t confirm the lead.

“We were getting a lot of calls,” Raven says. “There were a lot of rumors saying that they found him.”

None of them panned out. They kept looking.


The scenes following large-scale destructive events like the fires in Paradise and Lahaina can be sprawling and dangerous, with victims sometimes dispersed across a large swath of land if many people died trying to escape. Teams need to meticulously and tediously search mountains of mixed, melted, or burned debris just to find bits of human remains that might otherwise be mistaken for a piece of plastic or drywall. Compounding the challenge is the commingling of remains—from people who died huddled together, or in the same location, or alongside pets or other animals.

This is when the work of forensic anthropologists is essential: they have the skills to differentiate between human and animal bones and to find the critical samples that are needed by DNA specialists, fire and arson investigators, forensic pathologists and dentists, and other experts. Rapid DNA analysis “works best in tandem with forensic anthropologists, particularly in wildfires,” Gin explains.

“The first step is determining, is it a bone?” says Robert Mann, a forensic anthropologist at the University of Hawaii John A. Burns School of Medicine on Oahu. Then, is it a human bone? And if so, which one?

Robert Mann in a lab coat with a human skeleton on the table in front of him
Forensic anthropologist Robert Mann has spent his career identifying human remains.
AP PHOTO/LUCY PEMONI

Mann has served on teams that have helped identify the remains of victims after the terrorist attacks of September 11, 2001, and the 2004 Indian Ocean tsunami, among other mass-casualty events. He remembers how in one investigation he received an object believed to be a human bone; it turned out to be a plastic replica. In another case, he was looking through the wreckage of a car accident and spotted what appeared to be a human rib fragment. Upon closer examination, he identified it as a piece of rubber weather stripping from the rear window. “We examine every bone and tooth, no matter how small, fragmented, or burned it might be,” he says. “It’s a time-consuming but critical process because we can’t afford to make a mistake or overlook anything that might help us establish the identity of a person.”

For Mann, the Maui disaster felt particularly immediate. It was right near his home. He was deployed to Lahaina about a week after the fire, as one of more than a dozen forensic anthropologists on scene from universities in places including Oregon, California, and Hawaii.

While some anthropologists searched the recovery zone—looking through what was left of homes, cars, buildings, and streets, and preserving fragmented and burned bone, body parts, and teeth—Mann was stationed in the morgue, where samples were sent for processing.

It used to be much harder to find samples that scientists believed could provide DNA for analysis, but that’s also changed recently as researchers have learned more about what kind of DNA can survive disasters. Two kinds are used in forensic identity testing: nuclear DNA (found within the nuclei of eukaryotic cells) and mitochondrial DNA (found in the mitochondria, organelles located outside the nucleus). Both, it turns out, have survived plane crashes, wars, floods, volcanic eruptions, and fires.

Theories have also been evolving over the past few decades about how to preserve and recover DNA specifically after intense heat exposure. One 2018 study found that a majority of the samples tested actually survived high heat. Researchers are also learning more about how bone characteristics change depending on the degree of burning. “Different temperatures and how long a body or bone has been exposed to high temperatures affect the likelihood that it will or will not yield usable DNA,” Mann says.

Typically, forensic anthropologists help select which bone or tooth to use for DNA testing, says Mann. Until recently, he explains, scientists believed “you cannot get usable DNA out of burned bone.” But thanks to these new developments, researchers are realizing that with some bone that has been charred, “they’re able to get usable, good DNA out of it,” Mann says. “And that’s new.” Indeed, Selden explains that “in a typical bad fire, what I would expect is 80% to 90% of the samples are going to have enough intact DNA” to get a result from rapid analysis. The rest, he says, may require deeper sequencing.

The aftermath of large-scale destructive events like the fire in Lahaina can be sprawling and dangerous. Teams need to meticulously search through mountains of mixed, melted, or burned debris to find bits of human remains.
GLENN FAWCETT VIA ALAMY

Anthropologists can often tell “simply by looking” if a sample will be good enough to help create an ID. If it’s been burned and blackened, “it might be a good candidate for DNA testing,” Mann says. But if it’s calcined (white and “china-like”), he says, the DNA has probably been destroyed.

On Maui, Mann adds, rapid DNA analysis made the entire process more efficient, with tests coming back in just two hours. “That means while you’re doing the examination of this individual right here on the table, you may be able to get results back on who this person is,” he says. From inside the lab, he watched the science unfold as the number of missing on Maui quickly began to go down.

Within three days, 42 people’s remains were recovered inside Maui homes or buildings and another 39 outside, along with 15 inside vehicles and one in the water. The first confirmed identification of a victim on the island occurred four days after the fire—this one via fingerprint. The ANDE rapid DNA team arrived two days after the fire and deployed four boxes to analyze multiple samples of DNA simultaneously. The first rapid DNA identification happened within that first week.


Sixteen days

More than two weeks after the fire, the list of missing and unaccounted-for individuals was dwindling, but it still had 388 people on it. Rafael Sr. was one of them.

Raven and Raphael Jr. raced to another location: Cupies café in Kahului, more than 20 miles from Lahaina. Someone had reported seeing their father there.

Rafael’s family hung posters around the island, desperately hoping for reliable information. (Phone number redacted by MIT Technology Review.)
ERIKA HAYASAKI

The tip was another false lead.

As family and friends continued to search, they stopped by support hubs that had sprouted up around the island, receiving information about Red Cross and FEMA assistance or donation programs as volunteers distributed meals and clothes. These hubs also sometimes offered DNA testing.

Raven still had a “50-50” feeling that his dad might be out there somewhere. But he was beginning to lose some of that hope.


Gin was stationed at one of the support hubs, which offered food, shelter, clothes, and support. “You could also go in and give biological samples,” she says. “We actually moved one of the rapid DNA instruments into the family assistance center, and we were running the family samples there.” Eliminating the need to transport samples from a site to a testing center further cut down any lag time.

Selden had once believed that the biggest hurdle for his technology would be building the actual device, which took about eight years to design and another four years to perfect. But at least in Lahaina, it was something else: persuading distraught and traumatized family members to offer samples for the test.

Nationally, there are serious privacy concerns when it comes to rapid DNA technology. Organizations like the ACLU warn that as police departments and governments begin deploying it more often, there must be more oversight, monitoring, and training in place to ensure that it is always used responsibly, even if that adds some time and expense. But the space is still largely unregulated, and the ACLU fears it could give rise to rogue DNA databases “with far fewer quality, privacy, and security controls than federal databases.”

Family support centers popped up around Maui to offer clothing, food, and other assistance, and sometimes to take DNA samples to help find missing family members.

In a place like Hawaii, these fears are even more palpable. The islands have a long history of US colonialism, military dominance, and exploitation of the Native population and of the large immigrant working-class population employed in the tourism industry.

Native Hawaiians in particular have a fraught relationship with DNA testing. Under a US law signed in 1921, thousands have a right to live on 200,000 acres of designated trust land, almost for free. It was a kind of reparations measure put in place to assist Native Hawaiians whose land had been stolen. Back in 1893, a small group of American sugar plantation owners and descendants of Christian missionaries, backed by US Marines, held Hawaii’s Queen Lili‘uokalani in her palace at gunpoint and forced her to sign over 1.8 million acres to the US, which ultimately seized the islands in 1898.

Queen Liliuokalani in a formal seated portrait
Hawaii’s Queen Lili‘uokalani was forced to sign over 1.8 million acres to the US.
PUBLIC DOMAIN VIA WIKIMEDIA COMMONS

To lay their claim to the designated land and property, individuals first must prove via DNA tests how much Hawaiian blood they have. But many residents who have submitted their DNA and qualified for the land have died on waiting lists before ever receiving it. Today, Native Hawaiians are struggling to stay on the islands amid skyrocketing housing prices, while others have been forced to move away.

Meanwhile, after the fires, Filipino families faced particularly stark barriers to getting information about financial support, government assistance, housing, and DNA testing. Filipinos make up about 25% of Hawaii’s population and 40% of its workers in the tourism industry. They also make up 46% of undocumented residents in Hawaii—more than any other group. Some encountered language barriers, since they primarily spoke Tagalog or Ilocano. Some worried that people would try to take over their burned land and develop it for themselves. For many, being asked for DNA samples only added to the confusion and suspicion.

Selden says he hears the overall concerns about DNA testing: “If you ask people about DNA in general, they think of Brave New World and [fear] the information is going to be used to somehow harm or control people.” But just like regular DNA analysis, he explains, rapid DNA analysis “has no information on the person’s appearance, their ethnicity, their health, their behavior either in the past, present, or future.” He describes it as a more accurate fingerprint.

Gin tried to help the Lahaina family members understand that their DNA “isn’t going to go anywhere else.” She told them their sample would ultimately be destroyed, something programmed to occur inside ANDE’s machine. (Selden says the boxes were designed to do this for privacy purposes.) But sometimes, Gin realizes, these promises are not enough.

“You still have a large population of people that, in my experience, don’t want to give up their DNA to a government entity,” she says. “They just don’t.”

Kim Gin
Gin understands that family members are often nervous to give their DNA samples. She promises the process of rapid DNA analysis respects their privacy, but she knows sometimes promises aren’t enough.
BRYAN TARNOWSKI

The immediate aftermath of a disaster, when people are suffering from shock, PTSD, and displacement, is the worst possible moment to try to educate them about DNA tests and explain the technology and privacy policies. “A lot of them don’t have anything,” Gin says. “They’re just wondering where they’re going to lay their heads down, and how they’re going to get food and shelter and transportation.”

Unfortunately, Lahaina’s survivors won’t be the last people in this position. Particularly given the world’s current climate trajectory, the risk of deadly events in just about every neighborhood and community will rise. And figuring out who survived and who didn’t will be increasingly difficult. Mann recalls his work on the Indian Ocean tsunami, when over 227,000 people died. “The bodies would float off, and they ended up 100 miles away,” he says. Investigators were at times left with remains that had been consumed by sea creatures or degraded by water and weather. He remembers how they struggled to determine: “Who is the person?”

Mann has spent his own career identifying people including “missing soldiers, sailors, airmen, Marines, from all past wars,” as well as people who have died recently. That closure is meaningful for family members, some of them decades, or even lifetimes, removed.

In the end, distrust and conspiracy theories did in fact hinder DNA-identification efforts on Maui, according to a police department report.


Thirty-three days

By the time Raven went to a family resource center to submit a swab, some four weeks had gone by. He remembers the quick rub inside his cheek.

Some of his family had already offered their own samples before Raven provided his. For them, waiting wasn’t an issue of mistrusting the testing as much as experiencing confusion and chaos in the weeks after the fire. They believed Uncle Raffy was still alive, and they still held hope of finding him. Offering DNA was a final step in their search.

“I did it for my mom,” Raven says. She still wanted to believe he was alive, but Raven says: “I just had this feeling.” His father, he told himself, must be gone.

Just a day after he gave his sample—on September 11, more than a month after the fire—he was at the temporary house in Kihei when he got the call: “It was,” Raven says, “an automatic match.”

Raven gave a cheek swab about a month after the disappearance of his father. It didn’t take long for him to get a phone call: “It was an automatic match.”
WINNI WINTERMEYER

The investigators let the family know the address where the remains of Rafael Sr. had been found, several blocks away from their home. They put it into Google Maps and realized it was where some family friends lived. The mother and son of that family had been listed as missing too. Rafael Sr., it seemed, had been with or near them in the end.

By October, investigators in Lahaina had obtained and analyzed 215 DNA samples from family members of the missing. By December, DNA analysis had confirmed the identities of 63 of the most recent count of 101 victims. Seventeen more had been identified by fingerprint, 14 via dental records, and two through medical devices, along with three who died in the hospital. While some of the most damaged remains would still be undergoing DNA testing months after the fires, it’s a drastic improvement over the identification processes for 9/11 victims, for instance—today, over 20 years later, some are still being identified by DNA.

Rafael Imperial Sr.
Raven remembers how much his father loved karaoke. His favorite song was “My Way,” by Frank Sinatra. 
COURTESY OF RAVEN IMPERIAL

Rafael Sr. was born on October 22, 1959, in Naga City, the Philippines. The family held his funeral on his birthday last year. His relatives flew in from Michigan, the Philippines, and California.

Raven says in those weeks of waiting—after all the false tips, the searches, the prayers, the glimmers of hope—deep down the family had already known he was gone. But for Evelyn, Raphael Jr., and the rest of their family, DNA tests were necessary—and, ultimately, a relief, Raven says. “They just needed that closure.”

Erika Hayasaki is an independent journalist based in Southern California.

Is robotics about to have its own ChatGPT moment?

Silent. Rigid. Clumsy.

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home. 

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. Since then, he’s learned how to communicate by moving his eyes over a letter board, but he is highly reliant on caregivers and his wife, Jane. 

Henry got a glimmer of a different kind of life when he saw Charlie Kemp on CNN in 2010. Kemp, a robotics professor at Georgia Tech, was on TV talking about PR2, a robot developed by the company Willow Garage. PR2 was a massive two-armed machine on wheels that looked like a crude metal butler. Kemp was demonstrating how the robot worked, and talking about his research on how health-care robots could help people. He showed how the PR2 robot could hand some medicine to the television host.    

“All of a sudden, Henry turns to me and says, ‘Why can’t that robot be an extension of my body?’ And I said, ‘Why not?’” Jane says. 

There was a solid reason why not. While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes. 

That seems to finally be changing, in large part thanks to artificial intelligence. For decades, roboticists have more or less focused on controlling robots’ “bodies”—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.

Progress won’t happen overnight, though, as the Evanses know far too well from their many years of using various robot prototypes. 

PR2 was the first robot they brought in, and it opened entirely new skills for Henry. It would hold a beard shaver and Henry would move his face against it, allowing him to shave and scratch an itch by himself for the first time in a decade. But at 450 pounds (200 kilograms) or so and $400,000, the robot was difficult to have around. “It could easily take out a wall in your house,” Jane says. “I wasn’t a big fan.”

More recently, the Evanses have been testing out a smaller robot called Stretch, which Kemp developed through his startup Hello Robot. The first iteration launched during the pandemic with a much more reasonable price tag of around $18,000. 

Stretch weighs about 50 pounds. It has a small mobile base, a stick with a camera dangling off it, and an adjustable arm featuring a gripper with suction cups at the ends. It can be controlled with a console controller. Henry controls Stretch using a laptop, with a tool that tracks his head movements to move a cursor around. He is able to move his thumb and index finger enough to click a computer mouse. Last summer, Stretch was with the couple for more than a month, and Henry says it gave him a whole new level of autonomy. “It was practical, and I could see using it every day,” he says.

a robot arm holds a brush over the head of Henry Evans which rests on a pillow
Henry Evans used the Stretch robot to brush his hair, eat, and even
play with his granddaughter.
PETER ADAMS

Using his laptop, he could get the robot to brush his hair and have it hold fruit kebabs for him to snack on. It also opened up Henry’s relationship with his granddaughter Teddie. Before, they barely interacted. “She didn’t hug him at all goodbye. Nothing like that,” Jane says. But “Papa Wheelie” and Teddie used Stretch to play, engaging in relay races, bowling, and magnetic fishing. 

Stretch doesn’t have much in the way of smarts: it comes with some preinstalled software, such as the web interface that Henry uses to control it, and other capabilities such as AI-enabled navigation. The main benefit of Stretch is that people can plug in their own AI models and use them to do experiments. But it offers a glimpse of what a world with useful home robots could look like. Robots that can do many of the things humans do in the home—tasks such as folding laundry, cooking meals, and cleaning—have been a dream of robotics research since the inception of the field in the 1950s. For a long time, it’s been just that: “Robotics is full of dreamers,” says Kemp.

But the field is at an inflection point, says Ken Goldberg, a robotics professor at the University of California, Berkeley. Previous efforts to build a useful home robot, he says, have emphatically failed to meet the expectations set by popular culture—think the robotic maid from The Jetsons. Now things are very different. Thanks to cheap hardware like Stretch, along with efforts to collect and share data and advances in generative AI, robots are getting more competent and helpful faster than ever before. “We’re at a point where we’re very close to getting capability that is really going to be useful,” Goldberg says. 

Folding laundry, cooking shrimp, wiping surfaces, unloading shopping baskets—today’s AI-powered robots are learning to do tasks that for their predecessors would have been extremely difficult. 

Missing pieces

There’s a well-known observation among roboticists: What is hard for humans is easy for machines, and what is easy for humans is hard for machines. Called Moravec’s paradox, it was first articulated in the 1980s by Hans Moravec, then a roboticist at the Robotics Institute at Carnegie Mellon University. A robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter.

There are three reasons for this, says Goldberg. First, robots lack precise control and coordination. Second, their understanding of the surrounding world is limited because they are reliant on cameras and sensors to perceive it. Third, they lack an innate sense of practical physics. 

“Pick up a hammer, and it will probably fall out of your gripper, unless you grab it near the heavy part. But you don’t know that if you just look at it, unless you know how hammers work,” Goldberg says. 

On top of these basic considerations, there are many other technical things that need to be just right, from motors to cameras to Wi-Fi connections, and hardware can be prohibitively expensive. 

Mechanically, we’ve been able to do fairly complex things for a while. In a video from 1957, two large robotic arms are dexterous enough to pinch a cigarette, place it in the mouth of a woman at a typewriter, and reapply her lipstick. But the intelligence and the spatial awareness of that robot came from the person who was operating it. 

In a video from 1957, a man operates two large robotic arms and uses the machine to apply a woman’s lipstick. Robots
have come a long way since.
“LIGHTER SIDE OF THE NEWS –ATOMIC ROBOT A HANDY GUY” (1957) VIA YOUTUBE

“The missing piece is: How do we get software to do [these things] automatically?” says Deepak Pathak, an assistant professor of computer science at Carnegie Mellon.  

Researchers training robots have traditionally approached this problem by planning everything the robot does in excruciating detail. Robotics giant Boston Dynamics used this approach when it developed its boogying and parkouring humanoid robot Atlas. Cameras and computer vision are used to identify objects and scenes. Researchers then use that data to make models that can be used to predict with extreme precision what will happen if a robot moves a certain way. Using these models, roboticists plan the motions of their machines by writing a very specific list of actions for them to take. The engineers then test these motions in the laboratory many times and tweak them to perfection. 

This approach has its limits. Robots trained like this are strictly choreographed to work in one specific setting. Take them out of the laboratory and into an unfamiliar location, and they are likely to topple over. 

Compared with other fields, such as computer vision, robotics has been in the dark ages, Pathak says. But that might not be the case for much longer, because the field is seeing a big shake-up. Thanks to the AI boom, he says, the focus is now shifting from feats of physical dexterity to building “general-purpose robot brains” in the form of neural networks. Much as the human brain is adaptable and can control different aspects of the human body, these networks can be adapted to work in different robots and different scenarios. Early signs of this work show promising results. 

Robots, meet AI 

For a long time, robotics research was an unforgiving field, plagued by slow progress. At the Robotics Institute at Carnegie Mellon, where Pathak works, he says, “there used to be a saying that if you touch a robot, you add one year to your PhD.” Now, he says, students get exposure to many robots and see results in a matter of weeks.

What separates this new crop of robots is their software. Instead of the traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly. At the same time, new, cheaper hardware, such as off-the-shelf components and robots like Stretch, is making this sort of experimentation more accessible. 

Broadly speaking, there are two popular ways researchers are using AI to train robots. Pathak has been using reinforcement learning, an AI technique that allows systems to improve through trial and error, to get robots to adapt their movements in new environments. This is a technique that Boston Dynamics has also started using in its robot “dogs” called Spot.

Deepak Pathak’s team at Carnegie Mellon has used an AI technique called reinforcement learning to create a robotic dog that can do extreme parkour with minimal pre-programming.

In 2022, Pathak’s team used this method to create four-legged robot “dogs” capable of scrambling up steps and navigating tricky terrain. The robots were first trained to move around in a general way in a simulator. Then they were set loose in the real world, with a single built-in camera and computer vision software to guide them. Other similar robots rely on tightly prescribed internal maps of the world and cannot navigate beyond them.

Pathak says the team’s approach was inspired by human navigation. Humans receive information about the surrounding world from their eyes, and this helps them instinctively place one foot in front of the other to get around in an appropriate way. Humans don’t typically look down at the ground under their feet when they walk, but a few steps ahead, at a spot where they want to go. Pathak’s team trained its robots to take a similar approach to walking: each one used the camera to look ahead. The robot was then able to memorize what was in front of it for long enough to guide its leg placement. The robots learned about the world in real time, without internal maps, and adjusted their behavior accordingly. At the time, experts told MIT Technology Review the technique was a “breakthrough in robot learning and autonomy” and could allow researchers to build legged robots capable of being deployed in the wild.   

Pathak’s robot dogs have since leveled up. The team’s latest algorithm allows a quadruped robot to do extreme parkour. The robot was again trained to move around in a general way in a simulation. But using reinforcement learning, it was then able to teach itself new skills on the go, such as how to jump long distances, walk on its front legs, and clamber up tall boxes twice its height. These behaviors were not something the researchers programmed. Instead, the robot learned through trial and error and visual input from its front camera. “I didn’t believe it was possible three years ago,” Pathak says. 
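At its core, reinforcement learning is a loop of trial, reward, and adjustment. The sketch below is a deliberately tiny illustration of that loop: a REINFORCE-style update on a toy one-step task, written with nothing but NumPy. It is not the team’s locomotion algorithm, which trains far larger neural-network policies in a physics simulator; the candidate actions, target, and learning rate here are invented for illustration.

```python
import numpy as np

# Toy trial-and-error loop in the spirit of reinforcement learning.
# The "policy" chooses one of three candidate step lengths; the reward
# is higher the closer the chosen step lands to a target the policy
# never sees directly. All numbers are invented for illustration.
rng = np.random.default_rng(0)
actions = np.array([0.2, 0.5, 0.8])   # candidate step lengths, in meters
target = 0.5
logits = np.zeros(3)                  # learnable policy parameters
baseline = 0.0                        # running average of recent rewards
learning_rate = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(2000):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)                  # trial: sample an action
    reward = -abs(actions[a] - target)          # error signal from the "world"
    baseline = 0.95 * baseline + 0.05 * reward  # what rewards usually look like
    advantage = reward - baseline               # better or worse than usual?
    # REINFORCE update: raise the log-probability of actions that beat the
    # running average, lower it for actions that fall short.
    grad_log_prob = -probs
    grad_log_prob[a] += 1.0
    logits += learning_rate * advantage * grad_log_prob

print("learned action probabilities:", np.round(softmax(logits), 3))
# Most of the probability mass ends up on the 0.5 m step.
```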

In the other popular technique, called imitation learning, models learn to perform tasks by, for example, imitating the actions of a human teleoperating a robot or using a VR headset to collect data on a robot. It’s a technique that has gone in and out of fashion over decades but has recently become more popular with robots that do manipulation tasks, says Russ Tedrake, vice president of robotics research at the Toyota Research Institute and an MIT professor.
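In its simplest form, plain behavior cloning, this amounts to supervised learning on logged demonstrations: pair what the robot observed with what the human operator did, then fit a model that predicts the action from the observation. The sketch below shows that idea with a linear model and synthetic stand-in data; real systems use neural networks, camera images, and many more demonstrations.

```python
import numpy as np

# Minimal behavior-cloning sketch: fit a policy that maps observations to
# the actions a human demonstrator took. The dimensions and the synthetic
# "teleoperation logs" below are invented for illustration.
rng = np.random.default_rng(0)
obs_dim, act_dim, n_demos = 8, 2, 500

# Stand-in for demonstration data: (observation, operator action) pairs.
true_mapping = rng.normal(size=(obs_dim, act_dim))
observations = rng.normal(size=(n_demos, obs_dim))
actions = observations @ true_mapping + 0.05 * rng.normal(size=(n_demos, act_dim))

# Supervised learning step: least-squares fit of actions from observations.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def policy(obs):
    """Predict what the demonstrator would have done for a new observation."""
    return obs @ W

new_obs = rng.normal(size=obs_dim)
print("predicted action:", np.round(policy(new_obs), 3))
```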

By pairing this technique with generative AI, researchers at the Toyota Research Institute, Columbia University, and MIT have been able to quickly teach robots to do many new tasks. They believe they have found a way to extend the technology propelling generative AI from the realm of text, images, and videos into the domain of robot movements. 

The idea is to start with a human, who manually controls the robot to demonstrate behaviors such as whisking eggs or picking up plates. Using a technique called diffusion policy, the robot is then able to use the data fed into it to learn skills. The researchers have taught robots more than 200 skills, such as peeling vegetables and pouring liquids, and say they are working toward teaching 1,000 skills by the end of the year. 

Many others have taken advantage of generative AI as well. Covariant, a robotics startup that spun off from OpenAI’s now-shuttered robotics research unit, has built a multimodal model called RFM-1. It can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks. 

The Toyota Research Institute team hopes this will one day lead to “large behavior models,” which are analogous to large language models, says Tedrake. “A lot of people think behavior cloning is going to get us to a ChatGPT moment for robotics,” he says. 

In a similar demonstration, earlier this year a team at Stanford managed to use a relatively cheap off-the-shelf robot costing $32,000 to do complex manipulation tasks such as cooking shrimp and cleaning stains. It learned those new skills quickly with AI. 

Called Mobile ALOHA (a loose acronym for “a low-cost open-source hardware teleoperation system”), the robot learned to cook shrimp with the help of just 20 human demonstrations and data from other tasks, such as tearing off a paper towel or piece of tape. The Stanford researchers found that AI can help robots acquire transferable skills: training on one task can improve a robot’s performance on others.

While the current generation of generative AI works with images and language, researchers at the Toyota Research Institute, Columbia University, and MIT believe the approach can extend to the domain of robot motion.

This is all laying the groundwork for robots that can be useful in homes. Human needs change over time, and teaching robots to reliably do a wide range of tasks is important, as it will help them adapt to us. That is also crucial to commercialization—first-generation home robots will come with a hefty price tag, and the robots need to have enough useful skills for regular consumers to want to invest in them. 

For a long time, a lot of the robotics community was very skeptical of these kinds of approaches, says Chelsea Finn, an assistant professor of computer science and electrical engineering at Stanford University and an advisor for the Mobile ALOHA project. Finn says that nearly a decade ago, learning-based approaches were rare at robotics conferences and disparaged in the robotics community. “The [natural-language-processing] boom has been convincing more of the community that this approach is really, really powerful,” she says. 

There is one catch, however. In order to imitate new behaviors, the AI models need plenty of data. 

More is more

Unlike chatbots, which can be trained by using billions of data points hoovered from the internet, robots need data specifically created for robots. They need physical demonstrations of how washing machines and fridges are opened, dishes picked up, or laundry folded, says Lerrel Pinto, an assistant professor of computer science at New York University. Right now that data is very scarce, and it takes a long time for humans to collect.

top frame shows a person recording themself opening a kitchen drawer with a grabber, and the bottom shows a robot attempting the same action

“ON BRINGING ROBOTS HOME,” NUR MUHAMMAD (MAHI) SHAFIULLAH, ET AL.

Some researchers are trying to use existing videos of humans doing things to train robots, hoping the machines will be able to copy the actions without the need for physical demonstrations. 

Pinto’s lab has also developed a neat, cheap data collection approach that connects robotic movements to desired actions. Researchers took a reacher-grabber stick, similar to ones used to pick up trash, and attached an iPhone to it. Human volunteers can use this system to film themselves doing household chores, mimicking the robot’s view of the end of its robotic arm. Using this stand-in for Stretch’s robotic arm and an open-source system called DOBB-E, Pinto’s team was able to get a Stretch robot to learn tasks such as pouring from a cup and opening shower curtains with just 20 minutes of iPhone data.  

But for more complex tasks, robots would need even more data and more demonstrations.  

The requisite scale would be hard to reach with DOBB-E, says Pinto, because you’d basically need to persuade every human on Earth to buy the reacher-grabber system, collect data, and upload it to the internet.

A new initiative kick-started by Google DeepMind, called the Open X-Embodiment Collaboration, aims to change that. Last year, the company partnered with 34 research labs and about 150 researchers to collect data from 22 different robots, including Hello Robot’s Stretch. The resulting data set, which was published in October 2023, consists of robots demonstrating 527 skills, such as picking, pushing, and moving.  

Sergey Levine, a computer scientist at UC Berkeley who participated in the project, says the goal was to create a “robot internet” by collecting data from labs around the world. This would give researchers access to bigger, more scalable, and more diverse data sets. The deep-learning revolution that led to the generative AI of today started in 2012 with the rise of ImageNet, a vast online data set of images. The Open X-Embodiment Collaboration is an attempt by the robotics community to do something similar for robot data. 

Early signs show that more data is leading to smarter robots. The researchers built two versions of a model for robots, called RT-X, that could be either run locally on individual labs’ computers or accessed via the web. The larger, web-accessible model was pretrained with internet data to develop a “visual common sense,” or a baseline understanding of the world, from the large language and image models. 

When the researchers ran the RT-X model on many different robots, they discovered that the robots were able to learn skills 50% more successfully than in the systems each individual lab was developing.

“I don’t think anybody saw that coming,” says Vincent Vanhoucke, Google DeepMind’s head of robotics. “Suddenly there is a path to basically leveraging all these other sources of data to bring about very intelligent behaviors in robotics.”

Many roboticists think that large vision-language models, which are able to analyze image and language data, might offer robots important hints as to how the surrounding world works, Vanhoucke says. They offer semantic clues about the world and could help robots with reasoning, deducing things, and learning by interpreting images. To test this, researchers took a robot that had been trained on the larger model and asked it to point to a picture of Taylor Swift. The researchers had not shown the robot pictures of Swift, but it was still able to identify the pop star because it had a web-scale understanding of who she was even without photos of her in its data set, says Vanhoucke.

RT-2, a recent model for robotic control, was trained on online text
and images as well as interactions with the real world.
KELSEY MCCLELLAN

Vanhoucke says Google DeepMind is increasingly using techniques similar to those it would use for machine translation to translate from English to robotics. Last summer, Google introduced a vision-language-action model called RT-2. This model gets its general understanding of the world from online text and images it has been trained on, as well as its own interactions in the real world. It translates that data into robotic actions. Each robot has a slightly different way of translating English into action, he adds.

“We increasingly feel like a robot is essentially a chatbot that speaks robotese,” Vanhoucke says. 

Baby steps

Despite the fast pace of development, robots still face many challenges before they can be released into the real world. They are still way too clumsy for regular consumers to justify spending tens of thousands of dollars on them. Robots also still lack the sort of common sense that would allow them to multitask. And they need to move from just picking things up and placing them somewhere to putting things together, says Goldberg—for example, putting a deck of cards or a board game back in its box and then into the games cupboard. 

But to judge from the early results of integrating AI into robots, roboticists are not wasting their time, says Pinto. 

“I feel fairly confident that we will see some semblance of a general-purpose home robot. Now, will it be accessible to the general public? I don’t think so,” he says. “But in terms of raw intelligence, we are already seeing signs right now.” 

Building the next generation of robots might not just assist humans in their everyday chores or help people like Henry Evans live a more independent life. For researchers like Pinto, there is an even bigger goal in sight.

Home robotics offers one of the best benchmarks for human-level machine intelligence, he says. The fact that a human can operate intelligently in the home environment, he adds, means we know this is a level of intelligence that can be reached. 

“It’s something which we can potentially solve. We just don’t know how to solve it,” he says. 

Evans in the foreground with computer screen.  A table with playing cards separates him from two other people in the room
Thanks to Stretch, Henry Evans was able to hold his own playing cards
for the first time in two decades.
VY NGUYEN

For Henry and Jane Evans, a big win would be to get a robot that simply works reliably. The Stretch robot that the Evanses experimented with is still too buggy to use without researchers present to troubleshoot, and their home doesn’t always have the dependable Wi-Fi connectivity Henry needs in order to communicate with Stretch using a laptop.

Even so, Henry says, one of the greatest benefits of his experiment with robots has been independence: “All I do is lay in bed, and now I can do things for myself that involve manipulating my physical environment.”

Thanks to Stretch, for the first time in two decades, Henry was able to hold his own playing cards during a match. 

“I kicked everyone’s butt several times,” he says. 

“Okay, let’s not talk too big here,” Jane says, and laughs.

How one mine could unlock billions in EV subsidies

A collection of brown pipes emerges at odd angles from the mud and overgrown grasses on a pine farm north of the tiny town of Tamarack, Minnesota.

Beneath these capped drill holes, Talon Metals has uncovered one of America’s densest nickel deposits—and now it wants to begin tunneling deep into the rock to extract hundreds of thousands of metric tons of mineral-rich ore a year.

If regulators approve the mine, it could mark the starting point in what this mining exploration company claims would become the country’s first complete domestic nickel supply chain, running from the bedrock beneath the Minnesota earth to the batteries in electric vehicles across the nation.


This is the second story in a two-part series exploring the hopes and fears surrounding a single mining proposal in a tiny Minnesota town. You can read the first part here.


The US government is poised to provide generous support at every step, distributing millions to billions of dollars in subsidies for those refining the metal, manufacturing the batteries, and buying the cars and trucks they power.

The products generated with the raw nickel that would flow from this one mining project could theoretically net more than $26 billion in subsidies, just through federal tax credits created by the Inflation Reduction Act (IRA). That’s according to an original analysis by Bentley Allan, an associate professor of political science at Johns Hopkins University and co-director of the Net Zero Industrial Policy Lab, produced in coordination with MIT Technology Review.

One of the largest beneficiaries would be battery manufacturers that use Talon’s nickel, which could secure more than $8 billion in tax credits. About half of that could go to the EV giant Tesla, which has already agreed to purchase tens of thousands of metric tons of the metal from this mine. 

But the biggest winner, at least collectively, would be American consumers who buy EVs powered by those batteries. All told, they could enjoy nearly $18 billion in savings. 

While it’s been widely reported that the IRA could unleash at least hundreds of billions of federal dollars, MIT Technology Review wanted to provide a clearer sense of the law’s on-the-ground impact by zeroing in on a single project and examining how these rich subsidies could be unlocked at each point along the supply chain. (Read my related story on Talon’s proposal and the community reaction to it here.) 

We consulted with Allan to figure out just how much money is potentially in play, where it’s likely to go, and what it may mean for emerging industries and the broader economy. 

These calculations are all high-end estimates meant to assess the full potential of the act, and they assume that every company and customer qualifies for every tax credit available at each point along the supply chain. In the end, the government almost certainly won’t hand out the full amounts that Allan calculated, given the varied and complex restrictions in the IRA and other factors.

In addition, Talon itself may not obtain any subsidies directly through the law, according to recent but not-yet-final IRS interpretations. But thanks to rich EV incentives that will stimulate demand for domestic critical minerals, the company still stands to benefit indirectly from the IRA.


How $26 billion in tax credits could break down across a new US nickel supply chain


The sheer scale of the numbers offers a glimpse into how and why the IRA, signed into law in August 2022, has already begun to drive projects, reconfigure sourcing arrangements, and accelerate the shift away from fossil fuels.

Indeed, the policies have dramatically altered the math for corporations considering whether, where, and when to build new facilities and factories, helping to spur at least tens of billions of dollars’ worth of private investments into the nation’s critical-mineral-to-EV supply chain, according to several analyses.

“If you try to work out the math on these for five minutes, you start to be really shocked by what you see on paper,” Allan says, noting that the IRA’s incentives ensure that many more projects could be profitably and competitively developed in the US. “It’s going to transform the country in a serious way.”

An urgent game of catch-up

For decades, the US steadily offshored the messy business of mining and processing metals, leaving other nations to deal with the environmental damage and community conflicts that these industries often cause. But the country is increasingly eager to revitalize these sectors as climate change and simmering trade tensions with China raise the economic, environmental, and geopolitical stakes. 

Critical minerals like lithium, cobalt, nickel, and copper are the engine of the emerging clean-energy economy, essential for producing solar panels, wind turbines, batteries, and EVs. Yet China dominates production of the source materials, components, and finished goods for most of these products, following decades of strategic government investments and targeted trade policies. It refines 71% of the type of nickel used for batteries and produces more than 85% of the world’s battery cells, according to Benchmark Mineral Intelligence. 

The US is now in a high-stakes scramble to catch up and ensure its unfettered access to these materials, either by boosting domestic production or by locking in supply chains through friendly trading partners. The IRA is the nation’s biggest bet, by far, on bolstering these industries and countering China’s dominance over global cleantech supply chains. By some estimates, it could unlock more than $1 trillion in federal incentives.

“It should be sufficient to drive transformational progress in clean-energy adoption in the United States,” says Kimberly Clausing, a professor at the UCLA School of Law who previously served as deputy assistant secretary for tax analysis at the Treasury Department. “The best modeling seems to show it will reduce emissions substantially, getting us halfway to our Paris Agreement goals.”

Among other subsidies, the IRA provides tax credits that companies can earn for producing critical minerals, electrode materials, and batteries, enabling them to substantially cut their federal tax obligations. 

But the provisions that are really driving the rethinking of sourcing and supply chains are the so-called domestic content requirements contained in the tax credits for purchasing EVs. For consumers to earn the full credits, and for EV makers to benefit from the boost in demand they’ll generate, a significant share of the critical minerals the batteries contain must be produced in the US, sourced from free-trade partners, or recycled in North America, among other requirements. 

This makes the critical minerals coming out of a mine like Talon’s especially valuable to US car companies, since those minerals could help ensure that their EV models and customers qualify for the credits. 

Mining and refining

Nickel, the metal at the heart of the deposits found in Minnesota, is of particular importance for cleaning up the auto sector. It boosts the amount of energy that can be packed into battery cathodes, extending the range of cars and making possible heavier electric vehicles, like trucks and even semis.

Global nickel demand could rise 112% by 2040, according to the International Energy Agency, owing primarily to an expected ninefold increase in demand for EV batteries. But there’s only one dedicated nickel mine operating in the US today, and most processing of the metal happens overseas. 

A former Talon worker pulls tubes of bedrock from drill pipe and places them into a box for further inspection.
ACKERMAN + GRUBER

In a preliminary economic analysis of the proposed mine released in 2021, Talon said it hoped to dig up nearly 11 million metric tons of ore over a nine-year period, including more than 140,000 tons of nickel. That’s enough to produce lithium-ion batteries that could power almost 2.4 million electric vehicles, Allan finds. 

After Talon mines the ore, the company plans to ship the material more than 400 miles west by rail to a planned processing site in central North Dakota that would produce what’s known as “nickel in concentrate,” which is generally around 10% pure. 

But that’s not enough to earn any subsidies under the current interpretation of the IRA’s tax credit for critical-mineral production. The law specifies that a company must convert nickel into a highly refined form known as “nickel sulphate” or process the metal to at least 99% purity by mass to be eligible for tax credits that cover 10% of the operating cost. Allan estimates that whichever company or companies carry out that step could earn subsidies that exceed $55 million. 

From there, the nickel would still need to be processed and mixed with other metals to produce the “cathode active materials” that go into a battery. Whatever companies carry out that step could secure some share of another $126.5 million in tax savings, thanks to a separate credit covering 10% of the costs of generating these materials, Allan notes.

Some share of the subsidies from these two tax credits might go to Tesla, which has stressed that it’s bringing more aspects of battery manufacturing in-house. For instance, it’s in the process of constructing its own lithium refinery and cathode plant in Texas. 

But it’s not yet clear what other companies could be involved in processing the nickel mined by Talon and, thus, who would benefit from these particular provisions.

Talon and other mining companies have campaigned to have the costs for mining raw materials included in the critical-mineral production tax credit, but the IRS recently stated in a proposed rule that this step won’t qualify.

Todd Malan, Talon’s chief external affairs officer and head of climate strategy, argues that this and other recent determinations will limit the incentives for companies to develop new mines in the US, or to make sure that any mines that are developed meet the higher environmental and labor standards the Biden administration and others have been calling for.

(The determinations could change since the Treasury Department and IRS have said they are still considering including the costs of mining in the tax credits. They have requested additional comments on the matter.) 

Even if Talon doesn’t obtain any IRA subsidies, it still stands to earn federal funds in several other ways. The company is set to receive a nearly $115 million grant from the Department of Energy to build the North Dakota processing site, through funds freed up under the Bipartisan Infrastructure Law. In addition, in September Talon secured nearly $21 million in matching grants through the Defense Production Act, which will support further nickel exploration in Minnesota and at another site the company is evaluating in Michigan. (These numbers are not included in Allan’s overall $26 billion estimate.)


Talon Metals could receive $136 million in federal subsidies

  • $115 million to build a nickel processing site in North Dakota with funds from the Bipartisan Infrastructure Law
  • $21 million through the Defense Production Act to support additional nickel exploration in the Midwest

The math

Allan says that his findings are best thought of as ballpark figures. Some of Talon’s estimates have already changed, and the actual mineral quantities and operating costs will depend on a variety of factors, including how the company’s plans shift, what state and local regulators ultimately approve, what Talon actually pulls out of the ground, how much nickel the ore contains, and how much costs shift throughout the supply chain in the coming years.

His analysis assumes a preparation cost of $6.68 per kilowatt-hour for cathode active materials, based on an earlier analysis in the journal Energies. It did not evaluate any potential subsidies associated with other metals that Talon may extract from the mine, such as iron, copper, and cobalt. His full research brief is available on the Net Zero Industrial Policy Lab site. 
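
As a rough sketch of how the cathode-materials figure falls out of those inputs, the calculation below multiplies the 10% credit by the assumed $6.68-per-kilowatt-hour preparation cost and the roughly 190 million kilowatt-hours of batteries Allan estimates Talon’s nickel could support. It is only a ballpark reproduction of his more detailed analysis, not the analysis itself.

```python
# Ballpark check of the cathode-active-materials credit, using the figures
# quoted in this article (not Allan's full model).
cathode_prep_cost_per_kwh = 6.68      # dollars per kWh, from the Energies-based assumption
credit_share = 0.10                   # the IRA credit covers 10% of production costs
battery_capacity_kwh = 190_000_000    # ~190 million kWh of batteries, per Allan's estimate

credit = credit_share * cathode_prep_cost_per_kwh * battery_capacity_kwh
print(f"Cathode-materials credit: ${credit / 1e6:.0f} million")
# prints roughly $127 million, in line with the $126.5 million figure cited above
```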

Companies can use the IRA tax credits to reduce or even eliminate their federal tax obligations, both now and in tax years to come. In addition, businesses can transfer and sell the tax credits to other taxpayers.

Most of the tax credits in the IRA begin to phase out in 2030, so companies need to move fast to take advantage of them. The subsidies for critical-mineral production, however, don’t have any such cutoff.

Where will the money go and what will it do?

The $136 million in direct federal grants would double Talon’s funds for exploratory drilling efforts and cover about 27% of the development cost for its North Dakota processing plant.

The company says that these projects will help accelerate the country’s shift toward EVs and reduce the nation’s reliance on China for critical minerals. Further, Talon notes the mine will provide significant local economic benefits, including about 300 new jobs. That’s in addition to the nearly 100 employees already working in or near Tamarack. The company also expects the operation to generate nearly $110 million in mineral royalties and taxes paid to the state, local government, and the regional school district.

Plenty of citizens around Tamarack, however, argue that any economic benefits will come with steep trade-offs in terms of environmental and community impacts. A number of local tribal members fear the project could contaminate waterways and harm the region’s plants and animals. 

“The energy transition cannot be built by desecrating native lands,” said Leanna Goose, a member of the Leech Lake Band of Ojibwe, in an email. “If these ‘critical’ minerals leave the ground and are taken out from on or near our reservations, our people would be left with polluted water and land.”

Meanwhile, as it becomes clear just how much federal money is at stake, opposition to the IRA and other climate-related laws is hardening. Congressional Republicans, some of whom have portrayed the tax subsidies as corporate handouts to the “wealthy and well connected,” have repeatedly attempted to repeal key provisions of the laws. In addition, some environmentalists and left-wing critics have chided the government for offering generous subsidies to controversial companies and projects, including Talon’s. 

Talon stresses that it has made significant efforts to limit pollution and address Indigenous concerns. In addition, Malan pushed back on Allan’s findings. He says the overall estimate of $26 billion in subsidies across the supply chain significantly exaggerates the likely outcome, given numerous ways that companies and consumers might fail to qualify for the tax credits.

“I think it’s too much to tie it back to a little mining company in Minnesota,” he says. 

He emphasizes that Talon will earn money only for selling the metal it extracts, and that it will receive other federal grants only if it secures permits to proceed on its projects. (The company could also apply for separate IRA tax credits that cover a portion of the investments made in certain types of energy projects, but it has not done so at this time.)

Boosting the battery sector

The next stop in the supply chain is the battery makers. 

The amount of nickel that Talon expects to pull from the mine could be used to produce cathodes for nearly 190 million kilowatt-hours’ worth of lithium-ion batteries, according to Allan’s findings. 

Manufacturing that many batteries could generate some $8.5 billion from a pair of IRA tax credits worth a combined $45 per kilowatt-hour, dwarfing the potential subsidies for processing the nickel.

Any number of companies might purchase metals from Talon to build batteries, but Tesla has already agreed to buy 75,000 tons of nickel in concentrate from the North Dakota facility. (The companies have not disclosed the financial terms of the deal.)

Given the batteries that could be produced with this amount of metal, Tesla’s share of these tax savings could exceed $4 billion, Allan found. 

The tax credits add up to “a third of the cost of the battery, full stop,” he says. “These are big numbers. The entire cost of building the plant, at least, is covered by the IRA.”


What Talon’s nickel may mean for Tesla


The math

The subsidies for battery makers would flow from two credits within the IRA. Those include a $35-per-kilowatt-hour tax credit for manufacturing battery cells and a $10-per-kilowatt-hour credit for producing battery modules, which are the bundles of interoperating cells that slot into vehicles. Allan’s calculations assume that all the metal will be used to produce nickel-rich NMC 811 batteries, and that every EV will include an 80-kilowatt-hour battery pack that costs $153 per kilowatt-hour to produce.
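
A minimal sketch of that arithmetic, using only the assumptions stated above, looks like the following; the totals in Allan’s brief differ slightly because his inputs are more precise.

```python
# Sketch of the battery-manufacturing credit math, using the article's stated
# assumptions (~190 million kWh of cells, $153/kWh production cost).
cell_credit_per_kwh = 35          # dollars, for manufacturing battery cells
module_credit_per_kwh = 10        # dollars, for producing battery modules
battery_capacity_kwh = 190_000_000
production_cost_per_kwh = 153     # dollars

credit_per_kwh = cell_credit_per_kwh + module_credit_per_kwh      # $45/kWh
total_credit = credit_per_kwh * battery_capacity_kwh

print(f"Total battery-maker credits: ${total_credit / 1e9:.2f} billion")   # ~$8.55 billion
print(f"Credit as a share of production cost: {credit_per_kwh / production_cost_per_kwh:.0%}")
# ~29%, the basis for the "a third of the cost of the battery" remark above
```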

Where will the money go and what will it do?

Those billions are just what Tesla could secure in tax credits from the nickel it buys from Talon. It and other battery makers could qualify for still more government subsidies for batteries produced with critical minerals from other sources. 

Tesla didn’t respond to inquiries from MIT Technology Review. But its executives have said they believe Tesla’s batteries will qualify for the manufacturing tax credits, even before Talon’s mining and processing plants are up and running.

On an earnings call last January, Zachary Kirkhorn, who was then the company’s chief financial officer, said that Tesla expected the battery subsidies from its current production lines to total $150 million to $250 million per quarter in 2023. He said the company intends to use the tax credits to lower prices and promote greater adoption of electric vehicles: “We want to use this to accelerate sustainable energy, which is our mission and also the goal of [the IRA].” 

But these potential subsidies are clear evidence that the US government is dedicating funds to the wrong societal priorities, says Jenna Yeakle, an organizer for the Sierra Club North Star Chapter in Minnesota, which added its name to a letter to the White House criticizing federal support for Talon’s proposals. 

“People are struggling to pay rent and put food on the table and to navigate our monopolized corporate health-care system,” she says. “Do we need to be subsidizing Elon Musk’s bank account?”

Still, the IRA’s tax credits will go to numerous battery companies beyond Tesla. 

In fact, the incentives are already reshaping the marketplace, driving a sharp increase in the number of battery and electric-vehicle projects announced, according to the EV Supply Chain Dashboard, a database managed by Jay Turner, a professor of environmental studies at Wellesley College and author of Charged: A History of Batteries and Lessons for a Clean Energy Future. 

As of press time, 81 battery and EV-related projects representing $79 billion in investments and more than 50,000 jobs have been announced across the US since Biden signed the IRA. On an annual basis, that’s nearly three times the average dollar figures announced in recent years before the law was enacted. The projects include BMW, Hyundai, and Ford battery plants, Tesla’s Semi manufacturing pilot plant in Nevada, and Redwood Materials’ battery recycling facility in South Carolina. 

“It’s really exceptional,” Turner says. “I don’t think anybody expected to see so many battery projects, so many jobs, and so many investments over the past year.”

Driving EV sales

The biggest subsidy, though also the most diffuse one, would go to American consumers. 

The IRA offers two tax credits worth up to $7,500 combined for purchasing EVs and plug-in hybrids if the battery materials and components comply with the domestic content requirements.

Since the nickel that Talon expects to extract from the Minnesota mine could power nearly 2.4 million electric vehicles, consumers could collectively see $17.7 billion in potential savings if all those vehicles qualify for both credits, Allan finds. 

Talon’s Malan says this estimate significantly overstates the likely consumer savings, noting that many purchases won’t qualify. Indeed, an individual with a gross income that exceeds $150,000 won’t be eligible, nor will pickups, vans, and SUVs that cost more than $80,000. That would rule out, for instance, the high-end model of Tesla’s Cybertruck.

A number of Tesla models are currently excluded from one or both consumer credits, for varied and confusing reasons. But the Talon deal and other recent sourcing arrangements, as well as the company’s plans to manufacture more of its own batteries, could help more of Tesla’s vehicles to qualify in the coming months or years. 

The IRA’s consumer incentives are likely to do more to stimulate demand than previous federal EV policies, in large part because customers can take them in the form of a price cut at the point of sale, says Gil Tal, director of the Electric Vehicle Research Center at the University of California, Davis. Previously, such incentives would simply reduce the buyer’s federal obligations come tax season. 

RMI, a nonprofit research group focused on clean energy, projects that all the EV provisions within the IRA, which also include subsidies for new charging stations, will spur the sales of an additional 37 million electric cars and trucks by 2032. That would propel EV sales to around 80% of new passenger-automobile purchases. Those vehicles, in turn, could eliminate 2.4 billion tons of transportation emissions by 2040. 

red Tesla Model 3
In a preliminary economic analysis, Talon said it hoped to dig up more than 140,000 tons of nickel. That’s enough to produce lithium-ion batteries that could power almost 2.4 million electric vehicles.
TESLA

The math

The IRA offers two tax credits that could apply to EV buyers. The first is a $3,750 credit for those who purchase vehicles with batteries that contain a significant portion of critical minerals that were mined or processed in the US, or in a country with which the US has a free-trade agreement. The required share is 50% in 2024 but reaches 80% beginning in 2027. Cars and trucks may also qualify if the materials came from recycling in North America.

Buyers can also earn a separate $3,750 credit if a specified share of the battery components in the vehicle were manufactured or assembled in North America. The share is 60% this year and next but reaches 100% in 2029.
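
Put together, the consumer-side total is a straightforward multiplication. The sketch below assumes, as the high-end scenario does, that every vehicle and every buyer qualifies for both credits.

```python
# Rough reproduction of the consumer-credit total: vehicles implied by Allan's
# battery estimate times the maximum $7,500 in combined EV purchase credits.
battery_capacity_kwh = 190_000_000   # batteries Talon's nickel could support
pack_size_kwh = 80                   # assumed battery pack per EV
max_credit_per_ev = 3_750 + 3_750    # critical-minerals credit + components credit

vehicles = battery_capacity_kwh / pack_size_kwh        # ~2.4 million EVs
savings = vehicles * max_credit_per_ev

print(f"EVs supported: {vehicles / 1e6:.1f} million")
print(f"Potential consumer savings: ${savings / 1e9:.1f} billion")
# ~$17.8 billion, close to the $17.7 billion high-end figure in Allan's analysis
```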

The big bet

There are lingering questions about how many of the projects sparked by the country’s new green industrial policies will ultimately be built—and what the US will get for all the money it’s giving up. 

After all, the tens of billions of dollars’ worth of tax credits that could be granted throughout the Talon-to-Tesla-to-consumer nickel supply chain is money that isn’t going to the federal government, and isn’t funding services for American taxpayers.

The IRA’s impacts on tax coffers are certain to come under greater scrutiny as the programs ramp up, the dollar figures rise, projects run into trouble, and the companies or executives benefiting engage in questionable practices. After all, that’s exactly what happened in the aftermath of the country’s first major green industrial policy efforts a decade ago, when the high-profile failures of Solyndra, Fisker, and other government-backed clean-energy ventures fueled outrage among conservative critics. 

Nevertheless, Tom Moerenhout, a research scholar at Columbia University’s Center on Global Energy Policy, insists it’s wrong to think of these tax credits as forgone federal revenue. 

In many cases, the projects set to get subsidies for 10% of their operating costs would not otherwise have existed in the first place, since those processing plants and manufacturing facilities would have been built in other, cheaper countries. “They would simply go to China,” he says.

UCLA’s Clausing doesn’t entirely agree with that take, noting that some of this money will go to projects that would have happened anyway, and some of the resources will simply be pulled from other sectors of the economy or different project types. 

“It doesn’t behoove us as experts to argue this is free money,” she says. “Resources really do have costs. Money doesn’t grow on trees.”

But any federal expenses here are “still cheaper than the social cost of carbon,” she adds, referring to the estimated costs from the damage associated with ongoing greenhouse-gas pollution. “And we should keep our eyes on the prize and remember that there are some social priorities worth paying for—and this is one of those.”

In the end, few expect the US’s sweeping climate laws to completely achieve any of the hopes underlying them on their own. They won’t propel the US to net-zero emissions. They won’t enable the country to close China’s massive lead in key minerals and cleantech, or fully break free from its reliance on the rival nation. Meanwhile, the battle to lock down access to critical minerals will only become increasingly competitive as more nations accelerate efforts to move away from fossil fuels—and it will generate even more controversy as communities push back against proposals over concerns about environmental destruction.

But the evidence is building that the IRA in particular is spurring real change, delivering at least some progress on most of the goals that drove its passage: galvanizing green-tech projects, cutting emissions, creating jobs, and moving the nation closer to its clean-energy future. 

“It is catalyzing investment up and down the supply chain across North America,” Allan says. “It is a huge shot in the arm of American industry.”

The worst technology failures of 2023

Welcome to our annual list of the worst technologies. This year, one technology disaster in particular holds lessons for the rest of us: the Titan submersible that imploded while diving to see the Titanic.

Everyone had warned Stockton Rush, the sub’s creator, that it wasn’t safe. But he believed innovation meant tossing out the rule book and taking chances. He set aside good engineering in favor of wishful thinking. He and four others died. 

To us it shows how the spirit of innovation can pull ahead of reality, sometimes with unpleasant consequences. It was a phenomenon we saw time and again this year, like when GM’s Cruise division put robotaxis into circulation before they were ready. Was the company in such a hurry because it’s been losing $2 billion a year? Others find convoluted ways to keep hopes alive, like a company that is showing off its industrial equipment but is quietly still using bespoke methods to craft its lab-grown meat. The worst cringe, though, is when true believers can’t see the looming disaster, but we do. That’s the case for the new “Ai Pin,” developed at a cost of tens of millions, that’s meant to replace smartphones. It looks like a titanic failure to us. 

Titan submersible

This summer we were glued to our news feeds as drama unfolded 3,500 meters below the ocean’s surface. An experimental submarine with five people aboard was lost after descending to see the wreck of the Titanic.  

the oceangate submersible underwater

GDA VIA AP IMAGES

The Titan was a radical design for a deep-sea submersible: a minivan-size carbon fiber tube, operated with a joystick, that aerospace engineer Stockton Rush believed would open the depths to a new kind of tourism. His company, OceanGate, had been warned the vessel hadn’t been proved to withstand 400 atmospheres of pressure. His answer? “I think it was General MacArthur who said, ‘You’re remembered for the rules you break,’” Rush told a YouTuber.

But breaking the rules of physics doesn’t work. On June 22, four days after contact was lost with the Titan, a deep-sea robot spotted the sub’s remains. It was most likely destroyed in a catastrophic implosion.

In addition to Rush, the following passengers perished:

  • Hamish Harding, 58, tourist
  • Shahzada Dawood, 48, tourist
  • Suleman Dawood, 19, tourist
  • Paul-Henri Nargeolet, 77, Titanic expert

More: The Titan Submersible Was “an Accident Waiting to Happen” (The New Yorker), OceanGate Was Warned of Potential for “Catastrophic” Problems With Titanic Mission (New York Times), OceanGate CEO Stockton Rush said in 2021 he knew he’d “broken some rules” (Business Insider)


Lab-grown meat

Instead of killing animals for food, why not manufacture beef or chicken in a laboratory vat? That’s the humane idea behind “lab-grown meat.”

The problem, though, is making the stuff at a large scale. Take Upside Foods. The startup, based in Berkeley, California, had raised more than half a billion dollars and was showing off rows of big, gleaming steel bioreactors.

But journalists soon learned that Upside was a bird in borrowed feathers. Its big tanks weren’t working; it was growing chicken skin cells in much smaller plastic laboratory flasks. Thin layers of cells were then being manually scooped up and pressed into chicken pieces. In other words, Upside was using lots of labor, plastic, and energy to make hardly any meat.

Samir Qurashi, a former employee, told the Wall Street Journal he knows why Upside puffed up the potential of lab-grown food. “It’s the ‘fake it till you make it’ principle,” he said.

And even though lab-grown chicken has FDA approval, there’s doubt whether lab meat will ever compete with the real thing. Chicken goes for $4.99 a pound at the supermarket. Upside still isn’t saying how much the lab version costs to make, but a few bites of it sell for $45 at a Michelin-starred restaurant in San Francisco.

Upside has admitted the challenges. “We signed up for this work not because it’s easy, but because the world urgently needs it,” the company says.

More: I tried lab-grown chicken at a Michelin-starred restaurant (MIT Technology Review), The Biggest Problem With Lab-Grown Chicken Is Growing the Chicken (Bloomberg), Insiders Reveal Major Problems at Lab-Grown-Meat Startup Upside Foods (Wired)


Cruise robotaxi

Sorry, autopilot fans, but we can’t ignore the setbacks this year. Tesla just did a massive software recall after cars in self-driving mode slammed into emergency vehicles. But the biggest reversal was at Cruise, the division of GM that became the first company to offer driverless taxi rides in San Francisco, day or night, with a fleet exceeding 400 cars.

Cruise argues that robotaxis don’t get tired, don’t get drunk, and don’t get distracted. It even ran a full-page newspaper ad declaring that “humans are terrible drivers.”

a Cruise vehicle parked on the street in front of a residential home as a person descends a front staircase in the background

CRUISE

But Cruise forgot that to err is human—not what we want from robots. Soon, it was Cruise’s sensor-laden Chevy Bolts that started racking up noticeable mishaps, including dragging a pedestrian for 20 feet. This October, the California Department of Motor Vehicles suspended GM’s robotaxis, citing an “unreasonable risk to public safety.”

It’s a blow for Cruise, which has since laid off 25% of its staff and fired its CEO and cofounder, Kyle Vogt, a onetime MIT student. “We have temporarily paused driverless service,” Cruise’s website now reads. It says it’s reviewing safety and taking steps to “regain public trust.”

More: GM’s Self-Driving Car Unit Skids Off Course (Wall Street Journal), Important Updates from Cruise (Getcruise.com)


Plastic proliferation

Plastic is great. It’s strong, it’s light, and it can be pressed into just about any shape: lawn chairs, bobbleheads, bags, tires, or thread.

The problem is there’s too much of it, as Doug Main reported in MIT Technology Review this year. Humans make 430 million tons of plastic a year (significantly more than the weight of all people combined), but only 9% gets recycled. The rest ends up in landfills and, increasingly, in the environment. Not only does the average whale have kilograms of the stuff in its belly, but tiny bits of “microplastic” have been found in soft drinks, plankton, and human bloodstreams, and even floating in the air. The health effects of spreading microplastic pollution have barely been studied.

Awareness of the planetary scourge is growing, and some are calling for a “plastics treaty” to help stop the pollution. It’s going to be a hard sell. That’s because plastic is so cheap and useful. Yet researchers say the best way to cut plastic waste is not to make it in the first place.

More: Think your plastic is being recycled? Think again (MIT Technology Review),  Oh Good, Hurricanes Are Now Made of Microplastics (Wired)


Humane Ai Pin

The New York Times declared it Silicon Valley’s “Big, Bold Sci-Fi Bet” for what comes after the smartphone. The product? A plastic badge called the Ai Pin, with a camera, chips, and sensors.

Humane's AI Pin worn on a sweatshirt

HUMANE

A device to wean us off our phone addiction is a worthy goal, but this blocky $699 pin (which also requires a $24-a-month subscription) isn’t it. An early review called the device, developed by startup Humane Ai, “equal parts magic and awkward.” Emphasis on the awkward. Users must speak voice commands to send messages or chat with an AI (a laser projector in the pin will also display information on your hand). It weighs as much as a golf ball, so you probably won’t be attaching it to a T-shirt. 

It is the creation of a husband-and-wife team of former Apple executives, Bethany Bongiorno and Imran Chaudhri, who arrived at their product idea with the guidance of a Buddhist monk named Brother Spirit. Along the way, they raised $240 million and filed 25 patents, according to the Times.

Clearly, there’s a lot of thought, money, and engineering involved in its creation. But as The Verge’s wearables reviewer Victoria Song points out, “it flouts the chief rule of good wearable design: you have to want to wear the damn thing.” As it is, the Ai Pin is neat, but it’s still no competition for the lure of a screen.

More: Can A.I. and Lasers Cure Our Smartphone Addiction? (New York Times), Screens are good, actually (The Verge)


Social media superconductor

A room-temperature superconductor is a material offering no electrical resistance. If it existed, it would make possible new types of batteries and powerful quantum computers, and bring nuclear fusion closer to reality. It’s a true Holy Grail.

So when a report emerged this July from Korea that a substance called LK-99 was the real thing, attention seekers on the internet were ready to share. The news popped up first in Asia, along with an online video of a bit of material floating above a magnet. Then came the booster fuel of social media hot takes.

Pellet of LK-99 being repelled by a magnet

HYUN-TAK KIM/WIKIMEDIA

“Today might have seen the biggest physics discovery of my lifetime,” said a post to X that has been viewed 30 million times. “I don’t think people fully grasp the implications … Here’s how it could totally change our lives.”

No matter that the post had been written by a marketer at a coffee company. It was exciting—and hilarious—to see well-funded startups drop their work on rockets and biotech to try to make the magic substance. Kenneth Chang, a reporter at the New York Times, dubbed LK-99 “the Superconductor of the Summer.”

But summer’s dreams soon ripped at the seams after real physicists couldn’t replicate the work. No, LK-99 is not a superconductor. Instead, impurities in the recipe could have misled the Korean researchers—and, thanks to social media, the rest of us too.

More: LK-99 Is the Superconductor of the Summer (New York Times), LK-99 isn’t a superconductor—how science sleuths solved the mystery (Nature)


Rogue geoengineering

Solar geoengineering is the idea of cooling the planet by releasing reflective materials into the atmosphere. It’s a fraught concept, because it won’t stop the greenhouse effect—only mask it. And who gets to decide to block the sun?

Mexico banned geoengineering trials early this year after a startup called Make Sunsets decided it could commercialize the effort. Cofounder Luke Iseman launched balloons in Mexico designed to disperse sulfur dioxide, which forms reflective particles, into the sky. The startup is still selling “cooling credits” for $10 each on its website.

Injecting particles into the sky is theoretically cheap and easy, and climate warming is a huge threat. But moving too fast can create a backlash that stalls progress, according to my colleague James Temple. “They’re violating the rights of communities to dictate their own future,” one critic said.

Iseman remains unrepentant. “I don’t poll billions before taking a flight,” he has said. “I’m not going to ask for permission from every person in the world before I try to do a bit to cool Earth.” 

More: The flawed logic of rushing out extreme climate solutions (MIT Technology Review), Mexico bans solar geoengineering experiments after startup’s field tests (The Verge), Researchers launched a solar geoengineering test flight in the UK last fall (MIT Technology Review)

How Meta and AI companies recruited striking actors to train AI

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 

“This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

“This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

“Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”

All this leaves actors, struggling after three months of limited to no work, primed to accept the terms from Realeyes and Meta—and, intentionally or not, to affect all actors, whether or not they personally choose to engage with AI. 

“It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

Humans as data 

Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 

Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

Teaching AI to appear more human

At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

This made the gig an attractive proposition for young actors like T, just starting out in Hollywood—a notoriously challenging environment even had he not arrived just before the SAG-AFTRA strike started. (T has not worked enough union jobs to officially join the union, though he hopes to one day.) 

In fact, even more than a standard acting job, T described performing for Realeyes as “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company that he only skimmed through. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

“If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”

When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word for which there are assumed but no universally agreed-upon definitions—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”  

But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. CEO Mark Zuckerberg recently described these avatars on AI researcher Lex Fridman’s popular podcast as core to his vision of the future, in which physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

Individual choice, collective impact 

Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

But a future agreement, which is specifically between the studios and SAG, will not be applicable to tech companies conducting “research” projects, like Meta and Realeyes. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

“The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

(Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

“Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

But at least for now, it’s a hypothetical. 

“I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 

How Meta and AI companies recruited striking actors to train AI

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 

“This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

“This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

“Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”

All this leaves actors, struggling after three months of little to no work, primed to accept the terms from Realeyes and Meta—and, intentionally or not, to affect all actors, whether or not they personally choose to engage with AI. 

“It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

Humans as data 

Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 

Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

Teaching AI to appear more human

At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

This made the gig an attractive proposition for young actors like T, who was just starting out in Hollywood, a notoriously challenging environment even before the SAG-AFTRA strike began shortly after he arrived. (T has not worked enough union jobs to officially join the union, though he hopes to one day.) 

In fact, T described performing for Realeyes as something more than a standard acting job: “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company, which he only skimmed. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

“If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”

When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word with an assumed but not universally agreed-upon meaning—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”

But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. On AI researcher Lex Fridman’s popular podcast, CEO Mark Zuckerberg recently described these avatars as core to his vision of the future, in which physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

Individual choice, collective impact 

Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

But a future agreement, which is specifically between the studios and SAG-AFTRA, will not apply to tech companies like Meta and Realeyes that conduct “research” projects. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

“The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

(Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

“Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

But at least for now, it’s a hypothetical. 

“I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489.