Is robotics about to have its own ChatGPT moment?

Silent. Rigid. Clumsy.

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home. 

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. Since then, he’s learned how to communicate by moving his eyes over a letter board, but he is highly reliant on caregivers and his wife, Jane. 

Henry got a glimmer of a different kind of life when he saw Charlie Kemp on CNN in 2010. Kemp, a robotics professor at Georgia Tech, was on TV talking about PR2, a robot developed by the company Willow Garage. PR2 was a massive two-armed machine on wheels that looked like a crude metal butler. Kemp was demonstrating how the robot worked, and talking about his research on how health-care robots could help people. He showed how the PR2 robot could hand some medicine to the television host.    

“All of a sudden, Henry turns to me and says, ‘Why can’t that robot be an extension of my body?’ And I said, ‘Why not?’” Jane says. 

There was a solid reason why not. While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes. 

That seems to finally be changing, in large part thanks to artificial intelligence. For decades, roboticists have more or less focused on controlling robots’ “bodies”—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes. 

Progress won’t happen overnight, though, as the Evanses know far too well from their many years of using various robot prototypes. 

PR2 was the first robot they brought in, and it opened up entirely new capabilities for Henry. It would hold a beard shaver and Henry would move his face against it, allowing him to shave and scratch an itch by himself for the first time in a decade. But at 450 pounds (200 kilograms) or so and $400,000, the robot was difficult to have around. “It could easily take out a wall in your house,” Jane says. “I wasn’t a big fan.”

More recently, the Evanses have been testing out a smaller robot called Stretch, which Kemp developed through his startup Hello Robot. The first iteration launched during the pandemic with a much more reasonable price tag of around $18,000. 

Stretch weighs about 50 pounds. It has a small mobile base, a stick with a camera dangling off it, and an adjustable arm featuring a gripper with suction cups at the ends. It can be controlled with a console controller. Henry controls Stretch using a laptop, with a tool that tracks his head movements to move a cursor around. He is able to move his thumb and index finger enough to click a computer mouse. Last summer, Stretch was with the couple for more than a month, and Henry says it gave him a whole new level of autonomy. “It was practical, and I could see using it every day,” he says. 

Henry Evans used the Stretch robot to brush his hair, eat, and even play with his granddaughter.
PETER ADAMS

Using his laptop, he could get the robot to brush his hair and have it hold fruit kebabs for him to snack on. It also opened up Henry’s relationship with his granddaughter Teddie. Before, they barely interacted. “She didn’t hug him at all goodbye. Nothing like that,” Jane says. But “Papa Wheelie” and Teddie used Stretch to play, engaging in relay races, bowling, and magnetic fishing. 

Stretch doesn’t have much in the way of smarts: it comes with some preinstalled software, such as the web interface that Henry uses to control it, and other capabilities such as AI-enabled navigation. The main benefit of Stretch is that people can plug in their own AI models and use them to do experiments. But it offers a glimpse of what a world with useful home robots could look like. Robots that can do many of the things humans do in the home—tasks such as folding laundry, cooking meals, and cleaning—have been a dream of robotics research since the inception of the field in the 1950s. For a long time, it’s been just that: “Robotics is full of dreamers,” says Kemp.

But the field is at an inflection point, says Ken Goldberg, a robotics professor at the University of California, Berkeley. Previous efforts to build a useful home robot, he says, have emphatically failed to meet the expectations set by popular culture—think the robotic maid from The Jetsons. Now things are very different. Thanks to cheap hardware like Stretch, along with efforts to collect and share data and advances in generative AI, robots are getting more competent and helpful faster than ever before. “We’re at a point where we’re very close to getting capability that is really going to be useful,” Goldberg says. 

Folding laundry, cooking shrimp, wiping surfaces, unloading shopping baskets—today’s AI-powered robots are learning to do tasks that for their predecessors would have been extremely difficult. 

Missing pieces

There’s a well-known observation among roboticists: What is hard for humans is easy for machines, and what is easy for humans is hard for machines. Called Moravec’s paradox, it was first articulated in the 1980s by Hans Moravec, then a roboticist at the Robotics Institute of Carnegie Mellon University. A robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter. 

There are three reasons for this, says Goldberg. First, robots lack precise control and coordination. Second, their understanding of the surrounding world is limited because they are reliant on cameras and sensors to perceive it. Third, they lack an innate sense of practical physics. 

“Pick up a hammer, and it will probably fall out of your gripper, unless you grab it near the heavy part. But you don’t know that if you just look at it, unless you know how hammers work,” Goldberg says. 

On top of these basic considerations, there are many other technical things that need to be just right, from motors to cameras to Wi-Fi connections, and hardware can be prohibitively expensive. 

Mechanically, we’ve been able to do fairly complex things for a while. In a video from 1957, two large robotic arms are dexterous enough to pinch a cigarette, place it in the mouth of a woman at a typewriter, and reapply her lipstick. But the intelligence and the spatial awareness of that robot came from the person who was operating it. 

In a video from 1957, a man operates two large robotic arms and uses the machine to apply a woman’s lipstick. Robots have come a long way since.
“LIGHTER SIDE OF THE NEWS –ATOMIC ROBOT A HANDY GUY” (1957) VIA YOUTUBE

“The missing piece is: How do we get software to do [these things] automatically?” says Deepak Pathak, an assistant professor of computer science at Carnegie Mellon.  

Researchers training robots have traditionally approached this problem by planning everything the robot does in excruciating detail. Robotics giant Boston Dynamics used this approach when it developed its boogying and parkouring humanoid robot Atlas. Cameras and computer vision are used to identify objects and scenes. Researchers then use that data to make models that can be used to predict with extreme precision what will happen if a robot moves a certain way. Using these models, roboticists plan the motions of their machines by writing a very specific list of actions for them to take. The engineers then test these motions in the laboratory many times and tweak them to perfection. 

This approach has its limits. Robots trained like this are strictly choreographed to work in one specific setting. Take them out of the laboratory and into an unfamiliar location, and they are likely to topple over. 
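To make the contrast concrete, here is a minimal, hypothetical sketch of that classical style of programming, in which every motion is scripted in advance for one carefully mapped workspace. The waypoints and the `send_to_arm` stub are invented for illustration and are not drawn from any real system.

```python
# A minimal, hypothetical sketch of "classical" robot programming:
# every motion is specified in advance for one known workspace.
# The Waypoint values and send_to_arm() stub are invented for illustration.

from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float  # meters, in the robot's fixed workspace frame
    y: float
    z: float
    gripper_open: bool

# A hand-tuned plan for picking one known object from one known spot.
PICK_AND_PLACE_PLAN = [
    Waypoint(0.40, 0.10, 0.30, gripper_open=True),   # hover above the object
    Waypoint(0.40, 0.10, 0.12, gripper_open=True),   # descend
    Waypoint(0.40, 0.10, 0.12, gripper_open=False),  # close gripper
    Waypoint(0.40, 0.10, 0.30, gripper_open=False),  # lift
    Waypoint(0.10, 0.35, 0.30, gripper_open=False),  # move to the drop zone
    Waypoint(0.10, 0.35, 0.30, gripper_open=True),   # release
]

def send_to_arm(wp: Waypoint) -> None:
    # Stand-in for a real motion-controller call.
    print(f"move to ({wp.x:.2f}, {wp.y:.2f}, {wp.z:.2f}), "
          f"gripper {'open' if wp.gripper_open else 'closed'}")

def run_plan() -> None:
    # The robot replays the same choreography every time. If the object,
    # the furniture, or the lighting changes, nothing in this script adapts.
    for wp in PICK_AND_PLACE_PLAN:
        send_to_arm(wp)

if __name__ == "__main__":
    run_plan()
```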

Compared with other fields, such as computer vision, robotics has been in the dark ages, Pathak says. But that might not be the case for much longer, because the field is seeing a big shake-up. Thanks to the AI boom, he says, the focus is now shifting from feats of physical dexterity to building “general-purpose robot brains” in the form of neural networks. Much as the human brain is adaptable and can control different aspects of the human body, these networks can be adapted to work in different robots and different scenarios. Early signs of this work show promising results. 

Robots, meet AI 

For a long time, robotics research was an unforgiving field, plagued by slow progress. At the Robotics Institute at Carnegie Mellon, where Pathak works, he says, “there used to be a saying that if you touch a robot, you add one year to your PhD.” Now, he says, students get exposure to many robots and see results in a matter of weeks.

What separates this new crop of robots is their software. Instead of the traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly. At the same time, new, cheaper hardware, such as off-the-shelf components and robots like Stretch, is making this sort of experimentation more accessible. 

Broadly speaking, there are two popular ways researchers are using AI to train robots. Pathak has been using reinforcement learning, an AI technique that allows systems to improve through trial and error, to get robots to adapt their movements in new environments. This is a technique that Boston Dynamics has also started using  in its robot “dogs” called Spot.

Deepak Pathak’s team at Carnegie Mellon has used an AI technique called reinforcement learning to create a robotic dog that can do extreme parkour with minimal pre-programming.

In 2022, Pathak’s team used this method to create four-legged robot “dogs” capable of scrambling up steps and navigating tricky terrain. The robots were first trained to move around in a general way in a simulator. Then they were set loose in the real world, with a single built-in camera and computer vision software to guide them. Other similar robots rely on tightly prescribed internal maps of the world and cannot navigate beyond them.

Pathak says the team’s approach was inspired by human navigation. Humans receive information about the surrounding world from their eyes, and this helps them instinctively place one foot in front of the other to get around in an appropriate way. Humans don’t typically look down at the ground under their feet when they walk, but a few steps ahead, at a spot where they want to go. Pathak’s team trained its robots to take a similar approach to walking: each one used the camera to look ahead. The robot was then able to memorize what was in front of it for long enough to guide its leg placement. The robots learned about the world in real time, without internal maps, and adjusted their behavior accordingly. At the time, experts told MIT Technology Review the technique was a “breakthrough in robot learning and autonomy” and could allow researchers to build legged robots capable of being deployed in the wild.   

Pathak’s robot dogs have since leveled up. The team’s latest algorithm allows a quadruped robot to do extreme parkour. The robot was again trained to move around in a general way in a simulation. But using reinforcement learning, it was then able to teach itself new skills on the go, such as how to jump long distances, walk on its front legs, and clamber up tall boxes twice its height. These behaviors were not something the researchers programmed. Instead, the robot learned through trial and error and visual input from its front camera. “I didn’t believe it was possible three years ago,” Pathak says. 
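For a rough feel of what “trial and error” means here, the toy sketch below nudges a one-parameter policy toward actions that earn more reward, using a simple policy-gradient (REINFORCE) update. It illustrates only the principle: the jumping task, reward function, and numbers are invented, and real systems like Pathak’s train large neural-network policies across millions of trials in physics simulators.

```python
# A toy sketch of learning by trial and error with a policy-gradient
# (REINFORCE) update. The task and reward are invented for illustration.

import random

def reward(force: float) -> float:
    # Hypothetical task: a jump of about 1.1 meters clears the gap cleanly.
    distance = force * (1.0 + random.gauss(0, 0.05))  # noisy "physics"
    return -abs(distance - 1.1)                       # closer is better

mean_force, std, lr = 0.3, 0.1, 0.02    # a one-parameter Gaussian "policy"
baseline = reward(mean_force)           # running average of reward (variance reduction)

for step in range(3000):
    force = random.gauss(mean_force, std)     # try an action
    r = reward(force)                         # observe how well it went
    baseline = 0.99 * baseline + 0.01 * r
    # REINFORCE: shift the policy mean toward better-than-average actions.
    mean_force += lr * (r - baseline) * (force - mean_force) / std ** 2

print(f"learned mean jump force: {mean_force:.2f}  (best is about 1.1)")
```

The same learn-from-outcomes loop, scaled up to vastly richer observations and millions of simulated attempts, is what lets a quadruped discover parkour moves no one explicitly programmed.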

In the other popular technique, called imitation learning, models learn to perform tasks by, for example, imitating the actions of a human teleoperating a robot or using a VR headset to collect data on a robot. It’s a technique that has gone in and out of fashion over decades but has recently become more popular with robots that do manipulation tasks, says Russ Tedrake, vice president of robotics research at the Toyota Research Institute and an MIT professor.

By pairing this technique with generative AI, researchers at the Toyota Research Institute, Columbia University, and MIT have been able to quickly teach robots to do many new tasks. They believe they have found a way to extend the technology propelling generative AI from the realm of text, images, and videos into the domain of robot movements. 

The idea is to start with a human, who manually controls the robot to demonstrate behaviors such as whisking eggs or picking up plates. Using a technique called diffusion policy, the robot is then able to use the data fed into it to learn skills. The researchers have taught robots more than 200 skills, such as peeling vegetables and pouring liquids, and say they are working toward teaching 1,000 skills by the end of the year. 
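In its simplest form, imitation learning is just supervised learning on the demonstration data: fit a policy that maps what the robot observes to the action the human demonstrated. The sketch below does this with a plain least-squares model and synthetic data, purely for illustration; diffusion policies replace the linear map with a deep generative network, but the training signal is the same kind of (observation, action) pairs.

```python
# A minimal behavior-cloning sketch: fit a policy to demonstration data
# with ordinary least squares. The demonstrations here are synthetic.

import numpy as np

rng = np.random.default_rng(0)

# Pretend demonstrations: observations are 6-D robot states (e.g. joint
# angles), actions are 3-D end-effector velocity commands recorded while
# a human teleoperated the robot.
true_policy = rng.normal(size=(6, 3))
observations = rng.normal(size=(500, 6))
actions = observations @ true_policy + 0.05 * rng.normal(size=(500, 3))

# "Training": find the linear policy that best reproduces the demonstrated
# actions for each observation.
learned_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At run time, the robot feeds its current observation through the learned
# policy to decide what to do next.
new_obs = rng.normal(size=(1, 6))
print("commanded action:", new_obs @ learned_policy)
```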

Many others have taken advantage of generative AI as well. Covariant, a robotics startup that spun off from OpenAI’s now-shuttered robotics research unit, has built a multimodal model called RFM-1. It can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks. 

The Toyota Research Institute team hopes this will one day lead to “large behavior models,” which are analogous to large language models, says Tedrake. “A lot of people think behavior cloning is going to get us to a ChatGPT moment for robotics,” he says. 

In a similar demonstration, earlier this year a team at Stanford managed to use a relatively cheap off-the-shelf robot costing $32,000 to do complex manipulation tasks such as cooking shrimp and cleaning stains. It learned those new skills quickly with AI. 

Called Mobile ALOHA (a loose acronym for “a low-cost open-source hardware teleoperation system”), the robot learned to cook shrimp with the help of just 20 human demonstrations and data from other tasks, such as tearing off a paper towel or piece of tape. The Stanford researchers found that AI can help robots acquire transferable skills: training on one task can improve a robot’s performance on others.

While the current generation of generative AI works with images and language, researchers at the Toyota Research Institute, Columbia University, and MIT believe the approach can extend to the domain of robot motion.

This is all laying the groundwork for robots that can be useful in homes. Human needs change over time, and teaching robots to reliably do a wide range of tasks is important, as it will help them adapt to us. That is also crucial to commercialization—first-generation home robots will come with a hefty price tag, and the robots need to have enough useful skills for regular consumers to want to invest in them. 

For a long time, a lot of the robotics community was very skeptical of these kinds of approaches, says Chelsea Finn, an assistant professor of computer science and electrical engineering at Stanford University and an advisor for the Mobile ALOHA project. Finn says that nearly a decade ago, learning-based approaches were rare at robotics conferences and disparaged in the robotics community. “The [natural-language-processing] boom has been convincing more of the community that this approach is really, really powerful,” she says. 

There is one catch, however. In order to imitate new behaviors, the AI models need plenty of data. 

More is more

Unlike chatbots, which can be trained by using billions of data points hoovered from the internet, robots need data specifically created for robots. They need physical demonstrations of how washing machines and fridges are opened, dishes picked up, or laundry folded, says Lerrel Pinto, an assistant professor of computer science at New York University. Right now that data is very scarce, and it takes a long time for humans to collect.

A person records themselves opening a kitchen drawer with a reacher-grabber stick (top); a robot attempts the same action (bottom).
“ON BRINGING ROBOTS HOME,” NUR MUHAMMAD (MAHI) SHAFIULLAH, ET AL.

Some researchers are trying to use existing videos of humans doing things to train robots, hoping the machines will be able to copy the actions without the need for physical demonstrations. 

Pinto’s lab has also developed a neat, cheap data collection approach that connects robotic movements to desired actions. Researchers took a reacher-grabber stick, similar to ones used to pick up trash, and attached an iPhone to it. Human volunteers can use this system to film themselves doing household chores, mimicking the robot’s view of the end of its robotic arm. Using this stand-in for Stretch’s robotic arm and an open-source system called DOBB-E, Pinto’s team was able to get a Stretch robot to learn tasks such as pouring from a cup and opening shower curtains with just 20 minutes of iPhone data.  
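The key trick in this kind of setup is turning each recorded moment into a training pair: what the camera saw, and what the human did next. The sketch below shows one hypothetical way to do that, assuming each timestep comes with a camera frame and an estimated gripper position; the data format and field names are invented for illustration, not DOBB-E’s actual schema.

```python
# A sketch of how demonstrations recorded with a phone on a grabber stick
# can be turned into supervised training pairs. The data format here is
# hypothetical: assume each timestep has a camera frame plus an estimated
# 3-D position of the gripper tip (e.g. from the phone's pose tracking).

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DemoStep:
    frame: bytes                                # camera image at this timestep (stand-in)
    gripper_xyz: Tuple[float, float, float]     # estimated gripper-tip position, meters
    gripper_closed: bool

def to_training_pairs(demo: List[DemoStep]):
    """Pair each frame with the action the human took next: the change in
    gripper position plus the gripper open/closed state."""
    pairs = []
    for now, nxt in zip(demo, demo[1:]):
        delta = tuple(b - a for a, b in zip(now.gripper_xyz, nxt.gripper_xyz))
        action = (*delta, int(nxt.gripper_closed))
        pairs.append((now.frame, action))
    return pairs

# A policy trained on such (frame, action) pairs can then be run on a robot
# whose camera sees roughly the same view of its own gripper.
```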

But for more complex tasks, robots would need even more data and more demonstrations.  

The requisite scale would be hard to reach with DOBB-E, says Pinto, because you’d basically need to persuade every human on Earth to buy the reacher-grabber system, collect data, and upload it to the internet. 

A new initiative kick-started by Google DeepMind, called the Open X-Embodiment Collaboration, aims to change that. Last year, the company partnered with 34 research labs and about 150 researchers to collect data from 22 different robots, including Hello Robot’s Stretch. The resulting data set, which was published in October 2023, consists of robots demonstrating 527 skills, such as picking, pushing, and moving.  

Sergey Levine, a computer scientist at UC Berkeley who participated in the project, says the goal was to create a “robot internet” by collecting data from labs around the world. This would give researchers access to bigger, more scalable, and more diverse data sets. The deep-learning revolution that led to the generative AI of today started in 2012 with the rise of ImageNet, a vast online data set of images. The Open X-Embodiment Collaboration is an attempt by the robotics community to do something similar for robot data. 

Early signs show that more data is leading to smarter robots. The researchers built two versions of a model for robots, called RT-X, that could be either run locally on individual labs’ computers or accessed via the web. The larger, web-accessible model was pretrained with internet data to develop a “visual common sense,” or a baseline understanding of the world, from the large language and image models. 

When the researchers ran the RT-X model on many different robots, they discovered that the robots were able to learn skills 50% more successfully than in the systems each individual lab was developing.

“I don’t think anybody saw that coming,” says Vincent Vanhoucke, Google DeepMind’s head of robotics. “Suddenly there is a path to basically leveraging all these other sources of data to bring about very intelligent behaviors in robotics.”

Many roboticists think that large vision-language models, which are able to analyze image and language data, might offer robots important hints as to how the surrounding world works, Vanhoucke says. They offer semantic clues about the world and could help robots with reasoning, deducing things, and learning by interpreting images. To test this, researchers took a robot that had been trained on the larger model and asked it to point to a picture of Taylor Swift. The researchers had not shown the robot any pictures of Swift, but it was still able to identify the pop star because it had a web-scale understanding of who she was, says Vanhoucke.

RT-2, a recent model for robotic control, was trained on online text and images as well as interactions with the real world.
KELSEY MCCLELLAN

Vanhoucke says Google DeepMind is increasingly using techniques similar to those it would use for machine translation to translate from English to robotics. Last summer, Google introduced a vision-language-action model called RT-2. This model gets its general understanding of the world from online text and images it has been trained on, as well as its own interactions in the real world. It translates that data into robotic actions. Each robot has a slightly different way of translating English into action, he adds.  

“We increasingly feel like a robot is essentially a chatbot that speaks robotese,” Vanhoucke says. 
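One common way to let a language-model-style network “speak robotese” is to chop each continuous motor command into a small vocabulary of discrete tokens, so actions can be predicted the same way words are. The toy sketch below shows the idea; the binning scheme is invented for illustration and is not RT-2’s actual encoding.

```python
# A toy illustration of the "robot as chatbot" idea: continuous actions are
# discretized into a small vocabulary of tokens so a language-model-style
# network can emit them like words. The binning scheme is invented here.

N_BINS = 256  # each action dimension becomes one of 256 "words"

def action_to_tokens(action, low=-1.0, high=1.0):
    """Map each continuous action dimension in [low, high] to a token id."""
    tokens = []
    for value in action:
        clipped = min(max(value, low), high)
        tokens.append(int((clipped - low) / (high - low) * (N_BINS - 1)))
    return tokens

def tokens_to_action(tokens, low=-1.0, high=1.0):
    """Decode token ids back into approximate continuous actions."""
    return [low + t / (N_BINS - 1) * (high - low) for t in tokens]

# e.g. a 7-D arm command (6 joint deltas + gripper) round-trips through tokens:
command = [0.12, -0.40, 0.05, 0.0, 0.33, -0.07, 1.0]
tokens = action_to_tokens(command)
print(tokens)                    # something a transformer could "say"
print(tokens_to_action(tokens))  # close to the original command
```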

Baby steps

Despite the fast pace of development, robots still face many challenges before they can be released into the real world. They are still way too clumsy for regular consumers to justify spending tens of thousands of dollars on them. Robots also still lack the sort of common sense that would allow them to multitask. And they need to move from just picking things up and placing them somewhere to putting things together, says Goldberg—for example, putting a deck of cards or a board game back in its box and then into the games cupboard. 

But to judge from the early results of integrating AI into robots, roboticists are not wasting their time, says Pinto. 

“I feel fairly confident that we will see some semblance of a general-purpose home robot. Now, will it be accessible to the general public? I don’t think so,” he says. “But in terms of raw intelligence, we are already seeing signs right now.” 

Building the next generation of robots might not just assist humans in their everyday chores or help people like Henry Evans live a more independent life. For researchers like Pinto, there is an even bigger goal in sight.

Home robotics offers one of the best benchmarks for human-level machine intelligence, he says. The fact that a human can operate intelligently in the home environment, he adds, means we know this is a level of intelligence that can be reached. 

“It’s something which we can potentially solve. We just don’t know how to solve it,” he says. 

Thanks to Stretch, Henry Evans was able to hold his own playing cards for the first time in two decades.
VY NGUYEN

For Henry and Jane Evans, a big win would be to get a robot that simply works reliably. The Stretch robot that the Evanses experimented with is still too buggy to use without researchers present to troubleshoot, and their home doesn’t always have the dependable Wi-Fi connectivity Henry needs in order to communicate with Stretch using a laptop.

Even so, Henry says, one of the greatest benefits of his experiment with robots has been independence: “All I do is lay in bed, and now I can do things for myself that involve manipulating my physical environment.”

Thanks to Stretch, for the first time in two decades, Henry was able to hold his own playing cards during a match. 

“I kicked everyone’s butt several times,” he says. 

“Okay, let’s not talk too big here,” Jane says, and laughs.

How one mine could unlock billions in EV subsidies

A collection of brown pipes emerge at odd angles from the mud and overgrown grasses on a pine farm north of the tiny town of Tamarack, Minnesota.

Beneath these capped drill holes, Talon Metals has uncovered one of America’s densest nickel deposits—and now it wants to begin tunneling deep into the rock to extract hundreds of thousands of metric tons of mineral-rich ore a year.

If regulators approve the mine, it could mark the starting point in what this mining exploration company claims would become the country’s first complete domestic nickel supply chain, running from the bedrock beneath the Minnesota earth to the batteries in electric vehicles across the nation.


This is the second story in a two-part series exploring the hopes and fears surrounding a single mining proposal in a tiny Minnesota town. You can read the first part here.


The US government is poised to provide generous support at every step, distributing millions to billions of dollars in subsidies for those refining the metal, manufacturing the batteries, and buying the cars and trucks they power.

The products generated with the raw nickel that would flow from this one mining project could theoretically net more than $26 billion in subsidies, just through federal tax credits created by the Inflation Reduction Act (IRA). That’s according to an original analysis by Bentley Allan, an associate professor of political science at Johns Hopkins University and co-director of the Net Zero Industrial Policy Lab, produced in coordination with MIT Technology Review.

One of the largest beneficiaries would be battery manufacturers that use Talon’s nickel, which could secure more than $8 billion in tax credits. About half of that could go to the EV giant Tesla, which has already agreed to purchase tens of thousands of metric tons of the metal from this mine. 

But the biggest winner, at least collectively, would be American consumers who buy EVs powered by those batteries. All told, they could enjoy nearly $18 billion in savings. 

While it’s been widely reported that the IRA could unleash at least hundreds of billions of federal dollars, MIT Technology Review wanted to provide a clearer sense of the law’s on-the-ground impact by zeroing in on a single project and examining how these rich subsidies could be unlocked at each point along the supply chain. (Read my related story on Talon’s proposal and the community reaction to it here.) 

We consulted with Allan to figure out just how much money is potentially in play, where it’s likely to go, and what it may mean for emerging industries and the broader economy. 

These calculations are all high-end estimates meant to assess the full potential of the act, and they assume that every company and customer qualifies for every tax credit available at each point along the supply chain. In the end, the government almost certainly won’t hand out the full amounts that Allan calculated, given the varied and complex restrictions in the IRA and other factors.

In addition, Talon itself may not obtain any subsidies directly through the law, according to recent but not-yet-final IRS interpretations. But thanks to rich EV incentives that will stimulate demand for domestic critical minerals, the company still stands to benefit indirectly from the IRA.


How $26 billion in tax credits could break down across a new US nickel supply chain


The sheer scale of the numbers offers a glimpse into how and why the IRA, signed into law in August 2022, has already begun to drive projects, reconfigure sourcing arrangements, and accelerate the shift away from fossil fuels.

Indeed, the policies have dramatically altered the math for corporations considering whether, where, and when to build new facilities and factories, helping to spur at least tens of billions of dollars’ worth of private investments into the nation’s critical-mineral-to-EV supply chain, according to several analyses.

“If you try to work out the math on these for five minutes, you start to be really shocked by what you see on paper,” Allan says, noting that the IRA’s incentives ensure that many more projects could be profitably and competitively developed in the US. “It’s going to transform the country in a serious way.”

An urgent game of catch-up

For decades, the US steadily offshored the messy business of mining and processing metals, leaving other nations to deal with the environmental damage and community conflicts that these industries often cause. But the country is increasingly eager to revitalize these sectors as climate change and simmering trade tensions with China raise the economic, environmental, and geopolitical stakes. 

Critical minerals like lithium, cobalt, nickel, and copper are the engine of the emerging clean-energy economy, essential for producing solar panels, wind turbines, batteries, and EVs. Yet China dominates production of the source materials, components, and finished goods for most of these products, following decades of strategic government investments and targeted trade policies. It refines 71% of the type of nickel used for batteries and produces more than 85% of the world’s battery cells, according to Benchmark Mineral Intelligence. 

The US is now in a high-stakes scramble to catch up and ensure its unfettered access to these materials, either by boosting domestic production or by locking in supply chains through friendly trading partners. The IRA is the nation’s biggest bet, by far, on bolstering these industries and countering China’s dominance over global cleantech supply chains. By some estimates, it could unlock more than $1 trillion in federal incentives.

“It should be sufficient to drive transformational progress in clean-energy adoption in the United States,” says Kimberly Clausing, a professor at the UCLA School of Law who previously served as deputy assistant secretary for tax analysis at the Treasury Department. “The best modeling seems to show it will reduce emissions substantially, getting us halfway to our Paris Agreement goals.”

Among other subsidies, the IRA provides tax credits that companies can earn for producing critical minerals, electrode materials, and batteries, enabling them to substantially cut their federal tax obligations. 

But the provisions that are really driving the rethinking of sourcing and supply chains are the so-called domestic content requirements contained in the tax credits for purchasing EVs. For consumers to earn the full credits, and for EV makers to benefit from the boost in demand they’ll generate, a significant share of the critical minerals the batteries contain must be produced in the US, sourced from free-trade partners, or recycled in North America, among other requirements. 

This makes the critical minerals coming out of a mine like Talon’s especially valuable to US car companies, since they could help ensure that their EV models and customers qualify for these credits. 

Mining and refining

Nickel like that found in the Minnesota deposits is of particular importance for cleaning up the auto sector. The metal boosts the amount of energy that can be packed into battery cathodes, extending the range of cars and making possible heavier electric vehicles, like trucks and even semis.

Global nickel demand could rise 112% by 2040, according to the International Energy Agency, owing primarily to an expected ninefold increase in demand for EV batteries. But there’s only one dedicated nickel mine operating in the US today, and most processing of the metal happens overseas. 

A former Talon worker pulls tubes of bedrock from drill pipe and places them into a box for further inspection.
ACKERMAN + GRUBER

In a preliminary economic analysis of the proposed mine released in 2021, Talon said it hoped to dig up nearly 11 million metric tons of ore over a nine-year period, including more than 140,000 tons of nickel. That’s enough to produce lithium-ion batteries that could power almost 2.4 million electric vehicles, Allan finds. 

After Talon mines the ore, the company plans to ship the material more than 400 miles west by rail to a planned processing site in central North Dakota that would produce what’s known as “nickel in concentrate,” which is generally around 10% pure. 

But that’s not enough to earn any subsidies under the current interpretation of the IRA’s tax credit for critical-mineral production. The law specifies that a company must convert nickel into a highly refined form known as “nickel sulphate” or process the metal to at least 99% purity by mass to be eligible for tax credits that cover 10% of the operating cost. Allan estimates that whichever company or companies carry out that step could earn subsidies that exceed $55 million. 

From there, the nickel would still need to be processed and mixed with other metals to produce the “cathode active materials” that go into a battery. Whatever companies carry out that step could secure some share of another $126.5 million in tax savings, thanks to a separate credit covering 10% of the costs of generating these materials, Allan notes.

Some share of the subsidies from these two tax credits might go to Tesla, which has stressed that it’s bringing more aspects of battery manufacturing in-house. For instance, it’s in the process of constructing its own lithium refinery and cathode plant in Texas. 

But it’s not yet clear what other companies could be involved in processing the nickel mined by Talon and, thus, who would benefit from these particular provisions.

Talon and other mining companies have campaigned to have the costs for mining raw materials included in the critical-mineral production tax credit, but the IRS recently stated in a proposed rule that this step won’t qualify.

Todd Malan, Talon’s chief external affairs officer and head of climate strategy, argues that this and other recent determinations will limit the incentives for companies to develop new mines in the US, or to make sure that any mines that are developed meet the higher environmental and labor standards the Biden administration and others have been calling for.

(The determinations could change since the Treasury Department and IRS have said they are still considering including the costs of mining in the tax credits. They have requested additional comments on the matter.) 

Even if Talon doesn’t obtain any IRA subsidies, it still stands to earn federal funds in several other ways. The company is set to receive a nearly $115 million grant from the Department of Energy to build the North Dakota processing site, through funds freed up under the Bipartisan Infrastructure Law. In addition, in September Talon secured nearly $21 million in matching grants through the Defense Production Act, which will support further nickel exploration in Minnesota and at another site the company is evaluating in Michigan. (These numbers are not included in Allan’s overall $26 billion estimate.)


Talon Metals could receive $136 million in federal subsidies

  • $115 million to build a nickel processing site in North Dakota with funds from the Bipartisan Infrastructure Law
  • $21 million through the Defense Production Act to support additional nickel exploration in the Midwest

The math

Allan says that his findings are best thought of as ballpark figures. Some of Talon’s estimates have already changed, and the actual mineral quantities and operating costs will depend on a variety of factors, including how the company’s plans shift, what state and local regulators ultimately approve, what Talon actually pulls out of the ground, how much nickel the ore contains, and how much costs shift throughout the supply chain in the coming years.

His analysis assumes a preparation cost of $6.68 per kilowatt-hour for cathode active materials, based on an earlier analysis in the journal Energies. It did not evaluate any potential subsidies associated with other metals that Talon may extract from the mine, such as iron, copper, and cobalt. Please see his full research brief on the Net Zero Industrial Policy Lab site. 
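For readers who want to check the arithmetic, the back-of-the-envelope sketch below reproduces the processing-stage figures using the preparation-cost assumption above and the roughly 190 million kilowatt-hours of battery capacity that Allan’s analysis associates with Talon’s nickel (a figure that appears later in this story). These are rough approximations, not Allan’s full model.

```python
# A back-of-the-envelope check of the processing-stage figures cited above,
# using assumptions stated in this article. Rounding means the results only
# approximate the published estimates.

battery_kwh = 190e6       # ~190 million kWh of batteries from Talon's projected nickel
cam_cost_per_kwh = 6.68   # assumed preparation cost of cathode active materials, $/kWh
credit_rate = 0.10        # the IRA credit covers 10% of production costs

cathode_credit = battery_kwh * cam_cost_per_kwh * credit_rate
print(f"cathode active material credit ≈ ${cathode_credit / 1e6:.0f} million")
# ≈ $127 million, in line with the ~$126.5 million figure above.

# The >$55 million estimate for refining the nickel to battery grade implies
# refining operating costs on the order of $550 million, since the credit
# covers 10% of those costs.
refining_credit = 55e6
print(f"implied refining operating cost ≈ ${refining_credit / credit_rate / 1e6:.0f} million")
```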

Companies can use the IRA tax credits to reduce or even eliminate their federal tax obligations, both now and in tax years to come. In addition, businesses can transfer and sell the tax credits to other taxpayers.

Most of the tax credits in the IRA begin to phase out in 2030, so companies need to move fast to take advantage of them. The subsidies for critical-mineral production, however, don’t have any such cutoff.

Where will the money go and what will it do?

The $136 million in direct federal grants would double Talon’s funds for exploratory drilling efforts and cover about 27% of the development cost for its North Dakota processing plant.

The company says that these projects will help accelerate the country’s shift toward EVs and reduce the nation’s reliance on China for critical minerals. Further, Talon notes the mine will provide significant local economic benefits, including about 300 new jobs. That’s in addition to the nearly 100 employees already working in or near Tamarack. The company also expects the operation to generate nearly $110 million in mineral royalties and taxes paid to the state, local government, and the regional school district.

Plenty of citizens around Tamarack, however, argue that any economic benefits will come with steep trade-offs in terms of environmental and community impacts. A number of local tribal members fear the project could contaminate waterways and harm the region’s plants and animals. 

“The energy transition cannot be built by desecrating native lands,” said Leanna Goose, a member of the Leech Lake Band of Ojibwe, in an email. “If these ‘critical’ minerals leave the ground and are taken out from on or near our reservations, our people would be left with polluted water and land.”

Meanwhile, as it becomes clear just how much federal money is at stake, opposition to the IRA and other climate-related laws is hardening. Congressional Republicans, some of whom have portrayed the tax subsidies as corporate handouts to the “wealthy and well connected,” have repeatedly attempted to repeal key provisions of the laws. In addition, some environmentalists and left-wing critics have chided the government for offering generous subsidies to controversial companies and projects, including Talon’s. 

Talon stresses that it has made significant efforts to limit pollution and address Indigenous concerns. In addition, Malan pushed back on Allan’s findings. He says the overall estimate of $26 billion in subsidies across the supply chain significantly exaggerates the likely outcome, given numerous ways that companies and consumers might fail to qualify for the tax credits.

“I think it’s too much to tie it back to a little mining company in Minnesota,” he says. 

He emphasizes that Talon will earn money only for selling the metal it extracts, and that it will receive other federal grants only if it secures permits to proceed on its projects. (The company could also apply to receive separate IRA tax credits that cover a portion of the investments made into certain types of energy projects, but it has not done so at this time.)

Boosting the battery sector

The next stop in the supply chain is the battery makers. 

The amount of nickel that Talon expects to pull from the mine could be used to produce cathodes for nearly 190 million kilowatt-hours’ worth of lithium-ion batteries, according to Allan’s findings. 

Manufacturing that many batteries could generate some $8.5 billion from a pair of IRA tax credits worth $45 per kilowatt-hour, dwarfing the potential subsidies for processing the nickel.

Any number of companies might purchase metals from Talon to build batteries, but Tesla has already agreed to buy 75,000 tons of nickel in concentrate from the North Dakota facility. (The companies have not disclosed the financial terms of the deal.)

Given the batteries that could be produced with this amount of metal, Tesla’s share of these tax savings could exceed $4 billion, Allan found. 

The tax credits add up to “a third of the cost of the battery, full stop,” he says. “These are big numbers. The entire cost of building the plant, at least, is covered by the IRA.”


What Talon’s nickel may mean for Tesla


The math

The subsidies for battery makers would flow from two credits within the IRA. Those include a $35-per-kilowatt-hour tax credit for manufacturing battery cells and a $10-per-kilowatt-hour credit for producing battery modules, which are the bundles of interoperating cells that slot into vehicles. Allan’s calculations assume that all the metal will be used to produce nickel-rich NMC 811 batteries, and that every EV will include an 80-kilowatt-hour battery pack that costs $153 per kilowatt-hour to produce.
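Plugging the assumptions above into a quick calculation reproduces the battery-stage numbers cited in this article; it is rough arithmetic, not Allan’s full analysis.

```python
# Reproducing the battery-stage arithmetic from the assumptions listed above.

battery_kwh = 190e6                  # kWh of cells from Talon's projected nickel output
cell_credit, module_credit = 35, 10  # IRA credits, $ per kWh
pack_cost_per_kwh = 153              # assumed production cost, $ per kWh
pack_size_kwh = 80                   # assumed battery pack per EV
nickel_total_tons, tesla_tons = 140_000, 75_000

total_credit = battery_kwh * (cell_credit + module_credit)
print(f"battery manufacturing credits ≈ ${total_credit / 1e9:.2f} billion")   # ≈ $8.55 billion

# Tesla has agreed to buy 75,000 of the roughly 140,000 tons of nickel, so
# its share of those credits would be a bit over half: more than $4 billion.
print(f"Tesla's share ≈ ${total_credit * tesla_tons / nickel_total_tons / 1e9:.1f} billion")

# $45 per kWh in credits against a $153-per-kWh production cost is roughly
# a third of the cost of the battery, as noted above.
print(f"credits as a share of battery cost ≈ {(cell_credit + module_credit) / pack_cost_per_kwh:.0%}")

# And 190 million kWh at 80 kWh per vehicle works out to nearly 2.4 million EVs.
print(f"vehicles powered ≈ {battery_kwh / pack_size_kwh / 1e6:.2f} million")
```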

Where will the money go and what will it do?

Those billions are just what Tesla could secure in tax credits from the nickel it buys from Talon. It and other battery makers could qualify for still more government subsidies for batteries produced with critical minerals from other sources. 

Tesla didn’t respond to inquiries from MIT Technology Review. But its executives have said they believe Tesla’s batteries will qualify for the manufacturing tax credits, even before Talon’s mining and processing plants are up and running.

On an earnings call last January, Zachary Kirkhorn, who was then the company’s chief financial officer, said that Tesla expected the battery subsidies from its current production lines to total $150 million to $250 million per quarter in 2023. He said the company intends to use the tax credits to lower prices and promote greater adoption of electric vehicles: “We want to use this to accelerate sustainable energy, which is our mission and also the goal of [the IRA].” 

But these potential subsidies are clear evidence that the US government is dedicating funds to the wrong societal priorities, says Jenna Yeakle, an organizer for the Sierra Club North Star Chapter in Minnesota, which added its name to a letter to the White House criticizing federal support for Talon’s proposals. 

“People are struggling to pay rent and put food on the table and to navigate our monopolized corporate health-care system,” she says. “Do we need to be subsidizing Elon Musk’s bank account?”

Still, the IRA’s tax credits will go to numerous battery companies beyond Tesla. 

In fact, the incentives are already reshaping the marketplace, driving a sharp increase in the number of battery and electric-vehicle projects announced, according to the EV Supply Chain Dashboard, a database managed by Jay Turner, a professor of environmental studies at Wellesley College and author of Charged: A History of Batteries and Lessons for a Clean Energy Future. 

As of press time, 81 battery and EV-related projects representing $79 billion in investments and more than 50,000 jobs have been announced across the US since Biden signed the IRA. On an annual basis, that’s nearly three times the average dollar figures announced in recent years before the law was enacted. The projects include BMW, Hyundai, and Ford battery plants, Tesla’s Semi manufacturing pilot plant in Nevada, and Redwood Materials’ battery recycling facility in South Carolina. 

“It’s really exceptional,” Turner says. “I don’t think anybody expected to see so many battery projects, so many jobs, and so many investments over the past year.”

Driving EV sales

The biggest subsidy, though also the most diffuse one, would go to American consumers. 

The IRA offers two tax credits worth up to $7,500 combined for purchasing EVs and plug-in hybrids if the battery materials and components comply with the domestic content requirements.

Since the nickel that Talon expects to extract from the Minnesota mine could power nearly 2.4 million electric vehicles, consumers could collectively see $17.7 billion in potential savings if all those vehicles qualify for both credits, Allan finds. 

Talon’s Malan says this estimate significantly overstates the likely consumer savings, noting that many purchases won’t qualify. Indeed, an individual with a gross income that exceeds $150,000 won’t be eligible, nor will pickups, vans, and SUVs that cost more than $80,000. That would rule out, for instance, the high-end model of Tesla’s Cybertruck.

A number of Tesla models are currently excluded from one or both consumer credits, for varied and confusing reasons. But the Talon deal and other recent sourcing arrangements, as well as the company’s plans to manufacture more of its own batteries, could help more of Tesla’s vehicles to qualify in the coming months or years. 

The IRA’s consumer incentives are likely to do more to stimulate demand than previous federal EV policies, in large part because customers can take them in the form of a price cut at the point of sale, says Gil Tal, director of the Electric Vehicle Research Center at the University of California, Davis. Previously, such incentives would simply reduce the buyer’s federal obligations come tax season. 

RMI, a nonprofit research group focused on clean energy, projects that all the EV provisions within the IRA, which also include subsidies for new charging stations, will spur the sales of an additional 37 million electric cars and trucks by 2032. That would propel EV sales to around 80% of new passenger-automobile purchases. Those vehicles, in turn, could eliminate 2.4 billion tons of transportation emissions by 2040. 

In a preliminary economic analysis, Talon said it hoped to dig up more than 140,000 tons of nickel. That’s enough to produce lithium-ion batteries that could power almost 2.4 million electric vehicles.
TESLA

The math

The IRA offers two tax credits that could apply to EV buyers. The first is a $3,750 credit for those who purchase vehicles with batteries that contain a significant portion of critical minerals that were mined or processed in the US, or in a country with which the US has a free-trade agreement. The required share is 50% in 2024 but reaches 80% beginning in 2027. Cars and trucks may also qualify if the materials came from recycling in North America.

Buyers can also earn a separate $3,750 credit if a specified share of the battery components in the vehicle were manufactured or assembled in North America. The share is 60% this year and next but reaches 100% in 2029.
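The consumer-side estimate is the simplest of all: multiply the number of qualifying vehicles by the two credits. The quick check below uses the figures cited in this article and assumes every vehicle and every buyer qualifies for both credits.

```python
# The consumer-credit estimate follows directly from the two $3,750 credits.

evs = 2.36e6                  # just under 2.4 million EVs powered by Talon's nickel
credit_per_ev = 3750 + 3750   # critical-minerals credit + battery-components credit

print(f"maximum consumer savings ≈ ${evs * credit_per_ev / 1e9:.1f} billion")
# ≈ $17.7 billion, assuming every vehicle and buyer qualifies for both credits.
```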

The big bet

There are lingering questions about how many of the projects sparked by the country’s new green industrial policies will ultimately be built—and what the US will get for all the money it’s giving up. 

After all, the tens of billions of dollars’ worth of tax credits that could be granted throughout the Talon-to-Tesla-to-consumer nickel supply chain is money that isn’t going to the federal government, and isn’t funding services for American taxpayers.

The IRA’s impacts on tax coffers are certain to come under greater scrutiny as the programs ramp up, the dollar figures rise, projects run into trouble, and the companies or executives benefiting engage in questionable practices. After all, that’s exactly what happened in the aftermath of the country’s first major green industrial policy efforts a decade ago, when the high-profile failures of Solyndra, Fisker, and other government-backed clean-energy ventures fueled outrage among conservative critics. 

Nevertheless, Tom Moerenhout, a research scholar at Columbia University’s Center on Global Energy Policy, insists it’s wrong to think of these tax credits as forgone federal revenue. 

In many cases, the projects set to get subsidies for 10% of their operating costs would not otherwise have existed in the first place, since those processing plants and manufacturing facilities would have been built in other, cheaper countries. “They would simply go to China,” he says.

UCLA’s Clausing doesn’t entirely agree with that take, noting that some of this money will go to projects that would have happened anyway, and some of the resources will simply be pulled from other sectors of the economy or different project types. 

“It doesn’t behoove us as experts to argue this is free money,” she says. “Resources really do have costs. Money doesn’t grow on trees.”

But any federal expenses here are “still cheaper than the social cost of carbon,” she adds, referring to the estimated costs from the damage associated with ongoing greenhouse-gas pollution. “And we should keep our eyes on the prize and remember that there are some social priorities worth paying for—and this is one of those.”

In the end, few expect the US’s sweeping climate laws to completely achieve any of the hopes underlying them on their own. They won’t propel the US to net-zero emissions. They won’t enable the country to close China’s massive lead in key minerals and cleantech, or fully break free from its reliance on the rival nation. Meanwhile, the battle to lock down access to critical minerals will only become increasingly competitive as more nations accelerate efforts to move away from fossil fuels—and it will generate even more controversy as communities push back against proposals over concerns about environmental destruction.

But the evidence is building that the IRA in particular is spurring real change, delivering at least some progress on most of the goals that drove its passage: galvanizing green-tech projects, cutting emissions, creating jobs, and moving the nation closer to its clean-energy future. 

“It is catalyzing investment up and down the supply chain across North America,” Allan says. “It is a huge shot in the arm of American industry.”

The worst technology failures of 2023

Welcome to our annual list of the worst technologies. This year, one technology disaster in particular holds lessons for the rest of us: the Titan submersible that imploded while diving to see the Titanic.

Everyone had warned Stockton Rush, the sub’s creator, that it wasn’t safe. But he believed innovation meant tossing out the rule book and taking chances. He set aside good engineering in favor of wishful thinking. He and four others died. 

To us it shows how the spirit of innovation can pull ahead of reality, sometimes with unpleasant consequences. It was a phenomenon we saw time and again this year, like when GM’s Cruise division put robotaxis into circulation before they were ready. Was the company in such a hurry because it’s been losing $2 billion a year? Others find convoluted ways to keep hopes alive, like a company that is showing off its industrial equipment but is quietly still using bespoke methods to craft its lab-grown meat. The worst cringe, though, is when true believers can’t see the looming disaster, but we do. That’s the case for the new “Ai Pin,” developed at a cost of tens of millions, that’s meant to replace smartphones. It looks like a titanic failure to us. 

Titan submersible

This summer we were glued to our news feeds as drama unfolded 3,500 meters below the ocean’s surface. An experimental submarine with five people aboard was lost after descending to see the wreck of the Titanic.  

The OceanGate submersible underwater.
GDA VIA AP IMAGES

The Titan was a radical design for a deep-sea submersible: a minivan-size carbon fiber tube, operated with a joystick, that aerospace engineer Stockton Rush believed would open the depths to a new kind of tourism. His company, OceanGate, had been warned the vessel hadn’t been proved to withstand 400 atmospheres of pressure. His answer? “I think it was General MacArthur who said, ‘You’re remembered for the rules you break,’” Rush told a YouTuber.

But breaking the rules of physics doesn’t work. On June 22, four days after contact was lost with the Titan, a deep-sea robot spotted the sub’s remains. It was most likely destroyed in a catastrophic implosion.

In addition to Rush, the following passengers perished:

  • Hamish Harding, 58, tourist
  • Shahzada Dawood, 48, tourist
  • Suleman Dawood, 19, tourist
  • Paul-Henri Nargeolet, 77, Titanic expert

More: The Titan Submersible Was “an Accident Waiting to Happen” (The New Yorker), OceanGate Was Warned of Potential for “Catastrophic” Problems With Titanic Mission (New York Times), OceanGate CEO Stockton Rush said in 2021 he knew he’d “broken some rules” (Business Insider)


Lab-grown meat

Instead of killing animals for food, why not manufacture beef or chicken in a laboratory vat? That’s the humane idea behind “lab-grown meat.”

The problem, though, is making the stuff at a large scale. Take Upside Foods. The startup, based in Berkeley, California, had raised more than half a billion dollars and was showing off rows of big, gleaming steel bioreactors.

But journalists soon learned that Upside was a bird in borrowed feathers. Its big tanks weren’t working; it was growing chicken skin cells in much smaller plastic laboratory flasks. Thin layers of cells were then being manually scooped up and pressed into chicken pieces. In other words, Upside was using lots of labor, plastic, and energy to make hardly any meat.

Samir Qurashi, a former employee, told the Wall Street Journal he knows why Upside puffed up the potential of lab-grown food. “It’s the ‘fake it till you make it’ principle,” he said.

And even though lab-grown chicken has FDA approval, there’s doubt whether lab meat will ever compete with the real thing. Chicken goes for $4.99 a pound at the supermarket. Upside still isn’t saying how much the lab version costs to make, but a few bites of it sell for $45 at a Michelin-starred restaurant in San Francisco.

Upside has admitted the challenges. “We signed up for this work not because it’s easy, but because the world urgently needs it,” the company says.

More: I tried lab-grown chicken at a Michelin-starred restaurant (MIT Technology Review), The Biggest Problem With Lab-Grown Chicken Is Growing the Chicken (Bloomberg), Insiders Reveal Major Problems at Lab-Grown-Meat Startup Upside Foods (Wired)


Cruise robotaxi

Sorry, autopilot fans, but we can’t ignore the setbacks this year. Tesla just did a massive software recall after cars in self-driving mode slammed into emergency vehicles. But the biggest reversal was at Cruise, the division of GM that became the first company to offer driverless taxi rides in San Francisco, day or night, with a fleet exceeding 400 cars.

Cruise argues that robotaxis don’t get tired, don’t get drunk, and don’t get distracted. It even ran a full-page newspaper ad declaring that “humans are terrible drivers.”

A Cruise vehicle parked on the street in front of a residential home as a person descends a front staircase in the background.
CRUISE

But Cruise forgot that to err is human—not what we want from robots. Soon, it was Cruise’s sensor-laden Chevy Bolts that started racking up noticeable mishaps, including dragging a pedestrian for 20 feet. This October, the California Department of Motor Vehicles suspended GM’s robotaxis, citing an “unreasonable risk to public safety.”

It’s a blow for Cruise, which has since laid off 25% of its staff and fired its CEO and cofounder, Kyle Vogt, a onetime MIT student. “We have temporarily paused driverless service,” Cruise’s website now reads. It says it’s reviewing safety and taking steps to “regain public trust.”

More: GM’s Self-Driving Car Unit Skids Off Course (Wall Street Journal), Important Updates from Cruise (Getcruise.com)


Plastic proliferation

Plastic is great. It’s strong, it’s light, and it can be pressed into just about any shape: lawn chairs, bobbleheads, bags, tires, or thread.

The problem is there’s too much of it, as Doug Main reported in MIT Technology Review this year. Humans make 430 million tons of plastic a year (significantly more than the weight of all people combined), but only 9% gets recycled. The rest ends up in landfills and, increasingly, in the environment. Not only does the average whale have kilograms of the stuff in its belly, but tiny bits of “microplastic” have been found in soft drinks, plankton, and human bloodstreams, and even floating in the air. The health effects of spreading microplastic pollution have barely been studied.

Awareness of the planetary scourge is growing, and some are calling for a “plastics treaty” to help stop the pollution. It’s going to be a hard sell. That’s because plastic is so cheap and useful. Yet researchers say the best way to cut plastic waste is not to make it in the first place.

More: Think your plastic is being recycled? Think again (MIT Technology Review),  Oh Good, Hurricanes Are Now Made of Microplastics (Wired)


Humane Ai Pin

The New York Times declared it Silicon Valley’s “Big, Bold Sci-Fi Bet” for what comes after the smartphone. The product? A plastic badge called the Ai Pin, with a camera, chips, and sensors.

Humane’s Ai Pin worn on a sweatshirt.
HUMANE

A device to wean us off our phone addiction is a worthy goal, but this blocky $699 pin (which also requires a $24-a-month subscription) isn’t it. An early review called the device, developed by startup Humane Ai, “equal parts magic and awkward.” Emphasis on the awkward. Users must speak voice commands to send messages or chat with an AI (a laser projector in the pin will also display information on your hand). It weighs as much as a golf ball, so you probably won’t be attaching it to a T-shirt. 

It is the creation of a husband-and-wife team of former Apple executives, Bethany Bongiorno and Imran Chaudhri, who were guided to their product idea by a Buddhist monk named Brother Spirit, raising $240 million and filing 25 patents along the way, according to the Times.

Clearly, there’s a lot of thought, money, and engineering involved in its creation. But as The Verge’s wearables reviewer Victoria Song points out, “it flouts the chief rule of good wearable design: you have to want to wear the damn thing.” As it is, the Ai Pin is neat, but it’s still no competition for the lure of a screen.

More: Can A.I. and Lasers Cure Our Smartphone Addiction? (New York Times), Screens are good, actually (The Verge)


Social media superconductor

A room-temperature superconductor is a material that conducts electricity with zero resistance without needing to be cooled to extreme temperatures. If it existed, it would make possible new types of batteries and powerful quantum computers, and bring nuclear fusion closer to reality. It’s a true Holy Grail.

So when a report emerged this July from Korea that a substance called LK-99 was the real thing, attention seekers on the internet were ready to share. The news popped up first in Asia, along with an online video of a bit of material floating above a magnet. Then came the booster fuel of social media hot takes.

[Photo: A pellet of LK-99 being repelled by a magnet. Credit: Hyun-Tak Kim/Wikimedia]

“Today might have seen the biggest physics discovery of my lifetime,” said a post to X that has been viewed 30 million times. “I don’t think people fully grasp the implications … Here’s how it could totally change our lives.”

No matter that the post had been written by a marketer at a coffee company. It was exciting—and hilarious—to see well-funded startups drop their work on rockets and biotech to try to make the magic substance. Kenneth Chang, a reporter at the New York Times, dubbed LK-99 “the Superconductor of the Summer.”

But summer’s dreams soon ripped at the seams after real physicists couldn’t replicate the work. No, LK-99 is not a superconductor. Instead, impurities in the recipe could have misled the Korean researchers—and, thanks to social media, the rest of us too.

More: LK-99 Is the Superconductor of the Summer (New York Times), LK-99 isn’t a superconductor—how science sleuths solved the mystery (Nature)


Rogue geoengineering

Solar geoengineering is the idea of cooling the planet by releasing reflective materials into the atmosphere. It’s a fraught concept, because it won’t stop the greenhouse effect—only mask it. And who gets to decide to block the sun?

Mexico banned geoengineering trials early this year after a startup called Make Sunsets tried to commercialize the effort. Cofounder Luke Iseman had launched balloons in Mexico designed to disperse sulfur dioxide, which forms reflective particles, into the sky. The startup is still selling “cooling credits” for $10 each on its website.

Injecting particles into the sky is theoretically cheap and easy, and climate warming is a huge threat. But moving too fast can create a backlash that stalls progress, according to my colleague James Temple. “They’re violating the rights of communities to dictate their own future,” one critic said.

Iseman remains unrepentant. “I don’t poll billions before taking a flight,” he has said. “I’m not going to ask for permission from every person in the world before I try to do a bit to cool Earth.” 

More: The flawed logic of rushing out extreme climate solutions (MIT Technology Review), Mexico bans solar geoengineering experiments after startup’s field tests (The Verge), Researchers launched a solar geoengineering test flight in the UK last fall (MIT Technology Review)

How Meta and AI companies recruited striking actors to train AI

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 

“This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

“This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

“Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 

All this leaves actors, struggling after three months of little to no work, primed to accept the terms from Realeyes and Meta. And those individual decisions, intentionally or not, affect all actors, whether or not they personally choose to engage with AI.

“It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

Humans as data 

Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 

Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

Teaching AI to appear more human

At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

This made the gig an attractive proposition for young actors like T, just starting out in Hollywood—a notoriously challenging environment even in the best of times, and harder still given that he arrived just before the SAG-AFTRA strike started. (T has not worked enough union jobs to officially join the union, though he hopes to one day.)

In fact, T described performing for Realeyes as less like a standard acting job and more “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company that he only skimmed through. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

“If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”

When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word for which there are assumed but no universally agreed-upon definitions—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”  

But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. CEO Mark Zuckerberg recently described these avatars on the popular podcast from AI researcher Lex Fridman as core to his vision of the future—where physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

Individual choice, collective impact 

Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

But a future agreement, which is specifically between the studios and SAG, will not be applicable to tech companies conducting “research” projects, like Meta and Realeyes. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

“The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

(Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

“Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

But at least for now, it’s a hypothetical. 

“I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 
