Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

After weeks of drawn-out congressional debate over how much the United States should spend on conflicts abroad, President Joe Biden signed a $95.3 billion aid package into law on Wednesday.

The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. It’s also sparked renewed calls for stronger crackdowns on Iranian-produced drones. 

Though much of the money will go toward replenishing fairly standard munitions and supplies, the spending bill provides a window into US strategies around four key defense technologies that continue to reshape how today’s major conflicts are being fought.

For a closer look at the military technology at the center of the aid package, I spoke with Andrew Metrick, a fellow with the defense program at the Center for a New American Security, a think tank.

Ukraine and the role of long-range missiles

Ukraine has long sought the Army Tactical Missile System (ATACMS), a long-range ballistic missile made by Lockheed Martin. It made its combat debut in Operation Desert Storm in Iraq in 1991; it’s 13 feet long, two feet wide, and weighs over 3,600 pounds. It can use GPS to accurately hit targets up to 190 miles away. 

Last year, President Biden was apprehensive about sending such missiles to Ukraine, as US stockpiles of the weapons were relatively low. In October, the administration changed tack. The US sent shipments of ATACMS, a move celebrated by President Volodymyr Zelensky of Ukraine, but they came with restrictions: the missiles were older models with a shorter range, and Ukraine was instructed to fire them only within Ukrainian territory, not into Russia. 

This week, just hours before the new aid package was signed, multiple news outlets reported that the US had secretly sent more powerful long-range ATACMS to Ukraine several weeks before. They were used on Tuesday, April 23, to target a Russian airfield in Crimea and Russian troops in Berdiansk, 50 miles southwest of Mariupol.

The long range of the weapons has proved essential for Ukraine, says Metrick. “It allows the Ukrainians to strike Russian targets at ranges for which they have very few other options,” he says. That means being able to hit locations like supply depots, command centers, and airfields behind Russia’s front lines in Ukraine. This capacity has grown more important as Ukraine’s troop numbers have waned, Metrick says.

Replenishing Israel’s Iron Dome

On April 13, Iran launched its first-ever direct attack on Israeli soil. In the attack, which Iran says was retaliation for Israel’s airstrike on its embassy in Syria, hundreds of drones and missiles were launched into Israeli airspace. Many of them were neutralized by the web of cutting-edge missile defense systems dispersed throughout Israel, which can automatically destroy incoming projectiles before they hit the ground. 

One of those systems is Israel’s Iron Dome, in which radar systems detect incoming projectiles and then signal units to launch interceptor missiles that destroy the target high in the sky, before it strikes populated areas. Israel’s other system, called David’s Sling, works in a similar way but can identify rockets coming from a greater distance, upwards of 180 miles. 

Both systems are hugely costly to research and build, and the new US aid package allocates $15 billion to replenish their missile stockpile. The missiles can cost anywhere from $100,000 to $10 million each, and a system like Iron Dome might fire them daily during intense periods of conflict. 

The aid comes as funding for Israel has grown more contentious amid the dire conditions faced by displaced Palestinians in Gaza. While the spending bill worked its way through Congress, increasing numbers of Democrats sought to put conditions on the military aid to Israel, particularly after an Israeli air strike on April 1 killed seven aid workers from World Central Kitchen, an international food charity. The funding package does provide $9 billion in humanitarian assistance for the conflict, but the efforts to impose conditions for Israeli military aid failed. 

Taiwan and underwater defenses against China

A rising concern for the US defense community—and a subject of “wargaming” simulations that Metrick has carried out—is an amphibious invasion of Taiwan by China. The growing risk of that scenario has driven the US to build and deploy larger numbers of advanced submarines, Metrick says. A bigger fleet of these submarines would be more likely to keep attacks from China at bay, thereby protecting Taiwan.

The trouble is that the US shipbuilding effort, experts say, is too slow. It’s been hampered by budget cuts and labor shortages, but the new aid bill aims to jump-start it. It will provide $3.3 billion to do so, specifically for the production of Columbia-class submarines, which carry nuclear weapons, and Virginia-class submarines, which carry conventional weapons. 

Though these funds aim to support Taiwan by building up the US supply of submarines, the package also includes more direct support, like $2 billion to help it purchase weapons and defense equipment from the US. 

The US’s Iranian drone problem 

Shahed drones are used almost daily on the Russia-Ukraine battlefield, and Iran launched more than 100 against Israel earlier this month. Produced by Iran and resembling model planes, the drones are fast, cheap, and lightweight, capable of being launched from the back of a pickup truck. They’re used frequently for potent one-way attacks, where they detonate upon reaching their target. US experts say the technology is tipping the scales toward Russian and Iranian military groups and their allies. 

The trouble with combating them is partly one of cost. Shooting down the drones, which can be bought for as little as $40,000, can cost millions of dollars in ammunition.

“Shooting down Shaheds with an expensive missile is not, in the long term, a winning proposition,” Metrick says. “That’s what the Iranians, I think, are banking on. They can wear people down.”

This week’s aid package renewed White House calls for stronger sanctions aimed at curbing production of the drones. The United Nations previously passed rules restricting any drone-related material from entering or leaving Iran, but those expired in October. The US now wants them reinstated. 

Even if that happens, it’s unlikely the rules would do much to contain the Shahed’s dominance. The components of the drones are not all that complex or hard to obtain to begin with, and experts say that Iran has built a sprawling global supply chain to acquire the materials needed to manufacture them and has worked with Russia to build factories. 

“Sanctions regimes are pretty dang leaky,” Metrick says. “They [Iran] have friends all around the world.”

How virtual power plants are shaping tomorrow’s energy system

For more than a century, the prevalent image of a power plant has been one of towering smokestacks, endless coal trains, and loud spinning turbines. But the plants powering our future will look radically different—in fact, many may not have a physical form at all. Welcome to the era of virtual power plants (VPPs).

The shift from conventional energy sources like coal and gas to variable renewable alternatives such as solar and wind means the decades-old way we operate the energy system is changing. 

Governments and private companies alike are now counting on VPPs’ potential to help keep costs down and stop the grid from becoming overburdened. 

Here’s what you need to know about VPPs—and why they could be the key to helping us bring more clean power and energy storage online.

What are virtual power plants and how do they work?

A virtual power plant is a system of distributed energy resources—like rooftop solar panels, electric vehicle chargers, and smart water heaters—that work together to balance energy supply and demand on a large scale. They are usually run by local utility companies, which oversee this balancing act.

A VPP is a way of “stitching together” a portfolio of resources, says Rudy Shankar, director of Lehigh University’s Energy Systems Engineering, that can help the grid respond to high energy demand while reducing the energy system’s carbon footprint.

The “virtual” nature of VPPs comes from their lack of a central physical facility, like a traditional coal or gas plant. By generating electricity and balancing the energy load, the aggregated batteries and solar panels provide many of the functions of conventional power plants.

They also have unique advantages.

Kevin Brehm, a manager at Rocky Mountain Institute who focuses on carbon-free electricity, says comparing VPPs to traditional plants is a “helpful analogy,” but VPPs “do certain things differently and therefore can provide services that traditional power plants can’t.”

One significant difference is VPPs’ ability to shape consumers’ energy use in real time. Unlike conventional power plants, VPPs can communicate with distributed energy resources and allow grid operators to control the demand from end users.

For example, smart thermostats linked to air conditioning units can adjust home temperatures and manage how much electricity the units consume. On hot summer days these thermostats can pre-cool homes before peak hours, when air conditioning usage surges. Staggering cooling times can help prevent abrupt demand spikes that might overwhelm the grid and cause outages. Similarly, electric vehicle chargers can adapt to the grid’s requirements by either supplying electricity to the grid or drawing power from it. 

These distributed energy sources connect to the grid through communication technologies like Wi-Fi, Bluetooth, and cellular services. In aggregate, adding VPPs can increase overall system resilience. By coordinating hundreds of thousands of devices, VPPs have a meaningful impact on the grid—they shape demand, supply power, and keep the electricity flowing reliably.
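
To make that coordination concrete, here is a minimal sketch, written in Python, of how an aggregator might stagger pre-cooling start times across a fleet of smart thermostats so their combined draw stays under a cap. The device names, power figures, and scheduling rule are invented for illustration; real VPP platforms rely on much more sophisticated forecasting and control.

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    home_id: str
    ac_kw: float  # assumed power draw of the home's AC while pre-cooling

def schedule_precooling(devices, peak_start_hour, window_hours, cap_kw):
    """Assign each home a pre-cooling hour before the peak, keeping the
    fleet's combined draw under cap_kw in any single hour."""
    load_by_hour = {peak_start_hour - h: 0.0 for h in range(1, window_hours + 1)}
    schedule = {}
    for device in devices:
        # Pick the earliest pre-peak hour that still has headroom under the cap.
        for hour in sorted(load_by_hour):
            if load_by_hour[hour] + device.ac_kw <= cap_kw:
                schedule[device.home_id] = hour
                load_by_hour[hour] += device.ac_kw
                break
        else:
            schedule[device.home_id] = None  # no slot left: run normally during the peak
    return schedule

# Hypothetical fleet pre-cooling in the two hours before a 5 p.m. peak,
# with the aggregate draw capped at 6 kW in any one hour.
fleet = [Thermostat("home-a", 3.5), Thermostat("home-b", 3.0), Thermostat("home-c", 2.5)]
print(schedule_precooling(fleet, peak_start_hour=17, window_hours=2, cap_kw=6.0))
```

Even this toy version shows the basic trade the article describes: every home still gets cooled, but the start times are spread out so no single hour sees the whole fleet switch on at once.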

How popular are VPPs now?

Until recently, VPPs were mostly used to control consumer energy use. But as solar and battery technology has evolved, utilities can now also use these distributed resources to supply electricity back to the grid when needed.

In the United States, the Department of Energy estimates VPP capacity at around 30 to 60 gigawatts. That represents about 4% to 8% of peak electricity demand nationwide, a small fraction of the overall system. However, some states and utility companies are moving quickly to add more VPPs to their grids.

Green Mountain Power, Vermont’s largest utility company, made headlines last year when it expanded its subsidized home battery program. Customers can lease a Tesla home battery at a discounted rate or purchase their own with assistance of up to $10,500 if they agree to share stored energy with the utility as required. The Vermont Public Utility Commission, which approved the program, said the batteries can also provide emergency power during outages.

In Massachusetts, three utility companies (National Grid, Eversource, and Cape Light Compact) have implemented a VPP program that pays customers in exchange for utility control of their home batteries.

Meanwhile, in Colorado, efforts are underway to launch the state’s first VPP system. The Colorado Public Utilities Commission is urging Xcel Energy, the state’s largest utility company, to develop a fully operational VPP pilot by this summer.

Why are VPPs important for the clean energy transition?

Grid operators must meet the annual or daily “peak load,” the moment of highest electricity demand. To do that, they often resort to gas “peaker” plants, which sit dormant most of the year and are switched on only in times of high demand. VPPs can reduce the grid’s reliance on these plants.

The Department of Energy currently aims to expand national VPP capacity to 80 to 160 GW by 2030. That’s roughly equivalent to 80 to 160 fossil fuel plants that need not be built, says Brehm.

Many utilities say VPPs can lower energy bills for consumers in addition to reducing emissions. Research suggests that leveraging distributed sources during peak demand is up to 60% more cost effective than relying on gas plants.

Another significant, if less tangible, advantage of VPPs is that they encourage people to be more involved in the energy system. Usually, customers merely receive electricity. Within a VPP system, they both consume power and contribute it back to the grid. This dual role can improve their understanding of the grid and get them more invested in the transition to clean energy.

What’s next for VPPs?

The capacity of distributed energy sources is expanding rapidly, according to the Department of Energy, owing to the widespread adoption of electric vehicles, charging stations, and smart home devices. Connecting these to VPP systems enhances the grid’s ability to balance electricity demand and supply in real time. Better AI can also help VPPs become more adept at coordinating diverse assets, says Shankar.

Regulators are also coming on board. The National Association of Regulatory Utility Commissioners has started holding panels and workshops to educate its members about VPPs and how to implement them in their states. The California Energy Commission is set to fund research exploring the benefits of integrating VPPs into its grid system. This kind of interest from regulators is new but promising, says Brehm.

Still, hurdles remain. Enrolling in a VPP can be confusing for consumers because the process varies among states and companies. Simplifying it for people will help utility companies make the most of distributed energy resources such as EVs and heat pumps. Standardizing the deployment of VPPs can also speed up their growth nationally by making it easier to replicate successful projects across regions.

“It really comes down to policy,” says Brehm. “The technology is in place. We are continuing to learn about how to best implement these solutions and how to interface with consumers.”

A controversial US surveillance program is up for renewal. Critics are speaking out.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to key you in on: the renewal of a controversial program of American surveillance.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant. 

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad coalition that transcends party lines is calling for the program to be reformed, out of concern about the vast surveillance it enables. Here is what you need to know.

What do the critics of Section 702 say?

Of particular concern is that while the program intends to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans, including Black Lives Matter activists and a member of Congress.  

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Wikimedia, the foundation that runs Wikipedia, also opposes the program in its current form, saying it leaves international open-source projects vulnerable to surveillance. “Wikimedia projects are edited and governed by nearly 300,000 volunteers around the world who share free knowledge and serve billions of readers globally. Under Section 702, every interaction on these projects is currently subject to surveillance by the NSA,” says a spokesperson for the Wikimedia Foundation. “Research shows that online surveillance has a ‘chilling effect’ on Wikipedia users, who will engage in self-censorship to avoid the threat of governmental reprisals for accurately documenting or accessing certain kinds of information.”

And what about the proponents?

The main supporters of the program’s reauthorization are the intelligence agencies themselves, which say it enables them to gather critical information about foreign adversaries and online criminal activities like ransomware and cyberattacks. 

In defense of the provision, FBI director Christopher Wray has also pointed to procedural changes at the bureau in recent years that have reduced the number of Americans being surveilled from 3.4 million in 2021 to 200,000 in 2022. 

The Biden administration has also broadly pushed for the reauthorization of Section 702 without reform.  

“Section 702 is a necessary instrument within the intelligence community, leveraging the United States’ global telecommunication footprint through legal and court-approved means,” says Sabine Neschke, a senior policy analyst at the Bipartisan Policy Center. “Ultimately, Congress must strike a balance between ensuring national security and safeguarding individual rights.”

What would reform look like?

The proposal to reform the program, called the Government Surveillance Reform Act, was announced last week and focuses on narrowing the government’s authority to collect information on US citizens.

It would require warrants to collect Americans’ location data and web browsing or search records under the program, as well as documentation that queries were “reasonably likely to retrieve foreign intelligence information.” In a hearing before the House Committee on Homeland Security on Wednesday, Wray said that a warrant requirement would be a “significant blow” to the program, calling it a “de facto ban.”

Senator Ron Wyden, who cosponsored the reform bill and sits on the Senate Select Committee on Intelligence, has said he won’t vote to renew the program unless some of its powers are curbed. “Congress must have a real debate about reforming warrantless government surveillance of Americans,” Wyden said in a statement to MIT Technology Review. “Therefore, the administration and congressional leaders should listen to the overwhelming bipartisan coalition that supports adopting common-sense protections for Americans’ privacy and extending key national security authorities at the same time.”

The reform bill does not, as some civil society groups had hoped, limit the government’s powers for surveillance of people outside of the US. 

While it’s not yet clear whether these reforms will pass, intelligence agencies have never faced such a broad, bipartisan coalition of opponents. As for what happens next, we’ll have to wait and see. 

What else I’m reading

  • Here’s a great story from the New Yorker about how facial recognition searches can lead police to ignore other pieces of an investigation. 
  • I loved this excerpt of Broken Code, a new book from reporter Jeff Horwitz, who broke the Facebook Files revealed by whistleblower Frances Haugen. It’s a nice insidery look at the company’s AI strategy. 
  • Meta says that age verification requirements, such as those being proposed by child online safety bills, should be up to app stores like Apple’s and Google’s. It’s an interesting stance that the company says would help take the burden off individual websites to comply with the new regulations. 

What I learned this week

Some researchers and technologists have been calling for new and more precise language around artificial intelligence. This week, Google DeepMind released a paper outlining different levels of artificial general intelligence, often referred to as AGI, as my colleague Will Douglas Heaven reports.

“The team outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals),” Will writes. “They note that no level beyond emerging AGI has been achieved.” We’ll certainly be hearing more about what words we should use when referring to AI in the future.

Three things to know about the White House’s executive order on AI

The US has set out its most sweeping set of AI rules and guidelines yet in an executive order issued by President Joe Biden today. The order will require more transparency from AI companies about how their models work and will establish a raft of new standards, most notably for labeling AI-generated content. 

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.  

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

Nevertheless, AI experts have hailed the order as an important step forward, especially thanks to its focus on watermarking and standards set by the National Institute of Standards and Technology (NIST). However, others argue that it does not go far enough to protect people against immediate harms inflicted by AI.

Here are the three most important things you need to know about the executive order and the impact it could have. 

What are the new rules around labeling AI-generated content? 

The White House’s executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” according to a fact sheet that the White House shared over the weekend. 

The hope is that labeling the origins of text, audio, and visual content will make it easier for us to know what’s been created using AI online. These sorts of tools are widely proposed as a solution to AI-enabled problems such as deepfakes and disinformation, and in a voluntary pledge with the White House announced in August, leading AI companies such as Google and OpenAI pledged to develop such technologies.

The trouble is that technologies such as watermarks are still very much works in progress. There are currently no fully reliable ways to label text or to determine whether a piece of content was machine generated, and AI detection tools remain easy to fool.

The executive order also falls short of requiring industry players or government agencies to use these technologies.

On a call with reporters on Sunday, a White House spokesperson responded to a question from MIT Technology Review about whether any requirements are anticipated for the future, saying, “I can imagine, honestly, a version of a call like this in some number of years from now and there’ll be a cryptographic signature attached to it that you know you’re actually speaking to [the White House press team] and not an AI version.” This executive order intends to “facilitate technological development that needs to take place before we can get to that point.”

The White House says it plans to push forward the development and use of these technologies with the Coalition for Content Provenance and Authenticity, called the C2PA initiative. As we’ve previously reported, the initiative and its affiliated open-source community have been growing rapidly in recent months as companies rush to label AI-generated content. The collective includes some major companies like Adobe, Intel, and Microsoft and has devised a new internet protocol that uses cryptographic techniques to encode information about the origins of a piece of content.
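
The gist of that approach, sketched very loosely in Python below, is to bind a manifest describing a file’s origin to a hash of the file with a cryptographic signature, so that any later edit breaks the claim. This is only an illustration of the general idea using a shared-key signature; the actual C2PA specification defines its own manifest format, embedding rules, and certificate-based signing, none of which are reproduced here.

```python
import hashlib
import hmac
import json

# Illustration only: real provenance systems such as C2PA use public-key
# certificates and a standardized manifest format, not this toy scheme.
SIGNING_KEY = b"hypothetical-publisher-key"

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Tie origin metadata to a hash of the content, then sign the whole claim."""
    manifest = {
        "creator": creator,
        "tool": tool,  # e.g. a camera app or an image generator
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content hash still matches the file."""
    claim = dict(manifest)
    signature = claim.pop("signature")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...pixel data..."
manifest = make_manifest(image, creator="Example Newsroom", tool="AI image generator")
print(verify_manifest(image, manifest))             # True: claim holds
print(verify_manifest(image + b"edit", manifest))   # False: content no longer matches
```

In the real standard, public-key certificates let anyone verify a claim without sharing a secret; the shared key here just keeps the example short.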

The coalition does not have a formal relationship with the White House, and it’s unclear what that collaboration would look like. In response to questions, Mounir Ibrahim, the cochair of the governmental affairs team, said, “C2PA has been in regular contact with various offices at the NSC [National Security Council] and White House for some time.”

The emphasis on developing watermarking is good, says Emily Bender, a professor of linguistics at the University of Washington. She says she also hopes content labeling systems can be developed for text; current watermarking technologies work best on images and audio. “[The executive order] of course wouldn’t be a requirement to watermark, but even an existence proof of reasonable systems for doing so would be an important step,” Bender says.

Will this executive order have teeth? Is it enforceable? 

While Biden’s executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced. 

The order calls on the National Institute of Standards and Technology to set standards for extensive “red team” testing—meaning tests meant to break the models in order to expose vulnerabilities—before models are launched. NIST has already proved somewhat effective at documenting how accurate or biased AI systems such as facial recognition are. In 2019, a NIST study of over 200 facial recognition systems revealed widespread racial bias in the technology.

However, the executive order does not require that AI companies adhere to NIST standards or testing methods. “Many aspects of the EO still rely on voluntary cooperation by tech companies,” says Bradford, the law professor at Columbia.

The executive order requires all companies developing new AI models whose computational size exceeds a certain threshold to notify the federal government when training the system and then share the results of safety tests in accordance with the Defense Production Act. This law has traditionally been used to intervene in commercial production at times of war or national emergencies such as the covid-19 pandemic, so this is an unusual way to push through regulations. A White House spokesperson says this mandate will be enforceable and will apply to all future commercial AI models in the US, but will likely not apply to AI models that have already been launched. The threshold is set at a point where all major AI models that could pose risks “to national security, national economic security, or national public health and safety” are likely to fall under the order, according to the White House’s fact sheet. 

The executive order also calls for federal agencies to develop rules and guidelines for different applications, such as supporting workers’ rights, protecting consumers, ensuring fair competition, and administering government services. These more specific guidelines prioritize privacy and bias protections.

“Throughout, at least, there is the empowering of other agencies, who may be able to address these issues seriously,” says Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face. “Albeit with a much harder and more exhausting battle for some of the people most negatively affected by AI, in order to actually have their rights taken seriously.”

What has the reaction to the order been so far? 

Major tech companies have largely welcomed the executive order. 

Brad Smith, the vice chair and president of Microsoft, hailed it as “another critical step forward in the governance of AI technology.” Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

“It’s great to see the White House investing in AI’s growth by creating a framework for responsible AI practices,” said Adobe’s general counsel and chief trust officer, Dana Rao. 

The White House’s approach remains friendly to Silicon Valley, emphasizing innovation and competition rather than limitation and restriction. The strategy is in line with the policy priorities for AI regulation set forth by Senate Majority Leader Chuck Schumer, and it further crystallizes the lighter touch of the American approach to AI regulation. 

However, some AI researchers say that sort of approach is cause for concern. “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms,” says Mitchell.

Instead of preventing AI harms before deployment—for example, by making tech companies’ data practices better—the White House is using a “whack-a-mole” approach, tackling problems that have already emerged, she adds.  

The highly anticipated executive order on artificial intelligence comes two days before the UK’s AI Safety Summit and attempts to position the US as a global leader on AI policy. 

It will likely have implications outside the US, adds Bradford. It will set the tone for the UK summit and will likely embolden the European Union to finalize its AI Act, as the executive order sends a clear message that the US agrees with many of the EU’s policy goals.

“The executive order is probably the best we can expect from the US government at this time,” says Bradford.

Correction: A previous version of this story had Emily Bender’s title wrong. This has now been corrected. We apologize for any inconvenience.

Everything you need to know about artificial wombs

On September 19, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans. These medical devices are designed to give extremely premature infants a bit more time to develop in a womblike environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans. 

“The most challenging question to answer is how much unknown is acceptable,” said An Massaro, FDA’s lead neonatologist in the Office of Pediatric Therapeutics, at the committee meeting. That’s a question regulators will have to grapple with as this research moves out of the lab and into first-in-human trials.

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the fluid that would have surrounded them in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant’s blood could cycle through an artificial lung to pick up oxygen. 

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.

Other researchers are working on artificial wombs too, though they’re a bit farther behind. Scientists in Australia and Japan are developing a system very similar to EXTEND. In Europe, the Perinatal Life Support project is working on its own technology. And in Canada, researchers have been testing their version of an artificial womb on piglets. Researchers at the University of Michigan are working on similar technology intended for preemies for whom conventional therapies aren’t likely to work. Rather than floating in fluid, the infants would only have their lungs filled. It’s a system that could be used in existing ICUs with relatively few modifications, so “we believe that that has more clinical applicability,” says Mychaliska, who is leading the project.  

When will this technology be tested in humans?

The technology used in the EXTEND system has been tested on lamb fetuses, about 300 so far, with good results. The lambs can survive and develop inside the sac for three or even four weeks.

To move forward with human testing, the company needs an investigational device exemption from the FDA. At a meeting in June, Flake said Vitara might be ready to request that exemption in September or October. But at the September advisory committee meeting, when Flake was directly asked how far the technology had advanced, he declined to answer. He said he could discuss timing with the advisory committee during the portion of the meeting that was closed to the public. To greenlight a trial, FDA officials need to be convinced that babies who try EXTEND are likely to benefit from the system, and that they’ll fare at least as well as babies who receive the current standard of care.

What would the first human tests look like?

The procedure requires a carefully choreographed transfer. First, the baby must be delivered via cesarean section and immediately have tubes inserted into the umbilical cord before being transferred into the fluid-filled container.

The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” Mychaliska says. At 22 weeks gestation, babies are tiny, often weighing less than a pound. And their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities. 

Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”

Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.

What are the risks?

One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.  

And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk for infection and bleeding. Delivery via a C-section can also have an impact on future pregnancies.  

So if it works, could babies be grown entirely outside the womb?

Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions. 

The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.

What are the ethical concerns?

In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute. 

If the artificial wombs work, more significant questions will come up. When these devices are used to save infants born too soon, “this is obviously potentially a wonderful technology,” Ravitsky says. But as with any technology, other uses might arise. Imagine that a woman wants to terminate a pregnancy at 21 or 22 weeks and this technology is available. How would that impact a woman’s right to choose whether to carry a pregnancy to term? “When we say that a woman has the right to terminate, do we mean the right to physically separate from the fetus? Or do we mean the right not to become a biological mother?” Ravitsky asks.

With the technology at an early stage, that situation might seem far-fetched, but it’s worth thinking about the implications now. Elizabeth Chloe Romanis, who studies health-care law and bioethics at Durham University in the UK, argued at the advisory meeting that “an entity undergoing gestation outside the body is a unique human entity,” one that might have different needs and require different protections. 

The advent of an artificial womb raises all kinds of questions, Ravitsky says: “What’s a fetus, what’s a baby, what’s a newborn, what’s birth, what’s viability?” These questions have ethical implications, but also legal ones. “If we don’t start thinking about it now, we’re going to have lots of blind spots,” she says.  

China just fought back in the semiconductor exports war. Here’s what you need to know.

China has been on the receiving end of semiconductor export restrictions for years. Now, it is striking back with the same tactic. On July 3, the Chinese Ministry of Commerce announced that the export of gallium and germanium, two elements used in producing chips, solar panels, and fiber optics, will soon be subject to a license system for national security reasons. That means exports of the materials will need to be approved by the government, and Western companies that rely on them could have a hard time securing a consistent supply from China. 

The move follows years of restrictions by the US and Western allies on exports of cutting-edge technologies like high-performing chips, lithography machines, and even chip design software. The policies have created a bottleneck for China’s tech growth, especially for a few major companies like Huawei.

China’s announcement is a clear signal it aims to retaliate, says Kevin Klyman, a technology researcher on the Avoiding Great Power War Project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs. “Every day the technology war is getting worse,” Klyman says. “This is a notable day that accelerated things further.” 

But even though the announcement immediately sent the prices of gallium and germanium up, China’s new curbs are not likely to hit the US as hard as American export restrictions have hit China. These two raw materials, though important, still have relatively niche applications in the semiconductor industry. And while China dominates gallium and germanium production, other countries could ramp up their own production and export enough to substitute for the supply from China.

Here’s a quick look at where things stand and what comes next.

What are gallium and germanium? What are they used for?

Gallium and germanium are two chemical elements that are commonly extracted along with more familiar minerals. Gallium is usually produced in the process of mining zinc and alumina, while germanium is acquired during zinc mining or separated from brown coal.

“Beijing likely chose gallium and germanium because both are important for semiconductor manufacturing,” says Felix Chang, a senior fellow at the Foreign Policy Research Institute. “That is especially true for germanium, which is prized for its high electrical conductivity. Meanwhile, gallium has unusual crystallization properties that lead to some useful alloying effects.” Gallium is used in the manufacture of radio communication equipment and LED displays, while germanium is widely used in fiber optics, infrared optics, and solar cells. These applications also make them useful components in modern weapons.

Currently, about 60% of the world’s germanium and 90% of the world’s gallium are produced in China, according to the Chinese metal industry research firm Antaike. But because China doesn’t have the capacity to turn these materials into later-stage semiconductor or optical products, a big chunk of them is exported to companies in Japan and Europe. 

What’s the immediate impact?

The new export license regime will start being implemented on August 1. Right after it was announced, purchase orders reportedly began pouring in to Chinese gallium and germanium producers. The stockpiling has raised the prices of the two materials, as well as the stock prices of the Chinese companies that produce them.

AXT, an American maker of semiconductor wafers, quickly responded to say that its China-based subsidiary would apply for an export license to maintain business as usual.

It’s important to remember that this is not a ban but a licensing system, which means the impact will depend on how difficult it is to secure an export license. “We see no evidence that no licenses will be granted. They will not be granted to US defense contractors, I imagine,” says Klyman, who notes that American defense companies Raytheon and Lockheed Martin were the first two names added to China’s newly established “unreliable entity list” earlier this year.

But the ability to control who can be granted the permits will give China more leverage in trade negotiations with other countries, particularly those—like Japan and Korea—that rely on such imports for their own semiconductor industries. 

Why is China announcing these restrictions now?

The US government has spent the past year lobbying allies to join forces in restricting China from sourcing high-end chipmaking equipment like lithography machines, and the results are showing. In June, both Japan and the Netherlands announced their decisions to restrict the export of chip-related materials and equipment to China. China certainly is feeling the pressure, and its attempts to negotiate with the US on the restrictions have been unsuccessful.

Many experts point to the China visit of Janet Yellen, the US secretary of the treasury, which happened last week, as the major reason these export controls were announced when they were. “Beijing was … sending a signal before the Yellen visit that China will play the game of controlling exports in key sectors of concern to the US government,” says Paul Triolo, a senior vice president for China and technology policy lead at the consultancy Albright Stonebridge Group. Control of gallium and germanium is one of the tools Beijing wields to push the US and its allies back to the negotiation table.

There’s also a strategic concern that holding onto these critical materials could serve China’s interests if a conflict breaks out, says Xiaomeng Lu, director of geotechnology practice at the Eurasia Group. “Russia has been pretty much blocked out of the global tech ecosystem at this point … but they still have oil, they still have food, and that’s how they survived. That’s the worst-case scenario Chinese leadership keep at the back of their mind,” Lu says. “If the worst-case scenario happens, we need to hold the raw materials that we have in our reserve as much as possible.”

What will happen to the gallium and germanium supply chain?

The Chinese government may be seizing stronger control of the supply chain for now, but the added uncertainty of the licensing regime will cause foreign importers of gallium and germanium to look elsewhere for a more reliable supply. Most people agree that these export restrictions may not be beneficial to China in the long run.

“My read is that the US government is happy about this move,” says Klyman. “This forces suppliers to diversify their supply of gallium, germanium, and other critical minerals, and it will cause markets to reinterpret the value of mining in North America and other regions.”

Mining companies in Congo and Russia have already said they intend to increase production of germanium to meet demand. Some Western countries, including the US, Canada, Germany, and Japan, also produce these materials, but ramping up production could be difficult. The mining process causes significant pollution, which was one of the reasons production was offshored to China in the first place.

“The West will have to accelerate its innovation of new processes to separate and purify rare-earth metals. Otherwise, it may have to relax the environmental regulations that constrain traditional separation and purification techniques in the West,” says Chang.

Could China’s export controls be as successful as the American ones?

Probably not. Germanium and gallium can be mined elsewhere. But cutting-edge technologies are more restricted in their availability; the EUV lithography machines that the US wanted barred from export to China, for example, are made by a single company. “Export control is not as effective if the technologies are available in other markets,” says Sarah Bauerle Danzman, an associate professor of international studies at Indiana University Bloomington.

The US also has other advantages that make export control work more efficiently, she says, like the international importance of the dollar. The US chip curbs have an extraterritorial effect because companies fear being sanctioned if they don’t comply. They could be excluded from receiving payments in US dollars. 

For China, the export controls could hurt its own economy, Bauerle Danzman adds, because it relies more heavily on export trade than the US does. Restricting Chinese companies from working with the rest of the world will undermine their business. “Unless [China] is going to get Japan and South Korea and the EU to agree to not trade with the US, in order for it to really execute on a strategy like this, it not only has to stop exports to the US—it has to stop exports to basically everywhere,” she says.

Has China restricted the export of critical raw materials before?

This is not the first time China has tried to restrict the export of raw materials. In 2010, it reduced the allotment of rare-earth elements available for export by 40%, citing an interest in environmental conservation. The same year, the country was accused of unofficially banning rare-earth exports to Japan over a territorial dispute. 

Rare-earth elements are used in manufacturing a variety of products, including magnets, motors, batteries, and LED lights. The quota was later challenged by the US, EU, and Japan in a World Trade Organization dispute. China’s environmental-protection justifications didn’t convince the settlement panel, which ruled against China and asked it to roll back the restrictions; China did so in 2015. 

This time, the Japanese government has again said it could raise the issue with the WTO, but China likely won’t need to worry about that as much as it did last time. With the rise of trade protectionism and self-preserving supply-chain policies during the pandemic era, the organization has increasingly lost its authority among member countries. “Today, WTO is less relevant, and China is trying to find a more nuanced policy argument to back up their actions,” says Lu.

It doesn’t need to look far. In December, China filed a dispute with the WTO around the US semiconductor export controls, calling them “politically motivated and disguised restrictions on trade.” In a brief official response, the US delegate to the WTO said every country has the authority to take measures it considers “necessary to the protection of its essential security interests,” an argument that China can easily use for itself. 

Will China have more export controls in the future?

China most likely won’t stop at gallium and germanium when it comes to export controls. Wei Jianguo, a former Chinese vice minister of commerce, was quoted in the state-owned publication China Daily as saying that “this is just the beginning of China’s countermeasures, and China’s toolbox has many more types of measures available.”

Gallium and germanium, while important, don’t represent the worst pain China could inflict on the raw materials front. “It’s giving the global system a little pinch, showing that we have the capability to cause a bigger pain sometime down the road,” says Lu. 

That could come if China chooses to clamp down again on the export of rare-earth elements. Or the materials used in making electric-vehicle batteries—lithium, cobalt, nickel, graphite. Because these materials are used in much greater quantities, it’s more difficult to find a substitute supply in a short time. They are the real trump card China may hold at the future negotiation table.

Here’s what we know about lab-grown meat and climate change

Soon, the menu in your favorite burger joint could include not only options made with meat, mushrooms, and black beans but also patties packed with lab-grown animal cells.

Not only did the US just approve the sale of cultivated meat for the first time, but the industry, made up of over 150 companies, is raising billions of dollars to bring products to restaurants and grocery stores. 

In theory, that should be a big win for the climate. 

One of the major drivers for businesses focusing on cultivated (or lab-grown, or cultured) meat is its potential for cleaning up the climate impact of our current food system. Greenhouse-gas emissions from the animals we eat (mostly cows) account for nearly 15% of the global total, a fraction that’s expected to increase in the coming decades.

But whether cultivated meat is better for the environment is still not entirely clear.

That’s because there are still many unknowns around how production will work at commercial scales. Many of the startups are just now planning the move from research labs to bigger facilities to start producing food that real, paying customers will finally get to eat.

Exactly how this shift happens will not only determine whether these new food options will be cheap enough to make it into people’s carts. It may also decide whether cultivated meat can ever deliver on its big climate promises.

Moo-ve over, cows

Raising livestock, especially beef, is infamously emissions intensive. Feeding animals on farms requires a lot of land and energy, both of which can produce carbon dioxide emissions. In addition, cows (along with some other livestock, like sheep) produce large amounts of methane during digestion. If you add it all up and take a global average, one kilogram of beef can account for emissions roughly equivalent to 100 kilograms of carbon dioxide. (Exact estimates can vary depending on where cows are raised, what they’re fed, and how farms are run.)  

At a cellular level, cultivated meat is made from basically the same ingredients as the meat we eat today. By taking a sample of tissue from a young animal or fertilized egg, isolating the cells, and growing them in a reactor, scientists can make animal-derived meat without the constraints of feeding and raising animals for slaughter.

The USDA just gave two California-based companies, Eat Just and Upside Foods, the green light to produce and sell their cultivated chicken products. This makes the US the second country to allow sales of meat grown in labs, after Singapore.

Cultivated meat will still produce emissions, since energy is required to run the reactors that house the cells as they grow. In the US and most places around the world today, that will likely involve fossil fuels. Renewables could eventually be available widely and consistently enough to power facilities producing cultivated meat. However, even in this case, the reactors, pipes, and all other necessary equipment for production facilities often have associated emissions that are tough to eliminate entirely. In addition, animal cells need to be fed and cared for, and the supply chain involved in that also comes with emissions attached. 

And the emissions from cultivated meat might be significant. Some of the early work in the field has relied on materials and techniques borrowed from the biopharmaceutical industry, where companies sometimes grow cells in order to produce drugs. It’s a painstaking and tightly regulated process involving high-purity ingredients, expensive reactors, and a whole lot of energy, says Edward Spang, an associate professor of food science and technology at the University of California, Davis.

Spang and his team set out to estimate the climate impacts of cultivated meat assuming current production techniques. To quantify the potential climate benefits, the researchers examined the total environmental impacts of both animal agriculture and cultivated meat in an analysis known as a life-cycle assessment. This type of analysis adds up all the energy, water, and materials needed to make a product, putting everything in terms of equivalent carbon dioxide emissions.
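
The bookkeeping behind a life-cycle assessment is, at its core, simple arithmetic: every input is multiplied by an emission factor, and the results are summed. Here is a minimal sketch of that logic in Python; the inputs and emission factors are hypothetical placeholders for illustration, not figures from Spang’s study.

```python
# Toy life-cycle assessment (LCA) sketch.
# Every number here is a hypothetical placeholder for illustration only;
# none of these values come from Spang's study.

# Resources consumed to make one kilogram of product
inputs_per_kg = {
    "electricity_kwh": 30.0,   # running and cooling the bioreactor
    "growth_media_kg": 5.0,    # feedstock for the growing cells
    "water_liters": 100.0,     # process and cleaning water
}

# Emissions embodied in one unit of each input, in kg of CO2-equivalent
emission_factors = {
    "electricity_kwh": 0.4,    # a fossil-heavy grid; near zero for renewables
    "growth_media_kg": 2.0,    # food-grade ingredients; pharma-grade would be far higher
    "water_liters": 0.001,
}

# An LCA multiplies each input by its emission factor and sums the results
footprint = sum(
    amount * emission_factors[name] for name, amount in inputs_per_kg.items()
)
print(f"{footprint:.1f} kg CO2e per kg of product")  # 22.1 with these placeholder numbers
```

Much of the disagreement between studies comes down to which numbers belong in those two tables: pharmaceutical-grade ingredients and fossil-heavy electricity push the total up, while food-grade ingredients and renewable power pull it down.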

In a recent preprint study that hasn’t yet been peer-reviewed, Spang estimated the total global-warming potential of cultivated meat in several scenarios based on assumptions about the current state of the industry.

The scenarios were divided into two categories. The first set assumed that cultivated meat would be produced with processes and materials similar to those used in the biopharmaceutical industry—specifically including an energy-intensive purification step to remove contaminants. The other scenarios assumed that cultivated meat production wouldn’t require ultra-high-purity ingredients and would instead rely on inputs like those used in the food industry today, meaning lower energy requirements and emissions.

The two sets of results have very different climate outcomes. A food-grade process results in the equivalent of 10 to 75 kilograms of carbon dioxide emissions per kilogram of meat, which is lower than the global average for beef and in line with beef production in some countries today. But in the biopharmaceutical-like process, cultivated meat leads to significantly more emissions than beef production today: between 250 and 1,000 kilograms of carbon dioxide equivalent for every kilogram of meat, depending on the specific scenario.

Where’s the beef?

Spang’s preprint, which appeared in April, sparked splashy news headlines about the potential for sky-high emissions. The study also drew quick criticism from some in the industry, including a widely circulated open letter questioning its assumptions. 

Experts particularly took issue with the assumption that materials used in producing cultivated meat would need to use pharmaceutical-grade ingredients and go through intense purification steps to remove contaminants called endotoxins. Endotoxins are pieces of the outer membranes of some bacteria, and they’re shed as the microbes grow and when they die. Removing them is often necessary in biopharmaceutical processes, since even very small quantities can harm the growth of some cell types and provoke immune responses.  

The process that removes those contaminants is the major contributor to the high emissions seen in one group of the preprint’s scenarios. However, that purification step won’t be necessary in commercial production of cultivated meat, says Elliot Swartz, a principal scientist at the industry group Good Food Institute and one of the authors of the open letter. Different cell types are affected by endotoxins differently, and the ones that will be used for cultivated meat should be able to tolerate higher levels, meaning less purification is needed, Swartz says.

The study’s results do differ from those of many previous analyses in the field, which generally found that cultivated meat would reduce emissions compared with conventional beef production. Most of those studies assume that producers of cultivated meat will be able to avoid the energy-intensive methods described in the preprint, and will instead scale up to large commercial facilities and progress toward using more widely available, food-grade ingredients.

Experience will provide a better picture of the industry’s potential climate impact, says Pelle Sinke, a researcher at CE Delft, an independent research firm and consultancy focusing on energy and the environment. “In all innovative technologies, there’s an enormous learning curve,” Sinke says. “I’m not sure we should worry that much that [cultivated meat] will add an enormous burden to the climate globally.” 

In an analysis published in January 2023, he and his team set out to estimate emissions associated with cultivated meat in 2030, assuming that the production process can use food-grade ingredients and will reach commercial scale sometime in the next decade. That study put the potential climate impact at between three and 14 kilograms of carbon dioxide per kilogram of cultivated meat.

Where the total emissions from cultivated meat production will fall in this range depends largely on where the energy comes from to run the bioreactors: if it comes from the electrical grid, which will still rely partly on fossil fuels, the carbon impact will be much higher than it will be if renewables are used to power the facility. It also depends on what ingredients are in the media used to grow the cells.
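
To see why the power source matters so much, it helps to run a quick back-of-the-envelope calculation. The sketch below assumes a hypothetical electricity requirement per kilogram of meat and rough carbon intensities for different power sources; none of these figures come from Sinke’s analysis.

```python
# Back-of-the-envelope look at how the electricity source changes the footprint.
# The energy requirement and carbon intensities are illustrative assumptions,
# not values from Sinke's analysis.

energy_per_kg_meat_kwh = 30.0            # assumed electricity to grow 1 kg of cells

carbon_intensity = {                     # kg CO2e emitted per kWh of electricity
    "fossil_heavy_grid": 0.45,
    "average_grid_mix": 0.25,
    "wind_and_solar": 0.02,              # mostly embodied emissions of the hardware
}

for source, intensity in carbon_intensity.items():
    footprint = energy_per_kg_meat_kwh * intensity
    print(f"{source}: {footprint:.1f} kg CO2e per kg of meat")
# With these assumptions: 13.5 (fossil-heavy grid), 7.5 (average mix), 0.6 (renewables)
```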

In any case, Sinke’s study found that total emissions would be significantly lower than emissions from beef production, which the study estimated at the equivalent of 35 kilograms of carbon dioxide per kilogram of beef in an optimized system in western Europe. (Chicken and pork came in at roughly three and five kilograms of carbon dioxide per kilogram, respectively.)

Sinke’s analysis is far from the first to estimate that cultivated meat could have a smaller climate impact than conventional agriculture. An early analysis in the field, published in 2011, estimated that cultivated meat production would reduce greenhouse-gas emissions by between 78% and 96% compared with meat production in Europe, assuming production took place at commercial scale.

Cultivated meat could eventually have major climate benefits, says Hanna Tuomisto, an associate professor at the University of Helsinki and the lead author of the 2011 study. Tuomisto recently published another study that also found potential climate benefits for cultivated meat. However, she adds, the industry’s true climate impacts are yet to be determined. “There are many, many open questions still, because not very many companies have built anything at larger scale,” Tuomisto says.

Till the cows come home

Scaling up to make cultivated meat in larger production facilities is an ongoing process.

Upside Foods, one of the two companies that received the recent USDA nod, currently runs a pilot facility with a maximum capacity of about 400,000 pounds (180,000 kilograms) per year, though its current production capability is closer to 50,000 pounds. The company’s first commercial facility, which it’s currently in the process of designing, will be much larger, with a capacity of millions of pounds per year. 

According to internal estimates, Upside’s products should take less water and land to produce than conventional meat, said Eric Schulze, the company’s VP of global scientific and regulatory affairs, in an email. However, he added, “we will need to be producing at a larger scale to truly measure and start to see the impact that we want to have.”

Eat Just is currently operating a demonstration plant in the US and constructing one in Singapore. Those facilities include reactors with capacities of 3,500 and 6,000 liters, respectively. Eventually, the company plans to produce millions of pounds of meat each year in a future commercial facility containing 10 reactors with a capacity of 250,000 liters each. 

There are already “plenty of reasons to be hopeful” about the climate impacts of cultivated meat, said Andrew Noyes, VP of communications at Eat Just, in an email. “However, achieving those goals is dependent on several factors tied to the optimization and scale-up of our production process, as well as the design of future large-scale manufacturing facilities.”

Even though recent regulatory approvals have been celebrated as a milestone for the cultivated meat industry, these products won’t be in your burger joint anytime soon. To cut their production costs, companies still need to build those larger facilities and get them running smoothly. 

Part of that growth will mean turning away from the more expensive equipment and ingredients the industry has borrowed from other businesses, says Jess Krieger, founder and CEO of Ohayo Valley, a cultivated meat company: “This is not how we’re going to be doing it in the future.” The factors that led to Spang’s worst-case emissions scenario, like intensive purification, expensive reactors, and pharmaceutical-grade media, aren’t necessary for production, she says. 

It is true that early-stage companies still often use pharmaceutical-grade ingredients, says Elliot Swartz of the Good Food Institute. However, there are already cheaper, food-grade options available on the market. Both Eat Just and Upside Foods say they plan to use these nonpharmaceutical ingredients in their eventual commercial operations. 

Energy-intensive methods aren’t just unsustainable for the planet, says Sinke, the researcher with CE Delft. Many processes borrowed from biopharmaceutical production won’t be used in industry, he says, not only because they’d produce high emissions but “because nobody can afford them.”

For his part, Spang agrees that economics will likely keep cultivated meat from following the type of production path that would lead to extreme climate impacts. “If it requires pharmaceutical inputs, I don’t think there will be much of an industry,” he says. “It will be too expensive; I just don’t think that’s a viable pathway.” 

But for him, there are still many open questions to answer, and plans to execute, before the industry can start taking credit as a climate solution. “The leap from lab-scale science to cost-effective climate impact—there’s a substantial amount of distance there, in my opinion,” Spang says. 

It’s still possible for cultivated meat to become a major positive for the climate, especially as renewables like wind and solar become more widely available. An industry where cells can be grown efficiently in massive reactors while being fed widely available ingredients, in a process all powered by renewable electricity, could be a significant way to help clean up our food system. 

But the facilities that would make that possible are mostly still in the planning phases—and it’s not yet clear which path cultivated meat might take to reach our plates.