How virtual power plants are shaping tomorrow’s energy system

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

For more than a century, the prevalent image of power plants has been characterized by towering smokestacks, endless coal trains, and loud spinning turbines. But the plants powering our future will look radically different—in fact, many may not have a physical form at all. Welcome to the era of virtual power plants (VPPs).

The shift from conventional energy sources like coal and gas to variable renewable alternatives such as solar and wind means the decades-old way we operate the energy system is changing. 

Governments and private companies alike are now counting on VPPs’ potential to help keep costs down and stop the grid from becoming overburdened. 

Here’s what you need to know about VPPs—and why they could be the key to helping us bring more clean power and energy storage online.

What are virtual power plants and how do they work?

A virtual power plant is a system of distributed energy resources—like rooftop solar panels, electric vehicle chargers, and smart water heaters—that work together to balance energy supply and demand on a large scale. They are usually run by local utility companies, which oversee this balancing act.

A VPP is a way of “stitching together” a portfolio of resources that can help the grid respond to high energy demand while reducing the energy system’s carbon footprint, says Rudy Shankar, director of Lehigh University’s Energy Systems Engineering program.

The “virtual” nature of VPPs comes from their lack of a central physical facility, like a traditional coal or gas plant. By generating electricity and balancing the energy load, the aggregated batteries and solar panels provide many of the functions of conventional power plants.

They also have unique advantages.

Kevin Brehm, a manager at Rocky Mountain Institute who focuses on carbon-free electricity, says comparing VPPs to traditional plants is a “helpful analogy,” but VPPs “do certain things differently and therefore can provide services that traditional power plants can’t.”

One significant difference is VPPs’ ability to shape consumers’ energy use in real time. Unlike conventional power plants, VPPs can communicate with distributed energy resources and allow grid operators to control the demand from end users.

For example, smart thermostats linked to air conditioning units can adjust home temperatures and manage how much electricity the units consume. On hot summer days, these thermostats can pre-cool homes before peak hours, when air conditioning use surges. Staggering cooling times across many homes can help prevent abrupt demand spikes that might overwhelm the grid and cause outages. Similarly, electric vehicle chargers can adapt to the grid’s needs by either drawing electricity or supplying it back.
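The effect of staggering pre-cooling is easy to see with a toy calculation. The sketch below is purely illustrative (all numbers are invented, and real VPP dispatch software is far more sophisticated): it compares total demand when every air conditioner runs during the evening peak against a schedule where a hypothetical VPP pre-cools homes in staggered early-afternoon windows.

```python
# Toy illustration of why staggered pre-cooling lowers peak demand.
# All loads (in kW) and schedules are made up for illustration only.

def peak_demand(schedules):
    """Given per-home hourly loads, return the highest total demand in any hour."""
    hours = len(schedules[0])
    totals = [sum(home[h] for home in schedules) for h in range(hours)]
    return max(totals)

HOURS = list(range(12, 22))  # noon to 10 p.m.

# Uncoordinated: every AC runs hard during the 5-7 p.m. peak.
uncoordinated = [[3.0 if 17 <= h < 19 else 1.0 for h in HOURS]
                 for _ in range(100)]

# Coordinated: the VPP pre-cools each home in a staggered two-hour window
# between 1 and 6 p.m., so ACs can idle through the evening peak.
coordinated = []
for i in range(100):
    start = 13 + (i % 4)  # stagger start times across 1-4 p.m.
    coordinated.append([3.0 if start <= h < start + 2 else 0.5 for h in HOURS])

print(peak_demand(uncoordinated))  # all 100 homes peak together
print(peak_demand(coordinated))    # the same cooling, spread out in time
```

In this toy model the staggered schedule cuts the single worst hour of demand substantially, even though every home still gets cooled; that is the basic peak-shaving idea behind VPP-controlled thermostats.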

These distributed energy resources connect to the grid through communication technologies like Wi-Fi, Bluetooth, and cellular service. By coordinating hundreds of thousands of devices, VPPs can increase overall system resilience and have a meaningful impact on the grid: they shape demand, supply power, and keep electricity flowing reliably.

How popular are VPPs now?

Until recently, VPPs were mostly used to manage consumer energy use. But as solar and battery technology has evolved, utilities can now also use them to supply electricity back to the grid when needed.

In the United States, the Department of Energy estimates VPP capacity at around 30 to 60 gigawatts. This represents about 4% to 8% of peak electricity demand nationwide, a minor fraction within the overall system. However, some states and utility companies are moving quickly to add more VPPs to their grids.

Green Mountain Power, Vermont’s largest utility company, made headlines last year when it expanded its subsidized home battery program. Customers can lease a Tesla home battery at a discounted rate or purchase their own, receiving up to $10,500 in assistance, if they agree to share stored energy with the utility as needed. The Vermont Public Utility Commission, which approved the program, said it can also provide emergency power during outages.

In Massachusetts, three utility companies (National Grid, Eversource, and Cape Light Compact) have implemented a VPP program that pays customers in exchange for utility control of their home batteries.

Meanwhile, in Colorado, efforts are underway to launch the state’s first VPP system. The Colorado Public Utilities Commission is urging Xcel Energy, the state’s largest utility company, to develop a fully operational VPP pilot by this summer.

Why are VPPs important for the clean energy transition?

Grid operators must meet the annual or daily “peak load,” the moment of highest electricity demand. To do that, they often resort to gas “peaker” plants, which sit idle most of the year and switch on during times of high demand. VPPs can reduce the grid’s reliance on these plants.

The Department of Energy currently aims to expand national VPP capacity to 80 to 160 GW by 2030. That’s roughly equivalent to 80 to 160 fossil fuel plants that need not be built, says Brehm.

Many utilities say VPPs can lower energy bills for consumers in addition to reducing emissions. Research suggests that leveraging distributed sources during peak demand is up to 60% more cost effective than relying on gas plants.

Another significant, if less tangible, advantage of VPPs is that they encourage people to be more involved in the energy system. Usually, customers merely receive electricity. Within a VPP system, they both consume power and contribute it back to the grid. This dual role can improve their understanding of the grid and get them more invested in the transition to clean energy.

What’s next for VPPs?

The capacity of distributed energy sources is expanding rapidly, according to the Department of Energy, owing to the widespread adoption of electric vehicles, charging stations, and smart home devices. Connecting these to VPP systems enhances the grid’s ability to balance electricity demand and supply in real time. Better AI can also help VPPs become more adept at coordinating diverse assets, says Shankar.

Regulators are also coming on board. The National Association of Regulatory Utility Commissioners has started holding panels and workshops to educate its members about VPPs and how to implement them in their states. The California Energy Commission is set to fund research exploring the benefits of integrating VPPs into its grid system. This kind of interest from regulators is new but promising, says Brehm.

Still, hurdles remain. Enrolling in a VPP can be confusing for consumers because the process varies among states and companies. Simplifying it for people will help utility companies make the most of distributed energy resources such as EVs and heat pumps. Standardizing the deployment of VPPs can also speed up their growth nationally by making it easier to replicate successful projects across regions.

“It really comes down to policy,” says Brehm. “The technology is in place. We are continuing to learn about how to best implement these solutions and how to interface with consumers.”

A controversial US surveillance program is up for renewal. Critics are speaking out.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to key you in on: the renewal of a controversial program of American surveillance.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant. 

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad group that transcends parties is calling for reforming the program, out of concern about the vast surveillance it enables. Here is what you need to know.

What do the critics of Section 702 say?

Of particular concern is that while the program intends to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans including Black Lives Matter activists and a member of Congress.  

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Wikimedia, the foundation that runs Wikipedia, also opposes the program in its current form, saying it leaves international open-source projects vulnerable to surveillance. “Wikimedia projects are edited and governed by nearly 300,000 volunteers around the world who share free knowledge and serve billions of readers globally. Under Section 702, every interaction on these projects is currently subject to surveillance by the NSA,” says a spokesperson for the Wikimedia Foundation. “Research shows that online surveillance has a ‘chilling effect’ on Wikipedia users, who will engage in self-censorship to avoid the threat of governmental reprisals for accurately documenting or accessing certain kinds of information.”

And what about the proponents?

The main supporters of the program’s reauthorization are the intelligence agencies themselves, which say it enables them to gather critical information about foreign adversaries and online criminal activities like ransomware and cyberattacks. 

In defense of the provision, FBI director Christopher Wray has also pointed to procedural changes at the bureau in recent years that have reduced the number of Americans being surveilled from 3.4 million in 2021 to 200,000 in 2022. 

The Biden administration has also broadly pushed for the reauthorization of Section 702 without reform.  

“Section 702 is a necessary instrument within the intelligence community, leveraging the United States’ global telecommunication footprint through legal and court-approved means,” says Sabine Neschke, a senior policy analyst at the Bipartisan Policy Center. “Ultimately, Congress must strike a balance between ensuring national security and safeguarding individual rights.”

What would reform look like?

The proposal to reform the program, called the Government Surveillance Reform Act, was announced last week and focuses on narrowing the government’s authority to collect information on US citizens.

It would require warrants to collect Americans’ location data and web browsing or search records under the program, as well as documentation that the queries were “reasonably likely to retrieve foreign intelligence information.” In a hearing before the House Committee on Homeland Security on Wednesday, Wray said that a warrant requirement would be a “significant blow” to the program, calling it a “de facto ban.”

Senator Ron Wyden, who cosponsored the reform bill and sits on the Senate Select Committee on Intelligence, has said he won’t vote to renew the program unless some of its powers are curbed. “Congress must have a real debate about reforming warrantless government surveillance of Americans,” Wyden said in a statement to MIT Technology Review. “Therefore, the administration and congressional leaders should listen to the overwhelming bipartisan coalition that supports adopting common-sense protections for Americans’ privacy and extending key national security authorities at the same time.”

The reform bill does not, as some civil society groups had hoped, limit the government’s powers for surveillance of people outside of the US. 

While it’s not yet clear whether these reforms will pass, intelligence agencies have never faced such a broad, bipartisan coalition of opponents. As for what happens next, we’ll have to wait and see. 

What else I’m reading

  • Here’s a great story from the New Yorker about how facial recognition searches can lead police to ignore other pieces of an investigation. 
  • I loved this excerpt of Broken Code, a new book from reporter Jeff Horwitz, who broke the Facebook Files revealed by whistleblower Frances Haugen. It’s a nice insidery look at the company’s AI strategy. 
  • Meta says that age verification requirements, such as those being proposed by child online safety bills, should be up to app stores like Apple’s and Google’s. It’s an interesting stance that the company says would help take the burden off individual websites to comply with the new regulations. 

What I learned this week

Some researchers and technologists have been calling for new and more precise language around artificial intelligence. This week, Google DeepMind released a paper outlining different levels of artificial general intelligence, often referred to as AGI, as my colleague Will Douglas Heaven reports.

“The team outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals),” Will writes. “They note that no level beyond emerging AGI has been achieved.” We’ll certainly be hearing more about what words we should use when referring to AI in the future.

Three things to know about the White House’s executive order on AI


The US has set out its most sweeping set of AI rules and guidelines yet in an executive order issued by President Joe Biden today. The order will require more transparency from AI companies about how their models work and will establish a raft of new standards, most notably for labeling AI-generated content. 

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.  

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

Nevertheless, AI experts have hailed the order as an important step forward, especially thanks to its focus on watermarking and standards set by the National Institute of Standards and Technology (NIST). However, others argue that it does not go far enough to protect people against immediate harms inflicted by AI.

Here are the three most important things you need to know about the executive order and the impact it could have. 

What are the new rules around labeling AI-generated content? 

The White House’s executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” according to a fact sheet that the White House shared over the weekend. 

The hope is that labeling the origins of text, audio, and visual content will make it easier for us to know what’s been created using AI online. These sorts of tools are widely proposed as a solution to AI-enabled problems such as deepfakes and disinformation, and in a voluntary pledge with the White House announced in August, leading AI companies such as Google and OpenAI pledged to develop such technologies.

The trouble is that technologies such as watermarks are still very much works in progress. There are currently no fully reliable ways to label text or investigate whether a piece of content was machine generated. AI detection tools are still easy to fool.

The executive order also falls short of requiring industry players or government agencies to use these technologies.

On a call with reporters on Sunday, a White House spokesperson responded to a question from MIT Technology Review about whether any requirements are anticipated for the future, saying, “I can imagine, honestly, a version of a call like this in some number of years from now and there’ll be a cryptographic signature attached to it that you know you’re actually speaking to [the White House press team] and not an AI version.” This executive order intends to “facilitate technological development that needs to take place before we can get to that point.”

The White House says it plans to push forward the development and use of these technologies with the Coalition for Content Provenance and Authenticity, called the C2PA initiative. As we’ve previously reported, the initiative and its affiliated open-source community has been growing rapidly in recent months as companies rush to label AI-generated content. The collective includes some major companies like Adobe, Intel, and Microsoft and has devised a new internet protocol that uses cryptographic techniques to encode information about the origins of a piece of content.
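The core idea behind such protocols is that a piece of content carries a manifest with an origin claim and a cryptographic tag, so any later edit to the content breaks verification. The sketch below is a deliberately simplified stand-in: real C2PA manifests use certificates and public-key signatures, whereas this toy uses a shared-secret HMAC (with a hypothetical key and origin label) purely to show how a tag binds an origin claim to the exact bytes of the content.

```python
# Toy sketch of content provenance: bind a verifiable tag to content so
# that tampering is detectable. NOT the actual C2PA protocol, which uses
# X.509 certificates and public-key signatures rather than a shared key.
import hashlib
import hmac

SECRET = b"press-office-key"  # hypothetical signing key for illustration

def sign(content: bytes, origin: str) -> dict:
    """Produce a manifest: an origin claim plus a tag over (origin, content)."""
    tag = hmac.new(SECRET, origin.encode() + content, hashlib.sha256).hexdigest()
    return {"origin": origin, "tag": tag}

def verify(content: bytes, manifest: dict) -> bool:
    """Recompute the tag and check it against the manifest."""
    expected = hmac.new(SECRET, manifest["origin"].encode() + content,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])

clip = b"official statement audio bytes"
manifest = sign(clip, "example-press-office")

print(verify(clip, manifest))                 # content matches its manifest
print(verify(b"doctored " + clip, manifest))  # any edit breaks the tag
```

This is the property the White House spokesperson alluded to: a recipient who can verify the tag knows both who claims the content and that it hasn’t been altered since it was signed.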

The coalition does not have a formal relationship with the White House, and it’s unclear what that collaboration would look like. In response to questions, Mounir Ibrahim, cochair of C2PA’s governmental affairs team, said, “C2PA has been in regular contact with various offices at the NSC [National Security Council] and White House for some time.”

The emphasis on developing watermarking is good, says Emily Bender, a professor of linguistics at the University of Washington. She says she also hopes content labeling systems can be developed for text; current watermarking technologies work best on images and audio. “[The executive order] of course wouldn’t be a requirement to watermark, but even an existence proof of reasonable systems for doing so would be an important step,” Bender says.

Will this executive order have teeth? Is it enforceable? 

While Biden’s executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced. 

The order calls on the National Institute of Standards and Technology to set standards for extensive “red team” testing—meaning tests meant to break the models in order to expose vulnerabilities—before models are launched. NIST has already been somewhat effective at documenting how accurate or biased AI systems such as facial recognition are. In 2019, a NIST study of over 200 facial recognition systems revealed widespread racial bias in the technology.

However, the executive order does not require that AI companies adhere to NIST standards or testing methods. “Many aspects of the EO still rely on voluntary cooperation by tech companies,” says Bradford, the law professor at Columbia.

The executive order requires all companies developing new AI models whose computational size exceeds a certain threshold to notify the federal government when training the system and then share the results of safety tests in accordance with the Defense Production Act. This law has traditionally been used to intervene in commercial production at times of war or national emergencies such as the covid-19 pandemic, so this is an unusual way to push through regulations. A White House spokesperson says this mandate will be enforceable and will apply to all future commercial AI models in the US, but will likely not apply to AI models that have already been launched. The threshold is set at a point where all major AI models that could pose risks “to national security, national economic security, or national public health and safety” are likely to fall under the order, according to the White House’s fact sheet.

The executive order also calls for federal agencies to develop rules and guidelines for different applications, such as supporting workers’ rights, protecting consumers, ensuring fair competition, and administering government services. These more specific guidelines prioritize privacy and bias protections.

“Throughout, at least, there is the empowering of other agencies, who may be able to address these issues seriously,” says Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face. “Albeit with a much harder and more exhausting battle for some of the people most negatively affected by AI, in order to actually have their rights taken seriously.”

What has the reaction to the order been so far? 

Major tech companies have largely welcomed the executive order. 

Brad Smith, the vice chair and president of Microsoft, hailed it as “another critical step forward in the governance of AI technology.” Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

“It’s great to see the White House investing in AI’s growth by creating a framework for responsible AI practices,” said Adobe’s general counsel and chief trust officer, Dana Rao. 

The White House’s approach remains friendly to Silicon Valley, emphasizing innovation and competition rather than limitation and restriction. The strategy is in line with the policy priorities for AI regulation set forth by Senate Majority Leader Chuck Schumer, and it further crystallizes the lighter touch of the American approach to AI regulation. 

However, some AI researchers say that sort of approach is cause for concern. “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms,” says Mitchell.

Instead of preventing AI harms before deployment—for example, by making tech companies’ data practices better—the White House is using a “whack-a-mole” approach, tackling problems that have already emerged, she adds.  

The highly anticipated executive order on artificial intelligence comes two days before the UK’s AI Safety Summit and attempts to position the US as a global leader on AI policy. 

It will likely have implications outside the US, adds Bradford. It will set the tone for the UK summit and will likely embolden the European Union to finalize its AI Act, as the executive order sends a clear message that the US agrees with many of the EU’s policy goals.

“The executive order is probably the best we can expect from the US government at this time,” says Bradford.

Correction: A previous version of this story had Emily Bender’s title wrong. This has now been corrected. We apologize for any inconvenience.

Everything you need to know about artificial wombs


On September 19, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans. These medical devices are designed to give extremely premature infants a bit more time to develop in a womblike environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans. 

“The most challenging question to answer is how much unknown is acceptable,” said An Massaro, FDA’s lead neonatologist in the Office of Pediatric Therapeutics, at the committee meeting. That’s a question regulators will have to grapple with as this research moves out of the lab and into first-in-human trials.

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the fluid they would have had in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant’s blood could cycle through an artificial lung to pick up oxygen.

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.

Other researchers are working on artificial wombs too, though they’re a bit farther behind. Scientists in Australia and Japan are developing a system very similar to EXTEND. In Europe, the Perinatal Life Support project is working on its own technology. And in Canada, researchers have been testing their version of an artificial womb on piglets. Researchers at the University of Michigan are working on similar technology intended for preemies for whom conventional therapies aren’t likely to work. Rather than floating in fluid, the infants would only have their lungs filled. It’s a system that could be used in existing ICUs with relatively few modifications, so “we believe that that has more clinical applicability,” says Mychaliska, who is leading the project.

When will this technology be tested in humans?

The technology used in the EXTEND system has been tested on lamb fetuses, about 300 so far, with good results. The lambs can survive and develop inside the sac for three or even four weeks.

To move forward with human testing, the company needs an investigational device exemption from the FDA. At a meeting in June, Flake said Vitara might be ready to request that exemption in September or October. But at the September advisory committee meeting, when Flake was directly asked how far the technology had advanced, he declined to answer. He said he could discuss timing with the advisory committee during the portion of the meeting that was closed to the public. To greenlight a trial, FDA officials need to be convinced that babies who try EXTEND are likely to benefit from the system, and that they’ll fare at least as well as babies who receive the current standard of care.

What would the first human tests look like?

The procedure requires a carefully choreographed transfer. First, the baby must be delivered via cesarean section and immediately have tubes inserted into the umbilical cord before being transferred into the fluid-filled container.

The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” Mychaliska says. At 22 weeks gestation, babies are tiny, often weighing less than a pound. And their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities. 

Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”

Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.

What are the risks?

One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.  

And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk for infection and bleeding. Delivery via a C-section can also have an impact on future pregnancies.  

So if it works, could babies be grown entirely outside the womb?

Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions. 

The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.

What are the ethical concerns?

In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute. 

If the artificial wombs work, more significant questions will come up. When these devices are used to save infants born too soon, “this is obviously potentially a wonderful technology,” Ravitsky says. But as with any technology, other uses might arise. Imagine that a woman wants to terminate a pregnancy at 21 or 22 weeks and this technology is available. How would that impact a woman’s right to choose whether to carry a pregnancy to term? “When we say that a woman has the right to terminate, do we mean the right to physically separate from the fetus? Or do we mean the right not to become a biological mother?” Ravitsky asks.

With the technology at an early stage, that situation might seem far-fetched, but it’s worth thinking about the implications now. Elizabeth Chloe Romanis, who studies health-care law and bioethics at Durham University in the UK, argued at the advisory meeting that “an entity undergoing gestation outside the body is a unique human entity,” one that might have different needs and require different protections. 

The advent of an artificial womb raises all kinds of questions, Ravitsky says: “What’s a fetus, what’s a baby, what’s a newborn, what’s birth, what’s viability?” These questions have ethical implications, but also legal ones. “If we don’t start thinking about it, now we’re going to have lots of blind spots,” she says.