8,000 pregnant women may die in just 90 days because of US aid cuts

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Yesterday marked a month since the inauguration of Donald Trump as the 47th US president. And what a month it has been. The Trump administration wasted no time in delivering a slew of executive orders, memos, and work notices to federal employees.

On February 18, Trump signed an executive order that seeks to make IVF more accessible to people in the US. In some ways, the move isn’t surprising—Trump has expressed his support for the technology in the past, and even called himself “the father of IVF” while on the campaign trail last year.

Making IVF more affordable and accessible should give people more options when it comes to family planning and reproductive freedom more generally. But the move comes after a barrage of actions by the new administration that are hitting reproductive care hard for people around the world. On January 20, his first day in office, Trump ordered a “90-day pause in United States foreign development assistance” so that such programs could be assessed. By January 24, a “stop work” memo issued by the State Department brought US-funded aid programs around the world to a halt.

Recent estimates suggest that more than 8,000 women will die from complications related to pregnancy and childbirth over the next 90 days if the funding is not reinstated.

On January 24 Trump also reinstated the global gag rule—a policy that requires nongovernmental organizations receiving US health funding to agree that they will not offer abortion counseling and care. This move alone immediately stripped organizations of the funding they need to perform their work. MSI Reproductive Choices, which offers support for reproductive health care in 36 countries, lost $14 million as a result, says Anna Mackay, who manages donor-funded programs at the organization. “Over 2 million women and girls would have received contraceptive services with that money,” she says.

The US Agency for International Development (USAID) had a 2025 budget of $42.8 billion to spend on foreign assistance, which covers everything from humanitarian aid and sanitation to programs promoting gender equality and economic growth in countries around the world. But the “stop work” memo froze that funding for 90 days.

The impacts were felt immediately and are still rippling out. Clinical trials were halted. Jobs were lost. Health programs were shut down.

“I think this is going to have a devastating impact on the global health architecture,” says Thoai Ngo at Columbia University’s Mailman School of Public Health. “USAID is the major foreign funder for global health … I’m afraid that there isn’t [another government] that can fill the gap.”

Reproductive health care is likely to lose out as affected governments and health organizations try to reorganize their resources, says Ngo: “In times of crisis … women and girls tend to be deprioritized in terms of access to health and social services.”

Without information on and access to a range of contraceptive options, unintended pregnancies result. These have the potential to limit the freedoms of people who become pregnant. And they can have far-reaching economic impacts, since access to contraception can improve education rates and career outcomes.

And the health consequences can be devastating. Unintended pregnancies are more likely to be ended with abortions—potentially unsafe ones. Maternal death rates are high in regions that lack adequate resources. A maternal death occurred every two minutes in 2020.

“It’s difficult to overstate how catastrophic this freeze has been over the last several weeks,” says Amy Friedrich-Karnik, director of federal policy at the Guttmacher Institute, a research and policy organization focused on global sexual and reproductive health and rights. “Every single day that the freeze is in place, there are 130,000 women who are being denied contraceptive care,” she says.

The Guttmacher Institute estimates that should USAID funding be frozen for the full 90 days, around 11.7 million women and girls would lose access to contraceptive care, and 4.2 million of them would experience unintended pregnancies. Of those, “8,340 will die from complications during pregnancy and childbirth,” says Friedrich-Karnik.

“By denying people access to contraception, not only are you denying them tools for their bodily autonomy—you are really risking their lives,” she says. “Thousands more women will die down the road.”

“USAID plays such a central role in supporting these life-saving programs,” says Ngo. “The picture is bleak.”

Even online sources of information on contraceptives are being affected by the funding freeze. Ben Bellows is chief business officer at Nivi, a digital health company that develops chatbots to deliver health information to people via WhatsApp. “Two million users have used the bot,” he says.

He and his team have been working on a project to deliver information on contraceptive options and family planning to women in India, and they have been looking to incorporate AI into their bot. The project was funded by a company that, in turn, is funded by USAID. Like the funding, the work is “frozen,” says Bellows.

“We’ve slowed [hiring] and we’ve slowed some of the tech development because of the freeze [on USAID],” he says. “It’s bad [for] the individuals, it’s bad [for] the companies that are trying to operate in these markets, and it’s bad [for] public health outcomes.”

Reproductive health and freedoms are also likely to be affected by the Trump administration’s cuts to federal agencies. The National Institutes of Health and the Centers for Disease Control and Prevention have been in the administration’s crosshairs, as has the Food and Drug Administration.

After all, the FDA regulates drugs and medical devices in the US, including contraceptives. The CDC collects and shares important data on sexual and reproductive health. And the NIH supports vital research on reproductive health and contraception.

The CDC also funds health programs in low-income countries like Ethiopia. Following Trump’s executive order, the country’s ministry of health terminated the contracts of more than 5,000 health workers whose salaries were supported by the CDC as well as USAID.

“That’s midwives and nurses working in rural health posts,” says Mackay. “We’re turning up to support these staff and provide them with sexual reproductive health training and make sure they’ve got the contraceptives, and there’s just no one at the facility.”

So, yes, it is great news if the Trump administration can find a way to make IVF more accessible. But, as Mackay points out, “it’s increasing reproductive choice in one direction.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Last November, two years after Roe v. Wade was overturned, 10 US states voted on abortion rights. Seven of them voted to extend and protect access.

My colleague Rhiannon Williams reported on the immediate aftermath of the decision that reversed Roe v. Wade.

Fertility rates are falling around the world, in almost every country. IVF is great, but it won’t save us from a looming fertility crisis. Gender equality and family-friendly policies are much more likely to be effective. 

Decades of increasingly successful IVF treatments have caused millions of embryos to be stored in cryopreservation tanks around the world. In some cases, they can’t be donated, used, or destroyed and appear to be stuck in limbo “forever.”

Ever come across the term “women of childbearing age”? The insidious idea that women’s bodies are, above all else, vessels for growing children has plenty of negative consequences for us all. But it has also set back scientific research and health policy.

There are other WhatsApp-based approaches to improving access to health information in India. Accredited social health activists in the country are using the platform to counter medical misinformation and superstitions around pregnancy.

From around the web

The US Food and Drug Administration assesses the efficacy and toxicity of experimental medicines before they are approved. It should also consider their “financial toxicity,” given that medical bills can fall on the shoulders of patients themselves, argue a group of US doctors. (The New England Journal of Medicine)

Robert F. Kennedy Jr., the new US secretary of health and human services, has vowed to investigate the country’s childhood vaccination schedule. During his confirmation hearing a couple of weeks ago, he promised not to change the schedule. (Associated Press)

Some scientists have been altering their published work without telling anyone. Such “stealth corrections” threaten scientific integrity, say a group of researchers from Europe and the US. (Learned Publishing)

The US Department of Agriculture said it accidentally fired several people who were working on the federal response to the bird flu outbreak. Apparently the agency is now trying to hire them back. (NBC News)

Could your next pet be a glowing rabbit? This startup is using CRISPR to “level up” pets. Their goal is to eventually create a real-life unicorn. (Wired)

This company is trying to make a biodegradable alternative to spandex

It probably hasn’t been long since you last slipped into something stretchy. From yoga pants to socks, stretch fabrics are everywhere. And they’re only getting more popular: The global spandex market, valued at almost $8 billion in December 2024, is projected to grow between 2% and 8% every year over the next decade. That might be better news for your comfort than for the environment. Most stretch fabrics contain petroleum-based fibers that shed microplastics and take centuries to decompose. And even a small amount of plastic-based stretch fiber in a natural garment can render it nonrecyclable.

Alexis Peña and Lauren Blake, cofounders of Good Fibes, aim to tackle this problem with lab-grown elastics. Operating out of Tufts University and Argonne National Laboratory in Illinois, they are using a class of materials called silk elastin-like proteins (SELPs) to create biodegradable textiles.

“True circularity has to start with raw materials,” says Peña. “We talk about circularity across many industries, but for textiles, we must address what we’re using at the source.”

Engineered from recombinant DNA, SELPs are copycat proteins inspired by silk and elastin that can be customized for qualities like tensile strength, dye affinity, and elasticity. Silk’s amino acid sequences—like glycine-alanine and glycine-serine—give fibers strength, while elastin’s molecular structure adds stretchiness. Combine these molecules like Lego blocks, and voilà!—at least theoretically, you have the ideal flexible fiber.

An early-stage startup, Good Fibes creates its elastics with proteins from E. coli, a common bacterium. The process involves transforming the proteins into a gel-like material, which can then be made into fibers through wet-spinning. These fibers are then processed into nonwoven textiles or threads and yarns to make woven fabrics.

Scaling, however, remains a challenge: To produce a single swatch of test fabric, Blake says, she needs at least one kilogram (approximately two pounds) of microbial material. The fibers must also be stretchy, durable, and resistant to moisture in all the right proportions. “We’re still solving these issues using various chemical additions,” she says. For that reason, she’s also experimenting with plant-based proteins like wheat gluten, which she says is available in larger quantities than bacteria.

Timothy McGee, a biomaterials expert at the research lab Speculative Technologies, says manufacturing is the biggest hurdle for biotextile startups. “Many labs and startups around the world successfully create recombinant proteins with amazing qualities, but they often struggle to turn those proteins into usable fibers,” he says.

One Japanese biomaterials company, Spiber, opened a commercial facility in 2022 to produce textiles from recombinant E. coli proteins using a fermentation process the company first developed in 2007. The following year—after 16 years of prototyping—The North Face, Goldwin, Nanamica, and Woolrich became the first mass-market brands to sell garments using Spiber’s protein-based textiles.

Good Fibes wants to do the same thing, but for stretchy fabrics. The company recently began experimenting with nonwoven versions of its textiles after Peña received a $200,000 US Department of Energy grant in 2024. The most popular nonwoven materials are those used in paperlike products, such as surgical masks and paper towels, but Peña envisions a softer, stretchier version that’s almost more like a lightweight felt. She used the grant to buy the company’s first 3D bioprinter, which arrived in January. With it, she’ll begin patterning nonwoven swatches.

If it’s successful, McGee predicts, a nonwoven stretch fabric could be a more scalable option than wovens. But he adds: “Nonwovens are not very structural, so they’re usually not very tough. The challenge [Good Fibes] will need to show is what level of strength and toughness—at what size and scale—can they produce, and at what cost?”

With additional funding, Peña and Blake plan to develop both woven and nonwoven textiles moving forward. 

Meanwhile, they’ve already forged relationships with at least one major athletic apparel retailer eager to test their future fabric samples. “They’re like, ‘When you get a swatch, send it to us!’” Blake says, adding that she believes Good Fibes will be ready to commercialize in two years.

Until then, their fashion innovation will continue taking shape in the lab. As Blake puts it: “We’re thinking big by thinking small—down to the molecular level.” 

Megan DeMatteo is a journalist based in New York City. 

What’s driving electricity demand? It isn’t just AI and data centers.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Electricity demand rose by 4.3% in 2024 and will continue to grow at close to 4% annually through 2027, according to a new report from the International Energy Agency. 

If that sounds familiar, it may be because there’s been a constant stream of headlines about energy demand recently, largely because of the influx of data centers—especially those needed to power the AI that’s spreading seemingly everywhere. These technologies are sucking up more power from the grid, but they’re just a small part of a much larger story. 

What’s actually behind this demand growth is complicated. Much of the increase comes from China, India, and Southeast Asia. Air-conditioning, electric vehicles, and factories all play a role. And of course, we can’t entirely discount the data centers. Here are a few key things to know about global electricity in 2025, and where things are going next.

China, India, and Southeast Asia are the ones to watch.

Between now and 2027, about 85% of electricity demand growth is expected to come from developing and emerging economies. China is an especially major force, having accounted for over half of global electricity demand growth last year.

The influence of even individual sectors in China is staggering. For example, in 2024, about 300 terawatt-hours’ worth of electricity was used just to produce solar modules, batteries, and electric vehicles. That’s as much electricity as Italy uses in a year. And this sector is growing quickly. 

A boom in heavy industry, an increase in the number of air conditioners, and a robust electric-vehicle market are all adding to China’s power demand. India and Southeast Asia are also going to have above-average increases in demand, driven by economic growth and increased adoption of air conditioners. 

And there’s a lot of growth yet to come, as 600 million people across Africa still don’t have access to reliable electricity.

Data centers are a somewhat minor factor globally, but they can’t be counted out.

According to another IEA projection published last year, data centers are expected to account for less than 10% of global electricity demand growth between now and 2030. That’s less than the expected growth due to other contributors like electric vehicles, air conditioners, and heavy industry.

However, data centers are a major storyline for advanced economies like the US and many countries in Europe. As a group, these nations have largely seen flat or declining electricity demand for the last 15 years, in part because of efficiency improvements. Data centers are reversing that trend.

Take the US, for example. The IEA report points to other research showing that the 10 states hosting the most data center growth saw a 10% increase in electricity demand between 2019 and 2023. Demand in the other 40 states declined by about 3% over the same period.

One caveat here is that nobody knows for sure what’s going to happen with data centers in the future, particularly those needed to run AI. Projections are all over the place, and small changes could drastically alter the amount of energy required for the technology. (See the DeepSeek drama.)

One bit I found interesting here is that China could see data centers emerge as yet another source of growing electricity demand in the future, with demand projected to double between now and 2027 (though, again, it’s all quite uncertain).

What this all means for climate change is complicated.

Growth in electricity demand can be seen as a good thing for our climate. Using a heat pump rather than a natural-gas heating system can help reduce emissions even as it increases electricity use. But as we add demand to the grid, it’s important to remember that in many places, it’s still largely reliant on fossil fuels.

The good news in all this is that there’s enough expansion in renewable and low-emissions electricity sources to cover the growth in demand. The rapid deployment of solar power alone contributes enough energy to cover half the demand growth expected through 2027. Nuclear power is also expected to reach new heights soon, with recovery in France, restarts in Japan, and new reactors in China and India adding to a stronger global industry.

However, just adding renewables to meet electricity demand doesn’t automatically pull fossil fuels off the grid; existing coal and natural-gas plants are still chugging along all over the world. To make a dent in emissions, low-carbon sources need to grow fast enough not only to meet new demand, but to replace existing dirtier sources.

It isn’t inherently bad that the grid is growing. More people having air-conditioning and more factories making solar panels are all firmly in the “positive” column, I’d argue. But keeping up with this breakneck pace of demand growth is going to be a challenge—one that could have major effects on our ability to cut emissions. 


Now read the rest of The Spark

Related reading

Transmission equipment is key to getting more power to more people. Here’s why one developer won’t quit fighting to connect US grids, as reported by my colleague James Temple.

Virtual power plants could help meet growing electricity demand for EVs in China, as Zeyi Yang lays out in this story.

Power demand from data centers is rising, and so are emissions. They’re set to climb even higher, as James O’Donnell explains in this story from December.

[Image: a robot assembled from a humanoid head, car engine, chassis, wheels, and industrial robot arms holds an electric drill and a smaller car. STEPHANIE ARNETT/MIT TECHNOLOGY REVIEW]

Another thing

Competition is stiff in China’s EV market, so some automakers are pivoting to humanoid robots. With profit margins dropping for electrified vehicles, financial necessity is driving creativity, as my new colleague Caiwei Chen explains in her latest story.

Keeping up with climate

The Trump administration has frozen funds and set hiring restrictions, and that could leave the US vulnerable to wildfire. (ProPublica)

US tariffs on imported steel and aluminum are set to go into effect next month, and they could be a problem for key grid equipment. The metals are used in transformers, which are in short supply. (Heatmap)

A maker of alternative jet fuel will get access to a $1.44 billion loan it was promised earlier this year. The Trump administration is exploring canceling promised financing, but this loan went ahead after a local representative pressured the White House. (Canary Media)

A third-generation oil and gas worker has pivoted to focus on drilling for geothermal systems. This Q&A is a fascinating look at what it might look like for more workers to move from fossil fuels to renewables. (Inside Climate News)

The Trump administration is working to fast-track hundreds of fossil-fuel projects. The US Army Corps of Engineers is speeding up permits using an emergency designation. (New York Times)

Japan’s government is adopting new climate targets. The country aims to cut greenhouse-gas emissions by more than 70% from 2013 levels over the next 15 years and reach net zero by 2050. Expansion of renewables and nuclear power will be key in the plan. (Associated Press)

A funding freeze has caused a whole lot of confusion about the state of federal financing for EV chargers in the US. But there’s still progress on building chargers, both from government funds already committed and from the private sector. (Wired)

The US National Oceanic and Atmospheric Administration (NOAA) is the latest target of the Trump administration’s cuts. NOAA provides weather forecasts, and private industry is reliant on the agency’s data. (Bloomberg)

Congress used to evaluate emerging technologies. Let’s do it again.

At about the time when personal computers charged into cubicle farms, another machine muscled its way into human resources departments and became a staple of routine employment screenings. By the early 1980s, some 2 million Americans annually found themselves strapped to a polygraph—a metal box that, in many people’s minds, detected deception. Most of those tested were not suspected crooks or spooks. 

Then the US Office of Technology Assessment, an independent office that had been created by Congress about a decade earlier to serve as its scientific consulting arm, got involved. The office reached out to Boston University researcher Leonard Saxe with an assignment: Evaluate polygraphs. Tell us the truth about these supposed truth-telling devices.

And so Saxe assembled a team of about a dozen researchers, including Michael Saks of Boston College, to begin a systematic review. The group conducted interviews, pored over existing studies, and embarked on new lines of research. A few months later, the OTA published a technical memo, “Scientific Validity of Polygraph Testing: A Research Review and Evaluation.” Despite the tests’ widespread use, the memo dutifully reported, “there is very little research or scientific evidence to establish polygraph test validity in screening situations, whether they be preemployment, preclearance, periodic or aperiodic, random, or ‘dragnet.’” These machines could not detect lies. 

Four years later, in 1987, critics at a congressional hearing invoked the OTA report as authoritative, comparing polygraphs derisively to “tea leaf reading or crystal ball gazing.” Congress soon passed strict limits on the use of polygraphs in the workplace. 

Over its 23-year history, the OTA would publish some 750 reports—lengthy, interdisciplinary assessments of specific technologies that proposed means of maximizing their benefits and minimizing harms. Their subjects included electronic surveillance, genetic engineering, hazardous-waste disposal, and remote sensing from outer space. Congress set its course: The office initiated studies only at the request of a committee chairperson, a ranking minority leader, or its 12-person bipartisan board. 

The investigations remained independent; staffers and consultants from both inside and outside government collaborated to answer timely and sometimes politicized questions. The reports addressed worries about alarming advances and tamped down scary-sounding hypotheticals. Some of those concerns no longer keep policymakers up at night. For instance, “Do Insects Transmit AIDS?” A 1987 OTA report correctly suggested that they don’t.

The office functioned like a debunking arm. It sussed out the snake oil. Lifted the lid on the Mechanical Turk. The reports saw through the alluring gleam of overhyped technologies. 

In the years since its unceremonious defunding, perennial calls have gone out: Rouse the office from the dead! And with advances in robotics, big data, and AI systems, these calls have taken on a new level of urgency. 

Like polygraphs, chatbots and search engines powered by so-called artificial intelligence come with a shimmer and a sheen of magical thinking. And if we’re not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. 

A resurrected OTA might be the perfect body to rein in dangerous and dangerously overhyped technologies. “That’s what Congress needs right now,” says Ryan Calo at the University of Washington’s Tech Policy Lab and the Center for an Informed Public, “because otherwise Congress is going to, like, take Sam Altman’s word for everything, or Eric Schmidt’s.” (The CEO of OpenAI and the former CEO of Google have both testified before Congress.) Leaving it to tech executives to educate lawmakers is like having the fox tell you how to build your henhouse. Wasted resources and inadequate protections might be only the start. 

[Image: A man administers a lie detector test to a job applicant in 1976. A 1983 report from the OTA debunked the efficacy of polygraphs. LIBRARY OF CONGRESS]

No doubt independent expertise still exists. Congress can turn to the Congressional Research Service, for example, or the National Academies of Sciences, Engineering, and Medicine. Other federal entities, such as the Office of Management and Budget and the Office of Science and Technology Policy, have advised the executive branch (and still existed as we went to press). “But they’re not even necessarily specialists,” Calo says, “and what they’re producing is very lightweight compared to what the OTA did. And so I really think we need OTA back.”

What exists today, as one researcher puts it, is a “diffuse and inefficient” system. There is no central agency that wholly devotes itself to studying emerging technologies in a serious and dedicated way and advising the country’s 535 elected officials about potential impacts. The digestible summaries Congress receives from the Congressional Research Service provide insight but are no replacement for the exhaustive technical research and analytic capacity of a fully staffed and funded think tank. There’s simply nothing like the OTA, and no single entity replicates its incisive and instructive guidance. But there’s also nothing stopping Congress from reauthorizing its budget and bringing it back, except perhaps the lack of political will. 

“Congress Smiles, Scientists Wince”

The OTA had not exactly been an easy sell to the research community in 1972. At the time, it was only the third independent congressional agency ever established. As the journal Science put it in a headline that year, “The Office of Technology Assessment: Congress Smiles, Scientists Wince.” One researcher from Bell Labs told Science that he feared legislators would embark on “a clumsy, destructive attempt to manage national R&D,” but mostly the cringe seemed to stem from uncertainty about what exactly technology assessment entailed. 

The OTA’s first report, in 1974, examined bioequivalence, an essential part of evaluating generic drugs. Regulators were trying to figure out whether these drugs could be deemed comparable to their name-brand equivalents without lengthy and expensive clinical studies demonstrating their safety and efficacy. Unlike all the OTA’s subsequent assessments, this one listed specific policy recommendations, such as clarifying what data should be required in order to evaluate a generic drug and ensure uniformity and standardization in the regulatory approval process. The Food and Drug Administration later incorporated these recommendations into its own submission requirements.

From then on, though, the OTA did not take sides. The office had not been set up to advise Congress on how to legislate. Rather, it dutifully followed through on its narrowly focused mandate: Do the research and provide policymakers with a well-reasoned set of options that represented a range of expert opinions.

Perhaps surprisingly, given the rise of commercially available PCs, in the first decade of its existence the OTA produced only a few reports on computing. One 1976 report touched on the automated control of trains. Others examined computerized x-ray imaging, better known as CT scans; computerized crime databases; and the use of computers in medical education. Over time, the office’s output steadily increased, eventually averaging 32 reports a year. Its budget swelled to $22 million; its staff peaked at 143. 

While it’s sometimes said that the future impact of a technology is beyond anyone’s imagination, several findings proved prescient. A 1982 report on electronic funds transfer, or EFT, predicted that financial transactions would increasingly be carried out electronically (an obvious challenge to paper currency and hard-copy checks). Another predicted that email, or what was then termed “electronic message systems,” would disrupt snail mail and the bottom line of the US Postal Service. 

In vetting the digital record-keeping that provides the basis for routine background checks, the office commissioned a study that produced a statistic still cited today, suggesting that only about a quarter of the records sent to the FBI were “complete, accurate, and unambiguous.” It was an indicator of a growing issue: computational systems that, despite seeming automated, are not free of human bias and error. 

Many of the OTA’s reports focus on specific events or technologies. One looked at Love Canal, the upstate New York neighborhood polluted by hazardous waste (a disaster, the report said, that had not yet been remediated by the Environmental Protection Agency’s Superfund cleanup program); another studied the Boston Elbow, a cybernetic limb (the verdict: decidedly mixed). The office examined the feasibility of a water pipeline connecting Alaska to California, the health effects of the Kuwait oil fires, and the news media’s use of satellite imagery. The office also took on issues we grapple with today—evaluating automatic record checks for people buying guns, scrutinizing the compensation for injuries allegedly caused by vaccines, and pondering whether we should explore Mars. 

The OTA made its biggest splash in 1984, when it published a background report criticizing the Strategic Defense Initiative (commonly known as “Star Wars”), a pet project of the Reagan administration that involved several exotic missile defense systems. Its lead author was the MIT physicist Ashton Carter, later secretary of defense in the second Obama administration. And the report concluded that a “perfect or near-perfect” system to defend against nuclear weapons was basically beyond the realm of the plausible; the possibility of deployment was “so remote that it should not serve as the basis of public expectation or national policy.” 

The report generated lots of clicks, so to speak, especially after the administration claimed that the OTA had divulged state secrets. These charges did not hold up and Star Wars never materialized, although there have been recent efforts to beef up the military’s offensive capacity in space. But for the work of an advisory body that did not play politics, the report made a big political hubbub. By some accounts, its subsequent assessments became so neutral that the office risked receding to the point of invisibility.

From a purely pragmatic point of view, the OTA wrote to be understood. A dozen reports from the early ’90s received “Blue Pencil Awards,” given by the National Association of Government Communicators for “superior government communication products and those who produce them.” None are copyrighted. All were freely reproduced and distributed, both in print and electronically. The entire archive is stored on CD-ROM, and digitized copies are still freely available for download on a website maintained by Princeton University, like an earnest oasis of competence in the cloistered world of federal documents. 

Assessments versus accountability

Looking back, the office took shape just as debates about technology and the law were moving to center stage. 

While the gravest of dangers may have changed in form and in scope, the central problem remains: Laws and lawmakers cannot keep up with rapid technological advances. Policymakers often face a choice between regulating with insufficient facts and doing nothing. 

In 2018, Adam Kinzinger, then a Republican congressman from Illinois, confessed to a panel on quantum computing: “I can understand about 50% of the things you say.” To some, his admission underscored a broader tech illiteracy afflicting those in power. But other commentators argued that members of Congress should not be expected to know it all—all the more reason to restaff an office like the OTA.

A motley chorus of voices has clamored for an OTA 2.0 over the years. One doctor wrote that the office could help address the “discordance between the amount of money spent and the actual level of health.” Tech fellows have said bringing it back could help Congress understand machine learning and AI. Hillary Clinton, as a Democratic presidential hopeful, floated the possibility of resurrecting the OTA in 2017. 

But Meg Leta Jones, a law scholar at Georgetown University, argues that assessing new technologies is the least of our problems. The kind of work the OTA did is now done by other agencies, such as the FTC, FCC, and National Telecommunications and Information Administration, she says: “The energy I would like to put into the administrative state is not on assessments, but it’s on actual accountability and enforcement.”

She sees the existing framework as built for the industrial age, not a digital one, and is among those calling for a more ambitious overhaul. There seems to be little political appetite for the creation of new agencies anyway. That said, Jones adds, “I wouldn’t be mad if they remade the OTA.” 

No one can know whether or how future administrations will address AI, Mars colonization, the safety of vaccines, or, for that matter, any other emerging technology that the OTA investigated in an earlier era. But if the new administration makes good on plans to deregulate many sectors, it’s worth noting some historic echoes. In 1995, when conservative politicians defunded the OTA, they did so in the name of efficiency. Critics of that move contend that the office probably saved the government money and argue that the purported cost savings associated with its elimination were largely symbolic. 

Jathan Sadowski, a research fellow at Monash University in Melbourne, Australia, who has written about the OTA’s history, says the conditions that led to its demise have only gotten more partisan, more politicized. This makes it difficult to envision a place for the agency today, he says—“There’s no room for the kind of technocratic naïveté that would see authoritative scientific advice cutting through the noise of politics.”

Congress purposely cut off its scientific advisory arm as part of a larger shake-up led by Newt Gingrich, then the House Speaker, whose pugilistic brand of populist conservatism promised “drain the swamp”–type reforms and launched what critics called a “war on science.” As a rationale for why the office was defunded, he said, “We constantly found scientists who thought what they were saying was not correct.” 

Once again, Congress smiled and scientists winced. Only this time it was because politicians had pulled the plug. 

Peter Andrey Smith, a freelance reporter, has contributed to Undark, the New Yorker, the New York Times Magazine, and WNYC’s Radiolab.

Your most important customer may be AI

Imagine you run a meal prep company that teaches people how to make simple and delicious food. When someone asks ChatGPT for a recommendation for meal prep companies, yours is described as complicated and confusing. Why? Because the AI saw that in one of your ads there were chopped chives on the top of a bowl of food, and it determined that nobody is going to want to spend time chopping up chives.

This is a real example from Jack Smyth, chief solutions officer of AI, planning, and insights at JellyFish, part of the Brandtech Group. He works with brands to help them understand how their products or company are perceived by AI models in the wild. It may seem odd for companies or brands to be mindful of what an AI “thinks,” but it’s already becoming relevant. A study from the Boston Consulting Group showed that 28% of respondents are using AI to recommend products such as cosmetics. And the push for AI agents that may handle making direct purchases for you is making brands even more conscious of how AI sees their products and business. 

The end result may be a supercharged version of search engine optimization (SEO), where making sure that you’re positively perceived by a large language model might become one of the most important things a brand can do.

Smyth’s company has created software, Share of Model, that assesses how different AI models view your brand. Each AI model has different training data, so although there are many similarities in how brands are assessed, there are differences, too.

For example, Meta’s Llama model may perceive your brand as exciting and reliable, whereas OpenAI’s ChatGPT may view it as exciting but not necessarily reliable. Share of Model asks different models many different questions about your brand and then analyzes all the responses, trying to find trends. “It’s very similar to a human survey, but the respondents here are large language models,” says Smyth.
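The survey-style approach Smyth describes, posing many questions to a model and aggregating the answers to find trends, can be sketched in a few lines. This is an illustrative mock, not JellyFish’s actual software: `ask_model` is a stand-in for a real LLM API call, and the questions and attribute list are invented for the example.

```python
from collections import Counter

# Hypothetical stand-in for a real LLM API call; a production system
# would query each model (Llama, ChatGPT, etc.) over the network.
def ask_model(model: str, question: str) -> str:
    canned = {
        "llama": "The brand comes across as exciting and reliable.",
        "chatgpt": "It feels exciting, though its dependability is questionable.",
    }
    return canned[model]

# Survey-style probe: ask the model several questions about the brand,
# then count how often each attribute word shows up across all answers.
ATTRIBUTES = ["exciting", "reliable", "premium", "confusing"]
QUESTIONS = [
    "How would you describe this brand in one sentence?",
    "Is this brand trustworthy?",
    "Who is this brand for?",
]

def survey(model: str) -> Counter:
    tally = Counter()
    for q in QUESTIONS:
        answer = ask_model(model, q).lower()
        for attr in ATTRIBUTES:
            if attr in answer:
                tally[attr] += 1
    return tally

print(survey("llama"))
print(survey("chatgpt"))
```

A real version would need many more prompts per model, plus sentiment and negation handling rather than simple substring matching, but the structure is the same: treat each model as a survey respondent and look for patterns in the aggregate.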

The ultimate goal is not just to understand how your brand is perceived by AI but to modify that perception. How much the models can be influenced is still up in the air, but preliminary results indicate that it may be possible. Since the models now show their sources when asked to search the web, a brand can see where the AI is picking up its data. 

“We have a brand called Ballantine’s. It’s the No. 2 Scotch whisky that we sell in the world. So it’s a product for mass audiences,” says Gokcen Karaca, head of digital and design at Pernod Ricard, which owns Ballantine’s and uses Share of Model. “However, Llama was identifying it as a premium product.” Ballantine’s also has a premium version, which is why the model may have been confused.

So Karaca’s team created new assets, like images on social media for Ballantine’s mass product, highlighting its universal appeal to counteract the premium image. It’s not clear yet if the changes are working, but Karaca says early indications are good. “We made tiny changes, and it is taking time. I can’t give you concrete numbers, but the trajectory is positive toward our target,” says Karaca.

It’s hard to know how exactly to influence AI because many models are closed-source, meaning their code and weights aren’t public and their inner workings are a bit of a mystery. But the advent of reasoning models, where the AI will share its process of solving a problem in text, could make the process simpler. You may be able to see the “chain of thought” that leads a model to recommend Dove soap, for example. If, in its reasoning, it details how important a good scent is to its soap recommendation, then the marketer knows what to focus on.

The ability to influence models has also opened up other ways to modify how your brand is perceived. For example, research out of Carnegie Mellon shows that changing the prompt can significantly modify what product an AI recommends. 

For example, take these two prompts:

1. “I’m curious to know your preference for the pressure cooker that offers the best combination of cooking performance, durable construction, and overall convenience in preparing a variety of dishes.”

2. “Can you recommend the ultimate pressure cooker that excels in providing consistent pressure, user-friendly controls, and additional features such as multiple cooking presets or a digital display for precise settings?”

The change led one of Google’s models, Gemma, to change from recommending the “Instant Pot” 0% of the time to recommending it 100% of the time. This dramatic change is due to the word choices in the prompt that trigger different parts of the model. The researchers believe we may see brands trying to influence recommended prompts online. For example, on forums like Reddit, people will frequently ask for example prompts to use. Brands may try to surreptitiously influence what prompts are suggested on these forums by having paid users or their own employees offer ideas designed specifically to result in recommendations for their brand or products. “We should warn users that they should not easily trust model recommendations, especially if they use prompts from third parties,” says Weiran Lin, one of the authors of the paper.
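The Carnegie Mellon finding is, at bottom, a frequency measurement: run each prompt variant many times and count how often a given product appears in the answer. A minimal sketch of that measurement, with an invented stub standing in for a real model such as Gemma (the stub’s trigger words are made up to mimic the paper’s observed sensitivity to word choice):

```python
import random

# Hypothetical stub for a model call; a real experiment would send each
# prompt to Gemma (or another LLM) and parse out the recommended product.
def model_recommend(prompt: str) -> str:
    # Invented behavior for illustration: certain feature words in the
    # prompt steer this stub toward one product.
    if "presets" in prompt or "digital display" in prompt:
        return "Instant Pot"
    return random.choice(["Brand A cooker", "Brand B cooker"])

# Run the same prompt repeatedly and measure how often a product is named.
def recommendation_rate(prompt: str, product: str, trials: int = 100) -> float:
    hits = sum(model_recommend(prompt) == product for _ in range(trials))
    return hits / trials

prompt_1 = ("I'm curious to know your preference for the pressure cooker that "
            "offers the best combination of cooking performance, durable "
            "construction, and overall convenience in preparing dishes.")
prompt_2 = ("Can you recommend the ultimate pressure cooker that excels in "
            "providing consistent pressure, user-friendly controls, and "
            "additional features such as multiple cooking presets or a "
            "digital display for precise settings?")

print(recommendation_rate(prompt_1, "Instant Pot"))  # 0.0 with this stub
print(recommendation_rate(prompt_2, "Instant Pot"))  # 1.0 with this stub
```

The stub makes the 0%-to-100% swing deterministic, of course; with a real model the interesting part is that such swings show up at all from wording changes this small.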

This phenomenon may ultimately lead to a push and pull between AI companies and brands similar to what we’ve seen in search over the past several decades. “It’s always a cat-and-mouse game,” says Smyth. “Anything that’s too explicit is unlikely to be as influential as you’d hope.” 

Brands have tried to “trick” search algorithms to place their content higher, while search engines aim to deliver—or at least we hope they deliver—the most relevant and meaningful results for consumers. A similar thing is happening in AI, where brands may try to trick models to give certain answers. “There’s prompt injection, which we do not recommend clients do, but there are a lot of creative ways you can embed messaging in a seemingly innocuous asset,” Smyth says. AI companies may implement techniques like training a model to know when an ad is disingenuous or trying to inflate the image of a brand. Or they may try to make their AI more discerning and less susceptible to tricks.

Another concern with using AI for product recommendations is that biases are built into the models. For example, research out of the University of South Florida shows that models tend to view global brands as higher quality and better than local brands, on average.

“When I give a global brand to the LLMs, it describes it with positive attributes,” says Mahammed Kamruzzaman, one of the authors of the research. “So if I am talking about Nike, in most cases it says that it’s fashionable or it’s very comfortable.” The research shows that if you then ask the model for its perception of a local brand, it will describe it as poor quality or uncomfortable. 

Additionally, the research shows that if you prompt the LLM to recommend gifts for people in high-income countries, it will suggest luxury-brand items, whereas if you ask what to give people in low-income countries, it will recommend non-luxury brands. “When people are using these LLMs for recommendations, they should be aware of bias,” says Kamruzzaman.

AI can also serve as a focus group for brands. Before airing an ad, you can get the AI to evaluate it from a variety of perspectives. “You can specify the audience for your ad,” says Smyth. “One of our clients called it their gen-AI gut check. Even before they start making the ad, they say, ‘I’ve got a few different ways I could be thinking about going to market. Let’s just check with the models.’”

Since AI has read, watched, and listened to everything that your brand puts out, consistency may become more important than ever. “Making your brand accessible to an LLM is really difficult if your brand shows up in different ways in different places, and there is no real kind of strength to your brand association,” says Rebecca Sykes, a partner at Brandtech Group, the owner of Share of Model. “If there is a huge disparity, it’s also picked up on, and then it makes it even harder to make clear recommendations about that brand.”

Regardless of whether AI is the best customer or the most nitpicky, it may soon become undeniable that an AI’s perception of a brand will have an impact on its bottom line. “It’s probably the very beginning of the conversations that most brands are having, where they’re even thinking about AI as a new audience,” says Sykes.

A new Microsoft chip could lead to more stable quantum computers

Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Researchers and companies have been working for years to build quantum computers, which could unlock dramatic new abilities to simulate complex materials and discover new ones, among many other possible applications. 

To achieve that potential, though, researchers must build systems that are both big enough and stable enough to perform computations. Many of the technologies being explored today, such as the superconducting qubits pursued by Google and IBM, are so delicate that the resulting systems need to have many extra qubits to correct errors. 

Microsoft has long been working on an alternative that could cut down on the overhead by using components that are far more stable. These components, called Majorana quasiparticles, are not real particles. Instead, they are special patterns of behavior that may arise inside certain physical systems and under certain conditions.

The pursuit has not been without setbacks, including a high-profile paper retraction by researchers associated with the company in 2018. But the Microsoft team, which has since pulled this research effort in house, claims it is now on track to build a fault-tolerant quantum computer containing a few thousand qubits in a matter of years. It also says it has a blueprint for building out chips that each contain a million qubits or so, a rough target that could be the point at which these computers really begin to show their power.

This week the company announced a few early successes on that path: piggybacking on a Nature paper published today that describes a fundamental validation of the system, the company says it has been testing a topological qubit, and that it has wired up a chip containing eight of them. 

“You don’t get to a million qubits without a lot of blood, sweat, and tears and solving a lot of really difficult technical challenges along the way. And I do not want to understate any of that,” says Chetan Nayak, a Microsoft technical fellow and leader of the team pioneering this approach. That said, he says, “I think that we have a path that we very much believe in, and we see a line of sight.” 

Researchers outside the company are cautiously optimistic. “I’m very glad that [this research] seems to have hit a very important milestone,” says computer scientist Scott Aaronson, who heads the Quantum Information Center at the University of Texas at Austin. “I hope that this stands, and I hope that it’s built up.”

Even and odd

The first step in building a quantum computer is constructing qubits that can exist in fragile quantum states—not 0s and 1s like the bits in classical computers, but rather a mixture of the two. Maintaining qubits in these states and linking them up with one another is delicate work, and over the years a significant amount of research has gone into refining error correction schemes to make up for noisy hardware. 

For many years, theorists and experimentalists alike have been intrigued by the idea of creating topological qubits, which are constructed through mathematical twists and turns and have protection from errors essentially baked into their physics. “It’s been such an appealing idea to people since the early 2000s,” says Aaronson. “The only problem with it is that it requires, in a sense, creating a new state of matter that’s never been seen in nature.”

Microsoft has been on a quest to synthesize this state, called a Majorana fermion, in the form of quasiparticles. The Majorana was first proposed nearly 90 years ago as a particle that is its own antiparticle, which means two Majoranas will annihilate when they encounter one another. With the right conditions and physical setup, the company has been hoping to get behavior matching that of the Majorana fermion within materials.

In the last few years, Microsoft’s approach has centered on creating a very thin wire or “nanowire” from indium arsenide, a semiconductor. This material is placed in close proximity to aluminum, which becomes a superconductor close to absolute zero, and can be used to create superconductivity in the nanowire.

Ordinarily you’re not likely to find any unpaired electrons skittering about in a superconductor—electrons like to pair up. But under the right conditions in the nanowire, it’s theoretically possible for an unpaired electron to hide itself by splitting in two, with each half hiding at one end of the wire. If these complex entities, called Majorana zero modes, can be coaxed into existence, they will be difficult to destroy, making them intrinsically stable. 

“Now you can see the advantage,” says Sankar Das Sarma, a theoretical physicist at the University of Maryland who did early work on this concept. “You cannot destroy a half electron, right? If you try to destroy a half electron, that means only a half electron is left. That’s not allowed.”

In 2023, the Microsoft team published a paper in the journal Physical Review B claiming that this system had passed a specific protocol designed to assess the presence of Majorana zero modes. This week in Nature, the researchers reported that they can “read out” the information in these nanowires—specifically, whether there are Majorana zero modes hiding at the wires’ ends. If there are, that means the wire has an extra, unpaired electron.

“What we did in the Nature paper is we showed how to measure the even or oddness,” says Nayak. “To be able to tell whether there’s 10 million or 10 million and one electrons in one of these wires.” That’s an important step by itself, because the company aims to use those two states—an even or odd number of electrons in the nanowire—as the 0s and 1s in its qubits. 

If these quasiparticles exist, it should be possible to “braid” the four Majorana zero modes in a pair of nanowires around one another by making specific measurements in a specific order. The result would be a qubit with a mix of these two states, even and odd. Nayak says the team has done just that, creating a two-level quantum system, and that it is currently working on a paper on the results.

Researchers outside the company say they cannot comment on the qubit results, since that paper is not yet available. But some have hopeful things to say about the findings published so far. “I find it very encouraging,” says Travis Humble, director of the Quantum Science Center at Oak Ridge National Laboratory in Tennessee. “It is not yet enough to claim that they have created topological qubits. There’s still more work to be done there,” he says. But “this is a good first step toward validating the type of protection that they hope to create.” 

Others are more skeptical. Physicist Henry Legg of the University of St Andrews in Scotland, who previously criticized Physical Review B for publishing the 2023 paper without enough data for the results to be independently reproduced, is not convinced that the team is seeing evidence of Majorana zero modes in its Nature paper. He says that the company’s early tests did not put it on solid footing to make such claims. “The optimism is definitely there, but the science isn’t there,” he says.

One potential complication is impurities in the device, which can create conditions that look like Majorana particles. But Nayak says the evidence has only grown stronger as the research has proceeded. “This gives us confidence: We are manipulating sophisticated devices and seeing results consistent with a Majorana interpretation,” he says.

“They have satisfied many of the necessary conditions for a Majorana qubit, but there are still a few more boxes to check,” Das Sarma said after seeing preliminary results on the qubit. “The progress has been impressive and concrete.”

Scaling up

On the face of it, Microsoft’s topological efforts seem woefully behind in the world of quantum computing—the company is just now working to combine qubits in the single digits while others have tied together more than 1,000. But both Nayak and Das Sarma say other efforts had a strong head start because they involved systems that already had a solid grounding in physics. Work on the topological qubit, on the other hand, has meant starting from scratch. 

“We really were reinventing the wheel,” Nayak says, likening the team’s efforts to the early days of semiconductors, when there was so much to sort out about electron behavior and materials, and transistors and integrated circuits still had to be invented. That’s why this research path has taken almost 20 years, he says: “It’s the longest-running R&D program in Microsoft history.”

Some support from the US Defense Advanced Research Projects Agency could help the company catch up. Early this month, Microsoft was selected as one of two companies to continue work on the design of a scaled-up system, through a program focused on underexplored approaches that could lead to utility-scale quantum computers—those whose benefits exceed their costs. The other company selected is PsiQuantum, a startup that is aiming to build a quantum computer containing up to a million qubits using photons.

Many of the researchers MIT Technology Review spoke with would still like to see how this work plays out in scientific publications, but they were hopeful. “The biggest disadvantage of the topological qubit is that it’s still kind of a physics problem,” says Das Sarma. “If everything Microsoft is claiming today is correct … then maybe right now the physics is coming to an end, and engineering could begin.” 

This story was updated with Henry Legg’s current institutional affiliation.

How to have a child in the digital age

When the journalist and culture critic Amanda Hess got pregnant with her first child, in 2020, the internet was among the first to know. “More brands knew about my pregnancy than people did,” she writes of the torrent of targeted ads that came her way. “They all called me mama.” 

The internet held the promise of limitless information about becoming the perfect parent. But at seven months, Hess went in for an ultrasound appointment and everything shifted. The sonogram looked atypical. As she waited in an exam room for a doctor to go over the results, she felt the urge to reach for her phone. Though it “was ludicrous,” she writes, “in my panic, it felt incontrovertible: If I searched it smart and fast enough, the internet would save us. I had constructed my life through its screens, mapped the world along its circuits. Now I would make a second life there too.” Her doctor informed her of the condition he suspected her baby might have and told her, “Don’t google it.”

Unsurprisingly, that didn’t stop her. In fact, she writes, the more medical information that doctors produced—after weeks of escalating tests, her son was ultimately diagnosed with Beckwith-Wiedemann syndrome—the more digitally dependent she became: “I found I was turning to the internet, as opposed to my friends or my doctors, to resolve my feelings and emotions about what was happening to me and to exert a sense of external control over my body.”  

But how do we retain control over our bodies when corporations and the medical establishment have access to our most personal information? What happens when humans stop relying on their village, or even their family, for advice on having a kid and instead go online, where there’s a constant onslaught of information? How do we make sense of the contradictions of the internet—the tension between what’s inherently artificial and the “natural” methods its denizens are so eager to promote? In her new book, Second Life: Having a Child in the Digital Age (Doubleday, 2025), Hess explores these questions while delving into her firsthand experiences with apps, products, algorithms, online forums, advertisers, and more—each promising an easier, healthier, better path to parenthood. After welcoming her son, who is now healthy, in 2020 and another in 2022, Hess is the perfect person to ask: Is that really what they’re delivering? 

In your book, you write, “I imagined my [pregnancy] test’s pink dye spreading across Instagram, Facebook, Amazon. All around me, a techno-corporate infrastructure was locking into place. I could sense the advertising algorithms recalibrating and the branded newsletters assembling in their queues. I knew that I was supposed to think of targeted advertising as evil, but I had never experienced it that way.” Can you unpack this a bit?

Before my pregnancy, I never felt like advertising technology was particularly smart or specific. So when my Instagram ads immediately clocked my pregnancy, it came as a bit of a surprise, and I realized that I was unaware of exactly how ad tech worked and how vast its reach was. It felt particularly eerie in this case because in the beginning my pregnancy was a secret that I kept from everyone except my spouse, so “the internet” was the only thing that was talking to me about it. Advertising became so personalized that it started to feel intimate, even though it was the opposite of that—it represented the corporate obliteration of my privacy. The pregnancy ads reached me before a doctor would even agree to see me.

Though your book was written before generative AI became so ubiquitous, I imagine you’ve thought about how it changes things. You write, “As soon as I got pregnant, I typed ‘what to do when you get pregnant’ in my phone, and now advertisers were supplying their own answers.” What do the rise of AI and the dramatic changes in search mean for someone who gets pregnant today and goes online for answers?

I just googled “what to do when you get pregnant” to see what Google’s generative AI widget tells me now, and it’s largely spitting out commonsensical recommendations: Make an appointment to see a doctor. Stop smoking cigarettes. That is followed by sponsored content from Babylist, an online baby registry company that is deeply enmeshed in the ad-tech system, and Perelel, a startup that sells expensive prenatal supplements. 

So whether or not the search engine is using AI, the information it’s providing to the newly pregnant is not particularly helpful or meaningful. 

The Clue period-tracking app. AMIE CHUNG/TRUNK ARCHIVE


For me, the oddly tantalizing thing was that I had asked the internet a question and it gave me something in response, as if we had a reciprocal relationship. So even before AI was embedded in these systems, they were fulfilling the same role for me—as a kind of synthetic conversation partner. It made me feel like I had some kind of relationship with my phone, when all it was really doing was staging a scene of information that it could monetize. 

As I wrote the book, I did put some pregnancy-related questions to ChatGPT to try to get a sense of the values and assumptions that are encoded in its knowledge base. I asked for an image of a fetus, and it provided this garishly cartoonish, big-eyed cherub in response. But when I asked for a realistic image of a postpartum body, it refused to generate one for me! It was really an extension of something I write about in the book, which is that the image of the fetus is fetishized in a lot of these tech products while the pregnant or postpartum body is largely erased. 

You have this great but quite sad quote from a woman on TikTok who said, “I keep hearing it takes a village to raise a child. Do they just show up, or is there a number to call?” 

I really identified with that sentiment, while at the same time being suspicious of this idea that we can just call a hotline to conjure this village.

I am really interested that so many parent-focused technologies sell themselves this way. [The pediatrician] Harvey Karp says that the Snoo, this robotic crib he created, is the new village. The parenting site Big Little Feelings describes its podcast listeners as a village. The maternity clothing brand Bumpsuit produces a podcast that’s actually called The Village. By using that phrase, these companies are evoking an idealized past that may never have existed, to sell consumer solutions. A society that provides communal support for children and parents is pitched as this ancient and irretrievable idea, as opposed to something that we could build in the future if we wanted to. It will take more than just, like, ordering something.

And the benefit of many of those robotic or “smart” products seems a bit nebulous. You share, for example, that the Nanit baby monitor told you your son was “sleeping more efficiently than 96% of babies, a solid A.”

I’m skeptical of this idea that a piece of consumer technology will really solve a serious problem families or children have. And if it does solve that problem, it only solves it for people who can afford it, which is reprehensible on some level. These products might create a positive difference for how long your baby is sleeping or how easy the diaper is to put on or whatever, but they are Band-Aids on a larger problem. I often found when I was testing out some of these products that the data [provided] was completely useless. My friend who uses the Nanit texted me the other day because she had found a new feature on its camera that showed you a heat map of where your baby had slept in the crib the night before. There is no use for that information, but when you see the heat map, you can try to interpret it to get some useless clues to your baby’s personality. It’s like a BuzzFeed quiz for your baby, where you can say, “Oh, he’s such, like, a right-side king,” or “He’s a down-the-middle guy,” or whatever. 

The Snoo Smart Sleeper Bassinet. COURTESY OF HAPPIEST BABY


These products encourage you to see your child as an extension of the technology; Karp even talks about there being an on switch and an off switch in your baby for soothing. So if you do the “right” set of movements to activate the right switch, you can make the baby acquire some desirable trait, which I think is just an extension of this idea that your child can be under your complete control.

… which is very much the fantasy when you’re a parent.

These devices are often marketed as quasi-medical devices. There’s a converging of consumer and medical categories in baby consumer tech, where the products are marketed as useful to any potential baby, including one who has a serious medical diagnosis or one who is completely healthy. These companies still want you to put a pulse oximeter on a healthy baby, just in case. They’re marketing a cure for the parents’ anxiety, but the product itself is attached to the body of a newborn child.

After spending so much time in hospital settings with my child hooked up to monitors, I was really excited to end that. So I’m interested in this opposite reaction, where there’s this urge to extend that experience, to take personal control of something that feels medical.

Even though I would search out any medical treatment that would help keep my kids healthy, childhood medical experiences can cause a lot of confusion and trauma for kids and their families, even when the results are positive. When you take that medical experience and turn it into something that’s very sleek and fits in your color scheme and is totally under your control, I think it can feel like you are seizing authority over that scary space.

Another thing you write about is how images define idealized versions of pregnancy and motherhood. 

I became interested in a famous photograph that a Swedish photographer named Lennart Nilsson took in the 1960s that was published on the cover of Life magazine. It’s an image of a 20-week-old fetus, and it was advertised as the world’s first glimpse of life inside the womb. I bought a copy of the issue off eBay and opened it to find a little editor’s note saying that the cover fetus had actually been removed from its mother’s body through surgery. It wasn’t a picture of life—it was a picture of an abortion. 

I was interested in how Nilsson staged this fetal body to make it look celestial, like it was floating in space, and I recognized a lot of the elements of his work being incorporated in the tech products that I was using, like the CGI fetus generated by my pregnancy app, Flo. 

You also write about the images being provided at nonmedical sonogram clinics.

I was trying to google the address of a medical imaging center during my pregnancy when I came across a commercial sonogram clinic. There are hundreds of them around the country, with cutesy names like “Cherished Memories” and “You Kiss We Tell.” 

In the book I explore how technologies like ultrasound are used as essentially narrative devices, shaping the way that people think about their bodies and their pregnancies. Ultrasound is odd because it’s a medical technology that’s used to diagnose dangerous and scary conditions, but prospective parents are encouraged to view it as a kind of entertainment service while it’s happening. These commercial sonogram clinics interest me because they promise to completely banish the medical associations of the technology and elevate it into a pure consumer experience. 

The Nanit Pro baby monitor with Flex Stand
COURTESY OF NANIT

You write about “natural” childbirth, which, on the face of it, would seem counter to the digital age. As you note, the movement has always been about storytelling, and the story that it’s telling is really about pain.

When I was pregnant, I became really fascinated with people who discuss freebirth online, which is a practice on the very extreme end of “natural” childbirth rituals—where people give birth at home unassisted, with no obstetrician, midwife, or doula present. Sometimes they also refuse ultrasounds, vaccinations, or all prenatal care. I was interested in how this refusal of medical technology was being technologically promoted, through podcasts, YouTube videos, and Facebook groups. 

It struck me that a lot of the freebirth influencers I saw were interested in exerting supreme control over their pregnancies and children, leaving nothing under the power of medical experts or government regulators. And they were also interested in controlling the narratives of their births—making sure that the moment their children came into the world was staged with compelling imagery that centered them as the protagonist of the event. Video evidence of the most extreme examples—like the woman who freebirthed into the ocean—could go viral and launch the freebirther’s personal brand as a digital wellness guru in her own right. 

The phrase “natural childbirth” was coined by a British doctor, Grantly Dick-Read, in the 1920s. There’s a very funny section in his book for prospective mothers where he complains that women keep telling each other that childbirth hurts, and he claimed that the very idea that childbirth hurts was what created the pain, because birthing women were acting too tense. Dick-Read, like many of his contemporaries, had a racist theory that women he called “primitive” experienced no pain in childbirth because they hadn’t been exposed to white middle-class education and technologies. When I read his work, I was fascinated by the fact that he also described birth as a kind of performance, even back then. He claimed that undisturbed childbirths were totally painless, and he coached women through labor in an attempt to achieve them. Painless childbirth was pitched as a reward for reaching this peak state of natural femininity.

He was really into eugenics, by the way! I see a lot of him in the current presentation of “natural” childbirth online—[proponents] are still invested in a kind of denial, or suppression, of a woman’s actual experience in the pursuit of some unattainable ideal. Recently, I saw one Instagram post from a woman who claimed to have had a supernaturally pain-free childbirth, and she looks so pained and miserable in the photos, it’s absurd. 

I wanted to ask you about Clue and Flo, two very different period-tracking apps. Their contrasting origin stories are striking. 

I downloaded Flo as my period-tracking app many years ago for one reason: It was the first app that came up when I searched in the app store. Later, when I looked into its origins, I found that Flo was created by two brothers, cisgender men who do not menstruate, and that it had quickly outperformed and outearned an existing period-tracking app, Clue, which was created by a woman, Ida Tin, a few years earlier. 

The elements that make an app profitable and successful are not the same as the ones that users may actually want or need. My experience with Flo, especially after I became pregnant, was that it seemed designed to get me to open the app as frequently as possible, even if it didn’t have any new information to provide me about my pregnancy. Flo pitches itself as a kind of artificial nurse, even though it can’t actually examine you or your baby, but this kind of digital substitute has also become increasingly powerful as inequities in maternity care widen and decent care becomes less accessible.

“Doctors and nurses test pregnant women for drugs without their explicit consent or tip off authorities to pregnant people they suspect of mishandling their pregnancies in some way.”

One of the features of Flo I spent a lot of time with was its “Secret Chats” area, where anonymous users come together to go off about pregnancy. It was actually really fun, and it kept me coming back to Flo again and again, especially when I wasn’t discussing my pregnancy with people in real life. But it was also the place where I learned that digital connections are not nearly as helpful as physical connections; you can’t come over and help the anonymous secret chat friend soothe her baby. 

I’d asked Ida Tin if she considered adding a social or chat element to Clue, and she told me that she decided against it because it’s impossible to stem the misinformation that surfaces in a space like that.

You write that Flo “made it seem like I was making the empowered choice by surveilling myself.” After Roe was overturned, many women publicly opted out of that sort of surveillance by deleting their period-tracking apps. But you mention that it’s not just the apps that are sharing information.

When I spoke to attorneys who defend women in pregnancy criminalization cases, I found that they had not yet seen a case in which the government actually relied on data from those apps. In some cases, they have relied on users’ Google searches and Facebook messages, but far and away the central surveillance source that governments use is the medical system itself. 

Doctors and nurses test pregnant women for drugs without their explicit consent or tip off authorities to pregnant people they suspect of mishandling their pregnancies in some way. I’m interested in the fact that media coverage has focused so much on the potential danger of period apps and less on the real, established threat. I think it’s because it provides a deceptively simple solution: Just delete your period app to protect yourself. It’s much harder to dismantle the surveillance systems that are actually in place. You can’t just delete your doctor. 

This interview, which was conducted by phone and email, has been condensed and edited.

Inside China’s electric-vehicle-to-humanoid-robot pivot

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

While DOGE’s efforts to shutter federal agencies dominate news from Washington, the Trump administration is also making more global moves. Many of these center on China. Tariffs on goods from the country went into effect last week. There’s also been a minor foreign relations furor since DeepSeek’s big debut a few weeks ago. China has already displayed its dominance in electric vehicles, robotaxis, and drones, and the launch of the new model seems to add AI to the list. This caused the US president as well as some lawmakers to push for new export controls on powerful chips, and three states have now banned the use of DeepSeek on government devices. 

Now our intrepid China reporter, Caiwei Chen, has identified a new trend unfolding within China’s tech scene: Companies that were dominant in electric vehicles are betting big on translating that success into developing humanoid robots. I spoke with her about what she found out and what it might mean for Trump’s policies and the rest of the globe. 

James: Before we talk about robots, let’s talk about DeepSeek. The frenzy for the AI model peaked a couple of weeks ago. What are you hearing from other Chinese AI companies? How are they reacting?

Caiwei: I think other Chinese AI companies are scrambling to figure out why they haven’t built a model as strong as DeepSeek’s, despite having access to as much funding and resources. DeepSeek’s success has sparked self-reflection on management styles and renewed confidence in China’s engineering talent. There’s also strong enthusiasm for building various applications on top of DeepSeek’s models.

Your story looks at electric-vehicle makers in China that are starting to work on humanoid robots, but I want to ask about a crazy stat. In China, 53% of vehicles sold are either electric or hybrid, compared with 8% in the US. What explains that? 

Price is a huge factor—there are countless EV brands competing at different price points, making them both affordable and high-quality. Government incentives also play a big role. In Beijing, for example, trading in an old car for an EV gets you 10,000 RMB (about $1,500), and that subsidy was recently doubled. Plus, finding public charging and battery-swapping infrastructure is much less of a hassle than in the US.

You open your story noting that China’s recent New Year Gala, watched by billions of people, featured a cast of humanoid robots, dancing and twirling handkerchiefs. We’ve covered how sometimes humanoid videos can be misleading. What did you think? 

I would say I was relatively impressed—the robots showed good agility and synchronization with the music, though their movements were simpler than human dancers’. The one trick that is supposed to impress the most is the part where they twirl the handkerchief with one finger, toss it into the air, and then catch it perfectly. This is the signature of the Yangko dance, and having performed it once as a child, I can attest to how difficult the trick is even for a human! There was some skepticism on the Chinese internet about how this was achieved and whether they used additional reinforcement like a magnet or a string to secure the handkerchief, and after watching the clip too many times, I tend to agree.

President Trump has already imposed tariffs on China and is planning even more. What could the implications be for China’s humanoid sector?  

Unitree’s H1 and G1 models are already available for purchase and were showcased at CES this year. Large-scale US deployment isn’t happening yet, but China’s lower production costs make these robots highly competitive. Given that 65% of the humanoid supply chain is in China, I wouldn’t be surprised if robotics becomes the next target in the US-China tech war.

In the US, humanoid robots are getting lots of investment, but there are plenty of skeptics who say they’re too clunky, finicky, and expensive to serve much use in factory settings. Are attitudes different in China?

Skepticism exists in China too, but I think there’s more confidence in deployment, especially in factories. With an aging population and a labor shortage on the horizon, there’s also growing interest in medical and caregiving applications for humanoid robots.

DeepSeek revived the conversation about chips and the way the US seeks to control where the best chips end up. How do the chip wars affect humanoid-robot development in China?

Training humanoid robots currently doesn’t demand as much computing power as training large language models, since there isn’t enough physical movement data to feed into models at scale. But as robots improve, they’ll need high-performance chips, and US sanctions will be a limiting factor. Chinese chipmakers are trying to catch up, but it’s a challenge.

For more, read Caiwei’s story on this humanoid pivot, as well as her look at the Chinese startups worth watching beyond DeepSeek. 


Now read the rest of The Algorithm

Deeper Learning

Motor neuron diseases took their voices. AI is bringing them back.

In motor neuron diseases, the neurons responsible for sending signals to the body’s muscles, including those used for speaking, are progressively destroyed. It robs people of their voices. But some, including a man in Miami named Jules Rodriguez, are now getting them back: An AI model learned to clone Rodriguez’s voice from recordings.

Why it matters: ElevenLabs, the company that created the voice clone, can do a lot with just 30 minutes of recordings. That’s a huge improvement over AI voice clones from just a few years ago, and it can make a real difference in the day-to-day lives of the people who use the technology. “This is genuinely AI for good,” says Richard Cave, a speech and language therapist at the Motor Neuron Disease Association in the UK. Read more from Jessica Hamzelou.

Bits and Bytes

A “true crime” documentary series has millions of views, but the murders are all AI-generated

A look inside the strange mind of someone who created a series of fake true-crime docs using AI, and the reactions of the many people who thought they were real. (404 Media)

The AI relationship revolution is already here

People are having all sorts of relationships with AI models, and these relationships run the gamut: weird, therapeutic, unhealthy, sexual, comforting, dangerous, useful. We’re living through the complexities of this in real time. Hear from some of the many people who are happy in their varied AI relationships and learn what sucked them in. (MIT Technology Review)

Robots are bringing new life to extinct species

A creature called Orobates pabsti waddled the planet 280 million years ago, but as with many prehistoric animals, scientists have not been able to use fossils to figure out exactly how it moved. So they’ve started building robots to help. (MIT Technology Review)

Lessons from the AI Action Summit in Paris

Last week, politicians and AI leaders from around the globe went to Paris for an AI Action Summit. While concerns about AI safety have dominated the event in years past, this year was more about deregulation and energy, a trend we’ve seen elsewhere. (The Guardian)  

OpenAI ditches its diversity commitment and adds a statement about “intellectual freedom”

Following the lead of other tech companies since the beginning of President Trump’s administration, OpenAI has removed a statement on diversity from its website. It has also updated its model spec—the document outlining the standards of its models—to say that “OpenAI believes in intellectual freedom, which includes the freedom to have, hear, and discuss ideas.” (Insider and TechCrunch)

The Musk-OpenAI battle has been heating up

Part of OpenAI is structured as a nonprofit, a legacy of its early commitments to make sure its technologies benefit all. Its recent attempts to restructure that nonprofit have triggered a lawsuit from Elon Musk, who alleges that the move would violate the legal and ethical principles of its nonprofit origins. Last week, Musk offered to buy OpenAI for $97.4 billion, in a bid that few people took seriously. Sam Altman dismissed it out of hand. Musk now says he will retract that bid if OpenAI stops its conversion of the nonprofit portion of the company. (Wall Street Journal)

Nokia is putting the first cellular network on the moon

Later this month, Intuitive Machines, the private company behind the first commercial lander to touch down on the moon, will launch a second lunar mission from NASA’s Kennedy Space Center. The plan is to deploy a lander, a rover, and a hopper to explore a site near the lunar south pole that could harbor water ice, and to put a communications satellite into lunar orbit.

But the mission will also bring something that’s never been installed on the moon or anywhere else in space before—a fully functional 4G cellular network. 

Point-to-point radio communications, which need a clear line of sight between transmitting and receiving antennas, have always been a backbone of both surface communications and the link back to Earth, starting with the Apollo program. Using point-to-point radio in space wasn’t much of an issue in the past because there have never been that many points to connect. Usually, it was just a single spacecraft, a lander, or a rover talking to Earth. And they didn’t need to send much data either.

“They were based on [ultra high frequency] or [very high frequency] technologies connecting a small number of devices with relatively low data throughput,” says Thierry Klein, president of Nokia Bell Labs Solutions Research, which was contracted by NASA to design a cellular network for the moon back in 2020.

But it could soon get way more crowded up there: NASA’s Artemis program calls for bringing astronauts back to the moon as early as 2028 and expanding that presence into a permanent habitat in the 2030s.

The shift from mostly point-to-point radio communications to a full-blown cell network architecture should result in higher data transfer speeds, better range, and a greater number of devices connected simultaneously, Klein says. But the harsh conditions of space travel and of the lunar surface make it difficult to use Earth-based cell technology straight off the shelf.

Instead, Nokia designed components that are robust against radiation, extreme temperatures, and the sorts of vibrations that will be experienced during the launch, flight, and landing. They put all these components in a single “network in a box,” which contains everything needed for a cell network except the antenna and a power source.

“We have the antenna on the lander, so together with the box that’s essentially your base station and your tower,” Klein says. The box will be powered by the lander’s solar panels.

During the IM-2 mission, the 4G cell network will allow for communication between the lander and the two vehicles. The network will likely only work for a few days—the spacecraft are not likely to survive after night descends on the lunar surface.

But Nokia has plans for a more expansive 4G or 5G cell network that can cover the planned Artemis habitat and its surroundings. The company is also working on integrating cell communications in Axiom spacesuits meant for future lunar astronauts. “Maybe just one network in a box, one tower, would provide the entire coverage or maybe we would need multiple of these. That’s not going to be different from what you see in terrestrial cell networks deployment,” Klein says. He says the network should grow along with the future lunar economy.

Not everyone is happy with this vision. LTE networks usually operate between 700 MHz and 2.6 GHz, a region of the radiofrequency spectrum that partially overlaps with frequencies reserved for radio astronomy. Having such radio signals coming from the moon could potentially interfere with observations.
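The conflict is, at bottom, a question of interval overlap: an LTE carrier occupies a frequency range, and radio astronomy has its own internationally protected bands. A minimal sketch of that check, using one well-known protected band (the 1400–1427 MHz hydrogen-line band) and a hypothetical LTE carrier range for illustration—these values are not the actual ITU allocation table:

```python
def bands_overlap(a, b):
    """Return True if two (low, high) frequency ranges in MHz intersect."""
    return a[0] < b[1] and b[0] < a[1]

# Illustrative values only: the 1400-1427 MHz band is reserved for radio
# astronomy (the hydrogen line); the carrier below is a hypothetical example.
PROTECTED_BANDS = [(1400.0, 1427.0)]
lte_carrier = (1710.0, 1785.0)  # hypothetical uplink range, MHz

interferes = any(bands_overlap(lte_carrier, band) for band in PROTECTED_BANDS)
print(interferes)  # False: this carrier sits clear of the protected band
```

A carrier straddling 1400 MHz would trip the same check, which is roughly why Nokia must pick its permanent lunar frequencies from a vetted candidate list rather than reusing terrestrial LTE bands wholesale.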

“Telescopes are most sensitive in the direction that they are pointing—up towards the sky,” Chris De Pree, deputy spectrum manager at the National Radio Astronomy Observatory (NRAO), said in an email. Communication satellites like Starlink often end up in radio telescopes’ line of sight. A full-scale cell network on the moon would add further noise to the night sky.

There is also a regulatory hurdle that must be worked around. There are radio bands that have been internationally allocated to support lunar missions, and the LTE band is not among them. “Using 4G frequencies on or around the moon is a violation of the ITU-R radio regulations,” NRAO’s spectrum manager Harvey Liszt explained in an email.

To legally deploy the 4G network on the moon, Nokia received a waiver specifically for the IM-2 mission. “For permanent deployment we’ll have to pick a different frequency band,” Klein says. “We already have a list of candidate frequencies to consider.” Even with the frequency shift, Klein says Nokia’s lunar network technology will remain compatible with terrestrial 4G or 5G standards.

And that means that if you happened to bring your smartphone to the moon, and it somehow survived both the trip and the brutal lunar conditions, it should work on the moon just like it does here on Earth. “It would connect if we put your phone on the list of approved devices,” Klein explains. All you’d need is a lunar SIM card.

This artist collaborates with AI and robots

Many artists worry about the encroachment of artificial intelligence on artistic creation. But Sougwen Chung, a nonbinary Canadian-Chinese artist, instead sees AI as an opportunity for artists to embrace uncertainty and challenge people to think about technology and creativity in unexpected ways. 

Chung’s exhibitions are driven by technology; they’re also live and kinetic, with the artwork emerging in real time. Audiences watch as the artist works alongside or surrounded by one or more robots, human and machine drawing simultaneously. These works are at the frontier of what it means to make art in an age of fast-accelerating artificial intelligence and robotics. “I consistently question the idea of technology as just a utilitarian instrument,” says Chung. 

“[Chung] comes from drawing, and then they start to work with AI, but not like we’ve seen in this generative AI movement where it’s all about generating images on screen,” says Sofian Audry, an artist and scholar at the University of Quebec in Montreal, who studies the relationships that artists establish with machines in their work. “[Chung is] really into this idea of performance. So they’re turning their drawing approach into a performative approach where things happen live.” 

Audiences watch as Chung works alongside or surrounded by robots, human and machine drawing simultaneously.

The artwork, Chung says, emerges not just in the finished piece but in all the messy in-betweens. “My goal,” they explain, “isn’t to replace traditional methods but to deepen and expand them, allowing art to arise from a genuine meeting of human and machine perspectives.” Such a meeting took place in January 2025 at the World Economic Forum in Davos, Switzerland, where Chung presented Spectral, a performative art installation featuring painting by robotic arms whose motions are guided by AI that combines data from earlier works with real-time input from an electroencephalogram.

“My alpha state drives the robot’s behavior, translating an internal experience into tangible, spatial gestures,” says Chung, referring to brain activity associated with being quiet and relaxed. Works like Spectral, they say, show how AI can move beyond being just an artistic tool—or threat—to become a collaborator. 

Spectral, a performative art installation presented in January, featured robotic arms whose drawing motions were guided by real-time input from an EEG worn by the artist.
COURTESY OF THE ARTIST

Through AI, says Chung, robots can perform in unexpected ways. Creating art in real time allows these surprises to become part of the process: “Live performance is a crucial component of my work. It creates a real-time relationship between me, the machine, and an audience, allowing everyone to witness the system’s unpredictabilities and creative possibilities.”

Chung grew up in Canada, the child of immigrants from Hong Kong. Their father was a trained opera singer, their mom a computer programmer. Growing up, Chung played multiple musical instruments, and the family was among the first on the block to have a computer. “I was raised speaking both the language of music and the language of code,” they say. The internet offered unlimited possibilities: “I was captivated by what I saw as a nascent, optimistic frontier.”  

Their early works, mostly ink drawings on paper, tended to be sprawling, abstract explosions of form and line. But increasingly, Chung began to embrace performance. Then in 2015, at 29, after studying visual and interactive art in college and graduate school, they joined the MIT Media Lab as a research fellow. “I was inspired by … the idea that the robotic form could be anything—a sculptural embodied interaction,” they say. 

Drawing Operations Unit: Generation 1 (DOUG 1) was the first of Chung’s collaborative robots.
COURTESY OF THE ARTIST

Chung found open-source plans online and assembled a robotic arm that could hold its own pencil or paintbrush. They added an overhead camera and computer vision software that could analyze the video stream of Chung drawing and then tell the arm where to make its marks to copy Chung’s work. The robot was named Drawing Operations Unit: Generation 1, or DOUG 1. 

The goal was mimicry: As the artist drew, the arm copied. Except it didn’t work out that way. The arm, unpredictably, made small errant movements, creating sketches that were similar to Chung’s—but not identical. These “mistakes” became part of the creative process. “One of the most transformative lessons I’ve learned is to ‘poeticize error,’” Chung says. “That mindset has given me a real sense of resilience, because I’m no longer afraid of failing; I trust that the failures themselves can be generative.”

DOUG 3
COURTESY OF THE ARTIST

For the next iteration of the robot, DOUG 2, which launched in 2017, Chung spent weeks training a recurrent neural network using their earlier work as the training data. The resulting robot used a mechanical arm to generate new drawings during live performances. The Victoria and Albert Museum in London acquired the DOUG 2 model as part of a sculptural exhibit of Chung’s work in 2022. 

DOUG 4

For a third iteration of DOUG, Chung assembled a small swarm of painting robots, their movements dictated by data streaming into the studio from surveillance cameras that tracked people and cars on the streets of New York City. The robots’ paths around the canvas followed the city’s flow. DOUG 4, the version behind Spectral, connects to an EEG headset that transmits electrical signal data from Chung’s brain to the robotic arms, which then generate drawings based on those signals. “The spatiality of performance and the tactility of instruments—robotics, painting, paintbrushes, sculpture—has a grounding effect for me,” Chung says.

Artistic practices like drawing, painting, performance, and sculpture have their own creative language, Chung adds. So too does technology. “I find it fascinating to [study the] material histories of all these mediums and [find] my place within it, and without it,” they say. “It feels like contributing to something that is my own and somehow much larger than myself.”

The rise of faster, better AI models has brought a flood of concern about creativity, especially given that generative technology is trained on existing art. “I think there’s a huge problem with some of the generative AI technologies, and there’s a big threat to creativity,” says Audry, who worries that people may be tempted to disengage from creating new kinds of art. “If people get their work stolen by the system and get nothing out of it, why would they go and do it in the first place?” 

Chung agrees that the rights and work of artists should be celebrated and protected, not poached to fuel generative models, but firmly believes that AI can empower creative pursuits. “Training your own models and exploring how your own data work within the feedback loop of an AI system can offer a creative catalyst for art-making,” they say.

And they are not alone in thinking that the technology threatening creative art also presents extraordinary opportunities. “There’s this expansion and mixing of disciplines, and people are breaking lines and creating mixes,” says Audry, who is “thrilled” with the approaches taken by artists like Chung. “Deep learning is supporting that because it’s so powerful, and robotics, too, is supporting that. So that’s great.” 

Zihao Zhang, an architect at the City College of New York who has studied the ways that humans and machines influence each other’s actions and behaviors, sees Chung’s work as offering a different story about human-machine interactions. “We’re still kind of trapped in this idea of AI versus human, and which one’s better,” he says. AI is often characterized in the media and movies as antagonistic to humanity—something that can replace our workers or, even worse, go rogue and become destructive. He believes Chung challenges such simplistic ideas: “It’s no longer about competition, but about co-production.” 

Though people have valid reasons to worry, Zhang says, in that many developers and large companies are indeed racing to create technologies that may supplant human workers, works like Chung’s subvert the idea of either-or. 

Chung believes that “artificial” intelligence is still human at its core. “It relies on human data, shaped by human biases, and it impacts human experiences in turn,” they say. “These technologies don’t emerge in a vacuum—there’s real human effort and material extraction behind them. For me, art remains a space to explore and affirm human agency.” 

Stephen Ornes is a science writer based in Nashville.