Why the term “women of childbearing age” is problematic

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Every journalist has favorite topics. Regular Checkup readers might already know some of mine, which include the quest to delay or reverse human aging, and new technologies for reproductive health and fertility. So when I saw trailers for The Substance, a film centered on one middle-aged woman’s attempt to reexperience youth, I had to watch it.

I won’t spoil the movie for anyone who hasn’t seen it yet (although I should warn that it is not for the squeamish, or anyone with an aversion to gratuitous close-ups of bums and nipples). But a key premise of the film involves harmful attitudes toward female aging.

“Hey, did you know that a woman’s fertility starts to decrease by the age of 25?” a powerful male character asks early in the film. “At 50, it just stops,” he later adds. He never explains what stops, exactly, but to the viewer the message is pretty clear: If you’re a woman, your worth is tied to your fertility. Once your fertile window is over, so are you.

The insidious idea that women’s bodies are, above all else, vessels for growing children has plenty of negative consequences for us all. But it has also set back scientific research and health policy.

Earlier this week, I chatted about this with Alana Cattapan, a political scientist at the University of Waterloo in Ontario, Canada. Cattapan has been exploring the concept of “women of reproductive age”—a descriptor that is ubiquitous in health research and policy.

The idea for the research project came to her when the Zika virus was making headlines around eight years ago. “I was planning on going to the Caribbean for a trip related to my partner’s research, and I kept getting advice that women of reproductive age shouldn’t go,” she told me. At the time, Zika was being linked to microcephaly—unusually small heads—in newborn babies. It was thought that the virus was affecting key stages of fetal development.

Cattapan wasn’t pregnant. And she wasn’t planning on becoming pregnant at the time. So why was she being advised to stay away from areas with the virus?

The experience got her thinking about the ways in which attitudes toward our bodies are governed by the idea of potential pregnancy. Take, for example, biomedical research on the causes and treatment of disease. Women’s health has lagged behind men’s as a focus of such work, for multiple reasons. Male bodies have long been considered the “default” human form, for example. And clinical trials have historically been designed in ways that make them less accessible for women.

Fears about the potential effects of drugs on fetuses have also played a significant role in keeping people who have the potential to become pregnant out of studies. “Scientific research has excluded women of ‘reproductive age,’ or women who might potentially conceive, in a blanket way,” says Cattapan. “The research that we have on many, many drugs does not include women and certainly doesn’t include women in pregnancy.”  

This lack of research goes some way to explaining why women are much more likely to experience side effects from drugs—some of them fatal. Over the last couple of decades, greater effort has been made to include people with ovaries and uteruses in clinical research. But we still have a long way to go.

Women are also often subjected to medical advice designed to protect a potential fetus, whether they are pregnant or not. Official guidelines on how much mercury-containing fish it is safe to eat can be different for “women of childbearing age,” according to the US Environmental Protection Agency, for example. And in 2021, the World Health Organization used the same language to describe people who should be a focus of policies to reduce alcohol consumption.

The takeaway message is that it’s women who should be thinking about fetal health, says Cattapan. Not the industries producing these chemicals or the agencies that regulate them. Not even the men who contribute to a pregnancy. Just women who stand a chance of getting pregnant, whether they intend to or not. “It puts the onus of the health of future generations squarely on the shoulders of women,” she says.

Another problem is the language itself. The term “women of reproductive age” typically includes women between 15 and 44. Women at one end of that spectrum will have very different bodies and a very different set of health risks from those at the other. And the term doesn’t account for people who might be able to get pregnant but don’t necessarily identify as female.

In other cases the term is overly broad. In the context of the Zika virus, for example, not all women between the ages of 15 and 44 needed to consider taking precautions. The travel advice didn’t apply to people who’d had hysterectomies or did not have sex with men, says Cattapan. “Precision here matters,” she says.

More nuanced health advice would be helpful in cases like these. Guidelines often read as though they’re written for people assumed to be stupid, she adds. “I don’t think that needs to be the case.”

Another thing

On Thursday, president-elect Donald Trump said that he will nominate Robert F. Kennedy Jr. to lead the US Department of Health and Human Services. The news was not entirely a surprise, given that Trump had told an audience at a campaign rally that he would let Kennedy “go wild” on health, “the foods,” and “the medicines.”

The role would give Kennedy some control over multiple agencies, including the Food and Drug Administration, which regulates medicines in the US, and the Centers for Disease Control and Prevention, which coordinates public health advice and programs.

That’s extremely concerning to scientists, doctors, and health researchers, given Kennedy’s positions on evidence-based medicine, including his antivaccine stance. A few weeks ago, in a post on X, he referred to the FDA’s “aggressive suppression of psychedelics, peptides, stem cells, raw milk, hyperbaric therapies, chelating compounds, ivermectin, hydroxychloroquine, vitamins, clean foods, sunshine, exercise, nutraceuticals and anything else that advances human health and can’t be patented by Pharma.”  

“If you work for the FDA and are part of this corrupt system, I have two messages for you,” continued the post. “1. Preserve your records, and 2. Pack your bags.”

There’s a lot to unpack here. But briefly, we don’t yet have good evidence that mind-altering psychedelic drugs are the mental-health cure-alls some claim they are. There’s not enough evidence to support the many unapproved stem-cell treatments sold by clinics throughout the US and beyond, either. These “treatments” can be dangerous.

Health agencies are currently warning against the consumption of raw (unpasteurized) milk, because it might carry the bird flu virus that has been circulating on US dairy farms. And it’s far too simplistic to lump all vitamins together—some might be of benefit to some people, but not everyone needs supplements, and high doses can be harmful.

Kennedy’s 2021 book The Real Anthony Fauci has already helped spread misinformation about AIDS. Here at MIT Technology Review, we’ll continue our work reporting on whatever comes next. Watch this space.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

The tech industry has a gender problem, as Gamergate and various #MeToo scandals made clear. A new generation of activists is hoping to remedy it.

Male and female immune systems work differently, which is another reason why it’s vital to study women and female animals as well as males.

Both of the above articles were published in the Gender issue of MIT Technology Review magazine. You can read more from that issue online here.

Women are more likely to receive abuse online. My colleague Charlotte Jee spoke to the technologists working on an alternative way to interact online: a feminist internet.

From around the web 

The scientific community and biopharma investors are reacting to the news of Robert F. Kennedy Jr.’s nomination to lead the Department of Health and Human Services. “It’s hard to see HHS functioning,” said one biotech analyst. (STAT)

Virologist Beata Halassy successfully treated her own breast cancer with viruses she grew in the lab. She has no regrets. (Nature)

Could diet influence the growth of endometriosis lesions? Potentially, according to research in mice fed high-fat, low-fiber “Western” diets. (BMC Medicine)

Last week, 43 female rhesus macaque monkeys escaped from a lab in South Carolina. The animals may have a legal claim to freedom. (Vox)

What Africa needs to do to become a major AI player

Kessel Okinga-Koumu paced around a crowded hallway. It was her first time presenting at the Deep Learning Indaba, she told the crowd gathered to hear her, one filled with researchers from Africa’s machine-learning community. The annual weeklong conference (“Indaba” is a Zulu word for gathering) was held most recently in September at Amadou Mahtar Mbow University in Dakar, Senegal. It attracted over 700 attendees to hear about—and debate—the potential of Africa-centric AI and how it’s being deployed in agriculture, education, health care, and other critical sectors of the continent’s economy.

A 28-year-old computer science student at the University of the Western Cape in Cape Town, South Africa, Okinga-Koumu spoke about how she’s tackling a common problem: the lack of lab equipment at her university. Lecturers have long been forced to use chalkboards or printed 2D representations of equipment to simulate practical lessons that need microscopes, centrifuges, or other expensive tools. “In some cases, they even ask students to draw the equipment during practical lessons,” she lamented. 

Okinga-Koumu pulled a phone from the pocket of her blue jeans and opened a prototype web app she’s built. Using VR and AI features, the app allows students to simulate using the necessary lab equipment—exploring 3D models of the tools in a real-world setting, like a classroom or lab. “Students could have detailed VR of lab equipment, making their hands-on experience more effective,” she said. 

Established in 2017, the Deep Learning Indaba now has chapters in 47 of the 55 African nations and aims to boost AI development across the continent by providing training and resources to African AI researchers like Okinga-Koumu. Africa is still early in the process of adopting AI technologies, but organizers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers. 

“The building and ownership of AI solutions tailored to local contexts is crucial for equitable development,” says Shakir Mohamed, a senior research scientist at Google DeepMind and cofounder of the organization sponsoring the conference. Africa, more than other continents in the world, can address specific challenges with AI and will benefit immensely from its young talent, he says: “There is amazing expertise everywhere across the continent.” 

However, researchers’ ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. The biggest are inadequate funding and poor infrastructure. Not only is it very expensive to build AI systems, but research to provide AI training data in original African languages has been hamstrung by poor financing of linguistics departments at many African universities and the fact that citizens increasingly don’t speak or write local languages themselves. Limited internet access and a scarcity of domestic data centers also mean that developers might not be able to deploy cutting-edge AI capabilities.

Attendees of Deep Learning Indaba 2024 work on their computers in a session hall.

DEEP LEARNING INDABA 2024

Complicating this further is a lack of overarching policies or strategies for harnessing AI’s immense benefits—and regulating its downsides. While there are various draft policy documents, researchers are in conflict over a continent-wide strategy. And they disagree about which policies would most benefit Africa, not the wealthy Western governments and corporations that have often funded technological innovation.

Taken together, researchers worry, these issues will hold Africa’s AI sector back and hamper its efforts to pave its own pathway in the global AI race.          

On the cusp of change

Africa’s researchers are already making the most of generative AI’s impressive capabilities. In South Africa, for instance, to help address the HIV epidemic, scientists have designed an app called Your Choice, powered by an LLM-based chatbot that interacts with people to obtain their sexual history without stigma or discrimination. In Kenya, farmers are using AI apps to diagnose diseases in crops and increase productivity. And in Nigeria, Awarri, a newly minted AI startup, is trying to build the country’s first large language model, with the endorsement of the government, so that Nigerian languages can be integrated into AI tools.

The Deep Learning Indaba is another sign of how Africa’s AI research scene is starting to flourish. At the Dakar meeting, researchers presented 150 posters and 62 papers. Of those, 30 will be published in top-tier journals, according to Mohamed. 

Meanwhile, an analysis of 1,646 publications in AI between 2013 and 2022 found “a significant increase in publications” from Africa. And Masakhane, a cousin organization to Deep Learning Indaba that pushes for natural-language-processing research in African languages, has released over 400 open-source models and 20 African-language data sets since it was founded in 2018. 

“These metrics speak a lot to the capacity building that’s happening,” says Kathleen Siminyu, a computer scientist from Kenya, who researches NLP tools for her native Kiswahili. “We’re starting to see a critical mass of people having basic foundational skills. They then go on to specialize.”      

She adds: “It’s like a wave that cannot be stopped.”   

Khadija Ba, a Senegalese entrepreneur and investor at the pan-African VC fund P1 Ventures who was at this year’s conference, says that she sees African AI startups as particularly attractive because their local approaches have potential to be scaled for the global market. African startups often build solutions in the absence of robust infrastructure, yet “these innovations work efficiently, making them adaptable to other regions facing similar challenges,” she says. 

In recent years, funding in Africa’s tech ecosystem has picked up: VC investment totaled $4.5 billion last year, more than double what it was just five years ago, according to a report by the African Private Capital Association. And this October, Google announced a $5.8 million commitment to support AI training initiatives in Kenya, Nigeria, and South Africa. But researchers say local funding remains sluggish. Take the Google-backed fund rolled out, also in October, in Nigeria, Africa’s most populous country. It will pay out $6,000 each to 10 AI startups—not even enough to purchase the equipment needed to power their systems.

Lilian Wanzare, a lecturer and NLP researcher at Maseno University in Kisumu, Kenya, bridles at African governments’ lackadaisical support for local AI initiatives and complains as well that the government charges exorbitant fees for access to publicly generated data, hindering data sharing and collaboration. “[We] researchers are just blocked,” she says. “The government is saying they’re willing to support us, but the structures have not been put in place for us.”

Language barriers 

Researchers who want to make Africa-centric AI face more than just insufficient local investment and inaccessible data. There are major linguistic challenges, too.

During one discussion at the Indaba, Ife Adebara, a Nigerian computational linguist, posed a question: “How many people can write a bachelor’s thesis in their native African language?” 

Zero hands went up. 

Then the audience dissolved into laughter.

Africans want AI to speak their local languages, but many Africans cannot speak or write these languages themselves, Adebara said.

Although Africa accounts for one-third of all languages in the world, many of its oral languages are slowly disappearing as their populations of native speakers decline. And LLMs developed by Western-based tech companies fail to serve African languages; they don’t understand locally relevant context and culture.

For Adebara and others researching NLP tools, the lack of people who can read and write in African languages poses a major hurdle to the development of bespoke AI-enabled technologies. “Without literacy in our local languages, the future of AI in Africa is not as bright as we think,” she says.

On top of all that, there’s little machine-readable data for African languages. One reason is that linguistic departments in public universities are poorly funded, Adebara says, limiting linguists’ participation in work that could create such data and benefit AI development. 

This year, she and her colleagues established EqualyzAI, a for-profit company seeking to preserve African languages through digital technology. They have built voice tools and AI models covering 517 African languages.

Lelapa AI, a software company that’s building data sets and NLP tools for African languages, is also trying to address these language-specific challenges. Its cofounders met in 2017 at the first Deep Learning Indaba and launched the company in 2022. In 2023, it released its first AI tool, Vulavula, a speech-to-text program that recognizes several languages spoken in South Africa. 

This year, Lelapa AI released InkubaLM, a first-of-its-kind small language model that currently supports a range of African languages: IsiXhosa, Yoruba, Swahili, IsiZulu, and Hausa. InkubaLM can answer questions and perform tasks like English translation and sentiment analysis. In tests, it performed as well as some larger models. But it’s still in early stages. The hope is that InkubaLM will someday power Vulavula, says Jade Abbott, cofounder and chief operating officer of Lelapa AI. 

“It’s the first iteration of us really expressing our long-term vision of what we want, and where we see African AI in the future,” Abbott says. “What we’re really building is a small language model that punches above its weight.”
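For readers curious what “using” a model like this looks like in practice, here is a minimal, hypothetical sketch of loading a small open language model through Hugging Face’s transformers library and asking for a Swahili-to-English translation, one of the task types described above. The repository name and prompt format are my assumptions for illustration, not details confirmed by Lelapa AI.

```python
# Hypothetical sketch: the model id and prompt format below are assumptions,
# not confirmed details of InkubaLM's release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lelapa/InkubaLM-0.4B"  # assumed repository name; check Lelapa AI's releases
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask for a Swahili-to-English translation.
prompt = "Translate to English: Habari ya asubuhi."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```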

InkubaLM is trained on two open-source data sets with 1.9 billion tokens, built and curated by Masakhane and other African developers who worked with people in local communities. They paid native speakers to attend writing workshops and create data for the model.

Fundamentally, this approach will always be better, says Wanzare, because it’s informed by people who represent the language and culture.

A clash over strategy

Another issue that came up again and again at the Indaba was that Africa’s AI scene lacks the sort of regulation and support from governments that you find elsewhere in the world—in Europe, the US, China, and, increasingly, the Middle East. 

Of the 55 African nations, only seven—Senegal, Egypt, Mauritius, Rwanda, Algeria, Nigeria, and Benin—have developed their own formal AI strategies. And many of those are still in the early stages.  

A major point of tension at the Indaba, though, was the regulatory framework that will govern the approach to AI across the entire continent. In March, the African Union Development Agency published a white paper, developed over a three-year period, that lays out this strategy. The 200-page document includes recommendations for industry codes and practices, standards to assess and benchmark AI systems, and a blueprint of AI regulations for African nations to adopt. The hope is that it will be endorsed by the heads of African governments in February 2025 and eventually passed by the African Union.  

But in July, the African Union Commission in Addis Ababa, Ethiopia, another African governing body that wields more power than the development agency, released a rival continental AI strategy—a 66-page document that diverges from the initial white paper. 

It’s unclear what’s behind the second strategy, but Seydina Ndiaye, a program director at the Cheikh Hamidou Kane Digital University in Dakar who helped draft the development agency’s white paper, claims it was drafted by a tech lobbyist from Switzerland. The commission’s strategy calls for African Union member states to declare AI a national priority, promote AI startups, and develop regulatory frameworks to address safety and security challenges. But Ndiaye expressed concerns that the document does not reflect the perspectives, aspirations, knowledge, and work of grassroots African AI communities. “It’s a copy-paste of what’s going on outside the continent,” he says.               

Vukosi Marivate, a computer scientist at the University of Pretoria in South Africa who helped found the Deep Learning Indaba and is known as an advocate for the African machine-learning movement, expressed fury over this turn of events at the conference. “These are things we shouldn’t accept,” he declared. The room full of data wonks, linguists, and international funders brimmed with frustration. But Marivate encouraged the group to forge ahead with building AI that benefits Africans: “We don’t have to wait for the rules to act right,” he said.  

Barbara Glover, a program manager for the African Union Development Agency, acknowledges that AI researchers are angry and frustrated. There’s been a push to harmonize the two continental AI strategies, but she says the process has been fractious: “That engagement didn’t go as envisioned.” Her agency plans to keep its own version of the continental AI strategy, Glover says, adding that it was developed by African experts rather than outsiders. “We are capable, as Africans, of driving our own AI agenda,” she says.       

A crowd of attendees mingles around display booths at Deep Learning Indaba 2024; booth signs for Mila, Meta, and OpenAI are visible.

DEEP LEARNING INDABA 2024

This all speaks to a broader tension over foreign influence in the African AI scene, one that goes beyond any single strategic document. Mirroring the skepticism toward the African Union Commission strategy, critics say the Deep Learning Indaba is tainted by its reliance on funding from big foreign tech companies; roughly 50% of its $500,000 annual budget comes from international donors and the rest from corporations like Google DeepMind, Apple, OpenAI, and Meta. They argue that this cash could pollute the Indaba’s activities and influence the topics and speakers chosen for discussion.

But Mohamed, the Indaba cofounder who is a researcher at Google DeepMind, says that “almost all that goes back to our beneficiaries across the continent,” and the organization helps connect them to training opportunities in tech companies. He says it benefits from some of its cofounders’ ties with these companies but that they do not set the agenda.

Ndiaye says that the funding is necessary to keep the conference going. “But we need to have more African governments involved,” he says.     

To Timnit Gebru, founder and executive director at the nonprofit Distributed AI Research Institute (DAIR), which supports equitable AI research in Africa, the angst about foreign funding for AI development comes down to skepticism of exploitative, profit-driven international tech companies. “Africans [need] to do something different and not replicate the same issues we’re fighting against,” Gebru says. She warns about the pressure to adopt “AI for everything in Africa,” adding that there’s “a lot of push from international development organizations” to use AI as an “antidote” for all Africa’s challenges.       

Siminyu, who is also a researcher at DAIR, agrees with that view. She hopes that African governments will fund and work with people in Africa to build AI tools that reach underrepresented communities—tools that can be used in positive ways and in a context that works for Africans. “We should be afforded the dignity of having AI tools in a way that others do,” she says.     

Life-seeking, ice-melting robots could punch through Europa’s icy shell

At long last, NASA’s Europa Clipper mission is on its way. After overcoming financial and technological hurdles, the $5 billion mission launched on October 14 from Florida’s Kennedy Space Center. It is now en route to its target: Jupiter’s ice-covered moon Europa, whose frozen shell almost certainly conceals a warm saltwater ocean. When the spacecraft gets there, it will conduct dozens of close flybys in order to determine what that ocean is like and, crucially, where it might be hospitable to life.

Europa Clipper is still years away from its destination—it is not slated to reach the Jupiter system until 2030. But that hasn’t stopped engineers and scientists from working on what would come next if the results are promising: a mission capable of finding evidence of life itself.

This would likely have three parts: a lander, an autonomous ice-thawing robot, and some sort of self-navigating submersible. Indeed, several groups from multiple countries already have working prototypes of ice-diving robots and smart submersibles that they are set to test in Earth’s own frigid landscapes, from Alaska to Antarctica, in the next few years.

But Earth’s oceans are pale simulacra of Europa’s extreme environment. To plumb the ocean of this Jovian moon, engineers must work out a way to get missions to survive a never-ending rain of radiation that fries electronic circuits. They must also plow through an ice shell that’s at least twice as thick as Mount Everest is tall.

“There are a lot of hard problems that push up right against the limits of what’s possible,” says Richard Camilli, an expert on autonomous robotic systems at the Woods Hole Oceanographic Institution’s Deep Submergence Laboratory. But you’ve got to start somewhere, and Earth’s seas will be a vital testing ground. 

“We’re doing something nobody has done before,” says Sebastian Meckel, a researcher at the Center for Marine Environmental Sciences at the University of Bremen, Germany, who is helping to develop one such futuristic Europan submersible. If the field tests prove successful, the descendants of these aquatic explorers could very well be those that uncover the first evidence of extraterrestrial life.

Hellish descent

The hunt for signs of extraterrestrial biology has predominantly taken place on Mars, our dusty, diminutive planetary neighbor. Looking for life in an icy ocean world is a whole new kettle of (alien) fish, but exobiologists think it’s certainly worth the effort. On Mars, scientists hope to find microscopic evidence of past life on, or just under, its dry and frozen surface. But on Europa, which has a wealth of liquid water (kept warm by Jupiter, whose intense gravity generates plenty of internal friction and heat there), it is possible that microbial critters, and perhaps even more advanced small aquatic animals, are present in the here and now.

The bad news is that Europa is one of the most hostile environments in the solar system—at least, for anything above its concealed ocean. 

When NASA’s Clipper mission arrives in 2030, it will be confronted by an endless storm of high-energy particles being whipped about by Jupiter’s immense and intense magnetic field, largely raining down onto Europa itself. “It’s enough to kill a regular person within a few seconds,” says Camilli. No human will be present on Europa, but that radiation is so extreme that it can frazzle most electronic circuits. This poses a major hazard for Europa Clipper, which is why it’s doing only quick flybys of the moon as its orbit around Jupiter periodically dips close.

Clipper has an impressive collection of remote sensing tools that will allow it to survey the ocean’s physical and chemical properties, even though it will never touch the moon itself. But almost all scientists expect that uncovering evidence of biological activity will require something to pierce through the ice shell and swim about in the ocean.

An illustration of two Europa exploration concepts from NASA. An ice-melting probe called PRIME sits on the surface of the moon, with small wedge-shaped SWIM robots deployed below.
NASA/JPL-CALTECH

The good news is that any Europan life-hunting mission has a great technological legacy to build upon. Over the years, scientists have developed and deployed robotic subs that have uncovered a cornucopia of strange life and bizarre geology dwelling in the deep. These include remotely operated vehicles (ROVs), which are often tethered to a surface vessel and are piloted by a person atop the waves, and autonomous underwater vehicles (AUVs), which freely traverse the seas by themselves before reporting back to the surface.

Hopeful Europa explorers usually cite an AUV as their best option—something that a lander can drop off and let loose in those alien waters, and that will then return and share its data so it can be beamed back to Earth. “The whole idea is very exciting and cool,” says Bill Chadwick, a research professor at Oregon State University’s Hatfield Marine Science Center in Newport, Oregon. But on a technical level, he adds, “it seems incredibly daunting.”

Presuming that a life-finding robotic mission is sufficiently radiation-proof and can land and sit safely on Europa’s surface, it would then encounter the colossal obstacle that is Europa’s ice shell, estimated to be 10 to 15 miles thick. Something is going to have to drill or melt its way through all that before reaching the ocean, a process that will likely take several years. “And there’s no guarantee that the ice is going to be static as you’re going through,” says Camilli. Thanks to gravitational tugs from Jupiter, and the internal heat they generate, Europa is a geologically tumultuous world, with ice constantly fragmenting, convulsing and even erupting on its surface. “How do you deal with that?”

Europa’s lack of an atmosphere is also an issue. Say your robot does reach the ocean below all that ice. That’s great, but if the thawed tunnel isn’t sealed shut behind the robot, then the higher pressure of the oceanic depths will come up against a vacuum high above. “If you drill through and you don’t have some kind of pressure control, you can get the equivalent of a blowout, like an oil well,” says Camilli—and your robot could get rudely blasted into space.

Even if you manage to pass through that gauntlet, you must then make sure the diver maintains a link with the surface lander, and with Earth. “What would be worse than finally finding life somewhere else and not being able to tell anyone about it?” says Morgan Cable, a research scientist at NASA’s Jet Propulsion Laboratory (JPL).

Pioneering probes

What these divers will do when they breach Europa’s ocean almost doesn’t matter at this stage. The scientific analysis is currently secondary to the primary problem: Can robots actually get through that ice shell and survive the journey? 

A simple way to start is with a cryobot—a melt probe that can gradually thaw its way through the shell, pulled down by gravity. That’s the idea behind NASA’s Probe using Radioisotopes for Icy Moons Exploration, or PRIME. As the name suggests, this cryobot would use the heat from the radioactive decay of an element like plutonium-238 to melt ice. If you know the thickness of the ice shell, you know exactly how many tablespoons of radioactive matter to bring aboard. 
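To get a rough sense of the quantities involved, here is a back-of-the-envelope sketch using my own assumed numbers (probe width, descent time, the heat output of plutonium-238, the properties of ice), not mission figures. It counts only the energy needed to warm and melt the column of ice a narrow probe passes through, ignoring heat lost to the surrounding shell, so the true requirement would be higher.

```python
# Rough lower-bound estimate with assumed values; not NASA's numbers.
import math

shell_thickness = 20_000      # m, mid-range of the 10-to-15-mile estimate mentioned earlier
probe_diameter = 0.25         # m, assumed probe width
descent_time = 3 * 3.15e7     # s, assuming a roughly three-year descent

ice_density = 920             # kg/m^3
specific_heat = 2.0e3         # J/(kg*K), approximate value for water ice
warming = 170                 # K, from Europa's roughly 100 K surface to the melting point
latent_heat = 3.34e5          # J/kg, heat of fusion of ice

pu238_power = 570             # W per kg of plutonium-238 (about 0.57 W per gram)

column_volume = math.pi * (probe_diameter / 2) ** 2 * shell_thickness
ice_mass = ice_density * column_volume
energy_needed = ice_mass * (specific_heat * warming + latent_heat)
pu_mass = energy_needed / (pu238_power * descent_time)

print(f"ice to warm and melt: about {ice_mass / 1000:,.0f} metric tons")
print(f"Pu-238 needed, ignoring losses: about {pu_mass:.0f} kg")
```

Even this lossless floor works out to several kilograms of plutonium-238; a real probe, leaking heat sideways into the shell, would need considerably more.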

Once it gets through the ice, the cryobot could unfurl a suite of scientific investigation tools, or perhaps deploy an independent submersible that could work in tandem with the cryobot—all while making sure none of that radioactive matter contaminates the ocean. NASA’s Sensing with Independent Micro-Swimmers project, for example, has sketched out plans to deploy a school of wedge-shaped robots—a fleet of sleuths that would work together to survey the depths before reporting back to base.

These concepts remain hypothetical. To get an idea of what’s technically possible, several teams are building and field-testing their own prototype ice divers. 

One of the furthest-along efforts is the Ocean Worlds Reconnaissance and Characterization of Astrobiological Analogs project, or ORCAA, led by JPL. After some preliminary fieldwork, the group is now ready for prime time; next year, a team will set up camp on Alaska’s expansive Juneau Icefield and deploy an eight-foot-tall, two-inch-wide cryobot. Its goal will be to get through 1,000 feet of ice, through a glasslike upper layer, down into ancient ices, and ultimately into a subglacial lake.

ORCAA team members stand near a supraglacial lake (a body of water on top of the glacier) during Alaska fieldwork; biologists could take water samples there and compare them with samples taken from the borehole.
NASA/JPL-CALTECH

This cryobot won’t be powered by radioactive matter. “I don’t see NASA and the Department of Energy being game for that yet,” says Samuel Howell, an ocean worlds scientist at JPL and the ORCAA principal investigator. Instead, it will be electrically heated (with power delivered via a tether to the surface) and will pump warm water out in front of itself, melting the ice and allowing the probe to migrate downward.

The cryobot will be permanently tethered to the surface, using that link to communicate its rudimentary scientific data and send water samples back to a team of scientists at base camp atop the ice. Those scientists will stand in for the suite of astrobiology instruments that might eventually be fitted on a cryobot sent to Europa.

The 2025 field experiment “has all the pieces of a cryobot mission,” says Howell. “We’re just duct-taping them together and trying to see what breaks.”

Space scientists and marine engineers are also teaming up at Germany’s Center for Marine Environmental Sciences (MARUM) to forge their own underwater explorer. Under the auspices of the Technologies for Rapid Ice Penetration and Subglacial Lake Exploration project, or TRIPLE, they are developing an ice-thawing cryobot, an astrobiological laboratory suite, and an AUV designed to be used in Earth’s seas and Europa’s ocean.

Their cryobot is somewhat like the one ORCAA is using; it’s an electrically heated thawing machine tethered to the surface. But onboard MARUM’s “ice shuttle” will be a remarkably small AUV, just 20 inches long and four inches wide. The team plans to deploy both on the Antarctic ice shelf, near the Neumayer III station, in the spring of 2026. 

A model of the miniature underwater vehicle being developed at MARUM with industry partners. Germany’s Center for Marine Environmental Sciences plans to deploy the AUV, roughly 10 centimeters in diameter and 50 centimeters long, in Antarctica in 2026.
MARUM – CENTER FOR MARINE ENVIRONMENTAL SCIENCES, UNIVERSITY OF BREMEN.

From a surface station, the ice shuttle will thaw its way down through the ice shell, aiming to reach the bitingly cold water hundreds of feet below. Once it does so, a hatch will open and the tiny AUV will be dropped off to swim about (probably on a preprogrammed route), wirelessly communicating with the ice shuttle throughout. It will take a sample of the water, return to the ice shuttle, dock with it, and recharge its batteries. For the field test, the ice shuttle, which will have some rudimentary scientific tools, will bring the water sample back to the surface for analysis; for the space mission itself, the idea is that an array of instruments onboard the shuttle will examine that water.

As with ORCAA, the scientific aspect of this is not paramount. “What we’re focusing on now is form and function,” says project member Ralf Bachmayer, a marine robotics researcher at MARUM. Can their prototype Europan explorer get down to the hidden waters, deploy a scout, and return to base intact?

Bachmayer can’t wait to find out. “For engineers, it’s a dream come true to work on this project,” he says.

Swarms and serpents

A submersible-like AUV isn’t the only way scientists are thinking of investigating icy oceanic moons. JPL’s Exobiology Extant Life Surveyor, or EELS, involves a working, wriggling, serpentine robot inspired by the desire to crawl through the vents of Saturn’s own water-laden moon, Enceladus. The robotic snake has already been field-tested; it recently navigated through the icy crevasses and moulins of the Athabasca Glacier in Alberta, Canada.

Although an AUV-like cryobot mission is likely to be the first explorer of an icy oceanic moon, “a crazy idea like a robotic snake could work,” says Cable, the science lead for EELS. She hopes the project is “opening the eyes of scientists and engineers alike to new possibilities when it comes to accessing the hard-to-reach, and often most scientifically compelling, places of planetary environments.”

It might be that we’ll need such creative, and perhaps unexpected, designs to find our way to Europa’s ocean. Space agencies exploring the solar system have achieved remarkable things, but “NASA has never flown an aqueous instrument before,” says Howell.

But one day, thanks to this work, it might—and, just maybe, one of them will find life blooming in Europa’s watery shadows.

Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How To Kill An Asteroid (October 2024).

Why AI could eat quantum computing’s lunch

Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.

Those expectations have been especially high in physics and chemistry, where the weird effects of quantum mechanics come into play. In theory, this is where quantum computers could have a huge advantage over conventional machines.

But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.

The scale and complexity of quantum systems that can be simulated using AI is advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). Last month, he coauthored a paper published in Science showing that neural-network-based approaches are rapidly becoming the leading technique for modeling materials with strong quantum properties. Meta also recently unveiled an AI model trained on a massive new data set of materials that has jumped to the top of a leaderboard for machine-learning approaches to material discovery.

Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality. 

“The existence of these new contenders in machine learning is a serious hit to the potential applications of quantum computers,” says Carleo. “In my opinion, these companies will find out sooner or later that their investments are not justified.”

Exponential problems

The promise of quantum computers lies in their potential to carry out certain calculations much faster than conventional computers. Realizing this promise will require much larger quantum processors than we have today. The biggest devices have just crossed the thousand-qubit mark, but achieving an undeniable advantage over classical computers will likely require tens of thousands, if not millions. Once that hardware is available, though, a handful of quantum algorithms, like the encryption-cracking Shor’s algorithm, have the potential to solve problems exponentially faster than classical algorithms can. 

But for many quantum algorithms with more obvious commercial applications, like searching databases, solving optimization problems, or powering AI, the speed advantage is more modest. And last year, a paper coauthored by Microsoft’s head of quantum computing, Matthias Troyer, showed that these theoretical advantages disappear if you account for the fact that quantum hardware operates orders of magnitude slower than modern computer chips. The difficulty of getting large amounts of classical data in and out of a quantum computer is also a major barrier. 

So Troyer and his colleagues concluded that quantum computers should instead focus on problems in chemistry and materials science that require simulation of systems where quantum effects dominate. A computer that operates along the same quantum principles as these systems should, in theory, have a natural advantage here. In fact, this has been a driving idea behind quantum computing ever since the renowned physicist Richard Feynman first proposed the idea.

The rules of quantum mechanics govern many things with huge practical and commercial value, like proteins, drugs, and materials. Their properties are determined by the interactions of their constituent particles, in particular their electrons—and simulating these interactions in a computer should make it possible to predict what kinds of characteristics a molecule will exhibit. This could prove invaluable for discovering things like new medicines or more efficient battery chemistries, for example. 

But the intuition-defying rules of quantum mechanics—in particular, the phenomenon of entanglement, which allows the quantum states of distant particles to become intrinsically linked—can make these interactions incredibly complex. Precisely tracking them requires complicated math that gets exponentially tougher the more particles are involved. That can make simulating large quantum systems intractable on classical machines.
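A quick, illustrative calculation (my own arithmetic, not the researchers’) shows how fast this blows up: a general entangled state of n two-level particles needs 2^n complex amplitudes to describe it exactly.

```python
# Illustrative only: memory needed to store the full quantum state of n two-level
# particles, at 16 bytes per complex amplitude (two double-precision floats).
for n in (30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} particles: {amplitudes:.1e} amplitudes, roughly {gigabytes:,.0f} GB")
# 30 particles fit in a laptop's memory; 50 already need about 18 million GB.
```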

This is where quantum computers could shine. Because they also operate on quantum principles, they are able to represent quantum states much more efficiently than is possible on classical machines. They could also take advantage of quantum effects to speed up their calculations.

But not all quantum systems are the same. Their complexity is determined by the extent to which their particles interact, or correlate, with each other. In systems where these interactions are strong, tracking all these relationships can quickly explode the number of calculations required to model the system. But in most systems of practical interest to chemists and materials scientists, correlation is weak, says Carleo. That means their particles don’t affect each other’s behavior significantly, which makes the systems far simpler to model.

The upshot, says Carleo, is that quantum computers are unlikely to provide any advantage for most problems in chemistry and materials science. Classical tools that can accurately model weakly correlated systems already exist, the most prominent being density functional theory (DFT). The insight behind DFT is that all you need to understand a system’s key properties is its electron density, a measure of how its electrons are distributed in space. This makes for much simpler computation but can still provide accurate results for weakly correlated systems.

Simulating large systems using these approaches requires considerable computing power. But in recent years there’s been an explosion of research using DFT to generate data on chemicals, biomolecules, and materials—data that can be used to train neural networks. These AI models learn patterns in the data that allow them to predict what properties a particular chemical structure is likely to have, but they are orders of magnitude cheaper to run than conventional DFT calculations. 
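The workflow is easier to picture with a toy example. The sketch below is my own illustration, with entirely synthetic features and labels standing in for DFT-computed data; it is not any group’s actual pipeline. A small neural network learns to map a structural descriptor to an “energy,” which it can then predict far faster than the calculation that produced the training labels.

```python
# Illustrative surrogate-model sketch: synthetic data stands in for DFT labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend data set: 5,000 "structures", each described by 10 made-up features,
# labelled with a synthetic "energy" (a smooth nonlinear function plus noise).
X = rng.normal(size=(5000, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + X[:, 2] * X[:, 3] + 0.05 * rng.normal(size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Once trained, predictions take microseconds, versus the hours a fresh
# first-principles calculation can take for a real molecule or material.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```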

This has dramatically expanded the size of systems that can be modeled—to as many as 100,000 atoms at a time—and how long simulations can run, says Alexandre Tkatchenko, a physics professor at the University of Luxembourg. “It’s wonderful. You can really do most of chemistry,” he says.

Olexandr Isayev, a chemistry professor at Carnegie Mellon University, says these techniques are already being widely applied by companies in chemistry and life sciences. And for researchers, previously out-of-reach problems such as optimizing chemical reactions, developing new battery materials, and understanding protein binding are finally becoming tractable.

As with most AI applications, the biggest bottleneck is data, says Isayev. Meta’s recently released materials data set was made up of DFT calculations on 118 million molecules. A model trained on this data achieved state-of-the-art performance, but creating the training material took vast computing resources, well beyond what’s accessible to most research teams. That means fulfilling the full promise of this approach will require massive investment.

Modeling a weakly correlated system using DFT is not an exponentially scaling problem, though. This suggests that with more data and computing resources, AI-based classical approaches could simulate even the largest of these systems, says Tkatchenko. Given that quantum computers powerful enough to compete are likely still decades away, he adds, AI’s current trajectory suggests it could reach important milestones, such as precisely simulating how drugs bind to a protein, much sooner.

Strong correlations

When it comes to simulating strongly correlated quantum systems—ones whose particles interact a lot—methods like DFT quickly run out of steam. While more exotic, these systems include materials with potentially transformative capabilities, like high-temperature superconductivity or ultra-precise sensing. But even here, AI is making significant strides.

In 2017, EPFL’s Carleo and Microsoft’s Troyer published a seminal paper in Science showing that neural networks could model strongly correlated quantum systems. The approach doesn’t learn from data in the classical sense. Instead, Carleo says, it is similar to DeepMind’s AlphaZero model, which mastered the games of Go, chess, and shogi using nothing more than the rules of each game and the ability to play itself.

In this case, the rules of the game are provided by Schrödinger’s equation, which can precisely describe a system’s quantum state, or wave function. The model plays against itself by arranging particles in a certain configuration and then measuring the system’s energy level. The goal is to reach the lowest energy configuration (known as the ground state), which determines the system’s properties. The model repeats this process until energy levels stop falling, indicating that the ground state—or something close to it—has been reached.
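To make that loop concrete, here is a deliberately tiny sketch of the idea (my own toy construction, not code from the 2017 paper). A four-spin chain is small enough to enumerate every configuration exactly, so a little neural network can stand in for the wave function and plain gradient descent can push the variational energy down toward the true ground state. The network size, step size, and Hamiltonian are arbitrary choices; a real calculation would sample configurations with Monte Carlo methods and handle vastly larger systems.

```python
# Toy sketch only: a tiny neural-network wave function for a 4-spin transverse-field
# Ising chain, optimized by lowering the variational energy <psi|H|psi> / <psi|psi>.
import itertools

import jax
import jax.numpy as jnp

N = 4          # spins: small enough to enumerate all 2**N basis states exactly
h = 1.0        # transverse-field strength (arbitrary toy value)
configs = jnp.array(list(itertools.product([-1.0, 1.0], repeat=N)))  # every basis state

def build_hamiltonian():
    """Dense matrix for H = -sum_i s_i s_{i+1} - h * sum_i X_i on an open chain."""
    dim = 2 ** N
    weights = 2 ** jnp.arange(N - 1, -1, -1)   # maps a spin configuration to its index
    H = jnp.zeros((dim, dim))
    for a in range(dim):
        s = configs[a]
        H = H.at[a, a].add(-jnp.sum(s[:-1] * s[1:]))   # diagonal ZZ coupling
        for i in range(N):                             # transverse field flips one spin
            flipped = s.at[i].multiply(-1.0)
            b = int(jnp.sum((flipped > 0) * weights))
            H = H.at[a, b].add(-h)
    return H

H = build_hamiltonian()

def amplitude(params, s):
    """One hidden layer mapping a spin configuration to an (unnormalized) amplitude."""
    W, b, v = params
    return jnp.dot(v, jnp.tanh(W @ s + b))

def energy(params):
    psi = jax.vmap(lambda s: amplitude(params, s))(configs)
    return psi @ H @ psi / (psi @ psi)   # the variational energy to be minimized

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = [0.1 * jax.random.normal(key1, (8, N)),   # 8 hidden units, arbitrary
          jnp.zeros(8),
          0.1 * jax.random.normal(key2, (8,))]

grad_energy = jax.jit(jax.grad(energy))
for step in range(500):                            # crude gradient descent
    grads = grad_energy(params)
    params = [p - 0.05 * g for p, g in zip(params, grads)]

print("variational energy:", float(energy(params)))
print("exact ground state:", float(jnp.linalg.eigvalsh(H)[0]))
```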

The power of these models is their ability to compress information, says Carleo. “The wave function is a very complicated mathematical object,” he says. “What has been shown by several papers now is that [the neural network] is able to capture the complexity of this object in a way that can be handled by a classical machine.”

Since the 2017 paper, the approach has been extended to a wide range of strongly correlated systems, says Carleo, and results have been impressive. The Science paper he published with colleagues last month put leading classical simulation techniques to the test on a variety of tricky quantum simulation problems, with the goal of creating a benchmark to judge advances in both classical and quantum approaches.

Carleo says that neural-network-based techniques are now the best approach for simulating many of the most complex quantum systems they tested. “Machine learning is really taking the lead in many of these problems,” he says.

These techniques are catching the eye of some big players in the tech industry. In August, researchers at DeepMind showed in a paper in Science that they could accurately model excited states in quantum systems, which could one day help predict the behavior of things like solar cells, sensors, and lasers. Scientists at Microsoft Research have also developed an open-source software suite to help more researchers use neural networks for simulation.

One of the main advantages of the approach is that it piggybacks on massive investments in AI software and hardware, says Filippo Vicentini, a professor of AI and condensed-matter physics at École Polytechnique in France, who was also a coauthor on the Science benchmarking paper: “Being able to leverage these kinds of technological advancements gives us a huge edge.”

There is a caveat: Because the ground states are effectively found through trial and error rather than explicit calculations, they are only approximations. But this is also why the approach could make progress on what has looked like an intractable problem, says Juan Carrasquilla, a researcher at ETH Zurich, and another coauthor on the Science benchmarking paper.

If you want to precisely track all the interactions in a strongly correlated system, the number of calculations you need to do rises exponentially with the system’s size. But if you’re happy with an answer that is just good enough, there’s plenty of scope for taking shortcuts. 

“Perhaps there’s no hope to capture it exactly,” says Carrasquilla. “But there’s hope to capture enough information that we capture all the aspects that physicists care about. And if we do that, it’s basically indistinguishable from a true solution.”

And while strongly correlated systems are generally too hard to simulate classically, there are notable instances where this isn’t the case. That includes some systems that are relevant for modeling high-temperature superconductors, according to a 2023 paper in Nature Communications.

“Because of the exponential complexity, you can always find problems for which you can’t find a shortcut,” says Frank Noe, research manager at Microsoft Research, who has led much of the company’s work in this area. “But I think the number of systems for which you can’t find a good shortcut will just become much smaller.”

No magic bullets

However, Stefanie Czischek, an assistant professor of physics at the University of Ottawa, says it can be hard to predict what problems neural networks can feasibly solve. For some complex systems they do incredibly well, but then on other seemingly simple ones, computational costs balloon unexpectedly. “We don’t really know their limitations,” she says. “No one really knows yet what are the conditions that make it hard to represent systems using these neural networks.”

Meanwhile, there have also been significant advances in other classical quantum simulation techniques, says Antoine Georges, director of the Center for Computational Quantum Physics at the Flatiron Institute in New York, who also contributed to the recent Science benchmarking paper. “They are all successful in their own right, and they are also very complementary,” he says. “So I don’t think these machine-learning methods are just going to completely put all the other methods out of business.”

Quantum computers will also have their niche, says Martin Roetteler, senior director of quantum solutions at IonQ, which is developing quantum computers built from trapped ions. While he agrees that classical approaches will likely be sufficient for simulating weakly correlated systems, he’s confident that some large, strongly correlated systems will be beyond their reach. “The exponential is going to bite you,” he says. “There are cases with strongly correlated systems that we cannot treat classically. I’m strongly convinced that that’s the case.”

In contrast, he says, a future fault-tolerant quantum computer with many more qubits than today’s devices will be able to simulate such systems. This could help find new catalysts or improve understanding of metabolic processes in the body—an area of interest to the pharmaceutical industry.

Neural networks are likely to increase the scope of problems that can be solved, says Jay Gambetta, who leads IBM’s quantum computing efforts, but he’s unconvinced they’ll solve the hardest challenges businesses are interested in.

“That’s why many different companies that essentially have chemistry as their requirement are still investigating quantum—because they know exactly where these approximation methods break down,” he says.

Gambetta also rejects the idea that the technologies are rivals. He says the future of computing is likely to involve a hybrid of the two approaches, with quantum and classical subroutines working together to solve problems. “I don’t think they’re in competition. I think they actually add to each other,” he says.

But Scott Aaronson, who directs the Quantum Information Center at the University of Texas, says machine-learning approaches are directly competing against quantum computers in areas like quantum chemistry and condensed-matter physics. He predicts that a combination of machine learning and quantum simulations will outperform purely classical approaches in many cases, but that won’t become clear until larger, more reliable quantum computers are available.

“From the very beginning, I’ve treated quantum computing as first and foremost a scientific quest, with any industrial applications as icing on the cake,” he says. “So if quantum simulation turns out to beat classical machine learning only rarely, I won’t be quite as crestfallen as some of my colleagues.”

One area where quantum computers look likely to have a clear advantage is in simulating how complex quantum systems evolve over time, says EPFL’s Carleo. This could provide invaluable insights for scientists in fields like statistical mechanics and high-energy physics, but it seems unlikely to lead to practical uses in the near term. “These are more niche applications that, in my opinion, do not justify the massive investments and the massive hype,” Carleo adds.

Nonetheless, the experts MIT Technology Review spoke to said a lack of commercial applications is not a reason to stop pursuing quantum computing, which could lead to fundamental scientific breakthroughs in the long run.

“Science is like a set of nested boxes—you solve one problem and you find five other problems,” says Vicentini. “The complexity of the things we study will increase over time, so we will always need more powerful tools.”

What’s next for reproductive rights in the US

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Earlier this week, Americans cast their votes in a pivotal presidential election. But it wasn’t just the future president of the US that was on the ballot. Ten states also voted on abortion rights.

Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states, and millions of people in the US have lost access to local clinics.

Now, some states are moving to extend and protect access to abortion. This week, seven states voted in support of such measures, and voters in Missouri, a state that has long restricted access, chose to overturn its ban.

It’s not all good news for proponents of reproductive rights—some states voted against abortion access. And questions remain over the impact of a second term under former president Donald Trump, who is set to return to the post in January.

Roe v. Wade, the legal decision that enshrined a constitutional right to abortion in the US in 1973, guaranteed the right to an abortion up to the point of fetal viability, which is generally considered to be around 24 weeks of pregnancy. It was overturned by the US Supreme Court in the summer of 2022.

Within 100 days of the decision, 13 states had enacted total bans on abortion from the moment of conception. Clinics in these states could no longer offer abortions. Other states also restricted abortion access. In that 100-day period, 66 of the 79 clinics across 15 states stopped offering abortion services, and 26 closed completely, according to research by the Guttmacher Institute.

The political backlash to the decision was intense. This week, abortion was on the ballot in 10 states: Arizona, Colorado, Florida, Maryland, Missouri, Montana, Nebraska, Nevada, New York, and South Dakota. And seven of them voted in support of abortion access.

The impact of these votes will vary by state. Abortion was already legal in Maryland, for example. But the new measures should make it more difficult for lawmakers to restrict reproductive rights in the future. In Arizona, abortions after 15 weeks had been banned since 2022. There, voters approved an amendment to the state constitution that will guarantee access to abortion until fetal viability.

Missouri was the first state to enact an abortion ban once Roe v. Wade was overturned. The state’s current Right to Life of the Unborn Child Act prohibits doctors from performing abortions unless there is a medical emergency. It has no exceptions for rape or incest. This week, the state voted to overturn that ban and protect access to abortion up to fetal viability. 

Not all states voted in support of reproductive rights. Amendments to expand access failed to garner enough support in Nebraska, South Dakota, and Florida. In Florida, for example, where abortions after six weeks of pregnancy are banned, an amendment to protect access until fetal viability got 57% of the vote, falling just short of the 60% the state required for it to pass.

It’s hard to predict how reproductive rights will fare over the course of a second Trump term. Trump himself has been inconsistent on the issue. During his first term, he installed members of the Supreme Court who helped overturn Roe v. Wade. During his most recent campaign he said that decisions on reproductive rights should be left to individual states.

Trump, himself a Florida resident, has refused to comment on how he voted in the state’s recent ballot question on abortion rights. When asked, he said that the reporter who posed the question “should just stop talking about that,” according to the Associated Press.

State decisions can affect reproductive rights beyond abortion access. Just look at Alabama. In February, the Alabama Supreme Court ruled that frozen embryos can be considered children under state law. Embryos are routinely cryopreserved in the course of in vitro fertilization treatment, and the ruling was considered likely to significantly restrict access to IVF in the state. (In March, the state passed another law protecting clinics from legal repercussions should they damage or destroy embryos during IVF procedures, but the status of embryos remains unchanged.)

The fertility treatment became a hot topic during this year’s campaign. In October, Trump bizarrely referred to himself as “the father of IVF.” That title is usually reserved for Robert Edwards, the British researcher who won the 2010 Nobel prize in physiology or medicine for developing the technology in the 1970s.

Whatever is in store for reproductive rights in the US in the coming months and years, all we’ve seen so far suggests that it’s likely to be a bumpy ride.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

My colleague Rhiannon Williams reported on the immediate aftermath of the decision that reversed Roe v. Wade when it was announced a couple of years ago. 

The Alabama Supreme Court ruling on embryos could also affect the development of technologies designed to serve as “artificial wombs,” as Antonio Regalado explained at the time.

Other technologies are set to change the way we have babies. Some, which could lead to the creation of children with four parents or none at all, stand to transform our understanding of parenthood.  

We’ve also reported on attempts to create embryo-like structures using stem cells. These structures look like embryos but are created without eggs or sperm. There’s a “wild race” afoot to make these more like the real thing. But both scientific and ethical questions remain over how far we can, and should, go.

My colleagues have been exploring what the US election outcome might mean for climate policies. Senior climate editor James Temple writes that Trump’s victory is “a stunning setback for climate change.” And senior reporter Casey Crownhart explains how efforts including a trio of laws implemented by the Biden administration, which massively increased climate funding, could be undone.

From around the web

Donald Trump has said he’ll let Robert F. Kennedy Jr. “go wild on health.” Here’s where the former environmental lawyer and independent candidate—who has no medical or public health degrees—stands on vaccines, fluoride, and the Affordable Care Act. (New York Times)

Bird flu has been detected in pigs on a farm in Oregon. It’s a worrying development that virologists were dreading. (The Conversation)

And, in case you need it, here’s some lighter reading:

Scientists are sequencing the DNA of tiny marine plankton for the first time. (Come for the story of the scientific expedition; stay for the beautiful images of jellies and sea sapphires.) (The Guardian)

Dolphins are known to communicate with whistles and clicks. But scientists were surprised to find a “highly vocal” solitary dolphin in the Baltic Sea. They think the animal is engaging in “dolphin self-talk.” (Bioacoustics)

How much do you know about baby animals? Test your knowledge in this quiz. (National Geographic)

The US is about to make a sharp turn on climate policy

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Voters have elected Donald Trump to a second term in the White House.

In the days leading up to the election, I kept thinking about what four years means for climate change right now. We’re at a critical moment that requires decisive action to rapidly slash greenhouse-gas emissions from power plants, transportation, industry, and the rest of the economy if we’re going to achieve our climate goals.

The past four years have seen the US take climate action seriously, working with the international community and pumping money into solutions. Now, we’re facing a period where things are going to be very different. A Trump presidency will have impacts far beyond climate, but for the sake of this newsletter, we’ll stay focused on what four years means in the climate fight as we start to make sense of this next chapter. 

Joe Biden arguably did more to combat climate change than any other American president. One of his first actions in office was rejoining the Paris climate accord—Trump pulled out of the international agreement to fight climate change during his first term in office. Biden then quickly set a new national goal to cut US carbon emissions in half, relative to their peak, by 2030.

The Environmental Protection Agency rolled out rules for power plants to slash pollution that harms both human health and the climate. The agency also announced new regulations for vehicle emissions to push the country toward EVs.

And the cornerstone of the Biden years has been unprecedented climate investment. A trio of laws—the Bipartisan Infrastructure Law, the CHIPS and Science Act, and the Inflation Reduction Act—pumped hundreds of billions of dollars into infrastructure and research, much of it on climate.

Now, this ship is about to make a quick turn. Donald Trump has regularly dismissed the threat of climate change and promised throughout the campaign to counter some of Biden’s key moves.

We can expect to see a dramatic shift in how the US talks about climate on the international stage. Trump has vowed to once again withdraw from the Paris agreement. Things are going to be weird at the annual global climate talks that kick off next week.

We can also expect to see efforts to undo some of Biden’s key climate actions, most centrally the Inflation Reduction Act, as my colleague James Temple covered earlier this year.

What, exactly, Trump can do will depend on whether Republicans take control of both houses of Congress. A clean sweep would open up more lanes for targeting legislation passed under Biden. (As of sending this email, Republicans have secured enough seats to control the Senate, but control of the House remains uncertain and could stay that way for days or even weeks.)

I don’t think the rug will be entirely pulled out from under the IRA—portions of the investment from the law are beginning to pay off, and the majority of the money has gone to Republican districts. But there will certainly be challenges to pieces, especially the EV tax credits, which Trump has been laser-focused on during the campaign.

This all adds up to a very different course on climate than what many had hoped we might see for the rest of this decade.

A Trump presidency could add 4 billion metric tons of carbon dioxide emissions to the atmosphere by 2030 over what was expected from a second Biden term, according to an analysis published in April by the website Carbon Brief (this was before Biden dropped out of the race). That projection sees emissions under Trump falling to 28% below their peak by the end of the decade, nowhere near the 50% target Biden set at the beginning of his term.

The US, which is currently the world’s second-largest greenhouse-gas emitter and has added more climate pollution to the atmosphere than any other nation, is now very unlikely to hit Biden’s 2030 goal. That’s basically the final nail in the coffin for efforts to limit global warming to 1.5 °C (2.7 °F) over preindustrial levels.

In the days, weeks, and years ahead we’ll be covering what this change will mean for efforts to combat climate change and to protect the most vulnerable from the dangerous world we’re marching toward—indeed, already living in. Stay tuned for more from us.


Now read the rest of The Spark

Related reading

Trump wants to unravel Biden’s landmark climate law. Read our coverage from earlier this year to see what’s most at risk

It’s been two years since the Inflation Reduction Act was passed, ushering in hundreds of billions of dollars in climate investment. Read more about the key provisions in this newsletter from August


Another thing

Jennifer Doudna, one of the inventors of the gene-editing tool CRISPR, says the tech could be a major tool to help address climate change and deal with the growing risks of our changing world. 

The hope is that CRISPR’s ability to chop out specific pieces of DNA will make it faster and easier to produce climate-resilient crops and livestock, while avoiding the pitfalls of previous attempts to tweak the genomes of plants and animals. Read the full story from my colleague James Temple.

Keeping up with climate  

Startup Redoxblox is building a technology that’s not exactly a thermal battery, but it’s not not a thermal battery either. The company raised just over $30 million to build its systems, which store energy in both heat and chemical bonds. (Heatmap)

It’s been a weird fall in the US Northeast—a rare drought has brought a string of wildfires, and New York City is seeing calls to conserve water. (New York Times)

It’s been bumpy skies this week for electric-plane startups. Beta Technologies raised over $300 million in funding, while Lilium may be filing for insolvency soon. (Canary Media)

→ The runway for futuristic electric planes is still a long one. (MIT Technology Review)

Meta’s plan to build a nuclear-powered AI data center has been derailed by a rare species of bee living on land earmarked for the project. (Financial Times)

The atmospheric concentration of methane—a powerful greenhouse gas—has been mysteriously climbing since 2007, and that growth nearly doubled in 2020. Now scientists may have finally figured out the culprits: microbes in wetlands that are getting warmer and wetter. (Washington Post)

Greenhouse-gas emissions from the European Union fell by 8% in 2023. The drop is thanks to efforts to shut down coal-fired power plants and generate more electricity from renewables like solar and wind. (The Guardian)

Four electric school buses could help officials figure out how to charge future bus fleets. A project in Brooklyn will aim to use onsite renewables and smart charging to control the costs and grid stress of EV charging depots. (Canary Media)

How ChatGPT search paves the way for AI agents

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

OpenAI’s Olivier Godement, head of product for its platform, and Romain Huet, head of developer experience, are on a whistle-stop tour around the world. Last week, I sat down with the pair in London before DevDay, the company’s annual developer conference. London’s DevDay is the first one for the company outside San Francisco. Godement and Huet are heading to Singapore next. 

It’s been a busy few weeks for the company. In London, OpenAI announced updates to its new Realtime API platform, which allows developers to build voice features into their applications. The company is rolling out new voices and a prompt-generation function, which should help developers build apps and more helpful voice assistants more quickly. Meanwhile, for consumers, OpenAI announced it was launching ChatGPT search, which allows users to search the internet using the chatbot. Read more here.

Both developments pave the way for the next big thing in AI: agents. These are AI assistants that can complete complex chains of tasks, such as booking flights. (You can read my explainer on agents here.) 

“Fast-forward a few years—every human on Earth, every business, has an agent. That agent knows you extremely well. It knows your preferences,” Godement says. The agent will have access to your emails, apps, and calendars and will act like a chief of staff, interacting with each of these tools and even working on long-term problems, such as writing a paper on a particular topic, he says. 

OpenAI’s strategy is to both build agents itself and allow developers to use its software to build their own agents, says Godement. Voice will play an important role in what agents will look and feel like. 

“At the moment most of the apps are chat based … which is cool, but not suitable for all use cases. There are some use cases where you’re not typing, not even looking at the screen, and so voice essentially has a much better modality for that,” he says. 

But there are two big hurdles that need to be overcome before agents can become a reality, Godement says. 

The first is reasoning. Building AI agents requires us to trust that they can complete complex tasks and do the right things, says Huet. That’s where OpenAI’s “reasoning” feature comes in. Introduced in OpenAI’s o1 model last month, it uses reinforcement learning to teach the model how to process information using “chain of thought.” Giving the model more time to generate answers allows it to recognize and correct mistakes, break down problems into smaller ones, and try different approaches to answering questions, Godement says.
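To make that concrete, here is a minimal sketch of chain-of-thought-style prompting from the developer’s side, using the OpenAI Python client. It illustrates the general prompting technique only, not the reinforcement-learning setup OpenAI describes for o1; the model name, prompt wording, and question below are placeholders.

```python
# Minimal sketch of chain-of-thought-style prompting (illustrative only).
# This is not o1's internal training method; the model name and prompts are
# placeholders, and OPENAI_API_KEY is assumed to be set in the environment.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 3:40 pm and arrives at 6:05 pm. How long is the trip?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Reason step by step: break the problem into smaller parts, "
                "check each step for mistakes, then state a final answer."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The extra intermediate text is the “thinking time”: the model spends tokens working through sub-steps before committing to an answer, which is the behavior OpenAI says o1 automates and reinforces during training.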

But OpenAI’s claims about reasoning should be taken with a pinch of salt, says Chirag Shah, a computer science professor at the University of Washington. Large language models are not exhibiting true reasoning. It’s most likely that they have picked up what looks like logic from something they’ve seen in their training data.

“These models sometimes seem to be really amazing at reasoning, but it’s just like they’re really good at pretending, and it only takes a little bit of picking at them to break them,” he says.

There is still much more work to be done, Godement admits. In the short term, AI models such as o1 need to be much more reliable, faster, and cheaper. In the long term, the company needs to apply its chain-of-thought technique to a wider pool of use cases. OpenAI has focused on science, coding, and math. Now it wants to address other fields, such as law, accounting, and economics, he says. 

Second on the to-do list is the ability to connect different tools, Godement says. An AI model’s capabilities will be limited if it has to rely on its training data alone. It needs to be able to surf the web and look for up-to-date information. ChatGPT search is one powerful way OpenAI’s new tools can now do that. 

These tools need to be able not only to retrieve information but to take actions in the real world. Competitor Anthropic announced a new feature where its Claude chatbot can “use” a computer by interacting with its interface to click on things, for example. This is an important feature for agents if they are going to be able to execute tasks like booking flights. Godement says o1 can “sort of” use tools, though not very reliably, and that research on tool use is a “promising development.” 

In the next year, Godement says, he expects the adoption of AI for customer support and other assistant-based tasks to grow. However, he says it can be hard to predict how people will adopt and use OpenAI’s technology.

“Frankly, looking back every year, I’m surprised by use cases that popped up that I did not even anticipate,” he says. “I expect there will be quite a few surprises that you know none of us could predict.” 


Now read the rest of The Algorithm

Deeper Learning

This AI-generated version of Minecraft may represent the future of real-time video generation

When you walk around in a version of the video game Minecraft from the AI companies Decart and Etched, it feels a little off. Sure, you can move forward, cut down a tree, and lay down a dirt block, just like in the real thing. If you turn around, though, the dirt block you just placed may have morphed into a totally new environment. That doesn’t happen in Minecraft. But this new version is entirely AI-generated, so it’s prone to hallucinations. Not a single line of code was written.

Ready, set, go: This version of Minecraft is generated in real time, using a technique known as next-frame prediction. The AI companies behind it did this by training their model, Oasis, on millions of hours of Minecraft game play and recordings of the corresponding actions a user would take in the game. The AI is able to sort out the physics, environments, and controls of Minecraft from this data alone. Read more from Scott J. Mulligan.
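For readers curious about the mechanics, next-frame prediction is conceptually simple: a model takes the current frame plus the player’s action and is trained to produce the next frame. Below is a minimal, generic sketch of that idea in PyTorch. It is illustrative only; it is not Oasis’s actual architecture or data, and the toy network, frame size, and action set are stand-ins.

```python
# Generic next-frame prediction sketch (illustrative only; not Oasis).
# A real system would train a far larger model on millions of hours of
# recorded gameplay paired with the player's actions.
import torch
import torch.nn as nn

NUM_ACTIONS = 8    # hypothetical set of game actions (move, jump, ...)
H, W = 64, 64      # toy frame resolution

class NextFramePredictor(nn.Module):
    """Predicts frame t+1 from frame t and the action taken at time t."""
    def __init__(self):
        super().__init__()
        self.action_embed = nn.Embedding(NUM_ACTIONS, H * W)
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame, action):
        # Turn the action into an extra image channel and stack it on the frame.
        a = self.action_embed(action).view(-1, 1, H, W)
        return self.net(torch.cat([frame, a], dim=1))

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for (frame, action, next_frame) triples from recorded play.
frames = torch.rand(16, 3, H, W)
actions = torch.randint(0, NUM_ACTIONS, (16,))
next_frames = torch.rand(16, 3, H, W)

for step in range(100):
    pred = model(frames, actions)
    loss = loss_fn(pred, next_frames)  # how far the prediction is from the real frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At playtime the model’s own output is fed back in as the next input frame, which is why small errors can compound into the hallucinated scene changes described above.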

Bits and Bytes

AI search could break the web
At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy, argues Benjamin Brooks, a fellow at the Berkman Klein Center at Harvard University, who used to lead public policy for Stability AI. (MIT Technology Review)

AI will add to the e-waste problem. Here’s what we can do about it.
Equipment used to train and run generative AI models could produce up to 5 million tons of e-waste by 2030, a relatively small but significant fraction of the global total. (MIT Technology Review)

How an “interview” with a dead luminary exposed the pitfalls of AI
A state-funded radio station in Poland fired its on-air talent and brought in AI-generated presenters. But the experiment caused an outcry and was stopped when one of them “interviewed” a dead Nobel laureate. (The New York Times)

Meta says yes, please, to more AI-generated slop
In Meta’s latest earnings call, CEO Mark Zuckerberg said we’re likely to see “a whole new category of content, which is AI generated or AI summarized content or kind of existing content pulled together by AI in some way.” Zuckerberg added that he thinks “that’s going to be just very exciting.” (404 Media)

How exosomes could become more than just an “anti-aging” fad

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Over the past month or so, I’ve been working on a story about exosomes. You might have seen them advertised—they’re being touted as a hot new beauty treatment, a fountain of youth, and generally a cure-all therapy for a whole host of ailments.

Any cell biologist, though, will tell you what exosomes really are: tiny little blobs that bud off from cells and contain a mixture of proteins and other components. We’re not entirely clear what those components are or what they do, despite the promises made by medspas and cosmetic clinics charging thousands of dollars for exosome “therapies.” As one recipient of an exosome treatment told me, “I feel like it’s a little bit of health marketing bullshit.”

But there is some very exciting scientific research underway to better understand exactly what exosomes do. Scientists are exploring not only how these tiny particles might help cells communicate, but also how they might be used to diagnose or treat diseases. One company is trying to use exosomes to deliver drugs to the brains of people with rare neurological disorders.

It might take longer for these kinds of exosome applications to get to the clinic, but when they do, at least they’ll be evidence based.

Exosomes are a type of extracellular vesicle. This is a scientific way of saying they are basically little packages that bud off from cells. They were once thought to contain cellular garbage, but now scientists believe they convey important signals between cells and tissues.

Exactly what those signals are is still being figured out. The contents of exosomes from cancer cells will probably be somewhat different from those of healthy cells, for example.

Because of that, many scientists hope that exosomes could one day be used to help us diagnose diseases. In theory, you could isolate exosomes from a blood sample, examine their contents, and figure out what might be going on in a person’s cells. Exosomes might provide clues as to how stressed or close to death a cell is. They might indicate the presence of a tumor.

Raghu Kalluri, a cancer biologist at MD Anderson Cancer Center in Houston, is one of the researchers exploring this possibility. “I believe that exosomes are likely providing a forensic fingerprint of what the cells are undergoing,” he says.

But understanding these signals won’t be straightforward. Exosomes from cancer cells might send signals to surrounding cells in order to “subjugate” them into helping the cancer grow, says Kalluri. Cells around a tumor might also send distress signals, alerting the immune system to fight back against it. “There’s definitely a role for these exosomes in cancer progression and metastasis,” he says. “Precisely what [that role is] is an active area of research right now.”

Exosomes could also be useful for delivering drug treatments. After all, they are essentially little packages of proteins and other matter that can be shuttled between cells. Why not fill them with a medicine and use them to target specific regions of the body?

Because exosomes are made in our bodies, they are less likely to be seen as “foreign” and rejected by our immune systems. And the outer layer of an exosome can serve as a protective coat, shielding the drug from being degraded until it reaches its destination, says James Edgar, who studies exosomes at the University of Cambridge. “It’s a really attractive method for drug delivery,” he says.

Dave Carter is one scientist working on it. Carter and his colleagues at Evox Therapeutics in Oxford, UK, are engineering cells to produce compounds that might help treat rare neurological diseases. These compounds could then be released from the cells in exosomes.

In their research, Carter and his colleagues can change almost everything about the exosomes they study. They can alter their contents, loading them with proteins or viruses or even gene-editing therapies. They can tweak the proteins on their surfaces to make them target different cells and tissues. They can control how long exosomes stay in an animal’s circulation.

“I always used to love playing with Lego,” he adds. “I feel like I’m playing with Lego when I’m working with exosomes.”

Others are hopeful that exosomes themselves hold some kind of therapeutic value. Some hope that exosomes derived from stem cells, for example, might have some regenerative capacity.

Ke Cheng at Columbia University in New York is interested in the idea of using exosomes to treat heart and lung conditions. Several preliminary studies suggest that exosomes from heart cells and stem cells might help animals like mice and pigs recover from heart injuries, such as those caused by a heart attack.

There are certainly plenty of clinical trials of exosomes underway. When I searched for “exosomes” on clinicaltrials.gov, I got over 400 results. These are early-stage trials, however—and are of variable quality.

Still, it’s an exciting time for exosome research. “It’s a growing field … I think we will see a lot of exciting science in the next five years,” says Cheng. “I’m very optimistic.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read more about the costly exosome treatments being sold in aesthetic clinics and medspas in my longer piece, which was published earlier this week.

It can be difficult to establish credibility in a medical field when you’re being undercut by clinics selling unapproved treatments and individuals making outlandish claims. Just ask the doctors and scientists trying to legitimize longevity medicine.

Some treatments can take off culturally without the backing of rigorous evidence, only to go up in flames when the trial results come in. We saw this earlier this year, when FDA advisors rejected the use of MDMA (or ecstasy) for post-traumatic stress disorder (PTSD) owing to “significant confounders” in the trials. 

For some people, unproven treatments might represent a last hope for survival. In those cases, how do we balance access to experimental medicine with the need to protect people who are vulnerable?

Stem cells from human embryos promised to “launch a medical revolution in which ailing organs and tissues might be repaired” when they were isolated just over 25 years ago. So why haven’t they?  

From around the web

Having a disability shouldn’t prevent you from getting married. But that’s exactly the conundrum facing some people in the US, as this heartbreaking short documentary shows. (STAT)

A Neuralink rival says its eye implant restored vision in blind people. Science Corporation’s retinal implant enabled some legally blind individuals to read from a book, play cards, and fill out crossword puzzles. (Wired)

Women in Texas are dying after doctors delay treating them for miscarriages. Doctors treating Josseli Barnica waited 40 hours for the heart of her fetus to stop beating, despite the fact that miscarriage was “inevitable.” Her husband says doctors worried that “it would be a crime to give her an abortion.” She died of a preventable infection three days later. (ProPublica)

Between 30% and 50% of twins share a secret language or mode of communication, a phenomenon known as cryptophasia. The Youlden twins call theirs Umeri. (BBC Future)

Can a machine express fear? Try your hand at creating AI-generated images frightening enough to “spook the machine” as part of a project to explore how machines might express humanlike emotions. It is Halloween, after all. (Spook the Machine)

The surprising barrier that keeps us from building the housing we need

Ahead of abortion access, ahead of immigration, and way ahead of climate change, US voters under 30 are most concerned about one issue: housing affordability. And it’s not just young voters who are identifying soaring rents and eye-watering home sale prices as among their top worries. For the first time in recent memory, the cost of housing could be a major factor in the presidential election.  

It’s not hard to see why. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.

Vice President Kamala Harris has offered an ambitious plan to build more: “Right now, a serious housing shortage is part of what is driving up cost,” she said last month in Las Vegas. “So we will cut the red tape and work with the private sector to build 3 million new homes.” Included in her proposals is a $40 billion innovation fund to support housing construction.

Former president Donald Trump, meanwhile, has also called for cutting regulations but mostly emphasizes a far different way to tackle the housing crunch: mass deportation of the immigrants he says are flooding the country, and whose need for housing he claims is responsible for the huge jump in prices. (While a few studies show some local impact on the cost of housing from immigration in general, the effect is relatively small, and there is no plausible economic scenario in which the number of immigrants over the last few years accounts for the magnitude of the increase in home prices and rents across much of the country.)

The opposing views offered by Trump and Harris have implications not only for how we try to lower home prices but for how we view the importance of building. Moreover, this attention to the housing crisis reveals a broader issue with the construction industry at large: the sector has been tech-averse for decades, and it has become less productive over the past 50 years.

The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years. Depending on how you count it, the US has a shortage of around 1.2 million to more than 5.5 million single-family houses.

Permitting delays and strict zoning rules create huge obstacles to building more and faster—as do other widely recognized issues, like the political power of NIMBY activists across the country and an ongoing shortage of skilled workers. But there is also another, less talked-about problem that’s plaguing the industry: We’re not very efficient at building, and we seem somehow to be getting worse.

Together these forces have made it more expensive to build houses, leading to increases in prices. Albert Saiz, a professor of urban economics and real estate at MIT, calculates that construction costs account for more than two-thirds of the price of a new house in much of the country, including the Southwest and West, where much of the building is happening. Even in places like California and New England, where land is extremely expensive, construction accounts for 40% to 60% of the value of a new home, according to Saiz.

Part of the problem, Saiz says, is that “if you go to any construction site, you’ll see the same methods used 30 years ago.”

The productivity woes are evident across the construction industry, not just in the housing sector. From clean-energy advocates dreaming of renewables and an expanded power grid to tech companies racing to add data centers, everyone seems to agree: We need to build more and do it quickly. The practical reality, though, is that it costs more, and takes more time, to construct anything.

For decades, companies across the industry have largely ignored ways they could improve the efficiency of their operations. They have shunned data science and the kinds of automation that have transformed other sectors of the economy. According to an estimate by the McKinsey Global Institute, construction, one of the largest parts of the global economy, is the least digitized major sector worldwide—and it isn’t even close.

The reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff.

The awful truth

Productivity is our best measure of long-term progress in an industry, at least according to economists. Technically, it’s a measure of how much a worker can produce; as companies adopt more efficient practices and new technologies, productivity grows and businesses can make stuff (in this case, homes and buildings) faster and more cheaply. Yet something shocking has happened in the construction industry: Productivity seems to have stalled and even gone into reverse over the last few decades.

In a recent paper called “The Strange and Awful Path of Productivity in the US Construction Sector,” two leading economists at the University of Chicago showed that productivity growth in US construction came to a halt beginning around 1970. Productivity is notoriously difficult to quantify, but the Chicago researchers calculated it in one of the key parts of the construction business: housing. They found that the number of houses or total square footage (houses are getting bigger) built per employee each year was flat or even falling over the last 50 years. And the researchers believe the lack of productivity growth holds true for all different types of construction.
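Stated as a formula (the notation here is ours, just to make the measure explicit), the housing productivity the researchers track is roughly

\[
\text{productivity}_t \;=\; \frac{\text{houses (or total square feet) built in year } t}{\text{construction employees in year } t}
\]

If that ratio is flat or falling across five decades, each worker is delivering no more housing per year than a counterpart did around 1970.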

Chad Syverson, one of the authors, admits he is still trying to pinpoint the reason—“It’s probably a few things.” While he says it’s difficult to quantify the specific impact of various factors on productivity, including the effects of regulatory red tape and political fights that often delay construction, “part of the industry’s problem is its own operational inefficiency,” he says. “There’s no doubt about it.” In other words, the industry just isn’t very innovative.

The lack of productivity in construction over the last half-century, at a time when all other sectors grew dramatically, is “really amazing,” he says—and not in a good way.

US manufacturing productivity, in contrast, continued growing at around 2% to 3% annually over the same period. Auto workers, as a result, now produce far more cars than they once did, leading to cheaper vehicles if you adjust for inflation (and, by most measures, safer and better ones).

Productivity in construction is not just a US problem, according to the McKinsey Global Institute, which has tracked the issue for nearly a decade. Not all countries are faring as badly as the US, but worldwide construction productivity has been flat over the last few decades, says Jan Mischke, who heads the McKinsey work.

Beyond adding to the costs and threatening the financial viability of many planned projects, Mischke says, the lack of productivity is “reflected in all the mess, time and cost overruns, concerns about quality, rework, and all the things that everyone who has ever built anything will have seen.” 

The nature of construction work can make it difficult to improve longstanding processes and introduce new technologies, he says: “Most other sectors become better over time by doing the same thing twice or three times or 3 million times. They learn and improve. All that is essentially missing in construction, where every single project starts from scratch and reinvents the wheel.”

Mischke also sees another reason for the industry’s lack of productivity: the “misaligned incentives” of the various players, who often make more money the longer a project takes.

Though the challenges are endemic to the business, Mischke adds that builders can take steps to overcome them by moving to digital technologies, implementing more standardized processes, and improving the efficiency of their business practices.

It’s an urgent problem to solve as many countries race to build housing, expand clean-energy capabilities, and update infrastructure like roads and airports. In their latest report, the McKinsey researchers warn of the dangers if productivity doesn’t improve: “The net-zero transition may be delayed, growth ambitions may be deferred, and countries may struggle to meet the infrastructure and housing needs for their populations.”

But the report also says there’s a flip side to the lack of progress in much of the industry: Individual companies that begin to improve their efficiency could gain a huge competitive advantage.

Building on the data

When Jit Kee Chin joined Suffolk Construction as its chief data officer in 2017, the title was unique in the industry. But Chin, armed with a PhD in experimental physics from MIT and a 10-year stint at McKinsey, brought to the large Boston-based firm the kind of technical and management expertise often missing from construction companies. And she recognized that large construction projects—including the high-rise apartment buildings and sprawling data centers that Suffolk often builds—generate vast amounts of useful data.

At the time, much of the data was siloed; information on the progress of a project was in one place, scheduling in another, and safety data and reports in yet another. “The systems didn’t talk to each other, and it was very difficult to cross-correlate,” says Chin. Getting all the data together so it could be understood and utilized across the business was an early task.

“Almost all construction companies are talking about how to better use their data now,” says Chin, who is now Suffolk’s CTO. Since her hiring, she adds, “a couple others have even appointed chief data officers.” But despite such encouraging signs, she sees the effort to improve productivity in the industry as still very much a work in progress.

One ongoing and obvious target: the numerous documents that are constantly being revised as they move along from architect to engineers to subcontractors. It’s the lifeblood of any construction project, and Chin says the process “is by no means seamless.” Architects and subcontractors sometimes use different software; meanwhile, the legally binding documents spelling out details of a project are still circulated as printouts. A more frictionless flow of information among the multitude of players is critical to better coordinate the complex building process.

Ultimately, though, building is a physical activity. And while automation has largely been absent from building trades, robots are finally cheap enough to be attractive to builders, especially companies facing a shortage of workers. “The cost of off-the-shelf robotic components has come down to a point where it is feasible to think of simple robots automating a very repetitive task,” says Chin. And advances in robotic image recognition, lidar, AI, and dexterity, she says, mean robots are starting to be able to safely navigate construction sites.

One step in construction where digital designs meet the physical world is the process of laying out blueprints for walls and other structures on the floor of a building. It’s an exacting, time-consuming manual practice, prone to errors.

The Dusty Robotics field printer marks the layout for walls and other structures.
DUSTY ROBOTICS

And startups like Dusty Robotics are betting it’s an almost perfect application for a Roomba-like robot. Tessa Lau, its CEO, recalls that when she researched the industry before founding the company in 2018, she was struck by seeing “people on their hands and knees snapping chalk lines.”

Based in Silicon Valley, the company builds a box-shaped machine that scoots about a site on sturdy wheels to mark the layout. Though the company often markets it as a field printer to allay any fears about automation, it’s an AI-powered robot with advanced sensors and software that plan and guide its movements.

Not only does the robot automate a critical job, but because that task is so central in the construction process, it also helps open a digital window into the overall workflow of a project.

A history lesson

Whatever the outcome of the upcoming election, don’t hold your breath waiting for home prices to fall; even if we do build more (or somehow decrease demand), it will probably take years for the supply to catch up. But the political spotlight on housing affordability could be a rare opportunity to focus on the broad problem of construction productivity.  

While some critics have argued that Harris’s plan is too vague and lacks the ambition required to solve the housing crisis, her message that we need to build more and faster is the right one. “It takes too long and it costs too much to build. Whether it’s a new housing development, a new factory, or a new bridge, projects take too long to go from concept to reality,” Harris said in a speech in late September. Then she asked: “You know how long it took to build [the Empire State Building]?”

Harris stresses cutting red tape to unleash a building boom. That’s critical, but it’s only part of the long-term answer. The construction of the famous New York City skyscraper took just over a year in 1931—a feat that provides valuable clues to how the industry itself can finally increase its productivity.

The explanation for why it was built so quickly has less to do with new technologies—in fact, the engineers mostly opted for processes and materials that were familiar and well-tested at the time—and more to do with how the project leaders managed every aspect of the design and construction process for speed and efficiency. The activity of the thousands of workers was carefully scheduled and tracked, and the workflow was highly choreographed to minimize delays. Even the look of the 1,250-foot building was largely a result of choosing the fastest and simplest way to build.

To a construction executive like Suffolk’s Chin, who estimates it would take at least four years to construct such a building today, the lessons of the Empire State Building resonate, especially the operational discipline and the urgency to finish the structure as quickly as possible. “It’s a stark difference when you think about how much time it took and how much time it would take to build that building now,” she says.

If we want an affordable future, the construction business needs to recapture that sense of urgency and efficiency. To do so, the industry will need to change the way it operates and alter its incentive structures; it will need to incorporate the right mix of automation and find financial models that will transform outdated business practices. The good news is that advances in data science, automation, and AI are offering companies new opportunities to do just that.

The hope, then, is that capitalism will do capitalism. Innovative firms will (hopefully) build more cheaply and faster, boost their profits, and become more competitive. Such companies will prosper, and others will begin to mimic the early adopters, investing in the new technologies and business models. In other words, the reality of seeing some builders profit by using data and automation will finally help drag the construction industry into the modern digital age.

Inside a fusion energy facility

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

On an overcast day in early October, I picked up a rental car and drove to Devens, Massachusetts, to visit a hole in the ground.

Commonwealth Fusion Systems has raised over $2 billion in funding since it spun out of MIT in 2018, all in service of building the first commercial fusion reactor. The company has ambitions to build power plants, but currently the goal is to finish putting together its first demonstration system, the SPARC reactor. The plan is to have it operating by 2026.

I visited the company’s site recently to check in on progress. Things are starting to come together around the hole in the floor where SPARC will eventually be installed. Looking around the site, I found it becoming easier to imagine a future that could actually include fusion energy. But there’s still a lot of work left to do. 

Fusion power has been a dream for decades. The idea is simple: Slam atoms together and use the energy that’s released to power the world. The systems would require small amounts of abundant fuel and wouldn’t produce dangerous waste. The problem is, executing this vision has been much slower than many had hoped.

Commonwealth is one of the leaders in commercial fusion. My colleague James Temple wrote a feature story, published in early 2022, about the company’s attempts to bring the technology to reality. At the time, the Devens location was still a muddy construction site, with the steel and concrete just starting to go into the ground.

Things are much more polished now—when I visited earlier this month, I pulled into one of the designated visitor parking spots and checked in at a reception desk in a bustling office building before beginning my tour. There were two main things to see: the working magnet factory and the cluster of buildings that will house and support the SPARC reactor.

We started in the magnet factory. SPARC is a tokamak, a device relying on powerful magnets to contain the plasma where fusion reactions take place. There will be three different types of magnets in SPARC, all arranged to keep the plasma in position and moving around in the right way.

The company is making its own magnets, wound from tape made of a high-temperature superconductor, which generates a magnetic field when an electric current runs through it. SPARC will contain thousands of miles’ worth of this tape in its magnets. In the factory, specialized equipment winds up the tape and tucks it into metal cases, which are then stacked together and welded into protective shells.

After our quick loop around the magnet factory, I donned a helmet, neon vest, and safety glasses and got a short safety talk that included a stern warning to not stare directly at any welding. Then we walked across a patio and down a gravel driveway to the main complex of buildings that will house the SPARC reactor.

Except for some remaining plywood stairs and dust, the complex appeared to be nearly completed. There’s a huge wall of glass on the front of the building—a feature intended to show that the company is open with the community about the goings-on inside, as my tour guide, chief marketing officer Joe Paluska, put it.  

Four main buildings surround the central tokamak hall. These house support equipment needed to cool down the magnets, heat up the plasma, and measure conditions in the reactor. Most of these big, industrial systems that support SPARC are close to being ready to turn on or are actively being installed, explained Alex Creely, director of tokamak operations, in a call after my tour.

When it was finally time to see the tokamak hall that will house SPARC, we had to take a winding route to get there. A maze of concrete walls funneled us to the entrance, and I lost track of my left and right turns. Called the labyrinth, this is a safety feature, designed to keep stray neutrons from escaping the hall once the reactor is operating. (Neutrons are a form of radiation, and enough exposure can be dangerous to humans.) 

Finally, we stepped into a cavernous space. From our elevated vantage point on a metal walkway, we peered down into a room with gleaming white floors and equipment scattered around the perimeter. At the center was a hole, covered with a tarp and surrounded by bright-yellow railings. That empty slot is where the star of the show, SPARC, will eventually be installed.

The tokamak hall at Commonwealth Fusion Systems will house the company’s SPARC reactor.
COMMONWEALTH FUSION SYSTEMS

While there’s still very little tokamak in the tokamak hall right now, Commonwealth has an ambitious timeline planned: The goal is to have SPARC running and the first plasma in the reactor by 2026. The company plans to demonstrate that it can produce more energy in the reactor than is needed to power it (a milestone known as Q>1 in the fusion world) by 2027.
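For anyone unfamiliar with the shorthand, Q is the standard fusion gain factor: the ratio of the fusion power produced in the plasma to the external heating power pumped into it. Using the conventional notation (not Commonwealth’s own), the milestone is

\[
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}} \;>\; 1
\]

Note that Q refers to the plasma itself; it is not a claim that the whole facility will generate more electricity than it consumes, since the magnets, cooling systems, and other equipment draw additional power.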

When we published our 2022 story on Commonwealth, the plan was to flip on the reactor and reach the Q>1 milestone by 2025, so the timeline has slipped. It’s not uncommon for big projects in virtually every industry to take longer than expected. But there’s an especially long and fraught history of promises and missed milestones in fusion. 

Commonwealth has certainly made progress over the past few years, and it’s getting easier to imagine the company actually turning on a reactor and meeting the milestones the field has been working toward for decades. But there’s still a tokamak-shaped hole in suburban Massachusetts waiting to be filled. 


Now read the rest of The Spark

Related reading

Read our 2022 feature on Commonwealth Fusion Systems and its path to commercializing fusion energy here

In late 2022, a reactor at a national lab in the US generated more energy than was put in, a first for the industry. Here’s what meeting that milestone actually means for clean energy

There’s still a lot of research to be done in fusion—here’s what’s coming next

Another company called Helion says its first fusion power plant is five years away. Experts are skeptical, to say the least.


Another thing

Generative AI will add to our growing e-waste problem. A new study estimates that AI could add up to 5 million tons of e-waste by 2030. 

It’s a small fraction of the total, but there’s still good reason to think carefully about how we handle discarded servers and high-performance computing equipment, according to experts. Read more in my latest story

Keeping up with climate  

New York City will buy 10,000 induction stoves from a startup called Copper. The stoves will be installed in public housing in the city. (Heatmap)

Demand is growing for electric cabs in India, but experts say there’s not nearly enough supply to meet it. (Rest of World)

Pivot Bio aims to tweak the DNA of bacteria so they can help deliver nutrients to plants. The company is trying to break into an industry dominated by massive agriculture and chemical companies. (New York Times)

→ Check out our profile of Pivot Bio, which was one of our 15 Climate Tech Companies to Watch this year. (MIT Technology Review)

At least 62 people are dead and many more are missing in dangerous flooding across Spain. (Washington Post)

A massive offshore wind lease sale this week offered up eight patches of ocean off the coast of Maine in the US. Four sold, opening the door for up to 6.8 gigawatts of additional offshore wind power. (Canary Media)

Climate change contributed to the deaths of 38,000 people across Europe in the summer of 2022, according to a new study. (The Guardian)

→ The legacy of Europe’s heat waves will be more air-conditioning, and that could be its own problem. (MIT Technology Review)

There are nearly 9,000 public fast-charging sites in the US, and a surprising wave of installations in the Midwest and Southeast. (Bloomberg)

Some proposed legislation aims to ban factory farming, but determining what that category includes is way more complicated than you might think. (Ambrook Research)