Here’s our forecast for AI this year

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In December, our small but mighty AI reporting team was asked by our editors to make a prediction: What’s coming next for AI? 

In 2024, AI contributed both to Nobel Prize–winning chemistry breakthroughs and a mountain of cheaply made content that few people asked for but that nonetheless flooded the internet. Take AI-generated Shrimp Jesus images, among other examples. There was also a spike in greenhouse-gas emissions last year that can be attributed partly to the surge in energy-intensive AI. Our team got to thinking about how all of this will shake out in the year to come. 

As we look ahead, certain things are a given. We know that agents—AI models that do more than just converse with you and can actually go off and complete tasks for you—are the focus of many AI companies right now. Building them will raise lots of privacy questions about how much of our data and preferences we’re willing to give up in exchange for tools that will (allegedly) save us time. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight. 

We instead wanted to focus on less obvious predictions. Mine were about how AI companies that previously shunned work in defense and national security might be tempted this year by contracts from the Pentagon, and how Donald Trump’s attitudes toward China could escalate the global race for the best semiconductors. Read the full list.

What’s not evident in that story is that the other predictions were not so clear-cut. Arguments ensued about whether or not 2025 will be the year of intimate relationships with chatbots, AI throuples, or traumatic AI breakups. To witness the fallout from our team’s lively debates (and hear more about what didn’t make the list), you can join our upcoming LinkedIn Live this Thursday, January 16. I’ll be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee. 

There are a couple other things I’ll be watching closely in 2025. One is how little the major AI players—namely OpenAI, Microsoft, and Google—are disclosing about the environmental burden of their models. Lots of evidence suggests that asking an AI model like ChatGPT about knowable facts, like the capital of Mexico, consumes much more energy (and releases far more emissions) than simply asking a search engine. Nonetheless, OpenAI’s Sam Altman in recent interviews has spoken positively about the idea of ChatGPT replacing the googling that we’ve all learned to do in the past two decades. It’s already happening, in fact. 

The environmental cost of all this will be top of mind for me in 2025, as will the possible cultural cost. We will go from searching for information by clicking links and (hopefully) evaluating sources to simply reading the responses that AI search engines serve up for us. As our editor in chief, Mat Honan, said in his piece on the subject, “Who wants to have to learn when you can just know?”


Now read the rest of The Algorithm

Deeper Learning

What’s next for our privacy?

The US Federal Trade Commission has taken a number of enforcement actions against data brokers, some of which have  tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent. Though limited in nature, these actions may offer some new and improved protections for Americans’ personal information. 

Why it matters: A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. Unfortunately, that’s not going to happen anytime soon. Enforcement actions from agencies like the FTC might be the next best thing in the meantime. Read more in Eileen Guo’s excellent story here.

Bits and Bytes

Meta trained its AI on a notorious piracy database

New court records, Wired reports, reveal that Meta used “a notorious so-called shadow library of pirated books that originated in Russia” to train its generative AI models. (Wired)

OpenAI’s top reasoning model struggles with the NYT Connections game

The game requires players to identify how groups of words are related. OpenAI’s o1 reasoning model had a hard time. (Mind Matters)

Anthropic’s chief scientist on 5 ways agents will be even better in 2025

The AI company Anthropic is now worth $60 billion. The company’s cofounder and chief scientist, Jared Kaplan, shared how AI agents will develop in the coming year. (MIT Technology Review)

A New York legislator attempts to regulate AI with a new bill

This year, a high-profile bill in California to regulate the AI industry was vetoed by Governor Gavin Newsom. Now, a legislator in New York is trying to revive the effort in his own state. (MIT Technology Review)

What’s next for nuclear power

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology.

As electricity demand rises around the world for everything from electric vehicles to data centers, there’s renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase.

That’s significant because nuclear power promises a steady source of electricity as climate change pushes global temperatures to new heights and energy demand surges around the world. Here’s what to expect next for the industry.  

A global patchwork

The past two years have seen a new commitment to nuclear power around the globe, including an agreement at the UN climate talks that 31 countries pledged to triple global nuclear energy capacity by 2050. However, the prospects for the nuclear industry differ depending on where you look.

The US is currently home to the highest number of operational nuclear reactors in the world. If its specific capacity were to triple, that would mean adding a somewhat staggering 200 gigawatts of new nuclear energy capacity to the current total of roughly 100 gigawatts. And that’s in addition to replacing any expected retirements from a relatively old fleet. But the country has come to something of a stall. A new reactor at the Vogtle plant in Georgia came online last year (following significant delays and cost overruns), but there are no major conventional reactors under construction or in review by regulators in the US now.

This year also brings an uncertain atmosphere for nuclear power in the US as the incoming Trump administration takes office. While the technology tends to have wide political support, it’s possible that policies like tariffs could affect the industry by increasing the cost of building materials like steel, says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy.

Globally, most reactors under construction or in planning phases are in Asia, and growth in China is particularly impressive. The country’s first nuclear power plant connected to the grid in 1991, and in just a few decades it has built the third-largest fleet in the world, after only France and the US. China has four large reactors likely to come online this year, and another handful are scheduled for commissioning in 2026.

This year will see both Bangladesh and Turkey start up their first nuclear reactors. Egypt also has its first nuclear plant under construction, though it’s not expected to undergo commissioning for several years.  

Advancing along

Commercial nuclear reactors on the grid today, and most of those currently under construction, generally follow a similar blueprint: The fuel that powers the reactor is low-enriched uranium, and water is used as a coolant to control the temperature inside.

But newer, advanced reactors are inching closer to commercial use. A wide range of these so-called Generation IV reactors are in development around the world, all deviating from the current blueprint in one way or another in an attempt to improve safety, efficiency, or both. Some use molten salt or a metal like lead as a coolant, while others use a more enriched version of uranium as a fuel. Often, there’s a mix-and-match approach with variations on the fuel type and cooling methods.

The next couple of years will be crucial for advanced nuclear technology as proposals and designs move toward the building process. “We’re watching paper reactors turn into real reactors,” says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank.

Much of the funding and industrial activity in advanced reactors is centered in the US, where several companies are close to demonstrating their technology.

Kairos Power is building reactors cooled by molten salt, specifically a fluorine-containing material called Flibe. The company received a construction permit from the US Nuclear Regulatory Commission (NRC) for its first demonstration reactor in late 2023, and a second permit for another plant in late 2024. Construction will take place on both facilities over the next few years, and the plan is to complete the first demonstration facility in 2027.

TerraPower is another US-based company working on Gen IV reactors, though the design for its Natrium reactor uses liquid sodium as a coolant. The company is taking a slightly different approach to construction, too: by separating the nuclear and non-nuclear portions of the facility, it was able to break ground on part of its site in June of 2024. It’s still waiting for construction approval from the NRC to begin work on the nuclear side, which the company expects to do by 2026.

A US Department of Defense project could be the first in-progress Gen IV reactor to generate electricity, though it’ll be at a very small scale. Project Pele is a transportable microreactor being manufactured by BWXT Advanced Technologies. Assembly is set to begin early this year, with transportation to the final site at Idaho National Lab expected in 2026.

Advanced reactors certainly aren’t limited to the US. Even as China is quickly building conventional reactors, the country is starting to make waves in a range of advanced technologies as well. Much of the focus is on high-temperature gas-cooled reactors, says Lorenzo Vergari, an assistant professor at the University of Illinois Urbana-Champaign. These reactors use helium gas as a coolant and reach temperatures over 1,500 °C, much higher than other designs.

China’s first commercial demonstration reactor of this type came online in late 2023, and a handful of larger reactors that employ the technology are currently in planning phases or under construction.

Squeezing capacity

It will take years, or even decades, for even the farthest-along advanced reactor projects to truly pay off with large amounts of electricity on the grid. So amid growing global electricity demand around the world, there’s renewed interest in getting as much power out of existing nuclear plants as possible.

One trend that’s taken off in countries with relatively old nuclear fleets is license extension. While many plants built in the 20th century were originally licensed to run for 40 years, there’s no reason many of them can’t run for longer if they’re properly maintained and some equipment is replaced.

Regulators in the US have granted 20-year extensions to much of the fleet, bringing the expected lifetime of many to 60 years. A handful of reactors have seen their licenses extended even beyond that, to 80 years. Countries including France and Spain have also recently extended licenses of operating reactors beyond their 40-year initial lifetimes. Such extensions are likely to continue, and the next few years could see more reactors in the US relicensed for up to 80-year lifetimes.

In addition, there’s interest in reopening shuttered plants, particularly those that have shut down recently for economic reasons. Palisades Nuclear Plant in Michigan is the target of one such effort, and the project secured a $1.52 billion loan from the US Department of Energy to help with the costs of reviving it. Holtec, the plant’s owner and operator, is aiming to have the facility back online in 2025. 

However, the NRC has reported possible damage to some of the equipment at the plant, specifically the steam generators. Depending on the extent of the repairs needed, the additional cost could potentially make reopening uneconomical, White says.

A reactor at the former Three Mile Island Nuclear Facility is another target. The site’s owner says the reactor could be running again by 2028, though battles over connecting the plant to the grid could play out in the coming year or so. Finally, the owners of the Duane Arnold Energy Center in Iowa are reportedly considering reopening the nuclear plant, which shut down in 2020.

Big Tech’s big appetite

One of the factors driving the rising appetite for nuclear power is the stunning growth of AI, which relies on data centers requiring a huge amount of energy. Last year brought new interest from tech giants looking to nuclear as a potential solution to the AI power crunch.

Microsoft had a major hand in plans to reopen the reactor at Three Mile Island—the company signed a deal in 2024 to purchase power from the facility if it’s able to reopen. And that’s just the beginning.

Google signed a deal with Kairos Power in October 2024 that would see the startup build up to 500 megawatts’ worth of power plants by 2035, with Google purchasing the energy. Amazon went one step further than these deals, investing directly in X-energy, a company building small modular reactors. The money will directly fund the development, licensing, and construction of a project in Washington.

Funding from big tech companies could be a major help in keeping existing reactors running and getting advanced projects off the ground, but many of these commitments so far are vague, says Good Energy Collective’s Lovering. Major milestones to watch for include big financial commitments, contracts signed, and applications submitted to regulators, she says.

“Nuclear had an incredible 2024, probably the most exciting year for nuclear in many decades,” says Staffan Qvist, a nuclear engineer and CEO of Quantified Carbon, an international consultancy focused on decarbonizing energy and industry. Deploying it at the scale required will be a big challenge, but interest is ratcheting up. As he puts it, “There’s a big world out there hungry for power.”

Inside the strange limbo facing millions of IVF embryos

Lisa Holligan already had two children when she decided to try for another baby. Her first two pregnancies had come easily. But for some unknown reason, the third didn’t. Holligan and her husband experienced miscarriage after miscarriage after miscarriage.

Like many other people struggling to conceive, Holligan turned to in vitro fertilization, or IVF. The technology allows embryologists to take sperm and eggs and fuse them outside the body, creating embryos that can then be transferred into a person’s uterus.

The fertility clinic treating Holligan was able to create six embryos using her eggs and her husband’s sperm. Genetic tests revealed that only three of these were “genetically normal.” After the first was transferred, Holligan got pregnant. Then she experienced yet another miscarriage. “I felt numb,” she recalls. But the second transfer, which took place several months later, stuck. And little Quinn, who turns four in February, was the eventual happy result. “She is the light in our lives,” says Holligan.

Holligan, who lives in the UK, opted to donate her “genetically abnormal” embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn’t know what to do with it.

Should she and her husband donate it to another family? Destroy it? “It’s almost four years down the line, and we still haven’t done anything with [the embryo],” she says. The clinic hasn’t been helpful—Holligan doesn’t remember talking about what to do with leftover embryos at the time, and no one there has been in touch with her for years, she says.

Holligan’s embryo is far from the only one in this peculiar limbo. Millions—or potentially tens of millions—of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates. 

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, they’re human cells and nothing else. To others, they’re morally equivalent to children. Many feel they exist somewhere between those two extremes.

There are debates, too, over how we should classify embryos in law. Are they property? Do they have a legal status? These questions are important: There have been multiple legal disputes over who gets to use embryos, who is responsible if they are damaged, and who gets the final say over their fate. And the answers will depend not only on scientific factors, but also on ethical, cultural, and religious ones.  

The options currently available to people with leftover IVF embryos mirror this confusion. As a UK resident, Holligan can choose to discard her embryos, make them available to other prospective parents, or donate them for research. People in the US can also opt for “adoption,” “placing” their embryos with families they get to choose. In Germany, people are not typically allowed to freeze embryos at all. And in Italy, embryos that are not used by the intended parents cannot be discarded or donated. They must remain frozen, ostensibly forever. 

While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? 

Meanwhile, many of these same people are trying to find ways to bring down the total number of embryos in storage. Maintenance costs are high. Some clinics are running out of space. And with a greater number of embryos in storage, there are more opportunities for human error. They are grappling with how to get a handle on the growing number of embryos stuck in storage with nowhere to go.

The embryo boom

There are a few reasons why this has become such a conundrum. And they largely come down to an increasing demand for IVF and improvements in the way it is practiced. “It’s a problem of our own creation,” says Pietro Bortoletto, a reproductive endocrinologist at Boston IVF in Massachusetts. IVF has only become as successful as it is today by “generating lots of excess eggs and embryos along the way,” he says. 

To have the best chance of creating healthy embryos that will attach to the uterus and grow in a successful pregnancy, clinics will try to collect multiple eggs. People who undergo IVF will typically take a course of hormone injections to stimulate their ovaries. Instead of releasing a single egg that month, they can expect to produce somewhere between seven and 20 eggs. These eggs can be collected via a needle that passes through the vagina and into the ovaries. The eggs are then taken to a lab, where they are introduced to sperm. Around 70% to 80% of IVF eggs are successfully fertilized to create embryos.

The embryos are then grown in the lab. After around five to seven days an embryo reaches a stage of development at which it is called a blastocyst, and it is ready to be transferred to a uterus. Not all IVF embryos reach this stage, however—only around 30% to 50% of them make it to day five. This process might leave a person with no viable embryos. It could also result in more than 10, only one of which is typically transferred in each pregnancy attempt. In a typical IVF cycle, one embryo might be transferred to the person’s uterus “fresh,” while any others that were created are frozen and stored.

IVF success rates have increased over time, in large part thanks to improvements in this storage technology. A little over a decade ago, embryologists tended to use a “slow freeze” technique, says Bortoletto, and many embryos didn’t survive the process. Embryos are now vitrified instead, using liquid nitrogen to rapidly cool them from room temperature to -196 °C in less than two seconds. Vitrification essentially turns all the water in the embryos into a glasslike state, avoiding the formation of damaging ice crystals. 

Now, clinics increasingly take a “freeze all” approach, in which they cryopreserve all the viable embryos and don’t start transferring them until later. In some cases, this is so that the clinic has a chance to perform genetic tests on the embryo they plan to transfer.

An assortment of sperm and embryos, preserved in liquid nitrogen.
ALAMY

Once a lab-grown embryo is around seven days old, embryologists can remove a few cells for preimplantation genetic testing (PGT), which screens for genetic factors that might make healthy development less likely or predispose any resulting children to genetic diseases. PGT is increasingly popular in the US—in 2014, it was used in 13% of IVF cycles, but by 2016, that figure had increased to 27%. Embryos that undergo PGT have to be frozen while the tests are run, which typically takes a week or two, says Bortoletto: “You can’t continue to grow them until you get those results back.”

And there doesn’t seem to be a limit to how long an embryo can stay in storage. In 2022, a couple in Oregon had twins who developed from embryos that had been frozen for 30 years.

Put this all together, and it’s easy to see how the number of embryos in storage is rocketing. We’re making and storing more embryos than ever before. When you combine that with the growing demand for IVF, which is increasing in use by the year, perhaps it’s not surprising that the number of embryos sitting in storage tanks is estimated to be in the millions.

I say estimated, because no one really knows how many there are. In 2003, the results of a survey of fertility clinics in the US suggested that there were around 400,000 in storage. Ten years later, in 2013, another pair of researchers estimated that, in total, around 1.4 million embryos had been cryopreserved in the US. But Alana Cattapan, now a political scientist at the University of Waterloo in Ontario, Canada, and her colleagues found flaws in the study and wrote in 2015 that the number could be closer to 4 million.  

That was a decade ago. When I asked embryologists what they thought the number might be in the US today, I got responses between 1 million and 10 million. Bortoletto puts it somewhere around 5 million.

Globally, the figure is much higher. There could be tens of millions of embryos, invisible to the naked eye, kept in a form of suspended animation. Some for months, years, or decades. Others indefinitely.

Stuck in limbo

In theory, people who have embryos left over from IVF have a few options for what to do with them. They could donate the embryos for someone else to use. Often this can be done anonymously (although genetic tests might later reveal the biological parents of any children that result). They could also donate the embryos for research purposes. Or they could choose to discard them. One way to do this is to expose the embryos to air, causing the cells to die.

Studies suggest that around 40% of people with cryopreserved embryos struggle to make this decision, and that many put it off for five years or more. For some people, none of the options are appealing.

In practice, too, the available options vary greatly depending on where you are. And many of them lead to limbo.

Take Spain, for example, which is a European fertility hub, partly because IVF there is a lot cheaper than in other Western European countries, says Giuliana Baccino, managing director of New Life Bank, a storage facility for eggs and sperm in Buenos Aires, Argentina, and vice chair of the European Fertility Society. Operating costs are low, and there’s healthy competition—there are around 330 IVF clinics operating in Spain. (For comparison, there are around 500 IVF clinics in the US, which has a population almost seven times greater.)

Baccino, who is based in Madrid, says she often hears of foreign patients in their late 40s who create eight or nine embryos for IVF in Spain but end up using only one or two of them. They go back to their home countries to have their babies, and the embryos stay in Spain, she says. These individuals often don’t come back for their remaining embryos, either because they have completed their families or because they age out of IVF eligibility (Spanish clinics tend not to offer the treatment to people over 50). 

Doctors hands removing embryo samples from cryogenic storage
An embryo sample is removed from cryogenic storage.
GETTY IMAGES

In 2023, the Spanish Fertility Society estimated that there were 668,082 embryos in storage in Spain, and that around 60,000 of them were “in a situation of abandonment.” In these cases the clinics might not be able to reach the intended parents, or might not have a clear directive from them, and might not want to destroy any embryos in case the patients ask for them later. But Spanish clinics are wary of discarding embryos even when they have permission to do so, says Baccino. “We always try to avoid trouble,” she says. “And we end up with embryos in this black hole.”

This happens to embryos in the US, too. Clinics can lose touch with their patients, who may move away or forget about their remaining embryos once they have completed their families. Other people may put off making decisions about those embryos and stop communicating with the clinic. In cases like these, clinics tend to hold onto the embryos, covering the storage fees themselves.

Nowadays clinics ask their patients to sign contracts that cover long-term storage of embryos—and the conditions of their disposal. But even with those in hand, it can be easier for clinics to leave the embryos in place indefinitely. “Clinics are wary of disposing of them without explicit consent, because of potential liability,” says Cattapan, who has researched the issue. “People put so much time, energy, money into creating these embryos. What if they come back?”

Bortoletto’s clinic has been in business for 35 years, and the handful of sites it operates in the US have a total of over 47,000 embryos in storage, he says. “Our oldest embryo in storage was frozen in 1989,” he adds. 

Some people may not even know where their embryos are. Sam Everingham, who founded and directs Growing Families, an organization offering advice on surrogacy and cross-border donations, traveled with his partner from their home in Melbourne, Australia, to India to find an egg donor and surrogate back in 2009. “It was a Wild West back then,” he recalls. Everingham and his partner used donor eggs to create eight embryos with their sperm.

Everingham found the experience of trying to bring those embryos to birth traumatic. Baby Zac was stillborn. Baby Ben died at seven weeks. “We picked ourselves up and went again,” he recalls. Two embryo transfers were successful, and the pair have two daughters today.

But the fate of the rest of their embryos is unclear. India’s government decided to ban commercial surrogacy for foreigners in 2015, and Everingham lost track of where they are. He says he’s okay with that. As far as he’s concerned, those embryos are just cells.

He knows not everyone feels the same way. A few days before we spoke, Everingham had hosted a couple for dinner. They had embryos in storage and couldn’t agree on what to do with them. “The mother … wanted them donated to somebody,” says Everingham. Her husband was very uncomfortable with the idea. “[They have] paid storage fees for 14 years for those embryos because neither can agree on what to do with them,” says Everingham. “And this is a very typical scenario.”

Lisa Holligan’s experience is similar. Holligan thought she’d like to donate her last embryo to another person—someone else who might have been struggling to conceive. “But my husband and I had very different views on it,” she recalls. He saw the embryo as their child and said he wouldn’t feel comfortable with giving it up to another family. “I started having these thoughts about a child coming to me when they’re older, saying they’ve had a terrible life, and [asking] ‘Why didn’t you have me?’” she says.

After all, her daughter Quinn began as an embryo that was in storage for months. “She was frozen in time. She could have been frozen for five years like [the leftover] embryo and still be her,” she says. “I know it sounds a bit strange, but this embryo could be a child in 20 years’ time. The science is just mind-blowing, and I think I just block it out. It’s far too much to think about.”

No choice at all

Choosing the fate of your embryos can be difficult. But some people have no options at all.

This is the case in Italy, where the laws surrounding assisted reproductive technology have grown increasingly restrictive. Since 2004, IVF has been accessible only to heterosexual couples who are either married or cohabiting. Surrogacy has also been prohibited in the country for the last 20 years, and in 2024, it was made a “universal crime.” The move means Italians can be prosecuted for engaging in surrogacy anywhere in the world, a position Italy has also taken on the crimes of genocide and torture, says Sara Dalla Costa, a lawyer specializing in assisted reproduction and an IVF clinic manager at Instituto Bernabeu on the outskirts of Venice.

The law surrounding leftover embryos is similarly inflexible. Dalla Costa says there are around 900,000 embryos in storage in Italy, basing the estimate on figures published in 2021 and the number of IVF cycles performed since then. By law, these embryos cannot be discarded. They cannot be donated to other people, and they cannot be used for research. 

Even when genetic tests show that the embryo has genetic features making it “incompatible with life,” it must remain in storage, forever, says Dalla Costa. 

“There are a lot of patients that want to destroy embryos,” she says. For that, they must transfer their embryos to Spain or other countries where it is allowed.

Even people who want to use their embryos may “age out” of using them. Dalla Costa gives the example of a 48-year-old woman who undergoes IVF and creates five embryos. If the first embryo transfer happens to result in a successful pregnancy, the other four will end up in storage. Once she turns 50, this woman won’t be eligible for IVF in Italy. Her remaining embryos become stuck in limbo. “They will be stored in our biobanks forever,” says Dalla Costa.

Dalla Costa says she has “a lot of examples” of couples who separate after creating embryos together. For many of them, the stored embryos become a psychological burden. With no way of discarding them, these couples are forever connected through their cryopreserved cells. “A lot of our patients are stressed for this reason,” she says.

Earlier this year, one of Dalla Costa’s clients passed away, leaving behind the embryos she’d created with her husband. He asked the clinic to destroy them. In cases like these, Dalla Costa will contact the Italian Ministry of Health. She has never been granted permission to discard an embryo, but she hopes that highlighting cases like these might at least raise awareness about the dilemmas the country’s policies are creating for some people.

Snowflakes and embabies

In Italy, embryos have a legal status. They have protected rights and are viewed almost as children. This sentiment isn’t specific to Italy. It is shared by plenty of individuals who have been through IVF. “Some people call them ‘embabies’ or ‘freezer babies,’” says Cattapan.

It is also shared by embryo adoption agencies in the US. Beth Button is executive director of one such program, called Snowflakes—a division of Nightlight Christian Adoptions agency, which considers cryopreserved embryos to be children, frozen in time, waiting to be born. Snowflakes matches embryo donors, or “placing families,” with recipients, termed “adopting families.” Both parties share their information and essentially get to choose who they donate to or receive from. By the end of 2024, 1,316 babies had been born through the Snowflakes embryo adoption program, says Button. 

Button thinks that far too many embryos are being created in IVF labs around the US. Around 10 years ago, her agency received a donation from a couple that had around 38 leftover embryos to donate. “We really encourage [people with leftover embryos in storage] to make a decision [about their fate], even though it’s an emotional, difficult decision,” she says. “Obviously, we just try to keep [that discussion] focused on the child,” she says. “Is it better for these children to be sitting in a freezer, even though that might be easier for you, or is it better for them to have a chance to be born into a loving family? That kind of pushes them to the point where they’re ready to make that decision.”

Button and her colleagues feel especially strongly about embryos that have been in storage for a long time. These embryos are usually difficult to place, because they are thought to be of poorer quality, or less likely to successfully thaw and result in a healthy birth. The agency runs a program called Open Hearts specifically to place them, along with others that are harder to match for various reasons. People who accept one but fail to conceive are given a shot with another embryo, free of charge.

These nitrogen tanks at New Hope Fertility Center in New York hold tens of thousands of frozen embryos and eggs.
GETTY IMAGES

“We have seen perfectly healthy children born from very old embryos, [as well as] embryos that were considered such poor quality that doctors didn’t even want to transfer them,” says Button. “Right now, we have a couple who is pregnant with [an embryo] that was frozen for 30 and a half years. If that pregnancy is successful, that will be a record for us, and I think it will be a worldwide record as well.”

Many embryologists bristle at the idea of calling an embryo a child, though. “Embryos are property. They are not unborn children,” says Bortoletto. In the best case, embryos create pregnancies around 65% of the time, he says. “They are not unborn children,” he repeats.

Person or property?

In 2020, an unauthorized person allegedly entered an IVF clinic in Alabama and pulled frozen embryos from storage, destroying them. Three sets of intended parents filed suit over their “wrongful death.” A trial court dismissed the claims, but the Alabama Supreme Court disagreed, essentially determining that those embryos were people. The ruling shocked many and was expected to have a chilling effect on IVF in the state, although within a few weeks, the state legislature granted criminal and civil immunity to IVF clinics.

But the Alabama decision is the exception. While there are active efforts in some states to endow embryos with the same legal rights as people, a move that could potentially limit access to abortion, “most of the [legal] rulings in this area have made it very clear that embryos are not people,” says Rich Vaughn, an attorney specializing in fertility law and the founder of the US-based International Fertility Law Group. At the same time, embryos are not just property. “They’re something in between,” says Vaughn. “They’re sort of a special type of property.” 

UK law takes a similar approach: The language surrounding embryos and IVF was drafted with the idea that the embryo has some kind of “special status,” although it was never made entirely clear exactly what that special status is, says James Lawford Davies, a solicitor and partner at LDMH Partners, a law firm based in York, England, that specializes in life sciences. Over the years, the language has been tweaked to encompass embryos that might arise from IVF, cloning, or other means; it is “a bit of a fudge,” says Lawford Davies. Today, the official—if somewhat circular—legal definition in the Human Fertilisation and Embryology Act reads: “embryo means a live human embryo.” 

And while people who use their eggs or sperm to create embryos might view these embryos as theirs, according to UK law, embryos are more like “a stateless bundle of cells,” says Lawford Davies. They’re not quite property—people don’t own embryos. They just have control over how they are used. 

Many legal disputes revolve around who has control. This was the experience of Natallie Evans, who created embryos with her then partner Howard Johnston in the UK in 2001. The couple separated in 2002. Johnston wrote to the clinic to ask that their embryos be destroyed. But Evans, who had been diagnosed with ovarian cancer in 2001, wanted to use them. She argued that Johnston had already consented to their creation, storage, and use and should not be allowed to change his mind. The case eventually made it to the European Court of Human Rights, and Evans lost. The case set a precedent that consent was key and could be withdrawn at any time.

In Italy, on the other hand, withdrawing consent isn’t always possible. In 2021, a case like Natallie Evans’s unfolded in the Italian courts: A woman who wanted to proceed with implantation after separating from her partner went to court for authorization. “She said that it was her last chance to be a mother,” says Dalla Costa. The judge ruled in her favor.

Dalla Costa’s clinics in Italy are now changing their policies to align with this decision. Male partners must sign a form acknowledging that they cannot prevent embryos from being used once they’ve been created.

The US situation is even more complicated, because each state has its own approach to fertility regulation. When I looked through a series of published legal disputes over embryos, I found little consistency—sometimes courts ruled to allow a woman to use an embryo without the consent of her former partner, and sometimes they didn’t. “Some states have comprehensive … legislation; some do not,” says Vaughn. “Some have piecemeal legislation, some have only case law, some have all of the above, some have none of the above.”

The meaning of an embryo

So how should we define an embryo? “It’s the million-dollar question,” says Heidi Mertes, a bioethicist at Ghent University in Belgium. Some bioethicists and legal scholars, including Vaughn, think we’d all stand to benefit from clear legal definitions. 

Risa Cromer, a cultural anthropologist at Purdue University in Indiana, who has spent years researching the field, is less convinced. Embryos exist in a murky, in-between state, she argues. You can (usually) discard them, or transfer them, but you can’t sell them. You can make claims against damages to them, but an embryo is never viewed in the same way as a car, for example. “It doesn’t fit really neatly into that property category,” says Cromer. “But, very clearly, it doesn’t fit neatly into the personhood category either.”

And there are benefits to keeping the definition vague, she adds: “There is, I think, a human need for there to be a wide range of interpretive space for what IVF embryos are or could be.”

That’s because we don’t have a fixed moral definition of what an embryo is. Embryos hold special value even for people who don’t view them as children. They hold potential as human life. They can come to represent a fertility journey—one that might have been expensive, exhausting, and traumatizing.  “Even for people who feel like they’re just cells, it still cost a lot of time, money, [and effort] to get those [cells],” says Cattapan.

“I think it’s an illusion that we might all agree on what the moral status of an embryo is,” Mertes says.

In the meantime, a growing number of embryologists, ethicists, and researchers are working to persuade fertility clinics and their patients not to create or freeze so many embryos in the first place. Early signs aren’t promising, says Baccino. The patients she has encountered aren’t particularly receptive to the idea. “They think, ‘If I will pay this amount for a cycle, I want to optimize my chances, so in my case, no,’” she says. She expects the number of embryos in storage to continue to grow.

Holligan’s embryo has been in storage for almost five years. And she still doesn’t know what to do with it. She tears up as she talks through her options. Would discarding the embryo feel like a miscarriage? Would it be a sad thing? If she donated the embryo, would she spend the rest of her life wondering what had become of her biological child, and whether it was having a good life? Should she hold on to the embryo for another decade in case her own daughter needs to use it at some point?

“The question [of what to do with the embryo] does pop into my head, but I quickly try to move past it and just say ‘Oh, that’s something I’ll deal with at a later time,’” says Holligan. “I’m sure [my husband] does the same.”

The accumulation of frozen embryos is “going to continue this way for some time until we come up with something that fully addresses everyone’s concerns,” says Vaughn. But will we ever be able to do that?

“I’m an optimist, so I’m gonna say yes,” he says with a hopeful smile. “But I don’t know at the moment.”

Mark Zuckerberg and the power of the media

This article first appeared in The Debrief, MIT Technology Review’s weekly newsletter from our editor in chief Mat Honan. To receive it in your inbox every Friday,  sign up here.

On Tuesday last week, Meta CEO Mark Zuckerberg released a blog post and video titled “More Speech and Fewer Mistakes.”  Zuckerberg—whose previous self-acknowledged mistakes include the Cambridge Analytica data scandal, allowing a militia to put out a call to arms on Facebook that presaged two killings in Wisconsin, and helping to fuel a genocide in Myanmar—announced that Meta is done with fact checking in the US, that it will roll back “restrictions” on speech, and is going to start showing people more tailored political content in their feeds.  

“I started building social media to give people a voice,” he said while wearing a $900,000 wristwatch.

While the end of fact checking has gotten most of the attention, the changes to its hateful speech policy are also notable. Among other things, the company will now allow people to call transgender people “it,” or to argue that women are property, or to claim homosexuality is a mental illness. (This went over predictably well with LGBTQ employees at Meta.) Meanwhile, thanks to that “more personalized approach to political content,” it looks like polarization is back on the menu, boys.

Zuckerberg’s announcement was one of the most cynical displays of revisionist history I hope I’ll ever see. As very many people have pointed out, it seems to be little more than an effort to curry favor with the incoming Trump administration—complete with a roll out on Fox and Friends.

I’ll leave it to others right now to parse the specific political implications here (and many people are certainly doing so). Rather, what struck me as so cynical was the way Zuckerberg presented Facebook’s history of fact-checking and content moderation as something he was pressured into doing by the government and media. The reality, of course, is that these were his decisions. He structured Meta so that he has near total control over it. He famously calls the shots, and always has.

Yet in Tuesday’s announcement, Zuckerberg tries to blame others for the policies he himself instituted and endorsed. “Governments and legacy media have pushed to censor more and more,” he said.

He went on: “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”

While I’m not here to defend Meta’s fact checking system, I never thought it was particularly useful or effective, let’s get into the claims that it was done at the behest of the government and “legacy media.”

To start: The US government has never taken any meaningful enforcement actions against Meta whatsoever, and definitely nothing meaningful related to misinformation. Full stop. End of story. Call it a day. Sure, there have been fines and settlements, but for a company the size of Meta, these were mosquitos to be slapped away. Perhaps more significantly, there is an FTC antitrust case working its way through the court, but it again has nothing to do with censorship or fact-checking.

And when it comes to the media, consider the real power dynamics at play. Meta, with a current market cap of $1.54 trillion, is worth more than the combined value of the Walt Disney Company (which owns ABC news), Comcast (NBC), Paramount (CBS), Warner Bros (CNN), the New York Times Company, and Fox Corp (Fox News). In fact, Zuckerberg’s estimated personal net worth is greater than the market cap of any of those single companies.

Meanwhile, Meta’s audience completely dwarfs that of any “legacy media” company. According to the tech giant, it enjoys some 3.29 billion daily active users. Daily! And as the company has repeatedly shown, including in this week’s announcements, it is more than willing to twiddle its knobs to control what that audience sees from the legacy media.

As a result, publishers have long bent the knee to Meta to try and get even slivers of that audience. Remember the pivot to video? Or Instant Articles? Media has spent more than a decade now trying to respond or get ahead of what Facebook says it wants to feature, only for it to change its mind and throttle traffic. The notion that publishers have any leverage whatsoever over Meta is preposterous.

I think it’s useful to go back and look at how the company got here.

Once upon a time Twitter was an actual threat to Facebook’s business. After the 2012 election, for which Twitter was central and Facebook was an afterthought, Zuckerberg and company went hard after news. It created share buttons so people could easily drop content from around the Web into their feeds. By 2014, Zuckerberg was saying he wanted it to be the “perfect personalized newspaper” for everyone in the world. But there were consequences to this. By 2015, it had a fake news epidemic on its hands, which it was well aware of. By the time the election rolled around in 2016, Macedonian teens had famously turned fake news into an arbitrage play, creating bogus pro-Trump news stories expressly to take advantage of the combination of Facebook traffic and Google AdSense dollars. Following the 2016 election, this all blew up in Facebook’s face. And in December of that year, it announced it would begin partnering with fact checkers.

A year later, Zuckerberg went on to say the issue of misinformation was “too important an issue to be dismissive.” Until, apparently, right now.

Zuckerberg elided all this inconvenient history. But let’s be real. No one forced him to hire fact checkers. No one was in a position to even truly pressure him to do so. If that were the case, he would not now be in a position to fire them from behind a desk wearing his $900,000 watch. He made the very choices which he now seeks to shirk responsibility for.

But here’s the thing, people already know Mark Zuckerberg too well for this transparent sucking up to be effective.

Republicans already hate Zuck. Sen. Lindsey Graham has accused him of having blood on his hands. Sen. Josh Hawley forced him to make an awkward apology to the families of children harmed on his platform. Sen. Ted Cruz has, on multiple occasionstorn into him. Trump famously threatened to throw him in prison. But so too do Democrats. Sen. Elizabeth WarrenSen. Bernie Sanders, and AOC have all ripped him. And among the general public, he’s both less popular than Trump and more disliked than Joe Biden. He loses on both counts to Elon Musk.

Tuesday’s announcement ultimately seems little more than pandering for an audience that will never accept him.

And while it may not be successful at winning MAGA over, at least the shamelessness and ignoring all past precedent is fully in character. After all, let’s remember what Mark Zuckerberg was busy doing in 2017:

A photo from Mark Zuckerberg's Instagram page showing the Meta CEO at the Heartland Pride Festival in Omaha Nebraska during his 2017 nationwide listening tour.
Image: Mark Zuckerberg Instagram

Now read the rest of The Debrief

The News

• NVIDIA CEO Jensen Huang’s remarks about quantum computing caused quantum stocks to plummet.

• See our predictions for what’s coming for AI in 2025.

• Here’s what the US is doing to prepare for a bird flu pandemic.

• New York state will try to pass an AI bill similar to the one that died in California.

• EVs are projected to be more than 50 percent of auto sales in China next year, 10 years ahead of targets.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. But this week, I turned the tables a bit and asked some of our editors to grill me about my recent story on the rise of generative search.
Charlotte Jee: What makes you feel so sure that AI search is going to take off?

Mat: I just don’t think there’s any going back. There are definitely problems with it—it can be wild with inaccuracies when it cobbles those answers together. But I think, for the most part it is, to refer to my old colleague Rob Capps’ phenomenal essay, good enough. And I think that’s what usually wins the day. Easy answers that are good enough. Maybe that’s a sad statement, but I think it’s true.

Will Douglas Heaven: For years I’ve been asked if I think AI will take away my job and I always scoffed at the idea. Now I’m not so sure. I still don’t think AI is about to do my job exactly. But I think it might destroy the business model that makes my job exist. And that’s entirely down to this reinvention of search. As a journalist—and editor of the magazine that pays my bills—how worried are you? What can you—we—do about it?

Mat: Is this a trap? This feels like a trap, Will. I’m going to give you two answers here. I think we, as in MIT Technology Review, are relatively insulated here. We’re a subscription business. We’re less reliant on traffic than most. We’re also technology wonks, who tend to go deeper than what you might find in most tech pubs, which I think plays to our benefit.

But I am worried about it and I do think it will be a problem for us, and for others. One thing Rand Fishkin, who has long studied zero-click searches at SparkToro, said to me that wound up getting cut from my story was that brands needed to think more and more about how to build brand awareness. You can do that, for example, by being oft-cited in these models, by being seen as a reliable source. Hopefully, when people ask a question and see us as the expert the model is leaning on, that helps us build our brand and reputation. And maybe they become a readers. That’s a lot more leaps than a link out, obviously. But as he also said to me, if your business model is built on search referrals—and for a lot of publishers that is definitely the case—you’re in trouble.

Will: Is “Google” going to survive as a verb? If not, what are we going to call this new activity?

Mat: I kinda feel like it is already dying. This is anecdotal, but my kids and all their friends almost exclusively use the phrase “search up.” As in “search up George Washington” or “search up a pizza dough recipe.” Often it’s followed by a platform,  search up “Charli XCX on Spotify.” We live in California. What floored me was when I heard kids in New Hampshire and Georgia using the exact same phrase.

But also I feel like we’re just going into a more conversational mode here. Maybe we don’t call it anything.

James O’Donnell: I found myself highlighting this line from your piece: “Who wants to have to learn when you can just know?” Part of me thinks the process of finding information with AI search is pretty nice—it can allow you to just follow your own curiosity a bit more than traditional search. But I also wonder how the meaning of research may change. Doesn’t the process of “digging” do something for us and our minds that AI search will eliminate?

Mat: Oh, this occurred to me too! I asked about it in one of my conversations with Google in fact. Blake Montgomery has a fantastic essay on this very thing. He talks about how he can’t navigate without Google Maps, can’t meet guys without Grindr, and wonders what effect ChatGPT will have on him. If you have not previously, you should read it.

Niall Firth: How much do you use AI search yourself? Do you feel conflicted about it?

Mat: I use it quite a bit. I find myself crafting queries for Google that I think will generate an AI Overview in fact. And I use ChatGPT a lot as well. I like being able to ask a long, complicated question, and I find that it often does a better job of getting at the heart of what I’m looking for — especially when I’m looking for something very specific—because it can suss out the intent along with the key words and phrases.

For example, for the story above I asked “What did Mark Zuckerberg say about misinformation and harmful content in 2016 and 2017? Ignore any news articles from the previous few days and focus only on his remarks in 2016 and 2017.”  The top traditional Google result for that query was this story that I would have wanted specifically excluded. It also coughed up several others from the last few days in the top results. But ChatGPT was able to understand my intent and helped me find the older source material.

And yes, I feel conflicted. Both because I worry about its economic impact on publishers and I’m well aware that there’s a lot of junk in there. It’s also just sort of… an unpopular opinion. Sometimes it feels a bit like smoking, but I do it anyway.


The Recommendation

Most of the time, the recommendation is for something positive that I think people will enjoy. A song. A book. An app. Etc. This week though I’m going to suggest you take a look at something a little more unsettling. Nat Friedman, the former CEO of GitHub, set out to try and understand how much microplastic is in our food supply. He and a team tested hundreds of samples from foods drawn from the San Francisco Bay Area (but very many of which are nationally distributed). The results are pretty shocking. As a disclaimer on the site reads: “we have refrained from drawing high-confidence conclusions from these results, and we think that you should, too. Consider this a snapshot of our raw test results, suitable as a starting point and inspiration for further work, but not solid enough on its own to draw conclusions or make policy recommendations or even necessarily to alter your personal purchasing decisions.” With that said: check it out.

A New York legislator wants to pick up the pieces of the dead California AI bill

The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

Assemblymember Alex Bores hopes his bill, currently an unpublished draft—subject to change—that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law.

SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring intent, recklessness, or gross negligence.

The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

A different flavour of bill

The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

The RAISE Act avoids the fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

Focusing on the wrong issues?

Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

“But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 1026 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 1025 FLOPs. 

This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

Early days

The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

When asked about the idea of mandated safety plans for AI companies, assemblymember Edward Ra, a Republican who hasn’t yet seen a draft of the new bill yet, said: “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.” 

Ra and Bores co chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

“There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”

2025 is a critical year for climate tech

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies.

In case you haven’t browsed this year’s list or a previous version, it features tech that’s either breaking into prominence or changing society. We typically recognize a range of items running from early-stage research to consumer technologies that folks are getting their hands on now.

As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. It’s certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. We’re facing global political shifts and entering the second half of the decade. It’s time for these climate technologies to grow up and get out there.

Green steel

Steel is a crucial material for buildings and vehicles, and making it accounts for around 8% of global greenhouse-gas emissions. New manufacturing methods could be a huge part of cleaning up heavy industry, and they’re just on the cusp of breaking into the commercial market.

One company, called Stegra, is close to starting up the world’s first commercial green steel plant, which will make the metal using hydrogen from renewable sources. (You might know this company by its former name, H2 Green Steel, as we included it on our 2023 list of Climate Tech Companies to Watch.)

When I first started following Stegra a few years ago, its plans for a massive green steel plant felt incredibly far away. Now the company says it’s on track to produce steel at the factory by next year.

The biggest challenge in this space is money. Building new steel plants is expensive—Stegra has raised almost $7 billion. And the company’s product will be more expensive than conventional material, so it’ll need to find customers willing to pay up (so far, it has).

There are other efforts to clean up steel that will all face similar challenges around money, including another play in Sweden called Hybrit and startups like Boston Metal and Electra, which use different processes. Read more about green steel, and the potential obstacles it faces as we enter a new phase of commercialization, in this short blurb and in this longer feature about Stegra.

Cow burp remedies

Humans love burgers and steaks and milk and cheese, so we raise a whole bunch of cows. The problem is, these animals are among a group with a funky digestion process that produces a whole lot of methane (a powerful greenhouse gas). A growing number of companies are trying to develop remedies that help cut down on their methane emissions.

This is one of my favorite items on the list this year (and definitely my favorite illustration—at the very least, check out this blurb to enjoy the art).

There’s already a commercially available option right now: a feed additive called Bovaer from DSM-Firmenich that the company says can cut methane emissions by 30% in dairy cattle, and more in beef cattle. Startups are right behind with their own products, some of which could prove even better.

A key challenge all these companies face moving forward is acceptance: from regulatory agencies, farmers, and consumers. Some companies still need to go through lengthy and often expensive tests to show that their products are safe and effective. They’ll also need to persuade farmers to get on board. Some might also face misinformation that’s causing some consumers to protest these new additives.

Cleaner jet fuel

While planes crisscrossing the world are largely powered by fossil fuels, some alternatives are starting to make their appearance in aircraft.

New fuels, today mostly made from waste products like used cooking oil, can cut down emissions from air travel. In 2024, they made up about 0.5% of the fuel supply. But new policies could help these fuels break into new prominence, and new options are helping to widen their supply.

The key challenge here is scale. Global demand for jet fuel was about 100 billion gallons last year, so we’ll need a whole lot of volume from new producers to make a dent in aviation’s emissions.

To illustrate the scope, take LanzaJet’s new plant, opened in 2024. It’s the first commercial-scale facility that can make jet fuel with ethanol, and it has a capacity of about 9 million gallons annually. So we would need about 10,000 of those plants to meet global demand—a somewhat intimidating prospect. Read more in my write-up here.

From cow burps to jet fuel to green steel, there’s a huge range of tech that’s entering a new stage of deployment and will need to face new challenges in the next few years. We’ll be watching it all—thanks for coming along.


Now read the rest of The Spark

Related reading

Check out our full list of 2025’s Breakthrough Technologies here. There’s also a poll where you can vote for what you think the 11th item should be. I’m not trying to influence anyone’s vote, but I think methane-detecting satellites are pretty interesting—just saying … 

This package is part of our January/February print issue, which also includes stories on: 

A Polestar electric car prepares to park at an EV charging station on July 28, 2023 in Corte Madera, California.

JUSTIN SULLIVAN/GETTY

Another thing 

EVs are (mostly) set for solid growth in 2025, as my colleague James Temple covers in his newest story. Check it out for more about what’s next for electric vehicles, including what we might expect from a new administration in the US and how China is blowing everyone else out of the water. 

Keeping up with climate  

Winter used to be the one time of year that California didn’t have to worry about wildfires. A rapidly spreading fire in the southern part of the state is showing that’s not the case anymore. (Bloomberg)

Tesla’s annual sales decline for the first time in over a decade. Deliveries were lower than expected for the final quarter of the year. (Associated Press)

Meanwhile, in China, EVs are set to overtake traditional cars in sales years ahead of schedule. Forecasts suggest that EVs could account for 50% of car sales this year. (Financial Times)

KoBold metals raised $537 million in funding to use AI to mine copper. The funding pushes the startup’s valuation to $2.96 billion. (TechCrunch)
→ Read this profile of the company from 2021 for more. (MIT Technology Review)

We finally have the final rules for a tax credit designed to boost hydrogen in the US. The details matter here. (Heatmap)

China just approved the world’s most expensive infrastructure project. The hydroelectric dam could produce enough power for 300 million people, triple the capacity of the current biggest dam. (Economist)

In 1979, President Jimmy Carter installed 32 solar panels on the White House’s roof. Although they came down just a few years later, the panels lived multiple lives afterward. I really enjoyed reading about this small piece of Carter’s legacy in the wake of his passing. (New York Times)

An open pit mine in California is the only one in the US mining and extracting rare earth metals including neodymium and praseodymium. This is a fascinating look at the site. (IEEE Spectrum
→ I wrote about efforts to recycle rare earth metals, and what it means for the long-term future of metal supply, in a feature story last year. (MIT Technology Review)

How the US is preparing for a potential bird flu pandemic

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week marks a strange anniversary—it’s five years since most of us first heard about a virus causing a mysterious “pneumonia.” A virus that we later learned could cause a disease called covid-19. A virus that swept the globe and has since been reported to have been responsible for over 7 million deaths—and counting.

I first covered the virus in an article published on January 7, 2020, which had the headline “Doctors scramble to identify mysterious illness emerging in China.” For that article, and many others that followed it, I spoke to people who were experts on viruses, infectious disease, and epidemiology. Frequently, their answers to my questions about the virus, how it might spread, and the risks of a pandemic were the same: “We don’t know.”

We are facing the same uncertainty now with H5N1, the virus commonly known as bird flu. This virus has been decimating bird populations for years, and now a variant is rapidly spreading among dairy cattle in the US. We know it can cause severe disease in animals, and we know it can pass from animals to people who are in close contact with them. As of this Monday this week, we also know that it can cause severe disease in people—a 65-year-old man in Louisiana became the first person in the US to die from an H5N1 infection.

Scientists are increasingly concerned about a potential bird flu pandemic. The question is, given all the enduring uncertainty around the virus, what should we be doing now to prepare for the possibility? Can stockpiled vaccines save us? And, importantly, have we learned any lessons from a covid pandemic that still hasn’t entirely fizzled out?

Part of the challenge here is that it is impossible to predict how H5N1 will evolve.

A variant of the virus caused disease in people in 1997, when there was a small but deadly outbreak in Hong Kong. Eighteen people had confirmed diagnoses, and six of them died. Since then, there have been sporadic cases around the world—but no large outbreaks.

As far as H5N1 is concerned, we’ve been relatively lucky, says Ali Khan, dean of the college of public health at the University of Nebraska. “Influenza presents the greatest infectious-disease pandemic threat to humans, period,” says Khan. The 1918 flu pandemic was caused by a type of influenza virus called H1N1 that appears to have jumped from birds to people. It is thought to have infected a third of the world’s population, and to have been responsible for around 50 million deaths.

Another H1N1 virus was responsible for the 2009 “swine flu” pandemic. That virus hit younger people hardest, as they were less likely to have been exposed to similar variants and thus had much less immunity. It was responsible for somewhere between 151,700 and 575,400 deaths that year.

To cause a pandemic, the H5N1 variants currently circulating in birds and dairy cattle in the US would need to undergo genetic changes that allow them to spread more easily from animals to people, spread more easily between people, and become more deadly in people. Unfortunately, we know from experience that viruses need only a few such changes to become more easily transmissible.

And with each and every infection, the risk that a virus will acquire these dangerous genetic changes increases. Once a virus infects a host, it can evolve and swap chunks of genetic code with any other viruses that might also be infecting that host, whether it’s a bird, a pig, a cow, or a person. “It’s a big gambling game,” says Marion Koopmans, a virologist at the Erasmus University Medical Center in Rotterdam, the Netherlands. “And the gambling is going on at too large a scale for comfort.”

There are ways to improve our odds. For the best chance at preventing another pandemic, we need to get a handle on, and limit, the spread of the virus. Here, the US could have done a better job at limiting the spread in dairy cows, says Khan. “It should have been found a lot earlier,” he says. “There should have been more aggressive measures to prevent transmission, to recognize what disease looks like within our communities, and to protect workers.”

States could also have done better at testing farm workers for infection, says Koopmans. “I’m surprised that I haven’t heard of an effort to eradicate it from cattle,” she adds. “A country like the US should be able to do that.”

The good news is that there are already systems in place for tracking the general spread of flu in people. The World Health Organization’s Global Influenza Surveillance and Response System collects and analyzes samples of viruses collected from countries around the world. It allows the organization to make recommendations about seasonal flu vaccines and also helps scientists track the spread of various flu variants. That’s something we didn’t have for the covid-19 virus when it first took off.

We are also better placed to make vaccines. Some countries, including the US, are already stockpiling vaccines that should be at least somewhat effective against H5N1 (although it is difficult to predict exactly how effective they will be against some future variant). The US Administration for Strategic Preparedness and Response plans to have “up to 10 million doses of prefilled syringes and multidose vials” prepared by the end of March, according to an email from a representative.

The US Department of Health and Human Services has also said it will provide the pharmaceutical company Moderna with $176 million to create mRNA vaccines for pandemic influenza—using the same quick-turnaround vaccine production technology used in the company’s covid-19 vaccines.

Some question whether these vaccines should have already been offered to dairy farm workers in affected parts of the US. Many of these individuals have been exposed to the virus, a good chunk of them appear to have been infected with it, and some of them have become ill. If the decision had been up to Khan, he says, they would have been offered the H5N1 vaccine by now. And we should ensure they are offered seasonal flu vaccines in order to limit the risk that the two flu viruses will mingle inside one person, he adds.

Others worry that 10 million vaccine doses aren’t enough for a country with a population of around 341 million. But health agencies “walk a razor-thin line between having too much vaccine for something and not having enough,” says Khan. If an outbreak never transpires, 340 million doses of vaccine will feel like an enormous waste of resources.

We can’t predict how well these viruses will work, either. Flu viruses mutate all the time, and even seasonal flu vaccines are notoriously unpredictable in their efficacy. “I think we’ve become a little bit spoiled with the covid vaccines,” says Koopmans. “We were really, really lucky [to develop] vaccines with high efficacy.”

One vaccine lesson we should have learned from the covid-19 pandemic is the importance of equitable access to vaccines around the world. Unfortunately, it’s unlikely that we have. “It is doubtful that low-income countries will have early access to [a pandemic influenza] vaccine unless the world takes action,” Nicole Lurie of the Coalition for Epidemic Preparedness Innovations (CEPI) said in a recent interview for Gavi, a public-private alliance for vaccine equity.

And another is the impact of vaccine hesitancy. Making vaccines might not be a problem—but convincing people to take them might be, says Khan. “We have an incoming administration that has lots of vaccine hesitancy,” he points out. “So while we may end up having … vaccines available, it’s not very clear to me if we have the political and social will to actually implement good public health measures.”

This is another outcome that is impossible to predict, and I won’t attempt to do so. But I am hoping that the relevant administrations will step up our defenses. And that this will be enough to prevent another devastating pandemic.


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

Bird flu has been circulating in US dairy cows for months. Virologists are worried it could stick around on US farms forever.

As the virus continues to spread, the risk of a pandemic continues to rise. We still don’t really know how the virus is spreading, but we do know that it is turning up in raw milk. (Please don’t drink raw milk.)

mRNA vaccines helped us through the covid-19 pandemic. Now scientists are working on mRNA flu vaccines—including “universal” vaccines that could protect against multiple flu viruses.

The next generation of mRNA vaccines is on the way. These vaccines are “self-amplifying” and essentially tell the body how to make more mRNA. 

Maybe there’s an alternative to dairy farms of the type that are seeing H5N1 in their cattle. Scientists are engineering yeasts and plants with bovine genes so they can produce proteins normally found in milk, which can be used to make spreadable cheeses and ice cream. The cofounder of one company says a factory of bubbling yeast vats could “replace 50,000 to 100,000 cows.”

From around the web

My colleagues and I put together an annual list of what we think are the breakthrough technologies of that year. This year’s list includes long-acting HIV prevention medicines and stem-cell treatments that actually work. Check out the full list here.

Calico, the Google biotech company focused on “tackling aging,” has released results from the trial of a drug to treat amyotrophic lateral sclerosis (ALS). The drug failed. (STAT

Around the world, birth rates are falling. The more concerned nations become about this fact, the greater the risk to gender rights, writes Angela Saini. (Wired)

Brooke Eby, a 36-year-old with ALS, is among a niche group of content creators documenting their journeys with terminal illness on social media platforms like TikTok. “I’m glad that I’m sharing my journey. I wish someone had come before me and shared, start to finish …,” she said. “I’m just going to post all this, because maybe it’ll help someone who’s like a year behind me in their progression.” (New York Times)

Do we each have 30 trillion genomes? A growing understanding of genetic mutations that occur in adults is changing the way doctors diagnose and treat disease. (The Atlantic)

Anthropic’s chief scientist on 5 ways agents will be even better in 2025

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry. 

“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week.

In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary.

In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you.

Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana.

Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025.

(Kaplan’s answers have been lightly edited for length and clarity.)

1/ Agents will get better at using tools

“I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use. 

“So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful. 

“We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.”

2/ Agents will understand context  

“Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have.

Jared Kaplan

ANTHROPIC

“I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected.

“Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.”

3/ Agents will make coding assistants better

“We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities.

“I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI.

“My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.”

4/ Agents will need to be made safe

“We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection. 

[Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.]

“Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do.

“And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing.

“So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

What’s next for AI in 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

For the last couple of years we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.

How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list). 

We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground

So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient, language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative virtual playgrounds 

If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.

We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that could take a still image and turn it into a side-scrolling 2D platform game that players could interact with. In December, the firm revealed Genie 2, a model that can spin a starter image into an entire virtual world.

Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup cofounded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs.

One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games

But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.   

Will Douglas Heaven

2. Large language models that “reason”

The buzz was justified. When OpenAI revealed o1 in September, it introduced a new paradigm in how large language models work. Two months later, the firm pushed that paradigm forward in almost every way with o3—a model that just might reshape this technology for good.

Most models, including OpenAI’s flagship GPT-4, spit out the first response they come up with. Sometimes it’s correct; sometimes it’s not. But the firm’s new models are trained to work through their answers step by step, breaking down tricky problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as “reasoning” (yes—we know exactly how loaded that term is), can make this technology more accurate, especially for math, physics, and logic problems.

It’s also crucial for agents.

In December, Google DeepMind revealed an experimental new web-browsing agent called Mariner. In the middle of a preview demo that the company gave to MIT Technology Review, Mariner seemed to get stuck. Megha Goel, a product manager at the company, had asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she’d given it. Mariner found a recipe on the web and started adding the ingredients to Goel’s online grocery basket.

Then it stalled; it couldn’t figure out what type of flour to pick. Goel watched as Mariner explained its steps in a chat window: “It says, ‘I will use the browser’s Back button to return to the recipe.’”

It was a remarkable moment. Instead of hitting a wall, the agent had broken the task down into separate actions and picked one that might resolve the problem. Figuring out you need to click the Back button may sound basic, but for a mindless bot it’s akin to rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and carried on filling Goel’s basket.

Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving, called Gemini 2.0 Flash Thinking.

But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models that use similar techniques, making them better at a whole range of tasks, from cooking to coding. Expect a lot more buzz about reasoning (we know, we know) this year.

—Will Douglas Heaven

3. It’s boom time for AI in science 

One of the most exciting uses for AI is speeding up discovery in the natural sciences. Perhaps the greatest vindication of AI’s potential on this front came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize for chemistry to Demis Hassabis and John M. Jumper from Google DeepMind for building the AlphaFold tool, which can solve protein folding, and to David Baker for building tools to help design new proteins.

Expect this trend to continue next year, and to see more data sets and models that are aimed specifically at scientific discovery. Proteins were the perfect target for AI, because the field had excellent existing data sets that AI models could be trained on. 

The hunt is on to find the next big thing. One potential area is materials science. Meta has released massive data sets and models that could help scientists use AI to discover new materials much faster, and in December, Hugging Face, together with the startup Entalpic, launched LeMaterial, an open-source project that aims to simplify and accelerate materials research. Their first project is a data set that unifies, cleans, and standardizes the most prominent material data sets. 

AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI let scientists test its latest o1 model and see how it might support them in research. The results were encouraging. 

Having an AI tool that can operate in a similar way to a scientist is one of the fantasies of the tech sector. In a manifesto published in October last year, Anthropic founder Dario Amodei highlighted science, especially biology, as one of the key areas where powerful AI could help. Amodei speculates that in the future, AI could be not only a method of data analysis but a “virtual biologist who performs all the tasks biologists do.” We’re still a long way away from this scenario. But next year, we might see important steps toward it. 

—Melissa Heikkilä

4. AI companies get cozier with national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks. 

The US military has launched a number of initiatives that show it’s eager to adopt AI, from the Replicator program—which, inspired by the war in Ukraine, promises to spend $1 billion on small drones—to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics. European militaries are under pressure to up their tech investment, triggered by concerns that Donald Trump’s administration will cut spending to Ukraine. Rising tensions between Taiwan and China weigh heavily on the minds of military planners, too. 

In 2025, these trends will continue to be a boon for defense-tech companies like Palantir, Anduril, and others, which are now capitalizing on classified military data to train AI models. 

The defense industry’s deep pockets will tempt mainstream AI companies into the fold too. OpenAI in December announced it is partnering with Anduril on a program to take down drones, completing a year-long pivot away from its policy of not working with the military. It joins the ranks of Microsoft, Amazon, and Google, which have worked with the Pentagon for years. 

Other AI competitors, which are spending billions to train and develop new models, will face more pressure in 2025 to think seriously about revenue. It’s possible that they’ll find enough non-defense customers who will pay handsomely for AI agents that can handle complex tasks, or creative industries willing to spend on image and video generators. 

But they’ll also be increasingly tempted to throw their hats in the ring for lucrative Pentagon contracts. Expect to see companies wrestle with whether working on defense projects will be seen as a contradiction to their values. OpenAI’s rationale for changing its stance was that “democracies should continue to take the lead in AI development,” the company wrote, reasoning that lending its models to the military would advance that goal. In 2025, we’ll be watching others follow its lead. 

James O’Donnell

5. Nvidia sees legitimate competition

For much of the current AI boom, if you were a tech startup looking to try your hand at making an AI model, Jensen Huang was your man. As CEO of Nvidia, the world’s most valuable corporation, Huang helped the company become the undisputed leader of chips used both to train AI models and to ping a model when anyone uses it, called “inferencing.”

A number of forces could change that in 2025. For one, behemoth competitors like Amazon, Broadcom, AMD, and others have been investing heavily in new chips, and there are early indications that these could compete closely with Nvidia’s—particularly for inference, where Nvidia’s lead is less solid. 

A growing number of startups are also attacking Nvidia from a different angle. Rather than trying to marginally improve on Nvidia’s designs, startups like Groq are making riskier bets on entirely new chip architectures that, with enough time, promise to provide more efficient or effective training. In 2025 these experiments will still be in their early stages, but it’s possible that a standout competitor will change the assumption that top AI models rely exclusively on Nvidia chips.

Underpinning this competition, the geopolitical chip war will continue. That war thus far has relied on two strategies. On one hand, the West seeks to limit exports to China of top chips and the technologies to make them. On the other, efforts like the US CHIPS Act aim to boost domestic production of semiconductors.

Donald Trump may escalate those export controls and has promised massive tariffs on any goods imported from China. In 2025, such tariffs would put Taiwan—on which the US relies heavily because of the chip manufacturer TSMC—at the center of the trade wars. That’s because Taiwan has said it will help Chinese firms relocate to the island to help them avoid the proposed tariffs. That could draw further criticism from Trump, who has expressed frustration with US spending to defend Taiwan from China. 

It’s unclear how these forces will play out, but it will only further incentivize chipmakers to reduce reliance on Taiwan, which is the entire purpose of the CHIPS Act. As spending from the bill begins to circulate, next year could bring the first evidence of whether it’s materially boosting domestic chip production. 

James O’Donnell

What’s next for our privacy?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Every day, we are tracked hundreds or even thousands of times across the digital world. Cookies and web trackers capture every website link that we click, while code installed in mobile apps tracks every physical location that our devices—and, by extension, we—have visited. All of this is collected, packaged together with other details (compiled from public records, supermarket member programs, utility companies, and more), and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent. 

A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. While the latest iteration of such a bill, the American Privacy Rights Act of 2024, gained more momentum than previously proposed laws, it became so watered down that it lost support from both Republicans and Democrats before it even came to a vote. 

There have been some privacy wins in the form of limits on what data brokers—third-party companies that buy and sell consumers’ personal information for targeted advertisements, messaging, and other purposes—can do with geolocation data. 

These are still small steps, though—and they are happening as increasingly pervasive and powerful technologies collect more data than ever. And at the same time, Washington is preparing for a new presidential administration that has attacked the press and other critics, promised to target immigrants for mass deportation, threatened to seek retribution against perceived enemies, and supported restrictive state abortion laws. This is not even to mention the increased collection of our biometric data, especially for facial recognition, and the normalization of its use in all kinds of ways. In this light, it’s no stretch to say our personal data has arguably never been more vulnerable, and the imperative for privacy has never felt more urgent. 

So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about (some of) what’s on their mind regarding how our digital data might be traded or protected moving forward. 

Reining in a problematic industry

In early December, the Federal Trade Commission announced separate settlement agreements with the data brokers Mobilewalla and Gravy Analytics (and its subsidiary Venntel). Finding that the companies had tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent, the FTC banned the companies from selling such data except in specific circumstances. This follows something of a busy year in regulation of data brokers, including multiple FTC enforcement actions against other companies for similar use and sale of geolocation data, as well as a proposed rule from the Justice Department that would prohibit the sale of bulk data to foreign entities. 

And on the same day that the FTC announced these settlements in December, the Consumer Financial Protection Bureau proposed a new rule that would designate data brokers as consumer reporting agencies, which would trigger stringent reporting requirements and consumer privacy protections. The rule would prohibit the collection and sharing of people’s sensitive information, such as their salaries and Social Security numbers, without “legitimate purposes.” While the rule will still need to undergo a 90-day public comment period, and it’s unclear whether it will move forward under the Trump administration, if it’s finalized it has the power to fundamentally limit how data brokers do business.

Right now, there just aren’t many limits on how these companies operate—nor, for that matter, clear information on how many data brokerages even exist. Industry watchers estimate there may be 4,000 to 5,000 data brokers around the world, many of which we’ve never heard of—and whose names constantly shift. In California alone, the state’s 2024 Data Broker Registry lists 527 such businesses that have voluntarily registered there, nearly 90 of which also self-reported that they collect geolocation data. 

All this data is widely available for purchase by anyone who will pay. Marketers buy data to create highly targeted advertisements, and banks and insurance companies do the same to verify identity, prevent fraud, and conduct risk assessments. Law enforcement buys geolocation data to track people’s whereabouts without getting traditional search warrants. Foreign entities can also currently buy sensitive information on members of the military and other government officials. And on people-finder websites, basically anyone can pay for anyone else’s contact details and personal history.  

Data brokers and their clients defend these transactions by saying that most of this data is anonymized—though it’s questionable whether that can truly be done in the case of geolocation data. Besides, anonymous data can be easily reidentified, especially when it’s combined with other personal information. 

Digital-rights advocates have spent years sounding the alarm on this secretive industry, especially the ways in which it can harm already marginalized communities, though various types of data collection have sparked consternation across the political spectrum. Representative Cathy McMorris Rodgers, the Republican chair of the House Energy and Commerce Committee, for example, was concerned about how the Centers for Disease Control and Prevention bought location data to evaluate the effectiveness of pandemic lockdowns. Then a study from last year showed how easy (and cheap) it was to buy sensitive data about members of the US military; Senator Elizabeth Warren, a Democrat, called out the national security risks of data brokers in a statement to MIT Technology Review, and Senator John Cornyn, a Republican, later said he was “shocked” when he read about the practice in our story. 

But it was the 2022 Supreme Court decision ending the constitutional guarantee of legal abortion that spurred much of the federal action last year. Shortly after the Dobbs ruling, President Biden issued an executive order to protect access to reproductive health care; it included instructions for the FTC to take steps preventing information about visits to doctor’s offices or abortion clinics from being sold to law enforcement agencies or state prosecutors.

The new enforcers

With Donald Trump taking office in January, and Republicans taking control of both houses of Congress, the fate of the CFPB’s proposed rule—and the CFPB itself—is uncertain. Republicans, the people behind Project 2025, and Elon Musk (who will lead the newly created advisory group known as the Department of Government Efficiency) have long been interested in seeing the bureau “deleted,” as Musk put it on X. That would take an act of Congress, making it unlikely, but there are other ways that the administration could severely curtail its powers. Trump is likely to fire the current director and install a Republican who could rescind existing CFPB rules and stop any proposed rules from moving forward. 

Meanwhile, the FTC’s enforcement actions are only as good as the enforcers. FTC decisions do not set legal precedent in quite the same way that court cases do, says Ben Winters, a former Department of Justice official and the director of AI and privacy at the Consumer Federation of America, a network of organizations and agencies focused on consumer protection. Instead, they “require consistent [and] additional enforcement to make the whole industry scared of not having an FTC enforcement action against them.” (It’s also worth noting that these FTC settlements are specifically focused on geolocation data, which is just one of the many types of sensitive data that we regularly give up in order to participate in the digital world.)

Looking ahead, Tiffany Li, a professor at the University of San Francisco School of Law who focuses on AI and privacy law, is worried about “a defanged FTC” that she says would be “less aggressive in taking action against companies.” 

Lina Khan, the current FTC chair, has been the leader of privacy protection action in the US, notes Li, and she’ll soon be leaving. Andrew Ferguson, Trump’s recently named pick to be the next FTC chair, has come out in strong opposition to data brokers: “This type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs,” he wrote in a statement on the Mobilewalla decision, indicating that he is likely to continue action against them. (Ferguson has been serving as a commissioner on the FTC since April 20214.) On the other hand, he has spoken out against using FTC actions as an alternative to privacy legislation passed by Congress. And, of course, this brings us right back around to that other major roadblock: Congress has so far failed to pass such laws—and it’s unclear if the next Congress will either. 

Movement in the states

Without federal legislative action, many US states are taking privacy matters into their own hands. 

In 2025, eight new state privacy laws will take effect, making a total of 25 around the country. A number of other states—like Vermont and Massachusetts—are considering passing their own privacy bills next year, and such laws could, in theory, force national legislation, says Woodrow Hartzog, a technology law scholar at Boston University School of Law. “Right now, the statutes are all similar enough that the compliance cost is perhaps expensive but manageable,” he explains. But if one state passed a law that was different enough from the others, a national law could be the only way to resolve the conflict. Additionally, four states—California, Texas, Vermont, and Oregon—already have specific laws regulating data brokers, including the requirement that they register with the state. 

Along with new laws, says Justin Brookman, the director of technology policy at Consumer Reports, comes the possibility that “we can put some more teeth on these laws.” 

Brookman points to Texas, where some of the most aggressive enforcement action at the state level has taken place under its Republican attorney general, Ken Paxton. Even before the state’s new consumer privacy bill went into effect in July, Paxton announced the creation of a special task force focused on enforcing the state’s privacy laws. He has since targeted a number of data brokers—including National Public Data, which exposed millions of sensitive customer records in a data breach in August, as well as companies that sell to them, like Sirius XM. 

At the same time, though, Paxton has moved to enforce the state’s strict abortion laws in ways that threaten individual privacy. In December, he sued a New York doctor for sending abortion pills to a Texas woman through the mail. While the doctor is theoretically protected by New York’s shield laws, which provide a safeguard from out-of-state prosecution, Paxton’s aggressive action makes it even more crucial that states enshrine data privacy protections into their laws, says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, an advocacy group. “There is an urgent need for states,” he says, “to lock down our resident’s’ data, barring companies from collecting and sharing information in ways that can be weaponized against them by out-of-state prosecutors.” 

Data collection in the name of “security”

While privacy has become a bipartisan issue, Republicans, in particular, are interested in “addressing data brokers in the context of national security,” such as protecting the data of military members or other government officials, says Winters. But in his view, it’s the effects on reproductive rights and immigrants that are potentially the “most dangerous” threats to privacy. 

Indeed, data brokers (including Venntel, the Gravy Analytics subsidiary named in the recent FTC settlement) have sold cell-phone data to Immigration and Customs Enforcement, as well as to Customs and Border Protection. That data has then been used to track individuals for deportation proceedings—allowing the agencies to bypass local and state sanctuary laws that ban local law enforcement from sharing information for immigration enforcement. 

“The more data that corporations collect, the more data that’s available to governments for surveillance,” warns Ashley Gorski, a senior attorney who works on national security and privacy at the American Civil Liberties Union.

The ACLU is among a number of organizations that have been pushing for the passage of another federal law related to privacy: the Fourth Amendment Is Not For Sale Act. It would close the so-called “data-broker loophole” that allows law enforcement and intelligence agencies to buy personal information from data brokers without a search warrant. The bill would “dramatically limit the ability of the government to buy Americans’ private data,” Gorski says. It was first introduced in 2021 and passed the House in April 2024, with the support of 123 Republicans and 93 Democrats, before stalling in the Senate. 

While Gorski is hopeful that the bill will move forward in the next Congress, others are less sanguine about these prospects—and alarmed about other ways that the incoming administration might “co-opt private systems for surveillance purposes,” as Hartzog puts it. So much of our personal information that is “collected for one purpose,” he says, could “easily be used by the government … to track us.” 

This is especially concerning, adds Winters, given that the next administration has been “very explicit” about wanting to use every tool at its disposal to carry out policies like mass deportations and to exact revenge on perceived enemies. And one possible change, he says, is as simple as loosening the government’s procurement processes to make them more open to emerging technologies, which may have fewer privacy protections. “Right now, it’s annoying to procure anything as a federal agency,” he says, but he expects a more “fast and loose use of commercial tools.” 

“That’s something we’ve [already] seen a lot,” he adds, pointing to “federal, state, and local agencies using the Clearviews of the world”—a reference to the controversial facial recognition company. 

The AI wild card

Underlying all of these debates on potential legislation is the fact that technology companies—especially AI companies—continue to require reams and reams of data, including personal data, to train their machine-learning models. And they’re quickly running out of it. 

This is something of a wild card in any predictions about personal data. Ideally, says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, the shortage would lead to ways for consumers to directly benefit, perhaps financially, from the value of their own data. But it’s more likely that “there will be more industry resistance against some of the proposed comprehensive federal privacy legislation bills,” she says. “Companies benefit from the status quo.” 

The hunt for more and more data may also push companies to change their own privacy policies, says Whitney Merrill, a former FTC official who works on data privacy at Asana. Speaking in a personal capacity, she says that companies “have felt the squeeze in the tech recession that we’re in, with the high interest rates,” and that under those circumstances, “we’ve seen people turn around, change their policies, and try to monetize their data in an AI world”—even if it’s at the expense of user privacy. She points to the $60-million-per-year deal that Reddit struck last year to license its content to Google to help train the company’s AI. 

Earlier this year, the FTC warned companies that it would be “unfair and deceptive” to “surreptitiously” change their privacy policies to allow for the use of user data to train AI. But again, whether or not officials follow up on this depends on those in charge. 

So what will privacy look like in 2025? 

While the recent FTC settlements and the CFPB’s proposed rule represent important steps forward in privacy protection—at least when it comes to geolocation data—Americans’ personal information still remains widely available and vulnerable. 

Rebecca Williams, a senior strategist at the ACLU for privacy and data governance, argues that all of us, as individuals and communities, should take it upon ourselves to do more to protect ourselves and “resist … by opting out” of as much data collection as possible. That means checking privacy settings on accounts and apps, and using encrypted messaging services. 

Cahn, meanwhile, says he’ll “be striving to protect [his] local community, working to enact safeguards to ensure that we live up to our principles and stated commitments.” One example of such safeguards is a proposed New York City ordinance that would ban the sharing of any location data originating from within the city limits. Hartzog says that kind of local activism has already been effective in pushing for city bans on facial recognition. 

“Privacy rights are at risk, but they’re not gone, and it’s not helpful to take an overly pessimistic look right now,” says Li, the USF law professor. “We definitely still have privacy rights, and the more that we continue to fight for these rights, the more we’re going to be able to protect our rights.”