What’s next for AI and math

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

The way DARPA tells it, math is stuck in the past. In April, the US Defense Advanced Research Projects Agency kicked off a new initiative called expMath—short for Exponentiating Mathematics—that it hopes will speed up the rate of progress in a field of research that underpins a wide range of crucial real-world applications, from computer science to medicine to national security.

“Math is the source of huge impact, but it’s done more or less as it’s been done for centuries—by people standing at chalkboards,” DARPA program manager Patrick Shafto said in a video introducing the initiative.

The modern world is built on mathematics. Math lets us model complex systems such as the way air flows around an aircraft, the way financial markets fluctuate, and the way blood flows through the heart. And breakthroughs in advanced mathematics can unlock new technologies such as cryptography, which is essential for private messaging and online banking, and data compression, which lets us shoot images and video across the internet.

But advances in math can be years in the making. DARPA wants to speed things up. The goal for expMath is to encourage mathematicians and artificial-intelligence researchers to develop what DARPA calls an AI coauthor, a tool that might break large, complex math problems into smaller, simpler ones that are easier to grasp and—so the thinking goes—quicker to solve.

Mathematicians have used computers for decades, to speed up calculations or check whether certain mathematical statements are true. The new vision is that AI might help them crack problems that were previously uncrackable.  

But there’s a huge difference between AI that can solve the kinds of problems set in high school—math that the latest generation of models has already mastered—and AI that could (in theory) solve the kinds of problems that professional mathematicians spend careers chipping away at.

On one side are tools that might be able to automate certain tasks that math grads are employed to do; on the other are tools that might be able to push human knowledge beyond its existing limits.

Here are three ways to think about that gulf.

1/ AI needs more than just clever tricks

Large language models are not known to be good at math. They make things up and can be persuaded that 2 + 2 = 5. But newer versions of this tech, especially so-called large reasoning models (LRMs) like OpenAI’s o3 and Anthropic’s Claude 4 Thinking, are far more capable—and that’s got mathematicians excited.

This year, a number of LRMs, which try to solve a problem step by step rather than spit out the first result that comes to them, have achieved high scores on the American Invitational Mathematics Examination (AIME), a test given to the top 5% of US high school math students.

At the same time, a handful of new hybrid models that combine LLMs with some kind of fact-checking system have also made breakthroughs. Emily de Oliveira Santos, a mathematician at the University of São Paulo, Brazil, points to Google DeepMind’s AlphaProof, a system that combines an LLM with DeepMind’s game-playing model AlphaZero, as one key milestone. Last year AlphaProof became the first computer program to match the performance of a silver medalist at the International Math Olympiad, one of the most prestigious mathematics competitions in the world.

And in May, a Google DeepMind model called AlphaEvolve took on more than 50 unsolved mathematics puzzles and several real-world computer science problems, matching the best known results on most of them and discovering better ones for a handful.

The uptick in progress is clear. “GPT-4 couldn’t do math much beyond undergraduate level,” says de Oliveira Santos. “I remember testing it at the time of its release with a problem in topology, and it just couldn’t write more than a few lines without getting completely lost.” But when she gave the same problem to OpenAI’s o1, an LRM released in late 2024, it nailed it.

Does this mean such models are all set to become the kind of coauthor DARPA hopes for? Not necessarily, she says: “Math Olympiad problems often involve being able to carry out clever tricks, whereas research problems are much more explorative and often have many, many more moving pieces.” Success at one type of problem-solving may not carry over to another.

Others agree. Martin Bridson, a mathematician at the University of Oxford, thinks the Math Olympiad result is a great achievement. “On the other hand, I don’t find it mind-blowing,” he says. “It’s not a change of paradigm in the sense that ‘Wow, I thought machines would never be able to do that.’ I expected machines to be able to do that.”

That’s because even though the problems in the Math Olympiad—and similar high school or undergraduate tests like AIME—are hard, there’s a pattern to a lot of them. “We have training camps to train high school kids to do them,” says Bridson. “And if you can train a large number of people to do those problems, why shouldn’t you be able to train a machine to do them?”

Sergei Gukov, a mathematician at the California Institute of Technology who coaches Math Olympiad teams, points out that the style of question does not change too much between competitions. New problems are set each year, but they can be solved with the same old tricks.

“Sure, the specific problems didn’t appear before,” says Gukov. “But they’re very close—just a step away from zillions of things you have already seen. You immediately realize, ‘Oh my gosh, there are so many similarities—I’m going to apply the same tactic.’” As hard as competition-level math is, kids and machines alike can be taught how to beat it.

That’s not true for most unsolved math problems. Bridson is president of the Clay Mathematics Institute, a nonprofit US-based research organization best known for setting up the Millennium Prize Problems in 2000—seven of the most important unsolved problems in mathematics, with a $1 million prize to be awarded to the first person to solve each of them. (One problem, the Poincaré conjecture, was solved in the early 2000s, with the prize awarded in 2010; the others, which include P versus NP and the Riemann hypothesis, remain open.) “We’re very far away from AI being able to say anything serious about any of those problems,” says Bridson.

And yet it’s hard to know exactly how far away, because many of the existing benchmarks used to evaluate progress are maxed out. The best new models already outperform most humans on tests like AIME.

To get a better idea of what existing systems can and cannot do, a startup called Epoch AI has created a new test called FrontierMath, released in December. Instead of co-opting math tests developed for humans, Epoch AI worked with more than 60 mathematicians around the world to come up with a set of math problems from scratch.

FrontierMath is designed to probe the limits of what today’s AI can do. None of the problems have been seen before and the majority are being kept secret to avoid contaminating training data. Each problem demands hours of work from expert mathematicians to solve—if they can solve it at all: some of the problems require specialist knowledge to tackle.

FrontierMath is set to become an industry standard. It’s not yet as popular as AIME, says de Oliveira Santos, who helped develop some of the problems: “But I expect this to not hold for much longer, since existing benchmarks are very close to being saturated.”

On AIME, the best large language models (Anthropic’s Claude 4, OpenAI’s o3 and o4-mini, Google DeepMind’s Gemini 2.5 Pro, xAI’s Grok 3) now score around 90%. On FrontierMath, o4-mini scores 19% and Gemini 2.5 Pro scores 13%. That’s still remarkable, but there’s clear room for improvement.

FrontierMath should give the best sense yet of just how fast AI is progressing at math. But there are some problems that are still too hard for computers to take on.

2/ AI needs to manage really vast sequences of steps

Squint hard enough and in some ways math problems start to look the same: to solve them you need to take a sequence of steps from start to finish. The problem is finding those steps. 

“Pretty much every math problem can be formulated as path-finding,” says Gukov. What makes some problems far harder than others is the number of steps on that path. “The difference between the Riemann hypothesis and high school math is that with high school math the paths that we’re looking for are short—10 steps, 20 steps, maybe 40 in the longest case.” The steps are also repeated between problems.

“But to solve the Riemann hypothesis, we don’t have the steps, and what we’re looking for is a path that is extremely long”—maybe a million lines of computer proof, says Gukov.
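To make that framing concrete, here is a toy sketch (ours, not anything from Gukov’s research): treat statements as states, treat a handful of made-up rewrite rules as the available moves, and search for a path from a starting statement to a goal. Real proof search works over formal mathematical statements, but the shape of the problem is the same.

```python
# Toy illustration: treating proof search as path-finding.
# States are strings; "moves" are rewrite rules; a "proof" is a path
# of moves from the start state to the goal state. (The rules here are
# invented purely for demonstration.)
from collections import deque

MOVES = {
    "double": lambda s: s + s,                   # e.g. "ab" -> "abab"
    "drop_a": lambda s: s.replace("a", "", 1),   # remove one "a"
    "swap":   lambda s: s[::-1],                 # reverse the string
}

def find_proof(start, goal, max_len=12):
    """Breadth-first search for a short sequence of moves (a 'proof path')."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        if len(state) > max_len:
            continue  # don't expand states that have grown too large
        for name, move in MOVES.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(find_proof("ab", "bb"))  # -> ['drop_a', 'double']
```

For a toy goal like this, brute-force search finds the path almost instantly; the point of Gukov’s comparison is that for something like the Riemann hypothesis, the path would be so long that no exhaustive search could ever find it.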

Finding very long sequences of steps can be thought of as a kind of complex game. It’s what DeepMind’s AlphaZero learned to do when it mastered Go and chess. A game of Go might only involve a few hundred moves. But to win, an AI must find a winning sequence of moves among a vast number of possible sequences. Imagine a number with 100 zeros at the end, says Gukov.

But that’s still tiny compared with the number of possible sequences that could be involved in proving or disproving a very hard math problem: “A proof path with a thousand or a million moves involves a number with a thousand or a million zeros,” says Gukov. 

No AI system can sift through that many possibilities. To address this, Gukov and his colleagues developed a system that shortens the length of a path by combining multiple moves into single supermoves. It’s like having boots that let you take giant strides: instead of taking 2,000 steps to walk a mile, you can now walk it in 20.

The challenge was figuring out which moves to replace with supermoves. In a series of experiments, the researchers came up with a system in which one reinforcement-learning model suggests new moves and a second model checks to see if those moves help.
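Here is an equally stripped-down sketch of that two-model loop, using the same toy moves as above: a “proposer” samples candidate supermoves (short sequences of basic moves), a “checker” tests whether each one actually makes progress toward a goal, and useful supermoves are reinforced so they get proposed more often. It is a cartoon of the setup Gukov describes, not his actual system.

```python
# Cartoon of a proposer/checker loop for discovering useful "supermoves":
# short sequences of basic moves that reliably make progress toward a goal.
# Entirely hypothetical, for illustration only.
import random

BASIC_MOVES = {
    "double": lambda s: s + s,
    "drop_a": lambda s: s.replace("a", "", 1),
    "swap":   lambda s: s[::-1],
}

def apply_sequence(state, names):
    for n in names:
        state = BASIC_MOVES[n](state)
    return state

def progress(state, goal):
    """Checker: crude score of how close the state's length is to the goal's."""
    return -abs(len(state) - len(goal))

def train_supermoves(start, goal, rounds=500, seq_len=3):
    weights = {}  # the proposer's preferences over candidate supermoves
    for _ in range(rounds):
        # Proposer: sample a candidate supermove, biased toward past winners.
        candidates = list(weights) + [tuple(random.choices(list(BASIC_MOVES), k=seq_len))]
        seq = random.choices(candidates, weights=[weights.get(c, 1.0) for c in candidates])[0]
        # Checker: does this supermove improve a randomly perturbed state?
        state = apply_sequence(start, random.choices(list(BASIC_MOVES), k=2))
        if progress(apply_sequence(state, seq), goal) > progress(state, goal):
            weights[seq] = weights.get(seq, 1.0) + 1.0  # reinforce useful supermoves
    return sorted(weights, key=weights.get, reverse=True)[:3]

print(train_supermoves("ab", "bbbbbbbb"))  # top supermoves tend to involve 'double'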

They used this approach to make a breakthrough in a math problem called the Andrews-Curtis conjecture, a puzzle that has been unsolved for 60 years. It’s a problem that every professional mathematician will know, says Gukov.

(An aside for math stans only: The AC conjecture states that a particular way of describing the trivial group, the group containing just a single element, can be converted into a different but equivalent description by a certain sequence of steps. Most mathematicians think the AC conjecture is false, but nobody knows how to prove that. Gukov himself admits it is an intellectual curiosity rather than a practical problem, but it’s an important one for mathematicians nonetheless.)

Gukov and his colleagues didn’t solve the AC conjecture, but they showed that a potential counterexample proposed 40 years ago (one that, if confirmed, would have meant the conjecture was false) does not in fact disprove it. “It’s been a major direction of attack for 40 years,” says Gukov. With the help of AI, they showed that this direction was in fact a dead end.

“Ruling out possible counterexamples is a worthwhile thing,” says Bridson. “It can close off blind alleys, something you might spend a year of your life exploring.” 

True, Gukov checked off just one piece of one esoteric puzzle. But he thinks the approach will work in any scenario where you need to find a long sequence of unknown moves, and he now plans to try it out on other problems.

“Maybe it will lead to something that will help AI in general,” he says. “Because it’s teaching reinforcement learning models to go beyond their training. To me it’s basically about thinking outside of the box—miles away, megaparsecs away.”  

3/ Can AI ever provide real insight?

Thinking outside the box is exactly what mathematicians need to solve hard problems. Math is often thought to involve robotic, step-by-step procedures. But advanced math is an experimental pursuit, involving trial and error and flashes of insight.

That’s where tools like AlphaEvolve come in. Google DeepMind’s latest model asks an LLM to generate code to solve a particular math problem. A second model then evaluates the proposed solutions, picks the best, and sends them back to the LLM to be improved. After hundreds of rounds of trial and error, AlphaEvolve came up with solutions to several math problems that were better than anything people had yet come up with. But it can also work as a collaborative tool: at any step, humans can share their own insight with the LLM, prompting it with specific instructions.
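The core loop can be sketched in a few lines. In the toy version below, llm_propose is a hypothetical stand-in for a call to a language model and the evaluator is just a scoring function; AlphaEvolve’s real loop generates and evaluates actual code, but the propose-evaluate-select structure is the same.

```python
# Simplified sketch of an evolutionary propose-evaluate-select loop,
# in the general spirit of systems like AlphaEvolve. `llm_propose` is a
# hypothetical stand-in for a language-model call that mutates a candidate.
import random

def llm_propose(candidate):
    """Pretend LLM: returns a slightly mutated copy of the candidate."""
    return [x + random.uniform(-0.1, 0.1) for x in candidate]

def evaluate(candidate):
    """Scoring function: here, closeness to a known target (higher is better)."""
    target = [1.0, 2.0, 3.0]
    return -sum((x - t) ** 2 for x, t in zip(candidate, target))

def evolve(seed, generations=300, population=8):
    best = seed
    for _ in range(generations):
        proposals = [llm_propose(best) for _ in range(population)]
        best = max(proposals + [best], key=evaluate)  # keep the best candidate so far
    return best

print([round(x, 2) for x in evolve([0.0, 0.0, 0.0])])  # approaches [1.0, 2.0, 3.0]
```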

This kind of exploration is key to advanced mathematics. “I’m often looking for interesting phenomena and pushing myself in a certain direction,” says Geordie Williamson, a mathematician at the University of Sydney in Australia. “Like: ‘Let me look down this little alley. Oh, I found something!’”

Williamson worked with Meta on an AI tool called PatternBoost, designed to support this kind of exploration. PatternBoost can take a mathematical idea or statement and generate similar ones. “It’s like: ‘Here’s a bunch of interesting things. I don’t know what’s going on, but can you produce more interesting things like that?’” he says.
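As a caricature of that “produce more interesting things like these” loop (not the actual PatternBoost pipeline, which pairs local search with a transformer), you can fit a very simple model to the best examples found so far and sample new candidates from it:

```python
# Toy "generate more things like the best examples" loop: score candidates,
# fit a simple per-position model to the best ones, and sample new candidates
# from that model. A minimal sketch only, not PatternBoost itself.
import random

LENGTH = 12

def interesting(bits):
    """Stand-in score: rewards 1s in even positions and 0s in odd positions."""
    return sum(b == (i % 2 == 0) for i, b in enumerate(bits))

def fit(examples):
    """Per-position frequency of 1s among the current best examples."""
    return [sum(e[i] for e in examples) / len(examples) for i in range(LENGTH)]

def sample(probs):
    return [int(random.random() < p) for p in probs]

pool = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(50)]
for _ in range(30):
    best = sorted(pool, key=interesting, reverse=True)[:10]   # keep the best
    probs = fit(best)
    pool = best + [sample(probs) for _ in range(40)]          # generate more like them

print(max(pool, key=interesting))  # converges toward 1,0,1,0,... in most runs
```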

Such brainstorming is essential work in math. It’s how new ideas get conjured. Take the icosahedron, says Williamson: “It’s a beautiful example of this, which I kind of keep coming back to in my own work.” The icosahedron is a 20-sided 3D object whose faces are all triangles (think of a 20-sided die). It is the largest of the five Platonic solids, the regular convex polyhedra; the others are the tetrahedron (four faces), the cube (six), the octahedron (eight), and the dodecahedron (12).

Remarkably, the fact that there are exactly five of these objects was proved by mathematicians in ancient Greece. “At the time that this theorem was proved, the icosahedron didn’t exist,” says Williamson. “You can’t go to a quarry and find it—someone found it in their mind. And the icosahedron goes on to have a profound effect on mathematics. It’s still influencing us today in very, very profound ways.”
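For the curious, the classical argument that only five such solids can exist is short (this is standard textbook material, not something from Williamson). If each face has p edges and q faces meet at every vertex, with p and q at least 3, then counting edges two ways gives pF = 2E and qV = 2E, and Euler’s formula V - E + F = 2 forces

\[
\frac{1}{p} + \frac{1}{q} = \frac{1}{2} + \frac{1}{E} > \frac{1}{2},
\]

which only the pairs (3,3), (4,3), (3,4), (5,3), and (3,5) satisfy: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron.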

For Williamson, the exciting potential of tools like PatternBoost is that they might help people discover future mathematical objects like the icosahedron that go on to shape the way math is done. But we’re not there yet. “AI can contribute in a meaningful way to research-level problems,” he says. “But we’re certainly not getting inundated with new theorems at this stage.”

Ultimately, it comes down to the fact that machines still lack what you might call intuition or creative thinking. Williamson sums it up like this: We now have AI that can beat humans when it knows the rules of the game. “But it’s one thing for a computer to play Go at a superhuman level and another thing for the computer to invent the game of Go.”

“I think that applies to advanced mathematics,” he says. “Breakthroughs come from a new way of thinking about something, which is akin to finding completely new moves in a game. And I don’t really think we understand where those really brilliant moves in deep mathematics come from.”

Perhaps AI tools like AlphaEvolve and PatternBoost are best thought of as advance scouts for human intuition. They can discover new directions and point out dead ends, saving mathematicians months or years of work. But the true breakthroughs will still come from the minds of people, as has been the case for thousands of years.

For now, at least. “There’s plenty of tech companies that tell us that won’t last long,” says Williamson. “But you know—we’ll see.” 

What’s next for robots

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Jan Liphardt teaches bioengineering at Stanford, but to many strangers in Los Altos, California, he is a peculiar man they see walking a four-legged robotic dog down the street. 

Liphardt has been experimenting with building and modifying robots for years, and when he brings his “dog” out in public, he generally gets one of three reactions. Young children want to have one, their parents are creeped out, and baby boomers try to ignore it. “They’ll quickly walk by,” he says, “like, ‘What kind of dumb new stuff is going on here?’” 

In the many conversations I’ve had about robots, I’ve also found that most people tend to fall into these three camps, though I don’t see such a neat age division. Some are upbeat and vocally hopeful that a future is just around the corner in which machines can expertly handle much of what is currently done by humans, from cooking to surgery. Others are scared: of job losses, injuries, and whatever problems may come up as we try to live side by side. 

The final camp, which I think is the largest, is just unimpressed. We’ve been sold lots of promises that robots will transform society ever since the first robotic arm was installed on an assembly line at a General Motors plant in New Jersey in 1961. Few of those promises have panned out so far. 

But this year, there’s reason to think that even those staunchly in the “bored” camp will be intrigued by what’s happening in the robot races. Here’s a glimpse at what to keep an eye on. 

Humanoids are put to the test

The race to build humanoid robots is motivated by the idea that the world is set up for the human form, and that automating that form could mean a seismic shift for robotics. It is led by some particularly outspoken and optimistic entrepreneurs, including Brett Adcock, the founder of Figure AI, a company making such robots that’s valued at more than $2.6 billion (it’s begun testing its robots with BMW). Adcock recently told Time, “Eventually, physical labor will be optional.” Elon Musk, whose company Tesla is building a version called Optimus, has said humanoid robots will create “a future where there is no poverty.” A robotics company called Eliza Wakes Up is taking preorders for a $420,000 humanoid called, yes, Eliza.

In June 2024, Agility Robotics sent a fleet of its Digit humanoid robots to GXO Logistics, which moves products for companies ranging from Nike to Nestlé. The humanoids can handle most tasks that involve picking things up and moving them somewhere else, like unloading pallets or putting boxes on a conveyor. 

There have been hiccups: Highly polished concrete floors can cause robots to slip at first, and buildings need good Wi-Fi coverage for the robots to keep functioning. But charging is a bigger issue. Agility’s current version of Digit, with a 39-pound battery, can run for two to four hours before it needs to charge for one hour, so swapping out the robots for fresh ones is a common task on each shift. If a small number of charging docks are installed, the robots can theoretically keep themselves charged by shuffling among the docks overnight, when some facilities aren’t running, but moving around on their own can set off a building’s security system. “It’s a problem,” says Agility’s chief technology officer, Melonee Wise.
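A rough back-of-the-envelope calculation (ours, not Agility’s) shows why swapping is baked into every shift: a robot that runs for about three hours and then charges for one is available only

\[
\frac{3\ \text{h}}{3\ \text{h} + 1\ \text{h}} = 75\%
\]

of the time, so keeping three posts continuously covered takes roughly four robots, plus enough docks to keep the idle ones charging.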

Wise is cautious about whether humanoids will be widely adopted in workplaces. “I’ve always been a pessimist,” she says. That’s because getting robots to work well in a lab is one thing, but integrating them into a bustling warehouse full of people and forklifts moving goods on tight deadlines is another task entirely.

If 2024 was the year of unsettling humanoid product launch videos, this year we will see those humanoids put to the test, and we’ll find out whether they’ll be as productive for paying customers as promised. Now that Agility’s robots have been deployed in fast-paced customer facilities, it’s clear that small problems can really add up. 

Then there are issues with how robots and humans share spaces. In the GXO facility the two work in completely separate areas, Wise says, but there are cases where, for example, a human worker might accidentally leave something obstructing a charging station. That means Agility’s robots can’t return to the dock to charge, so they need to alert a human employee to move the obstruction out of the way, slowing operations down.  

It’s often said that robots don’t call out sick or need health care. But this year, as fleets of humanoids arrive on the job, we’ll begin to find out the limitations they do have.

Learning from imagination

The way we teach robots how to do things is changing rapidly. It used to be necessary to break their tasks down into steps with specifically coded instructions, but now, thanks to AI, those instructions can be gleaned from observation. Just as ChatGPT was taught to write through exposure to trillions of sentences rather than by explicitly learning the rules of grammar, robots are learning through videos and demonstrations. 

That poses a big question: Where do you get all these videos and demonstrations for robots to learn from?

Nvidia, the world’s most valuable company, has long aimed to meet that need with simulated worlds, drawing on its roots in the video-game industry. It creates worlds in which roboticists can expose digital replicas of their robots to new environments to learn. A self-driving car can drive millions of virtual miles, or a factory robot can learn how to navigate in different lighting conditions.

In December, the company went a step further, releasing what it’s calling a “world foundation model.” Called Cosmos, the model has learned from 20 million hours of video—the equivalent of watching YouTube nonstop since Rome was at war with Carthage—and can be used to generate synthetic training data.
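The comparison roughly checks out:

\[
\frac{20{,}000{,}000\ \text{hours}}{24 \times 365\ \text{hours per year}} \approx 2{,}280\ \text{years},
\]

which lands in the third century BCE, when Rome and Carthage were fighting the First Punic War.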

Here’s an example of how this model could help in practice. Imagine you run a robotics company that wants to build a humanoid that cleans up hospitals. You can start building this robot’s “brain” with a model from Nvidia, which will give it a basic understanding of physics and how the world works, but then you need to help it figure out the specifics of how hospitals work. You could go out and take videos and images of the insides of hospitals, or pay people to wear sensors and cameras while they go about their work there.

“But those are expensive to create and time consuming, so you can only do a limited number of them,” says Rev Lebaredian, vice president of simulation technologies at Nvidia. Cosmos can instead take a handful of those examples and create a three-dimensional simulation of a hospital. It will then start making changes—different floor colors, different sizes of hospital beds—and create slightly different environments. “You’ll multiply that data that you captured in the real world millions of times,” Lebaredian says. In the process, the model will be fine-tuned to work well in that specific hospital setting. 
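Here is a rough sketch of what that multiplication step looks like in code. Everything in it is hypothetical (the scene names and parameters are invented for illustration, and this is not Nvidia’s actual Cosmos API); the point is simply that a few captured scenes can be programmatically varied into thousands of training variants.

```python
# Hypothetical sketch of domain randomization: take a handful of captured
# hospital scenes and generate many synthetic variants by varying scene
# parameters. Names and parameters are invented; this is not Nvidia's API.
import random

CAPTURED_SCENES = ["ward_a_scan", "ward_b_scan", "corridor_scan"]  # placeholder captures

def randomize(scene_id):
    """Produce one synthetic variant of a captured scene by varying its parameters."""
    return {
        "base_scene": scene_id,
        "floor_color": random.choice(["gray", "beige", "blue"]),
        "bed_length_m": round(random.uniform(1.9, 2.3), 2),
        "lighting_lux": random.randint(200, 800),
        "clutter_level": round(random.random(), 2),  # 0 = tidy, 1 = cluttered
    }

def generate_variants(scenes, per_scene=1000):
    """Multiply a handful of real captures into thousands of synthetic variants."""
    return [randomize(s) for s in scenes for _ in range(per_scene)]

variants = generate_variants(CAPTURED_SCENES)
print(len(variants), variants[0])  # 3000 variants spun out of 3 captured scenes
```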

It’s sort of like learning both from your experiences in the real world and from your own imagination (stipulating that your imagination is still bound by the rules of physics). 

Teaching robots through AI and simulations isn’t new, but it’s going to become much cheaper and more powerful in the years to come. 

A smarter brain gets a smarter body

Plenty of progress in robotics has to do with improving the way a robot senses and plans what to do—its “brain,” in other words. Those advancements can often happen faster than those that improve a robot’s “body,” which determines how well it can move through the physical world, especially in environments that are more chaotic and unpredictable than controlled assembly lines.

The military has always been keen on changing that and expanding the boundaries of what’s physically possible. The US Navy has been testing machines from a company called Gecko Robotics that can navigate up vertical walls (using magnets) to do things like infrastructure inspections, checking for cracks, flaws, and bad welding on aircraft carriers. 

There are also investments being made for the battlefield. While nimble and affordable drones have reshaped rural battlefields in Ukraine, new efforts are underway to bring those drone capabilities indoors. The defense manufacturer Xtend received an $8.8 million contract from the Pentagon in December 2024 for its drones, which can navigate in confined indoor spaces and urban environments. These so-called “loitering munitions” are one-way attack drones carrying explosives that detonate on impact.

“These systems are designed to overcome challenges like confined spaces, unpredictable layouts, and GPS-denied zones,” says Rubi Liani, cofounder and CTO at Xtend. Deliveries to the Pentagon should begin in the first few months of this year. 

Another initiative—sparked in part by the Replicator project, the Pentagon’s plan to spend more than $1 billion on small unmanned vehicles—aims to develop more autonomously controlled submarines and surface vehicles. This is particularly of interest as the Department of Defense focuses increasingly on the possibility of a future conflict in the Pacific between China and Taiwan. In such a conflict, the drones that have dominated the war in Ukraine would serve little use because battles would be waged almost entirely at sea, where small aerial drones would be limited by their range. Instead, undersea drones would play a larger role.

All these changes, taken together, point toward a future where robots are more flexible in how they learn, where they work, and how they move. 

Jan Liphardt from Stanford thinks the next frontier of this transformation will hinge on the ability to instruct robots through speech. Large language models’ ability to understand and generate text has already made them a sort of translator between Liphardt and his robot.

“We can take one of our quadrupeds and we can tell it, ‘Hey, you’re a dog,’ and the thing wants to sniff you and tries to bark,” he says. “Then we do one word change—‘You’re a cat.’ Then the thing meows and, you know, runs away from dogs. And we haven’t changed a single line of code.”

Correction: A previous version of this story incorrectly stated that the robotics company Eliza Wakes Up has ties to a16z.

What to expect from Neuralink in 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In November, a young man named Noland Arbaugh announced he’d be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom.

The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk’s brain-interface company.

The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaugh’s livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people’s ability to roam the web and play games in their daily lives, giving them what the company has called “digital freedom.”

But this is not yet a commercial product. The current studies are small-scale—they are true experiments, explorations of how the device works and how it can be improved. For instance, at some point last year, more than half the electrode-studded “threads” inserted into Arbaugh’s brain retracted, and his control over the device worsened; Neuralink rushed to implement fixes so he could use his remaining electrodes to move the mouse.

Neuralink did not reply to emails seeking comment, but here is what our analysis of its public statements leads us to expect from the company in 2025.

More patients

How many people will get these implants? Elon Musk keeps predicting huge numbers. In August, he posted on X: “If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within five years, millions within 10 years.”

In reality, the actual pace is slower—a lot slower. That’s because in a study of a novel device, it’s typical for the first patients to be staged months apart, to allow time to monitor for problems. 

Neuralink has publicly announced that two people have received an implant: Arbaugh and a man referred to only as “Alex,” who received his in July or August. 

Then, on January 8, Musk disclosed during an online interview that there was now a third person with an implant. “We’ve got now three patients, three humans with Neuralinks implanted, and they are all working …well,” Musk said. During 2025, he added, “we expect to hopefully do, I don’t know, 20 or 30 patients.”  

Barring major setbacks, expect the pace of implants to increase—although perhaps not as fast as Musk says. In November, Neuralink updated its US trial listing to include space for five volunteers (up from three), and it also opened a trial in Canada with room for six. Considering these two studies only, Neuralink would carry out at least two more implants by the end of 2025 and eight by the end of 2026.

However, by opening further international studies, Neuralink could increase the pace of the experiments.

Better control

So how good is Arbaugh’s control over the mouse? You can get an idea by trying a game called Webgrid, where you try to click quickly on a moving target. The program translates your speed into a measure of information transfer: bits per second. 

Neuralink claims Arbaugh reached a rate of over nine bits per second, doubling the old brain-interface record. The median able-bodied user scores around 10 bits per second, according to Neuralink.
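A simplified way to think about that metric (Neuralink’s exact scoring formula may differ): if the grid offers N possible targets, each correct click conveys about log2 N bits, so

\[
\text{bits per second} \approx \frac{(\text{correct selections}) \times \log_2 N}{\text{elapsed seconds}}.
\]

Clicking one target per second on a 1,024-cell grid, for example, would work out to about 10 bits per second.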

And yet during his livestream, Arbaugh complained that his mouse control wasn’t very good because his “model” was out of date. It was a reference to how his imagined physical movements get mapped to mouse movements. That mapping degrades over hours and days, and to recalibrate it, he has said, he spends as long as 45 minutes doing a set of retraining tasks on his monitor, such as imagining moving a dot from a center point to the edge of a circle.

Noland Arbaugh stops to calibrate during a livestream on X
@MODDEDQUAD VIA X

Improving the software that sits between Arbaugh’s brain and the mouse is a big area of focus for Neuralink—one where the company is still experimenting and making significant changes. Among the goals: cutting the recalibration time to a few minutes. “We want them to feel like they are in the F1 [Formula One] car, not the minivan,” Bliss Chapman, who leads the BCI software team, told the podcaster Lex Fridman last year.

Device changes

Before Neuralink ever seeks approval to sell its brain interface, it will have to lock in a final device design that can be tested in a “pivotal trial” involving perhaps 20 to 40 patients, to show it really works as intended. That type of study could itself take a year or two to carry out and hasn’t yet been announced.

In fact, Neuralink is still tweaking its implant in significant ways—for instance, by trying to increase the number of electrodes or extend the battery life. This month, Musk said the next human tests would be using an “upgraded Neuralink device.”

The company is also still developing the surgical robot, called R1, that’s used to implant the device. It functions like a sewing machine: A surgeon uses R1 to thread the electrode wires into people’s brains. According to Neuralink’s job listings, improving the R1 robot and making the implant process entirely automatic is a major goal of the company. That’s partly to meet Musk’s predictions of a future where millions of people have an implant, since there wouldn’t be enough neurosurgeons in the world to put them all in manually. 

“We want to get to the point where it’s one click,” Neuralink president Dongjin Seo told Fridman last year.

Robot arm

Late last year, Neuralink opened a companion study through which it says some of its existing implant volunteers will get to try using their brain activity to control not only a computer mouse but other types of external devices, including an “assistive robotic arm.”

We haven’t yet seen what Neuralink’s robotic arm looks like—whether it’s a tabletop research device or something that could be attached to a wheelchair and used at home to complete daily tasks.

But it’s clear such a device could be helpful. During Arbaugh’s livestream he frequently asked other people to do simple things for him, like brush his hair or put on his hat.

Arbaugh demonstrates the use of Imagined Movement Control.
@MODDEDQUAD VIA X

And using brains to control robots is definitely possible—although so far only in a controlled research setting. In tests using a different brain implant, carried out at the University of Pittsburgh in 2012, a paralyzed woman named Jan Scheuermann was able to use a robot arm to stack blocks and plastic cups about as well as a person who’d had a severe stroke—impressive, since she couldn’t actually move her own limbs.

There are several practical obstacles to using a robot arm at home. One is developing a robot that’s safe and useful. Another, as noted by Wired, is that the calibration steps to maintain control over an arm that can make 3D movements and grasp objects could be onerous and time consuming.

Vision implant

In September, Neuralink said it had received “breakthrough device designation” from the FDA for a version of its implant that could be used to restore limited vision to blind people. The system, which it calls Blindsight, would work by sending electrical impulses directly into a volunteer’s visual cortex, producing spots of light called phosphenes. If there are enough spots, they can be organized into a simple, pixelated form of vision, as previously demonstrated by academic researchers.

The FDA designation is not the same as permission to start the vision study. Instead, it’s a promise by the agency to speed up review steps, including agreements around what a trial should look like. Right now, it’s impossible to guess when a Neuralink vision trial could start, but it won’t necessarily be this year. 

More money

Neuralink last raised money in 2023, collecting around $325 million from investors in a funding round that valued the company at over $3 billion, according to PitchBook. Ryan Tanaka, who publishes a podcast about the company, Neura Pod, says he thinks Neuralink will raise more money this year and that the valuation of the private company could double.

Fighting regulators

Neuralink has attracted plenty of scrutiny from news reporters, animal-rights campaigners, and even fraud investigators at the Securities and Exchange Commission. Many of the questions surround its treatment of test animals and whether it rushed to try the implant in people.

More recently, Musk has started using his X platform to badger and bully heads of state and was named by Donald Trump to co-lead a so-called Department of Government Efficiency, which Musk says will “get rid of nonsensical regulations” and potentially gut some DC agencies. 

During 2025, watch for whether Musk uses his digital bullhorn to give health regulators pointed feedback on how they’re handling Neuralink.

Other efforts

Don’t forget that Neuralink isn’t the only company working on brain implants. A company called Synchron has one that’s inserted into the brain through a blood vessel, which it’s also testing in human trials of brain control over computers. Other companies, including Paradromics, Precision Neuroscience, and Blackrock Neurotech, are also developing advanced brain-computer interfaces.

Special thanks to Ryan Tanaka of Neura Pod for pointing us to Neuralink’s public announcements and projections.

What’s next for nuclear power

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology.

As electricity demand rises around the world for everything from electric vehicles to data centers, there’s renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase.

That’s significant because nuclear power promises a steady source of electricity as climate change pushes global temperatures to new heights and energy demand surges around the world. Here’s what to expect next for the industry.  

A global patchwork

The past two years have seen a new commitment to nuclear power around the globe, including a pledge by 31 countries at the UN climate talks to triple global nuclear energy capacity by 2050. However, the prospects for the nuclear industry differ depending on where you look.

The US is currently home to the largest number of operational nuclear reactors in the world. If its capacity were to triple, that would mean adding a somewhat staggering 200 gigawatts of new nuclear energy capacity to the current total of roughly 100 gigawatts. And that’s in addition to replacing any expected retirements from a relatively old fleet. But the country has come to something of a stall. A new reactor at the Vogtle plant in Georgia came online last year (following significant delays and cost overruns), but there are no major conventional reactors under construction or in review by regulators in the US now.

This year also brings an uncertain atmosphere for nuclear power in the US as the incoming Trump administration takes office. While the technology tends to have wide political support, it’s possible that policies like tariffs could affect the industry by increasing the cost of building materials like steel, says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy.

Globally, most reactors under construction or in planning phases are in Asia, and growth in China is particularly impressive. The country’s first nuclear power plant connected to the grid in 1991, and in just a few decades it has built the third-largest fleet in the world, after only France and the US. China has four large reactors likely to come online this year, and another handful are scheduled for commissioning in 2026.

This year will see both Bangladesh and Turkey start up their first nuclear reactors. Egypt also has its first nuclear plant under construction, though it’s not expected to undergo commissioning for several years.  

Advancing along

Commercial nuclear reactors on the grid today, and most of those currently under construction, generally follow a similar blueprint: The fuel that powers the reactor is low-enriched uranium, and water is used as a coolant to control the temperature inside.

But newer, advanced reactors are inching closer to commercial use. A wide range of these so-called Generation IV reactors are in development around the world, all deviating from the current blueprint in one way or another in an attempt to improve safety, efficiency, or both. Some use molten salt or a metal like lead as a coolant, while others use a more enriched version of uranium as a fuel. Often, there’s a mix-and-match approach with variations on the fuel type and cooling methods.

The next couple of years will be crucial for advanced nuclear technology as proposals and designs move toward the building process. “We’re watching paper reactors turn into real reactors,” says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank.

Much of the funding and industrial activity in advanced reactors is centered in the US, where several companies are close to demonstrating their technology.

Kairos Power is building reactors cooled by molten salt, specifically a fluorine-containing material called Flibe. The company received a construction permit from the US Nuclear Regulatory Commission (NRC) for its first demonstration reactor in late 2023, and a second permit for another plant in late 2024. Construction will take place on both facilities over the next few years, and the plan is to complete the first demonstration facility in 2027.

TerraPower is another US-based company working on Gen IV reactors, though the design for its Natrium reactor uses liquid sodium as a coolant. The company is taking a slightly different approach to construction, too: by separating the nuclear and non-nuclear portions of the facility, it was able to break ground on part of its site in June of 2024. It’s still waiting for construction approval from the NRC to begin work on the nuclear side, which the company expects to do by 2026.

A US Department of Defense project could be the first in-progress Gen IV reactor to generate electricity, though it’ll be at a very small scale. Project Pele is a transportable microreactor being manufactured by BWXT Advanced Technologies. Assembly is set to begin early this year, with transportation to the final site at Idaho National Lab expected in 2026.

Advanced reactors certainly aren’t limited to the US. Even as China is quickly building conventional reactors, the country is starting to make waves in a range of advanced technologies as well. Much of the focus is on high-temperature gas-cooled reactors, says Lorenzo Vergari, an assistant professor at the University of Illinois Urbana-Champaign. These reactors use helium gas as a coolant and reach temperatures over 1,500 °C, much higher than other designs.

China’s first commercial demonstration reactor of this type came online in late 2023, and a handful of larger reactors that employ the technology are currently in planning phases or under construction.

Squeezing capacity

It will take years, or even decades, for even the farthest-along advanced reactor projects to truly pay off with large amounts of electricity on the grid. So amid growing electricity demand around the world, there’s renewed interest in getting as much power out of existing nuclear plants as possible.

One trend that’s taken off in countries with relatively old nuclear fleets is license extension. While many plants built in the 20th century were originally licensed to run for 40 years, there’s no reason many of them can’t run for longer if they’re properly maintained and some equipment is replaced.

Regulators in the US have granted 20-year extensions to much of the fleet, bringing the expected lifetime of many to 60 years. A handful of reactors have seen their licenses extended even beyond that, to 80 years. Countries including France and Spain have also recently extended licenses of operating reactors beyond their 40-year initial lifetimes. Such extensions are likely to continue, and the next few years could see more reactors in the US relicensed for up to 80-year lifetimes.

In addition, there’s interest in reopening shuttered plants, particularly those that have shut down recently for economic reasons. Palisades Nuclear Plant in Michigan is the target of one such effort, and the project secured a $1.52 billion loan from the US Department of Energy to help with the costs of reviving it. Holtec, the plant’s owner and operator, is aiming to have the facility back online in 2025. 

However, the NRC has reported possible damage to some of the equipment at the plant, specifically the steam generators. Depending on the extent of the repairs needed, the additional cost could potentially make reopening uneconomical, White says.

A reactor at the former Three Mile Island Nuclear Facility is another target. The site’s owner says the reactor could be running again by 2028, though battles over connecting the plant to the grid could play out in the coming year or so. Finally, the owners of the Duane Arnold Energy Center in Iowa are reportedly considering reopening the nuclear plant, which shut down in 2020.

Big Tech’s big appetite

One of the factors driving the rising appetite for nuclear power is the stunning growth of AI, which relies on data centers requiring a huge amount of energy. Last year brought new interest from tech giants looking to nuclear as a potential solution to the AI power crunch.

Microsoft had a major hand in plans to reopen the reactor at Three Mile Island—the company signed a deal in 2024 to purchase power from the facility if it’s able to reopen. And that’s just the beginning.

Google signed a deal with Kairos Power in October 2024 that would see the startup build up to 500 megawatts’ worth of power plants by 2035, with Google purchasing the energy. Amazon went one step further than these deals, investing directly in X-energy, a company building small modular reactors. The money will directly fund the development, licensing, and construction of a project in Washington.

Funding from big tech companies could be a major help in keeping existing reactors running and getting advanced projects off the ground, but many of these commitments so far are vague, says Good Energy Collective’s Lovering. Major milestones to watch for include big financial commitments, contracts signed, and applications submitted to regulators, she says.

“Nuclear had an incredible 2024, probably the most exciting year for nuclear in many decades,” says Staffan Qvist, a nuclear engineer and CEO of Quantified Carbon, an international consultancy focused on decarbonizing energy and industry. Deploying it at the scale required will be a big challenge, but interest is ratcheting up. As he puts it, “There’s a big world out there hungry for power.”

What’s next for AI in 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

For the last couple of years we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.

How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list). 

We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground.

So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient, language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative virtual playgrounds 

If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.

We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that could take a still image and turn it into a side-scrolling 2D platform game that players could interact with. In December, the firm revealed Genie 2, a model that can spin a starter image into an entire virtual world.

Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup cofounded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs.

One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games.

But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.   

Will Douglas Heaven

2. Large language models that “reason”

The buzz was justified. When OpenAI revealed o1 in September, it introduced a new paradigm in how large language models work. Two months later, the firm pushed that paradigm forward in almost every way with o3—a model that just might reshape this technology for good.

Most models, including OpenAI’s flagship GPT-4, spit out the first response they come up with. Sometimes it’s correct; sometimes it’s not. But the firm’s new models are trained to work through their answers step by step, breaking down tricky problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as “reasoning” (yes—we know exactly how loaded that term is), can make this technology more accurate, especially for math, physics, and logic problems.
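You can get a feel for why checking beats blurting with a toy generate-and-verify loop (purely illustrative; this is not how o1 or o3 are built internally): the “model” proposes candidate answers, a checker verifies each one, and only an answer that passes is returned.

```python
# Toy illustration of "propose an answer, check it, try again if it fails."
# propose_answer stands in for sampling from a model; the checker verifies
# candidates independently. Purely illustrative, not how o1/o3 work.
import random

def propose_answer():
    """Stand-in for a model guessing the integer square root of 1,369."""
    return random.randint(0, 100)

def check(candidate):
    """Verifier: square the candidate instead of trusting the proposer."""
    return candidate * candidate == 1369

def answer_with_retries(max_attempts=5000):
    for _ in range(max_attempts):
        candidate = propose_answer()
        if check(candidate):
            return candidate  # only return an answer that passes the check
    return None

print(answer_with_retries())  # 37
```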

It’s also crucial for agents.

In December, Google DeepMind revealed an experimental new web-browsing agent called Mariner. In the middle of a preview demo that the company gave to MIT Technology Review, Mariner seemed to get stuck. Megha Goel, a product manager at the company, had asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she’d given it. Mariner found a recipe on the web and started adding the ingredients to Goel’s online grocery basket.

Then it stalled; it couldn’t figure out what type of flour to pick. Goel watched as Mariner explained its steps in a chat window: “It says, ‘I will use the browser’s Back button to return to the recipe.’”

It was a remarkable moment. Instead of hitting a wall, the agent had broken the task down into separate actions and picked one that might resolve the problem. Figuring out you need to click the Back button may sound basic, but for a mindless bot it’s akin to rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and carried on filling Goel’s basket.

Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving, called Gemini 2.0 Flash Thinking.

But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models that use similar techniques, making them better at a whole range of tasks, from cooking to coding. Expect a lot more buzz about reasoning (we know, we know) this year.

—Will Douglas Heaven

3. It’s boom time for AI in science 

One of the most exciting uses for AI is speeding up discovery in the natural sciences. Perhaps the greatest vindication of AI’s potential on this front came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize in chemistry to Demis Hassabis and John M. Jumper from Google DeepMind for building the AlphaFold tool, which predicts the 3D structures of proteins, and to David Baker for building tools to help design new proteins.

Expect this trend to continue next year, and to see more data sets and models that are aimed specifically at scientific discovery. Proteins were the perfect target for AI, because the field had excellent existing data sets that AI models could be trained on. 

The hunt is on to find the next big thing. One potential area is materials science. Meta has released massive data sets and models that could help scientists use AI to discover new materials much faster, and in December, Hugging Face, together with the startup Entalpic, launched LeMaterial, an open-source project that aims to simplify and accelerate materials research. Their first project is a data set that unifies, cleans, and standardizes the most prominent material data sets. 

AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI let scientists test its latest o1 model and see how it might support them in research. The results were encouraging. 

Having an AI tool that can operate in a similar way to a scientist is one of the fantasies of the tech sector. In a manifesto published in October last year, Anthropic founder Dario Amodei highlighted science, especially biology, as one of the key areas where powerful AI could help. Amodei speculates that in the future, AI could be not only a method of data analysis but a “virtual biologist who performs all the tasks biologists do.” We’re still a long way away from this scenario. But next year, we might see important steps toward it. 

—Melissa Heikkilä

4. AI companies get cozier with national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks. 

The US military has launched a number of initiatives that show it’s eager to adopt AI, from the Replicator program—which, inspired by the war in Ukraine, promises to spend $1 billion on small drones—to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics. European militaries are under pressure to up their tech investment, triggered by concerns that Donald Trump’s administration will cut spending to Ukraine. Rising tensions between Taiwan and China weigh heavily on the minds of military planners, too. 

In 2025, these trends will continue to be a boon for defense-tech companies like Palantir, Anduril, and others, which are now capitalizing on classified military data to train AI models. 

The defense industry’s deep pockets will tempt mainstream AI companies into the fold too. OpenAI in December announced it is partnering with Anduril on a program to take down drones, completing a year-long pivot away from its policy of not working with the military. It joins the ranks of Microsoft, Amazon, and Google, which have worked with the Pentagon for years. 

Other AI competitors, which are spending billions to train and develop new models, will face more pressure in 2025 to think seriously about revenue. It’s possible that they’ll find enough non-defense customers who will pay handsomely for AI agents that can handle complex tasks, or creative industries willing to spend on image and video generators. 

But they’ll also be increasingly tempted to throw their hats in the ring for lucrative Pentagon contracts. Expect to see companies wrestle with whether working on defense projects will be seen as a contradiction to their values. OpenAI’s rationale for changing its stance was that “democracies should continue to take the lead in AI development,” the company wrote, reasoning that lending its models to the military would advance that goal. In 2025, we’ll be watching others follow its lead. 

James O’Donnell

5. Nvidia sees legitimate competition

For much of the current AI boom, if you were a tech startup looking to try your hand at making an AI model, Jensen Huang was your man. As CEO of Nvidia, the world’s most valuable corporation, Huang helped the company become the undisputed leader in chips used both to train AI models and to ping a model when anyone uses it, a step called “inferencing.”

A number of forces could change that in 2025. For one, behemoth competitors like Amazon, Broadcom, AMD, and others have been investing heavily in new chips, and there are early indications that these could compete closely with Nvidia’s—particularly for inference, where Nvidia’s lead is less solid. 

A growing number of startups are also attacking Nvidia from a different angle. Rather than trying to marginally improve on Nvidia’s designs, startups like Groq are making riskier bets on entirely new chip architectures that, with enough time, promise to provide more efficient or effective training. In 2025 these experiments will still be in their early stages, but it’s possible that a standout competitor will change the assumption that top AI models rely exclusively on Nvidia chips.

Underpinning this competition, the geopolitical chip war will continue. That war thus far has relied on two strategies. On one hand, the West seeks to limit exports to China of top chips and the technologies to make them. On the other, efforts like the US CHIPS Act aim to boost domestic production of semiconductors.

Donald Trump may escalate those export controls and has promised massive tariffs on any goods imported from China. In 2025, such tariffs would put Taiwan—on which the US relies heavily because of the chip manufacturer TSMC—at the center of the trade wars. That’s because Taiwan has said it will help Chinese firms relocate to the island so they can avoid the proposed tariffs. That could draw further criticism from Trump, who has expressed frustration with US spending to defend Taiwan from China. 

It’s unclear how these forces will play out, but they will only further incentivize chipmakers to reduce reliance on Taiwan, which is the entire purpose of the CHIPS Act. As spending from the bill begins to circulate, next year could bring the first evidence of whether it’s materially boosting domestic chip production. 

James O’Donnell

What’s next for our privacy?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Every day, we are tracked hundreds or even thousands of times across the digital world. Cookies and web trackers capture every website link that we click, while code installed in mobile apps tracks every physical location that our devices—and, by extension, we—have visited. All of this is collected, packaged together with other details (compiled from public records, supermarket member programs, utility companies, and more), and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent. 

A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. While the latest iteration of such a bill, the American Privacy Rights Act of 2024, gained more momentum than previously proposed laws, it became so watered down that it lost support from both Republicans and Democrats before it even came to a vote. 

There have been some privacy wins in the form of limits on what data brokers—third-party companies that buy and sell consumers’ personal information for targeted advertisements, messaging, and other purposes—can do with geolocation data. 

These are still small steps, though—and they are happening as increasingly pervasive and powerful technologies collect more data than ever. And at the same time, Washington is preparing for a new presidential administration that has attacked the press and other critics, promised to target immigrants for mass deportation, threatened to seek retribution against perceived enemies, and supported restrictive state abortion laws. This is not even to mention the increased collection of our biometric data, especially for facial recognition, and the normalization of its use in all kinds of ways. In this light, our personal data has arguably never been more vulnerable, and the imperative for privacy has never felt more urgent. 

So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about (some of) what’s on their mind regarding how our digital data might be traded or protected moving forward. 

Reining in a problematic industry

In early December, the Federal Trade Commission announced separate settlement agreements with the data brokers Mobilewalla and Gravy Analytics (and its subsidiary Venntel). Finding that the companies had tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent, the FTC banned the companies from selling such data except in specific circumstances. This follows something of a busy year in regulation of data brokers, including multiple FTC enforcement actions against other companies for similar use and sale of geolocation data, as well as a proposed rule from the Justice Department that would prohibit the sale of bulk data to foreign entities. 

And on the same day that the FTC announced these settlements in December, the Consumer Financial Protection Bureau proposed a new rule that would designate data brokers as consumer reporting agencies, which would trigger stringent reporting requirements and consumer privacy protections. The rule would prohibit the collection and sharing of people’s sensitive information, such as their salaries and Social Security numbers, without “legitimate purposes.” While the rule will still need to undergo a 90-day public comment period, and it’s unclear whether it will move forward under the Trump administration, if it’s finalized it has the power to fundamentally limit how data brokers do business.

Right now, there just aren’t many limits on how these companies operate—nor, for that matter, clear information on how many data brokerages even exist. Industry watchers estimate there may be 4,000 to 5,000 data brokers around the world, many of which we’ve never heard of—and whose names constantly shift. In California alone, the state’s 2024 Data Broker Registry lists 527 such businesses that have voluntarily registered there, nearly 90 of which also self-reported that they collect geolocation data. 

All this data is widely available for purchase by anyone who will pay. Marketers buy data to create highly targeted advertisements, and banks and insurance companies do the same to verify identity, prevent fraud, and conduct risk assessments. Law enforcement buys geolocation data to track people’s whereabouts without getting traditional search warrants. Foreign entities can also currently buy sensitive information on members of the military and other government officials. And on people-finder websites, basically anyone can pay for anyone else’s contact details and personal history.  

Data brokers and their clients defend these transactions by saying that most of this data is anonymized—though it’s questionable whether that can truly be done in the case of geolocation data. Besides, anonymous data can be easily reidentified, especially when it’s combined with other personal information. 

Digital-rights advocates have spent years sounding the alarm on this secretive industry, especially the ways in which it can harm already marginalized communities, though various types of data collection have sparked consternation across the political spectrum. Representative Cathy McMorris Rodgers, the Republican chair of the House Energy and Commerce Committee, for example, was concerned about how the Centers for Disease Control and Prevention bought location data to evaluate the effectiveness of pandemic lockdowns. Then a study from last year showed how easy (and cheap) it was to buy sensitive data about members of the US military; Senator Elizabeth Warren, a Democrat, called out the national security risks of data brokers in a statement to MIT Technology Review, and Senator John Cornyn, a Republican, later said he was “shocked” when he read about the practice in our story. 

But it was the 2022 Supreme Court decision ending the constitutional guarantee of legal abortion that spurred much of the federal action last year. Shortly after the Dobbs ruling, President Biden issued an executive order to protect access to reproductive health care; it included instructions for the FTC to take steps preventing information about visits to doctor’s offices or abortion clinics from being sold to law enforcement agencies or state prosecutors.

The new enforcers

With Donald Trump taking office in January, and Republicans taking control of both houses of Congress, the fate of the CFPB’s proposed rule—and the CFPB itself—is uncertain. Republicans, the people behind Project 2025, and Elon Musk (who will lead the newly created advisory group known as the Department of Government Efficiency) have long been interested in seeing the bureau “deleted,” as Musk put it on X. That would take an act of Congress, making it unlikely, but there are other ways that the administration could severely curtail its powers. Trump is likely to fire the current director and install a Republican who could rescind existing CFPB rules and stop any proposed rules from moving forward. 

Meanwhile, the FTC’s enforcement actions are only as good as the enforcers. FTC decisions do not set legal precedent in quite the same way that court cases do, says Ben Winters, a former Department of Justice official and the director of AI and privacy at the Consumer Federation of America, a network of organizations and agencies focused on consumer protection. Instead, they “require consistent [and] additional enforcement to make the whole industry scared of not having an FTC enforcement action against them.” (It’s also worth noting that these FTC settlements are specifically focused on geolocation data, which is just one of the many types of sensitive data that we regularly give up in order to participate in the digital world.)

Looking ahead, Tiffany Li, a professor at the University of San Francisco School of Law who focuses on AI and privacy law, is worried about “a defanged FTC” that she says would be “less aggressive in taking action against companies.” 

Lina Khan, the current FTC chair, has been the leader of privacy protection action in the US, notes Li, and she’ll soon be leaving. Andrew Ferguson, Trump’s recently named pick to be the next FTC chair, has come out in strong opposition to data brokers: “This type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs,” he wrote in a statement on the Mobilewalla decision, indicating that he is likely to continue action against them. (Ferguson has been serving as a commissioner on the FTC since April 2024.) On the other hand, he has spoken out against using FTC actions as an alternative to privacy legislation passed by Congress. And, of course, this brings us right back around to that other major roadblock: Congress has so far failed to pass such laws—and it’s unclear if the next Congress will either. 

Movement in the states

Without federal legislative action, many US states are taking privacy matters into their own hands. 

In 2025, eight new state privacy laws will take effect, making a total of 25 around the country. A number of other states—like Vermont and Massachusetts—are considering passing their own privacy bills next year, and such laws could, in theory, force national legislation, says Woodrow Hartzog, a technology law scholar at Boston University School of Law. “Right now, the statutes are all similar enough that the compliance cost is perhaps expensive but manageable,” he explains. But if one state passed a law that was different enough from the others, a national law could be the only way to resolve the conflict. Additionally, four states—California, Texas, Vermont, and Oregon—already have specific laws regulating data brokers, including the requirement that they register with the state. 

Along with new laws, says Justin Brookman, the director of technology policy at Consumer Reports, comes the possibility that “we can put some more teeth on these laws.” 

Brookman points to Texas, where some of the most aggressive enforcement action at the state level has taken place under its Republican attorney general, Ken Paxton. Even before the state’s new consumer privacy bill went into effect in July, Paxton announced the creation of a special task force focused on enforcing the state’s privacy laws. He has since targeted a number of data brokers—including National Public Data, which exposed millions of sensitive customer records in a data breach in August, as well as companies that sell to them, like Sirius XM. 

At the same time, though, Paxton has moved to enforce the state’s strict abortion laws in ways that threaten individual privacy. In December, he sued a New York doctor for sending abortion pills to a Texas woman through the mail. While the doctor is theoretically protected by New York’s shield laws, which provide a safeguard from out-of-state prosecution, Paxton’s aggressive action makes it even more crucial that states enshrine data privacy protections into their laws, says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, an advocacy group. “There is an urgent need for states,” he says, “to lock down our residents’ data, barring companies from collecting and sharing information in ways that can be weaponized against them by out-of-state prosecutors.” 

Data collection in the name of “security”

While privacy has become a bipartisan issue, Republicans, in particular, are interested in “addressing data brokers in the context of national security,” such as protecting the data of military members or other government officials, says Winters. But in his view, it’s the effects on reproductive rights and immigrants that are potentially the “most dangerous” threats to privacy. 

Indeed, data brokers (including Venntel, the Gravy Analytics subsidiary named in the recent FTC settlement) have sold cell-phone data to Immigration and Customs Enforcement, as well as to Customs and Border Protection. That data has then been used to track individuals for deportation proceedings—allowing the agencies to bypass local and state sanctuary laws that ban local law enforcement from sharing information for immigration enforcement. 

“The more data that corporations collect, the more data that’s available to governments for surveillance,” warns Ashley Gorski, a senior attorney who works on national security and privacy at the American Civil Liberties Union.

The ACLU is among a number of organizations that have been pushing for the passage of another federal law related to privacy: the Fourth Amendment Is Not For Sale Act. It would close the so-called “data-broker loophole” that allows law enforcement and intelligence agencies to buy personal information from data brokers without a search warrant. The bill would “dramatically limit the ability of the government to buy Americans’ private data,” Gorski says. It was first introduced in 2021 and passed the House in April 2024, with the support of 123 Republicans and 93 Democrats, before stalling in the Senate. 

While Gorski is hopeful that the bill will move forward in the next Congress, others are less sanguine about these prospects—and alarmed about other ways that the incoming administration might “co-opt private systems for surveillance purposes,” as Hartzog puts it. So much of our personal information that is “collected for one purpose,” he says, could “easily be used by the government … to track us.” 

This is especially concerning, adds Winters, given that the next administration has been “very explicit” about wanting to use every tool at its disposal to carry out policies like mass deportations and to exact revenge on perceived enemies. And one possible change, he says, is as simple as loosening the government’s procurement processes to make them more open to emerging technologies, which may have fewer privacy protections. “Right now, it’s annoying to procure anything as a federal agency,” he says, but he expects a more “fast and loose use of commercial tools.” 

“That’s something we’ve [already] seen a lot,” he adds, pointing to “federal, state, and local agencies using the Clearviews of the world”—a reference to the controversial facial recognition company. 

The AI wild card

Underlying all of these debates on potential legislation is the fact that technology companies—especially AI companies—continue to require reams and reams of data, including personal data, to train their machine-learning models. And they’re quickly running out of it. 

This is something of a wild card in any predictions about personal data. Ideally, says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, the shortage would lead to ways for consumers to directly benefit, perhaps financially, from the value of their own data. But it’s more likely that “there will be more industry resistance against some of the proposed comprehensive federal privacy legislation bills,” she says. “Companies benefit from the status quo.” 

The hunt for more and more data may also push companies to change their own privacy policies, says Whitney Merrill, a former FTC official who works on data privacy at Asana. Speaking in a personal capacity, she says that companies “have felt the squeeze in the tech recession that we’re in, with the high interest rates,” and that under those circumstances, “we’ve seen people turn around, change their policies, and try to monetize their data in an AI world”—even if it’s at the expense of user privacy. She points to the $60-million-per-year deal that Reddit struck last year to license its content to Google to help train the company’s AI. 

Earlier this year, the FTC warned companies that it would be “unfair and deceptive” to “surreptitiously” change their privacy policies to allow for the use of user data to train AI. But again, whether or not officials follow up on this depends on those in charge. 

So what will privacy look like in 2025? 

While the recent FTC settlements and the CFPB’s proposed rule represent important steps forward in privacy protection—at least when it comes to geolocation data—Americans’ personal information still remains widely available and vulnerable. 

Rebecca Williams, a senior strategist at the ACLU for privacy and data governance, argues that all of us, as individuals and communities, should take it upon ourselves to do more to protect ourselves and “resist … by opting out” of as much data collection as possible. That means checking privacy settings on accounts and apps, and using encrypted messaging services. 

Cahn, meanwhile, says he’ll “be striving to protect [his] local community, working to enact safeguards to ensure that we live up to our principles and stated commitments.” One example of such safeguards is a proposed New York City ordinance that would ban the sharing of any location data originating from within the city limits. Hartzog says that kind of local activism has already been effective in pushing for city bans on facial recognition. 

“Privacy rights are at risk, but they’re not gone, and it’s not helpful to take an overly pessimistic look right now,” says Li, the USF law professor. “We definitely still have privacy rights, and the more that we continue to fight for these rights, the more we’re going to be able to protect our rights.”

Why EVs are (mostly) set for solid growth in 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

It looks as though 2025 will be a solid year for electric vehicles—at least outside the United States, where sales will depend on the incoming administration’s policy choices.

Globally, these cleaner cars and trucks will continue to eat into the market share of gas-guzzlers as costs decline, consumer options expand, and charging stations proliferate.

Despite all the hubbub about an EV slowdown last year, worldwide sales of battery EVs and plug-in hybrids likely hit a record high of nearly 17 million vehicles in 2024 and are expected to rise about 20% this year, according to the market research firm BloombergNEF. 

In addition, numerous automakers are preparing to deliver a variety of cheaper models to auto showrooms around the world. In turn, both the oil demand and the greenhouse-gas emissions stemming from vehicles on the roads are likely to peak over the next few years.

To be sure, the growth rate of EV sales has cooled, as consumers in many regions continue to wait for more affordable options and more convenient charging solutions. 

It also hasn’t helped that a handful of nations, like China, Germany, and New Zealand, have eased back the subsidies that were accelerating the rollout of low-emissions vehicles. And it certainly won’t do the sector any favors if President-elect Donald Trump follows through on his campaign pledges to eliminate government support for EVs and erect trade barriers that would raise the cost of producing or purchasing them.

Industry experts and climate scientists argue that the opposite should be happening right now. A critical piece of any realistic strategy to keep climate change in check is to fully supplant internal-combustion vehicles by around 2050. Without stricter mandates or more generous support for EVs, the world will not be on track to meet that goal, according to BloombergNEF and others. 

“We have to push the car companies—and we also have to help them with incentives, R&D, and infrastructure,” says Gil Tal, director of the EV Research Center at the University of California, Davis.

But ultimately, the fate of EV sales will depend on the particular dynamics within specific regions. Here’s a closer look at what’s likely to steer the sector in the world’s three largest markets: the US, the EU, and China.

United States

The US EV market will be a mess of contradictions.

On the one hand, companies are spending tens of billions of dollars to build or expand battery, EV, and charger manufacturing plants across America. Within the next few years, Honda intends to begin running assembly lines retooled to produce EVs in Ohio, Toyota plans to begin producing electric SUVs at its flagship plant in Kentucky, and GM expects to begin cranking out its revived Bolts in Kansas, with dozens of other facilities in planning or under construction.

All that promises to drive down the cost of cleaner vehicles, boost consumer options, create tens of thousands of jobs, and help US auto manufacturers catch up with overseas rivals that are speeding ahead in EV design, production, and innovation.

But it’s not clear that will necessarily translate into lower consumer prices, and thus greater demand, because Trump has pledged to unravel the key policies currently propelling the sector. 

His plans are reported to include rolling back the consumer tax credits of up to $7,500 included in President Joe Biden’s signature climate bill, the Inflation Reduction Act. He has also threatened to impose stiff tariffs on goods imported from Mexico, China, Canada, and other nations where many vehicles or parts are manufactured. 

Tal says those policy shifts could more than wipe out any cost reductions brought about as companies scale up production of EV components and vehicles domestically. Tighter trade restrictions could also make it that much harder for foreign companies producing cheaper models to break into the US market.

That matters because the single biggest holdup for American consumers is the lofty expense of EVs. The most affordable models still start at around $30,000 in the US, and many electric cars, trucks, and SUVs top $40,000. 

“There’s nothing available in the more affordable options,” says Bhuvan Atluri, associate director of research at the MIT Mobility Initiative. “And models that were promised are nowhere to be seen.” (MIT owns MIT Technology Review.)

Indeed, Elon Musk still has yet to deliver on his 18-year-old “master plan” to produce a mass-market-priced Tesla EV, most recently calling a $25,000 model “pointless.” 

As noted, there is a revamped Chevy Bolt on the way for US consumers, as well as a $25,000 Jeep. But the actual price tags won’t be clear until these vehicles hit dealerships and the Trump administration translates its campaign rhetoric into policies. 

European Union

The EV story across the European Union is likely to be considerably more upbeat in the year to come. That’s because carbon dioxide emissions standards for passenger vehicles are set to tighten, requiring automakers in member countries to reduce climate pollution across their fleets by 15% from 2021 levels. Under the EU’s climate plan, these targets become stricter every five years, with the goal of eliminating emissions from cars and trucks by 2035.

Automakers intend to introduce a number of affordable EV models in the coming months, timed deliberately to help the companies meet the new mandates, says Felipe Rodríguez, Europe deputy managing director at the International Council on Clean Transportation (ICCT).

Those lower-priced models include Volkswagen’s ID.2all hatchback ($26,000) and the Fiat Panda EV ($28,500), among others.

On average, manufacturers will need to boost the share of battery-electric vehicles from 16% of total sales in 2023 to around 28% in order to meet the goal, according to the ICCT. Some European car companies are raising their prices for combustion vehicles and cutting the price tag on existing EVs to help hit the targets. And predictably, some are also arguing for the European Commission to loosen the rules.

Sales trends in any given country will still depend on local conditions and policy decisions. One big question is whether a new set of tax incentives or additional policy changes will help Germany, Europe’s largest auto market, revive the growth of its EV sector. Sales tanked there last year, after the nation cut off subsidies at the end of 2023.

EVs now make up about 25% of new sales across the EU. The ICCT estimates that they’ll surpass combustion vehicles EU-wide around 2030, when the emissions rules are set to significantly tighten again.

China

After decades of strategic investments and targeted policies, China is now the dominant manufacturer of EVs as well as the world’s largest market. That’s not likely to change for the foreseeable future, no matter what trade barriers the US or other countries impose.

In October, the European Commission enacted sharply higher tariffs on China-built EVs, arguing that the country has provided unfair market advantages to its domestic companies. That followed the Biden administration’s decision last May to impose a 100% tariff on Chinese vehicles, citing unfair trade practices and intellectual-property theft.

Chinese officials, for their part, argue that their domestic companies have earned market advantages by producing affordable, high-quality electric vehicles. More than 60% of Chinese EVs are already cheaper than their combustion-engine counterparts, the International Energy Agency (IEA) estimates.

“The reality—and what makes this a difficult challenge—is that there is some truth in both perspectives,” writes Scott Kennedy, trustee chair in Chinese business and economics at the Center for Strategic and International Studies. 

These trade barriers have created significant risks for China’s EV makers, particularly coupled with the country’s sluggish economy, its glut of automotive production capacity, and the fact that most companies in the sector aren’t profitable. China also cut back subsidies for EVs at the end of 2022, replacing them with a policy that requires manufacturers to achieve fuel economy targets.

But the country has been intentionally diversifying its export markets for years and is well positioned to continue increasing its sales of electric cars and buses in countries across Southeast Asia, Latin America, and Europe, says Hui He, China regional director at the ICCT. There are also some indications that China and the EU could soon reach a compromise in their trade dispute.

Domestically, China is now looking to rural markets to boost growth for the industry. Officials have created purchase subsidies for residents in the countryside and called for the construction of more charging facilities.

By most estimates, China will continue to see solid growth in EV sales, putting nearly 50 million battery-electric and plug-in hybrid vehicles on the country’s roads by the end of this year.

What’s next for NASA’s giant moon rocket?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

NASA’s huge lunar rocket, the Space Launch System (SLS), might be in trouble. As rival launchers like SpaceX’s Starship gather pace, some are questioning the need for the US national space agency to have its own mega rocket at all—something that could become a focus of the incoming Trump administration, in which SpaceX CEO Elon Musk is set to play a key role.

“It’s absolutely in Elon Musk’s interest to convince the government to cancel SLS,” says Laura Forczyk from the US space consulting firm Astralytical. “However, it’s not up to him.”

SLS has been in development for more than a decade. The rocket is huge, 322 feet (98 meters) tall, and about 15% more powerful than the Saturn V rocket that took the Apollo astronauts to the moon in the 1960s and 70s. It is also expensive, costing an estimated $4.1 billion per launch.

It was designed with a clear purpose—returning astronauts to the moon’s surface. Built to launch NASA’s human-carrying Orion spacecraft, the rocket is a key part of the agency’s Artemis program to go back to the moon, started by the previous Trump administration in 2019. “It has an important role to play,” says Daniel Dumbacher, formerly a deputy associate administrator at NASA and part of the team that selected SLS for development in 2010. “The logic for SLS still holds up.”

The rocket has launched once already, on the Artemis I mission in 2022, a test flight that sent an uncrewed Orion spacecraft around the moon. Its next flight, Artemis II, earmarked for September 2025, will repeat that journey with a four-person crew aboard, ahead of the first lunar landing, Artemis III, currently set for September 2026.

SLS could launch missions to other destinations too. At one stage NASA intended to launch its Europa Clipper spacecraft to Jupiter’s moon Europa using SLS, but cost and delays saw the mission launch instead on a SpaceX Falcon Heavy rocket in October this year. It has also been touted to launch parts of NASA’s new lunar space station, Gateway, beginning in 2028. The station is currently in development.

NASA’s plan to return to the moon involves using SLS to launch astronauts to lunar orbit on Orion, where they will rendezvous with a separate lander to descend to the surface. At the moment that lander will be SpaceX’s Starship vehicle, a huge reusable shuttle intended to launch and land multiple times. Musk wants this rocket to one day take humans to Mars.

Starship is currently undergoing testing. Last month, it completed a stunning flight in which the lower half of the rocket, the Super Heavy booster, was caught by SpaceX’s “chopstick” launch tower in Boca Chica, Texas. The rocket is ultimately more powerful than SLS and designed to be entirely reusable, whereas NASA’s rocket is discarded into the ocean after each launch.

The success of Starship and the development of other large commercial rockets, such as the Jeff Bezos-owned firm Blue Origin’s New Glenn rocket, have raised questions about the need for SLS. In October, billionaire Michael Bloomberg called the rocket a “colossal waste of taxpayer money.” In November, journalist Eric Berger said there was at least a 50-50 chance the rocket would be canceled.

“I think it would be the right call,” says Abhishek Tripathi, a former mission director at SpaceX now at the University of California, Berkeley. “It’s hard to point to SLS as being necessary.”

The calculations are not straightforward, however. Dumbacher notes that while SpaceX is making “great progress” on Starship, there is much yet to do. The rocket will need to launch possibly up to 18 times to transfer fuel to a single lunar Starship in Earth orbit that can then make the journey to the moon. The first test of this fuel transfer is expected next year.

SLS, conversely, can send Orion to the moon in a single launch. That means the case for SLS is only diminished “if the price of 18 Starship launches is less than an SLS launch,” says Dumbacher. SpaceX was awarded $2.9 billion by NASA in 2021 for the first Starship mission to the moon on Artemis III, but the exact cost per launch is unknown.
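
As a rough, back-of-the-envelope illustration (using the estimated $4.1 billion per SLS launch cited above, treating the 18-flight figure as an upper bound, and writing \(C_{\text{Starship}}\) as a placeholder for SpaceX’s undisclosed per-launch price), Dumbacher’s break-even condition works out to:

\[
18 \times C_{\text{Starship}} < \$4.1\ \text{billion}
\quad\Longrightarrow\quad
C_{\text{Starship}} < \frac{\$4.1\ \text{billion}}{18} \approx \$228\ \text{million per launch}
\]

In other words, Starship flights would need to average somewhere below roughly $230 million each for 18 of them to undercut a single SLS launch, a threshold that can’t be checked against figures SpaceX has not made public.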

The Artemis II Core Stage moves from final assembly to the VAB at NASA’s Michoud Assembly Facility in New Orleans, July 6, 2024.

MICHAEL DEMOCKER/NASA

NASA is also already developing hardware for future SLS launches. “All elements for the second SLS for Artemis II have been delivered,” a NASA spokesperson said in response to emailed questions, adding that SLS also has “hardware in production” for Artemis III, IV, and V.

“SLS can deliver more payload to the moon, in a single launch, than any other rocket,” NASA said. “The rocket is needed and designed to meet the agency’s lunar transportation requirements.”

Dumbacher points out that if the US wants to return to the moon before China sends humans there, which Beijing has said it will do by 2030, canceling SLS could be a setback. “Now is not the time to have a major relook at what’s the best rocket,” he says. “Every minute we delay, we are setting ourselves up for a situation where China will be putting people on the moon first.”

President-elect Donald Trump has given Musk a role in his incoming administration to slash public spending as part of the newly established Department of Government Efficiency. While the exact remit of this initiative is not yet clear, projects like SLS could be up for scrutiny.

Canceling SLS would require support from Congress, however, where Republicans will have only a slim majority. “SLS has been bipartisan and very popular,” says Forczyk, meaning it might be difficult to take any immediate action. “Money given to SLS is a benefit to taxpayers and voters in key congressional districts [where development of the rocket takes place],” says Forczyk. “We do not know how much influence Elon Musk will have.”

It seems likely the rocket will at least launch Artemis II next September, but beyond that there is more uncertainty. “The most logical course of action in my mind is to cancel SLS after Artemis III,” says Forczyk.

Such a scenario could have a broad impact on NASA that reaches beyond just SLS. Scrapping the rocket could open up wider discussions about NASA’s overall budget, currently set at $25.4 billion, the largest of any space agency in the world. That money funds a wide range of science, including astrophysics, astronomy, climate studies, and the exploration of the solar system.

“If you cancel SLS, you’re also canceling the broad support for NASA’s budget at its current level,” says Tripathi. “Once that budget gets slashed, it’s hard to imagine it’ll ever grow back to present levels. Be careful what you wish for.”

What’s next for drones

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Drones have been a mainstay technology among militaries, hobbyists, and first responders alike for more than a decade, and in that time the range available has skyrocketed. No longer limited to small quadcopters with insufficient battery life, drones are aiding search and rescue efforts, reshaping wars in Ukraine and Gaza, and delivering time-sensitive packages of medical supplies. And billions of dollars are being plowed into building the next generation of fully autonomous systems. 

These developments raise a number of questions: Are drones safe enough to be flown in dense neighborhoods and cities? Is it a violation of people’s privacy for police to fly drones overhead at an event or protest? Who decides what level of drone autonomy is acceptable in a war zone?

Those questions are no longer hypothetical. Advancements in drone technology and sensors, falling prices, and easing regulations are making drones cheaper, faster, and more capable than ever. Here’s a look at four of the biggest changes coming to drone technology in the near future.

Police drone fleets

Today more than 1,500 US police departments have drone programs, according to tracking conducted by the Atlas of Surveillance. Trained police pilots use drones for search and rescue operations, monitoring events and crowds, and other purposes. The Scottsdale Police Department in Arizona, for example, successfully used a drone to locate a lost elderly man with dementia, says Rich Slavin, Scottsdale’s assistant chief of police. The department’s experience with drones to date has been useful but limited, he says; its pilots have often been hamstrung by the “line of sight” rule from the Federal Aviation Administration (FAA). The rule stipulates that pilots must be able to see their drones at all times, which severely limits a drone’s range.

Soon, that will change. On a rooftop somewhere in the city, Scottsdale police will in the coming months install a new police drone capable of autonomous takeoff, flight, and landing. Slavin says the department is seeking a waiver from the FAA to be able to fly its drone past the line of sight. (Hundreds of police agencies have received a waiver from the FAA since the first was granted in 2019.) The drone, which can fly up to 57 miles per hour, will go on missions as far as three miles from its docking station, and the department says it will be used for things like tracking suspects or providing a visual feed of an officer at a traffic stop who is waiting for backup. 

“The FAA has been much more progressive in how we’re moving into this space,” Slavin says. That could mean that around the country, the sight (and sound) of a police drone soaring overhead will become much more common. 

The Scottsdale department says the drone, which it is purchasing from Aerodome, will kick off its drone-as-first-responder program and will play a role in the department’s new “real-time crime center.” These sorts of centers are becoming increasingly common in US policing, and allow cities to connect cameras, license plate readers, drones, and other monitoring methods to track situations on the fly. The rise of the centers, and their associated reliance on drones, has drawn criticism from privacy advocates who say they conduct a great deal of surveillance with little transparency about how footage from drones and other sources will be used or shared. 

In 2019, the police department in Chula Vista, California, was the first to receive a waiver from the FAA to fly beyond line of sight. The program sparked criticism from members of the community who alleged the department was not transparent about the footage it collected or how it would be used. 

Jay Stanley, a senior policy analyst at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, says the waivers exacerbate existing privacy issues related to drones. If the FAA continues to grant them, police departments will be able to cover far more of a city with drones than ever, all while the legal landscape is murky about whether this would constitute an invasion of privacy. 

“If there’s an accumulation of different uses of this technology, we’re going to end up in a world where from the moment you step out of your front door, you’re going to feel as though you’re under the constant eye of law enforcement from the sky,” he says. “It may have some real benefits, but it is also in dire need of strong checks and balances.”

Scottsdale police say the drone could be used in a variety of scenarios, such as responding to a burglary in progress or tracking a driver with a suspected connection to a kidnapping. But the real benefit, Slavin says, will come from pairing it with other existing technologies, like automatic license plate readers and hundreds of cameras placed around the city. “It can get to places very, very quickly,” he says. “It gives us real-time intelligence and helps us respond faster and smarter.”

While police departments might indeed benefit from drones in those situations, Stanley says the ACLU has found that many deploy them for far more ordinary cases, like reports of a kid throwing a ball against a garage or of “suspicious persons” in an area.

“It raises the question about whether these programs will just end up being another way in which vulnerable communities are over-policed and nickeled and dimed by law enforcement agencies coming down on people for all kinds of minor transgressions,” he says.

Drone deliveries, again

Perhaps no drone technology is more overhyped than home deliveries. For years, tech companies have teased futuristic renderings of a drone dropping off a package on your doorstep just hours after you ordered it. But they’ve never managed to expand them much beyond small-scale pilot projects, at least in the US, again largely due to the FAA’s line of sight rules. 

But this year, regulatory changes are coming. Like police departments, Amazon’s Prime Air program was previously limited to flying its drones within the pilot’s line of sight. That’s because drone pilots don’t have radar, air traffic controllers, or any of the other systems commercial flight relies on to monitor airways and keep them safe. To compensate, Amazon spent years developing an onboard system that would allow its drones to detect nearby objects and avoid collisions. The company says it showed the FAA in demonstrations that its drones could fly safely in the same airspace as helicopters, planes, and hot air balloons. 

In May, Amazon announced the FAA had granted the company a waiver and permission to expand operations in Texas, more than a decade after the Prime Air project started. And in July, the FAA cleared one more roadblock by allowing two companies—Zipline as well as Google’s Wing Aviation—to fly in the same airspace simultaneously without the need for visual observers. 

While all this means your chances of receiving a package via drone have ticked up ever so slightly, the more compelling use case might be medical deliveries. Shakiba Enayati, an assistant professor of supply chains at the University of Missouri–St. Louis, has spent years researching how drones could conduct last-mile deliveries of vaccines, antivenom, organs, and blood in remote places. She says her studies have found drones to be game changers for getting medical supplies to underserved populations, and if the FAA extends these regulatory changes, it could have a real impact. 

That’s especially true in the steps leading up to an organ transplant, she says. Before an organ can be transported to a recipient, a series of blood tests must be sent back and forth to make sure the recipient can accept it, which takes time if the blood is being transferred by car or even helicopter. “In these cases, the clock is ticking,” Enayati says. If drones were allowed to be used in this step at scale, it would be a significant improvement.

“If the technology is supporting the needs of organ delivery, it’s going to make a big change in such an important arena,” she says.

That development could come sooner than using drones for delivery of the actual organs, which have to be transported under very tightly controlled conditions to preserve them.

Domesticating the drone supply chain

Signed into law last December, the American Security Drone Act bars federal agencies from buying drones from countries thought to pose a threat to US national security, such as Russia and China. That’s significant. China is the undisputed leader when it comes to manufacturing drones and drone parts: over 90% of law enforcement drones in the US are made by Shenzhen-based DJI, and many of the drones used by both sides in the war in Ukraine come from Chinese companies. 

The American Security Drone Act is part of an effort to curb that reliance on China. (Meanwhile, China is stepping up export restrictions on drones with military uses.) As part of the act, the US Department of Defense’s Defense Innovation Unit has created the Blue UAS Cleared List, a list of drones and parts the agency has investigated and approved for purchase. The list applies to federal agencies as well as programs that receive federal funding, which often means state police departments or other non-federal agencies. 

Since the US is set to spend such significant sums on drones—with $1 billion earmarked for the Department of Defense’s Replicator initiative alone—getting on the Blue List is a big deal. It means those federal agencies can make large purchases with little red tape. 

Allan Evans, CEO of US-based drone part maker Unusual Machine, says the list has sparked a significant rush of drone companies attempting to conform to the US standards. His company manufactures a first-person view flight controller that he hopes will become the first of its kind to be approved for the Blue List.

The American Security Drone Act is unlikely to affect private purchases in the US of drones used by videographers, drone racers, or hobbyists, which will overwhelmingly still be made by China-based companies like DJI. That means US-based drone companies, at least in the short term, will survive only by catering to the US defense market.  

“Basically any US company that isn’t willing to have ancillary involvement in defense work will lose,” Evans says. 

The coming months will show the law’s true impact: Because the US fiscal year ends in September, Evans says he expects to see a host of agencies spending their use-it-or-lose-it funding on US-made drones and drone components in the next month. “That will indicate whether the marketplace is real or not, and how much money is actually being put toward it,” he says.

Autonomous weapons in Ukraine

The drone war in Ukraine has largely been one of attrition. Drones have been used extensively for surveying damage, finding and tracking targets, or dropping weapons since the war began, but on average these quadcopter drones last just three flights before being shot down or rendered unnavigable by GPS jamming. As a result, both Ukraine and Russia prioritized accumulating high volumes of drones with the expectation that they wouldn’t last long in battle. 

Now they’re having to rethink that approach, according to Andriy Dovbenko, founder of the UK-Ukraine Tech Exchange, a nonprofit that helps startups involved in Ukraine’s war effort and eventual reconstruction raise capital. While working with drone makers in Ukraine, he says, he has seen the demand for technology shift from big shipments of simple commercial drones to a pressing need for drones that can navigate autonomously in an environment where GPS has been jammed. With 70% of the front lines suffering from jamming, according to Dovbenko, both Russian and Ukrainian drone investment is now focused on autonomous systems. 

That’s no small feat. Drone pilots usually rely on video feeds from the drone as well as GPS technology, neither of which is available in a jammed environment. Instead, autonomous drones operate with various types of sensors like LiDAR to navigate, though this can be tricky in fog or other inclement weather. Autonomous drones are a new and rapidly changing technology, still being tested by US-based companies like Shield AI. The evolving war in Ukraine is raising the stakes and the pressure to deploy affordable and reliable autonomous drones.  

The transition toward autonomous weapons also raises serious yet largely unanswered questions about how much humans should be taken out of the loop in decision-making. As the war rages on and the need for more capable weaponry rises, Ukraine will likely be the testing ground for whether and how the moral line is drawn. But Dovbenko says stopping to find that line during an ongoing war is impossible. 

“There is a moral question about how much autonomy you can give to the killing machine,” Dovbenko says. “This question is not being asked right now in Ukraine because it’s more of a matter of survival.”

What’s next for SpaceX’s Falcon 9

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

SpaceX’s Falcon 9 is one of the world’s safest, most productive rockets. But now it’s been grounded: A rare engine malfunction on July 11 prompted the US Federal Aviation Administration to initiate an investigation and halt all Falcon 9 flights until further notice. The incident has exposed the risks of the US aerospace industry’s heavy reliance on the rocket. 

“The aerospace industry is very dependent on the Falcon 9,” says Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics who issues regular reports on space launches. He says the Falcon 9 and the closely related Falcon Heavy represented 83% of US launches in 2023. “There’s a lot of traffic that’s going to be backed up waiting for it to return to flight,” he adds.

During a SpaceX livestream, ice could be seen accumulating on the Falcon 9’s engine following its launch from California’s Vandenberg Space Force Base en route to releasing 20 Starlink satellites. According to SpaceX, this buildup of ice caused a liquid oxygen leak. Then part of the engine failed, and the rocket dropped several satellites into a lower orbit than intended, one in which they could readily fall back into Earth’s atmosphere. 

By July 12, an FAA press statement was circulating on X. The federal agency said it was aware of the malfunction and would require an investigation. “A return to flight is based on the FAA determining that any system, process, or procedure related to the mishap does not affect public safety,” said the statement.

SpaceX says it will cooperate with the investigation. “SpaceX will perform a full investigation in coordination with the FAA, determine root cause, and make corrective actions to ensure the success of future missions,” says a statement on the company’s website. Details about what the investigation will entail and how long it might take are unknown. In the meantime, SpaceX has requested to keep flying the Falcon 9 while the investigation takes place. “The FAA is reviewing the request and will be guided by safety at every step of the process,” said the agency in a statement. 

Nominal failure

The Falcon 9 has an unusually clean safety record. It’s been launched more than 300 times since its maiden voyage in 2010 and has rarely failed. In 2020, the rocket was the first to launch under NASA’s Commercial Crew Program, which was designed to build the US’s commercial capacity for taking people, including astronauts, into orbit. 

According to MIT aerospace engineer Paulo Lozano, part of the Falcon 9’s success is due to advances in rocket engines. Exactly how SpaceX incorporates these new technologies is unclear, and Lozano notes that SpaceX is quite secretive about the manufacturing process. But it is known that SpaceX uses additive manufacturing to build some engine components. This makes it possible to create parts with complex geometries (for example, hollow—and thus lighter-weight—turbine blades) that enhance performance. And, according to Lozano, artificial intelligence has made diagnosing engine health faster and more accurate. Parts of the rocket are also reusable, which keeps costs low.  

With such a successful track record, the Falcon 9 malfunction might seem surprising. But, Lozano says, anomalies are to be expected when it comes to rocket engines. That’s because they operate in harsh environments where they’re subjected to extreme temperatures and pressures. This makes it difficult for engineers to manufacture a rocket as reliable as a commercial airplane.

“These engines produce more power than small cities, and they work in stressful conditions,” says Lozano. “It’s very hard to contain them.” 

What exactly went wrong last week remains a mystery. Still, experts agree the event can’t be brushed off. “‘Oh, it was a fluke’ is not, in the modern space industry, an acceptable answer,” says McDowell. What he finds most surprising is that the malfunction didn’t occur in one of the reusable parts of the rocket (like the booster), but instead in a part known as the second stage, which SpaceX switches out each time the rocket launches. 

Stalled schedules

It remains unclear when the Falcon 9 will fly again. Several upcoming missions will likely be postponed, including the billionaire tech entrepreneur Jared Isaacman’s Polaris Dawn, which would have been the first all-private mission to include a space walk. It’s possible NASA’s SpaceX Crew-9 mission to the International Space Station (ISS), planned for mid-August 2024, will also be delayed. 

Uncrewed missions will be affected too. One that stands out is the Europa Clipper mission, which is intended to explore Jupiter’s icy moon and assess its habitability. According to McDowell, the mission, which is planned for October 2024, will likely be delayed by the Falcon 9 grounding. That’s because there is a narrow time frame within which the satellite can be launched. (The mission is facing a technological hangup unrelated to the Falcon 9 that could also push back its launch.) 

The incident reveals a need for the US to explore alternatives to the Falcon 9. McDowell says the United Launch Alliance’s Atlas V rocket, accompanied by Boeing’s Starliner capsule, used to be the next best option for US-based crewed ISS missions. But the Atlas V is being phased out. It will be replaced by the ULA’s Vulcan Centaur, a partially reusable rocket that has made only one test flight so far. Plus, the Starliner capsule has serious issues that have left two NASA astronauts stuck at the ISS, potentially until August. 

Blue Origin’s reusable New Glenn rocket could be a competitor, but it hasn’t flown yet. The aerospace company says it hopes to launch the rocket before 2025. Blue Origin’s other reusable rocket, New Shepard, is not capable of flying into orbit. 

The Falcon 9 malfunction makes these projects all the more essential. “Even the Falcon 9 can have problems,” says McDowell. “It’s important to have multiple routes of access to space.”