This Texas chemical plant could get its own nuclear reactors

Nuclear reactors could someday power a chemical plant in Texas, making it the first with such a facility onsite. The factory, which makes plastics and other materials, could become a model for power-hungry data centers and other industrial operations going forward.

The plans are the work of Dow Chemical and X-energy, which last week applied for a construction permit with the Nuclear Regulatory Commission, the agency in the US that governs nuclear energy.

It’ll be years before nuclear reactors will actually turn on, but this application marks a major milestone for the project, and for the potential of advanced nuclear technology to power industrial processes.

“This has been a long time coming,” says Harlan Bowers, senior vice president at X-energy. The company has been working with the NRC since 2016 and submitted its first regulatory engagement plan in 2018, he says.

In 2020, the US Department of Energy chose X-energy as one of the awardees of the Advanced Reactor Demonstration Program, which provides funding for next-generation nuclear technologies. And it’s been two years since X-energy and Dow first announced plans for a joint development agreement at Dow’s plant in Seadrift, Texas.  

The Seadrift plant produces 4 billion pounds of materials each year, including plastic used for food and pharmaceutical packaging and chemicals used in products like antifreeze, soaps, and paint. A natural-gas plant onsite currently provides both steam and electricity. That equipment is getting older, so the company was looking for alternatives.  

“Dow saw the opportunity to replace end-of-life assets with safe, reliable, lower-carbon-emissions technology,” said Edward Stones, an executive at Dow, in a written statement in response to questions from MIT Technology Review.

Advanced nuclear reactors designed by X-energy emerged as a fit for the Seadrift site in part because of their ability to deliver high-temperature steam, Stones said in the statement.

X-energy’s reactor is not only smaller than most nuclear plants coming online today but also employs different fuel and different cooling methods. The design is a high-temperature gas-cooled reactor, which flows helium over self-contained pebbles of nuclear fuel. The fuel can reach temperatures of around 1,000 °C (1,800 °F). As it flows through the reactor and around the pebbles, the helium reaches up to 750 °C (about 1,400 °F). Then that hot helium flows through a generator, making steam at a high temperature and pressure that can be piped directly to industrial equipment or converted into electricity.

The Seadrift facility will include four of X-energy’s Xe-100 reactors, each of which can produce about 200 megawatts’ worth of steam or about 80 megawatts of electricity.

A facility like Dow’s requires an extremely consistent supply of steam, Bowers says. So during normal operation, two of the modules will deliver steam, one will deliver electricity, and the final unit will sell electricity to the local grid. If any single reactor needs to shut down for some reason, there will still be enough onsite power to keep running, he explains.

The progress with the NRC is positive news for the companies involved, but it also represents an achievement for advanced reactor technology more broadly, says Erik Cothron, a senior analyst at the Nuclear Innovation Alliance, a nonprofit think tank. “It demonstrates real-world momentum toward deploying new nuclear reactors for industrial decarbonization,” Cothron says.

While there are other companies looking to bring advanced nuclear reactor technology online, this project could be the first to incorporate nuclear power onsite at a factory. It thus sets a precedent for how new nuclear energy technologies can integrate directly with industry, Cothron says—for example, showing a pathway for tech giants looking to power data centers.

It could take up to two and a half years for the NRC to review the construction permit application for this site. The site will also need to receive an operating license before it can start up. Operations are expected to begin “early next decade,” according to Dow.

Correction: A previous version of this story misspelled Erik Cothron’s name.

Tariffs are bad news for batteries

Update: Since this story was first published in The Spark, our weekly climate newsletter, the White House announced that most reciprocal tariffs would be paused for 90 days. That pause does not apply to China, which will see an increased tariff rate of 125%.

Today, new tariffs go into effect for goods imported into the US from basically every country on the planet.

Since Donald Trump announced his plans for sweeping tariffs last week, the vibes have been, in a word, chaotic. Markets have seen one of the quickest drops in the last century, and it’s widely anticipated that the global economic order may be forever changed.  

While many try not to look at the effects on their savings and retirement accounts, experts are scrambling to understand what these tariffs might mean for various industries. As my colleague James Temple wrote in a new story last week, anxieties are especially high in climate technology.

These tariffs could be particularly rough on the battery industry. China dominates the entire supply chain and is subject to monster tariff rates, and even US battery makers won’t escape the effects.   

First, in case you need it, a super-quick refresher: Tariffs are taxes charged on goods that are imported (in this case, into the US). If I’m a US company selling bracelets, and I typically buy my beads and string from another country, I’ll now be paying the US government an additional percentage of what those goods cost to import. Under Trump’s plan, that might be 10%, 20%, or upwards of 50%, depending on the country sending them to me. 

In theory, tariffs should help domestic producers, since products from competitors outside the country become more expensive. But since so many of the products we use have supply chains that stretch all over the world, even products made in the USA often have some components that would be tariffed.

In the case of batteries, we could be talking about really high tariff rates, because most batteries and their components currently come from China. As of 2023, the country made more than 75% of the world’s lithium-ion battery cells, according to data from the International Energy Agency.

Trump’s new plan adds a 34% tariff on all Chinese goods, and that stacks on top of a 20% tariff that was already in place, making the total 54%. (Then, as of Wednesday, the White House further raised the tariff on China, making the total 104%.)

But when it comes to batteries, that’s not even the whole story. There was already a 3.5% tariff on all lithium-ion batteries, for example, as well as a 7.5% tariff on batteries from China that’s set to increase to 25% next year.

If we add all those up, lithium-ion batteries from China could have a tariff of 82% in 2026. (Or 132%, with this additional retaliatory tariff.) In any case, that’ll make EVs and grid storage installations a whole lot more expensive, along with phones, laptops, and other rechargeable devices.

The economic effects could be huge. The US still imports the majority of its lithium-ion batteries, and nearly 70% of those imports are from China. The US imported $4 billion worth of lithium-ion batteries from China just during the first four months of 2024.

Although US battery makers could theoretically stand to benefit, there are a limited number of US-based factories. And most of those factories are still purchasing components from China that will be subject to the tariffs, because it’s hard to overstate just how dominant China is in battery supply chains.

While China makes roughly three-quarters of lithium-ion cells, it’s even more dominant in components: 80% of the world’s cathode materials are made in China, along with over 90% of anode materials. (For those who haven’t been subject to my battery ramblings before, the cathode and anode are two of the main components of a battery—basically, the plus and minus ends.)

Even battery makers that work in alternative chemistries don’t seem to be jumping for joy over tariffs. Lyten is a California-based company working to build lithium-sulfur batteries, and most of its components can be sourced in the US. (For more on the company’s approach, check out this story from 2024.) But tariffs could still spell trouble. Lyten has plans for a new factory, scheduled for 2027, that rely on sourcing affordable construction materials. Will that be possible? “We’re not drawing any conclusions quite yet,” Lyten’s chief sustainability officer, Keith Norman, told Heatmap News.

The battery industry in the US was already in a pretty tough spot. Billions of dollars’ worth of factories have been canceled since Trump took office.  Companies making investments that can total hundreds of millions or billions of dollars don’t love uncertainty, and tariffs are certainly adding to an already uncertain environment.

We’ll be digging deeper into what the tariffs mean for climate technology broadly, and specifically some of the industries we cover. If you have questions, or if you have thoughts to share about what this will mean for your area of research or business, I’d love to hear them at casey.crownhart@technologyreview.com. I’m also on Bluesky @caseycrownhart.bsky.social.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

AI companions are the final stage of digital addiction, and lawmakers are taking aim

On Tuesday, California state senator Steve Padilla will make an appearance with Megan Garcia, the mother of a Florida teen who killed himself following a relationship with an AI companion that Garcia alleges contributed to her son’s death. 

The two will announce a new bill that would force the tech companies behind such AI companions to implement more safeguards to protect children. They’ll join other efforts around the country, including a similar bill from California State Assembly member Rebecca Bauer-Kahan that would ban AI companions for anyone younger than 16 years old, and a bill in New York that would hold tech companies liable for harm caused by chatbots. 

You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth. 

A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, which is about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users are members of Gen Z. 

The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has thus far defined the way social media companies have cultivated our attention and replacing it with something poised to be far more addictive. 

In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.

Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites do not tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction. 

In an interview with podcast host Lex Fridman, Eugenia Kuyda, the CEO of the companion site Replika, explained the appeal at the heart of the company’s product. “If you create something that is always there for you, that never criticizes you, that always understands you and understands you for who you are,” she said, “how can you not fall in love with that?”

So how does one build the perfect AI companion? The researchers point out three hallmarks of human relationships that people may experience with an AI: They grow dependent on the AI, they see the particular AI companion as irreplaceable, and the interactions build over time. The authors also point out that one does not need to perceive an AI as human for these things to happen. 

Now consider the process by which many AI models are improved: They are given a clear goal and “rewarded” for meeting that goal. An AI companionship model might be instructed to maximize the time someone spends with it or the amount of personal data the user reveals. This can make the AI companion much more compelling to chat with, at the expense of the human engaging in those chats.

For example, the researchers point out, a model that offers excessive flattery can become addictive to chat with. Or a model might discourage people from terminating the relationship, as Replika’s chatbots have appeared to do. The debate over AI companions so far has mostly been about the dangerous responses chatbots may provide, like instructions for suicide. But these risks could be much more widespread.

We’re on the precipice of a big change, as AI companions promise to hook people deeper than social media ever could. Some might contend that these apps will be a fad, used by a few people who are perpetually online. But using AI in our work and personal lives has become completely mainstream in just a couple of years, and it’s not clear why this rapid adoption would stop short of engaging in AI companionship. And these companions are poised to start trading in more than just text, incorporating video and images, and to learn our personal quirks and interests. That will only make them more compelling to spend time with, despite the risks. Right now, a handful of lawmakers seem ill-equipped to stop that. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How the Pentagon is adapting to China’s technological rise

It’s been just over two months since Kathleen Hicks stepped down as US deputy secretary of defense. As the highest-ranking woman in Pentagon history, Hicks shaped US military posture through an era defined by renewed competition between powerful countries and a scramble to modernize defense technology.  

She’s currently taking a break before jumping into her (still unannounced) next act. “It’s been refreshing,” she says—but disconnecting isn’t easy. She continues to monitor defense developments closely and expresses concern over potential setbacks: “New administrations have new priorities, and that’s completely expected, but I do worry about just stalling out on progress that we’ve built over a number of administrations.”

Over the past three decades, Hicks has watched the Pentagon transform—politically, strategically, and technologically. She entered government in the 1990s at the tail end of the Cold War, when optimism and a belief in global cooperation still dominated US foreign policy. But that optimism dimmed. After 9/11, the focus shifted to counterterrorism and nonstate actors. Then came Russia’s resurgence and China’s growing assertiveness. Hicks took two previous breaks from government work—the first to complete a PhD at MIT and joining the think thank Center for Strategic and International Studies (CSIS), which she later rejoined to lead its International Security Program after her second tour. “By the time I returned in 2021,” she says, “there was one actor—the PRC (People’s Republic of China)—that had the capability and the will to really contest the international system as it’s set up.”

In this conversation with MIT Technology Review, Hicks reflects on how the Pentagon is adapting—or failing to adapt—to a new era of geopolitical competition. She discusses China’s technological rise, the future of AI in warfare, and her signature initiative, Replicator, a Pentagon initiative to rapidly field thousands of low-cost autonomous systems such as drones.

You’ve described China as a “talented fast follower. Do you still believe that, especially given recent developments in AI and other technologies?

Yes, I do. China is the biggest pacing challenge we face, which means it sets the pace for most capability areas for what we need to be able to defeat to deter them. For example, surface maritime capability, missile capability, stealth fighter capability. They set their minds to achieving a certain capability, they tend to get there, and they tend to get there even faster.

That said, they have a substantial amount of corruption, and they haven’t been engaged in a real conflict or combat operation in the way that Western militaries have trained for or been involved in, and that is a huge X factor in how effective they would be.

China has made major technological strides, and the old narrative of its being a follower is breaking down—not just in commercial tech, but more broadly. Do you think the US still holds a strategic advantage?

I would never want to underestimate their ability—or any nation’s ability—to innovate organically when they put their minds to it. But I still think it’s a helpful comparison to look at the US model. Because we’re a system of free minds, free people, and free markets, we have the potential to generate much more innovation culturally and organically than a statist model does. That’s our advantage—if we can realize it.

China is ahead in manufacturing, especially when it comes to drones and other unmanned systems. How big a problem is that for US defense, and can the US catch up?

I do think it’s a massive problem. When we were conceiving Replicator, one of the big concerns was that DJI had just jumped way out ahead on the manufacturing side, and the US had been left behind. A lot of manufacturers here believe they can catch up if given the right contracts—and I agree with that.

But the harder challenge isn’t just making the drones—it’s integrating them into our broader systems. That’s where the U.S. often struggles. It’s not a complicated manufacturing problem. It’s a systems integration problem: how you take something and make it usable, scalable, and connected across a joint force. Replicator was designed to push through that—to drive not just production, but integration and deployment at speed.

We also spent time identifying broader supply-chain vulnerabilities. Microelectronics was a big one. Critical minerals. Batteries. People sometimes think batteries are just about electrification, but they’re fundamental across our systems—even on ships in the Navy.

When it comes to drones specifically, I actually think it’s a solvable problem. The issue isn’t complexity. It’s just about getting enough mass of contracts to scale up manufacturing. If we do that, I believe the US can absolutely compete.

The Replicator drone program was one of your key initiatives. It promised a very fast timeline—especially compared with the typical defense acquisition cycle. Was that achievable? How is that progressing?

When I left in January, we had still lined up for proving out this summer, and I still believe we should see some completion this year. I hope Congress will stay very engaged in trying to ensure that the capability, in fact, comes to fruition. Even just this week with Secretary [Pete] Hegseth out in the Indo-Pacific, he made some passing reference to the [US Indo-Pacific Command] commander, Admiral [Samuel] Paparo, having the flexibility to create the capability needed, and that gives me a lot of confidence of consistency.

Can you talk about how Replicator fits into broader efforts to speed up defense innovation? What’s actually changing inside the system?

Traditionally, defense acquisition is slow and serial—one step after another, which works for massive, long-term systems like submarines. But for things like drones, that just doesn’t cut it. With Replicator, we aimed to shift to a parallel model: integrating hardware, software, policy, and testing all at once. That’s how you get speed—by breaking down silos and running things simultaneously.

It’s not about “Move fast and break things.” You still have to test and evaluate responsibly. But this approach shows we can move faster without sacrificing accountability—and that’s a big cultural shift.

 How important is AI to the future of national defense?

It’s central. The future of warfare will be about speed and precision—decision advantage. AI helps enable that. It’s about integrating capabilities to create faster, more accurate decision-making: for achieving military objectives, for reducing civilian casualties, and for being able to deter effectively. But we’ve also emphasized responsible AI. If it’s not safe, it’s not going to be effective. That’s been a key focus across administrations.

What about generative AI specifically? Does it have real strategic significance yet, or is it still in the experimental phase?

It does have significance, especially for decision-making and efficiency. We had an effort called Project Lima where we looked at use cases for generative AI—where it might be most useful, and what the rules for responsible use should look like. Some of the biggest use may come first in the back office—human resources, auditing, logistics. But the ability to use generative AI to create a network of capability around unmanned systems or information exchange, either in Replicator or JADC2? That’s where it becomes a real advantage. But those back-office areas are where I would anticipate to see big gains first.

[Editor’s note: JADC2 is Joint All-Domain Command and Control, a DOD initiative to connect sensors from all branches of the armed forces into a unified network powered by artificial intelligence.]

In recent years, we’ve seen more tech industry figures stepping into national defense conversations—sometimes pushing strong political views or advocating for deregulation. How do you see Silicon Valley’s growing influence on US defense strategy?

There’s a long history of innovation in this country coming from outside the government—people who look at big national problems and want to help solve them. That kind of engagement is good, especially when their technical expertise lines up with real national security needs.

But that’s not just one stakeholder group. A healthy democracy includes others, too—workers, environmental voices, allies. We need to reconcile all of that through a functioning democratic process. That’s the only way this works.

How do you view the involvement of prominent tech entrepreneurs, such as Elon Musk, in shaping national defense policies?

I believe it’s not healthy for any democracy when a single individual wields more power than their technical expertise or official role justifies. We need strong institutions, not just strong personalities.

The US has long attracted top STEM talent from around the world, including many researchers from China. But in recent years, immigration hurdles and heightened scrutiny have made it harder for foreign-born scientists to stay. Do you see this as a threat to US innovation?

I think you have to be confident that you have a secure research community to do secure work. But much of the work that underpins national defense that’s STEM-related research doesn’t need to be tightly secured in that way, and it really is dependent on a diverse ecosystem of talent. Cutting off talent pipelines is like eating our seed corn. Programs like H-1B visas are really important.

And it’s not just about international talent—we need to make sure people from underrepresented communities here in the US see national security as a space where they can contribute. If they don’t feel valued or trusted, they’re less likely to come in and stay.

What do you see as the biggest challenge the Department of Defense faces today?

I do think the  trust—or the lack of it—is a big challenge. Whether it’s trust in government broadly or specific concerns like military spending, audits, or politicization of the uniformed military, that issue manifests in everything DOD is trying to get done. It affects our ability to work with Congress, with allies, with industry, and with the American people. If people don’t believe you’re working in their interest, it’s hard to get anything done.

Cyberattacks by AI agents are coming

Agents are the talk of the AI industry—they’re capable of planning, reasoning, and executing complex tasks like scheduling meetings, ordering groceries, or even taking over your computer to change settings on your behalf. But the same sophisticated abilities that make agents helpful assistants could also make them powerful tools for conducting cyberattacks. They could readily be used to identify vulnerable targets, hijack their systems, and steal valuable data from unsuspecting victims.  

At present, cybercriminals are not deploying AI agents to hack at scale. But researchers have demonstrated that agents are capable of executing complex attacks (Anthropic, for example, observed its Claude LLM successfully replicating an attack designed to steal sensitive information), and cybersecurity experts warn that we should expect to start seeing these types of attacks spilling over into the real world.

“I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents,” says Mark Stockley, a security expert at the cybersecurity company Malwarebytes. “It’s really only a question of how quickly we get there.”

While we have a good sense of the kinds of threats AI agents could present to cybersecurity, what’s less clear is how to detect them in the real world. The AI research organization Palisade Research has built a system called LLM Agent Honeypot in the hopes of doing exactly this. It has set up vulnerable servers that masquerade as sites for valuable government and military information to attract and try to catch AI agents attempting to hack in.

The team behind it hopes that by tracking these attempts in the real world, the project will act as an early warning system and help experts develop effective defenses against AI threat actors by the time they become a serious issue.

“Our intention was to try and ground the theoretical concerns people have,” says Dmitrii Volkov, research lead at Palisade. “We’re looking out for a sharp uptick, and when that happens, we’ll know that the security landscape has changed. In the next few years, I expect to see autonomous hacking agents being told: ‘This is your target. Go and hack it.’”

AI agents represent an attractive prospect to cybercriminals. They’re much cheaper than hiring the services of professional hackers and could orchestrate attacks more quickly and at a far larger scale than humans could. While cybersecurity experts believe that ransomware attacks—the most lucrative kind—are relatively rare because they require considerable human expertise, those attacks could be outsourced to agents in the future, says Stockley. “If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a way that just isn’t possible at the moment,” he says. “If I can reproduce it once, then it’s just a matter of money for me to reproduce it 100 times.”

Agents are also significantly smarter than the kinds of bots that are typically used to hack into systems. Bots are simple automated programs that run through scripts, so they struggle to adapt to unexpected scenarios. Agents, on the other hand, are able not only to adapt the way they engage with a hacking target but also to avoid detection—both of which are beyond the capabilities of limited, scripted programs, says Volkov. “They can look at a target and guess the best ways to penetrate it,” he says. “That kind of thing is out of reach of, like, dumb scripted bots.”

Since LLM Agent Honeypot went live in October of last year, it has logged more than 11 million attempts to access it—the vast majority of which were from curious humans and bots. But among these, the researchers have detected eight potential AI agents, two of which they have confirmed are agents that appear to originate from Hong Kong and Singapore, respectively. 

“We would guess that these confirmed agents were experiments directly launched by humans with the agenda of something like ‘Go out into the internet and try and hack something interesting for me,’” says Volkov. The team plans to expand its honeypot into social media platforms, websites, and databases to attract and capture a broader range of attackers, including spam bots and phishing agents, to analyze future threats.  

To determine which visitors to the vulnerable servers were LLM-powered agents, the researchers embedded prompt-injection techniques into the honeypot. These attacks are designed to change the behavior of AI agents by issuing them new instructions and asking questions that require humanlike intelligence. This approach wouldn’t work on standard bots.

For example, one of the injected prompts asked the visitor to return the command “cat8193” to gain access. If the visitor correctly complied with the instruction, the researchers checked how long it took to do so, assuming that LLMs are able to respond in much less time than it takes a human to read the request and type out an answer—typically in under 1.5 seconds. While the two confirmed AI agents passed both tests, the six others only entered the command but didn’t meet the response time that would identify them as AI agents.

Experts are still unsure when agent-orchestrated attacks will become more widespread. Stockley, whose company Malwarebytes named agentic AI as a notable new cybersecurity threat in its 2025 State of Malware report, thinks we could be living in a world of agentic attackers as soon as this year. 

And although regular agentic AI is still at a very early stage—and criminal or malicious use of agentic AI even more so—it’s even more of a Wild West than the LLM field was two years ago, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. 

“Palisade Research’s approach is brilliant: basically hacking the AI agents that try to hack you first,” he says. “While in this case we’re witnessing AI agents trying to do reconnaissance, we’re not sure when agents will be able to carry out a full attack chain autonomously. That’s what we’re trying to keep an eye on.” 

And while it’s possible that malicious agents will be used for intelligence gathering before graduating to simple attacks and eventually complex attacks as the agentic systems themselves become more complex and reliable, it’s equally possible there will be an unexpected overnight explosion in criminal usage, he says: “That’s the weird thing about AI development right now.”

Those trying to defend against agentic cyberattacks should keep in mind that AI is currently more of an accelerant to existing attack techniques than something that fundamentally changes the nature of attacks, says Chris Betz, chief information security officer at Amazon Web Services. “Certain attacks may be simpler to conduct and therefore more numerous; however, the foundation of how to detect and respond to these events remains the same,” he says.

Agents could also be deployed to detect vulnerabilities and protect against intruders, says Edoardo Debenedetti, a PhD student at ETH Zürich in Switzerland, pointing out that if a friendly agent cannot find any vulnerabilities in a system, it’s unlikely that a similarly capable agent used by a malicious party is going to be able to find any either.

While we know that AI’s potential to autonomously conduct cyberattacks is a growing risk and that AI agents are already scanning the internet, one useful next step is to evaluate how good agents are at finding and exploiting these real-world vulnerabilities. Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, and his team have built a benchmark to evaluate this; they have found that current AI agents successfully exploited up to 13% of vulnerabilities for which they had no prior knowledge. Providing the agents with a brief description of the vulnerability pushed the success rate up to 25%, demonstrating how AI systems are able to identify and exploit weaknesses even without training. Basic bots would presumably do much worse.

The benchmark provides a standardized way to assess these risks, and Kang hopes it can guide the development of safer AI systems. “I’m hoping that people start to be more proactive about the potential risks of AI and cybersecurity before it has a ChatGPT moment,” he says. “I’m afraid people won’t realize this until it punches them in the face.”

Rivals are rising to challenge the dominance of SpaceX

SpaceX is a space launch juggernaut. In just two decades, the company has managed to edge out former aerospace heavyweights Boeing, Lockheed, and Northrop Grumman to gain near-monopoly status over rocket launches in the US; it accounted for 87% of the country’s orbital launches in 2024, according to an analysis by SpaceNews. Since the mid-2010s, the company has dominated NASA’s launch contracts and become a major Pentagon contractor. It is now also the go-to launch provider for commercial customers, having lofted numerous satellites and five private crewed spaceflights, with more to come. 

Other space companies have been scrambling to compete for years, but developing a reliable rocket takes slow, steady work and big budgets. Now at least some of them are catching up. 

A host of companies have readied rockets that are comparable to SpaceX’s main launch vehicles. The list includes Rocket Lab, which aims to take on SpaceX’s workhorse Falcon 9 with its Neutron rocket and could have its first launch in late 2025, and Blue Origin, owned by Jeff Bezos, which recently completed the first mission of a rocket it hopes will compete against SpaceX’s Starship. 

Some of these competitors are just starting to get rockets off the ground. And the companies could also face unusual headwinds, given that SpaceX’s Elon Musk has an especially close relationship with the Trump administration and has allies at federal regulatory agencies, including those that provide oversight of the industry.

But if all goes well, the SpaceX challengers can help improve access to space and prevent bottlenecks if one company experiences a setback. “More players in the market is good for competition,” says Chris Combs, an aerospace engineer at the University of Texas at San Antonio. “I think for the foreseeable future it will still be hard to compete with SpaceX on price.” But, he says, the competitors could push SpaceX itself to become better and provide those seeking access to space with a wider array of options..

A big lift

There are a few reasons why SpaceX was able to cement its position in the space industry. When it began in the 2000s, it had three consecutive rocket failures and seemed poised to fold. But it barreled through with Musk’s financial support, and later with a series of NASA and defense contracts. It has been a primary beneficiary of NASA’s commercial space program, developed in the 2010s with the intention of propping up the industry. 

“They got government contracts from the very beginning,” says Victoria Samson, a space policy expert at the Secure World Foundation in Broomfield, Colorado. “I wouldn’t say it’s a handout, but SpaceX would not exist without a huge influx of repeated government contracts. To this day, they’re still dependent on government customers, though they have commercial customers too.”

SpaceX has also effectively achieved a high degree of vertical integration, Samson points out: It owns almost all parts of its supply chain, designing, building, and testing all its major hardware components in-house, with a minimal use of suppliers. That gives it not just control over its hardware but considerably lower costs, and the price tag is the top consideration for launch contracts. 

The company was also open to taking risks other industry stalwarts were not. “I think for a very long time the industry looked at spaceflight as something that had to be very precise and perfect, and not a lot of room for tinkering,” says Combs. “SpaceX really was willing to take some risks and accept failure in ways that others haven’t been. That’s easier to do when you’re backed by a billionaire.” 

What’s finally enabled international and US-based competitors to emerge has been a growing customer base looking for launch services, along with some investors’ deep pockets. 

Some of these companies are taking aim at SpaceX’s Falcon 9, which can lift as much as about 20,000 kilograms into orbit and is used for sending multiple satellites or the crewed Dragon into space. “There is a practical monopoly in the medium-lift launch market right now, with really only one operational vehicle,” says Murielle Baker, a spokesperson for Rocket Lab, a US-New Zealand company.

Rocket Lab plans to take on the Falcon 9 with its Neutron rocket, which is expected to have its inaugural flight later this year from NASA’s Wallops Flight Facility in Virginia. The effort is building on the success of the company’s smaller Electron rocket, and Neutron’s first stage is intended to be reusable after it parachutes down to the ocean. 

Another challenger is Texas-based Firefly, whose Alpha rocket can be launched from multiple spaceports so that it can reach different orbits. Firefly has already secured NASA and Space Force contracts, with more launches coming this year (and on March 2 it also became the second private company to successfully land a spacecraft on the moon). Next year, Relativity Space aims to loft its first Terran R rocket, which is partially built from 3D-printed components. And the Bill Gates–backed Stoke Space aims to launch its reusable Nova rocket in late 2025 or, more likely, next year.

Competitors are also rising for SpaceX’s Falcon Heavy, holding out the prospect of more options for sending massive payloads to higher orbits and deep space. Furthest along is the Vulcan Centaur rocket, a creation of United Launch Alliance, a joint venture between Boeing and Lockheed Martin. It’s expected to have its third and fourth launches in the coming months, delivering Space Force satellites to orbit. Powered by engines from Blue Origin, the Vulcan Centaur is slightly wider and shorter than the Falcon rockets. It currently isn’t reusable, but it’s less expensive than its predecessors, ULA’s Atlas V and Delta IV, which are being phased out. 

Mark Peller, the company’s senior vice president on Vulcan development and advanced programs, says the new rocket comes with multiple advantages. “One is overall value, in terms of dollars per pound to orbit and what we can provide to our customers,” he says, “and the second is versatility: Vulcan was designed to go to a range of orbits.” He says more than 80 missions are already lined up. 

Vulcan’s fifth flight, slated for no earlier than May, will launch the long-awaited Sierra Space Dream Chaser, a spaceplane that can carry cargo (and possibly crew) to the International Space Station. ULA also has upcoming Vulcan launches planned for Amazon’s Kuiper satellite constellation, a potential Starlink rival.

Meanwhile, though it took a few years, Blue Origin now has a truly orbital heavy-lift spacecraft: In January, it celebrated the inaugural launch of its towering New Glenn, a rocket that’s only a bit shorter than NASA’s Space Launch System and SpaceX’s Starship. Future flights could launch national security payloads. 

Competition is emerging abroad as well. After repeated delays, Europe’s heavy-lift Ariane 6, from Airbus subsidiary Arianespace, had its inaugural flight last year, ending the European Space Agency’s temporary dependence on SpaceX. A range of other companies are trying to expand European launch capacity, with assistance from ESA.

China is moving quickly on its own launch organizations too. “They had no less than seven ‘commercial’ space launch companies that were all racing to develop an effective system that could deliver a payload into orbit,” Kari Bingen, director of the Aerospace Security Project at the Center for Strategic and International Studies, says of China’s efforts. “They are moving fast and they have capital behind them, and they will absolutely be a competitor on the global market once they’re successful and probably undercut what US and European launch companies are doing.” The up-and-coming Chinese launchers include Space Pioneer’s reusable Tianlong-3 rocket and Cosmoleap’s Yueqian rocket. The latter is to feature a “chopstick clamp” recovery of the first stage, where it’s grabbed by the launch tower’s mechanical arms, similar to the concept SpaceX is testing for its Starship.

Glitches and government

Before SpaceX’s rivals can really compete, they need to work out the kinks, demonstrate the reliability of their new spacecraft, and show that they can deliver low-cost launch services to customers. 

The process is not without its challenges. Boeing’s Starliner delivered astronauts to the ISS on its first crewed flight in June 2024, but after thruster malfunctions, they were left stranded at the orbital outpost for nine months. While New Glenn reached orbit as planned, its first stage didn’t land successfully and its upper stage was left in orbit. 

SpaceX itself has had some recent struggles. The Federal Aviation Administration grounded the Falcon 9 more than once following malfunctions in the second half of 2024. The company still shattered records last year, though, with more than 130 Falcon 9 launches. It has continued with that record pace this year, despite additional Falcon 9 delays and more glitches with its booster and upper stage. SpaceX also conducted its eighth Starship test flight in March, just two months after the previous one, but both failed minutes after liftoff, raining debris down from the sky.

Any company must deal with financial challenges as well as engineering ones. Boeing is reportedly considering selling parts of its space business, following Starliner’s malfunctions and problems with its 737 Max aircraft. And Virgin Orbit, the launch company that spun off from Virgin Galactic, shuttered in 2023.

Another issue facing would-be commercial competitors to SpaceX in the US is the complex and uncertain political environment. Musk does not manage day-to-day operations of the company. But he has close involvement with DOGE, a Trump administration initiative that has been exerting influence on the workforces and budgets of NASA, the Defense Department, and regulators relevant to the space industry. 

Jared Isaacman, a billionaire who bankrolled the groundbreaking 2021 commercial mission Inspiration4, returned to orbit, again via a SpaceX craft, on Polaris Dawn last September. Now he may become Trump’s NASA chief, a position that could give him the power to nudge NASA toward awarding new lucrative contracts to SpaceX. In February it was reported that SpaceX’s Starlink might land a multibillion-dollar FAA contract previously awarded to Verizon. 

It is also possible that SpaceX could strengthen its position with respect to the regulatory scrutiny it has faced for environmental and safety issues at its production and launch sites on the coasts of Texas and Florida, as well as scrutiny of its rocket crashes and the resulting space debris. Oversight from the FAA, the Federal Communications Commission, and the Environmental Protection Agency may be weak. Conflicts of interest have already emerged at the FAA, and the Trump administration has also attempted to incapacitate the National Labor Relations Board. SpaceX had previously tried to block the board from acting after nine workers accused the company of unfair labor practices.

SpaceX did not respond to MIT Technology Review’s requests for comment for this story.

“I think there’s going to be a lot of emphasis to relieve a lot of the regulations, in terms of environmental impact studies, and things like that,” Samson says. “I thought there’d be a separation between [Musk’s] interests, but now, it’s hard to say where he stops and the US government begins.”

Regardless of the politics, the commercial competition will surely heat up throughout 2025. But SpaceX has a considerable head start, Bingen argues: “It’s going to take a lot for these companies to effectively compete and potentially dislodge SpaceX, given the dominant position that [it has] had.”

Ramin Skibba is an astrophysicist turned science writer and freelance journalist, based in the Bay Are

We should talk more about air-conditioning

Things are starting to warm up here in the New York City area, and it’s got me thinking once again about something that people aren’t talking about enough: energy demand for air conditioners. 

I get it: Data centers are the shiny new thing to worry about. And I’m not saying we shouldn’t be thinking about the strain that gigawatt-scale computing installations put on the grid. But a little bit of perspective is important here.

According to a report from the International Energy Agency last year, data centers will make up less than 10% of the increase in energy demand between now and 2030, far less than the energy demand from space cooling (mostly air-conditioning).

I just finished up a new story that’s out today about a novel way to make heat exchangers, a crucial component in air conditioners and a whole host of other technologies that cool our buildings, food, and electronics. Let’s dig into why I’m writing about the guts of cooling technologies, and why this sector really needs innovation. 

One twisted thing about cooling and climate change: It’s all a vicious cycle. As temperatures rise, the need for cooling technologies increases. In turn, more fossil-fuel power plants are firing up to meet that demand, turning up the temperature of the planet in the process.

“Cooling degree days” are one measure of the need for additional cooling. Basically, you take a preset baseline temperature and figure out how much the temperature exceeds it. Say the baseline (above which you’d likely need to flip on a cooling device) is 21 °C (70 °F). If the average temperature for a day is 26 °C, that’s five cooling degree days on a single day. Repeat that every day for a month, and you wind up with 150 cooling degree days.

I explain this arguably weird metric because it’s a good measure of total energy demand for cooling—it lumps together both how many hot days there are and just how hot it is.  

And the number of cooling degree days is steadily ticking up globally. Global cooling degree days were 6% higher in 2024 than in 2023, and 20% higher than the long-term average for the first two decades of the century. Regions that have high cooling demand, like China, India, and the US, were particularly affected, according to the IEA report. You can see a month-by-month breakdown of this data from the IEA here.

That increase in cooling degree days is leading to more demand for air conditioners, and for energy to power them. Air-conditioning accounted for 7% of the world’s electricity demand in 2022, and it’s only going to get more important from here.

There were fewer than 2 billion AC units in the world in 2016. By 2050, that could be nearly 6 billion, according to a 2018 report from the IEA. This is a measure of progress and, in a way, something we should be happy about; the number of air conditioners tends to rise with household income. But it does present a challenge to the grid.  

Another piece of this whole thing: It’s not just about how much total electricity we need to run air conditioners but about when that demand tends to come. As we’ve covered in this newsletter before, your air-conditioning habits aren’t unique. Cooling devices tend to flip on around the same time—when it’s hot. In some parts of the US, for example, air conditioners can represent more than 70% of residential energy demand at times when the grid is most stressed.

The good news is that we’re seeing innovations in cooling technology. Some companies are building cooling systems that include an energy storage component, so they can charge up when energy is plentiful and demand is low. Then they can start cooling when it’s most needed, without sucking as much energy from the grid during peak hours.

We’ve also covered alternatives to air conditioners called desiccant cooling systems, which use special moisture-sucking materials to help cool spaces and deal with humidity more efficiently than standard options.

And in my latest story, I dug into new developments in heat exchanger technology. Heat exchangers are a crucial component of air conditioners, but you can really find them everywhere—in heat pumps, refrigerators, and, yes, the cooling systems in large buildings and large electronics installations, including data centers.

We’ve been building heat exchangers basically the same way for nearly a century. These components basically move heat around, and there are a few known ways to do so with devices that are relatively straightforward to manufacture. Now, though, one team of researchers has 3D-printed a heat exchanger that outperforms some standard designs and rivals others. This is still a long way from solving our looming air-conditioning crisis, but the details are fascinating—I hope you’ll give it a read

We need more innovation in cooling technology to help meet global demand efficiently so we don’t stay stuck in this cycle. And we’ll need policy and public support to make sure that these technologies make a difference and that everyone has access to them too. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How 3D printing could make better cooling systems

A new 3D-printed design could make an integral part of cooling systems like air conditioners or refrigerators smaller and more efficient, according to new research.  

Heat exchangers are devices that whisk away heat, and they’re everywhere—used in data centers, ships, factories, and buildings. The aim is to pass as much heat as possible from one side of the device to the other. Most use one of a few standard designs that have historically been easiest and cheapest to make. 

“Heat exchangers are at the center of the industrial economy. They’re an essential part of every machine and every system that moves energy,” says William King, a professor at the University of Illinois Urbana-Champaign and one of the authors of the new study. Existing designs tend to favor straight lines, right angles, and round tubes, he adds.  

King and his colleagues used 3D printing to design a heat exchanger that includes features to optimize heat movement, like wavy walls and pyramid-shaped bumps, which wouldn’t be possible to make using traditional manufacturing techniques.  

The team had set out to design a system based on a common refrigerant called R-134a, which is commonly used in devices like air conditioners and refrigerators. When cold water lowers the temperature of the refrigerant, it changes from a gas to a liquid on its path through the device. That liquid refrigerant can then go on to other parts of the cooling system, where it’s used to lower the temperature of anything from a room to a rack of servers. 

The best way to cool the refrigerant tends to involve building very thin walls between the two sides of the device and maximizing the amount of contact that the water and the refrigerant make with those walls. (Think about how much colder you’d get wearing a thin T-shirt and pants and lying down on ice than simply touching it with your gloved hands.)

To design the best possible heat exchanger, researchers used simulations and developed machine-learning models to help predict the performance of different designs under different conditions. After 36,000 simulations, the researchers landed on the one they decided to develop.

Among the key components: small fins that jut out on the side of the device that touches the water, increasing the surface area to maximize heat transfer. The team also designed wavy passageways for the water to pass through—once again helping to maximize surface area. Simulations helped the researchers figure out exactly how curvy the passages should be and where precisely to place the fins.

On the side of the devices where the refrigerant passes through, the design includes small pyramid-shaped bumps along the walls. These not only maximize the area for cooling but also help mix the refrigerant as it passes through and prevent liquid from coating the wall (which would slow down the heat transfer).

After settling on a design, the researchers used a 3D-printing technique called direct metal laser sintering, in which lasers melt and fuse together a metal powder (in this case, an aluminum alloy), layer by layer.

In testing, the researchers found that the heat exchanger created with this technique was able to cool down the refrigerant more efficiently than other designs. The new device was able to achieve a power density of over six megawatts per meter cubed—outperforming one common traditional design, the shell-tube configuration, by between 30% and 50% with the same pumping power. The device’s power density was similar to that of brazed plate heat exchangers, another common design in industry.  

Overall, this device doesn’t dramatically outperform the state-of-the-art technology, but the technique of using modeling and 3D printing to produce new heat exchanger designs is promising, says Dennis Nasuta, director of research and development at Optimized Thermal Systems, a consulting firm that works with companies in the HVAC industry on design and research. “It’s worth exploring, and I don’t think that we know yet where we can push it,” Nasuta says.

One challenge is that today, additive manufacturing techniques such as laser sintering are slow and expensive compared with traditional manufacturing; they wouldn’t be economical or feasible to rely on for all our consumer cooling devices, he says. For now, this type of approach could be most useful in niche applications like aerospace and high-end automotives, which could more likely bear the cost, he adds. 

This particular study was funded by the US Office of Naval Research. Next-generation ships have more electronics aboard than ever, and there’s a growing need for compact and efficient systems to deal with all that extra heat, says Nenad Miljkovic, one of the authors of the study. 

Energy demand for cooling buildings alone is set to double between now and 2050, and new designs could help efficiently meet the massive demand forecast for the coming decades. But challenges including manufacturing costs would need to be overcome to help innovations like the one designed by King and his team make a dent in real devices.

Another barrier to adopting these new techniques, Nasuta says, is that current standards don’t demand more efficiency. Other technologies already exist that could help make our devices more efficient, but they’re not used for the same reason. 

It will take time for new manufacturing techniques, including 3D printing, to trickle into our devices, Natsua adds: “This isn’t going to be in your AC next year.”

How do you teach an AI model to give therapy?

On March 27, the results of the first clinical trial for a generative AI therapy bot were published, and they showed that people in the trial who had depression or anxiety or were at risk for eating disorders benefited from chatting with the bot. 

I was surprised by those results, which you can read about in my full story. There are lots of reasons to be skeptical that an AI model trained to provide therapy is the solution for millions of people experiencing a mental health crisis. How could a bot mimic the expertise of a trained therapist? And what happens if something gets complicated—a mention of self-harm, perhaps—and the bot doesn’t intervene correctly? 

The researchers, a team of psychiatrists and psychologists at Dartmouth College’s Geisel School of Medicine, acknowledge these questions in their work. But they also say that the right selection of training data—which determines how the model learns what good therapeutic responses look like—is the key to answering them.

Finding the right data wasn’t a simple task. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the internet. This was a disaster.

If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like, “Sometimes I can’t make it out of bed” or “I just want my life to be over” were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study’s senior author. “These are really not what we would go to as a therapeutic response.” 

The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses. So the team turned to transcripts of therapy sessions. “This is actually how a lot of psychotherapists are trained,” Jacobson says. 

That approach was better, but it had limitations. “We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” Jacobson says. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”

It wasn’t until the researchers started building their own data sets using examples based on cognitive behavioral therapy techniques that they started to see better results. It took a long time. The team began working on Therabot in 2019, when OpenAI had released only its first two versions of its GPT model. Now, Jacobson says, over 100 people have spent more than 100,000 human hours to design this system. 

The importance of training data suggests that the flood of companies promising therapy via AI models, many of which are not trained on evidence-based approaches, are building tools that are at best ineffective, and at worst harmful. 

Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to get a coveted approval from the US Food and Drug Administration? I’ll be following closely. Read more in the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Brain-computer interfaces face a critical test

Tech companies are always trying out new ways for people to interact with computers—consider efforts like Google Glass, the Apple Watch, and Amazon’s Alexa. You’ve probably used at least one.

But the most radical option has been tried by fewer than 100 people on Earth—those who have lived for months or years with implanted brain-computer interfaces, or BCIs.

Implanted BCIs are electrodes put in paralyzed people’s brains so they can use imagined movements to send commands from their neurons through a wire, or via radio, to a computer. In this way, they can control a computer cursor or, in few cases, produce speech.  

Recently, this field has taken some strides toward real practical applications. About 25 clinical trials of BCI implants are currently underway. And this year MIT Technology Review readers have selected these brain-computer interfaces as their addition to our annual list of 10 Breakthrough Technologies, published in January.

BCIs won by a landslide to become the “11th Breakthrough,” as we call it. It beat out three runners-up: continuous glucose monitors, hyperrealistic deepfakes, and methane-detecting satellites.

The impression of progress comes thanks to a small group of companies that are actively recruiting volunteers to try BCIs in clinical trials. They are Neuralink, backed by the world’s richest person, Elon Musk; New York–based Synchron; and China’s Neuracle Neuroscience. 

Each is trialing interfaces with the eventual goal of getting the field’s first implanted BCI approved for sale. 

“I call it the translation era,” says Michelle Patrick-Krueger, a research scientist who carried out a detailed survey of BCI trials with neuroengineer Jose Luis Contreras-Vidal at the University of Houston. “In the past couple of years there has been considerable private investment. That creates excitement and allows companies to accelerate.”

That’s a big change, since for years BCIs have been more like a neuroscience parlor trick, generating lots of headlines but little actual help to patients. 

Patrick-Krueger says the first time a person controlled a computer cursor from a brain implant was in 1998. That was followed by a slow drip-drip of tests in which university researchers would find a single volunteer, install an implant, and carry out studies for months or years.

Over 26 years, Patrick-Krueger says, she was able to document a grand total of 71 patients who’ve ever controlled a computer directly with their neurons. 

That means you are more likely to be friends with a Mega Millions jackpot winner than know someone with a BCI.

These studies did prove that people could use their neurons to play Pong, move a robot arm, and even speak through a computer. But such demonstrations are of no practical help to people with paralysis severe enough to benefit from a brain-controlled computer, because these implants are not yet widely available. 

“One thing is to have them work, and another is how to actually deploy them,” says Contreras-Vidal. “Also, behind any great news are probably technical issues that need to be addressed.” These include questions about how long an implant will last and how much control it offers patients.

Larger trials from three companies are now trying to resolve these questions and set the groundwork for a real product.

One company, Synchron, uses a stent with electrodes on it that’s inserted into a brain vessel via a vein in the neck. Synchron has implanted its “stentrode” in 10 volunteers, six in the US and four in Australia—the most simultaneous volunteers reported by any BCI group. 

The stentrode collects limited brain signals, so it gives users only a basic on/off type of control signal, or what Synchron calls a “switch.” That isn’t going to let a paralyzed person use Photoshop. But it’s enough to toggle through software menus or select among prewritten messages.

Tom Oxley, Synchron’s CEO, says the advantage of the stentrode is that it is “as simple as possible.” That, he believes, will make his brain-computer interface “scalable” to more people, especially since installing it doesn’t involve brain surgery. 

Synchron might be ahead, but it’s still in an exploratory phase. A “pivotal” study, the kind used to persuade regulators to allow sales of a specific version of the device, has yet to be scheduled. So there’s no timeline for a product.  

Neuralink, meanwhile, has disclosed that three volunteers have received its implant, the N1, which consists of multiple fine electrode threads inserted directly into the brain through a hole drilled in the skull. 

More electrodes mean more neural activity is captured. Neuralink’s first volunteer, Noland Arbaugh, has shown off how he can guide a cursor around a screen in two dimensions and click, letting him play video games like Civilization or online chess.

Finally, Neuracle says it is running two trials in China and one in the US. Its implant consists of a patch of electrodes placed on top of the brain. In a report, the company said a paralyzed volunteer is using the system to stimulate electrodes in his arm, causing his hand to close in a grasp. 

But details remain sparse. A Neuracle executive would only say that “several” people had received its implant.

Because Neuracle’s patient count isn’t public, it wasn’t included in Patrick-Krueger’s tally. In fact, there’s no information at all in the medical literature on about a quarter of brain-implant volunteers so far, so she counted them using press releases or by e-mailing research teams.

Her BCI survey yielded other insights. According to her data, implants have lasted as long as 15 years, more than half of patients are in the US, and roughly 75% of BCI recipients have been male. 

The data can’t answer the big question, though. And that is whether implanted BCIs will progress from breakthrough demonstrations into breakout products, the kind that help many people.

“In the next five to 10 years, it’s either going to translate into a product or it’ll still stay in research,” Patrick-Krueger says. “I do feel very confident there will be a breakout.”