These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year, according to a strategy released on Tuesday. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030. 

The news follows similar moves by other countries. In April, the US Food and Drug Administration announced a plan to replace animal testing for monoclonal antibody therapies with “more effective, human-relevant models.” And, following a workshop in June 2024, the European Commission also began working on a “road map” to phase out animal testing for chemical safety assessments.

Animal welfare groups have been campaigning for commitments like these for decades. But a lack of alternatives has made it difficult to put a stop to animal testing. Advances in medical science and biotechnology are changing that.

Animals have been used in scientific research for thousands of years. Animal experimentation has led to many important discoveries about how the brains and bodies of animals work. And because regulators require drugs to be first tested in research animals, it has played an important role in the creation of medicines and devices for both humans and other animals.

Today, countries like the UK and the US regulate animal research and require scientists to hold multiple licenses and adhere to rules on animal housing and care. Still, millions of animals are used annually in research. Plenty of scientists don’t want to take part in animal testing. And some question whether animal research is justifiable—especially considering that around 95% of treatments that look promising in animals don’t make it to market.

In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on humans or other animals.

Take “organs on chips,” for example. Researchers have been creating miniature versions of human organs inside tiny plastic cases. These systems are designed to contain the same mix of cells you’d find in a full-grown organ and receive a supply of nutrients that keeps them alive.

Today, multiple teams have created models of livers, intestines, hearts, kidneys and even the brain. And they are already being used in research. Heart chips have been sent into space to observe how they respond to low gravity. The FDA used lung chips to assess covid-19 vaccines. Gut chips are being used to study the effects of radiation.

Some researchers are even working to connect multiple chips to create a “body on a chip”—although this has been in the works for over a decade and no one has quite managed it yet.

In the same vein, others have been working on creating model versions of organs—and even embryos—in the lab. By growing groups of cells into tiny 3D structures, scientists can study how organs develop and work, and test drugs on them. These organoids can also be personalized: take cells from someone, and you should be able to model that person’s specific organs. Some researchers have even created organoids of developing fetuses.

The UK government strategy mentions the promise of artificial intelligence, too. Many scientists have been quick to adopt AI as a tool to help them make sense of vast databases, and to find connections between genes, proteins and disease, for example. Others are using AI to design all-new drugs.

Those new drugs could potentially be tested on virtual humans. Not flesh-and-blood people, but digital reconstructions that live in a computer. Biomedical engineers have already created digital twins of organs. In ongoing trials, digital hearts are being used to guide surgeons on how—and where—to operate on real hearts.

When I spoke to Natalia Trayanova, the biomedical engineering professor behind this trial, she told me that her model could recommend regions of heart tissue to be burned off as part of treatment for atrial fibrillation. Her tool would normally suggest two or three regions but occasionally would recommend many more. “They just have to trust us,” she told me.

It is unlikely that we’ll completely phase out animal testing by 2030. The UK government acknowledges that animal testing is still required by lots of regulators, including the FDA, the European Medicines Agency, and the World Health Organization. And while alternatives to animal testing have come a long way, none of them perfectly capture how a living body will respond to a treatment.

At least not yet. Given all the progress that has been made in recent years, it’s not too hard to imagine a future without animal testing.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Google is still aiming for its “moonshot” 2030 energy goals

Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. Over the course of three days of main-stage sessions, I learned about innovations in AI, biotech, and robotics. 

But as you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google. 

They spoke about the tech giant’s growing energy demand and the technologies the company is turning to in order to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise. 

I’ve been closely following Google’s work in energy this year. Like the rest of the tech industry, the company is seeing ballooning electricity demand in its data centers. That could get in the way of a major goal that Google has been talking about for years. 

See, back in 2020, the company announced an ambitious target: by 2030, it aimed to run on carbon-free energy 24-7. Basically, that means Google would purchase enough carbon-free electricity on the grids where it operates to meet its entire demand, with those purchases matched in time so the electricity is generated during the hours the company is actually using it. (For more on the nuances of Big Tech’s renewable-energy pledges, check out James’s piece from last year.)
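
To see why hourly matching is a much higher bar than simply buying enough renewable energy over a year, here’s a minimal sketch of the accounting with invented numbers (not Google’s data): clean power counts toward the 24-7 score only in the hour it’s generated, and only up to that hour’s load.

```python
# Toy comparison of annual "100% renewable" accounting vs. hourly 24-7
# carbon-free energy (CFE) matching. All numbers are invented for illustration.

load_mw = [100] * 24                        # hypothetical flat data-center load, one day
clean_mw = [0] * 6 + [150] * 12 + [0] * 6   # hypothetical solar-heavy clean supply

# Volume matching: do total clean purchases cover total consumption?
volume_match = min(sum(clean_mw) / sum(load_mw), 1.0)

# 24-7 matching: clean power counts only in the hour it is generated,
# and only up to that hour's load.
matched_mwh = sum(min(c, l) for c, l in zip(clean_mw, load_mw))
cfe_score = matched_mwh / sum(load_mw)

print(f"Volume-matched share: {volume_match:.0%}")   # 75% in this toy example
print(f"Hourly 24-7 CFE score: {cfe_score:.0%}")     # 50% in this toy example
```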

Google’s is an ambitious goal, and on stage, Tian said that the company is still aiming for it but acknowledged that it’s looking tough with the rise of AI. 

“It was always a moonshot,” she said. “It’s something very, very hard to achieve, and it’s only harder in the face of this growth. But our perspective is, if we don’t move in that direction, we’ll never get there.”

Google’s total electricity demand more than doubled from 2020 to 2024, according to its latest Environmental Report. As for that goal of 24-7 carbon-free energy? The company is basically treading water. While it was at 67% for its data centers in 2020, last year it came in at 66%. 

Not going backwards is something of an accomplishment, given the rapid growth in electricity demand. But it still leaves the company some distance away from its finish line.

To close the gap, Google has been signing what feels like constant deals in the energy space. Two recent announcements that Tian talked about on stage were a project involving carbon capture and storage at a natural-gas plant in Illinois and plans to reopen a shuttered nuclear power plant in Iowa. 

Let’s start with carbon capture. Google signed an agreement to purchase most of the electricity from a new natural-gas plant, which will capture and store about 90% of its carbon dioxide emissions. 

That announcement was controversial, with critics arguing that carbon capture keeps fossil-fuel infrastructure online longer and still releases greenhouse gases and other pollutants into the atmosphere. 

One question that James raised on stage: Why build a new natural-gas plant rather than add equipment to an already existing facility? Tacking on equipment to an operational plant would mean cutting emissions from the status quo, rather than adding entirely new fossil-fuel infrastructure. 

The company did consider many existing plants, Tian said. But, as she put it, “Retrofits aren’t going to make sense everywhere.” Space can be limited at existing plants, for example, and many may not have the right geology to store carbon dioxide underground. 

“We wanted to lead with a project that could prove this technology at scale,” Tian said. This site has an operational Class VI well, the type used for permanent sequestration, she added, and it also doesn’t require a big pipeline buildout. 

Tian also touched on the company’s recent announcement that it’s collaborating with NextEra Energy to reopen Duane Arnold Energy Center, a nuclear power plant in Iowa. The company will purchase electricity from that plant, which is scheduled to reopen in 2029. 

As I covered in a story earlier this year, Duane Arnold was basically the final option in the US for companies looking to reopen shuttered nuclear power plants. “Just a few years back, we were still closing down nuclear plants in this country,” Tian said on stage. 

While each reopening will look a little different, Tian highlighted the groups working to restart the Palisades plant in Michigan, which was the first reopening to be announced, last spring. “They’re the real heroes of the story,” she said.

I’m always interested to get a peek behind the curtain at how Big Tech is thinking about energy. I’m skeptical but certainly interested to see how Google’s, and the rest of the industry’s, goals shape up over the next few years. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in a wide range of 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

Google DeepMind first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But SIMA 2 has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability.

The researchers claim that SIMA 2 can carry out a range of more complex tasks inside virtual worlds, figure out how to solve certain challenges by itself, and chat with its users. It can also improve itself by tackling harder tasks multiple times and learning through trial and error.

“Games have been a driving force behind agent research for quite a while,” Joe Marino, a research scientist at Google DeepMind, said in a press conference this week. He noted that even a simple action in a game, such as lighting a lantern, can involve multiple steps: “It’s a really complex set of tasks you need to solve to progress.”

The ultimate aim is to develop next-generation agents that are able to follow instructions and carry out open-ended tasks inside more complex environments than a web browser. In the long run, Google DeepMind wants to use such agents to drive real-world robots. Marino claimed that the skills SIMA 2 has learned, such as navigating an environment, using tools, and collaborating with humans to solve problems, are essential building blocks for future robot companions.

Unlike previous work on game-playing agents such as AlphaGo, which beat a Go world champion in 2016, or AlphaStar, which ranked above 99.8% of human players at the video game StarCraft 2 in 2019, the idea behind SIMA is to train an agent to play an open-ended game without preset goals. Instead, the agent learns to carry out instructions given to it by people.

Humans control SIMA 2 via text chat, by talking to it out loud, or by drawing on the game’s screen. The agent takes in a video game’s pixels frame by frame and figures out what actions it needs to take to carry out its tasks.
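
DeepMind hasn’t published SIMA 2’s internals, but the interface described here (pixels in, keyboard and mouse actions out, steered by a text instruction) can be sketched as a simple control loop. Everything below, including the environment and agent classes and their method names, is a hypothetical stand-in rather than DeepMind code.

```python
# Conceptual sketch of a pixels-in, actions-out instructable game agent.
# The environment object and InstructableAgent are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Action:
    keys: list = field(default_factory=list)  # e.g. ["w"] to walk forward
    mouse_dx: float = 0.0                     # camera movement
    mouse_dy: float = 0.0
    click: bool = False

class InstructableAgent:
    def act(self, frame, instruction: str) -> Action:
        # A real agent would map the current frame plus the text instruction
        # to keyboard and mouse output; this stand-in just does nothing.
        return Action()

def run_episode(env, agent: InstructableAgent, instruction: str, max_steps: int = 1000):
    frame = env.reset()
    for _ in range(max_steps):
        action = agent.act(frame, instruction)  # pixels + text -> controls
        frame, done = env.step(action)          # the game advances one frame
        if done:
            break
```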

Like its predecessor, SIMA 2 was trained on footage of humans playing eight commercial video games, including No Man’s Sky and Goat Simulator 3, as well as three virtual worlds created by the company. The agent learned to match keyboard and mouse inputs to actions.

Hooked up to Gemini, SIMA 2 is far better at following instructions (asking questions and providing updates as it goes) and at figuring out for itself how to perform certain more complex tasks, the researchers claim.

Google DeepMind tested the agent inside environments it had never seen before. In one set of experiments, researchers asked Genie 3, the latest version of the firm’s world model, to produce environments from scratch and dropped SIMA 2 into them. They found that the agent was able to navigate and carry out instructions there.

The researchers also used Gemini to generate new tasks for SIMA 2. If the agent failed at first, Gemini generated tips that SIMA 2 took on board when it tried again. Repeating a task multiple times in this way often allowed SIMA 2 to improve by trial and error until it succeeded, Marino said.
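
In outline, that’s a simple try-fail-retry loop: attempt the task, have a critic model review the failure and offer a tip, then try again with the tip in context. Here’s a rough sketch of that loop, with trivial stand-in functions in place of the real agent and of Gemini.

```python
# Sketch of the try-fail-retry loop described above. attempt_task() and
# generate_tip() are trivial stand-ins for the game-playing agent and the
# critic model; they are not real SIMA 2 or Gemini interfaces.
import random

def attempt_task(task: str, hints: list) -> tuple:
    # Stand-in: success becomes more likely as hints accumulate.
    succeeded = random.random() < 0.2 * (1 + len(hints))
    return succeeded, f"trajectory of attempt with {len(hints)} hint(s)"

def generate_tip(task: str, trajectory: str) -> str:
    # Stand-in for a critic model reviewing the failed attempt.
    return f"tip derived from: {trajectory}"

def practice(task: str, max_attempts: int = 5) -> bool:
    hints = []
    for _ in range(max_attempts):
        succeeded, trajectory = attempt_task(task, hints)
        if succeeded:
            return True
        hints.append(generate_tip(task, trajectory))  # feed feedback into the next try
    return False

print(practice("light the lantern"))
```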

Git gud

SIMA 2 is still an experiment. The agent struggles with complex tasks that require multiple steps and more time to complete. It also remembers only its most recent interactions (to make SIMA 2 more responsive, the team cut its long-term memory). It’s also still nowhere near as good as people at using a mouse and keyboard to interact with a virtual world.

Julian Togelius, an AI researcher at New York University who works on creativity and video games, thinks it’s an interesting result. Previous attempts at training a single system to play multiple games haven’t gone too well, he says. That’s because training models to control multiple games just by watching the screen isn’t easy: “Playing in real time from visual input only is ‘hard mode,’” he says.

In particular, Togelius calls out GATO, a previous system from Google DeepMind, which—despite being hyped at the time—could not transfer skills across a significant number of virtual environments.  

Still, he is open-minded about whether or not SIMA 2 could lead to better robots. “The real world is both harder and easier than video games,” he says. It’s harder because you can’t just press A to open a door. At the same time, a robot in the real world will know exactly what its body can and can’t do at any time. That’s not the case in video games, where the rules inside each virtual world can differ.

Others are more skeptical. Matthew Guzdial, an AI researcher at the University of Alberta, isn’t too surprised that SIMA 2 can play many different video games. He notes that most games have very similar keyboard and mouse controls: Learn one and you learn them all. “If you put a game with weird input in front of it, I don’t think it’d be able to perform well,” he says.

Guzdial also questions how much of what SIMA 2 has learned would really carry over to robots. “It’s much harder to understand visuals from cameras in the real world compared to games, which are designed with easily parsable visuals for human players,” he says.

Still, Marino and his colleagues hope to continue their work with Genie 3 to allow the agent to improve inside a kind of endless virtual training dojo, where Genie generates worlds for SIMA to learn in via trial and error guided by Gemini’s feedback. “We’ve kind of just scratched the surface of what’s possible,” he said at the press conference.  

OpenAI’s new LLM exposes the secrets of how AI really works

ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

That’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks.

“As these AI systems get more powerful, they’re going to get integrated more and more into very important domains,” Leo Gao, a research scientist at OpenAI, told MIT Technology Review in an exclusive preview of the new work. “It’s very important to make sure they’re safe.”

This is still early research. The new model, called a weight-sparse transformer, is far smaller and far less capable than top-tier mass-market models like the firm’s GPT-5, Anthropic’s Claude, and Google DeepMind’s Gemini. At most it’s as capable as GPT-1, a model that OpenAI developed back in 2018, says Gao (though he and his colleagues haven’t done a direct comparison).    

But the aim isn’t to compete with the best in class (at least, not yet). Instead, by looking at how this experimental model works, OpenAI hopes to learn about the hidden mechanisms inside those bigger and better versions of the technology.

It’s interesting research, says Elisenda Grigsby, a mathematician at Boston College who studies how LLMs work and who was not involved in the project: “I’m sure the methods it introduces will have a significant impact.” 

Lee Sharkey, a research scientist at AI startup Goodfire, agrees. “This work aims at the right target and seems well executed,” he says.

Why models are so hard to understand

OpenAI’s work is part of a hot new field of research known as mechanistic interpretability, which is trying to map the internal mechanisms that models use when they carry out different tasks.

That’s harder than it sounds. LLMs are built from neural networks, which consist of nodes, called neurons, arranged in layers. In most networks, each neuron is connected to every neuron in the adjacent layers. Such a network is known as a dense network.

Dense networks are relatively efficient to train and run, but they spread what they learn across a vast knot of connections. The result is that simple concepts or functions can be split up between neurons in different parts of a model. At the same time, specific neurons can also end up representing multiple different features, a phenomenon known as superposition (a term borrowed from quantum physics). The upshot is that you can’t relate specific parts of a model to specific concepts.

“Neural networks are big and complicated and tangled up and very difficult to understand,” says Dan Mossing, who leads the mechanistic interpretability team at OpenAI. “We’ve sort of said: ‘Okay, what if we tried to make that not the case?’”

Instead of building a model using a dense network, OpenAI started with a type of neural network known as a weight-sparse transformer, in which each neuron is connected to only a few other neurons. This forced the model to represent features in localized clusters rather than spread them out.
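
OpenAI hasn’t released code for the model, but the core trick, zeroing out most connections so that each neuron touches only a handful of others, can be illustrated with a masked weight matrix. Here’s a minimal numpy sketch; the sizes and sparsity level are made up.

```python
# Minimal sketch of a weight-sparse layer: a fixed binary mask zeroes out most
# connections, so each output neuron "sees" only a few inputs. Sizes and
# sparsity below are illustrative, not OpenAI's actual configuration.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 64
connections_per_neuron = 4              # each output neuron keeps only 4 inputs

mask = np.zeros((d_out, d_in))
for i in range(d_out):
    kept = rng.choice(d_in, size=connections_per_neuron, replace=False)
    mask[i, kept] = 1.0

weights = rng.normal(size=(d_out, d_in)) * mask   # dense weights, then masked

def sparse_layer(x):
    # Only the unmasked weights contribute; in a dense layer, every weight would.
    return np.maximum(weights @ x, 0.0)           # ReLU nonlinearity for illustration

x = rng.normal(size=d_in)
print(sparse_layer(x).shape)                                   # (64,)
print(f"nonzero weights: {mask.mean():.2%} of a dense layer")  # 6.25%
```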

Their model is far slower than any LLM on the market. But it is easier to relate its neurons or groups of neurons to specific concepts and functions. “There’s a really drastic difference in how interpretable the model is,” says Gao.

Gao and his colleagues have tested the new model with very simple tasks. For example, they asked it to complete a block of text that opens with quotation marks by adding matching marks at the end.  

It’s a trivial request for an LLM. The point is that figuring out how a model does even a straightforward task like that involves unpicking a complicated tangle of neurons and connections, says Gao. But with the new model, they were able to follow the exact steps the model took.

“We actually found a circuit that’s exactly the algorithm you would think to implement by hand, but it’s fully learned by the model,” he says. “I think this is really cool and exciting.”

Where will the research go next? Grigsby is not convinced the technique would scale up to larger models that have to handle a variety of more difficult tasks.    

Gao and Mossing acknowledge that this is a big limitation of the model they have built so far and agree that the approach will never lead to models that match the performance of cutting-edge products like GPT-5. And yet OpenAI thinks it might be able to improve the technique enough to build a transparent model on a par with GPT-3, the firm’s breakthrough 2020 LLM. 

“Maybe within a few years, we could have a fully interpretable GPT-3, so that you could go inside every single part of it and you could understand how it does every single thing,” says Gao. “If we had such a system, we would learn so much.”

The State of AI: Energy is king, and the US is falling behind

Welcome to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.

This week, Casey Crownhart, senior reporter for energy at MIT Technology Review, and Pilita Clark, a columnist at the FT, consider how China’s rapid renewables buildout could help it leapfrog ahead on AI progress.

Casey Crownhart writes:

In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying here in the US, where massive data centers are waiting to come online, and it doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.

It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace. With too little new power capacity coming online, the strain is starting to show: Electricity bills are ballooning for people who live in places where data centers place a growing load on the grid.

If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China.

China installed 429 GW of new power generation capacity in 2024, more than six times the net capacity added in the US during that time.

China still generates much of its electricity with coal, but that makes up a declining share of the mix. Rather, the country is focused on installing solar, wind, nuclear, and gas at record rates.

The US, meanwhile, is focused on reviving its ailing coal industry. Coal-fired power plants are polluting and, crucially, expensive to run. Aging plants in the US are also less reliable than they used to be, running at a capacity factor of just 42%, down from 61% in 2014.
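
For anyone unfamiliar with the term, capacity factor is just a plant’s actual generation divided by what it would produce running flat out for the whole period. A quick illustration with a hypothetical plant:

```python
# Capacity factor = actual generation / (nameplate capacity x hours in the period).
# The plant below is hypothetical; only the resulting ratio matters.

nameplate_mw = 600
hours_in_year = 8760
actual_generation_mwh = 2_200_000

capacity_factor = actual_generation_mwh / (nameplate_mw * hours_in_year)
print(f"capacity factor: {capacity_factor:.0%}")   # ~42%
```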

It’s not a great situation. And unless the US changes something, we risk becoming consumers as opposed to innovators in both energy and AI tech. Already, China earns more from exporting renewables than the US does from oil and gas exports. 

Building and permitting new renewable power plants would certainly help, since they’re currently the cheapest and fastest to bring online. But wind and solar are politically unpopular with the current administration. Natural gas is an obvious candidate, though there are concerns about delays with key equipment.

One quick fix would be for data centers to be more flexible. If they agreed not to suck electricity from the grid during times of stress, new AI infrastructure might be able to come online without any new energy infrastructure.

One study from Duke University found that if data centers agree to curtail their consumption just 0.25% of the time (roughly 22 hours over the course of the year), the grid could provide power for about 76 GW of new demand. That’s like adding about 5% of the entire grid’s capacity without needing to build anything new.
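
Those numbers are easy to sanity-check. Here’s the back-of-the-envelope arithmetic; the roughly 1,300 GW figure I use for total US generating capacity is my own approximation, not from the study.

```python
# Back-of-the-envelope check of the flexibility numbers above. The ~1,300 GW
# figure for total US generating capacity is a rough approximation.

hours_per_year = 8760
curtailment_share = 0.0025                 # curtail 0.25% of the time
print(f"curtailed hours: {curtailment_share * hours_per_year:.0f} per year")   # ~22

new_flexible_load_gw = 76
us_capacity_gw = 1300                      # rough approximation
share = new_flexible_load_gw / us_capacity_gw
print(f"share of grid capacity: {share:.1%}")   # ~5.8%, depending on the baseline used
```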

But flexibility wouldn’t be enough to truly meet the swell in AI electricity demand. What do you think, Pilita? What would get the US out of these energy constraints? Is there anything else we should be thinking about when it comes to AI and its energy use? 

Pilita Clark responds:

I agree. Data centers that can cut their power use at times of grid stress should be the norm, not the exception. Likewise, we need more deals like those giving cheaper electricity to data centers that let power utilities access their backup generators. Both reduce the need to build more power plants, which makes sense regardless of how much electricity AI ends up using.

This is a critical point for countries across the world, because we still don’t know exactly how much power AI is going to consume. 

Forecasts for what data centers will need in as little as five years’ time vary wildly, from less than twice today’s rates to four times as much.

This is partly because there’s a lack of public data about AI systems’ energy needs. It’s also because we don’t know how much more efficient these systems will become. The US chip designer Nvidia said last year that its specialized chips had become 45,000 times more energy efficient over the previous eight years. 

Moreover, we have been very wrong about tech energy needs before. At the height of the dot-com boom in 1999, it was erroneously claimed that the internet would need half the US’s electricity within a decade—necessitating a lot more coal power.

Still, some countries are clearly feeling the pressure already. In Ireland, data centers chew up so much power that new connections have been restricted around Dublin to avoid straining the grid.

Some regulators are eyeing new rules forcing tech companies to provide enough power generation to match their demand. I hope such efforts grow. I also hope AI itself helps boost power abundance and, crucially, accelerates the global energy transition needed to combat climate change. OpenAI’s Sam Altman said in 2023 that “once we have a really powerful super intelligence, addressing climate change will not be particularly difficult.” 

The evidence so far is not promising, especially in the US, where renewable projects are being axed. Still, the US may end up being an outlier in a world where ever cheaper renewables made up more than 90% of new power capacity added globally last year. 

Europe is aiming to power one of its biggest data centers predominantly with renewables and batteries. But the country leading the green energy expansion is clearly China.

The 20th century was dominated by countries rich in the fossil fuels whose reign the US now wants to prolong. China, in contrast, may become the world’s first green electrostate. If it does this in a way that helps it win an AI race the US has so far controlled, it will mark a striking chapter in economic, technological, and geopolitical history.

Casey Crownhart replies:

I share your skepticism of tech executives’ claims that AI will be a groundbreaking help in the race to address climate change. To be fair, AI is progressing rapidly. But we don’t have time to wait for technologies standing on big claims with nothing to back them up. 

When it comes to the grid, for example, experts say there’s potential for AI to help with planning and even operating, but these efforts are still experimental.  

Meanwhile, much of the world is making measurable progress on transitioning to newer, greener forms of energy. How that will affect the AI boom remains to be seen. What is clear is that AI is changing our grid and our world, and we need to be clear-eyed about the consequences. 

Further reading 

MIT Technology Review reporters did the math on the energy needs of an AI query.

There are still a few reasons to be optimistic about AI’s energy demands.  

The FT’s visual data team take a look inside the relentless race for AI capacity.

And global FT reporters ask whether data centers can ever truly be green.

The first new subsea habitat in 40 years is about to launch

Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain.

But some clues hint that you can’t just fire up Vanguard’s engine and roll off the lot. The least subtle is its door, a massive disc of steel complete with a wheel that spins to lock.

Vanguard subsea human habitat from the outside door.

COURTESY MARK HARRIS

Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Their missions could include reef restoration, species surveys, underwater archaeology, or even astronaut training. 

One of Vanguard’s modules, unappetizingly named the “wet porch,” has a permanent opening in the floor (a.k.a. a “moon pool”) that doesn’t flood because Vanguard’s air pressure is matched to the water around it. 

It is this pressurization that makes the habitat so useful. Scuba divers working at its maximum operational depth of 50 meters would typically need to make a lengthy stop on their way back to the surface to avoid decompression sickness. This painful and potentially fatal condition, better known as the bends, develops if divers surface too quickly. A traditional 50-meter dive gives scuba divers only a handful of minutes on the seafloor, and they can make only a couple of such dives a day. With Vanguard’s atmosphere at the same pressure as the water, its aquanauts need to decompress only once, at the end of their stay. They can potentially dive for many hours every day.
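
The pressure math behind that trade-off is simple: ambient pressure rises by roughly one atmosphere for every 10 meters of seawater, so at Vanguard’s 50-meter maximum operating depth the habitat sits at about six atmospheres absolute. A quick check:

```python
# Ambient pressure at depth: P = surface pressure + rho * g * depth.
# Standard seawater density and gravity; depth matches Vanguard's stated
# 50-meter maximum operating depth.

RHO_SEAWATER = 1025      # kg/m^3
G = 9.81                 # m/s^2
ATM_PA = 101_325         # one atmosphere, in pascals

def ambient_pressure_atm(depth_m):
    return (ATM_PA + RHO_SEAWATER * G * depth_m) / ATM_PA

print(f"pressure at 50 m: {ambient_pressure_atm(50):.1f} atm absolute")   # ~6.0 atm
```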

That could unlock all kinds of new science and exploration. “More time in the ocean opens a world of possibility, accelerating discoveries, inspiration, solutions,” said Kristen Tertoole, Deep’s chief operating officer, at Vanguard’s unveiling in Miami in October. “The ocean is Earth’s life support system. It regulates our climate, sustains life, and holds mysteries we’ve only begun to explore, but it remains 95% undiscovered.”

Vanguard subsea human habitat unveiled in Miami

COURTESY DEEP

Subsea habitats are not a new invention. Jacques Cousteau (naturally) built the first in 1962, although it was only about the size of an elevator. Larger habitats followed in the 1970s and ’80s, maxing out at around the size of Vanguard.

But the technology has come a long way since then. Vanguard uses a tethered connection to a buoy above, known as the “surface expression,” that pipes fresh air and water down to the habitat. It also hosts a diesel generator to power a Starlink internet connection and a tank to hold wastewater. Norman Smith, Deep’s chief technology officer, says the company modeled the most severe hurricanes that Florida expects over the next 20 years and designed the tether to withstand them. Even if the worst happens and the link is broken, Deep says, Vanguard has enough air, water, and energy storage to support its crew for at least 72 hours.

That number came from DNV, an independent classification agency that inspects and certifies all types of marine vessels so that they can get commercial insurance. Vanguard will be the first subsea habitat to get a DNV classification. “That means you have to deal with the rules and all the challenging, frustrating things that come along with it, but it means that on a foundational level, it’s going to be safe,” says Patrick Lahey, founder of Triton Submarines, a manufacturer of classed submersibles.

An interior view of Vanguard during Life Under The Sea: Ocean Engineering and Technology Company DEEP's unveiling of Vanguard, its pilot subsea human habitat at The Hangar at Regatta Harbour on October 29, 2025 in Miami, Florida.

JASON KOERNER/GETTY IMAGES FOR DEEP

Although Deep hopes Vanguard itself will enable decades of useful science, its prime function for the company is to prove out technologies for its planned successor, an advanced modular habitat called Sentinel. Sentinel modules will be six meters wide, twice the diameter of Vanguard, complete with sweeping staircases and single-occupant cabins. A small deployment might have a crew of eight, about the same as the International Space Station. A big Sentinel system could house 50, up to 225 meters deep. Deep claims that Sentinel will be launched at some point in 2027.

Ultimately, according to its mission statement, Deep seeks to “make humans aquatic,” an indication that permanent communities are on its long-term road map. 

Deep has not publicly disclosed the identity of its principal funder, but business records in the UK indicate that as of January 31, 2025, a Canadian man, Robert MacGregor, owned at least 75% of its holding company. According to a Reuters investigation, MacGregor was once linked with Craig Steven Wright, a computer scientist who claimed to be Satoshi Nakamoto, as bitcoin’s elusive creator is pseudonymously known. However, Wright’s claims to be Nakamoto later collapsed. 

MacGregor has kept a very low public profile in recent years. When contacted for comment, Deep spokesperson Mike Bohan declined to discuss the link with Wright beyond saying it was inaccurate, but added: “Robert MacGregor started his career as an IP lawyer in the dot-com era, moving into blockchain technology and has diverse interests including philanthropy, real estate, and now Deep.”

In any case, MacGregor could find keeping that low profile more difficult if Vanguard is successful in reinvigorating ocean science and exploration as the company hopes. The habitat is due to be deployed early next year, following final operational tests at Triton’s facility in Florida. It will welcome its first scientists shortly after. 

“The ocean is not just our resource; it is our responsibility,” says Tertoole. “Deep is more than a single habitat. We are building a full-stack capability for human presence in the ocean.”

An interior view of Vanguard during Life Under The Sea: Ocean Engineering and Technology Company DEEP's unveiling of Vanguard, its pilot subsea human habitat at The Hangar at Regatta Harbour on October 29, 2025 in Miami, Florida.

JASON KOERNER/GETTY IMAGES FOR DEEP

Cloning isn’t just for celebrity pets like Tom Brady’s dog

This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his new dog, Junie, is actually a clone of Lua, a pit bull mix that died in 2023.

Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies.

While the pampered pooches of the rich and famous may dominate this week’s headlines, cloning technologies are also being used to diversify the genetic pools of inbred species and potentially bring other animals back from the brink of extinction.

Cloning itself isn’t new. The first mammal cloned from an adult cell, Dolly the sheep, was born in the 1990s. The technology has been used in livestock breeding over the decades since.

Say you’ve got a particularly large bull, or a cow that has an especially high milk yield. Those animals are valuable. You could selectively breed for those kinds of characteristics. Or you could clone the original animals—essentially creating genetic twins.

Scientists can take some of the animals’ cells, freeze them, and store them in a biobank. That opens the option to clone them in the future. It’s possible to thaw those cells, remove the DNA-containing nuclei of the cells, and insert them into donor egg cells.

Those donor egg cells, which come from another animal of the same species, have their own nuclei removed. So it’s a case of swapping out the DNA. The resulting cell is stimulated and grown in the lab until it starts to look like an embryo. Then it is transferred to the uterus of a surrogate animal—which eventually gives birth to a clone.

There are a handful of companies offering to clone pets. Viagen, which claims to have “cloned more animals than anyone else on Earth,” will clone a dog or cat for $50,000. That’s the company that cloned Streisand’s pet dog Samantha, twice.

This week, Colossal Biosciences—the “de-extinction” company that claims to have resurrected the dire wolf and created a “woolly mouse” as a precursor to reviving the woolly mammoth—announced that it had acquired Viagen, but that Viagen will “continue to operate under its current leadership.”

Pet cloning is controversial, for a few reasons. The companies themselves point out that, while the cloned animal will be a genetic twin of the original animal, it won’t be identical. One issue is mitochondrial DNA—a tiny fraction of DNA that sits outside the nucleus and is inherited from the mother. The cloned animal may inherit some of this from the surrogate.

Mitochondrial DNA is unlikely to have much of an impact on the animal itself. More important are the many, many factors thought to shape an individual’s personality and temperament. “It’s the old nature-versus-nurture question,” says Samantha Wisely, a conservation geneticist at the University of Florida. After all, human identical twins are never carbon copies of each other. Anyone who clones a pet expecting a like-for-like reincarnation is likely to be disappointed.

And some animal welfare groups are opposed to the practice of pet cloning. People for the Ethical Treatment of Animals (PETA) described it as “a horror show,” and the UK’s Royal Society for the Prevention of Cruelty to Animals (RSPCA) says that “there is no justification for cloning animals for such trivial purposes.” 

But there are other uses for cloning technology that are arguably less trivial. Wisely has long been interested in diversifying the gene pool of the critically endangered black-footed ferret, for example.

Today, there are around 10,000 black-footed ferrets that have been captively bred from only seven individuals, says Wisely. That level of inbreeding isn’t good for any species—it tends to leave organisms at risk of poor health. They are less able to reproduce or adapt to changes in their environment.

Wisely and her colleagues had access to frozen tissue samples taken from two other ferrets. Along with colleagues at not-for-profit Revive and Restore, the team created clones of those two individuals. The first clone, Elizabeth Ann, was born in 2020. Since then, other clones have been born, and the team has started breeding the cloned animals with the descendants of the other seven ferrets, says Wisely.

The same approach has been used to clone the endangered Przewalski’s horse, using decades-old tissue samples stored by the San Diego Zoo. It’s too soon to predict the impact of these efforts. Researchers are still evaluating the cloned ferrets and their offspring to see if they behave like typical animals and could survive in the wild.

Even this practice is not without its critics. Some have pointed out that cloning alone will not save any species. After all, it doesn’t address the habitat loss or human-wildlife conflict that is responsible for the endangerment of these animals in the first place. And there will always be detractors who accuse people who clone animals of “playing God.” 

For all her involvement in cloning endangered ferrets, Wisely tells me she would not consider cloning her own pets. She currently has three rescue dogs, a rescue cat, and “geriatric chickens.” “I love them all dearly,” she says. “But there are a lot of rescue animals out there that need homes.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Stop worrying about your AI footprint. Look at the big picture instead.

Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.” 

This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want. 

That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much as Japan.) 

But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?” 

That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals. 

The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table. 

I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture. 

Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.
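
That comparison checks out with simple arithmetic. (The 1,000-watt microwave rating here is my assumption; the 0.3 watt-hours per query comes from the reports above.)

```python
# Sanity check of the "microwave for about a second" comparison. The 1,000 W
# microwave rating is an assumption; 0.3 Wh is the reported per-query estimate.

query_wh = 0.3
query_joules = query_wh * 3600      # 1 Wh = 3,600 J
microwave_watts = 1000              # assumed typical microwave power draw

print(f"{query_joules / microwave_watts:.1f} seconds of microwave time")   # ~1.1 s
```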

But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak.  (To learn more, read our Power Hungry series online.)

Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online. 

Just as with climate change, we need to look at this as a system rather than a series of individual choices. 

Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too. 

That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact. 

Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. 

Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts.  In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. 

There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.  

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.  

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whichever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.

Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim they can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First of all, much of that chemistry depends on highly reactive radicals that react with any solid surface, and second, any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area.
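
The surface-area point follows from basic geometry: for a fixed amount of aerosol material, total surface area scales inversely with particle radius, so a cloud of many small particles offers far more surface for that background sulfuric acid to spread across. A rough illustration, with arbitrary particle sizes that are not meant to represent Stardust’s:

```python
# For spheres holding a fixed total volume V of material, the total surface
# area is A = 3V / r: halving the particle radius doubles the surface area.
# The radii below are arbitrary examples.
import math

def total_surface_area(total_volume, radius):
    n_particles = total_volume / ((4 / 3) * math.pi * radius**3)
    return n_particles * 4 * math.pi * radius**2   # simplifies to 3 * V / r

V = 1.0                                # arbitrary total aerosol volume
for r in (1e-6, 5e-7, 1e-7):           # 1.0, 0.5, and 0.1 micron radii
    print(f"radius {r*1e6:.1f} um -> surface area {total_surface_area(V, r):.1e} m^2")
```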

(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, a copy of which we’ve obtained, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. But the point of using sulfate for such studies was never that it was perfect; it was that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. 

Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms. 

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns. 

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.

We don’t want to claim that every single answer lies in academia. We’d be fools to not be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?   

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.  

But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.

The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?

Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture. 

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?  

David Keith is a professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.

This startup wants to clean up the copper industry

Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs. 

The company uses water-based reactions, based on battery chemistry technology, to purify copper in a process that could be less polluting than traditional smelting. The hope is that this alternative will also help ease growing strain on the copper supply chain.

“We’re really focused on addressing the copper supply crisis that’s looming ahead of us,” says Randy Allen, Still Bright’s cofounder and CEO.

Copper is a crucial ingredient in everything from electrical wiring to cookware today. And clean energy technologies like solar panels and electric vehicles are introducing even more demand for the metal. Global copper demand is expected to grow by 40% between now and 2040. 

As demand swells, so do the climate and environmental impacts of copper extraction, the process of refining ore into a pure metal. There’s also growing concern about the geographic concentration of the copper supply chain. Copper is mined all over the world, and historically, many of those mines had smelters on-site to process what they extracted. (Smelters form pure copper metal by essentially burning concentrated copper ore at high temperatures.) But today, the smelting industry has consolidated, with many mines shipping copper concentrates to smelters in Asia, particularly China.

That’s partly because smelting uses a lot of energy and chemicals, and it can produce sulfur-containing emissions that can harm air quality. “They shipped the environmental and social problems elsewhere,” says Simon Jowitt, a professor at the University of Nevada, Reno, and director of the Nevada Bureau of Mines and Geology.

It’s possible to scrub pollution out of a smelter’s emissions, and smelters are much cleaner than they used to be, Jowitt says. But overall, smelting centers aren’t exactly known for environmental responsibility. 

So even countries like the US, which have plenty of copper reserves and operational mines, largely ship copper concentrates, which contain up to around 30% copper, to China or other countries for smelting. (There are just two operational ore smelters in the US today.)

Still Bright avoids the pyrometallurgic process that smelters use in favor of a chemical approach, partially inspired by devices called vanadium flow batteries.

In the startup’s reactor, vanadium reacts with the copper compounds in copper concentrates. The copper metal remains a solid, leaving many of the impurities behind in the liquid phase. The whole thing takes between 30 and 90 minutes. The solid, which contains roughly 70% copper after this reaction, can then be fed into another, established process in the mining industry, called solvent extraction and electrowinning, to make copper that’s over 99% pure. 

This is far from the first attempt to use a water-based, chemical approach to processing copper. Today, some copper ore is processed with acid, for example, and Ceibo, a startup based in Chile, is trying to use a version of that process on the type of copper ore that’s traditionally smelted. The difference here is the specific chemistry, particularly the choice to use vanadium.

One of Still Bright’s founders, Jon Vardner, was researching copper reactions and vanadium flow batteries when he came up with the idea to marry a copper extraction reaction with an electrical charging step that could recycle the vanadium.

worker in the lab

COURTESY OF STILL BRIGHT

After the vanadium reacts with the copper, the liquid soup can be fed into an electrolyzer, which uses electricity to turn the vanadium back into a form that can react with copper again. It’s basically the same process that vanadium flow batteries use to charge up. 

While other chemical processes for copper refining require high temperatures or extremely acidic conditions to get the copper into solution, drive the reaction forward quickly, and ensure all the copper reacts, Still Bright’s process can run at ambient temperatures.

One of the major benefits of this approach is cutting the pollution from copper refining. Traditional smelting heats the target material to over 1,200 °C (about 2,200 °F), forming sulfur-containing gases that are released into the atmosphere. 

Still Bright’s process produces hydrogen sulfide gas as a by-product instead. It’s still a dangerous material, but one that can be effectively captured and converted into useful side products, Allen says.

Another source of potential pollution is the sulfide minerals left over after the refining process, which can form sulfuric acid when exposed to air and water (this is called acid mine drainage, common in mining waste). Still Bright’s process will also produce that material, and the company plans to carefully track it, ensuring that it doesn’t leak into groundwater. 

The company is currently testing its process in the lab in New Jersey and designing a pilot facility in Colorado, which will have the capacity to make about two tons of copper per year. Next will be a demonstration-scale reactor, which will have a 500-ton annual capacity and should come online in 2027 or 2028 at a mine site, Allen says. Still Bright recently raised an $18.7 million seed round to help with the scale-up process.

How the scale-up goes will be a crucial test of the technology, and of whether the typically conservative mining industry will jump on board, UNR’s Jowitt says: “You want to see what happens on an industrial scale. And I think until that happens, people might be a little reluctant to get into this.”