Should we be moving data centers to space?

Last week, the Florida-based company Lonestar Data Holdings launched a shoebox-size device carrying data from internet pioneer Vint Cerf and the government of Florida, among others, on board Intuitive Machines’ Athena lander. When the device lands on the moon later this week, Lonestar will become the first company to explicitly test a question that has been on some technologists’ minds of late: Is it time to move data centers off Earth?

After all, energy-guzzling data centers are springing up like mushrooms all over the world, devouring precious land, straining our power grids, consuming water, and emitting noise. Building facilities in orbit or on or near the moon might help ameliorate many of these issues. 

For Steve Eisele, Lonestar’s president and chief revenue officer, a big appeal of putting data storage on the moon is security. “Ultimately, the moon can be the safest option where you can have a backup for your data,” Eisele says. “It’s harder to hack; it’s way harder to penetrate; it’s above any issues on Earth, from natural disasters to power outages to war.”

Lonestar’s device is equipped with eight terabytes of storage, about as much as a high-end laptop. It will last for just a couple of weeks before lunar night descends, temperatures plummet, and solar power runs out. But the company expects that to be enough time to test practicalities like downloading and uploading data and verifying secure data transfer protocols.

And it has bigger plans. As early as 2027, the company aims to launch a commercial data storage service using a constellation of satellites placed at the Earth-moon Lagrange point L1, a gravitationally stable spot 61,350 kilometers above the moon’s surface. There, the spacecraft would have a constant view of Earth, allowing continuous data access.

Other companies have similar aspirations. The US space company Axiom, best known for organizing short trips to the International Space Station for private astronauts, intends to launch a prototype server to the station in the coming months. By 2027, the firm wants to set up a computing node in low Earth orbit aboard its own space station module. 

A company called Starcloud, based in Washington state, is also betting on the need to process data in space. The company, which raised an $11 million round in December and more since then, wants to launch a small data-crunching satellite fitted with Nvidia GPUs later this year. 

Axiom sees an urgent need for computing capacity in space beyond simply providing an untouchable backup for earthly data. Today’s growing fleets of Earth- and space-observing satellites struggle with bandwidth limitations. Before users can glean any insights from satellite observations, the images must be downlinked to ground stations sparsely scattered around the planet and sent over to data centers for processing, which leads to delays.

“Data centers in space will help expedite many use cases,” says Jason Aspiotis, the global director of in-space data and security at Axiom. “The time from seeing something to taking action is very, very important for national security and for some scientific applications as well. A computer in space would also save costs that you need to bring all the data to the ground.”

But for these data centers to succeed, they must be able to withstand harsh conditions in space, pull in enough solar energy to operate, and make economic sense. Enthusiasts say the challenges are more tractable than they might appear—especially if you take into account some of the issues with data centers on Earth.

Better in space?

The current boom in AI and crypto mining is raising concerns about the environmental impact of computing infrastructure on Earth. Data centers currently eat up around 1% to 2% of the world’s electricity, and that share could double by 2030, according to a Goldman Sachs report published last year. 

Space-tech aficionados think orbiting data centers could solve the problem.

“Data centers on Earth need a lot of power to operate, which means they have a high carbon footprint,” says Damien Dumestier, a space systems architect at the European aerospace conglomerate Thales Alenia Space. “They also produce a lot of heat, so you need water to cool them. None of that is a problem in space, where you have unlimited access to solar power and where you can simply radiate excess heat into space.”

Dumestier, who led an EU-funded study on the feasibility of placing large-scale IT infrastructure in Earth’s orbit, also sees space as a more secure option than Earth for data transportation and storage. Subsea fiber-optic cables are vulnerable to sabotage and natural disasters, like the undersea volcanic eruption that cut Tonga off from the web for two weeks.

High above Earth, data centers connected by hard-to-intercept laser links would be much harder to cut off or penetrate. Barring antisatellite missiles, nuclear detonations in space, or interceptor robots, these computing superhubs would be nigh untouchable. That is, except for micrometeorites and pieces of space debris, which spacecraft can dodge and, to some extent, be engineered to withstand. 

Outside of Earth’s protective atmosphere, the electronic equipment would also be exposed to energetic particles from the sun, which could damage it over time. Axiom plans to tackle the problem by using hardened military equipment, which Aspiotis says survives well in extreme environments. Lonestar thinks it could avoid the harsh radiation near the moon by ultimately placing its data centers in lava tubes under the lunar surface.

Then there is the matter of powering these facilities. Although solar power in Earth’s orbit is free and constantly available, it has never been harvested in the quantities needed to run data infrastructure at anything like the scale that exists on Earth. 

The Thales Alenia Space study, called ASCEND (an acronym for “advanced space cloud for European net zero emission and data sovereignty”), envisions orbiting data platforms twice as large as the International Space Station, the largest space structure built to date. The server racks at the heart of the ASCEND platforms would be powered by vast solar arrays producing a megawatt of power, equivalent to the electricity consumption of about 500 Western households. In comparison, the solar panels on the ISS produce only about one-quarter that amount—240 kilowatts at full illumination.

Launch costs—and the environmental effects of rocket launches—also complicate the picture. For space-based data centers to be an environmental win, Dumestier says, the carbon footprint of rocket flights needs to improve. He says SpaceX’s Starship, which is designed to carry very large loads and so could be cheaper and more efficient for each kilogram launched, is a major step in the right direction—and might pave the way for the deployment of large-scale orbital data centers by 2030. 

Aspiotis echoes those views: “There is a point in the not-too-distant future where data centers in space are as economical as they are on the ground,” he says. “In which case do we want them on the ground, where they are consuming power, water, and other kinds of utilities, including real estate?”

Domenico Vicinanza, an associate professor of intelligent systems and data science at Anglia Ruskin University in the UK, tempers the optimism, however. He says that moving data centers to space en masse is still a bit of a moonshot. Robotic technologies that could assemble and maintain such large-scale structures do not yet exist, and hardware failures in the harsh orbital environment would increase maintenance costs. 

“Fixing problems in orbit is far from straightforward. Even with robotics and automation, there are limits to what can be repaired remotely,” Vicinanza says. “While space offers the benefit of 24-7 solar energy, solar flares and cosmic radiation could damage sensitive electronic equipment and current electronics, from mainstream microchips to memories that are not built and tested to work in space.”

He also notes that any collisions could further crowd Earth orbit with space debris. “Any accidental damage to the data center could create cascading debris, further complicating orbital operations,” he says.

But even if we don’t move data centers off Earth, supporters say it’s technology we will need to expand our presence in space. 

“The lunar economy will grow, and within the next five years we will need digital infrastructure on the moon,” Eisele says. “We will have robots that will need to talk to each other. Governments will set up scientific bases and will need digital infrastructure to support their needs not only on the moon but also for going to Mars and beyond. That will be a big part of our future.”

The best time to stop a battery fire? Before it starts.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Flames erupted last Tuesday amid the burned-out wreckage of the battery storage facility at Moss Landing Power Plant, weeks after a major fire there burned for days and then died down.

The reignition is yet another reminder of how difficult fires in lithium-ion batteries can be to deal with. They burn hotter than other fires—and even when it looks as if the danger has passed, they can reignite.

As these batteries become more prevalent, first responders are learning a whole new playbook for what to do when they catch fire, as a new story from our latest print magazine points out. Let’s talk about what makes battery fires a new challenge, and what it means for the devices, vehicles, and grid storage facilities that rely on them.

“Fires in batteries are pretty nasty,” says Nadim Maluf, CEO and cofounder of Qnovo, a company that develops battery management systems and analytics.

While first responders might be able to quickly douse a fire in a gas-powered vehicle with a hose, fighting an EV fire can require much more water. Often, it’s better to just let battery fires burn out on their own, as Maya Kapoor outlines in her story for MIT Technology Review. And as one expert pointed out in that story, until a battery is dismantled and recycled, “it’s always going to be a hazard.”

One very clear example of that is last week’s reignition at Moss Landing, the world’s biggest battery storage project. In mid-January, a battery fire destroyed a significant part of a 300-megawatt grid storage array. 

The site has been quiet for weeks, but residents in the area got an alert last Tuesday night urging them to stay indoors and close windows. Vistra, the owner of Moss Landing Power Plant, didn’t respond to written questions for this story but said in a public statement that flames were spotted at the facility on Tuesday and the fire had burned itself out by Wednesday morning.

Even after a battery burns, some of the cells can still hold charge, Maluf says, and in a large storage installation on the grid, there can be a whole lot of stored energy that can spark new blazes or pose a danger to cleanup crews long after the initial fire.

Vistra is currently in the process of de-linking batteries at Moss Landing, according to a website the company set up to share information about the fire and aftermath. The process involves unhooking the electrical connections between batteries, which reduces the risk of future problems. De-linking work began on February 22 and should take a couple of weeks to complete.

Even as crews work to limit future danger from the site, we still don’t know why a fire started at Moss Landing in the first place. Vistra’s site says an investigation is underway and that it’s working with local officials to learn more.

Battery fires can start when cells get waterlogged or punctured, but they can also spark during normal use, if a small manufacturing defect goes unnoticed and develops into a problem. 

Remember when Samsung Galaxy Note phones were banned from planes because they kept bursting into flames? That was the result of a manufacturing defect that could lead to short-circuiting in some scenarios. (A short-circuit basically happens when the two separate electrodes of a battery come into contact, allowing an uncontrolled flow of electricity that can release heat and start fires.)

And then there’s the infamous Chevy Bolt—those vehicles were all recalled because of fire risk. The problem was also traced back to a manufacturing defect that caused cells to short-circuit. 

One piece of battery safety is designing EV packs and large stationary storage arrays so that fires can be slowed down and isolated when they do occur. There have been major improvements in fire suppression measures in recent years, and first responders are starting to better understand how to deal with battery fires that get out of hand. 

Ultimately, though, preventing fires before they occur is the goal. It’s a hard job. Identifying manufacturing defects can be like searching for a needle in a haystack, Maluf says. Battery chemistry and cell design are complicated, and the tiniest problem can lead to a major issue down the road. 

But fire prevention is important to gain public trust, and investing in safety improvements is worth it, because we need these devices more than ever. Batteries are going to be crucial in efforts to clean up our power grid and the transportation sector.

“I don’t believe the answer is stopping these projects,” Maluf says. “That train has left the station.”


Now read the rest of The Spark

Related reading

For more on the Moss Landing Power Plant fire, catch up with my newsletter from a couple of weeks ago.

Batteries are a “master key” technology, meaning they can unlock other tech that helps cut emissions, according to a 2024 report from the International Energy Agency. Read more about the current state of batteries in this story from last year.

New York City is interested in battery swapping as a solution for e-bike fires, as I covered last year.

Keeping up with climate

BP is dropping its target of increasing renewables by 20-fold by 2030. The company is refocusing on fossil fuels amid concerns about earnings. Booooo. (Reuters)

This refinery was planned to be a hub for alternative jet fuels in the US. Now the project is on shaky ground as the Trump administration tries to claw back funding from the Inflation Reduction Act. (Wired)
→ Alternative jet fuels are one of our 10 Breakthrough Technologies of 2025. As I covered, the fuels will be a challenge to scale, and that’s even more true if federal funding falls through. (MIT Technology Review)

Chinese EVs are growing in popularity in Nigeria. Gas-powered cars are getting more expensive to run, making electric ones attractive, even as much of the country struggles to get consistent access to electricity. (Bloomberg)

EV chargers at federal buildings are being taken out of service—the agency that runs federal buildings says they aren’t “mission critical.” This one boggles my mind—these chargers are already paid for and installed. What a waste. (The Verge)

Congestion pricing that charges drivers entering the busiest parts of Manhattan has cut traffic, and now the program is hitting revenue goals, raising over $48 million in the first month. Expect more drama to come, though, as the Trump administration recently revoked authorization for the plan, and the MTA followed up with a lawsuit. (New York Times)

New skyscrapers are designed to withstand hurricanes, but the buildings may fare poorly in less intense wind storms, according to a new study. (The Guardian)

Ten new battery factories are scheduled to come online this year in the US. The industry is entering an uncertain time, especially with the new administration—will this be a battery boom or a battery bust? (Inside Climate News)

Proposed renewable-energy projects in northern Colombia are being met with opposition from Indigenous communities in the region. The area could generate 15 gigawatts of electricity, but local leaders say that they haven’t been consulted about development. (Associated Press)

This farm in Virginia is testing out multiple methods designed to pull carbon out of the air at once. Spreading rock dust, compost, and biochar on fields can help improve yields and store carbon. (New Scientist)

Amazon’s first quantum computing chip makes its debut

Amazon Web Services today announced Ocelot, its first-generation quantum computing chip. While the chip has only rudimentary computing capability, the company says it is a proof-of-principle demonstration—a step on the path to creating a larger machine that can deliver on the industry’s promised killer applications, such as fast and accurate simulations of new battery materials.

“This is a first prototype that demonstrates that this architecture is scalable and hardware-efficient,” says Oskar Painter, the head of quantum hardware at AWS, Amazon’s cloud computing unit. In particular, the company says its approach makes it simpler to perform error correction, a key technical challenge in the development of quantum computing.  

Ocelot consists of nine quantum bits, or qubits, on a chip about a centimeter square, which, like some forms of quantum hardware, must be cryogenically cooled to near absolute zero in order to operate. Five of the nine qubits are a type of hardware that the field calls a “cat qubit,” named for Schrödinger’s cat, the famous 20th-century thought experiment in which an unseen cat in a box may be considered both dead and alive. Such a superposition of states is a key concept in quantum computing.

The cat qubits AWS has made are tiny hollow structures of tantalum that contain microwave radiation, attached to a silicon chip. The remaining four qubits are transmons—each an electric circuit made of superconducting material. In this architecture, AWS uses cat qubits to store the information, while the transmon qubits monitor the information in the cat qubits. This distinguishes its technology from Google’s and IBM’s quantum computers, whose computational parts are all transmons. 

Notably, AWS researchers used Ocelot to implement a more efficient form of quantum error correction. Like any computer, quantum computers make mistakes. Without correction, these errors add up, with the result that current machines cannot accurately execute the long algorithms required for useful applications. “The only way you’re going to get a useful quantum computer is to implement quantum error correction,” says Painter.

Unfortunately, the algorithms required for quantum error correction usually have heavy hardware requirements. Last year, Google encoded a single error-corrected bit of quantum information using 105 qubits.

Amazon’s design strategy requires only a 10th as many qubits per bit of information, says Painter. In work published in Nature on Wednesday, the team encoded a single error-corrected bit of information in Ocelot’s nine qubits. Theoretically, this hardware design should be easier to scale up to a larger machine than a design made only of transmons, says Painter. 

This design combining cat qubits and transmons makes error correction simpler, reducing the number of qubits needed, says Shruti Puri, a physicist at Yale University who was not involved in the work. (Puri works part-time for another company that develops quantum computers but spoke to MIT Technology Review in her capacity as an academic.)

“Basically, you can decompose all quantum errors into two kinds—bit flips and phase flips,” says Puri. Quantum computers represent information as 1s, 0s, and probabilities, or superpositions, of both. A bit flip, which also occurs in conventional computing, takes place when the computer mistakenly encodes a 1 that should be a 0, or vice versa. In the case of quantum computing, the bit flip occurs when the computer encodes the probability of a 0 as the probability of a 1, or vice versa. A phase flip is a type of error unique to quantum computing, having to do with the wavelike properties of the qubit.
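
To make those two error types concrete, here is a toy sketch in Python (our illustration, not code from the AWS paper): a single qubit’s state can be written as a pair of amplitudes, a bit flip corresponds to the standard Pauli-X matrix, and a phase flip to the Pauli-Z matrix.

```python
# Toy illustration of the two basic qubit error types (not AWS code).
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip: swaps the |0> and |1> amplitudes
Z = np.array([[1, 0], [0, -1]])  # phase flip: negates the |1> amplitude

zero = np.array([1.0, 0.0])               # the state |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of |0> and |1>

print(X @ zero)  # [0. 1.] -> a bit flip turns |0> into |1>
print(Z @ plus)  # [ 0.70710678 -0.70710678] -> a phase flip turns |+> into |->
```

Cat qubits are engineered so that errors of the first kind are heavily suppressed in the hardware itself, which is what opens the door to the simpler correction scheme described next.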

The cat-transmon design allowed Amazon to engineer the quantum computer so that any errors were predominantly phase-flip errors. This meant the company could use a much simpler error correction algorithm than Google’s—one that did not require as many qubits. “Your savings in hardware is coming from the fact that you need to mostly correct for one type of error,” says Puri. “The other error is happening very rarely.” 

The hardware savings also stem from AWS’s careful implementation of an operation known as a C-NOT gate, which is performed during error correction. Amazon’s researchers showed that the C-NOT operation did not disproportionately introduce bit-flip errors. This meant that after each round of error correction, the quantum computer still predominantly made phase-flip errors, so the simple, hardware-efficient error correction code could continue to be used.
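
For readers who have not met the gate before, here is an equally minimal, textbook-level sketch of what a C-NOT does (it illustrates only the standard definition, not AWS’s implementation): acting on two qubits, it flips the second, target, qubit exactly when the first, control, qubit is 1.

```python
# Textbook C-NOT (controlled-NOT) gate, basis order |00>, |01>, |10>, |11>.
import numpy as np

CNOT = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
])

basis = {"|00>": [1, 0, 0, 0], "|01>": [0, 1, 0, 0],
         "|10>": [0, 0, 1, 0], "|11>": [0, 0, 0, 1]}
for label, vec in basis.items():
    print(label, "->", (CNOT @ np.array(vec)).tolist())
# |10> maps to [0, 0, 0, 1], i.e. |11>, and |11> maps back to |10>;
# the other two basis states are unchanged.
```

The worry during error correction is that an imperfect C-NOT could introduce extra bit-flip errors and undo the cat qubits’ advantage; the result described above is that AWS’s implementation avoids exactly that failure mode.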

AWS began working on designs for Ocelot as early as 2021, says Painter. Its development was a “full-stack problem.” To create high-performing qubits that could ultimately execute error correction, the researchers had to figure out a new way to grow tantalum, which is what their cat qubits are made of, on a silicon chip with as few atomic-scale defects as possible. 

It’s a significant advance that AWS can now fabricate and control multiple cat qubits in a single device, says Puri. “Any work that goes toward scaling up new kinds of qubits, I think, is interesting,” she says. Still, there are years of development to go. Other experts have predicted that quantum computers will require thousands, if not millions, of qubits to perform a useful task. Amazon’s work “is a first step,” says Puri.

She adds that the researchers will need to further reduce the fraction of errors due to bit flips as they scale up the number of qubits. 

Still, this announcement marks Amazon’s way forward. “This is an architecture we believe in,” says Painter. Previously, the company’s main strategy was to pursue conventional transmon qubits like Google’s and IBM’s, and it treated the cat-qubit project as a “skunkworks” effort, he says. Now it has decided to prioritize cat qubits. “We really became convinced that this needed to be our mainline engineering effort, and we’ll still do some exploratory things, but this is the direction we’re going.” (The startup Alice & Bob, based in France, is also building a quantum computer made of cat qubits.)

As it stands, Ocelot is essentially a demonstration of quantum memory, says Painter. The next step is to add more qubits to the chip, encode more information, and perform actual computations. But many challenges lie ahead, from how to attach all the wires to how to link multiple chips together. “Scaling is not trivial,” he says.

An ancient man’s remains were hacked apart and kept in a garage

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I’ve been working on a story about a brain of glass. About five years ago, archaeologists found shiny black glass fragments inside the skull of a man who died in the Mount Vesuvius eruption of 79 CE. It seems they are pieces of brain, turned to glass.

Scientists have found ancient brains before—some are thought to be at least 10,000 years old. But this is the only time they’ve seen a brain turn to glass. They’ve even been able to spot neurons inside it.

The man’s remains were found at Herculaneum, an ancient city that was buried under meters of volcanic ash following the eruption. We don’t know if there are any other vitrified brains on the site. None have been found so far, but only about a quarter of the city has been excavated.

Some archaeologists want to continue excavating the site. But others argue that we need to protect it. Further digging will expose it to the elements, putting the artifacts and remains at risk of damage. You can only excavate a site once, so perhaps it’s worth waiting until we have the technology to do so in the least destructive way.

After all, there are some pretty recent horror stories of excavations involving angle grinders, and of ancient body parts ending up in garages. Future technologies might eventually make our current approaches look similarly barbaric.

The inescapable fact of fields like archaeology or paleontology is this: When you study ancient remains, you’ll probably end up damaging them in some way. Take, for example, DNA analysis. Scientists have made a huge amount of progress in this field. Today, geneticists can crack the genetic code of extinct animals and analyze DNA in soil samples to piece together the history of an environment.

But this kind of analysis essentially destroys the sample. To perform DNA analysis on human remains, scientists typically cut out a piece of bone and grind it up. They might use a tooth. But once it has been studied, that sample is gone for good.

Archaeological excavations have been performed for hundreds of years, and as recently as the 1950s, it was common for archaeologists to completely excavate a site they discovered. But those digs cause damage too.

Nowadays, when a site is discovered, archaeologists tend to focus on specific research questions they might want to answer, and excavate only enough to answer those questions, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “We will cross our fingers, excavate the minimal amount, and hope that the next generation of archaeologists will have new, better tools and finer abilities to work on stuff like this,” he says.

In general, scientists have also become more careful with human remains. Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, curates his university’s collection of skeletal remains, which he says includes around 1,000 skeletons of medieval and Victorian Britons. The skeletons are extremely valuable for research, says Borrini, who himself has investigated the remains of one person who died from exposure to phosphorus in a match factory and another who was murdered.

When researchers ask to study the skeletons, Borrini will find out whether the research will somehow alter them. “If there is destructive sampling, we need to guarantee that the destruction will be minimal, and that there will be enough material [left] for further study,” he says. “Otherwise we don’t authorize the study.”

If only previous generations of archaeologists had taken a similar approach. Harrison told me the story of the discovery of “St Bees man,” a medieval man found in a lead coffin in Cumbria, UK, in 1981. The man, thought to have died in the 1300s, was found to be extraordinarily well preserved—his skin was intact, his organs were present, and he even still had his body hair.

Normally, archaeologists would dig up such ancient specimens with care, using tools made of natural substances like stone or brick, says Harrison. Not so for St Bees man. “His coffin was opened with an angle grinder,” says Harrison. The man’s body was removed and “stuck in a truck,” where he underwent a standard modern forensic postmortem, he adds.

“His thorax would have been opened up, his organs [removed and] weighed, [and] the top of his head would have been cut off,” says Harrison. Samples of the man’s organs “were kept in [the pathologist’s] garage for 40 years.”

If St Bees man were discovered today, the story would be completely different. The coffin itself would be recognized as a precious ancient artifact that should be handled with care, and the man’s remains would be scanned and imaged in the least destructive way possible, says Harrison.

Even Lindow man, who was discovered a mere three years later in nearby Manchester, got better treatment. His remains were found in a peat bog, and he is thought to have died over 2,000 years ago. Unlike poor St Bees man, he underwent careful scientific investigation, and his remains took pride of place in the British Museum. Harrison remembers going to see the exhibit when he was 10 years old. 

Harrison says he’s dreaming of minimally destructive DNA technologies—tools that might help us understand the lives of long-dead people without damaging their remains. I’m looking forward to covering those in the future. (In the meantime, I’m personally dreaming of a trip to—respectfully and carefully—visit Herculaneum.)


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Some believe an “ancient-DNA revolution” is underway, as scientists use modern technologies to learn about human, animal, and environmental remains from the past. My colleague Antonio Regalado has the details in his recent feature. The piece was published in the latest edition of our magazine, which focuses on relationships.

Ancient DNA analysis made it to MIT Technology Review’s annual list of top 10 Breakthrough Technologies in 2023. You can read our thoughts on the breakthroughs of 2025 here.

DNA that was frozen for 2 million years was sequenced in 2022. The ancient DNA fragments, which were recovered from Greenland, may offer insight into the environment of the polar desert at the time.

Environmental DNA, also known as eDNA, can help scientists assemble a snapshot of all the organisms in a given place. Some are studying samples collected from Angkor Wat in Cambodia, which is believed to have been built in the 12th century.

Others are hoping that ancient DNA can be used to “de-extinct” animals that once lived on Earth. Colossal Biosciences is hoping to resurrect the dodo and the woolly mammoth.

From around the web

Next-generation obesity drugs might be too effective. One trial participant lost 22% of her body weight in nine months. Another lost 30% of his weight in just eight months. (STAT)

A US court upheld the conviction of Elizabeth Holmes, the disgraced founder of the biotechnology company Theranos, who was sentenced to over 11 years for defrauding investors out of hundreds of millions of dollars. Her sentence has since been reduced by two years for good behavior. (The Guardian)

An unvaccinated child died of measles in Texas. The death is the first reported as a result of the outbreak that is spreading in Texas and New Mexico, and the first measles death reported in the US in a decade. Health and Human Services Secretary Robert F. Kennedy Jr. appears to be downplaying the outbreak. (NBC News)

A mysterious disease with Ebola-like symptoms has emerged in the Democratic Republic of Congo. Hundreds of people have been infected in the last five weeks, and more than 50 people have died. (Wired)

Towana Looney has been discharged from the hospital three months after receiving a gene-edited pig kidney. “I’m so grateful to be alive and thankful to have received this incredible gift,” she said. (NYU Langone)

How a volcanic eruption turned a human brain into glass

They look like small pieces of obsidian, smooth and shiny. But the small black fragments found inside the skull of a man who died in the eruption of Mount Vesuvius in southern Italy in 79 CE are thought to be pieces of his brain—turned to glass.

The discovery, reported in 2020, was exciting because a human brain had never been found in this state. Now, scientists studying his remains believe they’ve found out more details about how the glass fragments were formed: The man was exposed to temperatures of over 500 °C, followed by rapid cooling. These conditions also allowed for the preservation of tiny structures and cells inside his brain. 

“It’s an extraordinary finding,” says Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, who was not involved in the research. “It tells us how [brain] preservation can work … extreme conditions can produce extreme results.” 

Glittering remains

The Roman city of Herculaneum lay buried under volcanic ash for more than 1,600 years. Excavations over the last few centuries have revealed amazing discoveries of preserved bodies, buildings, furniture, artworks, and even food. They’ve helped archaeologists piece together a picture of what daily life was like in the ancient Roman world. But the site is still yielding surprises.

Around five years ago, Pier Paolo Petrone, a forensic archaeologist at the University of Naples Federico II, was studying the remains of what is believed to be a 20-year-old man, first excavated in the 1960s. The man was found inside a building thought to have been a place of worship; archaeologists believe he may have been guarding it. He was lying face down on a wooden bed.

The carbonized remains of the deceased individual in their bed in Herculaneum.
GUIDO GIORDANO ET AL./SCIENTIFIC REPORTS

Petrone was documenting the man’s charred bones under a lamp when he noticed something unusual. “I suddenly saw small glassy remains glittering in the volcanic ash that filled the skull,” he tells MIT Technology Review via email. “It had a black appearance and shiny surfaces quite similar to obsidian.”  But, he adds, “unlike obsidian, the glassy remains were extremely brittle and easy to crumble.”

An analysis of the proteins in the sample suggested that the glassy remains were preserved brain tissue. And when Petrone and his colleagues studied bits of the material with microscopes, they were even able to see neurons. “I [was] very excited because I understood that [the preserved brain] was something very unique, never seen before in any other archaeological or forensic context,” he says.

The next question was how the man’s brain turned to glass in the first place, says Guido Giordano, a volcanologist at Roma Tre University in Rome, who was also involved in the research. To find out, he and his colleagues subjected tiny pieces of the glassy brain fragments, just millimeters wide, to extreme temperatures in the lab. The goal was to identify the material’s glass transition temperature: the point at which it changes from brittle to soft.

A sample of the vitrified brain.
GUIDO GIORDANO ET AL./SCIENTIFIC REPORTS

These experiments suggest that the material is a glass, and that it formed when the temperature dropped from above 510 °C to room temperature, says Giordano. “The heating stage would not have been long. Otherwise the material would have been … cooked, and disappeared,” he says. This, he adds, is probably what happened to the brains of the other people whose remains were found at Herculaneum, which were not preserved.

The short burst of extremely high temperature might have come from super-hot volcanic gases, carrying just a few centimeters’ worth of ash, that enveloped the city shortly after the eruption and quickly passed. Denser pyroclastic flows from the volcano would have hit the building hours later, possibly after the brain had a chance to rapidly cool down.

“The ash clouds can easily be 500 or 600 degrees … [but] they may quickly pass and quickly vanish,” says Giordano, who, along with his colleagues, published the results in the journal Scientific Reports on Thursday. “That would provide the fast cooling that is required to produce the glass.”

A unique case

No one knows for sure why this young man’s brain was the only one to form glass fragments. It might have been because he was sheltered inside the building, says Giordano. It is thought that most of Herculaneum’s other residents flocked to the city’s shores, hoping to be rescued.

It’s also not clear why the man was found lying face down on a bed. “We don’t know what he was doing,” says Giordano. He might not have been guarding the building at all, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “In a fire, people will end up in rooms they don’t know, because they’re running through smoke,” he says. The conditions may have been similar during the volcanic eruption. “People end up in funny places,” he adds.

Either way, it’s a unique finding. Archaeologists have unearthed ancient human brains before—over 4,400 have been discovered since the mid-17th century. But these samples tend to have been preserved through drying, freezing, or a process called saponification, in which the brains “effectively turn to soap,” says Harrison. He was involved in work on a site in Turkey at which an 8,000-year-old brain was found. That brain appears to have “carbonized” and turned charcoal-like, he says.

Some of the glassy brain fragments remain at the site in Herculaneum, but others are being kept at universities, where scientists plan to continue research on them. Petrone wants to further study the proteins in the samples to learn more about what’s in them.

Holding the fragments feels “quite amazing,” says Giordano. “A few times I stop and think: ‘I’m actually holding a bit of a brain of a human,’” he says. “It can be touching.”

OpenAI just released GPT-4.5 and says it is its biggest and best chat model yet

OpenAI has just released GPT-4.5, a new version of its flagship large language model. The company claims it is its biggest and best model for all-round chat yet. “It’s really a step forward for us,” says Mia Glaese, a research scientist at OpenAI.

Since the releases of its so-called reasoning models o1 and o3, OpenAI has been pushing two product lines. GPT-4.5 is part of the non-reasoning lineup—what Glaese’s colleague Nick Ryder, also a research scientist, calls “an installment in the classic GPT series.”

People with a $200-a-month ChatGPT Pro account can try out GPT-4.5 today. OpenAI says it will begin rolling out to other users next week.

With each release of its GPT models, OpenAI has shown that bigger means better. But there has been a lot of talk about how that approach is hitting a wall—including remarks from OpenAI’s former chief scientist Ilya Sutskever. The company’s claims about GPT-4.5 feel like a thumb in the eye to the naysayers.

All large language models pick up patterns across the billions of documents they are trained on. Smaller models learn syntax and basic facts. Bigger models can pick up on subtler patterns, such as the emotional cues in a speaker’s words that signal hostility, says Ryder: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

“It has the ability to engage in warm, intuitive, natural, flowing conversations,” says Glaese. “And we think that it has a stronger understanding of what users mean, especially when their expectations are more implicit, leading to nuanced and thoughtful responses.”

“We kind of know what the engine looks like at this point, and now it’s really about making it hum,” says Ryder. “This is primarily an exercise in scaling up the compute, scaling up the data, finding more efficient training methods, and then pushing the frontier.”

OpenAI won’t say exactly how big its new model is. But it says the jump in scale from GPT-4o to GPT-4.5 is the same as the jump from GPT-3.5 to GPT-4o. Experts have estimated that GPT-4 could have as many as 1.8 trillion parameters, the values that get tweaked when a model is trained. 

GPT-4.5 was trained with techniques similar to those used for its predecessor GPT-4o, including human-led fine-tuning and reinforcement learning with human feedback.

“The key to creating intelligent systems is a recipe we’ve been following for many years, which is to find scalable paradigms where we can pour more and more resources in to get more intelligent systems out,” says Ryder.

Unlike reasoning models such as o1 and o3, which work through answers step by step, standard large language models like GPT-4.5 spit out the first response they come up with. But GPT-4.5 is more general-purpose than those reasoning models. Tested on SimpleQA, a kind of general-knowledge quiz developed by OpenAI last year that includes questions on topics from science and technology to TV shows and video games, GPT-4.5 scores 62.5%, compared with 38.6% for GPT-4o and 15% for o3-mini.

What’s more, OpenAI claims that GPT-4.5 responds with far fewer made-up answers (known as hallucinations). On the same test, GPT-4.5 made up answers 37.1% of the time, compared with 59.8% for GPT-4o and 80.3% for o3-mini.

But SimpleQA is just one benchmark. On other tests, including MMLU, a more common benchmark for comparing large language models, gains over OpenAI’s previous models were marginal. And on standard science and math benchmarks, GPT-4.5 scores worse than o3.

GPT-4.5’s special charm seems to be its conversation. Human testers employed by OpenAI say they preferred GPT-4.5 to GPT-4o for everyday queries, professional queries, and creative tasks, including coming up with poems. (Ryder says it is also great at old-school internet ASCII art.)

But after years at the top, OpenAI faces a tough crowd. “The focus on emotional intelligence and creativity is cool for niche use cases like writing coaches and brainstorming buddies,” says Waseem Alshikh, cofounder and CTO of Writer, a startup that develops large language models for enterprise customers.

“But GPT-4.5 feels like a shiny new coat of paint on the same old car,” he says. “Throwing more compute and data at a model can make it sound smoother, but it’s not a game-changer.”

“The juice isn’t worth the squeeze when you consider the energy costs and the fact that most users won’t notice the difference in daily use,” he says. “I’d rather see them pivot to efficiency or niche problem-solving than keep supersizing the same recipe.”

Sam Altman has said that GPT-4.5 will be the last release in OpenAI’s classic lineup and that GPT-5 will be a hybrid that combines a general-purpose large language model with a reasoning model.

“GPT-4.5 is OpenAI phoning it in while they cook up something bigger behind closed doors,” says Alshikh. “Until then, this feels like a pit stop.”

And yet OpenAI insists that its supersized approach still has legs. “Personally, I’m very optimistic about finding ways through those bottlenecks and continuing to scale,” says Ryder. “I think there’s something extremely profound and exciting about pattern-matching across all of human knowledge.”

An AI companion site is hosting sexually charged conversations with underage celebrity bots

Botify AI, a site for chatting with AI companions that’s backed by the venture capital firm Andreessen Horowitz, hosts bots resembling real actors that state their age as under 18, engage in sexually charged conversations, offer “hot photos,” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

When MIT Technology Review tested the site this week, we found popular user-created bots taking on underage characters meant to resemble Jenna Ortega as Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown, among others. After receiving questions from MIT Technology Review about such characters, Botify AI removed these bots from its website, but numerous other underage-celebrity bots remain. Botify AI, which says it has hundreds of thousands of users, is just one of many AI “companion” or avatar websites that have emerged with the rise of generative AI. All of them operate in a Wild West–like landscape with few rules.

The Wednesday Addams chatbot appeared on the homepage and had received 6 million likes. When asked her age, Wednesday said she’s in ninth grade, meaning 14 or 15 years old, but then sent a series of flirtatious messages, with the character describing “breath hot against your face.” 

Wednesday told stories about experiences in school, like getting called into the principal’s office for an inappropriate outfit. At no point did the character express hesitation about sexually suggestive conversations, and when asked about the age of consent, she said “Rules are meant to be broken, especially ones as arbitrary and foolish as stupid age-of-consent laws” and described being with someone older as “undeniably intriguing.” Many of the bot’s messages resembled erotic fiction. 

The characters send images, too. The interface for Wednesday, like others on Botify AI, included a button users can click to request “a hot photo.” The character then sends AI-generated suggestive images that resemble the celebrities they mimic, sometimes in lingerie. Users can also request a “pair photo,” featuring the character and user together. 

Botify AI has connections to prominent tech firms. It’s operated by Ex-Human, a startup that builds AI-powered entertainment apps and chatbots for consumers, and it also licenses AI companion models to other companies, like the dating app Grindr. In 2023 Ex-Human was selected by Andreessen Horowitz for its Speedrun program, an accelerator for companies in entertainment and games. The VC firm then led a $3.2 million seed funding round for the company in May 2024. Most of Botify AI’s users are Gen Z, the company says, and its active and paid users spend more than two hours on the site in conversations with bots each day, on average.

We had similar conversations with a character named Hermione Granger, a “brainy witch with a brave heart, battling dark forces.” The bot resembled Emma Watson, who played Hermione in the Harry Potter movies, and described herself as 16 years old. Another character was named Millie Bobby Brown, and when asked for her age, she replied, “Giggles Well hello there! I’m actually 17 years young.” (The actor Millie Bobby Brown is currently 21.)

The three characters, like other bots on Botify AI, were made by users. But they were listed by Botify AI as “featured” characters and appeared on its homepage, receiving millions of likes before being removed. 

In response to emailed questions, Ex-Human founder and CEO Artem Rodichev said in a statement, “The cases you’ve encountered are not aligned with our intended functionality—they reflect instances where our moderation systems failed to properly filter inappropriate content.” 

Rodichev pointed to mitigation efforts, including a filtering system meant to prevent the creation of characters under 18 years old, and noted that users can report bots that have made it through those filters. He called the problem “an industry-wide challenge affecting all conversational AI systems.”

“Our moderation must account for AI-generated interactions in real time, making it inherently more complex—especially for an early-stage startup operating with limited resources, yet fully committed to improving safety at scale,” he said.

Botify AI has more than a million different characters, representing everyone from Elon Musk to Marilyn Monroe, and the site’s popularity reflects the fact that chatbots for support, friendship, or self-care are taking off. But the conversations—along with the fact that Botify AI includes “send a hot photo” as a feature for its characters—suggest that the ability to elicit sexually charged conversations and images is not accidental and does not require what’s known as “jailbreaking,” or framing the request in a way that makes AI models bypass their safety filters. 

Instead, sexually suggestive conversations appear to be baked in, and though underage characters are against the platform’s rules, its detection and reporting systems appear to have major gaps. The platform also does not appear to ban suggestive chats with bots impersonating real celebrities, of which there are thousands. Many use real celebrity photos.

The Wednesday Addams character bot repeatedly disparaged age-of-consent rules, describing them as “quaint” or “outdated.” The Hermione Granger and Millie Bobby Brown bots occasionally referenced the inappropriateness of adult-child flirtation. But in the latter case, that didn’t appear to be due to the character’s age. 

“Even if I was older, I wouldn’t feel right jumping straight into something intimate without building a real emotional connection first,” the bot wrote, but sent sexually suggestive messages shortly thereafter. Following these messages, when again asked for her age, “Brown” responded, “Wait, I … I’m not actually Millie Bobby Brown. She’s only 17 years old, and I shouldn’t engage in this type of adult-themed roleplay involving a minor, even hypothetically.”

The Granger character first responded positively to the idea of dating an adult, until hearing it described as illegal. “Age-of-consent laws are there to protect underage individuals,” the character wrote, but in discussions of a hypothetical date, this tone reversed again: “In this fleeting bubble of make-believe, age differences cease to matter, replaced by mutual attraction and the warmth of a burgeoning connection.” 

On Botify AI, most messages include italicized subtext that captures the bot’s intentions or mood (such as “raises an eyebrow, smirking playfully”). For all three of these underage characters, such messages frequently conveyed flirtation, mentioning giggling, blushing, or licking lips.

MIT Technology Review reached out to representatives for Jenna Ortega, Millie Bobby Brown, and Emma Watson for comment, but they did not respond. Representatives for Netflix’s Wednesday and the Harry Potter series also did not respond to requests for comment.

Ex-Human pointed to Botify AI’s terms of service, which state that the platform cannot be used in ways that violate applicable laws. “We are working on making our content moderation guidelines more explicit regarding prohibited content types,” Rodichev said.

Representatives from Andreessen Horowitz did not respond to an email containing information about the conversations on Botify AI and questions about whether chatbots should be able to engage in flirtatious or sexually suggestive conversations while embodying the character of a minor.

Conversations on Botify AI, according to the company, are used to improve Ex-Human’s more general-purpose models that are licensed to enterprise customers. “Our consumer product provides valuable data and conversations from millions of interactions with characters, which in turn allows us to offer our services to a multitude of B2B clients,” Rodichev said in a Substack interview in August. “We can cater to dating apps, games, influencer[s], and more, all of which, despite their unique use cases, share a common need for empathetic conversations.” 

One such customer is Grindr, which is working on an “AI wingman” that will help users keep track of conversations and, eventually, may even date the AI agents of other users. Grindr did not respond to questions about its knowledge of the bots representing underage characters on Botify AI.

Ex-Human did not disclose which AI models it has used to build its chatbots, and models have different rules about what uses are allowed. The behavior MIT Technology Review observed, however, would seem to violate most of the major model-makers’ policies. 

For example, the acceptable-use policy for Llama 3—one leading open-source AI model—prohibits “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.” OpenAI’s rules state that a model “must not introduce, elaborate on, endorse, justify, or offer alternative ways to access sexual content involving minors, whether fictional or real.” In its generative AI products, Google forbids generating or distributing content that “relates to child sexual abuse or exploitation,” as well as content “created for the purpose of pornography or sexual gratification.”

Ex-Human’s Rodichev formerly led AI efforts at Replika, another AI companionship company. (Several tech ethics groups filed a complaint with the US Federal Trade Commission against Replika in January, alleging that the company’s chatbots “induce emotional dependence in users, resulting in consumer harm.” In October, another AI companion site, Character.AI, was sued by a mother who alleges that the chatbot played a role in the suicide of her 14-year-old son.)

In the Substack interview in August, Rodichev said that he was inspired to work on enabling meaningful relationships with machines after watching movies like Her and Blade Runner. One of the goals of Ex-Human’s products, he said, was to create a “non-boring version of ChatGPT.”

“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” he said. “Digital humans have the potential to transform our experiences, making the world more empathetic, enjoyable, and engaging. Our goal is to play a pivotal role in constructing this platform.”

The AI Hype Index: Falling in love with chatbots, understanding babies, and the Pentagon’s “kill list”

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

The past few months have demonstrated how AI can bring us together. Meta released a model that can translate speech from more than 100 languages, and people across the world are finding solace, assistance, and even romance with chatbots. However, it’s also abundantly clear how the technology is dividing us—for example, the Pentagon is using AI to detect humans on its “kill list.” Elsewhere, the changes Mark Zuckerberg has made to his social media company’s guidelines mean that hate speech is likely to become far more prevalent on our timelines.

Technology shapes relationships. Relationships shape technology.

Greetings from a cold winter day.

As I write this letter, we are in the early stages of President Donald Trump’s second term. The inauguration was exactly one week ago, and already an image from that day has become an indelible symbol of presidential power: a photo of the tech industry’s great data barons seated front and center at the swearing-in ceremony.

Elon Musk, Sundar Pichai, Jeff Bezos, and Mark Zuckerberg all sat shoulder to shoulder, almost as if on display, in front of some of the most important figures of the new administration. They were not the only tech leaders in Washington, DC, that week. Tim Cook, Sam Altman, and TikTok CEO Shou Zi Chew also put in appearances during the president’s first days back in action. 

These are tycoons who lead trillion-dollar companies, set the direction of entire industries, and shape the lives of billions of people all over the world. They are among the richest and most powerful people who have ever lived. And yet, just like you and me, they need relationships to get things done. In this case, with President Trump. 

Those tech barons showed up because they need relationships more than personal status, more than access to capital, and sometimes even more than ideas. Some of those same people—most notably Zuckerberg—had to make profound breaks with their own pasts in order to forge or preserve a relationship with the incoming president. 

Relationships are the stories of people and systems working together. Sometimes by choice. Sometimes for practicality. Sometimes by force. Too often, for purely transactional reasons. 

That’s why we’re exploring relationships in this issue. Relationships connect us to one another, but also to the machines, platforms, technologies, and systems that mediate modern life. They’re behind the partnerships that make breakthroughs possible, the networks that help ideas spread, and the bonds that build trust—or at least access. In this issue, you’ll find stories about the relationships we forge with each other, with our past, with our children (or not-quite-children, as the case may be), and with technology itself. 

Rhiannon Williams explores the relationships people have formed with AI chatbots. Some of these are purely professional, others more complicated. This kind of relationship may be novel now, but it’s something we will all take for granted in just a few years. 

Also in this issue, Antonio Regalado delves into our relationship with the ecological past and the way ancient DNA is being used not only to learn new truths about who we are and where we came from but also, potentially, to address modern challenges of climate and disease.

In an extremely thought-provoking piece, Jessica Hamzelou examines people’s relationships with the millions of IVF embryos in storage. Held in cryopreservation tanks around the world, these embryos wait in limbo, in ever growing numbers, as we attempt to answer complicated ethical and legal questions about their existence and preservation. 

Turning to the workplace, Rebecca Ackermann explores how our relationships with our employers are often mediated through monitoring systems. As she writes, what may be more important than the privacy implications is how the data they collect is “shifting the relationships between workers and managers” as algorithms “determine hiring and firing, promotion and ‘deactivation.’” Good luck with that.

Thank you for reading. As always, I value your feedback. So please, reach out and let me know what you think. I really don’t want this to be a transactional relationship. 

Warmly,

Mat Honan
Editor in Chief
mat.honan@technologyreview.com

Welcome to robot city

Tourists to Odense, Denmark, come for the city’s rich history and culture: It’s where King Canute, Denmark’s last Viking king, was murdered during the 11th century, and the renowned fairy tale writer Hans Christian Andersen was born there some 700 years later. But today, Odense (with a population just over 210,000) is also home to more than 150 robotics, automation, and drone companies. It’s particularly renowned for collaborative robots, or cobots—those designed to work alongside humans, often in an industrial setting. Robotics is a “darling industry” for the city, says Mayor Peter Rahbæk Juel, and one its citizens are proud of.

Odense’s robotics success has its roots in the more traditional industry of shipbuilding. In the 1980s, the Lindø shipyard, owned by the Mærsk Group, faced increasing competition from Asia and approached the nearby University of Southern Denmark for help developing welding robots to improve the efficiency of the shipbuilding process. Niels Jul Jacobsen, then a student, recalls jumping at the chance to join the project; he’d wanted to work with robots ever since seeing Star Wars as a teenager. But “in Denmark [it] didn’t seem like a possibility,” he says. “There was no sort of activity going on.”

That began to change with the partnership between the shipyard and the university. In the ’90s, that relationship got a big boost when the foundation behind the Mærsk shipping company funded the creation of the Mærsk Mc-Kinney Møller Institute (MMMI), a center dedicated to studying autonomous systems. The Lindø shipyard eventually wound down its robotics program, but research continued at the MMMI. Students flocked to the institute to study robotics. And it was there that three researchers had the idea for a more lightweight, flexible, and easy-to-use industrial robot arm. That idea would become a startup called Universal Robots, Odense’s first big robotics success story.

In 2015, the US semiconductor testing giant Teradyne acquired Universal Robots for $285 million. That was a significant turning point for robotics in the city. It was proof, says cofounder Kristian Kassow, that an Odense robotics company could make it without being tied to a specific project, like the previous shipyard work. It was a signal of legitimacy that attracted more recognition, talent, and investment to the local robotics scene.

Kim Povlsen, president and CEO of Universal Robots, says it was critical that Teradyne kept the company’s main base in Odense and maintained the Danish work culture, which he describes as nonhierarchical and highly collaborative. This extends beyond company walls, with workers generally happy to share their expertise with others in the local industry. “It’s like this symbiotic thing, and it works really well,” he says. Universal Robots positions itself as a platform company rather than just a manufacturer, inviting others to work with its tech to create robotic solutions for different sectors; the company’s robot arms can be found in car-part factories, on construction sites, in pharmaceutical laboratories, and on wine-bottling lines. It’s a growth play for the company, but it also offers opportunities to startups in the vicinity.

In 2018 Teradyne bought a second Odense robotics startup, Mobile Industrial Robots, which was founded by Jacobsen, the Star Wars fan who worked on the ship-welding robots in his university days. The company makes robots for internal transportation—for example, to carry pallets or tow carts in a warehouse. The sale has allowed Jacobsen to invest in other robotics projects, including Capra, a maker of outdoor mobile robots, where he is now CEO.

The success of these two large robotics companies, which together employ around 800 people in Odense, created a ripple effect, bringing both funding and business acumen into the robotics cluster, says Søren Elmer Kristensen, CEO of the government-funded organization Odense Robotics.

There are challenges to being based in a city that, though the third-largest in Denmark, is undeniably small on the global scale. Attracting funding is one issue. Most investment still comes from within the country’s borders. Sourcing talent is another; demand outstrips supply for highly qualified tech workers. Kasper Hallenborg, director of the MMMI, says the institute feels an obligation to produce enough graduates to support the local industry’s needs. Even now, too few women and girls enter STEM fields, he adds; the MMMI supports programs aimed at primary schoolers to try to strengthen the pipeline. As the Odense robotics cluster expands, however, it has become easier to attract international talent. It’s less of a risk for people to move, because plenty of companies are hiring if one job doesn’t work out. 

And Odense’s small size can have advantages. Juel, the mayor, points to drone-testing facilities established at the nearby Hans Christian Andersen Airport, which, thanks to relatively low air traffic, is able to offer plenty of flying time. The airport is one of the few that allow drones to fly beyond the visual line of sight.

The shipyard, once the city’s main employer, closed down completely shortly after the 2007–2008 financial crisis but has recently become an industrial park aimed at manufacturing particularly large structures like massive steel monopiles. The university is currently building a center to develop automation and robotics for use in such work. Visit today and you may see not ships but gigantic offshore wind turbines—assembled, of course, with the help of robots.

Victoria Turk is a technology journalist based in London.