Stijn Lemmens has a cleanup job like few others. A senior space debris mitigation analyst at the European Space Agency (ESA), Lemmens works on counteracting space pollution by collaborating with spacecraft designers and the wider industry to create missions less likely to clutter the orbital environment.
Although significant attention has been devoted to launching spacecraft into space, the idea of what to do with their remains has been largely ignored. Many previous missions did not have an exit strategy. Instead of being pushed into orbits where they could reenter Earth’s atmosphere and burn up, satellites were simply left in orbit at the ends of their lives, creating debris that must be monitored and, if possible, maneuvered around to avoid a collision. “For the last 60 years, we’ve been using [space] as if it were an infinite resource,” Lemmens says. “But particularly in the last 10 years, it has become rather clear that this is not the case.”
Engineering the ins and outs: Step one in reducing orbital clutter—or, colloquially, space trash—is designing spacecraft that safely leave space when their missions are complete. “I thought naïvely, as a student, ‘How hard can that be?’” says Lemmens. The answer turned out to be more complicated than he expected.
At ESA, he works with scientists and engineers on specific missions to devise good approaches. Some incorporate propulsion that works reliably even decades after launch; others involve designing systems that can move spacecraft to keep them from colliding with other satellites and with space debris. They also work on plans to get the remains through the atmosphere without large risks to aviation and infrastructure.
Standardizing space: Earth’s atmosphere exerts a drag on satellites that will eventually pull them out of orbit. National and international guidelines recommend that satellites lower their altitude at the end of their operational lives so that atmospheric drag can pull them into reentry sooner. Previously the goal was for this to take 25 years at most; Lemmens and his peers now suggest five years or less, a time frame that would have to be taken into account from the start of mission planning and design.
Explaining the need for this change in policy can feel a bit like preaching, Lemmens says, and it’s his least favorite part of the job. It’s a challenge, he says, to persuade people not to think of the vastness of space as “an infinite amount of orbits.” Without change, the amount of space debris may create a serious problem in the coming decades, cluttering orbits and increasing the number of collisions.
Shaping the future: Lemmens says his wish is for his job to become unnecessary in the future, but with around 11,500 satellites and over 35,000 debris objects being tracked, and more launches planned, that seems unlikely to happen.
Researchers are looking into more drastic changes to the way space missions are run. We might one day, for instance, be able to dismantle satellites and find ways to recycle their components in orbit. Such an approach isn’t likely to be used anytime soon, Lemmens says. But he is encouraged that more spacecraft designers are thinking about sustainability: “Ideally, this becomes the normal in the sense that this becomes a standard engineering practice that you just think of when you’re designing your spacecraft.”
When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback.
The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic.
But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all.
Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now.
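That per-electron shift is what radio astronomers call dispersion: the lower frequencies in a burst arrive slightly later than the higher ones, by an amount set by how many free electrons the wave has swept through. A minimal sketch of the standard cold-plasma delay formula (the dispersion measure and frequencies here are illustrative values, not a real burst):

```python
# Dispersive delay of a radio burst across an observing band.
# Standard cold-plasma formula: delay_ms = 4.149 * DM * (f_lo**-2 - f_hi**-2),
# with DM (dispersion measure) in pc cm^-3 and frequencies in GHz.

K_DM = 4.149  # ms * GHz^2 / (pc cm^-3), the dispersion constant

def dispersive_delay_ms(dm: float, f_lo_ghz: float, f_hi_ghz: float) -> float:
    """Arrival-time lag of the band's low-frequency edge behind its high edge."""
    return K_DM * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# An illustrative burst with DM = 500 pc cm^-3 observed from 1.2 to 1.4 GHz:
delay = dispersive_delay_ms(500.0, 1.2, 1.4)
print(f"{delay:.1f} ms")  # roughly 382 ms of smearing across the band
```

The farther a burst travels, the more electrons it meets and the larger this smearing grows, which is why the delay doubles as a record of the matter along the way.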
This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about.
“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst.
Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult, because most of that work must happen in the few milliseconds before the burst passes.
But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time.
We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons—the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But astronomers have managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.”
All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand.
Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it.
But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.
That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.
Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its spectral lines were very smeared and distorted, unusual characteristics for a radio pulse that was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe.
Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.
“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars—spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.
This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY
There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth.
Astronomers couldn’t even point to the direction that the first 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by Murriyang, the enormous single-dish radio telescope at the Parkes Observatory in New South Wales, which is great at picking up incoming radio waves but can pinpoint an FRB only to an area of the sky as large as the full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.
Then, in 2015, one particular radio wave flashed—and then flashed again. Over two months of observation with the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating FRB ever observed (a mystery in its own right), and it finally gave researchers a chance to home in on where the radio waves had begun.
In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says.
That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometre Array, an international collaboration to build the world’s largest radio telescope arrays.
The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs.
Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons.
And over the course of the next five years, that’s exactly what they did—with astonishing success.
Building a pipeline
To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance to detect a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view.
ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.
The dishes weren’t formally in use yet, but Bannister had an idea. He jerry-rigged them into a “fly’s eye” telescope, pointing each dish at a different part of the sky to maximize the chances of catching something that might flash anywhere.
“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.”
When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin.
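The capture trick Bannister describes can be sketched as a ring buffer that is continuously overwritten until a detection freezes a snapshot. The class, threshold, and sample values below are hypothetical stand-ins, not CSIRO’s actual pipeline, which works on raw voltage data from 36 dishes:

```python
from collections import deque

# Minimal "live-action replay" sketch: samples stream into a fixed-size
# ring buffer; a detection freezes a copy of the buffer before the
# oldest samples are overwritten. Purely illustrative.

class ReplayBuffer:
    def __init__(self, capacity: int, threshold: float):
        self.buffer = deque(maxlen=capacity)  # oldest samples drop off automatically
        self.threshold = threshold
        self.snapshots = []

    def ingest(self, sample: float):
        self.buffer.append(sample)
        if sample > self.threshold:                    # crude burst detector
            self.snapshots.append(list(self.buffer))   # "reach upstream" and save

buf = ReplayBuffer(capacity=8, threshold=5.0)
for s in [0.1, 0.3, 0.2, 9.7, 0.2]:  # the 9.7 plays the role of a burst
    buf.ingest(s)

print(len(buf.snapshots))    # 1 snapshot captured
print(buf.snapshots[0][-1])  # 9.7 -- the triggering sample is included
```

The design point is the same one Bannister faced: the detector must fire before the buffer wraps around, because once the data is overwritten the burst’s “phone number” is gone for good.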
The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB.
These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO
Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through.
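The redshift itself comes from a simple ratio: a spectral line emitted at a known rest wavelength arrives stretched, and z = (observed / rest) − 1. The sketch below uses the [O II] emission doublet near 3727 angstroms; the observed wavelength is an illustrative number, not a real measurement:

```python
# Redshift from a stretched spectral line: z = (observed / rest) - 1.
# The [O II] emission doublet sits near 3727 angstroms at rest.

REST_OII = 3727.0  # angstroms

def redshift(observed_angstroms: float, rest_angstroms: float = REST_OII) -> float:
    return observed_angstroms / rest_angstroms - 1.0

# A doublet observed at twice its rest wavelength has been stretched
# by a factor of two -- light emitted roughly 8 billion years ago:
z = redshift(7454.0)
print(f"z = {z:.2f}")  # z = 1.00
```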
In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons.
The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.
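As a rough sketch, the cosmic part of the dispersion measure is often approximated as growing by about 1,000 pc cm⁻³ per unit redshift once the Milky Way’s and the host galaxy’s contributions are subtracted. The helper below is a hypothetical illustration of that inversion, not the team’s actual fit, and all the numbers are made up:

```python
# Crude linearized Macquart relation: after subtracting the Milky Way's
# and host galaxy's dispersion-measure contributions, the cosmic remainder
# grows roughly linearly with redshift. The slope is an often-quoted
# rule of thumb (~1000 pc cm^-3 per unit redshift), not a fitted value.

SLOPE = 1000.0  # pc cm^-3 per unit redshift, approximate

def estimate_redshift(dm_total: float, dm_milky_way: float, dm_host: float) -> float:
    dm_cosmic = dm_total - dm_milky_way - dm_host
    return dm_cosmic / SLOPE

# Illustrative numbers only:
z = estimate_redshift(dm_total=1200.0, dm_milky_way=150.0, dm_host=50.0)
print(f"estimated z = {z:.1f}")  # estimated z = 1.0
```

Run in reverse—with redshifts measured independently from host galaxies—the same relation lets the dispersion measures reveal the total electron density along each sight line, which is how the 2020 paper counted the missing baryons.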
As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology.
Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one.
“I remember thinking that I know something about the universe that no one else knows.”
The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data email from the European Southern Observatory and began to scroll through the spectrum data. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, signaled by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.”
With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.
“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.”
Mapping the cosmic web
The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.
“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.”
Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of the invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.”
That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-degree one. Masui likened it to a mirror that can be focused on thousands of different places simultaneously.
Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.
The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME
Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade.
And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. If you can imagine stuffing a million people simultaneously watching uncompressed YouTube videos into a box the size of a fridge, that’s probably the easiest way to visualize the data handling capabilities of this new processor, called a field-programmable gate array, which Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.
With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.”
Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely.
“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”
Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures.
Anna Kramer is a science and climate journalist based in Washington, D.C.
Tzu-Wei Fang will always remember February 3, 2022. It was a Thursday just after Groundhog Day, and Fang, a physicist born in Taiwan, was analyzing satellite images of a cloud of charged particles that had erupted from the sun. The incoming cloud was a coronal mass ejection, or CME—essentially a massive burst of magnetized plasma from the sun’s upper atmosphere. It looked like dozens of similar CMEs that hit Earth every year, usually making their presence known mostly through mesmerizing polar light displays.
“The CME wasn’t significant at all,” says Fang, who had been analyzing the incoming data from her office at the National Oceanic and Atmospheric Administration (NOAA) in Boulder, Colorado.
But five days later, Fang learned that the CME was not as innocuous as it had seemed. Just as the cloud of plasma was making its way to the planet, a SpaceX Falcon 9 rocket was blasting off from a launchpad at the Kennedy Space Center in Florida with 49 new Starlink satellites in its nose cone.
The CME heated the tenuous gases in Earth’s upper atmosphere, causing it to swell, pushing the lower, denser layers upward. When the satellites were released from their rocket, they struggled against an unexpectedly thick medium. With thrusters too weak to push them to a higher, safer orbit, 38 of them spiraled back to Earth.
Scientists had long known that solar activity can change the density of the upper atmosphere, so the fact that this happened wasn’t a surprise. But the Starlink incident highlighted a big gap in capability: researchers lacked the ability to precisely predict the sorts of density changes that a given amount of solar activity would produce. And they did not have a good way to transfer those changes to predictions about how satellite trajectories would be affected.
The need to improve predictions was growing more urgent. A new solar cycle had just begun picking up strength after a prolonged quiet period, and the sun was spouting many more solar flares and CMEs than it had in years. At the same time, the number of satellites orbiting the planet had grown sevenfold since the last solar maximum. Researchers understood that a powerful solar storm could make conditions in near-Earth space so unpredictable that it would be impossible to tell whether objects were on a collision course. And that was a worry. One head-on crash between two large spacecraft can create thousands of out-of-control debris fragments that could remain in orbit for years, making space even harder for operators to navigate through.
The Starlink event proved to be just the catalyst the community needed. In the ensuing weeks, Fang, who had been working on a model of the upper atmosphere, began a partnership with SpaceX to get more data on the speed and trajectory of the constellation’s thousands of satellites. It was an unprecedented source of information that is allowing scientists to improve their models of how solar activity affects the environment in low Earth orbit. At the same time, other researchers are working to better connect this model of the sparse air in that part of the atmosphere with the trajectories of the satellites that pass through it.
If Fang and her colleagues succeed, they’ll be able to keep satellites safe even amid turbulent space weather, reducing the risk of potentially catastrophic orbital collisions.
Solar weather havoc
CMEs have been buffeting Earth since the beginning of time. But until the advent of electricity, their only observable consequences were the spectacular polar lights.
That changed in 1859 with the Carrington Event, the most energetic CME to hit Earth in recorded history. When that tsunami of magnetized plasma hit Earth’s atmosphere, it disrupted telegraph networks all over the world. Clerks saw their equipment give off sparks, and in some cases they received electrical shocks.
The satellite era has so far experienced only one major geomagnetic storm. Dubbed the Halloween storm because it pummeled Earth in the last week of October 2003, the CME affected nearly 60% of NASA space missions in orbit at the time, according to a later investigation by NOAA. A Japanese Earth-observation spacecraft lost contact with Earth, never to regain it—its electronics most likely fried by the onslaught of charged solar particles.
Thomas Berger, now the director of the Space Weather Technology, Research, and Education Center at the University of Colorado Boulder, was a young space-weather scientist at the time. He remembers people buzzing about losing track of satellites.
Unlike aircraft, satellites are not constantly observed by radar in real time. Their likely trajectories are calculated days ahead, based on repeated observations by a handful of ground-based space radars and optical sensors scattered across the globe. When space weather warms up the upper reaches of the atmosphere, the increased density throws those predictions off, and it can take operators a while to find the satellites again.
“After the 2003 Halloween storm, the entire satellite catalogue was off track,” says Berger. “It took three days of emergency operations to locate and retrack all these objects. Some of the satellites were tens of kilometers below their usual orbit and maybe a thousand kilometers away from their expected position.”
When we don’t know where satellites—and space-debris fragments—are, it is more than an inconvenience. It means that operators can no longer make predictions about potential collisions—events that can not only destroy satellites but also create thousands of new pieces of space debris, creating cascading risks to other satellites.
The Halloween storm luckily passed without an orbital crash. But next time, satellite operators may not be so lucky.
A lot has changed in near-Earth space since 2003. The number of active satellites orbiting our planet has risen from 800 back then to more than 9,000 today, and low Earth orbit has seen the greatest increase in traffic. The quantity of space junk has also grown. Twenty years ago, the US Space Surveillance Network tracked some 11,000 pieces of such debris. Today, according to NASA, it keeps an eye on more than 35,000 objects. With that much more stuff hurtling around Earth, many more collision-avoidance maneuvers are needed to keep things safe.
And it is just a matter of time before Earth is hit with bigger CMEs. The Halloween storm packed dozens of times more power than the “insignificant” event that doomed the Starlink satellites. Yet it had only about one-tenth the energy of the Carrington Event. The orbital mayhem—not to mention the havoc on the ground—could certainly get much worse.
Extending weather forecasts into space
Six months before that fateful Groundhog Day, Fang had taken a job at NOAA’s Space Weather Prediction Center to work on a new simulation of the outermost parts of Earth’s atmosphere.
The model she was working on, the Whole Atmosphere Model and Ionosphere Plasmasphere Electrodynamics (WAM-IPE) forecast system, is an extension of the kinds of models that meteorologists at NOAA use to forecast weather on Earth, only at much higher altitudes.
Most satellites in low Earth orbit travel within the second-highest layer of the atmosphere—a region called the thermosphere, which is filled with dispersed atoms of oxygen, nitrogen, and helium. Invisible waves rising from the mesosphere, the atmospheric layer underneath, push on the thermosphere, stirring hurricane-speed winds. But since the air in the thermosphere is so thin, satellites orbiting there barely notice. That changes when space weather hits. Within an hour, the density of this thin air can increase many times, and its atoms become charged by collisions with energetic solar particles, triggering aurora displays and electrical currents.
The WAM-IPE model attempts to simulate the intricacies of these processes and predict their outcomes. “It’s a lot of complex physics, and we still don’t completely understand all of it,” Fang says.
At the time of the Starlink incident, Fang’s model was still in experimental stages. The sorts of measurements of the upper atmosphere that could directly verify the model’s calculations were not yet available.
In 2022, only two spacecraft in orbit were able to provide even basic measurements of thermospheric density, and no new NOAA or NASA mission that could fill the gap in the near future was in the works.
But SpaceX had a solution to Fang’s problem. Starlink satellites, although not fitted with dedicated instruments to measure atmospheric density, carry GPS receivers to determine their position. During their conversations, Fang and Starlink engineers figured out that with some clever mathematics, they could calculate atmospheric density from changes in Starlink satellites’ trajectories.
“It’s quite complicated because you need to have a very good understanding of how the spacecraft’s shape affects its drag, but with that provided, we can look at the positional differences and see how that changes and calculate the density,” says Fang.
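The inversion Fang describes can be sketched with a standard orbital-mechanics result: for a roughly circular orbit, drag shrinks the semi-major axis at a rate da/dt = −Cd·(A/m)·√(μa)·ρ, so an observed decay rate can be solved for the density ρ. The satellite numbers below (mass, area, drag coefficient, decay rate) are illustrative assumptions, not actual Starlink values.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def density_from_decay(da_dt, a, cd, area, mass):
    """Invert the circular-orbit drag equation
    da/dt = -cd * (area/mass) * sqrt(MU * a) * rho
    to estimate atmospheric density rho in kg/m^3."""
    ballistic = cd * area / mass           # drag coefficient times area-to-mass ratio
    return -da_dt / (ballistic * math.sqrt(MU * a))

# Illustrative numbers: a 260 kg satellite with 3 m^2 frontal area at
# ~550 km altitude losing roughly 30 m of altitude per day.
a = 6_371_000 + 550_000                    # orbital radius in meters
da_dt = -30 / 86_400                       # observed decay rate in m/s
rho = density_from_decay(da_dt, a, cd=2.2, area=3.0, mass=260.0)
print(f"estimated density: {rho:.2e} kg/m^3")
```

With these assumed inputs the estimate lands around 10⁻¹³ kg/m³, the right order of magnitude for the quiet-time thermosphere at that altitude; the real calculation, as Fang notes, hinges on knowing the spacecraft's drag geometry well.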
At that time, about 2,000 Starlink satellites were in orbit. And so, where there was no data before, Fang suddenly had an abundant resource to tap into and use to make sure the WAM-IPE model’s calculations matched reality—at least at Starlink’s orbital altitude. The constellation has since grown to 5,000 spacecraft, providing an even denser network of measurements.
Fang says that several other satellite operators have since joined her effort, supplying NOAA with data to make the model work before the next big solar storm hits.
“The Starlink incident really raised the problem,” she says. “The industry is booming and now everybody is aware, and they come to us and want to understand the problem. It’s been a tricky two years, and sometimes I feel we are not solving it fast enough for them.”
Work left to do
In the months following the Starlink incident, other spacecraft operators began reporting issues related to space weather. In May 2022, the European Space Agency said its constellation of Swarm satellites, which measure the magnetic field around Earth, had been losing altitude 10 times faster than they had during the previous 10 years. In December 2023, NASA announced that its asteroid-hunting space telescope Neowise would reenter Earth’s atmosphere by early 2025 because of the increasing drag caused by solar activity.
The current solar cycle is set to reach its maximum later this year, but the sun will likely keep spouting CMEs and solar flares at a high rate for the next five years before settling into its minimum. During those years, the number of satellites in orbit is set to continue to rise. Analysts expect that by the end of this decade the number of operational satellites could hit 100,000.
“It’s not unlikely that we will get a large geomagnetic storm in the next four or five years,” says Berger. “And that will really test the whole thing.”
Berger’s team in Colorado collaborates with Fang’s team at NOAA, trying to find ways to integrate the WAM-IPE model’s predictions of changes in atmospheric density into calculations of satellite orbits.
As the Starlink incident showed, it’s not just the big, cataclysmic solar storms that operators need to worry about.
Dan Oltrogge, an orbital tracking expert at Comspoc, a company that specializes in space situational awareness, says that the accuracy of satellite trajectory predictions at orbits below 250 miles (400 kilometers) is “particularly susceptible to space-weather variations.”
“It’s those altitudes where the International Space Station, the Chinese space station, and also many Earth-observing satellites orbit,” Oltrogge says. “When space weather changes, the atmospheric drag changes, and it changes where and how close things come together. It’s difficult to know when to make a collision-avoidance maneuver.”
The stronger the storm, the greater the fluctuations in atmospheric density, and the greater the uncertainty. According to Fang, the underwhelming Starlink storm thickened the atmosphere at altitudes between 120 and 240 miles by 50% to 125%. A once-in-a-century event like the Carrington storm could lead to a 900% density increase, she says.
The biggest worries, Fang says, are that we don’t fully understand the behavior of the sun and that we get so little notice about when CMEs will arrive.
“Even with the new model, we only know what is happening now,” she says. “We don’t have a real forecasting ability. We don’t know when a flare is going to happen or when a CME is going to happen.”
It might take a couple of days for a CME to hit Earth, but researchers don’t get measurements of its intensity until about 30 minutes before it arrives, when it passes SOHO, a NASA and European Space Agency satellite some 900,000 miles away in a stable orbit between Earth and the sun. The European Space Agency is developing a new spacecraft, called Vigil, that would be capable of providing a side view of the sun, allowing researchers to see potentially dangerous sunspots not visible from Earth. But it will take years to get it off the ground. Until then, space operators will have to keep their fingers crossed and hope the space weather holds.
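That roughly 30-minute window follows from simple arithmetic: a CME clocked at SOHO still has the distance from the L1 point to Earth left to cover. The CME speed below is a typical fast-CME value, an assumption for illustration rather than a measured figure.

```python
# Rough check of the ~30-minute warning window. SOHO sits near the L1
# point, about 1.5 million km (some 900,000 miles) sunward of Earth.
L1_DISTANCE_KM = 1_500_000
cme_speed_km_s = 800            # typical fast-CME speed; an assumed value

warning_seconds = L1_DISTANCE_KM / cme_speed_km_s
print(f"warning time: {warning_seconds / 60:.0f} minutes")
```

A slower CME would give more notice, but slower CMEs are also generally less geoeffective, which is why the practical warning stays on the order of half an hour.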
Tereza Pultarova is a freelance science and technology journalist based in London who specializes in space and sustainability.
On April 8, the moon will pass directly between Earth and the sun, creating a total solar eclipse across much of the United States, Mexico, and Canada.
Although total solar eclipses occur somewhere in the world every 18 months or so, this one is unusual because tens of millions of people in North America will likely witness it, from Mazatlán in Mexico to Newfoundland in Canada.
“It’s a huge communal experience,” says Meg Thacher, a senior lab instructor in the astronomy department at Smith College in Massachusetts. “A total solar eclipse is the Super Bowl of astronomy.” Here’s how to safely watch—and photograph—the natural phenomenon.
Fail to prepare, prepare to fail
It pays to have a plan of action for the day.
Before you decide on a spot to watch the eclipse, whether it’s in your own backyard, in a national park, or at a viewing party, it’s worth checking the weather forecast to see how likely clouds are to spoil the show. Currently the majority of the eclipse’s path of totality—areas where onlookers will see a full eclipse, as opposed to a partial one—is forecast to have some degree of cloud cover.
However, even if visibility turns out to be poor, you still have options. NASA and the National Science Foundation are broadcasting livestreams, and many eclipse viewing parties will broadcast unobstructed views as part of their festivities. The American Astronomical Society has a state-by-state list to help you find your nearest event.
Safety first
You need proper eye protection to look at the eclipse, because the sun’s light can cause long-term damage to your vision. Be sure to purchase either specially made eclipse glasses or handheld solar viewers. Glasses might be the best option if you plan to take photos, as they’ll keep your hands free. Eclipse glasses are thousands of times darker than regular sunglasses and contain a polymer designed to filter out harmful light.
You should also make sure any cameras, binoculars, or telescopes through which you plan to look at the sun have been fitted with a solar filter. You don’t need to double up and wear eclipse glasses if you already have a solar filter, though.
Once the moon fully obscures the sun, it’s safe to remove your eye protection for the duration of the totality, which is projected to last around four minutes during this eclipse.
A proper camera is your best bet …
Photographing an eclipse is pretty simple, says Randall Benton, a professional photographer who has been capturing them since 1979. Although cameras have changed vastly since then, the fundamentals remain the same. (If you plan to use your phone to take photos, skip to the next section.)
He recommends fixing a DSLR or mirrorless camera (equipped with a solar filter to protect both your eyes and the camera itself) to a tripod. A short exposure freezes motion and is more likely to capture the fine details of the sun’s corona—the plasma surrounding it. A longer exposure gathers more light, revealing the corona’s fainter outer reaches and stretching it further out in the frame. The exposure you choose will depend on the kind of shot you’d like to capture.
Before the eclipse begins, take the time to focus the camera exactly where you want the sun and moon to appear in your shot, and turn off any autofocus function. While some mounts come with an automated tracking feature that will follow the eclipse’s progression, others will require you to move your camera yourself, so make sure you’re familiar with the mount you’ve got to prevent the eclipse from drifting out of your frame.
Then, “when there’s just a sliver of sun left and it’s a few seconds away from disappearing, take the filter off the camera lens,” Benton says. “At the very last moment, there’s a phenomenon called the diamond ring effect, when the last speck of visible sunlight resembles a ring—that’s a great dramatic photo. Once the sunlight reappears, it’s time to put the filter back on.”
… but smartphones work too
Despite the rapid advances in smartphone cameras over the past decade or so, they can’t really rival DSLR or mirrorless cameras when it comes to capturing an eclipse.
Their short lenses mean the sun will appear very small, which doesn’t tend to produce great photographs. That said, you can still capture the best photo possible by cutting out the plastic lens from a pair of spare eclipse glasses, taping it over your phone’s camera lens (or lenses), and securing the device in a tripod (or propping it up against a cup).
Don’t try to hold the phone, and use your phone’s shutter delay to decrease vibrations, says Gordon Telepun, an amateur enthusiast who has been photographing eclipses since 2001 and has advised NASA on how to capture them. “During totality, take the [eclipse glasses] filter off and take wide-angle shots of the corona in the sky and the landscape,” he says. “Automatic mode will work fine.”
Something smartphones are great at capturing is video of the moment the moon glides over the sun, says Benton: “That transition from daylight to nighttime is dramatic, and smartphones can handle that pretty well.”
Don’t be afraid to get creative
During the eclipse, there are plenty of other things to photograph besides the sun and moon. Foliage will create a natural version of a pinhole viewer, casting thousands of crescent images of the sun dancing around in the shade as the light streams through the trees.
Another natural phenomenon is shadow bands—flickering gray ripples that appear on light-colored surfaces like sheets or the sides of houses within a few minutes of totality. “It’s almost like a stroboscopic effect,” Benton says, referring to the visual effect that makes objects appear as though they are moving more slowly than they actually are. “Videos of that could be interesting.”
“Take pictures of the faces of the people around you, too,” he adds. “Twenty years from now, your photo of the eclipse is going to be pretty much the same as anyone else’s. These other pictures are going to be a little more powerful in reminding you what your day was like.”
Take a moment to look around
Finally, when you’re looking up at the moon covering the sun during totality, let yourself enjoy the moment free from your technology. The next eclipse the US can expect to experience on this scale is in August 2044—so try hard to stay present.
“During totality, if you’re really concentrating on getting your photo, at some point let go of everything. Turn around—take a look with your eyes,” says Benton. “Whatever you’re seeing in the viewfinder or on the screen, it isn’t the same thing as seeing it with your own eyes. And it will change your life.”
When two black holes spiral inward and collide, they shake the very fabric of space, producing ripples in space-time that can travel for hundreds of millions of light-years. Since 2015, scientists have been observing these so-called gravitational waves to help them study fundamental questions about the cosmos, including the origin of heavy elements such as gold and the rate at which the universe is expanding.
But detecting gravitational waves isn’t easy. By the time they reach Earth and the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO), in Louisiana and Washington state, the ripples have dissipated into near silence. LIGO’s detectors must sense motions on the scale of one ten-thousandth the width of a proton to stand a chance.
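That displacement scale can be sanity-checked with two numbers: the proton's approximate diameter (about 1.7 × 10⁻¹⁵ m, from its charge radius) and LIGO's four-kilometer arms. Everything here is back-of-the-envelope.

```python
# Back-of-the-envelope check of the quoted displacement scale.
proton_diameter = 1.7e-15       # meters; approximate, from the proton charge radius
arm_length = 4_000.0            # LIGO arm length in meters

delta_L = proton_diameter / 10_000   # "one ten-thousandth the width of a proton"
strain = delta_L / arm_length        # dimensionless strain h = delta_L / L
print(f"delta_L ~ {delta_L:.1e} m, strain h ~ {strain:.1e}")
```

The resulting strain, a few parts in 10²³, is the scale at which passing gravitational waves stretch and compress the detector's arms.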
LIGO has confirmed 90 gravitational wave detections so far, but physicists want to detect more, which will require making the experiment even more sensitive. And that is a challenge.
“The struggle of these detectors is that every time you try to improve them, you actually can make things worse, because they are so sensitive,” says Lisa Barsotti, a physicist at the Massachusetts Institute of Technology.
Nevertheless, Barsotti and her colleagues recently pushed past this challenge, creating a device that will allow LIGO to detect far more black hole mergers and neutron star collisions. The device belongs to a growing class of instruments that use quantum squeezing—a practical way for researchers dealing with systems that operate by the fuzzy rules of quantum mechanics to manipulate those phenomena to their advantage.
Physicists describe objects in the quantum realm in terms of probabilities—for example, an electron is not located here or there but has some likelihood of being in each place, locking into one only when its properties are measured. Quantum squeezing can manipulate the probabilities, and researchers are increasingly using it to exert more control over the act of measurement, dramatically improving the precision of quantum sensors like the LIGO experiment.
“In precision sensing applications where you want to detect super-small signals, quantum squeezing can be a pretty big win,” says Mark Kasevich, a physicist at Stanford University who applies quantum squeezing to make more precise magnetometers, gyroscopes, and clocks with potential applications for navigation. Creators of commercial and military technology have begun dabbling in the technique as well: the Canadian startup Xanadu uses it in its quantum computers, and last fall, DARPA announced Inspired, a program for developing quantum squeezing technology on a chip. Let’s take a look at two applications where quantum squeezing is already being used to push the limits of quantum systems.
Taking control of uncertainty
The key concept behind quantum squeezing is the phenomenon known as Heisenberg’s uncertainty principle. In a quantum-mechanical system, this principle puts a fundamental limit on how precisely you can measure an object’s properties. No matter how good your measurement devices are, they will suffer a fundamental level of imprecision that is part of nature itself. In practice, that means there’s a trade-off. If you want to track a particle’s speed precisely, for example, then you must sacrifice precision in knowing its location, and vice versa. “Physics imposes limits on experiments, and especially on precision measurement,” says John Robinson, a physicist at the quantum computing startup QuEra.
By “squeezing” uncertainty into properties they aren’t measuring, however, physicists can gain precision in the property they want to measure. Theorists proposed using squeezing in measurement as early as the 1980s. Since then, experimental physicists have been developing the ideas; over the last decade and a half, the results have matured from sprawling tabletop prototypes to practical devices. Now the big question is what applications will benefit. “We’re just understanding what the technology might be,” says Kasevich. “Then hopefully our imagination will grow to help us find what it’s really going to be good for.”
LIGO is blazing a trail to answer that question, by enhancing the detectors’ ability to measure extremely tiny distances. The observatory registers gravitational waves with L-shaped machines capable of sensing tiny motions along their four-kilometer-long arms. At each machine, researchers split a laser beam in two, sending a beam down each arm to reflect off a set of mirrors. In the absence of a gravitational wave, the crests and troughs of the constituent light waves should completely cancel each other out when the beams are recombined. But when a gravitational wave passes through, it will alternately stretch and compress the arms so that the split light waves are slightly out of phase.
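The recombination step can be modeled as an idealized Michelson interferometer, a sketch of the principle rather than LIGO's actual power- and signal-recycled readout. With matched arms the output port is dark; an arm-length mismatch dL shifts the round-trip phase and lets light leak through. The 1064 nm wavelength is that of LIGO's infrared laser.

```python
import math

WAVELENGTH = 1.064e-6   # meters; LIGO's 1064 nm infrared laser

def output_intensity(dL, i0=1.0):
    """Idealized dark-port intensity for an arm-length mismatch dL.
    Light traverses each arm down and back, so the path difference is 2*dL."""
    phase = 2 * math.pi * (2 * dL) / WAVELENGTH
    return i0 * math.sin(phase / 2) ** 2

print(output_intensity(0.0))              # matched arms: complete cancellation
print(output_intensity(WAVELENGTH / 4))   # quarter-wave mismatch: fully bright
```

Because the output swings over a full fringe only for mismatches of a quarter wavelength, displacements of 10⁻¹⁹ m sit astronomically far up the shallow part of this curve, which is why quantum noise in the light itself becomes the limiting factor.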
The resulting signals are subtle, though—so subtle that they risk being drowned out by the quantum vacuum, the irremovable background noise of the universe, caused by particles flitting in and out of existence. The quantum vacuum introduces a background flicker of light that enters LIGO’s arms, and this light pushes the mirrors, shifting them on the same scale as the gravitational waves LIGO aims to detect.
Barsotti’s team can’t get rid of this background flicker, but quantum squeezing allows them to exert limited control over it. To do so, the team installed a 300-meter-long cavity in each of LIGO’s two L-shaped detectors. Using lasers, they can create an engineered quantum vacuum, in which they can manipulate conditions to increase their level of control over either how bright the flicker can be or how randomly it occurs in time. Detecting higher-frequency gravitational waves is harder when the rhythm of the flickering is more random, while lower-frequency gravitational waves get drowned out when the background light is brighter. In their engineered vacuum, noisy particles still show up in their measurements, but in ways that don’t do as much to disturb the detection of gravitational waves. “You can [modify] the vacuum by manipulating it in a way that is useful to you,” she explains.
The innovation was decades in the making: through the 2010s, LIGO incorporated incrementally more sophisticated forms of quantum squeezing based on theoretical ideas developed in the 1980s. With these latest squeezing innovations, installed last year, the collaboration expects to detect gravitational waves up to 65% more frequently than before.
Quantum squeezing has also improved precision in timekeeping. Working at the University of Colorado Boulder with physicist Jun Ye, a pioneer in atomic clock technology, Robinson and his team made a clock that will lose or gain at most a second in 14 billion years. These super-precise clocks tick slightly differently in different gravitational fields, which could make them useful for sensing how Earth’s mass redistributes itself as a result of seismic or volcanic activity. They could also potentially be used to detect certain proposed forms of dark matter, the hypothesized substance that physicists think permeates the universe, pulling on objects with its gravity.
The clock Robinson’s team developed, a type called an optical atomic clock, uses 10,000 strontium atoms. Like all atoms, strontium emits light at specific signature frequencies as electrons around the atom’s nucleus jump between different energy levels. A fixed number of crests and troughs in one of these light waves corresponds to a second in their clock. “You’re saying the atom is perfect,” says Robinson. “The atom is my reference.” The “ticking” of this light is far steadier than the vibrating quartz crystal in a wristwatch, for example, which expands and contracts at different temperatures to tick at different rates.
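The "count the crests" idea is easy to state in code. The frequency below is the published clock-transition frequency of strontium-87, about 429 THz; one second of clock time elapses when that many crests have gone by.

```python
SR_CLOCK_HZ = 429_228_004_229_873   # Sr-87 clock-transition frequency, Hz

def elapsed_seconds(crests_counted):
    """One clock second is defined as SR_CLOCK_HZ wave crests."""
    return crests_counted / SR_CLOCK_HZ

# A minute's worth of crests maps back to 60 seconds.
print(elapsed_seconds(SR_CLOCK_HZ * 60))
```

The enormous size of that frequency is the point: an optical transition slices each second into hundreds of trillions of ticks, far finer than the roughly 9.2 GHz cesium microwave transition that currently defines the SI second.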
In practice, the tick in the Robinson team’s clock comes not from the light the electrons emit but from how the whole system evolves over time. The researchers first put each strontium atom in a “superposition” of two states: one in which the atom’s electrons are all at their lowest energy levels and another in which one of the electrons is in an excited state. This means each atom has some probability of being in either state but is not definitively in either one—similar to how a coin flipping in the air has some probability of being either heads or tails, but is neither.
Then they measure how many atoms are in each state. The act of measurement puts the atoms definitively in one state or the other, equivalent to letting the flipping coin land on a surface. Before they measure the atoms, even if they intend to wind up with a 50-50 mixture, they cannot precisely dictate how many atoms will end up in each state. That’s because in addition to the system’s change over time, there is also inherent uncertainty in the state of the individual atoms. Robinson’s team uses quantum squeezing to more reliably determine their final states by reducing these intrinsic fluctuations. Specifically, they manipulate the uncertainties in the direction of each atom’s spin, a property of many quantum particles that has no classical counterpart. Squeezing improved the clock’s precision by a factor of 1.5.
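The scale of that improvement can be put in context with the standard quantum limit: for N uncorrelated atoms, projection noise shrinks only as 1/√N, and squeezing buys a further factor, here the roughly 1.5× the team reported. A minimal sketch of the scaling:

```python
import math

N = 10_000                       # strontium atoms in the clock (from the article)
sql = 1 / math.sqrt(N)           # standard quantum limit, relative units
squeezed = sql / 1.5             # the ~1.5x squeezing gain reported

# Matching that precision without squeezing would take 1.5^2 = 2.25x
# as many atoms, since the noise falls off only as 1/sqrt(N).
equivalent_atoms = N * 1.5 ** 2
print(f"SQL: {sql:.4f}, squeezed: {squeezed:.4f}, "
      f"equivalent unsqueezed atoms: {equivalent_atoms:.0f}")
```

A factor of 1.5 sounds modest, but because precision buys down as the square root of atom number, it is worth more than doubling the size of the atomic ensemble.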
To be sure, gravitational waves and ultra-precise clocks are niche academic applications. But there is interest in adapting the approach to other, potentially more mainstream uses, including quantum computers, navigation, and microscopy.
The increased use of quantum squeezing is part of a wider technological trend toward higher precision—one that encompasses cramming more transistors on chips, studying the universe’s most elusive particles, and clocking the fleeting time it takes for an electron to leave a molecule. Squeezing benefits only measurements so subtle that the randomness of quantum mechanics contributes significant noise. But it turns out that physicists have more control than one might expect. They may not be able to remove the randomness, but they can engineer where it shows up.
More than 9,000 metric tons of human-made metal and machinery are orbiting Earth, including satellites, shrapnel, and the International Space Station. But a significant share of that mass comes from one source: the nearly a thousand dead rockets that have been discarded in space since the space age began.
Now, for the first time, a mission has begun to remove one of those dead rockets. Funded by the Japanese space agency JAXA, a spacecraft from the Japanese company Astroscale was launched on Sunday, February 18, by the New Zealand firm Rocket Lab and is currently on its way to rendezvous with such a rocket in the coming weeks. It’ll inspect it and then work out how a follow-up mission might be able to pull the dead rocket back into the atmosphere. If it succeeds, it could demonstrate how we could remove large, dangerous, and uncontrolled pieces of space junk from orbit—objects that could cause a monumental disaster if they collided with satellites or spacecraft.
“It cannot be overstated how important this is,” says Michelle Hanlon, a space lawyer at the University of Mississippi. “We have these ‘debris bombs’ just sitting up there waiting to be hit.”
There are an estimated 500,000 pieces of space junk as small as a centimeter across orbiting Earth, and about 23,000 trackable objects bigger than 10 centimeters. Dead rockets make up an interesting—and dangerous—category. The 956 known rocket bodies in space account for just 4% of trackable objects but nearly a third of the total mass. The biggest empty rockets, mostly discarded by Russia in the 1980s, 1990s, and 2000s, weigh up to nine tons—as much as an elephant.
These discarded upper stages, the top section of a rocket that boosts a satellite or spacecraft into its final orbit, are left to drift around our planet once the launch is complete. They are uncontrolled, spinning haphazardly, and pose a huge risk. If any two were to collide, they would produce a deadly cloud of up to “10,000 to 20,000 fragments,” says Darren McKnight, a space debris expert at the US debris tracking firm LeoLabs.
Such an event could happen at any moment. “At some point, I’d expect there to be a collision involving them,” says Hugh Lewis, a space debris expert at the University of Southampton in the UK. “There’s so much stuff out there.” That would pose a huge problem, potentially triggering a runaway chain reaction of collisions known as the Kessler syndrome, which could render some orbits unusable or make human spaceflight too risky until the debris falls back into the atmosphere, a process that takes decades to centuries.
Since 2007, when the United Nations introduced a guideline that objects should be removed from space within 25 years of the end of their operational lifetime, fewer rockets have been abandoned in orbit. Most upper stages now retain a bit of fuel to push themselves back into the atmosphere after launch. “They now tend to reserve some propellant to help them deorbit,” says Lewis. But thousands of “legacy objects” remain from before this rule was introduced, Lewis adds.
The rocket JAXA is targeting, as part of its Commercial Removal of Debris Demonstration (CRD2) program, is the upper stage of a Japanese H-IIA rocket that launched a climate satellite in 2009. Weighing three metric tons and as big as a bus, it orbits our planet at an altitude of 600 kilometers (373 miles). If left untended, it will remain in orbit for decades, says Lewis, before atmospheric drag finally pulls it back down. At that point it will burn up, with any remnants most likely falling into the ocean.
ADRAS-J’s mission is to figure out how to pull it back into the atmosphere before that happens. Sidling up to the rocket, the spacecraft will use cameras and sensors to inspect it from as near as a meter away. It will study the state of the rocket, including whether it is intact or if pieces have broken off and are drifting nearby, and also look for grapple points where a future spacecraft could attach.
“Designing a servicer to go up and grapple a three-ton piece of debris comes with a lot of challenges,” says Mike Lindsay, Astroscale’s chief technology officer. “The biggest challenge is dealing with the amount of uncertainty. The object has been up there for 15 years. It’s uncontrolled. We’re not communicating with it. So we don’t know how it’s moving, how it looks, and how it’s aged.”
Particularly crucial will be determining whether, and how fast, the rocket is spinning. Any rotation will need to be counteracted and stabilized before the rocket can be pushed back into the atmosphere. The famous docking scene in the movie Interstellar, says Lewis, is “a perfect demonstration.”
ADRAS-J will spend the next few weeks investigating the rocket, and the inspection is expected to conclude by April. It is the first time a piece of derelict space debris will ever have been investigated in such a manner.
Japan has yet to pick the company that will conduct the second phase of the mission and actually remove the rocket from orbit, but Lindsay says Astroscale is ready, if it wins the contract. “We’re already testing some robotic capture methodologies that are compatible with the grapple points we’re going to inspect,” he says. “So it’s really important we get imagery of those interfaces.”
That mission will need to be much more substantial than ADRAS-J, says Lewis. To halt the rotation of the rocket and push it down into the atmosphere, any removal spacecraft would need to be almost as heavy as the rocket itself. “You need something equivalent [in mass] if you’re going to grab it,” he says. “If it’s tumbling end over end, you need a really capable system to manage that angular momentum.”
This is not the only effort at space debris removal taking place. In October 2023, the US Senate passed a bill to investigate removal technologies. The UK has selected both Astroscale and a Swiss firm, ClearSpace, to design missions to remove British space junk from orbit. And in 2026, ClearSpace plans to launch a mission for the European Space Agency (ESA) to remove from orbit a small piece of a European rocket, weighing about 112 kilograms (247 pounds).
“For missions after 2030, the ESA would foresee active removal to become mandatory,” says Holger Krag, the head of ESA’s Space Debris Office in Germany—that is, if a spacecraft has failed to remove itself from orbit with its own fuel.
It’s unclear exactly what shape the market for debris removal missions will take. While Japan is targeting one of its own dead rockets in good faith, tackling the daunting number of other dead rockets and satellites would be a costly endeavor. “Who’s going to pay for that?” says Lewis. “Removing one or two isn’t going to really make a dent in the problem. We need a sustained plan of removals.”
Legal hurdles abound, too. Russia and China, which have many of the largest dead rockets in orbit, are unlikely to let other countries remove their rockets for them, says Hanlon. “Private companies are not going to get permission from China or Russia to approach something that might have technological capabilities they do not want to share with the world,” she says.
There is also, currently, “no law that says you have to get your garbage out of orbit,” says Hanlon. While the UN does have its 25-year guideline, and national regulators such as the Federal Communications Commission in the US require satellites to be removed from orbit in as little as five years, empty rockets and legacy junk pose a whole other problem. “There’s no incentive to remediate,” says Hanlon.
Another option might be to reuse and recycle debris in orbit, including some of these dead rockets. Such an idea is untested at the moment, but could become viable as our operations in Earth orbit grow in the future. “Then we’re entering a different realm where there is an incentive—there is a market,” says Hanlon.
ADRAS-J, and whatever spacecraft follows in its footsteps, will demonstrate how we can start to tackle this problem. If we don’t, space junk “is just going to grow to the extent that we will not be able to launch anything,” says Hanlon. “The only way this cycle ends is to remove debris.”
We’ve known of Europa’s existence for more than four centuries, but for most of that time, Jupiter’s fourth-largest moon was just a pinprick of light in our telescopes—a bright and curious companion to the solar system’s resident giant. Over the last few decades, however, as astronomers have scrutinized it through telescopes and six spacecraft have flown nearby, a new picture has come into focus. Europa is nothing like our moon.
Observations suggest that its heart is a ball of metal and rock, surrounded by a vast saltwater ocean that contains more than twice as much water as is found on Earth. That massive sea is encased in a smooth but fractured blanket of cracked ice, one that seems to occasionally break open and spew watery plumes into the moon’s thin atmosphere.
For these reasons, Europa has captivated planetary scientists interested in the geophysics of alien worlds. All that water and energy—and hints of elements essential for building organic molecules—point to another extraordinary possibility. In the depths of its ocean, or perhaps crowded in subsurface lakes or below icy surface vents, Jupiter’s big, bright moon could host life.
“We think there’s an ocean there, everywhere,” says Bob Pappalardo, a planetary scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California. “Essentially everywhere on Earth that there’s water, there’s life. Could there be life on Europa?”
Pappalardo has been at the forefront of efforts to send a craft to Europa for more than two decades. Now his hope is finally coming to fruition: later this year, NASA plans to launch Europa Clipper, the largest-ever craft designed to visit another planet. The $5 billion mission, scheduled to reach Jupiter in 2030, will spend four years analyzing this moon to determine whether it could support life. It will be joined after two years by the European Space Agency’s Juice, which launched last year and is similarly designed to look for habitable conditions, not only on Europa but also on other mysterious Jovian moons.
Neither mission will beam back a definitive answer to the question of extraterrestrial life. “Unless we get really lucky, we’re not going to be able to tell if there is life there, but we can find out if all the conditions are right for life,” says planetary geologist Louise Prockter at the Johns Hopkins Applied Physics Laboratory, a co-investigator on the Clipper camera team.
“Essentially everywhere on Earth that there’s water, there’s life. Could there be life on Europa?”
Bob Pappalardo, planetary scientist, NASA’s Jet Propulsion Laboratory
What these spacecraft will do is get us closer than ever before to answers, by identifying the telltale chemical, physical, and geological signatures of habitability—whether a place is a suitable environment for life to emerge and thrive.
The payoff for confirming these signs on Europa would be huge. Not because humans could settle on its surface—it’s far too harsh and rugged and cold and irradiated for our delicate bodies—but because it could justify future exploration to land there and look for alien life-forms. Finding something, anything, living on Europa would offer strong evidence for an alternate path through which life could emerge. It would mean that life on Earth is not exceptional. We’d know that we have neighbors close by—even if they’re microbial, which would be the most likely life-form—and that would make it very likely that we have neighbors elsewhere in the cosmos.
Engineers and technicians install reaction wheels on Europa Clipper at NASA’s Jet Propulsion Laboratory in California
NASA/JPL-CALTECH
“With the prospects of life—the prospects of vast oceans—within reach, you just have to go,” says Nicholas Makris, director of MIT’s Center for Ocean Engineering, who uses acoustics and other innovative methods to observe and explore big bodies of water. He once led a team of scientists who proposed a mission to land a spacecraft on Europa and use sound waves to explore what lies beneath the ice; he still hopes to see a lander go there one day. “You have to find out. Everyone wants to know,” he says. “There isn’t anyone who doesn’t want to know.”
From a spot in the sky to a dynamic moon
Long before it became the cosmic destination of the year, Europa played an outsize role in transforming our understanding of the solar system. That began with its discovery, when one night in January 1610, the Italian astronomer Galileo Galilei fixed his occhiale—an ingenious homemade telescope—on Jupiter and noted three bright little dots near the side of the gas giant.
Galileo assumed the arrangement was an illusion: the dots were distant stars that only appeared to be close to Jupiter. But the next night, he observed the same three bright little stars now on the other side of the planet. Further observations revealed yet another bright light, also wandering nearby but refusing to leave Jupiter’s side. In a short treatise called Sidereus Nuncius (Starry Messenger), published in March 1610, Galileo reported that he’d found four worlds orbiting Jupiter, similar to how Mercury and Venus orbit the sun. (Astronomers still regard Jupiter and its satellites as a kind of mini solar system.) Galileo numbered the worlds I through IV and referred to them as the “Medicean planets,” though they’re now called the “Galilean moons.” His discovery was the first time scientists had directly observed small worlds orbiting something other than Earth or the sun, lending strong support to the argument, still controversial at the time, that the planets circled the sun and not the other way around.
In 1614, German astronomer Simon Marius suggested that Jupiter’s four newly discovered moons be named Io (top), Callisto (middle), Ganymede (bottom), and Europa, after four “irregular loves” pursued by the god for whom the planet is named. Astronomers have since identified 91 others, and there are likely more.
NASA/JPL/UNIVERSITY OF ARIZONA (IO); NASA/JPL-CALTECH/KEVIN M. GILL (CALLISTO); NASA/JPL-CALTECH/SWRI/MSSS/KEVIN M. GILL (GANYMEDE)
Naming rights for these four Jovian moons ultimately went to the German astronomer Simon Marius, who claimed (but couldn’t prove) that he’d actually discovered them a few weeks before Galileo. In 1614, on a suggestion from Johannes Kepler, Marius proposed naming the moons Io, Callisto, Europa, and Ganymede—after four “irregular loves” pursued by Zeus (Jupiter) in ancient mythology. It took 200 years for those names to gain widespread adoption, but they were definitely an upgrade. Had Galileo’s naming scheme stuck, you’d now be reading about the “II Clipper,” which doesn’t have the same ring.
These moons were only the first to be discovered orbiting Jupiter. As of December 2023, astronomers had officially confirmed the existence of 91 others—and there are likely many more. Where the first four are round and follow stately, simple orbits, the more recent discoveries are more diverse. Some orbit in erratic swarms or go the opposite way around; some were asteroids captured in passing; others resulted from collisions. There are so many objects around Jupiter, in fact, that the International Astronomical Union no longer confers names on Jovian satellites unless they’re deemed to have significant scientific value.
The more we’ve learned about Europa, the more fascinating it has become. For centuries, it was little more than a spot appearing to move from one side of Jupiter to the other. But by the early 20th century, stargazers had made reasonable estimates of Europa’s diameter and mass (revealing that it was slightly smaller than Mercury or Earth’s moon, but larger than Pluto). They’d also studied the light reflecting from its surface and found that Europa was unexpectedly bright. Were it to replace our moon in the night sky, Europa would be a little smaller but shine five times brighter.
In the 1950s, when scientists began regarding distant objects not as bright cosmic curiosities but as real worlds, each with a distinct origin story, they began to ask questions about composition and formation. In The Planets, a book published in 1952, the astronomer Harold Urey suggested that water ice was abundant in the outer solar system because the bodies there formed far from the sun and never became warm enough for their ice to evaporate. By the 1960s, astronomers and astrophysicists had begun to speculate, partly on the basis of early measurements of its infrared spectrum, that Europa’s extraordinary reflectance was indeed due to the presence of ice. But proving it was difficult.
Stephen Ridgway, now an astronomer at the National Science Foundation’s NOIRLab in Tucson, Arizona, first heard about the problem of potentially icy moons in the outer solar system in the early 1970s, as a graduate student. Carl Pilcher, a postdoctoral researcher he’d met at a conference, told him about it. “We think they should have ice on them because they’re cold and reflective, but is it water? Is it carbon dioxide ice? Is it some other kind, or some mixture?” Ridgway recalls him asking.
It turned out that Ridgway, who describes himself as a tinkerer as well as a physicist, was well positioned to answer those questions. Using an old mathematical trick, he had devised an innovative instrument that could capture the spectrum of a distant light source, and he was using it during nighttime observations at a telescope at Kitt Peak Observatory, in Arizona. Every element and molecule absorbs and emits a unique collection of wavelengths of energy, and astronomers can read these spectra as fingerprints that reveal the composition of cosmic bodies. Pilcher suggested that he use the instrument to observe Europa.
They thought it would take a week to get a useful spectrum of one of Jupiter’s moons. “I went and got it in one night, maybe two,” Ridgway recalls. He showed the data to Pilcher, who showed it to his advisor, Tom McCord. Their analyses, published in Science in December 1972, suggested that water ice covered at least half, and possibly all, of the surface of Europa. (They also confirmed that the Jovian moons Ganymede and Callisto, both of which are larger than Europa, had ice on their surfaces.)
In a 1980 paper, scientists reported that Europa looked “cracked like a broken eggshell” and compared it to a white pool ball fouled by a felt-tip pen.
One year later, the Pioneer 10 spacecraft, which had launched in March 1972, passed close enough to Europa to take a photo. The grainy image was provocative enough to justify sending Pioneer 11—which launched in 1973—to swing by on its way to Saturn and then out of the solar system.
Other potentially habitable locations in the solar system include two moons of Saturn: Titan (top) and Enceladus (bottom). Enceladus boasts liquid water beneath its surface and spews icy geysers into space. Titan has a surface rich in organic molecules and likely also has a liquid-water ocean beneath its crust.
NASA/JPL/UNIVERSITY OF ARIZONA/UNIVERSITY OF IDAHO (TITAN); NASA/JPL-CALTECH/SPACE SCIENCE INSTITUTE (ENCELADUS)
But Europa really started to come into focus in 1979, after the Voyager 2 spacecraft sped past the moon on July 9. (Voyager 1 also passed near Europa, but Voyager 2 had better photos.) The photographs the spacecraft beamed back revealed a smooth, bright surface, crisscrossed by long linear marks and low ridges; they might have been cracks or cliffs. In a 1980 NASA paper describing the observation, scientists reported that Europa looked “cracked like a broken eggshell” and compared it to a white pool ball fouled by a felt-tip pen. A 1983 Nature paper fueled interest in Europa by proposing that those features were consistent with liquid water and regular resurfacing, like the work of a natural Zamboni machine.
The Galileo mission, which launched in 1989 to study Jupiter’s atmosphere and the composition of Europa and other moons, encountered complications: the spacecraft’s primary antenna failed to deploy, which severely limited the data that could be transmitted to Earth.
But what did come back, after Galileo reached the system in 1995, further highlighted the moon’s extraordinary features and continues to energize scientists. “We have a lot of tantalizing glimpses of things,” Prockter says.
Among other things, Galileo’s magnetometer revealed a wildly varying magnetic field. Ice is a poor conductor, but liquid salt water isn’t, and Europa’s magnetic oscillations pointed to something moving beneath the surface. Its readings fit the idea of a global ocean being pushed, pulled, and heated by the tidal forces of Jupiter and its moon companions. They also lined up with earlier theoretical predictions of liquid water near the surface of icy moons. “We are pretty certain there’s an ocean there,” Prockter says, “but there is a chance that it might be something really exotic we don’t understand.” The only way to know for sure, she says, is to go back.
Other images from Galileo confirmed what telescope observations had long suggested: that Europa sports a youthful appearance despite its advanced age. It likely formed at the same time as Jupiter and the rest of the solar system, about 4.5 billion years ago, yet its surface—as dated by the oldest craters—is less than 100 million years old. “That’s a long time for us mere mortals,” says Prockter, “but in geological terms, it was born yesterday. The surface is very, very young.” The cracks and crevices on Europa suggest that giant ice plates on its surface collide, break apart, shove under and over each other, and refreeze.
The Pioneer 10 spacecraft, which launched in March 1972, passed close enough to take the first flyby photo of Europa.
A photograph beamed back by the Voyager 2 spacecraft, taken on the morning of July 9, 1979, illuminated Europa’s mysterious nature in better detail.
The Galileo mission, which launched in 1989 to study Jupiter’s atmosphere and the composition of its moons, including Europa, brought the moon’s extraordinary features into focus.
This striking image of Europa, captured in September 2022 by a camera on the Juno spacecraft, reveals many of the features that are driving scientists to want to go back.
The longer scientists stared at Europa, the more mysteries emerged—like the questions around those ubiquitous dark ridges, often in pairs, that splatter the surface like a Jackson Pollock painting. Theorists have been busy devising explanations. Perhaps they’re made by ice volcanoes or geysers, or cracks where liquid water from subsurface pools rose, froze, and crumbled as the opening closed again. Maybe they resulted from subduction, which occurs on Earth in plate tectonics, as one giant sheet of ice slid and crumpled under another. “I’ve lost count of the number of different models for forming those landforms, but we really don’t know how they form,” Prockter says. “Part of the reason is that geology is based on Earth geology, but it’s not like Earth.”
One particularly striking image of Europa, captured in September 2022 by a camera on the Juno spacecraft, which is currently exploring Jupiter, reveals many of the features that are driving scientists to want to take a closer look. It shows the side of Europa that always faces Jupiter, bathed in sunlight. The moon’s surface is covered with cracks, streaks, and ridges where water may rise from the ocean beneath, or where irradiated surface material may sink lower. It also shows the “chaos terrains”—remarkably messy areas suggesting that giant pieces of ice have broken off, moved around, and refrozen, bolstering the case for geological activity similar to plate tectonics on Earth.
However, Juno’s brief two-hour flyby failed to answer questions about how those features formed or to confirm the existence of a buried ocean. For planetary scientists and astrophysicists, Clipper’s data should help fill in those missing pieces. The mission will also push our relationship with Europa into new, unexplored territory.
What all those previous missions did do was help build enthusiasm for the plan to get to Europa, a plan that has evolved dramatically over the last 20 years. Originally, scientists wanted orbiters and landers, and NASA and ESA were working together on a joint mission with multiple spacecraft. Those plans fizzled, but in 2013—as a result of the 2011 Decadal Survey, a report that sets the priorities for space exploration for the next 10 years—NASA approved a plan to send an orbiter. By 2015, the agency had selected the instruments on board. Independently, ESA moved forward with its own mission, with a broader goal of studying Jupiter’s icy moons.
“The Voyager mission transformed Europa from a light in the sky to a geologic world, and then the Galileo mission did the transformation to an ocean world,” says Diana Blaney, a JPL geophysicist who leads the Clipper team charged with using a mapping image spectrometer to identify molecules on Europa’s surface. “Hopefully, Clipper will bring the transformation to a habitable world.”
Getting in close
Researchers have long searched for signs of habitability in the solar system. Landers and rovers on Mars have found evidence of liquid water, mostly long gone, and organic molecules, which contain carbon, often in chains or rings. The building blocks of biological organisms—including nucleic acids and proteins—all contain carbon, which is why scientists get excited when they find organic molecules. Their presence could indicate that it’s possible for the precursors of life to form.
But it’s not enough just to have promising pieces in place. Any alien species would also have to find a way to grow and survive. That far from the sun, photosynthesis is likely impossible. Organisms would instead have to be fueled by chemical energy, much as microbial extremophiles near black smokers and hydrothermal vents on Earth’s seafloor live off minerals and methane.
The possibility for Europan life is at the mercy of the moon’s geophysics, says Lynnae Quick, a planetary geophysicist at NASA’s Goddard Space Flight Center. In fact, she argues that you can’t have one without the other. Europa seems to host the necessary ingredients for life. But ingredients alone, on Europa as in the kitchen, won’t spontaneously combine in the right way on their own. Other forces have to intervene: the moon needs to shift and squeeze, with heat, to mix the minerals from the seafloor with the salt water and any irradiated particles that seep down from the icy surface. “We need something to stir the pot, and I think the geophysical processes do that,” says Quick, whose graduate work on cryovolcanism in alien worlds led to her recruitment to join Clipper. She’s particularly excited about the possibility of finding pockets of warm salty water, trapped just beneath the surface, that could be abodes for life.
“Europa is my favorite body in the solar system,” Quick confesses. But she notes that other ocean worlds also offer promising places to look for signs of life. Those include Enceladus, a small moon of Saturn that, like Europa, has an icy crust with an ocean beneath. Images from the Cassini mission in 2005 revealed that geysers on the south pole of Enceladus spew water and organic molecules into space, feeding Saturn’s outermost ring.
However, Europa is bigger than Enceladus and is more likely to have a surface covered in icy plates that move in a way similar to Earth’s plate tectonics. This sort of activity would help combine the ingredients for life. Ganymede, another Jovian moon and the solar system’s largest, also likely has a liquid ocean, but sandwiched between two ice layers; without an interface between water and minerals, life is less probable. Other possible places to look include Titan, Saturn’s biggest moon, which also probably hides a liquid-water ocean beneath an ice crust. (Quick is an investigator on Dragonfly, a mission to explore Titan, scheduled to launch in 2028.)
Many of the challenges facing mission engineers revolve around energy: Europa receives only a small fraction of the sunlight Earth does. Clipper addresses the problem with gargantuan solar panels, spanning 30 meters when fully extended.
To look for the signs and signals of habitability, Clipper will use nine primary instruments. These will take pictures of the surface, look for water plumes, use ground-penetrating radar to measure the icy shell and search for the ocean below, and take precise measurements of the magnetic field.
The spacecraft will pass close enough to the moon to sample its thin atmosphere, and it will use mass spectrometry to identify molecules in the gases it finds there. Another instrument will enable scientists to analyze dust from the surface that has been kicked into the atmosphere by meteorite collisions. With any luck, they’ll be able to tell if that dust originated from below—from the enclosed ocean or subsurface lakes trapped in the ice—or from above, as fragments that migrated from the violent volcanoes on the nearby moon Io. Either scenario would be interesting to planetary geologists, but if the molecules were organic and came from below, they would help build the case that life could exist there.
ESA’s Juice mission has a similar suite of instruments, and scientists from the two teams meet regularly to plan for ways to jointly exploit the data when it starts coming in—five or six years from now. “This is really very good for scientists in the planetary community,” says Lorenzo Bruzzone, a telecommunications engineer at the University of Trento who leads the Juice mission’s radar tool team. He’s long been involved in efforts to get to Europa and the rest of the Jovian system.
Because Juice will visit the other ocean-bearing Galilean moons, Bruzzone says, data from that mission can be combined with Clipper’s to generate a more comprehensive picture of the geological processes and potential habitability of all the ocean worlds. “We can analyze the differences in subsurface geology to better understand the evolution of the Jupiter system,” he says. Those differences may help explain, for example, why three of the Galilean moons formed as icy worlds while the fourth, Io, became a volcanic hellscape.
Jupiter’s radiation has the potential to interfere with every measurement, turning a meaningful signal into a mess of digital snow, like static on a television screen.
To make sure those instruments work when they get there, engineers and designers for both missions have had to contend with a raft of challenges. Many of them revolve around energy: at Jupiter’s distance from the sun, Europa receives only a small fraction of the sunlight Earth does. Clipper addresses the problem with gargantuan solar panels, which will span 30 meters when fully extended. (An earlier proposal for a mission to Europa included nuclear batteries, but that idea was expensive, and it was ultimately scrapped.)
In addition, Jupiter’s magnetic field is more than 10,000 times more powerful than Earth’s, accelerating already-energetic particles around the planet to create an intense radiation environment. The radiation has the potential to interfere with every measurement—turning a meaningful signal into a mess of digital snow, like static on a television screen—and can threaten the integrity of the instruments.
To slow the accumulation of radiation damage, Clipper won’t orbit Europa when it reaches the moon in 2030; instead, it will make about 50 flybys over four years, swooping nearer and farther from the destructive radiation field. At its closest, it will pass just 16 miles above the surface. The name points back to fast 19th-century sailing vessels, but it also describes the journey. The craft will sail past the world, over and over. In between passes, its distance from Jupiter will give it openings to transmit data back to Earth.
Those first transmissions will have been generations—if not centuries—in the making. Some of the people who laid the groundwork for the mission, decades ago, have already died. Makris, at MIT, says that when scientists were first discussing how to get to Europa, Ron Greeley, a planetary geologist and NASA advisor who proposed and fiercely advocated for missions to the moon, told him that space travel spans generations: “He likened it to building a cathedral.” Prockter notes that by the time Clipper’s data comes in, she’ll be in her late 60s. “I will have spent my entire career on Clipper,” she says. Quick, at 39, is one of the youngest members of the science team.
In April 2023, the European Space Agency launched Juice to explore several ocean-bearing moons of Jupiter. In July 2032, it will fly 400 kilometers above Europa’s surface, twice.
ESA/ATG MEDIALAB
Many of the scientists involved in Clipper—including Pappalardo, Prockter, and Quick—are already planning ways to use its insights for future missions to other worlds. But it’s Europa that holds the most promise, at least for the moment.
Pappalardo thrills at the prospect of finding a Europan neighborhood that might be just right for life. “What if we find a place that’s kind of an oasis, where there are hot spots or warm spots that we detect with a thermal imager?” he says.
Ultimately, Pappalardo says, his hope is that Clipper finds enough evidence to make a strong case for sending a lander someday. The mission’s observations could also tell scientists where to land it: “That would be a place where we’d say, well, we really need to go and scoop up some of that stuff from below the surface, look at it with a microscope, put it in a mass spectrometer, and do the next step, which is to search for life.”
Stephen Ornes is a science writer based in Nashville, Tennessee.