How aging clocks can help us understand why we age—and if we can reverse it

Be honest: Have you ever looked up someone from your childhood on social media with the sole intention of seeing how they’ve aged? 

One of my colleagues, who shall remain nameless, certainly has. He recently shared a photo of a former classmate. “Can you believe we’re the same age?” he asked, with a hint of glee in his voice. A relative also delights in this pastime. “Wow, she looks like an old woman,” she’ll say when looking at a picture of someone she has known since childhood. The years certainly are kinder to some of us than others.

But wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging, under the hood. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging (such as elevated cholesterol or markers of inflammation), might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active. 

Doctors have long used functional tests that measure their patients’ strength or the distance they can walk, for example, or simply “eyeball” them to guess whether they look fit enough to survive some treatment regimen, says Tamir Chandra, who studies aging at the Mayo Clinic. 

But over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. What they’ve found is changing our understanding of aging itself. 

“Aging clocks” are new scientific tools that can measure how our organs are wearing out, giving us insight into our mortality and health. They hint at our biological age. While chronological age is simply how many birthdays we’ve had, biological age is meant to reflect something deeper. It measures how our bodies are handling the passing of time and—perhaps—lets us know how much more of it we have left. And while you can’t change your chronological age, you just might be able to influence your biological age.

It’s not just scientists who are using these clocks. Longevity influencers like Bryan Johnson often use them to make the case that they are aging backwards. “My telomeres say I’m 10 years old,” Johnson posted on X in April. The Kardashians have tried them too (Khloé was told on TV that her biological age was 12 years below her chronological age). Even my local health-food store offers biological age testing. Some are pushing the use of clocks even further, using them to sell unproven “anti-aging” supplements.

The science is still new, and few experts in the field—some of whom affectionately refer to it as “clock world”—would argue that an aging clock can definitively reveal an individual’s biological age. 

But their work is revealing that aging clocks can offer so much more than an insta-brag, a snake-oil pitch—or even just an eye-catching number. In fact, they are helping scientists unravel some of the deepest mysteries in biology: Why do we age? How do we age? When does aging begin? What does it even mean to age?

Ultimately, and most importantly, they might soon tell us whether we can reverse the whole process.

Clocks kick off

The way your genes work can change. Molecules called methyl groups can attach to DNA, controlling the way genes make proteins. This process is called methylation, and it can potentially occur at millions of points along the genome. These epigenetic markers, as they are known, can switch genes on or off, or increase or decrease how much protein they make. They’re not part of our DNA, but they influence how it works.

In 2011, Steve Horvath, then a biostatistician at the University of California, Los Angeles, took part in a study that was looking for links between sexual orientation and these epigenetic markers. Steve is straight; he says his twin brother, Markus, who also volunteered, is gay.

That study didn’t find a link between DNA methylation and sexual orientation. But when Horvath looked at the data, he noticed a different trend—a very strong link between age and methylation at around 88 points on the genome. He once told me he fell off his chair when he saw it.

Many of the affected genes had already been linked to age-related brain and cardiovascular diseases, but it wasn’t clear how methylation might be related to those diseases. 


In 2013, Horvath collected methylation data from 8,000 tissue and cell samples to create what he called the Horvath clock—essentially a mathematical model that could estimate age on the basis of DNA methylation at 353 points on the genome. From a tissue sample, it was able to detect a person’s age within a range of 2.9 years.
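Conceptually, a methylation clock is a regularized linear model: it predicts age as a weighted sum of methylation levels at a selected set of CpG sites, with the weights learned from people whose chronological ages are known. The sketch below illustrates the idea on synthetic data. To be clear, this is not Horvath's method (he used elastic-net regression over thousands of candidate sites to select his 353); the site counts, drift rates, and ridge penalty here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 200 people, methylation levels ("beta values" in
# [0, 1]) at 50 CpG sites. A handful of sites drift linearly with age;
# the rest are uninformative noise. All numbers are made up for illustration.
n_people, n_sites, n_informative = 200, 50, 10
ages = rng.uniform(20, 80, n_people)
drift = np.zeros(n_sites)
drift[:n_informative] = rng.uniform(-0.005, 0.005, n_informative)  # per-year change
baseline = rng.uniform(0.2, 0.8, n_sites)
betas = np.clip(
    baseline + np.outer(ages, drift) + rng.normal(0, 0.01, (n_people, n_sites)),
    0, 1,
)

# The "clock": age ~ intercept + weights . methylation, fit with a ridge
# penalty (a stand-in for the elastic net that real clocks use).
lam = 1.0
X = np.hstack([np.ones((n_people, 1)), betas])
penalty = lam * np.eye(X.shape[1])
penalty[0, 0] = 0  # don't penalize the intercept
w = np.linalg.solve(X.T @ X + penalty, X.T @ ages)

predicted = X @ w
mae = np.mean(np.abs(predicted - ages))
print(f"median absolute error: {np.median(np.abs(predicted - ages)):.2f} years")
```

On real data, the interesting quantity is not the fit itself but the residual: a person whose predicted "methylation age" sits well above the regression line is, by this definition, aging faster than average.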

That clock changed everything. Its publication in 2013 marked the birth of “clock world.” To some, the possibilities were almost endless. If a model could work out what average aging looks like, it could potentially estimate whether someone was aging unusually fast or slowly. It could transform medicine and fast-track the search for an anti-aging drug. It could help us understand what aging is, and why it happens at all.

The epigenetic clock was a success story in “a field that, frankly, doesn’t have a lot of success stories,” says João Pedro de Magalhães, who researches aging at the University of Birmingham, UK.

It took a few years, but as more aging researchers heard about the clock, they began incorporating it into their research and even developing their own clocks. Horvath became a bit of a celebrity. Scientists started asking for selfies with him at conferences, he says. Some researchers even made T-shirts bearing the front page of his 2013 paper.

Some of the many other aging clocks developed since have become notable in their own right. Examples include the PhenoAge clock, which incorporates health data such as blood cell counts and signs of inflammation along with methylation, and the Dunedin Pace of Aging clock, which tells you how quickly or slowly a person is aging rather than pointing to a specific age. Many of the clocks measure methylation, but some look at other variables, such as proteins in blood or certain carbohydrate molecules that attach to such proteins.

Today, there are hundreds or even thousands of clocks out there, says Chiara Herzog, who researches aging at King’s College London and is a member of the Biomarkers of Aging Consortium. Everyone has a favorite. Horvath himself favors his GrimAge clock, which was named after the Grim Reaper because it is designed to predict time to death.

That clock was trained on data collected from people who were monitored for decades, many of whom died in that period. Horvath won’t use it to tell people when they might die of old age, he stresses, saying that it wouldn’t be ethical. Instead, it can be used to deliver a biological age that hints at how long a person might expect to live. Someone who is 50 but has a GrimAge of 60 can assume that, compared with the average 50-year-old, they might be a bit closer to the end.

GrimAge is not perfect. While it can strongly predict time to death given the health trajectory someone is on, no aging clock can predict if someone will start smoking or get a divorce (which generally speeds aging) or suddenly take up running (which can generally slow it). “People are complicated,” Horvath tells MIT Technology Review. “There’s a huge error bar.”

On the whole, the clocks are pretty good at making predictions about health and lifespan. They’ve been able to predict that people over the age of 105 have lower biological ages, which tracks given how rare it is for people to make it past that age. A higher epigenetic age has been linked to declining cognitive function and signs of Alzheimer’s disease, while better physical and cognitive fitness has been linked to a lower epigenetic age.

Black-box clocks

But accuracy is a challenge for all aging clocks, and part of the problem lies in how they were designed. Most were trained to link methylation with chronological age, and they are still judged on how well they can predict it. Yet the best clocks deliver an estimate that reflects how far a person's biology deviates from the average, so you don't actually want them to be too close, says Lucas Paulo de Lima Camillo, head of machine learning at Shift Bioscience, who was awarded $10,000 by the Biomarkers of Aging Consortium for developing a clock that could estimate age within a range of 2.55 years.

[Illustration by Leon Edler: a cartoon alarm clock shrugging]

“There’s this paradox,” says Camillo. If a clock is really good at predicting chronological age, that’s all it will tell you—and it probably won’t reveal much about your biological age. No one needs an aging clock to tell them how many birthdays they’ve had. Camillo says he’s noticed that when the clocks get too close to “perfect” age prediction, they actually become less accurate at predicting mortality.

Therein lies the other central issue for scientists who develop and use aging clocks: What is the thing they are really measuring? It is a difficult question for a field whose members notoriously fail to agree on the basics. (Everything from the definition of aging to how it occurs and why is up for debate among the experts.)

They do agree that aging is incredibly complex. A methylation-based aging clock might tell you about how that collection of chemical markers compares across individuals, but at best, it’s only giving you an idea of their “epigenetic age,” says Chandra. There are probably plenty of other biological markers that might reveal other aspects of aging, he says: “None of the clocks measure everything.” 

We don’t know why some methyl groups appear or disappear with age, either. Are these changes causing damage? Or are they a by-product of it? Are the epigenetic patterns seen in a 90-year-old a sign of deterioration? Or have they been responsible for keeping that person alive into very old age?

To make matters even more complicated, two different clocks can give similar answers by measuring methylation at entirely different regions of the genome. No one knows why, or which regions might be the best ones to focus on.

“The biomarkers have this black-box quality,” says Jesse Poganik at Brigham and Women’s Hospital in Boston. “Some of them are probably causal, some of them may be adaptive … and some of them may just be neutral”: either “there’s no reason for them not to happen” or “they just happen by random chance.”

What we know is that, as things stand, none of the clocks are precise enough to predict the biological age of a single person (sorry, Khloé). Putting the same biological sample through five different clocks will give you five wildly different results.

Even the same clock can give you different answers if you put a sample through it more than once. “They’re not yet individually predictive,” says Herzog. “We don’t know what [a clock result] means for a person, [or if] they’re more or less likely to develop disease.”

And it’s why plenty of aging researchers—even those who regularly use the clocks in their work—haven’t bothered to measure their own epigenetic age. “Let’s say I do a clock and it says that my biological age … is five years older than it should be,” says Magalhães. “So what?” He shrugs. “I don’t see much point in it.”

You might think this lack of clarity would make aging clocks pretty useless in a clinical setting. But plenty of clinics are offering them anyway. Some longevity clinics are more careful, and will regularly test their patients with a range of clocks, noting their results and tracking them over time. Others will simply offer an estimate of biological age as part of a longevity treatment package.

And then there are the people who use aging clocks to sell supplements. While no drug or supplement has been definitively shown to make people live longer, that hasn’t stopped the lightly regulated wellness industry from pushing a range of “treatments” that range from lotions to herbal pills all the way through to stem-cell injections.

Some of these people come to aging meetings. I was in the audience at an event when one CEO took to the stage to claim he had reversed his own biological age by 18 years—thanks to the supplement he was selling. Tom Weldon of Ponce de Leon Health told us his gray hair was turning brown. His biological age was supposedly reversing so rapidly that he had reached “longevity escape velocity.”

But if the people who buy his supplements expect some kind of Benjamin Button effect, they might be disappointed. His company hasn’t yet conducted a randomized controlled trial to demonstrate any anti-aging effects of that supplement, called Rejuvant. Weldon says that such a trial would take years and cost millions of dollars, and that he’d “have to increase the price of our product more than four times” to pay for one. (The company has so far tested the active ingredient in mice and carried out a provisional trial in people.)

More generally, Horvath says he “gets a bad taste in [his] mouth” when people use the clocks to sell products and “make a quick buck.” But he thinks that most of those sellers have genuine faith in both the clocks and their products. “People truly believe their own nonsense,” he says. “They are so passionate about what they discovered, they fall into this trap of believing [their] own prejudices.” 

The accuracy of the clocks is at a level that makes them useful for research, but not for individual predictions. Even if a clock did tell someone they were five years younger than their chronological age, that wouldn’t necessarily mean the person could expect to live five years longer, says Magalhães. “The field of aging has long been a rich ground for snake-oil salesmen and hype,” he says. “It comes with the territory.” (Weldon, for his part, says Rejuvant is the only product that has “clinically meaningful” claims.) 

In any case, Magalhães adds that he thinks any publicity is better than no publicity.

And there’s the rub. Most people in the longevity field seem to have mixed feelings about the trendiness of aging clocks and how they are being used. They’ll agree that the clocks aren’t ready for consumer prime time, but they tend to appreciate the attention. Longevity research is expensive, after all. With a surge in funding and an explosion in the number of biotech companies working on longevity, aging scientists are hopeful that innovation and progress will follow. 

So they want to be sure that the reputation of aging clocks doesn’t end up being tarnished by association. Because while influencers and supplement sellers are using their “biological ages” to garner attention, scientists are now using these clocks to make some remarkable discoveries. Discoveries that are changing the way we think about aging.

How to be young again

Two little mice lie side by side, anesthetized and unconscious, as Jim White prepares his scalpel. The animals are of the same breed but look decidedly different. One is a youthful three-month-old, its fur thick, black, and glossy. By comparison, the second mouse, a 20-month-old, looks a little the worse for wear. Its fur is graying and patchy. Its whiskers are short, and it generally looks kind of frail.

But the two mice are about to have a lot more in common. White, with some help from a colleague, makes incisions along the side of each mouse’s body and into the upper part of an arm and leg on the same side. He then carefully stitches the two animals together—membranes, fascia, and skin. 

The procedure takes around an hour, and the mice are then roused from their anesthesia. At first, the two still-groggy animals pull away from each other. But within a few days, they seem to have accepted that they now share their bodies. Soon their circulatory systems will fuse, and the animals will share a blood flow too.

[Illustration by Leon Edler: a cartoon man in profile wearing a wristwatch, a lit stick of dynamite in his mouth]

White, who studies aging at Duke University, has been stitching mice together for years; he has performed this strange procedure, known as heterochronic parabiosis, more than a hundred times. And he’s seen a curious phenomenon occur. The older mice appear to benefit from the arrangement. They seem to get younger.

Experiments with heterochronic parabiosis have been performed for decades, but typically scientists keep the mice attached to each other for only a few weeks, says White. In their experiment, he and his colleagues left the mice attached for three months—equivalent to around 10 human years. The team then carefully separated the animals to assess how each of them had fared. “You’d think that they’d want to separate immediately,” says White. “But when you detach them … they kind of follow each other around.”

The most striking result of that experiment was that the older mice who had been attached to a younger mouse ended up living longer than other mice of a similar age. “[They lived] around 10% longer, but [they] also maintained a lot of [their] function,” says White. They were more active and maintained their strength for longer, he adds.

When his colleagues, including Poganik, applied aging clocks to the mice, they found that their epigenetic ages were lower than expected. “The young circulation slowed aging in the old mice,” says White. The effect seemed to last, too—at least for a little while. “It preserved that youthful state for longer than we expected,” he says.

The young mice went the other way and appeared biologically older, both while they were attached to the old mice and shortly after they were detached. But in their case, the effect seemed to be short-lived, says White: “The young mice went back to being young again.” 

To White, this suggests that something about the “youthful state” might be programmed in some way. That perhaps it is written into our DNA. Maybe we don’t have to go through the biological process of aging. 

This gets at a central debate in the aging field: What is aging, and why does it happen? Some believe it’s simply a result of accumulated damage. Some believe that the aging process is programmed; just as we grow limbs, develop a brain, reach puberty, and experience menopause, we are destined to deteriorate. Others think programs that play an important role in our early development just turn out to be harmful later in life by chance. And there are some scientists who agree with all of the above.

White’s theory is that being old is just “a loss of youth,” he says. If that’s the case, there’s a silver lining: Knowing how youth is lost might point toward a way to somehow regain it, perhaps by restoring those youthful programs in some way. 

Dogs and dolphins

Horvath’s eponymous clock was developed by measuring methylation in DNA samples taken from tissues around the body. It seems to represent aging in all these tissues, which is why Horvath calls it a pan-tissue clock. Given that our organs are thought to age differently, it was remarkable that a single clock could measure aging in so many of them.

But Horvath had ambitious plans for an even more universal clock: a pan-species model that could measure aging in all mammals. He started out, in 2017, with an email campaign that involved asking hundreds of scientists around the world to share samples of tissues from animals they had worked with. He tried zoos, too.   


“I learned that people had spent careers collecting [animal] tissues,” he says. “They had freezers full of [them].” Amenable scientists would ship those frozen tissues, or just DNA, to Horvath’s lab in California, where he would use them to train a new model.

Horvath says he initially set out to profile 30 different species. But he ended up receiving around 15,000 samples from 200 scientists, representing 348 species—including everything from dogs to dolphins. Could a single clock really predict age in all of them?

“I truly felt it would fail,” says Horvath. “But it turned out that I was completely wrong.” He and his colleagues developed a clock that assessed methylation at 36,000 locations on the genome. The result, which was published in 2023 as the pan-mammalian clock, can estimate the age of any mammal and even the maximum lifespan of the species. The data set is open to anyone who wants to download it, he adds: “I hope people will mine the data to find the secret of how to extend a healthy lifespan.”

The pan-mammalian clock suggests that there is something universal about aging—not just that all mammals experience it in a similar way, but that a similar set of genetic or epigenetic factors might be responsible for it.

Comparisons between mammals also support the idea that the slower methylation changes occur, the longer the lifespan of the animal, says Nelly Olova, an epigeneticist who researches aging at the University of Edinburgh in the UK. “DNA methylation slowly erodes with age,” she says. “We still have the instructions in place, but they become a little messier.” The research in different mammals suggests that cells can take only so much change before they stop functioning.

“There’s a finite amount of change that the cell can tolerate,” she says. “If the instructions become too messy and noisy … it cannot support life.”

Olova has been investigating exactly when aging clocks first begin to tick—in other words, the point at which aging starts. Clocks are trained on data from volunteers, matching the patterns of methylation in their DNA to their chronological ages. The trained clocks are then typically used to estimate the biological age of adults. But they can also be used on samples from children. Or babies. They can even be used to work out the biological age of the cells that make up embryos.

In her research, Olova used adult skin cells, which—thanks to Nobel Prize–winning research in the 2000s—can be “reprogrammed” back to a state resembling that of the pluripotent stem cells found in embryos. When Olova and her colleagues used a “partial reprogramming” approach to take cells close to that state, they found that the closer they got to the entirely reprogrammed state, the “younger” the cells were. 

It was around 20 days after the cells had been reprogrammed into stem cells that they reached the biological age of zero according to the clock used, says Olova. “It was a bit surreal,” she says. “The pluripotent cells measure as minus 0.5; they’re slightly below zero.”

Vadim Gladyshev, a prominent aging researcher at Harvard University, has since proposed that the same negative level of aging might apply to embryos. After all, some kind of rejuvenation happens during the early stages of embryo formation—an aged egg cell and an aged sperm cell somehow create a brand-new cell. The slate is wiped clean.

Gladyshev calls this point “ground zero.” He posits that it’s reached sometime during the “mid-embryonic state.” At this point, aging begins. And so does “organismal life,” he argues. “It’s interesting how this coincides with philosophical questions about when life starts,” says Olova. 

Some have argued that life begins when sperm meets egg, while others have suggested that the point when embryonic cells start to form some kind of unified structure is what counts. The ground zero point is when the body plan is set out and cells begin to organize accordingly, she says. “Before that, it’s just a bunch of cells.”

This doesn’t mean that life begins at the embryonic state, but it does suggest that this is when aging begins—perhaps as the result of “a generational clearance of damage,” says Poganik.

It is early days—no pun intended—for this research, and the science is far from settled. But knowing when aging begins could help inform attempts to rewind the clock. If scientists can pinpoint an ideal biological age for cells, perhaps they can find ways to get old cells back to that state. There might be a way to slow aging once cells reach a certain biological age, too. 

“Presumably, there may be opportunities for targeting aging before … you’re full of gray hair,” says Poganik. “It could mean that there is an ideal window for intervention which is much earlier than our current geriatrics-based approach.”

When young meets old

When White first started stitching mice together, he would sit and watch them for hours. “I was like, look at them go! They’re together, and they don’t even care!” he says. Since then, he’s learned a few tricks. He tends to work with female mice, for instance—the males tend to bicker and nip at each other, he says. The females, on the other hand, seem to get on well. 

The effect their partnership appears to have on their biological ages, if only temporarily, is among the ways aging clocks are helping us understand that biological age is plastic to some degree. White and his colleagues have also found, for instance, that stress seems to increase biological age, but that the effect can be reversed once the stress stops. Both pregnancy and covid-19 infections have a similar reversible effect.

Poganik wonders if this finding might have applications for human organ transplants. Perhaps there’s a way to measure the biological age of an organ before it is transplanted and somehow rejuvenate organs before surgery. 

But new data from aging clocks suggests that this might be more complicated than it sounds. Poganik and his colleagues have been using methylation clocks to measure the biological age of samples taken from recently transplanted hearts in living people. 


Young hearts do well in older bodies, but the biological age of these organs eventually creeps up to match that of their recipient. The same is true for older hearts in younger bodies, says Poganik, who has not yet published his findings. “After a few months, the tissue may assimilate the biological age of the organism,” he says. 

If that’s the case, the benefits of young organs might be short-lived. It also suggests that scientists working on ways to rejuvenate individual organs may need to focus their anti-aging efforts on more systemic means of rejuvenation—for example, stem cells that repopulate the blood. Reprogramming these cells to a youthful state, perhaps one a little closer to “ground zero,” might be the way to go.

Whole-body rejuvenation might be some way off, but scientists are still hopeful that aging clocks might help them find a way to reverse aging in people.

“We have the machinery to reset our epigenetic clock to a more youthful state,” says White. “That means we have the ability to turn the clock backwards.” 

Can we repair the internet?

From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. Books by three influential figures—the legal scholar who coined “net neutrality,” a former Meta executive, and the web’s own inventor—propose radical approaches to fixing it. But are these luminaries the right people for the job? Though each writes with conviction, and sometimes even inventiveness, the solutions they present reveal blind spots.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity
Tim Wu
KNOPF, 2025

In The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity, Tim Wu argues that a few platform companies have too much concentrated power and must be dismantled. Wu, a prominent Columbia professor who popularized the principle that a free internet requires all online traffic to be treated equally, believes that existing legal mechanisms, especially anti-monopoly laws, offer the best way to achieve this goal.

Pairing economic theory with recent digital history, Wu shows how platforms have shifted from giving to users to extracting from them. He argues that our failure to understand their power has only encouraged them to grow, displacing competitors along the way. And he contends that convenience is what platforms most often exploit to keep users entrapped. “The human desire to avoid unnecessary pain and inconvenience,” he writes, may be “the strongest force out there.”

He cites Google’s and Apple’s “ecosystems” as examples, showing how users can become dependent on such services as a result of their all-encompassing seamlessness. To Wu, this isn’t a bad thing in itself. The ease of using Amazon to stream entertainment, make online purchases, or help organize day-to-day life delivers obvious gains. But when powerhouse companies like Amazon, Apple, and Alphabet win the battle of convenience with so many users—and never let competitors get a foothold—the result is “industry dominance” that must now be reexamined.

The measures Wu advocates—and that appear the most practical, as they draw on existing legal frameworks and economic policies—are federal anti-monopoly laws, utility caps that limit how much companies can charge consumers for service, and “line of business” restrictions that prohibit companies from operating in certain industries.


Anti-monopoly provisions and antitrust laws are effective weapons in our armory, Wu contends, pointing out that they have been successfully used against technology companies in the past. He cites two well-known cases. The first is the 1960s antitrust case brought by the US government against IBM, which helped create competition in the computer software market that enabled companies like Apple and Microsoft to emerge. The 1982 AT&T case that broke the telephone conglomerate up into several smaller companies is another instance. In each, the public benefited from the decoupling of hardware, software, and other services, leading to more competition and choice in a technology market.

But will past performance predict future results? It’s not yet clear whether these laws can be successful in the platform age. The 2025 antitrust case against Google—in which a judge ruled that the company did not have to divest itself of its Chrome browser as the US Justice Department had proposed—reveals the limits of pursuing tech breakups through the law. The 2001 antitrust case brought against Microsoft likewise failed to separate the company from its web browser and mostly kept the conglomerate intact. Wu noticeably doesn’t discuss the Microsoft case when arguing for antitrust action today.

Nick Clegg, until recently Meta’s president of global affairs and a former deputy prime minister of the UK, takes a position very different from Wu’s: that trying to break up the biggest tech companies is misguided and would degrade the experience of internet users. In How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict, Clegg acknowledges Big Tech’s monopoly over the web. But he believes punitive legal measures like antitrust laws are unproductive and can be avoided by means of regulation, such as rules for what content social media can and can’t publish. (It’s worth noting that Meta is facing its own antitrust case, involving whether it should have been allowed to acquire Instagram and WhatsApp.)

How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict
Nick Clegg
BODLEY HEAD, 2025

Clegg also believes Silicon Valley should take the initiative to reform itself. He argues that encouraging social media networks to “open up the books” and share their decision-making power with users is more likely to restore some equilibrium than contemplating legal action as a first resort.

But some may be skeptical of a former Meta exec and politician who worked closely with Mark Zuckerberg yet was unable to usher in such changes while inside one of the biggest social media companies. What will only compound this skepticism is the selective history found in Clegg’s book, which briefly acknowledges some scandals (like the one surrounding Cambridge Analytica’s data harvesting from Facebook users in 2016) but refuses to discuss other pertinent ones. For example, Clegg laments the “fractured” nature of the global internet today but fails to acknowledge Facebook’s own role in this splintering.

Breaking up Big Tech through antitrust laws would hinder innovation, says Clegg, arguing that the idea “completely ignores the benefits users gain from large network effects.” Users stick with these outsize channels because they can find “most of what they’re looking for,” he writes, like friends and content on social media and cheap consumer goods on Amazon and eBay.

Wu might concede this point, but he would disagree with Clegg’s claim that maintaining the status quo benefits users. “The traditional logic of antitrust law doesn’t work,” Clegg insists. Instead, he believes less sweeping regulation can help make Big Tech less dangerous while ensuring a better user experience.

Clegg has seen both sides of the regulatory coin: He worked in David Cameron’s government passing national laws for technology companies to follow and then moved to Meta to help the company navigate those types of nation-specific obligations. He bemoans the hassle and complexity Silicon Valley faces in trying to comply with differing rules across the globe, some set by “American federal agencies” and others by “Indian nationalists.”

But with the resources such companies command, surely they are more than equipped to cope? Given that Meta itself has previously meddled in access to the internet (such as in India, whose telecommunications regulator ultimately blocked its Free Basics internet service for violating net neutrality rules), this complaint seems suspect coming from Clegg. The real priority, he argues, is not any new nation-specific laws but a global “treaty that protects the free flow of data between signatory countries.”


Clegg believes that these nation-specific technology obligations—a recent one is Australia’s ban on social media for people under 16—usually reflect fallacies about the technology’s human impact, a subject that can be fraught with anxiety. Such laws have proved ineffective and tend to taint the public’s understanding of social networks, he says. There is some truth to his argument here, but reading a book in which a former Facebook executive dismisses techno-determinism—that is, the argument that technology makes people do or think certain things—may be cold comfort to those who have seen the harm technology can do.

In any case, Clegg’s defensiveness about social networks may not gain much favor from users themselves. He stresses the need for more personal responsibility, arguing that Meta doesn’t ever intend for users to stay on Facebook or Instagram endlessly: “How long you spend on the app in a single session is not nearly as important as getting you to come back over and over again.” Social media companies want to serve you content that is “meaningful to you,” he claims, not “simply to give you a momentary dopamine spike.” All this feels disingenuous at best.

What Clegg advocates—unsurprisingly—is not a breakup of Big Tech but a push for it to become “radically transparent,” whether on its own or, if necessary, with the help of federal legislators. He also wants platforms to bring users more into their governance processes (by using Facebook’s model of community forums to help improve their apps and products, for example). Finally, Clegg also wants Big Tech to give users more meaningful control of their data and how companies such as Meta can use it.

Here Clegg shares common ground with the inventor of the web, Tim Berners-Lee, whose own proposal for reform advances a technically specific vision for doing just that. In his memoir/manifesto This Is for Everyone: The Unfinished Story of the World Wide Web, Berners-Lee acknowledges that his initial vision—of a technology he hoped would remain open-source, collaborative, and completely decentralized—is a far cry from the web that we know today.

This Is for Everyone: The Unfinished Story of the World Wide Web
Tim Berners-Lee
FARRAR, STRAUS & GIROUX, 2025

If there’s any surviving manifestation of his original project, he says, it’s Wikipedia, which remains “probably the best single example of what I wanted the web to be.” His best idea for moving power from Silicon Valley platforms into the hands of users is to give them more data control. He pushes for a universal data “pod” he helped develop, known as “Solid” (an abbreviation of “social linked data”).

The system—which was originally developed at MIT—would offer a central site where people could manage data ranging from credit card information to health records to social media comment history. “Rather than have all this stuff siloed off with different providers across the web, you’d be able to store your entire digital information trail in a single private repository,” Berners-Lee writes.

The Solid product may look like a kind of silver bullet in an age when data harvesting is familiar and data breaches are rampant. Placing greater control with users and enabling them to see “what data [i]s being generated about them” does sound like a tantalizing prospect.

But some people may have concerns about, for example, merging their confidential health records with data from personal devices (like heart rate info from a smart watch). No matter how much user control and decentralization Berners-Lee may promise, recent data scandals (such as cases in which period-tracking apps misused clients’ data) may be on people’s minds.

Berners-Lee believes that centralizing user data in a product like Solid could save people time and improve daily life on the internet. “An alien coming to Earth would think it was very strange that I had to tell my phone the same things again and again,” he complains about the experience of using different airline apps today.

With Solid, everything from vaccination records to credit card transactions could be kept within the digital vault and plugged into different apps. Berners-Lee believes that AI could also help people make more use of this data—for example, by linking meal plans to grocery bills. Still, while he’s optimistic about how AI and Solid could work together to improve users’ lives, he is vague on how to ensure that chatbots manage such personal data sensitively and safely.

Berners-Lee generally opposes regulation of the web (except in the case of teenagers and social media algorithms, where he sees a genuine need). He believes in internet users’ individual right to control their own data; he is confident that a product like Solid could “course-correct” the web from its current “exploitative” and extractive direction.

Of the three writers’ approaches to reform, it is Wu’s that has shown some effectiveness of late. Companies like Google have been forced to give competitors some advantage through data sharing, and they have now seen limits on how their systems can be used in new products and technologies. But in the current US political climate, will antitrust laws continue to be enforced against Big Tech?

Clegg may get his way on one issue: limiting new nation-specific laws. President Donald Trump has confirmed that he will use tariffs to penalize countries that pass their own national laws targeting US tech companies. And given the posture of the Trump administration, it doesn’t seem likely that Big Tech will see more regulation in the US. Indeed, social networks have seemed emboldened (Meta, for example, removed fact-checkers and relaxed content moderation rules after Trump’s election win). In any case, the US hasn’t passed a major piece of federal internet legislation since 1996.

If enforcing anti-monopoly laws through the courts isn’t possible, Clegg’s push for a US-led omnibus deal—setting consensual rules for data and acceptable standards of human rights—may be the only way to make some more immediate improvements.

In the end, there is not likely to be any single fix for what ails the internet today. But the ideas the three writers agree on—greater user control, more data privacy, and increased accountability from Silicon Valley—are surely the outcomes we should all fight for.

Nathan Smith is a writer whose work has appeared in the Washington Post, the Economist, and the Los Angeles Times.

An Earthling’s guide to planet hunting

The pendant on Rebecca Jensen-Clem’s necklace is only about an inch wide, composed of 36 silver hexagons entwined in a honeycomb mosaic. At the Keck Observatory, in Hawaii, just as many segments make up a mirror that spans 33 feet, reflecting images of uncharted worlds for her to study. 

Jensen-Clem, an astronomer at the University of California, Santa Cruz, works with the Keck Observatory to figure out how to detect new planets without leaving our own. Typically, this pursuit faces an array of obstacles: Wind, fluctuations in atmospheric density and temperature, or even a misaligned telescope mirror can create a glare from a star’s light that obscures the view of what’s around it, rendering any planets orbiting the star effectively invisible. And what light Earth’s atmosphere doesn’t obscure, it absorbs. That’s why researchers who study these distant worlds often work with space telescopes that circumvent Earth’s pesky atmosphere entirely, such as the $10 billion James Webb Space Telescope. 

But there’s another way over these hurdles. At her lab among the redwoods, Jensen-Clem and her students experiment with new technologies and software to help Keck’s primary honeycomb mirror and its smaller, “deformable” mirror see more clearly. Using measurements from atmospheric sensors, deformable mirrors are designed to adjust shape rapidly, so they can correct for distortions caused by Earth’s atmosphere on the fly. 

This general imaging technique, called adaptive optics, has been common practice since the 1990s. But Jensen-Clem is looking to level up the game with extreme adaptive optics technologies, which aim to deliver the highest image quality over a small field of view. Her group, in particular, does so by tackling issues involving wind or the primary mirror itself. The goal is to focus starlight so precisely that a planet can be visible even if its host star is a million to a billion times brighter.

In April, she and her former collaborator Maaike van Kooten were named co-recipients of the Breakthrough Prize Foundation’s New Horizons in Physics Prize. The prize announcement says they earned this early-career research award for their potential “to enable the direct detection of the smallest exoplanets” through a repertoire of methods the two women have spent their careers developing. 

In July, Jensen-Clem was also announced as a member of a new committee for the Habitable Worlds Observatory, a concept for a NASA space telescope that would spend its career on the prowl for signs of life in the universe. She’s tasked with defining the mission’s scientific goals by the end of the decade.

The Keck Observatory’s 10-meter primary mirror features a honeycomb structure with 36 individual mirror segments.
ETHAN TWEEDIE

“In adaptive optics, we spend a lot of time on simulations, or in the lab,” Jensen-Clem says. “It’s been a long road to see that I’ve actually made things better at the observatory in the past few years.”

Jensen-Clem has long appreciated astronomy for its more mind-bending qualities. In seventh grade, she became fascinated by how time slows down near a black hole when her dad, an aerospace engineer, explained that concept to her. After starting her bachelor’s degree at MIT in 2008, she became taken with how a distant star can seem to disappear—either suddenly winking out or gently fading away, depending on the kind of object that passes in front of it. “It wasn’t quite exoplanet science, but there was a lot of overlap,” she says.


During this time, Jensen-Clem began sowing the seeds for one of her prize-winning methods after her teaching assistant recommended that she apply for an internship at NASA’s Jet Propulsion Laboratory. There, she worked on a setup that could perfect the orientation of a large mirror. Such mirrors are more difficult to realign than the smaller, deformable ones, whose shape-changing segments cater to Earth’s fluctuating atmosphere.

“At the time, we were saying, ‘Oh, wouldn’t it be really cool to install one of these at Keck Observatory?’” Jensen-Clem says. The idea stuck around. She even wrote about it in a fellowship application when she was gearing up to start her graduate work at Caltech. And after years of touch-and-go development, Jensen-Clem succeeded in installing the system—which uses a technology called a Zernike wavefront sensor—on Keck’s primary mirror about a year ago. “My work as a college intern is finally done,” she says. 

The system, which is currently used for occasional recalibrations rather than continuous adjustments, includes a special kind of glass plate that bends the light rays from the mirror to reveal a specific pattern. The detector can pick up a hairbreadth misalignment in that picture: If one hexagon is pushed too far back or forward, its brightness changes. Even the tiniest misalignment is important to correct, because “when you’re studying a faint object, suddenly you’re much more susceptible to little mistakes,” Jensen-Clem says.

She has also been working to perfect the craft of molding Keck’s deformable mirror. This instrument, which reflects light that’s been rerouted from the primary mirror, is much smaller—only six inches wide—and is designed to reposition as often as 2,000 times a second to combat atmospheric turbulence and create the clearest picture possible. “If you just look up at the night sky and see stars twinkling, it’s happening fast. So we have to go fast too,” Jensen-Clem says. 

Even at this rapid rate of readjustment, there’s still a lag. The deformable mirror is usually about one millisecond behind the actual outdoor conditions at any given time. “When the [adaptive optics] system can’t keep up, then you aren’t going to get the best resolution,” says van Kooten, Jensen-Clem’s former collaborator, who is now at the National Research Council Canada. This lag has proved especially troublesome on windy nights. 

Jensen-Clem thought it was an unsolvable problem. “The reason we have that delay is because we need to run computations and then move the deformable mirror,” she says. “You’re never going to do those things instantaneously.”

But while she was still a postdoc at UC Berkeley, she came across a paper that posited a solution. Its authors proposed that using previous measurements and simple algebra to predict how the atmosphere will change, rather than trying to keep up with it in real time, would yield better results. She wasn’t able to test the idea at the time, but coming to UCSC and working with Keck presented the perfect opportunity. 
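
The predictive approach boils down to time-series forecasting: fit a linear model to the last few wavefront measurements and command the mirror for where the atmosphere is heading, not where it was a millisecond ago. The sketch below is a toy illustration of that idea on synthetic data, not the actual Keck software; every signal and parameter in it is invented for the example.

```python
import numpy as np

# Toy sketch of predictive wavefront control (not the Keck code):
# predict the next measurement from the last k frames via linear least
# squares, instead of correcting with the already stale latest frame.

rng = np.random.default_rng(0)

# Simulate a slowly drifting atmospheric distortion (arbitrary units).
t = np.arange(500)
signal = np.sin(0.05 * t) + 0.1 * rng.standard_normal(500)

k = 5  # number of past frames fed to the predictor

# Sliding windows: row i holds frames i..i+k-1; the target is frame i+k.
X = np.lib.stride_tricks.sliding_window_view(signal[:-1], k)
y = signal[k:]

# Fit predictor coefficients to the historical data.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coeffs

# Compare against simply reusing the last frame, which stands in for a
# conventional (lagged) correction.
err_predictive = np.mean((y - pred) ** 2)
err_lagged = np.mean((y - X[:, -1]) ** 2)

print(f"lagged MSE:     {err_lagged:.4f}")
print(f"predictive MSE: {err_predictive:.4f}")
```

Even this crude linear predictor beats reusing the stale last frame, which is the essence of why the technique sharpens images on windy nights.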

Around this time, Jensen-Clem invited van Kooten to join her team at UCSC as a postdoc because of their shared interest in the predictive software. “I didn’t have a place to live at first, so she put me up in her guest room,” van Kooten says. “She’s just so supportive at every level.”

After creating experimental software to try out at Keck, the team compared the predictive version with the more standard adaptive optics, examining how well each imaged an exoplanet without its drowning in starlight. They found that the predictive software could image even faint exoplanets two to three times more clearly. The results, which Jensen-Clem published in 2022, were part of what earned her the New Horizons in Physics Prize. 

Thayne Currie, an astronomer at the University of Texas, San Antonio, says that these new techniques will become especially vital as researchers build bigger and bigger ground-based facilities to capture images of exoplanets—including upcoming projects such as the Extremely Large Telescope at the European Southern Observatory and the Giant Magellan Telescope in Chile. “There’s an incredible amount that we’re learning about the universe, and it is really driven by technology advances that are very, very new,” Currie says. “Dr. Jensen-Clem’s work is an example of that kind of innovation.”

In May, one of Jensen-Clem’s graduate students went back to Hawaii to reinstall the predictive software at Keck. This time, the program isn’t just a trial run; it’s there to stay. The new software has shown it can refocus artificial starlight. Next, it will have to prove it can handle the real thing. 

And in about a year, Jensen-Clem and her students and colleagues will brace themselves for a flood of observations from the European Space Agency’s Gaia mission, which recently finished measuring the motion, temperature, and composition of billions of stars over more than a decade. 

When the project releases its next set of data—slated for December 2026—Jensen-Clem’s team aims to hunt for new exoplanetary systems using clues like the wobbles in a star’s motion caused by the gravitational tugs of planets orbiting around it. Once a system has been identified, exoplanet photographers will then be able to shoot the hidden planets using a new instrument at Keck that can reveal more about their atmospheres and temperatures. 

There will be a mountain of data to sort through, and an even greater torrent of starlight to refocus. Thankfully, Jensen-Clem has spent more than a decade refining just the techniques she’ll need: “This time next year,” she says, “we’ll be racing to throw all our adaptive optics tricks at these systems and detect as many of these objects as possible.”

Jenna Ahart is a science journalist specializing in the physical sciences. 

This test could reveal the health of your immune system

Attentive readers might have noticed my absence over the last couple of weeks. I’ve been trying to recover from a bout of illness.

It got me thinking about the immune system, and how little I know about my own immune health. The vast array of cells, proteins, and biomolecules that works to defend us from disease is mind-bogglingly complicated. Immunologists are still getting to grips with how it all works.

Those of us who aren’t immunologists are even more in the dark. I had my flu jab last week and have no idea how my immune system responded. Will it protect me from the flu virus this winter? Is it “stressed” from whatever other bugs it has encountered in the last few months? And since my husband had his shot at the same time, I can’t help wondering how our responses will compare. 

So I was intrigued to hear about a new test that is being developed to measure immune health. One that even gives you a score.

Writer David Ewing Duncan hoped that the test would reveal more about his health than any other he’d ever taken. He described the experience in a piece published jointly by MIT Technology Review and Aventine.

The test David took was developed by John Tsang at Yale University and his colleagues. The team wanted to work out a way of measuring how healthy a person’s immune system might be.

It’s a difficult thing to do, for several reasons. First, there’s the definition of “healthy.” I find it’s a loose concept that becomes more complicated the more you think about it. Yes, we all have a general sense of what it means to be in good health. But is it just the absence of disease? Is it about resilience? Does it have something to do with withstanding the impact of aging?

Tsang and his colleagues wanted to measure “deviation from health.” They looked at blood samples from 228 people who had immune diseases that were caused by single-gene mutations, as well as 42 other people who were free from disease. All those individuals could be considered along a health spectrum.

Another major challenge lies in trying to capture the complexity of the immune system, which involves hundreds of proteins and cells interacting in various ways. (Side note: Last year, MIT Technology Review recognized Ang Cui at Harvard University as one of our Innovators under 35 for her attempts to make sense of it all using machine learning. She created the Immune Dictionary to describe how hundreds of proteins affect immune cells—something she likens to a “periodic table” for the immune system.)

Tsang and his colleagues tackled this by running a series of tests on those blood samples. The vast scope of these tests is what sets them apart from the blood tests you might get during a visit to the doctor. The team looked at how genes were expressed by cells in the blood. They measured a range of immune cells and more than 1,300 proteins.

The team members used machine learning to find correlations between these measurements and health, allowing them to create an immune health score for each of the volunteers. They call it the immune health metric, or IHM.
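
The modeling step behind a score like this can be pictured with a deliberately simplified stand-in: train a classifier to separate “disease” blood profiles from “healthy” ones, then read its probability output as a score between 0 and 1. The data, features, and model below are all invented for illustration and are not the Yale team’s actual method.

```python
import numpy as np

# Toy stand-in for an immune-health score: logistic regression on
# synthetic "blood measurements", with the model's probability output
# serving as a 0-to-1 health score.

rng = np.random.default_rng(1)
n_features = 20  # stand-ins for cell counts and protein levels

healthy = rng.normal(0.0, 1.0, size=(40, n_features))
disease = rng.normal(0.8, 1.0, size=(40, n_features))  # shifted profiles

X = np.vstack([healthy, disease])
y = np.array([1] * 40 + [0] * 40)  # 1 = healthy, 0 = disease

# Plain logistic regression fit by gradient descent.
w = np.zeros(n_features)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

def score(profile):
    """Toy health score in [0, 1]; higher means more 'healthy-like'."""
    return 1.0 / (1.0 + np.exp(-(profile @ w + b)))

print(f"mean healthy score: {score(healthy).mean():.2f}")
print(f"mean disease score: {score(disease).mean():.2f}")
```

The real work, of course, is in choosing measurements that generalize beyond the training cohort; the published model drew on far richer data than two tidy synthetic clusters.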

When they used this approach to find the immune scores of people who had already volunteered in other studies, they found that the IHM seemed to align with other measures of health, such as how people respond to diseases, treatments, and vaccines. The study was published in the journal Nature Medicine last year.

The researchers behind it hope that a test like this could one day help identify people who are at risk of cancer and other diseases, or explain why some people respond differently to treatments or immunizations.

But the test isn’t ready for clinical use. If, like me, you’re finding yourself curious to know your own IHM, you’ll just have to wait.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How do our bodies remember?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

“Like riding a bike” is shorthand for the remarkable way that our bodies remember how to move. Most of the time when we talk about muscle memory, we’re not talking about the muscles themselves but about the memory of a coordinated movement pattern that lives in the motor neurons, which control our muscles. 

Yet in recent years, scientists have discovered that our muscles themselves have a memory for movement and exercise.

When we move a muscle, the movement may appear to begin and end, but a host of tiny changes continues to unfold inside our muscle cells. And the more we move, as with riding a bike or other kinds of exercise, the more those cells begin to build a memory of that exercise.


We all know from experience that a muscle gets bigger and stronger with repeated work. As the pioneering muscle scientist Adam Sharples—a professor at the Norwegian School of Sport Sciences in Oslo and a former professional rugby player in the UK—explained to me, skeletal muscle cells are unique in the human body: They’re long and skinny, like fibers, and have multiple nuclei. The fibers grow larger not by dividing but by recruiting muscle satellite cells—stem cells specific to muscle that are dormant until activated in response to stress or injury—to contribute their own nuclei and support muscle growth and regeneration. Those nuclei often stick around for a while in the muscle fibers, even after periods of inactivity, and there is evidence that they may help accelerate the return to growth once you start training again. 

Sharples’s research focuses on what’s called epigenetic muscle memory. “Epigenetic” refers to changes in gene expression that are caused by behavior and environment—the genes themselves don’t change, but the way they work does. In general, exercise switches on genes that help make muscles grow more easily. When you lift weights, for example, small molecules called methyl groups detach from the outside of certain genes, making them more likely to turn on and produce proteins that affect muscle growth (also known as hypertrophy). Those changes persist; if you start lifting weights again, you’ll add muscle mass more quickly than before.

In 2018, Sharples’s muscle lab was the first to show that human skeletal muscle has an epigenetic memory of muscle growth after exercise: Muscle cells are primed to respond more rapidly to exercise in the future, even after a monthslong (and maybe even yearslong) pause. In other words: Your muscles remember how to do it.

Subsequent studies from Sharples and others have produced similar findings in mice and in older humans, offering further evidence of epigenetic muscle memory across species and into later life. Even aging muscles have the capacity to remember when you work out.

At the same time, Sharples points to intriguing new evidence that muscles also remember periods of atrophy—and that young and old muscles remember this differently. While young human muscle seems to have what he calls a “positive” memory of wasting—“in that it recovers well after a first period of atrophy and doesn’t experience greater loss in a repeated atrophy period,” he explains—aged muscle in rats seems to have a more pronounced “negative” memory of atrophy, in which it appears “more susceptible to greater loss and a more exaggerated molecular response when muscle wasting is repeated.” Basically, young muscle tends to bounce back from periods of muscle loss—“ignoring” it, in a sense—while older muscle is more sensitive to it and might be more susceptible to further loss in the future. 

Illness can also lead to this kind of “negative” muscle memory; in a study of breast cancer survivors more than a decade after diagnosis and treatment, participants showed an epigenetic muscle profile of people much older than their chronological age. But get this: After five months of aerobic exercise training, participants were able to reset the epigenetic profile of their muscle back toward that of muscle seen in an age-matched control group of healthy women.  

What this shows is that “positive” muscle memories can help counteract “negative” ones. The takeaway? Your muscles have their own kind of intelligence. The more you use them, the more they can harness it to become a lasting beneficial resource for your body in the future. 

Bonnie Tsui is the author of On Muscle: The Stuff That Moves Us and Why It Matters (Algonquin Books, 2025).

3 takeaways about climate tech right now

On Monday, we published our 2025 edition of Climate Tech Companies to Watch. This marks the third time we’ve put the list together, and it’s become one of my favorite projects to work on every year. 

In the journalism world, it’s easy to get caught up in the latest news, whether it’s a fundraising round, research paper, or startup failure. Curating this list gives our team a chance to take a step back and consider the broader picture. What industries are making progress or lagging behind? Which countries or regions are seeing quick changes? Who’s likely to succeed? 

This year is an especially interesting moment in the climate tech world, something we grappled with while choosing companies. Here are three of my takeaways from the process of building this list. 

1. It’s hard to overstate China’s role in energy technology right now. 

To put it bluntly, China’s progress on cleantech is wild. The country is dominating in installing wind and solar power and building EVs, and it’s also pumping government money into emerging technologies like fusion energy. 

We knew we wanted this list to reflect China’s emergence as a global energy superpower, and we ended up including two Chinese firms in key industries: renewables and batteries.

In 2024, Chinese companies were the top four wind turbine makers worldwide. Envision was in the second spot, with 19.3 gigawatts of new capacity added last year. But the company isn’t limited to wind; it’s working to help power heavy industries like steel and chemicals with technology like green hydrogen. 

Batteries are also a hot industry in China, and we’re seeing progress in tech beyond the lithium-ion cells that currently dominate EVs and energy storage on the grid. We represent that industry with HiNa Battery Technology, a leading startup building sodium-ion batteries, which could be cheaper than today’s options. The company’s batteries are already being used in electric mopeds and grid installations. 

2. Energy demand from data centers and AI is on everyone’s mind, especially in the US. 

Another trend we noticed this year was a fixation on the growing energy demand of data centers, including massive planned dedicated facilities that power AI models. (Here’s another nudge to check out our Power Hungry series on AI and energy, in case you haven’t explored it already.) 

Even if their technology has nothing to do with data centers, companies are trying to show how they can be valuable in this age of rising energy demand. Some are signing lucrative deals with tech giants that could provide the money needed to help bring their product to market. 

Kairos Power hopes to be one such energy generator, building next-generation nuclear reactors. Last year, the company signed an agreement with Google that will see the tech giant buy up to 500 megawatts of electricity from Kairos’s first reactors through 2035. 

In a more direct play, Redwood Materials is stringing together used EV batteries to build microgrids that could power—you guessed it—data centers. The company’s first installation fired up this year, and while it’s small, it’s an interesting example of a new use for old technology. 

3. Materials continue to be an area that’s ripe for innovation. 

In a new essay that accompanies the list, Bill Gates lays out the key role of innovation in making progress on climate technology. One thing that jumped out at me while I was reading that piece was a number: 30% of global greenhouse-gas emissions come from manufacturing, including cement and steel production. 

I’ve obviously covered materials and heavy industry for years. But it still strikes me just how much innovation we need in the most important materials we use to scaffold our world. 

Several companies on this year’s list focus on materials. Cement, which accounts for 7% of global greenhouse-gas emissions, is once again represented: Cemvision is working to use alternative fuel sources and starting materials to clean up the dirty industry.

And Cyclic Materials is trying to reclaim and recycle rare earth magnets, a crucial technology that underpins everything from speakers to EVs and wind turbines. Today, only about 0.2% of the rare earths in discarded devices are recycled, but the company is building multiple facilities in North America in hopes of changing that. 

Our list of 10 Climate Tech Companies to Watch highlights businesses we think have a shot at helping the world address and adapt to climate change with the help of everything from established energy technologies to novel materials. It’s a representation of this moment, and I hope you enjoy taking a spin through it.

How healthy am I? My immunome knows the score.  

This story is a collaboration between MIT Technology Review and Aventine, a nonprofit research foundation that creates and supports content about how technology and science are changing the way we live.

It’s not often you get a text about the robustness of your immune system, but that’s what popped up on my phone last spring. Sent by John Tsang, an immunologist at Yale, the text came after his lab had put my blood through a mind-boggling array of newfangled tests. The result—think of it as a full-body, high-resolution CT scan of my immune system—would reveal more about the state of my health than any test I had ever taken. And it could potentially tell me far more than I wanted to know.

“David,” the text read, “you are the red dot.”

Tsang was referring to an image he had attached to the text that showed a graph with a scattering of black dots representing other people whose immune systems had been evaluated—and a lone red one. There also was a score: 0.35.

I had no idea what any of this meant.

The red dot was the culmination of an immuno-quest I had begun on an autumn afternoon a few months earlier, when a postdoc in Tsang’s lab drew several vials of my blood. It was also a significant milestone in a decades-long journey I’ve taken as a journalist covering life sciences and medicine. Over the years, I’ve offered myself up as a human guinea pig for hundreds of tests promising new insights into my health and mortality. In 2001, I was one of the first humans to have my DNA sequenced. Soon after, in the early 2000s, researchers tapped into my proteome—proteins circulating in my blood. Then came assessments of my microbiome, metabolome, and much more. I have continued to test-drive the latest protocols and devices, amassing tens of terabytes of data on myself, and I’ve reported on the results in dozens of articles and a book called Experimental Man. Over time, the tests have gotten better and more informative, but no test I had previously taken promised to deliver results more comprehensive or closer to revealing the truth about my underlying state of health than what John Tsang was offering.

Over the years, I’ve offered myself up as a human guinea pig for hundreds of tests promising new insights into my health and mortality. But no test I had previously taken promised to deliver results more comprehensive or closer to revealing the truth about my underlying state of health.

It also was not lost on me that I’m now 20-plus years older than I was when I took those first tests. Back in my 40s, I was ridiculously healthy. Since then, I’ve been battered by various pathogens, stresses, and injuries, including two bouts of covid and long covid—and, well, life.

But I’d kept my apprehensions to myself as Tsang, a slim, perpetually smiling man who directs the Yale Center for Systems and Engineering Immunology, invited me into his office in New Haven to introduce me to something called the human immunome.

John Tsang in his office
John Tsang has helped create a new test for your immune system.
JULIE BIDWELL

Made up of 1.8 trillion cells and trillions more proteins, metabolites, mRNA, and other biomolecules, every person’s immunome is different, and it is constantly changing. It’s shaped by our DNA, past illnesses, the air we have breathed, the food we have eaten, our age, and the traumas and stresses we have experienced—in short, everything we have ever been exposed to physically and emotionally. Right now, your immune system is hard at work identifying and fending off viruses and rogue cells that threaten to turn cancerous—or maybe already have. And it is doing an excellent job of it all, or not, depending on how healthy it happens to be at this particular moment.

Yet as critical as the immunome is to each of us, this universe of cells and molecules has remained largely beyond the reach of modern medicine—a vast yet inaccessible operating system that powerfully influences everything from our vulnerability to viruses and cancer to how well we age to whether we tolerate certain foods better than others.

Now, thanks to a slew of new technologies and to scientists like Tsang, who is on the Steering Committee of the Chan Zuckerberg Biohub New York, understanding this vital and mysterious system is within our grasp, paving the way for powerful new tools and tests to help us better assess, diagnose, and treat diseases.

Already, new research is revealing patterns in the ways our bodies respond to stress and disease. Scientists are creating contrasting portraits of weak and robust immunomes—portraits that someday, it’s hoped, could offer new insights into patient care and perhaps detect illnesses before symptoms appear. There are plans afoot to deploy this knowledge and technology on a global scale, which would enable scientists to observe the effects of climate, geography, and countless other factors on the immunome. The results could transform what it means to be healthy and how we identify and treat disease.

It all begins with a test that can tell you whether your immune system is healthy or not.

Reading the immunome

Sitting in his office last fall, Tsang—a systems immunologist whose expertise combines computer science and immunology—began my tutorial in immunomics by introducing me to a study that he and his team wrote up in a 2024 paper published in Nature Medicine. It described the results of measurements made on blood samples taken from 270 subjects—tests similar to the ones Tsang’s team would be running on me. In the study, Tsang and his colleagues looked at the immune systems of 228 patients diagnosed with a variety of genetic disorders and a control group of 42 healthy people.

To help me visualize what my results might look like, Tsang opened his laptop to reveal several colorful charts from the study, punctuated by black dots representing each person evaluated. The results reminded me vaguely of abstract paintings by Joan Miró. But in place of colorful splotches, whirls, and circles were an assortment of scatter plots, Gantt charts, and heat maps tinted in greens, blues, oranges, and purples.

It all looked like gibberish to me.

Luckily, Tsang was willing to serve as my guide. Flashing his perpetually patient smile, he explained that these colorful jumbles depicted what his team had uncovered about each subject after taking blood samples and assessing the details of how well their immune cells, proteins, mRNA, and other immune system components were doing their job.

IBRAHIM RAYINTAKATH

The results placed people—represented by the individual dots—on a left-to-right continuum, ranging from those with unhealthy immunomes on the left to those with healthy immunomes on the right. Background colors, meanwhile, were used to identify people with different medical conditions affecting their immune systems. For example, olive green indicated those with autoimmune disorders, while orange marked individuals with no known disease history. Tsang said he and his team would be placing me on a similar graph after they finished analyzing my blood.

Tsang’s measurements go significantly beyond what can be discerned from the handful of immune biomarkers that people routinely get tested for today. “The main immune cell panel typically ordered by a physician is called a CBC differential,” he told me. CBC, which stands for “complete blood count,” is a decades-old type of analysis that counts levels of red blood cells, hemoglobin, and basic immune cell types (neutrophils, lymphocytes, monocytes, basophils, and eosinophils). Changes in these levels can indicate whether a person’s immune system might be reacting to a virus or other infection, cancer, or something else. Other blood tests—like one that looks for elevated levels of C-reactive protein, which can indicate inflammation associated with heart disease—are more specific than the CBC. But they still rely on blunt counting—in this case of certain proteins.

Tsang’s assessment, by contrast, tests up to a million cells, proteins, mRNA and immune biomolecules—significantly more than the CBC and others. His protocol is designed to paint a more holistic portrait of a person’s immune system by not only counting cells and molecules but also by assessing their interactions. The CBC “doesn’t tell me as a physician what the cells being counted are doing,” says Rachel Sparks, a clinical immunologist who was the lead author of the Nature Medicine study and is now a translational medicine physician with the drug giant AstraZeneca. “I just know that there are more neutrophils than normal, which may or may not indicate that they’re behaving badly. We now have technology that allows us to see at a granular level what a cell is actually doing when a virus appears—how it’s changing and reacting.”

Tsang’s measurements go significantly beyond what can be discerned from the handful of immune biomarkers that people routinely get tested for today. His assessment tests up to a million cells, proteins, mRNA and immune biomolecules.

Such breakthroughs have been made possible thanks to a raft of new and improved technologies that have evolved over the past decade, allowing scientists like Tsang and Sparks to explore the intricacies of the immunome with newfound precision. These include devices that can count myriad different types of cells and biomolecules, as well as advanced sequencers that identify and characterize DNA, RNA, proteins, and other molecules. There are now also instruments that can measure the thousands of changes and reactions that occur inside a single immune cell as it responds to a virus or other threat.

Tsang and Sparks’s team used data generated by such measurements to identify and characterize a series of signals distinctive to unhealthy immune systems. Then they used the presence or absence of these signals to create a numerical assessment of the health of a person’s immunome—a score they call an “immune health metric,” or IHM.

Rachel Sparks outdoors in a green space
Clinical immunologist Rachel Sparks hopes new tests can improve medical care.
JARED SOARES

To make sense of the crush of data being collected, Tsang’s team used machine-learning algorithms that correlated the results of the many measurements with a patient’s known health status and age. They also used AI to compare their findings with immune system data collected elsewhere. All this allowed them to determine and validate an IHM score for each person, and to place it on their spectrum, identifying that person as healthy or not.

It all came together for the first time with the publication of the Nature Medicine paper, in which Tsang and his colleagues reported the results from testing multiple immune variables in the 270 subjects. They also announced a remarkable discovery: Patients with different kinds of diseases reacted with similar disruptions to their immunomes. For instance, many showed a lower level of the aptly named natural killer immune cells, regardless of what they were suffering from. Critically, the immune profiles of those with diagnosed diseases tended to look very different from those belonging to the outwardly healthy people in the study. And, as expected, immune health declined in the older patients.

But then the results got really interesting. In a few cases, the immune systems of unhealthy and healthy people looked similar, with some people appearing near the “healthy” area of the chart even though they were known to have diseases. Most likely this was because their symptoms were in remission and not causing an immune reaction at the moment when their blood was drawn, Tsang told me.

In other cases, people without a known disease showed up on the chart closer to those who were known to be sick. “Some of these people who appear to be in good health are overlapping with pathology that traditional metrics can’t spot,” says Tsang, whose Nature Medicine paper reported that roughly half the healthy individuals in the study had IHM scores that overlapped with those of people known to be sick. Either these seemingly healthy people had normal immune systems that were busy fending off, say, a passing virus, or their immune systems had been impacted by aging and the vicissitudes of life. Or, potentially more worrisome, they were harboring an illness or stress that was not yet making them ill but might do so eventually.

These findings have obvious implications for medicine. Spotting a low immune score in a seemingly healthy person could make it possible to identify and start treating an illness before symptoms appear, diseases worsen, or tumors grow and metastasize. IHM-style evaluations could also provide clues as to why some people respond differently to viruses like the one that causes covid, and why vaccines—which are designed to activate a healthy immune system—might not work as well in people whose immune systems are compromised.

Spotting a low immune score in a seemingly healthy person could make it possible to identify and start treating an illness before symptoms appear, diseases worsen, or tumors grow and metastasize.

“One of the more surprising things about the last pandemic was that all sorts of random younger people who seemed very healthy got sick and then they were gone,” says Mark Davis, a Stanford immunologist who helped pioneer the science being developed in labs like Tsang’s. “Some had underlying conditions like obesity and diabetes, but some did not. So the question is, could we have pointed out that something was off with these folks’ immune systems? Could we have diagnosed that and warned people to take extra precautions?”

Tsang’s IHM test is designed to answer a simple question: What is the relative health of your immune system? But there are other assessments being developed to provide more detailed information on how the body is doing. Tsang’s own team is working on a panel of additional scores aimed at getting finer detail on specific immune conditions. These include a test that measures the health of a person’s bone marrow, which makes immune cells. “If you have a bone marrow stress or inflammatory condition in the bone marrow, you could have lower capacity to produce cells, which will be reflected by this score,” he says. Another detailed metric will measure protein levels to predict how a person will respond to a virus.

Tsang hopes that an IHM-style test will one day be part of a standard physical exam—a snapshot of a patient’s immune system that could inform care. For instance, has a period of intense stress compromised the immune system, making it less able to fend off this season’s flu? Will someone’s score predict a better or worse response to a vaccine or a cancer drug? How does a person’s immune system change with age?

Or, as I anxiously wondered while waiting to learn my own score, will the results reveal an underlying disorder or disease, silently ticking away until it shows itself?

Toward a human immunome project  

The quest to create advanced tests like the IHM for the immune system began more than 15 years ago, when scientists like Mark Davis became frustrated with a field in which research—primarily in mice—was focused mostly on individual immune cells and proteins. In 2007 he launched the Stanford Human Immune Monitoring Center, one of the first efforts to conceptualize the human immunome as a holistic, body-wide network in human beings. Speaking by Zoom from his office in Palo Alto, California, Davis told me that the effort had spawned other projects, including a landmark twin study showing that much of our immune variation is not genetic—as was then the prevailing theory—but heavily influenced by environmental factors, a major shift in scientists’ understanding.

Shai Shen-Orr
Shai Shen-Orr sees a day when people will check their immune scores on an app.
COURTESY OF SHAI SHEN-ORR

Davis and others also laid the groundwork for tests like John Tsang’s by discovering how a T cell—among the most common and important immune players—can recognize pathogens, cancerous cells, and other threats, triggering defensive measures that can include destroying the threat. This and other discoveries have revealed many of the basic mechanics of how immune cells work, says Davis, “but there’s still a lot we have to learn.”

One researcher working with Davis in those early days was Shai Shen-Orr, who is now director of the Zimin Institute for AI Solutions in Healthcare at the Technion-Israel Institute of Technology, based in Haifa, Israel. (He’s also a frequent collaborator with Tsang.) Shen-Orr, like Tsang, is a systems immunologist. He recalls that in 2007, when he was a postdoc in Davis’s lab, immunologists had identified around 100 cell types and a similar number of cytokines—proteins that act as messengers in the immune system. But they weren’t able to measure them simultaneously, which limited visibility into how the immune system works as a whole. Today, Shen-Orr says, immunologists can measure hundreds of cell types and thousands of proteins and watch them interact.

Shen-Orr’s current lab has developed its own version of an immunome test that he calls IMM-AGE (short for “immune age”), the basics of which were published in a 2019 paper in Nature Medicine. IMM-AGE looks at the composition of people’s immune systems—how many of each type of immune cell they have and how these numbers change as they age. His team has used this information primarily to ascertain a person’s risk of heart disease.

Shen-Orr also has been a vociferous advocate for expanding the pool of test samples, which now come mostly from Americans and Europeans. “We need to understand why different people in different environments react differently and how that works,” he says. “We also need to test a lot more people—maybe millions.”

Tsang has seen why a limited sample size can pose problems. In 2013, he says, researchers at the National Institutes of Health came up with a malaria vaccine that was effective for almost everyone who got it during clinical trials conducted in Maryland. “But in Africa,” he says, “it only worked for about 25% of the people.” He attributes this to the significant differences in genetics, diet, climate, and other environmental factors that cause people’s immunomes to develop differently. “Why?” he asks. “What exactly was different about the immune systems in Maryland and Tanzania? That’s what we need to understand so we can design personalized vaccines and treatments.”

“What exactly was different about the immune systems in Maryland and Tanzania? That’s what we need to understand so we can design personalized vaccines and treatments.”

John Tsang

For several years, Tsang and Shen-Orr have advocated going global with testing, “but there has been resistance,” Shen-Orr says. “Look, medicine is conservative and moves slowly, and the technology is expensive and labor intensive.” They finally got the audience they needed at a 2022 conference in La Jolla, California, convened by the Human Immunome Project, or HIP. (The organization was originally founded in 2016 to create more effective vaccines but had recently changed its name to emphasize a pivot from just vaccines to the wider field of immunome science.) It was in La Jolla that they met HIP’s then-new chairperson, Jane Metcalfe, a cofounder of Wired magazine, who saw what was at stake.

“We’ve got all of these advanced molecular immunological profiles being developed,” she said, “but we can’t begin to predict the breadth of immune system variability if we’re only testing small numbers of people in Palo Alto or Tel Aviv. And that’s when the big aha moment struck us that we need sites everywhere to collect that information so we can build proper computer models and a predictive understanding of the human immune system.”

IBRAHIM RAYINTAKATH

Following that meeting, HIP created a new scientific plan, with Tsang and Shen-Orr as chief science officers. The group set an ambitious goal of raising around $3 billion over the next 10 years—a goal Tsang and Metcalfe say will be met by working in conjunction with a broad network of public and private supporters. Cutbacks in federal funding for biomedical research in the US may limit funds from this traditional source, but HIP plans to work with government agencies outside the US too, with the goal of creating a comprehensive global immunological database.

HIP’s plan is to first develop a pilot version based on Tsang’s test, which it will call the Immune Monitoring Kit, to test a few thousand people in Africa, Australia, East Asia, Europe, the US, and Israel. The initial effort, according to Metcalfe, is expected to begin by the end of the year.  

After that, HIP would like to expand to some 150 sites around the world, eventually assessing about 250,000 people and collecting a vast cache of data and insights that Tsang believes will profoundly affect—even revolutionize—clinical medicine, public health, and drug development.

My immune health metric score is …

As HIP develops its pilot study to take on the world, John Tsang, for better or worse, has added one more North American Caucasian male to the small number of people who have received an IHM score to date. That would be me.

It took a long time to get my score, but Tsang didn’t leave me hanging once he pinged me the red dot. “We plotted you with other participants who are clinically quite healthy,” he texted, referring to a cluster of black dots on the grid he had sent, although he cautioned that the group I’m being compared with includes only a few dozen people. “Higher IHM means better immune health,” he wrote, referring to my 0.35 score, which he described as a number on an arbitrary scale. “As you can see, your IHM is right in the middle of a bunch of people 20 years younger.”

This was a relief, given that our immune system, like so many other bodily functions, declines with age—though obviously at different rates. Yet I also felt a certain disappointment. To be honest, I had expected more granular detail after having a million or so cells and markers tested—like perhaps some insights on why I got long covid (twice) and others didn’t. Tsang and other scientists are working on ways to extract more specific information from the tests. Still, he insists that the single score itself is a powerful tool to understand the general state of our immunomes, indicating the absence or presence of underlying health issues that might not be revealed in traditional testing.

To be honest, I had expected more granular detail after having a million or so cells and markers tested—like perhaps some insights on why I got long covid (twice) and others didn’t.

I asked Tsang what my score meant for my future. “Your score is always changing depending on what you’re exposed to and due to age,” he said, adding that the IHM is still so new that it’s hard to know exactly what the score means until researchers do more work—and until HIP can evaluate and compare thousands or hundreds of thousands of people. They also need to keep testing me over time to see how my immune system changes as it’s exposed to new perturbations and stresses.

For now, I’m left with a simple number. Though it tells me little about the detailed workings of my immune system, the good news is that it raises no red flags. My immune system, it turns out, is pretty healthy.

A few days after receiving my score from Tsang, I heard from Shen-Orr about more results. Tsang had shared my data with Shen-Orr’s lab, which had run its IMM-AGE protocol on my immunome to provide me with another score to worry about. Shen-Orr’s result put the age of my immune system at around 57—still 10 years younger than my true age.

The coming age of the immunome

Shai Shen-Orr imagines a day when people will be able to check their advanced IHM and IMM-AGE scores—or their HIP Immune Monitoring Kit score—on an app after a blood draw, the way they now check health data such as heart rate and blood pressure. Jane Metcalfe talks about linking IHM-type measurements and analyses with rising global temperatures and steamier days and nights to study how global warming might affect the immune system of, say, a newborn or a pregnant woman. “This could be plugged into other people’s models and really help us understand the effects of pollution, nutrition, or climate change on human health,” she says.

“I think [in 10 years] I’ll be able to use this much more granular understanding of what the immune system is doing at the cellular level in my patients. And hopefully we could target our therapies more directly to those cells or pathways that are contributing to disease.”

Rachel Sparks

Other clues could also be on the horizon. “At some point we’ll have IHM scores that can provide data on who will be most affected by a virus during a pandemic,” Tsang says. Maybe that will help researchers engineer an immune system response that shuts down the virus before it spreads. He says it’s possible to run a test like that now, but it remains experimental and will take years to fully develop, test for safety and accuracy, and establish standards and protocols for use as a tool of global public health. “These things take a long time,” he says. 

The same goes for bringing IHM-style tests into the exam room, so doctors like Rachel Sparks can use the results to help treat their patients. “I think in 10 years, with some effort, we really could have something useful,” says Stanford’s Mark Davis. Sparks agrees. “I think by then I’ll be able to use this much more granular understanding of what the immune system is doing at the cellular level in my patients,” she says. “And hopefully we could target our therapies more directly to those cells or pathways that are contributing to disease.”

Personally, I’ll wait for more details with a mix of impatience, curiosity, and at least a hint of concern. I wonder what more the immune circuitry deep inside me might reveal about whether I’m healthy at this very moment, or will be tomorrow, or next month, or years from now. 

David Ewing Duncan is an award-winning science writer. For more information on this story, check out his Futures column on Substack.

The three big unanswered questions about Sora

Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a “cameo” of yourself—a hyperrealistic avatar that mimics your appearance and voice—and insert other people’s cameos into your own videos (depending on what permissions they set). 

To some people who believed earnestly in OpenAI’s promise to build AI that benefits all of humanity, the app is a punchline. A former OpenAI researcher who left to build an AI-for-science startup referred to Sora as an “infinite AI tiktok slop machine.” 

That hasn’t stopped it from soaring to the top spot on Apple’s US App Store. After I downloaded the app, I quickly learned what types of videos are, at least currently, performing well: bodycam-style footage of police pulling over pets or various trademarked characters, including SpongeBob and Scooby Doo; deepfake memes of Martin Luther King Jr. talking about Xbox; and endless variations of Jesus Christ navigating our modern world. 

Just as quickly, I had a bunch of questions about what’s coming next for Sora. Here’s what I’ve learned so far.

Can it last?

OpenAI is betting that a sizable number of people will want to spend time on an app in which you can suspend your concerns about whether what you’re looking at is fake and indulge in a stream of raw AI. One reviewer put it this way: “It’s comforting because you know that everything you’re scrolling through isn’t real, where other platforms you sometimes have to guess if it’s real or fake. Here, there is no guessing, it’s all AI, all the time.”

This may sound like hell to some. But judging by Sora’s popularity, lots of people want it. 

So what’s drawing these people in? There are two explanations. One is that Sora is a flash-in-the-pan gimmick, with people lining up to gawk at what cutting-edge AI can create now (in my experience, this is interesting for about five minutes). The second, which OpenAI is betting on, is that we’re witnessing a genuine shift in what type of content can draw eyeballs, and that users will stay with Sora because it allows a level of fantastical creativity not possible in any other app. 

There are a few decisions down the pike that may shape how many people stick around: how OpenAI decides to implement ads, what limits it sets for copyrighted content (see below), and what algorithms it cooks up to decide who sees what. 

Can OpenAI afford it?

OpenAI is not profitable, but that’s not particularly strange given how Silicon Valley operates. What is peculiar, though, is that the company is investing in a platform for generating video, which is the most energy-intensive (and therefore expensive) form of AI we have. The energy it takes dwarfs the amount required to create images or answer text questions via ChatGPT.

This isn’t news to OpenAI, which has joined a half-trillion-dollar project to build data centers and new power plants. But Sora—which currently allows you to generate AI videos, for free, without limits—raises the stakes: How much will it cost the company? 

OpenAI is making moves toward monetizing things (you can now buy products directly through ChatGPT, for example). On October 3, its CEO, Sam Altman, wrote in a blog post that “we are going to have to somehow make money for video generation,” but he didn’t get into specifics. One can imagine personalized ads and more in-app purchases. 

Still, it’s concerning to imagine the mountain of emissions that might result if Sora becomes popular. Altman has described the emissions burden of a single query to ChatGPT as vanishingly small. What he has not quantified is what that figure is for a 10-second video generated by Sora. It’s only a matter of time until AI and climate researchers start demanding it. 

How many lawsuits are coming? 

Sora is awash in copyrighted and trademarked characters. It allows you to easily deepfake deceased celebrities. Its videos use copyrighted music. 

Last week, the Wall Street Journal reported that OpenAI has sent letters to copyright holders notifying them that they’ll have to opt out of the Sora platform if they don’t want their material included, which is not how these things usually work. The law on how AI companies should handle copyrighted material is far from settled, and it’d be reasonable to expect lawsuits challenging this. 

In last week’s blog post, Altman wrote that OpenAI is “hearing from a lot of rightsholders” who want more control over how their characters are used in Sora. He says that the company plans to give those parties more “granular control” over their characters. Still, “there may be some edge cases of generations that get through that shouldn’t,” he wrote.

But another issue is the ease with which you can use the cameos of real people. People can restrict who can use their cameo, but what limits will there be for what these cameos can be made to do in Sora videos? 

This is apparently already an issue OpenAI is being forced to respond to. The head of Sora, Bill Peebles, posted on October 5 that users can now restrict how their cameo can be used—preventing it from appearing in political videos or saying certain words, for example. How well will this work? Is it only a matter of time until someone’s cameo is used for something nefarious, explicit, illegal, or at least creepy, sparking a lawsuit alleging that OpenAI is responsible? 

Overall, we haven’t seen what full-scale Sora looks like yet (OpenAI is still doling out access to the app via invite codes). When we do, I think it will serve as a grim test: Can AI create videos so fine-tuned for endless engagement that they’ll outcompete “real” videos for our attention? In the end, Sora isn’t just testing OpenAI’s technology—it’s testing us, and how much of our reality we’re willing to trade for an infinite scroll of simulation.

This company is planning a lithium empire from the shores of the Great Salt Lake

BOX ELDER COUNTY, Utah – On a bright afternoon in August, the shore on the North Arm of the Great Salt Lake looks like something out of a science fiction film set in a scorching alien world. The desert sun is blinding as it reflects off the white salt that gathers and crunches underfoot like snow at the water’s edge. In a part of the lake too shallow for boats, bacteria have turned the water a Pepto-Bismol pink. The landscape all around is ringed with jagged red mountains and brown brush. The only obvious sign of people is the salt-encrusted hose running from the water’s edge to a makeshift encampment of shipping containers and trucks a few hundred feet away. 

This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will bolster the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China. Before tearing down its demonstration facility to make way for its first commercial plant, due online next year, the company invited me to be the first journalist to tour its outpost in this remote area, a roughly two-hour drive from Salt Lake City.

The startup is in a race to commercialize a new way to pull lithium from salty water, called direct lithium extraction (DLE). This approach is designed to reduce the environmental damage caused by the two most common traditional methods of producing lithium: hard-rock mining and brine evaporation. 

Australia, the world’s top producer of lithium, uses the first approach, scraping lithium-laden rocks out of the earth so they can be chemically processed into industrial-grade versions of the metal. Chile, the second-largest lithium source, uses the second: It pumps lithium-rich brine into ponds in its sun-soaked Atacama Desert, where the water evaporates, leaving behind lithium salts that can be harvested and processed elsewhere. 

a black hose crusted and partly buried with white and pink minerals winds into a pool of water
An intake hose, used to pump water to Lilac Solutions’ demonstration site, snakes into the pink-hued Great Salt Lake.
ALEXANDER KAUFMAN

The methods known collectively as DLE also start with lithium brine, but instead of water-intensive evaporation, they rely on advanced chemical or physical filtering processes that selectively separate out lithium ions. While DLE has yet to take off, its reduced need for water and land has made it a prime focus for companies and governments looking to ramp up production to meet the growing demand for lithium as electric vehicles take off and even bigger batteries are increasingly used to back up power grids. China, which processes more than two-thirds of the world’s mined lithium, is developing its own DLE to increase domestic production of the raw material. New approaches are still being researched, but nearly a dozen companies are actively looking to commercialize DLE technology now, and some industrial giants already offer basic off-the-shelf hardware. 

In August, Lilac completed its most advanced test yet of its technology, which the company says doesn’t just require far less water than traditional lithium extraction—it uses a fraction of what other DLE approaches demand. 

The company uses proprietary beads to draw lithium ions from water and says its process can extract lithium using a tenth as much water as the alumina sorbent technology that dominates the DLE industry. Lilac also highlights its all-American supply chain: a rival technology originally developed by Koch Industries, for example, uses some Chinese-made components, while Lilac’s beads are manufactured at the company’s plant in Nevada. 

Lilac says the beads are particularly well suited to extracting lithium where concentrations are low. That doesn’t mean they could be deployed just anywhere—there won’t be lithium extraction on the Hudson River anytime soon. But Lilac’s tech could offer significant advantages over what’s currently on the market. And forgoing plans to become a major producer itself could enable the company to seize a decent slice of global production by appealing to lithium mining companies looking for the best equipment, says Milo McBride, a researcher at the Carnegie Endowment for International Peace who authored a recent report on DLE. 

If everything pans out, the pilot plant Lilac builds next to prove its technology at commercial scale could significantly increase domestic supply at a moment when the nation’s largest proposed lithium project, the controversial hard-rock Thacker Pass mine in Nevada, has faced fresh uncertainty. At the beginning of October, the Trump administration renegotiated a federal loan worth more than $2 billion to secure a 5% ownership stake for the US government. 

walking path between several tall blue tanks connected by hose
The blue tank on the left filters the brine from the Great Salt Lake to remove large particles before pumping the lithium-rich water into the ion-exchange systems located in the shipping containers.
ALEXANDER KAUFMAN

Despite bipartisan government support, the prospect of opening a deep gash in an unspoiled stretch of Nevada landscape has drawn fierce opposition from conservationists and lawsuits from ranchers and Native American tribes who say the Thacker Pass project would destroy the underground freshwater reservoirs on which they depend. Water shortages in the parched West have also made it difficult to plan on using additional evaporation ponds, the other traditional way of extracting lithium. 

Lilac is not the only company in the US pushing for DLE. In California’s Salton Sea, developers such as EnergySource Minerals are looking to build a geothermal power plant to power a DLE facility pulling lithium from the inland desert lake. And energy giants such as Exxon Mobil, Chevron, and Occidental Petroleum are racing to develop an area in southwestern Arkansas called the Smackover region, where researchers with the US Geological Survey have found as much as 19 million metric tons of untapped lithium in salty underground water. In between, both geographically and strategically, is Lilac: It’s looking to develop new technology like the California companies but sell its hardware to the energy giants in Arkansas. 

The Great Salt Lake isn’t an obvious place to develop a lithium mine. The Salton Sea boasts lithium concentrations of just under 200 parts per million. Argentina, where Lilac has another test facility, has resources of above 700 parts per million. 

Here on the Great Salt Lake? “It’s 70 parts per million,” Raef Sully, Lilac’s Australia-born chief executive, tells me. “So if you had a football stadium with 45,000 seats, this would be three people.”
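Sully’s stadium analogy is easy to sanity-check with a little arithmetic, using only the numbers from his quote:

```python
# Back-of-the-envelope check of Sully's analogy:
# 70 parts per million, applied to a 45,000-seat stadium.
PPM = 70
SEATS = 45_000

people = SEATS * PPM / 1_000_000  # share of the "stadium" that is lithium
print(f"{people:.2f}")  # prints 3.15 -- "three people," as Sully says
```

By the same math, the Salton Sea’s roughly 200 parts per million would be about nine people in that stadium, and Argentina’s 700-plus would be more than 30.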

For Lilac, this is actually a feature of the location. “It’s a very, very good demonstration of the capability of our technology,” Sully says. Showing that Lilac’s hardware can extract lithium at high purity levels from a brine with low concentration, he says, proves its versatility. That wasn’t the reason Lilac selected the site, though. “Utah is a mining-friendly state,” says Elizabeth Pond, the vice president of communications. And though the lake water has low concentrations of lithium, extracting the brine simply calls for running a hose into the water, whereas other locations would require digging a well at great cost. 

When I accompanied Sully to the test site, our route followed unpaved county roads lined with fields of wild sunflowers. The facility itself is little more than an assortment of converted shipping containers and two mobile trailers, one serving as the main office and the other as a field laboratory for testing samples. It’s off the grid, relying on diesel generators that the company says will be replaced with propane units once this location is converted to a permanent facility, and that could eventually be swapped for geothermal technology tapping into a hot rock resource nearby. (Solar panels, Sully clarifies, couldn’t deliver the 24-7 power the facility will need.) But it does depend on its connection to the Great Salt Lake via that lengthy hose. 

hand holding a square of wire mesh with a clump of crystals in the center
Hardened salt and impurities are encrusted on metal mesh that keeps larger materials out of Lilac’s water intake system.
ALEXANDER KAUFMAN

Pumped uphill, the lake water passes through a series of filters to remove solids until it ends up in a vessel filled with the company’s specially designed ceramic beads, made from a patented material that attracts lithium ions from the water. Once saturated, the beads are put through an acid wash to remove the lithium. The remaining brine is then repeatedly tested and, once deemed safe to release back into the lake, pumped back down to the shore through an outgoing tube in the hose. The lithium solution, meanwhile, is stockpiled in tanks on site before being shipped off to a processing plant to be turned into battery-grade lithium carbonate, a white powder. 

“As a technology provider in the long term, if we’re going to have decades of lithium demand, they want to position their technology as something that can tap a bunch of markets,” McBride says. “To have a technology that can potentially economically recover different types of resources in different types of environments is an enticing proposition.” 

This testing ground won’t stay this way for long. During my visit, Lilac’s crew was starting to pack up the location after completing its demonstration testing. The results the company shared exclusively with me suggest a smashing success, particularly for such low-grade brine with numerous impurities: Lilac’s equipment recovered 87% of the available lithium, on average, with a purity rate of 99.97%.

The next step will be to clear the area to make way for construction of Lilac’s first permanent commercial facility at the same site. To meet the stipulations of Utah state permits for the new plant, the company had to cease all operations at the demonstration project. If everything goes according to plan, Lilac’s first US facility will begin commercial production in the second half of 2027. The company has lined up about two-thirds of its funding for the project. That could make the plant the first new commercial source of lithium in the US to come online in years, and the first DLE facility ever. 

Once it’s fully online, the project should produce 5,000 tons per year—doubling annual US production of lithium. But a full-scale plant using Lilac’s technology would produce between three and five times that amount. 

There are some potential snags. Utah regulators this year started cracking down on mineral companies pumping water from the Great Salt Lake, which is shrinking amid worsening droughts. (Lilac says it’s largely immune to the restrictions since it returns the water to the lake.) While the relatively low concentrations of lithium in the water make for a good test case, full-scale commercial production would likely prove far more economical in a place with more of the metal. 

sunflowers growing next to a dirt road
Wild sunflowers line the unpaved county roads that cut through ranching land en route to Lilac Solutions’ remote demonstration site.
ALEXANDER KAUFMAN

“The Great Salt Lake is probably the worst possible place to be doing this, because there are real challenges around pulling water from the lake,” says Ashley Zumwalt-Forbes, a mining engineer who previously served as the deputy director of battery minerals at the Department of Energy. “But if it’s just being used as a trial for the technology, that makes sense.” 

What makes Lilac stand out among its peers is that it has no plans to mine and produce actual lithium itself. Lilac wants instead to sell its technology to others. The pilot plant is intended only to test and debut its hardware. Sully tells me it’s being built under a separate limited-liability company to make a potential sale easier if it’s successful. 

It’s an unusual play in the lithium industry. Once most companies see success with their technology, “they go crazy and think they can vertically integrate and at the same time be a miner and an energy producer,” Kwasi Ampofo, the head of minerals and metals at the energy consultancy BloombergNEF, tells me. 

“Lilac is trying to be a technology vendor,” he says. “I wonder why a lot more people aren’t choosing that route.” 

If things work out the right way, Sully says, Lilac could become the vendor of choice to projects like the oil-backed sites in the Smackover and beyond. 

“We think our technology is the next generation,” he says. “And if we end up working with an Exxon or a Chevron or a Rio Tinto, we want to be the DLE technology provider in their lithium project.”

AI toys are all the rage in China—and now they’re appearing on shelves in the US too

Kids have always played with and talked to stuffed animals. But now their toys can talk back, thanks to a wave of companies that are fitting children’s playthings with chatbots and voice assistants. 

It’s a trend that has particularly taken off in China: A recent report by the Shenzhen Toy Industry Association and JD.com predicts that the sector will surpass ¥100 billion ($14 billion) by 2030, growing faster than almost any other branch of consumer AI. According to the Chinese corporation registration database Qichamao, there are over 1,500 AI toy companies operating in China as of October 2025.

One of the latest entrants to the market is a toy called BubblePal, a device the size of a Ping-Pong ball that clips onto a child’s favorite stuffed animal and makes it “talk.” The gadget comes with a smartphone app that lets parents switch between 39 characters, from Disney’s Elsa to the Chinese cartoon classic Nezha. It costs $149, and 200,000 units have been sold since it launched last summer. It’s made by the Chinese company Haivivi and runs on DeepSeek’s large language models. 

Other companies are approaching the market differently. FoloToy, another Chinese startup, allows parents to customize a bear, bunny, or cactus toy by training it to speak with their own voice and speech pattern. FoloToy reported selling more than 20,000 of its AI-equipped plush toys in the first quarter of 2025, nearly equaling its total sales for 2024, and it projects sales of 300,000 units this year. 

But Chinese AI toy companies have their sights set beyond the nation’s borders. BubblePal was launched in the US in December 2024 and is now also available in Canada and the UK. And FoloToy is now sold in more than 10 countries, including the US, UK, Canada, Brazil, Germany, and Thailand. Rui Ma, a China tech analyst at AlphaWatch.AI, says that AI devices for children make particular sense in China, where there is already a well-established market for kid-focused educational electronics—a market that does not exist to the same extent globally. FoloToy’s CEO, Kong Miaomiao, told the Chinese outlet Baijing Chuhai that outside China, his firm is still just “reaching early adopters who are curious about AI.”

China’s AI toy boom builds on decades of consumer electronics designed specifically for children. As early as the 1990s, companies such as BBK popularized devices like electronic dictionaries and “study machines,” marketed to parents as educational aids. Today’s toy-electronics hybrids go further: they read aloud, tell interactive stories, and simulate the role of a playmate.

The competition is heating up, however—US companies have also started to develop and sell AI toys. The musician Grimes helped to create Grok, a plush toy that chats with kids and adapts to their personality. Toy giant Mattel is working with OpenAI to bring conversational AI to brands like Barbie and Hot Wheels, with the first products expected to be announced later this year.

However, reviews from parents who’ve bought AI toys in China are mixed. Although many appreciate that the toys are screen-free and come with strict parental controls, some parents say their AI capabilities can be glitchy, leading children to tire of them easily. 

Penny Huang, based in Beijing, bought a BubblePal for her five-year-old daughter, who is cared for mostly by grandparents. Huang hoped that the toy could make her less lonely and reduce her constant requests to play with adults’ smartphones. But the novelty wore off quickly.

“The responses are too long and wordy. My daughter quickly loses patience,” says Huang. “It [the role-play] doesn’t feel immersive—just a voice that sometimes sounds out of place.” 

Another parent who uses BubblePal, Hongyi Li, found the voice recognition lagging: “Children’s speech is fragmented and unclear. The toy frequently interrupts my kid or misunderstands what she says. It also still requires pressing a button to interact, which can be hard for toddlers.” 

Huang recently listed her BubblePal for sale on Xianyu, a secondhand marketplace. “This is just like one of the many toys that my daughter plays with for five minutes and then gets tired of,” she says. “She wants to play with my phone more than anything else.”