This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
This week, we’re acknowledging a special birthday. It’s 100 years since EEG (electroencephalography) was first used to measure electrical activity in a person’s brain. The finding was revolutionary. It helped people understand that epilepsy was a neurological disorder as opposed to a personality trait, for one thing (yes, really).
The fundamentals of EEG have not changed much over the last century—scientists and doctors still put electrodes on people’s heads to try to work out what’s going on inside their brains. But we’ve been able to do a lot more with the information that’s collected.
We’ve been able to use EEG to learn more about how we think, remember, and solve problems. EEG has been used to diagnose brain and hearing disorders, explore how conscious a person might be, and even allow people to control devices like computers, wheelchairs, and drones.
First, a quick overview of what EEG is and how it works. EEG involves placing electrodes on the top of someone’s head, collecting electrical signals from brainwaves, and feeding these to a computer for analysis. Today’s devices often resemble swimming caps. They’re very cheap compared with other types of brain imaging technologies, such as fMRI scanners, and they’re pretty small and portable.
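For a sense of what "feeding these to a computer for analysis" can mean in practice, here is a minimal sketch (my own illustration, not something from the article) that estimates how much power an EEG trace carries in the classic alpha and beta frequency bands, using a synthetic signal in place of a real electrode recording:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power in a frequency band of one channel via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # crude periodogram
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

# Synthetic 2-second "recording": a 10 Hz alpha rhythm buried in noise.
fs = 250  # samples per second, a typical EEG sampling rate
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)   # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 13, 30)   # beta band (13-30 Hz)
print(round(alpha, 1), round(beta, 1))
```

Real pipelines add filtering, artifact rejection, and many channels, but simple band power like this underlies a lot of EEG work, from sleep staging to brain-computer interfaces.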
The first person to use EEG in people was Hans Berger, a German psychiatrist who was fascinated by the idea of telepathy. Berger developed EEG as a tool to measure “psychic energy,” and he carried out his early research—much of it on his teenage son—in secret, says Faisal Mushtaq, a cognitive neuroscientist at the University of Leeds in the UK. Berger was, and remains, a controversial figure owing to his unclear links with the Nazi regime, Mushtaq tells me.
But EEG went on to take the neuroscience world by storm. It has become a staple of neuroscience labs, where it can be used on people of all ages, even newborns. Neuroscientists use EEG to explore how babies learn and think, and even what makes them laugh. In my own reporting, I’ve covered the use of EEG to understand the phenomenon of lucid dreaming, to reveal how our memories are filed away during sleep, and to allow people to turn on the TV by thought alone.
So where do we go from here? Mushtaq, along with Pedro Valdes-Sosa at the University of Electronic Science and Technology of China in Chengdu and their colleagues, put the question to 500 people who work with EEG, including neuroscientists, clinical neurophysiologists, and brain surgeons. Specifically, with the help of ChatGPT, the team generated a list of predictions, which ranged from the very likely to the somewhat fanciful. Each of the 500 survey respondents was asked to estimate when, if at all, each prediction might pan out.
Some of the soonest breakthroughs will be in sleep analysis, according to the respondents. EEG is already used to diagnose and monitor sleep disorders—but this is set to become routine practice in the next decade. Consumer EEG is also likely to take off in the near future, potentially giving many of us the opportunity to learn more about our own brain activity and how it corresponds with our well-being. “Perhaps it’s integrated into a sort of baseball cap that you wear as you walk around, and it’s connected to your smartphone,” says Mushtaq. EEG caps like these have already been trialed on employees in China and used to monitor fatigue in truck drivers and mining workers, for example.
For the time being, EEG communication is limited to the lab or hospital, where studies focus on the technology’s potential to help people who are paralyzed, or who have disorders of consciousness. But that is likely to change in the coming years, once more clinical trials have been completed. Survey respondents think that EEG could become a primary tool of communication for individuals like these in the next 20 years or so.
At the other end of the scale is what Mushtaq calls the “more fanciful” application—the idea of using EEG to read people’s thoughts, memories, and even dreams.
Mushtaq thinks this is a “relatively crazy” prediction—one that’s a long, long way from coming to pass considering we don’t yet have a clear picture of how and where our memories are formed. But it’s not completely science fiction, and some respondents predict the technology could be with us in around 60 years.
Artificial intelligence will probably help neuroscientists squeeze more information from EEG recordings by identifying hidden patterns in brain activity. And it is already being used to turn a person’s thoughts into written words, albeit with limited accuracy. “We’re on the precipice of this AI revolution,” says Mushtaq.
These kinds of advances will raise questions over our right to mental privacy and how we can protect our thoughts. I talked this over with Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina, last year. She told me that while brain data itself is not thought, it can be used to make inferences about what a person is thinking or feeling. “The only person who has access to your brain data right now is you, and it is only analyzed in the internal software of your mind,” she said. “But once you put a device on your head … you’re immediately sharing that data with whoever the device manufacturer is, and whoever is offering the platform.”
Valdes-Sosa is optimistic about the future of EEG. Its low cost, portability, and ease of use make the technology a prime candidate for use in poor countries with limited resources, he says; he has been using it in his research since 1969. (You can see what his setup looked like in 1970 in the image below!) EEG should be used to monitor and improve brain health around the world, he says: “It’s difficult … but I think it could happen in the future.”
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
You can read the full interview with Nita Farahany, in which she describes some decidedly creepy uses of brain data, here.
Ross Compton’s heart data was used against him when he was accused of burning down his home in Ohio in 2016. Brain data could be used in a similar way. One person has already had to hand over recordings from a brain implant to law enforcement officials after being accused of assaulting a police officer. (It turned out that person was actually having a seizure at the time.) I looked at some of the other ways your brain data could be used against you in a previous edition of The Checkup.
Teeny-tiny versions of EEG caps have been used to measure electrical activity in brain organoids (clumps of neurons that are meant to represent a full brain), as my colleague Rhiannon Williams reported a couple of years ago.
EEG has also been used to create a “brain-to-brain network” that allows three people to collaborate on a game of Tetris by thought alone.
Some neuroscientists are using EEG to search for signs of consciousness in people who seem completely unresponsive. One team found such signs in a 21-year-old woman who had experienced a traumatic brain injury. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” her neurophysiologist told MIT Technology Review. After a test that involved EEG found signs of consciousness, the neurophysiologist told rehabilitation staff to “search everywhere and find her!” They did, about a month later. With physical and drug therapy, she learned to move her fingers to answer simple questions.
From around the web
Food waste is a problem. This Japanese company is fermenting it to create sustainable animal feed. In case you were wondering, the food processing plant smells like a smoothie, and the feed itself tastes like sour yogurt. (BBC Future)
The pharmaceutical company Gilead Sciences is accused of “patent hopping”—having dragged its feet to bring a safer HIV treatment to market while thousands of people took a harmful one. The company should be held accountable, argues a cofounder of PrEP4All, an advocacy organization promoting a national HIV prevention plan. (STAT)
Anti-suicide nets under San Francisco’s Golden Gate Bridge are already saving lives, perhaps by acting as a deterrent. (The San Francisco Standard)
Genetic screening of newborn babies could help identify treatable diseases early in life. Should every baby be screened as part of a national program? (Nature Medicine)
Is “race science”—which, it’s worth pointing out, is nothing but pseudoscience—on the rise, again? The far right’s references to race and IQ make it seem that way. (The Atlantic)
As part of our upcoming magazine issue celebrating 125 years of MIT Technology Review and looking ahead to the next 125, my colleague Antonio Regalado explores how the gene-editing tool CRISPR might influence the future of human evolution. (MIT Technology Review)
In 2019, an agency within the U.S. Department of Defense released a call for research projects to help the military deal with the copious amount of plastic waste generated when troops are sent to work in remote locations or disaster zones. The agency wanted a system that could convert food wrappers and water bottles, among other things, into usable products, such as fuel and rations. The system needed to be small enough to fit in a Humvee and capable of running on little energy. It also needed to harness the power of plastic-eating microbes.
“When we started this project four years ago, the ideas were there. And in theory, it made sense,” said Stephen Techtmann, a microbiologist at Michigan Technological University, who leads one of the three research groups receiving funding. Nevertheless, he said, in the beginning, the effort “felt a lot more science-fiction than really something that would work.”
In one reactor, shown here at a recent MTU demonstration, some deconstructed plastics are subject to high heat and the absence of oxygen — a process called pyrolysis.
KADEN STALEY/MICHIGAN TECHNOLOGICAL UNIVERSITY
That uncertainty was key. The Defense Advanced Research Projects Agency, or DARPA, supports high-risk, high-reward projects. This means there’s a good chance that any individual effort will end in failure. But when a project does succeed, it has the potential to be a true scientific breakthrough. “Our goal is to go from disbelief, like, ‘You’re kidding me. You want to do what?’ to ‘You know, that might be actually feasible,’” said Leonard Tender, a program manager at DARPA who is overseeing the plastic waste projects.
The problems with plastic production and disposal are well known. According to the United Nations Environment Program, the world creates about 440 million tons of plastic waste per year. Much of it ends up in landfills or in the ocean, where microplastics, plastic pellets, and plastic bags pose a threat to wildlife. Many governments and experts agree that solving the problem will require reducing production, and some countries and U.S. states have additionally introduced policies to encourage recycling.
For years, scientists have also been experimenting with various species of plastic-eating bacteria. But DARPA is taking a slightly different approach in seeking a compact and mobile solution that uses plastic to create something else entirely: food for humans.
The goal, Techtmann hastens to add, is not to feed people plastic. Rather, the hope is that the plastic-devouring microbes in his system will themselves prove fit for human consumption. While Techtmann believes most of the project will be ready in a year or two, it’s this food step that could take longer. His team is currently doing toxicity testing, and then they will submit their results to the Food and Drug Administration for review. Even if all that goes smoothly, an additional challenge awaits. There’s an ick factor, said Techtmann, “that I think would have to be overcome.”
The military isn’t the only entity working to turn microbes into nutrition. From Korea to Finland, a small number of researchers, as well as some companies, are exploring whether microorganisms might one day help feed the world’s growing population.
According to Tender, DARPA’s call for proposals was aimed at solving two problems at once. First, the agency hoped to reduce what he called supply-chain vulnerability: During war, the military needs to transport supplies to troops in remote locations, which creates a safety risk for people in the vehicle. Additionally, the agency wanted to stop using hazardous burn pits as a means of dealing with plastic waste. “Getting those waste products off of those sites responsibly is a huge lift,” Tender said.
A research engineer working on the MTU project takes a raw sample from the pyrolysis reactor, which can be upcycled into fuels and lubricants.
The Michigan Tech system begins with a mechanical shredder, which reduces the plastic to small shards that then move into a reactor, where they soak in ammonium hydroxide under high heat. Some plastics, such as PET, which is commonly used to make disposable water bottles, break down at this point. Other plastics used in military food packaging — namely polyethylene and polypropylene — are passed along to another reactor, where they are subject to much higher heat and an absence of oxygen.
Under these conditions, the polyethylene and polypropylene are converted into compounds that can be upcycled into fuels and lubricants. David Shonnard, a chemical engineer at Michigan Tech who oversaw this component of the project, has developed a startup company called Resurgent Innovation to commercialize some of the technology. (Other members of the research team, said Shonnard, are pursuing additional patents related to other parts of the system.)
After the PET has broken down in the ammonium hydroxide, the liquid is moved to another reactor, where it is consumed by a colony of microbes. Techtmann initially thought he would need to go to a highly contaminated environment to find bacteria capable of breaking down the deconstructed plastic. But as it turned out, bacteria from compost piles worked really well. This may be because the deconstructed plastic that enters the reactor has a similar molecular structure to some plant material compounds, he said. So the bacteria that would otherwise eat plants can perhaps instead draw their energy from the plastic.
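The routing described in the last few paragraphs can be summarized as a toy dispatch function (my own shorthand sketch; the stage labels are informal, not the MTU team's terminology):

```python
def route(plastic):
    """Return the processing stages a shredded plastic passes through,
    per the two-path design described above: PET is dissolved and fed to
    microbes, while polyethylene and polypropylene go to pyrolysis."""
    stages = ["shredder"]
    if plastic == "PET":
        # Breaks down in ammonium hydroxide, then feeds a microbe colony.
        stages += ["ammonium hydroxide reactor", "microbe reactor"]
    elif plastic in ("polyethylene", "polypropylene"):
        # High heat, no oxygen; yields compounds for fuels and lubricants.
        stages += ["pyrolysis reactor"]
    else:
        raise ValueError(f"plastic not modeled here: {plastic}")
    return stages

print(route("PET"))
print(route("polypropylene"))
```

The point of the two paths is that different polymers break down under different conditions, so the system sorts its outputs by chemistry rather than forcing one process to handle everything.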
Materials for the MTU project are shown at a recent demonstration. Before being placed in a reactor, plastic feedstocks (bottom row) are mechanically shredded into small pieces.
After the bacteria consume the plastic, the microbes are then dried into a powder that smells a bit like nutritional yeast and has a balance of fats, carbohydrates, and proteins, said Techtmann.
Research into edible microorganisms dates back at least 60 years, but the body of evidence is decidedly small. (One review estimated that since 1961, an average of seven papers have been published per year.) Still, researchers in the field say there are good reasons for countries to consider microbes as a food source. Among other things, they are rich in protein, wrote Sang Yup Lee, a bioengineer and senior vice president for research at Korea Advanced Institute of Science and Technology, in an email to Undark. Lee and others have noted that growing microbes requires less land and water than conventional agriculture. Therefore, they might prove to be a more sustainable source of nutrition, particularly as the human population grows.
The product from the microbe reactor is collected in a glass jar. The microbes can be dried into a powder for human consumption — once they are deemed safe by regulators.
Lee reviewed a paper describing the microbial portion of the Michigan Tech project, and said that the group’s plans are feasible. But he pointed out a significant challenge: At the moment, only certain microorganisms are considered safe to eat, namely “those we have been eating through fermented food and beverages, such as lactic acid bacteria, bacillus, some yeasts.” But these don’t degrade plastics.
Before using the plastic-eating microbes as food for humans, the research team will submit evidence to regulators indicating that the substance is safe. Joshua Pearce, an electrical engineer at Western University in Ontario, Canada, performed the initial toxicology screening, breaking the microbes down into smaller pieces, which he compared against known toxins.
“We’re pretty sure there’s nothing bad in there,” said Pearce. He added that the microbes have also been fed to C. elegans roundworms without apparent ill-effects, and the team is currently looking at how rats do when they consume the microbes over the longer term. If the rats do well, then the next step would be to submit data to the Food and Drug Administration for review.
At least a handful of companies are in various stages of commercializing new varieties of edible microbes. A Finnish startup, Solar Foods, for example, has taken a bacterium found in nature and created a powdery product with a mustard brown hue that has been approved for use in Singapore. In an email to Undark, chief experience officer Laura Sinisalo said that the company has applied for approval in the E.U. and the U.K., as well as in the U.S., where it hopes to enter the market by the end of this year.
Even if the plastic-eating microbes turn out to be safe for human consumption, Techtmann said, the public might still balk at the prospect of eating something nourished on plastic waste. For this reason, he said, this particular group of microbes might prove most useful on remote military bases or during disaster relief, where it could be consumed short-term, to help people survive.
“I think there’s a bit less of a concern about the ick factor,” said Techtmann, “if it’s really just, ‘This is going to keep me alive for another day or two.’”
A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.
His idea? Replace your body parts. All of them. Even your brain.
Jean Hébert, a new hire with the US Advanced Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains.
President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called “bold, urgent innovation” with transformative potential.
The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.
As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.
The idea has a halo of plausibility since there are already liver transplants and titanium hips, artificial corneas and substitute heart valves. The trickiest part is your brain. That ages, too, shrinking dramatically in old age. But you don’t want to swap it out for another—because it is you.
And that’s where Hébert’s research comes in. He’s been exploring ways to “progressively” replace a brain by adding bits of youthful tissue made in a lab. The process would have to be done slowly enough, in steps, that your brain could adapt, relocating memories and your self-identity.
During a visit this spring to his lab at Albert Einstein, Hébert showed MIT Technology Review how he has been carrying out initial experiments with mice, removing small sections of their brains and injecting slurries of embryonic cells. It’s a step toward proving whether such youthful tissue can survive and take over important functions.
To be sure, the strategy is not widely accepted, even among researchers in the aging field. “On the surface it sounds completely insane, but I was surprised how good a case he could make for it,” says Matthew Scholz, CEO of aging research company Oisín Biotechnologies, who met with Hébert this year.
Scholz is still skeptical though. “A new brain is not going to be a popular item,” he says. “The surgical element of it is going to be very severe, no matter how you slice it.”
Now, though, Hébert’s ideas appear to have gotten a huge endorsement from the US government. Hébert told MIT Technology Review that he had proposed a $110 million project to ARPA-H to prove his ideas in monkeys and other animals, and that the government “didn’t blink” at the figure.
ARPA-H confirmed this week that it had hired Hébert as a program manager.
The agency, modeled on DARPA, the Department of Defense organization that developed stealth fighters, gives managers unprecedented leeway in awarding contracts to develop novel technologies. Among its first programs are efforts to develop at-home cancer tests and cure blindness with eye transplants.
It may be several months before details of the new project are announced, and it’s possible that ARPA-H will establish more conventional goals like treating stroke victims and Alzheimer’s patients, whose brains are damaged, rather than the more radical idea of extreme life extension.
“If it can work, forget aging; it would be useful for all kinds of neurodegenerative disease,” says Justin Rebo, a longevity scientist and entrepreneur.
But defeating death is Hébert’s stated aim. “I was a weird kid and when I found out that we all fall apart and die, I was like, ‘Why is everybody okay with this?’ And that has pretty much guided everything I do,” he says. “I just prefer life over this slow degradation into nonexistence that biology has planned for all of us.”
Hébert, now 58, also recalls when he began thinking that the human form might not be set in stone. It was upon seeing the 1973 movie Westworld, in which the gun-slinging villain, played by Yul Brynner, turns out to be an android. “That really stuck with me,” Hébert said.
Lately, Hébert has become something of a star figure among immortalists, a fringe community devoted to never dying. That’s because he’s an established scientist who is willing to propose extreme steps to avoid death. “A lot of people want radical life extension without a radical approach. People want to take a pill, and that’s not going to happen,” says Kai Micah Mills, who runs a company, Cryopets, developing ways to deep-freeze cats and dogs for future reanimation.
The reason pharmaceuticals won’t ever stop aging, Hébert says, is that time affects all of our organs and cells and even degrades substances such as elastin, one of the molecular glues that holds our bodies together. So even if, say, gene therapy could rejuvenate the DNA inside cells, a concept some companies are exploring, Hébert believes we’re still doomed as the scaffolding around them comes undone.
One organization promoting Hébert’s ideas is the Longevity Biotech Fellowship (LBF), a self-described group of “hardcore” life extension enthusiasts, which this year published a technical roadmap for defeating aging altogether. In it, they used data from Hébert’s ARPA-H proposal to argue in favor of extending life through gradual brain replacement for elderly subjects, as well as transplanting their heads onto the bodies of “non-sentient” human clones raised to lack a functioning brain of their own—a procedure they referred to as “body transplant.”
Such a startling feat would involve several technologies that don’t yet exist, including a means to attach a transplanted head to a spinal cord. Even so, the group rates “replacement” as the most likely way to conquer death, claiming it would take only 10 years and $3.6 billion to demonstrate.
“It doesn’t require you to understand aging,” says Mark Hamalainen, co-founder of the research and education group. “That is why Jean’s work is interesting.”
Hébert’s connections to such far-out concepts (he serves as a mentor in LBF’s training sessions) could make him an edgy choice for ARPA-H, a young agency whose budget is $1.5 billion a year.
For instance, Hébert recently said on a podcast with Hamalainen that human fetuses might be used as a potential source of life-extending parts for elderly people. That would be ethical to do, Hébert said during the program, if the fetus is young enough that there “are no neurons, no sentience, and no person.” And according to a meeting agenda viewed by MIT Technology Review, Hébert was also a featured speaker at an online pitch session held last year on full “body replacement,” which included biohackers and an expert in primate cloning.
Hébert declined to describe the session, which he said was not recorded “out of respect for those who preferred discretion.” But he’s in favor of growing non-sentient human bodies. “I am in conversation with all these groups because, you know, not only is my brain slowly deteriorating, but so is the rest of my body,” says Hébert. “I’m going to need other body parts as well.”
The focus of Hébert’s own scientific work is the neocortex, the outer part of the brain that looks like a pile of extra-thick noodles and which houses most of our senses, reasoning, and memory. The neocortex is “arguably the most important part of who we are as individuals,” says Hébert, as well as “maybe the most complex structure in the world.”
There are two reasons he believes the neocortex could be replaced, albeit only slowly. The first is evidence from rare cases of benign brain tumors, like a man described in the medical literature who developed a growth the size of an orange. Yet because it grew very slowly, the man’s brain was able to adjust, shifting memories elsewhere, and his behavior and speech never seemed to change—even when the tumor was removed.
That’s proof, Hébert thinks, that replacing the neocortex little by little could be achieved “without losing the information encoded in it” such as a person’s self-identity.
The second source of hope, he says, is experiments showing that fetal-stage cells can survive, and even function, when transplanted into the brains of adults. For instance, medical tests underway are showing that young neurons can integrate into the brains of people who have epilepsy and stop their seizures.
“It was these two things together—the plastic nature of brains and the ability to add new tissue—that, to me, were like, ‘Ah, now there has got to be a way,’” says Hébert.
One challenge ahead is how to manufacture the replacement brain bits, or what Hébert has called “facsimiles” of neocortical tissue. During a visit to his lab at Albert Einstein, Hébert described plans to manually assemble chunks of youthful brain tissue using stem cells. These parts, he says, would not be fully developed, but instead be similar to what’s found in a still-developing fetal brain. That way, upon transplant, they’d be able to finish maturing, integrate into your brain, and be “ready to absorb and learn your information.”
To design the youthful bits of neocortex, Hébert has been studying brains of aborted human fetuses 5 to 8 weeks of age. He’s been measuring what cells are present, and in what numbers and locations, to try to guide the manufacture of similar structures in the lab.
“What we’re engineering is a fetal-like neocortical tissue that has all the cell types and structure needed to develop into normal tissue on its own,” says Hébert.
Part of the work has been carried out by a startup company, BE Therapeutics (it stands for Brain Engineering), located in a suite on Einstein’s campus and funded by Apollo Health Ventures and VitaDAO, with contributions from a New York State development fund. The company had only two employees when MIT Technology Review visited this spring, and its future is uncertain, says Hébert, now that he’s joining ARPA-H and closing his lab at Einstein.
Because it’s often challenging to manufacture even a single cell type from stem cells, making a facsimile of the neocortex involving a dozen cell types isn’t an easy project. In fact, it’s just one of several scientific problems standing between you and a younger brain, some of which might never have practical solutions. “There is a saying in engineering. You are allowed one miracle, but if you need more than one, find another plan,” says Scholz.
Maybe the crucial unknown is whether young bits of neocortex will ever correctly function inside an elderly person’s brain, for example by establishing connections or storing and sending electrochemical information. Despite evidence that the brain can incorporate individual transplanted cells, that’s never been robustly proven for larger bits of tissue, says Rusty Gage, a biologist at the Salk Institute in La Jolla, California, who is considered a pioneer of neural transplants. He says researchers have for years tried to transplant larger parts of fetal animal brains into adult animals, but with inconclusive results. “If it worked, we’d all be doing more of it,” he says.
The problem, says Gage, isn’t whether the tissue can survive, but whether it can participate in the workings of an existing brain. “I am not dissing his hypothesis. But that’s all it is,” says Gage. “Yes, fetal or embryonic tissue can mature in the adult brain. But whether it replaces the function of the dysfunctional area is an experiment he needs to do, if he wants to convince the world he has actually replaced an aged section with a new section.”
In his new role at ARPA-H, Hébert is expected to have a large budget to fund scientists to try to prove his ideas can work. He agrees it won’t be easy. “We’re, you know, a couple steps away from reversing brain aging,” says Hébert. “A couple of big steps away, I should say.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
This week I came across research that suggests aging hits us in waves. You might feel like you’re on a slow, gradual decline, but, at the molecular level, you’re likely to be hit by two waves of changes, according to the scientists behind the work. The first one comes in your 40s. Eek.
For the study, Michael Snyder at Stanford University and his colleagues collected a vast amount of biological data from 108 volunteers aged 25 to 75, all of whom were living in California. Their approach was to gather as much information as they could and look for age-related patterns afterward.
This approach can lead to some startling revelations, including the one about the impacts of age on 40-year-olds (who, I was horrified to learn this week, are generally considered “middle-aged”). It can help us answer some big questions about aging, and even potentially help us find drugs to counter some of the most unpleasant aspects of the process.
But it’s not as simple as it sounds. And midlife needn’t involve falling off a cliff in terms of your well-being. Let’s explore why.
First, the study, which was published in the journal Nature Aging on August 14. Snyder and his colleagues collected a real trove of data on their volunteers, including on gene expression, proteins, metabolites, and various other chemical markers. The team also swabbed volunteers’ skin, stool, mouths, and noses to get an idea of the microbial communities that might be living there.
Each volunteer gave up these samples every few months for a median period of 1.7 years, and the team ended up with a total of 5,405 samples, which included over 135,000 biological features. “The idea is to get a very complete picture of people’s health,” says Snyder.
When he and his colleagues analyzed the data, they found that around 7% of the molecules and microbes measured changed gradually over time, in a linear way. The other 81% changed at specific life stages. Two of those stages seem to be particularly important: one at around the age of 44, and another around the age of 60.
Some of the dramatic changes at age 60 seem to be linked to kidney and heart function, and diseases like atherosclerosis, which narrows the arteries. That makes sense, given that our risks of developing cardiovascular diseases increase dramatically as we age—around 40% of 40- to 59-year-olds have such disorders, and this figure rises to 75% for 60- to 79-year-olds.
But the changes that occur around the age of 40 came as a surprise to Snyder. He says that, on reflection, they make intuitive sense. Many of us start to feel a bit creakier once we hit 40, and it can take longer to recover from injuries, for example.
Other changes suggest that our ability to metabolize lipids and alcohol shifts when we reach our 40s, though it’s hard to say why, for a few reasons.
First, it’s not clear if a change in alcohol metabolism, for example, means that we are less able to break down alcohol, or if people are just consuming less of it when they’re older.
This gets us to a central question about aging: Is it an inbuilt program that sets us on a course of deterioration, or is it merely a consequence of living?
We don’t have an answer to that one, yet. It’s probably a combination of both. Our bodies are exposed to various environmental stressors over time. But also, as our cells age, they become less able to divide and to clear out the molecular garbage they accumulate.
It’s also hard to tell what’s happening in this study, because the research team didn’t measure more physiological markers of aging, such as muscle strength or frailty, says Colin Selman, a biogerontologist at the University of Glasgow in Scotland.
There’s another, perhaps less scientific, question that comes to mind. How worried should we be about these kinds of molecular changes? I’m approaching 40—should I panic? I asked Sara Hägg, who studies the molecular epidemiology of aging at the Karolinska Institute in Stockholm, Sweden. “No,” was her immediate answer.
While Snyder’s team collected a vast amount of data, it was from a relatively small number of people over a relatively short period of time. None of them were tracked for the two or three decades you’d need to see the two waves of molecular changes occur in a person.
“This is an observational study, and they compare different people,” Hägg told me. “There is absolutely no evidence that this is going to happen to you.” After all, there’s a lot that can happen in a person’s life over 20 or 30 years. They might take up a sport. They might quit smoking or stop eating meat.
However, the findings do support the idea that aging is not a linear process.
“People have always suggested that you’re on this decline in your life from [around the age of] 40, depressingly,” says Selman. “But it’s not quite as simple as that.”
Snyder hopes that studies like his will help reveal potential new targets for therapies that help counteract some of the harmful molecular shifts associated with aging. “People’s healthspan is 11 to 15 years shorter than their lifespan,” he says. “Ideally you’d want to live for as long as possible [in good health], and then die.”
We don’t have any such drugs yet. For now, it all comes down to the age-old advice about eating well, sleeping well, getting enough exercise, and avoiding the big no-nos like smoking and alcohol.
“A little bit of alcohol is actually quite nice,” Selman countered. He told me about an experience he’d had once at a conference on aging. Some of the attendees were members of a society that practiced caloric restriction—the idea being that cutting your calories can boost your lifespan (we don’t yet know if this works for people). “There was a big banquet… and these people all had little scales, and were weighing their salads on the scales,” he told me. “To me, that seems like a rather miserable way to live your life.”
I’m all for finding balance between healthy lifestyle choices and those that bring me joy. And it’s worth remembering that no amount of deprivation is going to radically extend our lifespans. As Selman puts it: “We can do certain things, but ultimately, when your time’s up, your time’s up.”
Now read the rest of the Checkup
Read more from MIT Technology Review’s archive
We don’t yet have a drug that targets aging. But that hasn’t stopped a bunch of longevity clinics from cropping up, offering a range of purported healthspan-extending services for the mega-rich. Now, they’re on a quest to legitimize longevity medicine.
There are plenty of potential rejuvenation strategies being explored right now. But the one that has received some of the most attention—and the most investment—is cellular reprogramming. My colleague Antonio Regalado looked at the promise of the field in this feature.
Scientists are working on new ways to measure how old a person is. Not just the number of birthdays they’ve had, but how aged or close to death they are. I took one of these biological aging tests. And I wasn’t all that pleased with the result.
Is there a limit to human life? Is old age a disease? Find out in the Mortality issue of MIT Technology Review’s magazine.
You can of course read all of these stories and many more on our new app, which can be downloaded here (for Android users) or here (for Apple users).
From around the web
Mpox, the disease that has been surging in the Democratic Republic of the Congo and nearby countries, now constitutes a public health emergency of international concern, according to the World Health Organization.
“The detection and rapid spread of a new clade [subgroup] of mpox in Eastern DRC, its detection in neighboring countries that had not previously reported mpox, and the potential for further spread within Africa and beyond is very worrying,” WHO director general Tedros Adhanom Ghebreyesus said in a briefing shared on X. “It’s clear that a coordinated international response is essential to stop these outbreaks and save lives.” (WHO)
Prosthetic limbs are often branded with company logos. For users of the technology, it can feel like a tattoo you didn’t ask for. (The Atlantic)
A testing facility in India submitted fraudulent data for more than 400 drugs to the FDA. But these drugs have not been withdrawn from the US market. That needs to be remedied, says the founder and president of a nonprofit focused on researching drug side effects. (STAT)
Antibiotics can impact our gut microbiomes. But the antibiotics given to people who undergo c-sections don’t have much of an impact on the baby’s microbiome. The way the baby is fed seems to be much more influential. (Cell Host & Microbe)
When unexpected infectious diseases show up in people, it’s not just physicians that are crucial. Veterinarian “disease detectives” can play a vital role in tracking how infections pass from animals to people, and the other way around. (New Yorker)
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
What does a thought look like? We can think of thoughts as the product of signals shared among some of the billions of neurons in our brains. Various chemicals are involved, but it really comes down to electrical activity. We can measure that activity and watch it back.
Earlier this week, I caught up with Ben Rapoport, the cofounder and chief science officer of Precision Neuroscience, a company doing just that. It is developing brain-computer interfaces that Rapoport hopes will one day help paralyzed people control computers and, as he puts it, “have a desk job.”
Rapoport and his colleagues have developed thin, flexible electrode arrays that can be slipped under the skull through a tiny incision. Once inside, they can sit on a person’s brain, collecting signals from neurons buzzing away beneath. So far, 17 people have had these electrodes placed onto their brains. And Rapoport has been able to capture how their brains form thoughts. He even has videos. (Keep reading to see one for yourself, below.)
Brain electrodes have been around for a while and are often used to treat disorders such as Parkinson’s disease and some severe cases of epilepsy. Those devices tend to involve sticking electrodes deep inside the brain to access regions involved in those disorders.
Brain-machine interfaces are newer. In the last couple of decades, neuroscientists and engineers have made significant progress in developing technologies that allow them to listen in on brain activity and use brain data to allow people to control computers and prosthetic limbs by thought alone.
The technology isn’t commonplace yet, and early versions could only be used in a lab setting. Scientists like Rapoport are working on new devices that are more effective, less invasive, and more practical. He and his colleagues have developed a miniature device that fits 1,024 tiny electrodes onto a sliver of ribbon-like film that’s just 20 microns thick—around a third of the width of a human eyelash.
The vast majority of these electrodes are designed to pick up brain activity. The device itself is designed to be powered by a rechargeable battery implanted under the skin in the chest, like a pacemaker. And from there, data could be transmitted wirelessly to a computer outside the body.
Unlike other needle-like electrodes that penetrate brain tissue, Rapoport says his electrode array “doesn’t damage the brain at all.” Instead of being inserted into brain tissue, the electrode arrays are arranged on a thin, flexible film, fed through a slit in the skull, and placed on the surface of the brain.
From there, they can record what the brain is doing when the person thinks. In one case, Rapoport’s team inserted their electrode array into the skull of a man who was undergoing brain surgery to treat a disease. He was kept awake during his operation so that surgeons could make sure they weren’t damaging any vital regions of his brain. And all the while, the electrodes were picking up the electrical signals from his neurons.
This is what the activity looked like:
“This is basically the brain thinking,” says Rapoport. “You’re seeing the physical manifestation of thought.”
In this video, which I’ve converted to a GIF, you can see the pattern of electrical activity in the man’s brain as he recites numbers. Each dot represents the voltage sensed by an electrode on the array on the man’s brain, over a region involved in speech. The reds and oranges represent higher voltages, while the blues and purples represent lower ones. The video has been slowed down 20-fold, because “thoughts happen faster than the eye can see,” says Rapoport.
This approach allows neuroscientists to visualize what happens in the brain when we speak—and when we plan to speak. “We can decode his intention to say a word even before he says it,” says Rapoport. That’s important—scientists hope such technologies will interpret these kinds of planning signals to help some individuals communicate.
For the time being, Rapoport and his colleagues are only testing their electrodes in volunteers who are already scheduled to have brain surgery. The electrodes are implanted, tested, and removed during a planned operation. The company announced in May that the team had broken a record for the greatest number of electrodes placed on a human brain at any one time—a whopping 4,096.
Rapoport hopes the US Food and Drug Administration will approve his device in the coming months. “That will unlock … what we hope will be a new standard of care,” he says.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
Precision Neuroscience is one of a handful of companies leading the search for a new brain-computer interface. Cassandra Willyard covered the key players in a recent edition of the Checkup.
Brain implants can do more than treat disease or aid communication. They can change a person’s sense of self. This was the case for Rita Leggett, who was devastated when her implant was removed against her will. I explored whether experiences like these should be considered a breach of human rights in a piece published last year.
Ian Burkhart, who was paralyzed as a result of a diving accident, received a brain implant when he was 24 years old. Burkhart learned to use the implant to control a robotic arm and even play Guitar Hero. But funding issues and an infection meant the implant had to be removed. “When I first had my spinal cord injury, everyone said: ‘You’re never going to be able to move anything from your shoulders down again,’” Burkhart told me last year. “I was able to restore that function, and then lose it again. That was really tough.”
Do you share DNA with Ludwig van Beethoven, or perhaps a Viking? Tests can reveal genetic links, but they are not always clear, and the connections are not always meaningful or informative. (Nature)
This week marks 79 years since the United States dropped atomic bombs on Hiroshima and Nagasaki. Survivors share their stories of what it’s like to live with the trauma, stigma, and survivor’s guilt caused by the bombs—and why weapons like these must never be used again. (New York Times)
At least 19 Olympic athletes have tested positive for covid-19 in the past two weeks. The rules allow them to compete regardless. (Scientific American)
Honey contains a treasure trove of biological information, including details about the plants that supplied the pollen and the animals and insects in the environment. It can even tell you something about the bees’ “micro-bee-ota.” (New Scientist)
Several million people were listening in February when Joe Rogan falsely declared that “party drugs” were an “important factor in AIDS.” His guest on The Joe Rogan Experience, the former evolutionary biology professor turned contrarian podcaster Bret Weinstein, agreed with him: The “evidence” that AIDS is not caused by HIV is, he said, “surprisingly compelling.”
During the show, Rogan also asserted that AZT, the earliest drug used in the treatment of AIDS, killed people “quicker” than the disease itself—another claim that’s been widely repeated even though it is just as untrue.
Speaking to the biggest podcast audience in the world, the two men were promoting dangerous and false ideas—ideas that were in fact debunked and thoroughly disproved decades ago.
But it wasn’t just them. A few months later, the New York Jets quarterback Aaron Rodgers, four-time winner of the NFL’s MVP award, alleged that Anthony Fauci, who led the National Institute of Allergy and Infectious Diseases for 38 years, had orchestrated the government’s response to the AIDS crisis for personal gain and to promote AZT, which Rodgers also depicted as “killing people.” Though he was speaking to a much smaller audience, on a podcast hosted by a jujitsu fighter turned conspiracy theorist, a clip of the interview was re-shared on X, where it’s been viewed more than 13 million times.
Rodgers was repeating claims that appear in The Real Anthony Fauci, a 2021 book by Robert F. Kennedy Jr.—a work that has renewed relevance as the anti-vaccine activist makes a long-shot but far-from-inconsequential run for the White House. The book, which depicts the elderly immunologist as a Machiavellian figure who used both the AIDS and covid pandemics for his own ends, has reportedly sold 1.3 million copies across all formats.
These comments and others like them add up to a small but unmistakable resurgence in AIDS denialism—a collection of false theories arguing either that HIV doesn’t cause AIDS or that there’s no such thing as HIV at all.
The ideas here were initially promoted by a cadre of scientists from unrelated fields, as well as many science-adjacent figures and self-proclaimed investigative journalists, back in the 1980s and ’90s. But as more and more evidence stacked up against them, and as more people with HIV and AIDS started living longer lives thanks to effective new treatments, their claims largely fell out of favor.
At least until the coronavirus arrived.
The covid-19 pandemic brought together people with a mistrust of institutions to rally and march against masks and vaccines.
Following the pandemic, a renewed suspicion of public health figures and agencies is giving new life to ideas that had long ago been pushed to the margins. And the impact is far from confined to the dark corners of the web. Arguments spreading rapidly online are reaching millions of people—and, in turn, potentially putting individual patients at risk. The fear is that AIDS denialism could once again spread in the way that covid denialism has: that people will politicize the illness, call its most effective and evidence-based treatments into question, and encourage extremist politicians to adopt these views as the basis for policy. And if it continues to build, this movement could threaten the bedrock knowledge about germs and viruses that underpins modern health care and disease prevention, creating dangerous confusion among the public at a deeply inopportune time.
Before they promoted bunk information on HIV and AIDS, Rogan, Kennedy, and Rodgers were spreading fringe theories about the coronavirus’s origins, as well as loudly questioning basic public health measures like vaccines, social distancing, and masks. All three men have also boosted the false idea that ivermectin, an antiparasitic drug, is a treatment or preventative for covid that is being kept from the American public for sinister reasons at the behest of Big Pharma.
“The AIDS denialists have come from the covid denialists,” says Tara Smith, an infectious-disease epidemiologist and a professor at Kent State University’s College of Public Health, who tracks conspiratorial narratives about illness and public health. She saw them emerging first in social media groups driven by covid skepticism, with people asking, as she puts it, “If covid doesn’t exist, what else have we been lied to about?”
The covid pandemic was particularly fertile ground for such suspicion, notes Seth Kalichman, a professor of psychology at the University of Connecticut who has studied AIDS denialism, because “unlike HIV, covid impacted everybody, and the policy decisions that were made around covid impacted everybody.”
“The covid phenomenon—not the pandemic but the phenomenon around it—created this opportunity for AIDS denialists to reemerge,” he adds. Those denialists include Peter Duesberg, the now-infamous Berkeley biologist who first promoted the idea that AIDS is caused by pharmaceuticals or recreational drugs, as well as Celia Farber and Rebecca V. Culshaw, an independent journalist and researcher, respectively, who have both written critically about what they see as the “official” narrative of HIV/AIDS. (Farber tells MIT Technology Review that she uses the term “AIDS dissent” rather than “denialism”: “‘Denialism’ is a religious and vituperative word.”)
In addition to the renewed skepticism toward public health institutions, the reanimated AIDS denialist movement is being supercharged by technological tools that didn’t exist the first time around: platforms with gigantic reach like X, Substack, Amazon, and Spotify, as well as newer ones that don’t have specific moderation policies around medical misinformation, like Rumble, Gab, and Telegram.
Spotify, for one, has largely declined to curb or moderate Rogan in any meaningful way, while also paying him an eye-watering amount of money; the company inked a $250 million renewal deal with him in February, just weeks before he and Weinstein made their false remarks about AIDS. Amazon, meanwhile, is currently offering Duesberg’s long-out-of-print 1996 book Inventing AIDS for free with a trial of its Audible program, and three of Culshaw’s books are available for free with either an Audible or Kindle Unlimited trial. And Farber has a Substack with more than 28,000 followers.
Peter Duesberg, now 87 years old and no longer actively speaking publicly, has seen his decades-old theories about AIDS find new life online.
(Spotify, Substack, Rumble, and Telegram did not respond to requests for comment, while Meta and Amazon confirmed receipt of a request for comment but did not answer questions, and X’s press office provided only an auto-response. An email to Gab’s press address was returned as undeliverable.)
While this wave of AIDS denialism doesn’t currently have the reach and influence that the movement had in the past, it still has potentially serious consequences for patients as well as the general public. If these ideas gain enough traction, particularly among elected officials, they could endanger funding for AIDS research and treatments. Public health researchers are still haunted by the period in the 1990s and early 2000s when AIDS denial became official policy in South Africa; one analysis estimates that between just 2000 and 2005, more than 300,000 people died prematurely as a result of the country’s bad public health policies. On an individual level, there could also be devastating results if people with HIV are discouraged from seeking treatment or from trying to prevent the virus’s spread by taking medication or using condoms; a 2010 study showed that a belief in denialist rhetoric among people with HIV is associated with medication refusal and poor health outcomes, including increased incidence of hospitalization, HIV-related symptoms, and detectable viral loads.
Above all, the revival of this particular slice of medical misinformation is another troubling sign for the ways that tech platforms can deepen distrust in our public health system. The same tech-savvy denialist playbook is already being deployed in the wider “health freedom” space to create confusion and suspicion around other serious diseases, like measles, and to challenge more foundational claims about the science of viruses—that is, to posit that viruses don’t exist at all, or are harmless and can’t cause illness. (A Gab account solely dedicated to the idea that all viruses are hoaxes has more than 3,000 followers.)
As Smith puts it, “We are not in a good place regarding [trust in] all of our public health institutions right now.”
Capitalizing on confusion
One reason AIDS and covid denialists have been able to build similar and interlocking movements that inveigh against government science is that the early days of the two viruses were markedly similar: full of confusion, mystery, and skepticism.
In 1981, James Curran served on a task force investigating the first five known cases of what was then a novel disease. “There were a lot of theories about what caused it,” says Curran, an epidemiologist who is now a dean emeritus at Emory University’s Rollins School of Public Health and previously spent 25 years working at the US Centers for Disease Control and Prevention, serving ultimately as the assistant surgeon general. He and his colleagues had all previously studied sexually transmitted infections that affected gay men and people who injected drugs. With that context, the researchers saw the early patterns of the disease as “indicative of a likely sexually transmissible agent.”
Not everyone agreed, Curran says: “Other people saw poppers or other drugs or accumulation of semen or environmental factors. Some of these things came from the backgrounds that people had, or they came from the simple denial that it could possibly be a new virus.”
The first wave of contrarian ideas about AIDS, then, was less true “denialism” and more the understandable confusion and differences of opinion that can emerge around a new disease. Yet as time went on, “the death rates were increasing dramatically,” says Lindsay Zafir, a distinguished lecturer in anthropology and interdisciplinary programs at the City College of New York who wrote her dissertation on the emergence and evolution of AIDS denialism. “Some people started to wonder whether scientists actually knew what they were doing.”
This led to the emergence of a wider round of more deliberate AIDS disinformation, which was picked up by mainstream publications. In the late 1980s, Spin magazine printed a series of stories that platformed denialist ideas and figures, including interviews with Duesberg, who’d already gained attention for his arguments that AIDS was caused by pharmaceutical drugs and not by HIV. The magazine also published pieces by Farber, a journalist who has described herself becoming progressively more sympathetic to the AIDS denialist cause after interviewing Duesberg. In 1991, the Los Angeles Times published a piece that asked whether Duesberg was “a hero or a heretic” for his “controversial” arguments about AIDS.
The tides began to turn only in 1995, when the first generation of antiretroviral therapies emerged to treat AIDS and deaths finally, mercifully, began to drop across the United States.
Still, the denialist movement continued to grow, with next-generation leaders who were, like Duesberg and Farber, publicity savvy and (perhaps unsurprisingly) quick adopters of the earliest versions of the internet. This notably included Christine Maggiore, who was HIV-positive herself and who founded the group Alive & Well AIDS Alternatives. Long before social media, she and her peers used the internet to foster community, offering links on their websites to hotlines and in-person meetings.
Kent State’s Smith and Steven P. Novella, now a clinical neurologist and associate professor at Yale, wrote a paper in 2007 about how the internet had become a powerful force for AIDS denialism. It was “a fertile and unrefereed medium” for denialist ideas and one of just a few common tools to make counterarguments in the face of the widespread scientific agreement on AIDS that dominated medical literature.
Around this time, Farber wrote another big piece, this time in Harper’s, on the so-called AIDS dissidents, which in turn generated a firestorm of criticism and corrections and revived the debate for a new era of readers.
“It’s hard to quantify how much influence those types of people had,” Smith says. She points out that Maggiore was even promoted by Nate Mendel of the Foo Fighters. “It’s hard to know how many people followed her advice,” Smith emphasizes. “But certainly a lot of people heard it.”
Former South African president Thabo Mbeki enacted AIDS denialism as part of his public policy, denying patients in the country access to antiretroviral drugs.
In a devastating turn, one of those people was Thabo Mbeki, who became the second democratically elected president of South Africa in 1999. Mbeki was skeptical of antiretrovirals to treat AIDS, and as the Lancet points out, both Mbeki and his health minister promoted the work of Western AIDS skeptics. In the summer of 2000, Mbeki hosted a presidential advisory panel that included denialists like Duesberg; Farber tells MIT Technology Review that she was also present. Just a few weeks later, the South African president met privately with Maggiore.
Curran, the former CDC official, visited South Africa during this era and remembers how officials “said they would throw doctors in jail” if they provided AZT to pregnant women.
“Mbeki famously said, Your scientist says this, mine says that—which scientist is right?” Kalichman says. “When that confusion exists, that’s the real vulnerability.”
Mbeki left office in 2008. And while AIDS denialism didn’t exactly disappear by the 2010s, it did largely recede into relative obscurity, beaten back by clear evidence that antiretroviral drugs were working.
There were also meticulous fact-based campaigns from groups like AIDSTruth, which was founded following Farber’s 2006 Harper’s article. This group gained traction online, systematically debunking arguments from denialists on a bare-bones website and using hyperlinks to guide people quickly to science-based material on each point—a somewhat novel approach at the time.
By 2015, the decline of denialism was so thorough that AIDSTruth stopped active work, believing its mission was accomplished. The group wrote, “We have long since reached the point where we—the people who have in one way or another been involved in running this website—believe that AIDS denialism died as an effective political force.”
Of course, it didn’t take too long to see the work was far from complete.
Growing the “beehive”
Kalichman, from the University of Connecticut, has compared the world of AIDS denial to a “beehive”: It looks like a chaotic mix of people pursuing bad science and debunked ideas for their own particular ends. But if you look closer, what appears to be a swarm is actually “very well organized.” The modern, post-covid variety is no different.
The new wave of denialists often don’t count their theories on AIDS as their sole pseudoscientific interest; rather, it’s part of a whole bouquet of bad ideas.
Robert F. Kennedy Jr. was vocal in his support of anti-vaccine causes long before his current bid for president.
These individuals seem to have arrived at revisionist and denialist ideas through a broad-based skepticism of public health, a rejection of what they see as Big Pharma’s meddling, and a particular, visceral disgust toward Fauci. Kennedy, specifically, attributes almost superhuman powers to Fauci, claiming in one 2022 tweet—referencing the Mafia code of silence—that he “purchased omertà among virologists globally with a total of $37 billion in annual payoffs in research grants.” The tweet has been liked more than 26,000 times.
The new guard has also been comfortable reviving the oldest debunked ideas. Both Rogan and Kennedy, for instance, have claimed that poppers could be the cause of AIDS. “A hundred percent of the people who died in the first thousand [with] AIDS were people who were addicted to poppers, which are known to cause Kaposi sarcoma in rats,” Kennedy told an audience in a speech whose date isn’t clear; a video of the remarks has recently been circulating widely. “And they were people who were part of a gay lifestyle where they were burning the candle at both ends.” (Kennedy’s presidential campaign did not respond to a request for comment.)
Some have even given fresh life to the old guard. Duesberg is now 87 and is no longer active in the public sphere (and his wife told MIT Technology Review that his health did not allow him to sit for an interview or answer questions via email). But the basic shape of his arguments—obfuscating the causes of AIDS, the treatments, and the nature of the disease itself—continues to live on. Rogan actually hosted Duesberg on his podcast in 2012, a decision that generated relatively few headlines at the time—likely because Rogan hadn’t yet become so popular and America’s crisis of disinformation and medical distrust was less pronounced. Rogan and Weinstein praised Duesberg in their recent conversation, asserting that he’d been “demonized” for his arguments about AZT. (Weinstein did not respond to a request for comment. Several attempts to reach Spotify through multiple channels did not get responses. Attempts to reach Rogan through Spotify and one of his producers also did not receive responses.)
Before Rodgers spoke falsely about AIDS and AZT, he and the Green Bay Packers were fined for conduct in violation of the NFL’s covid policies.
SARAH STIER/GETTY IMAGES
The support seems to largely go both ways. Culshaw has written that even critical stories about Rodgers are helpful to the cause: “The more hit pieces are published, the more the average citizen—especially the average post-covid citizen—will become curious and begin to look into the issue. And once you’ve looked into it far enough, you cannot unsee what you’ve seen.”
Culshaw and Farber have also been empowered by the new ability to command their own megaphones online. Farber, for instance, is now primarily active on Substack, with a newsletter that is a mix of HIV/AIDS content and general conspiracy theorizing. Her current work refers to HIV/AIDS as a “PSY OP” (caps hers); she presents herself as a soldier in a long war against government propaganda, one in which covid is the latest salvo.
Farber says she sees her arguments gaining ground. “What’s happening now is that the general public are learning about the buried history,” she writes to MIT Technology Review. “People are very interested in the HIV ‘thing’ these days, to my eternal astonishment,” she adds, writing that Kennedy’s book “changed everything.” She says, “I answered his questions about HIV war history and was included and quoted in the book. This led to a chance for me to once again be a professional writer, on Substack.”
Culshaw (who now uses the name Culshaw Smith) strikes a similar tone, though she is a less prominent figure. A mathematician and self-styled HIV researcher, she published her first book in 2007; it claimed to use mathematical evidence to prove that HIV doesn’t cause AIDS.
In 2023 she published another AIDS denial book, this one with Skyhorse, a press that traffics heavily in conspiracy theories and pseudoscience, and which published Kennedy’s book on Fauci. She gained some level of notoriety when the book was distributed by publishing giant Simon & Schuster, leading to protests outside its headquarters from the LGBT rights advocacy groups GLAAD and ACT UP NY. Though Simon & Schuster appears to continue to distribute the book, that pushback has provided the basis for her new act: life after “cancellation.” She produced a short memoir last year that describes the furor—a history Culshaw presents as a dramatic moment in the suppression of AIDS truth. This is one of the books now available for free on Amazon through a Kindle Unlimited trial. (Simon & Schuster did not respond to a request for comment. Culshaw did not respond to a request for comment sent through Substack.)
The argument that she’s been “canceled” by the scientific establishment holds tremendous sway with disease denialists online, who are always eager to seize on cases where they perceive the government to be repressing and censoring “alternative” views. In May, Chronicles, an online right-wing magazine, approvingly tied Rodgers to the broader web of AIDS denialists, including Culshaw, Duesberg, and others—holding them up as heroic figures who’d been unfairly dismissed as “conspiracy theorists” and who’d done well to challenge medical expertise that the magazine denigrated as “white coat supremacy.” (A request for comment for Rodgers through a representative did not receive a response.)
Platforming denial
AIDS denialism and revisionism are resurging in the midst of bitter ongoing arguments over what kinds of things should be allowed to exist on online platforms. Spotify, for instance, has clear rules that prohibit “asserting that AIDS, COVID-19, cancer or other serious life threatening diseases are a hoax or not real,” and specific rules against “dangerous and deceptive content” that are both thoughtful and clearly articulated. Yet Rogan’s program seems to be exempt from these rules or manages to skirt them; after all, he and Weinstein did not suggest that AIDS isn’t real, per se, but instead promoted debunked ideas about its cause.
While Amazon and Meta have misinformation policies of some kind, they clearly do not prevent AIDS denial books from being sold or denialist arguments from being shared. (Amazon also has content guidelines for books that ban obvious things like hate speech, pornography, or the promotion of terrorism, but they do not specifically mention medical misinformation.)
The difficulty of policing false or unproven health information across all these different platforms, in all the forms it can take, is immense. In 2019, for instance, Facebook allowed misleading ads from personal injury lawyers claiming that PrEP, or pre-exposure prophylaxis drugs, can cause bone and kidney damage; it took action only after a sustained outcry from LGBT groups.
In a sign of how entrenched some of these things can be, there’s a YouTube channel originally called Rethinking AIDS—now known as Question Everything—that has been active for 14 years, sharing interviews with denialists. The channel has 16,000 subscribers, and its most popular videos have upwards of half a million views. Another page, devoted to a conspiratorial documentary about AIDS, has been active since 2009, and its most popular video has nearly 300,000 views. (A YouTube spokesperson tells MIT Technology Review it has “developed our approach to medical misinformation over many years, in close alignment with health authorities around the world” and that it prominently features “content and information from high-quality health sources … in search results and recommendations related to HIV/AIDS.”)
Meanwhile, on platforms like the Elon Musk–owned X, formerly known as Twitter, there is little moderation happening at all. The company removed its ban on covid misinformation in 2022, to almost immediate effect: misinformation and propaganda of all kinds have flourished, including HIV/AIDS denial. One widely circulated video depicts the late biochemist Kary Mullis talking about the moment he first “really questioned” the predominant HIV narrative.
Complementing these more established spaces are newer, more niche platforms like Rumble and Telegram, which don’t have any moderation policies to address medical misinformation and proudly tout a commitment to free speech that means they do very little about any kind of misinformation at all, no matter how noxious.
Joe Rogan’s podcast, with an audience of 14.5 million just on Spotify, has hosted a number of guests expressing anti-vaccine sentiments.
PHOTO ILLUSTRATION BY CINDY ORD/GETTY IMAGES
Telegram, which is one of the most popular messaging apps in Russia, does have a general “verified information” policy. The statement of this policy links to a post by its CEO, Pavel Durov, that says “spreading the truth will always be a more efficient strategy than engaging in censorship.” Discussions of HIV among Telegram’s current and most active misinformation peddlers often compare it to covid, characterizing both as “manufactured” viruses. One widely shared post by the anti-vaccine activist Sherri Tenpenny claims that covid-19 was created by “splicing” HIV into a coronavirus to “inflict maximum harm,” a bizarre lie that’s also meant to strengthen the unproven idea that covid was created in a lab. Telegram is also a fertile ground for sharing phony HIV cures; one group with 43,000 followers has promoted an oil that it claims is used in Nigeria.
When YouTube began to crack down on medical misinformation during the height of the pandemic, conservative and conspiratorial content creators went to Rumble instead. The company claims it saw a 106% revenue increase last year and now has an average of 67 million monthly active users. A clip of Rogan talking about Duesberg’s AIDS-related claims has racked up 30,000 views in the last two years, and an interview with Farber by Joseph Mercola, a major player in the natural-health and anti-vaccine worlds, has gotten more than 300,000 views since it was posted there earlier this year.
The concern with these kinds of falsehoods, Smith says, is always that patient populations, communities at high risk for HIV, or populations with real histories of medical mistreatment, like Black and Native people, “think there might be a grain of truth and start to doubt if they need to be tested or continue treatment or things like that.” She adds, “It’s one of those things that either plants seeds of doubt or encourages those to grow if they’re already there.”
But it’s far more concerning when people like Rogan, who have a massive reach, take up the cause. “They just have such a huge platform, and those stories are scary and they spread,” Smith says. “Once they do that, it’s so hard for scientists to fight that.”
The offline impact
For all the work AIDS denialists are doing to try to grow their numbers, Kalichman remains hopeful that they’re unlikely to make significant inroads. The most profound reason, he believes, is that many people now know someone living with HIV—a friend, a family member, a celebrity. As a result, many more people are directly familiar with how life-altering current HIV treatments have been.
“This isn’t the ’90s,” he says. “People are taking one pill once a day and living really healthy lives. If a person with HIV smokes, they’re much more likely to die of a smoking-related illness [than HIV] if their HIV is being treated.”
Yet the risk doesn’t necessarily hang solely on how many people buy into the false information—but who does. Among people who have been studying AIDS denialism for decades, the biggest concern is ultimately that someone in public office will take notice and begin formally acting on those ideas. If that happens, Curran, the former assistant surgeon general, worries it could jeopardize funding for PEPFAR (the United States President’s Emergency Plan for AIDS Relief), the enormously successful public health program that has supported HIV testing, prevention, and treatment in lower-resource countries since the George W. Bush administration.
The current political environment further exacerbates the risk: Donald Trump has said that if he is elected again, he will cut federal funding to schools with mask or vaccine mandates, and Florida’s surgeon general, Joseph Ladapo, allowed parents to continue sending unvaccinated kids to school in the midst of a measles outbreak.
All it takes, Kalichman says, is for “someone who’s sitting in a policymaker’s chair in a state health department” to take AIDS denial arguments seriously. “A lot of damage can be done.” (He expresses relief, however, that Trump and his wing of the Republican Party have not yet taken up the particular cause of AIDS denialists: “Thank goodness.”)
Florida Surgeon General Joseph Ladapo’s letter to parents during a measles outbreak ran counter to the CDC’s recommended guidelines.
AP PHOTO/CHRIS O’MEARA
Then there is the fact that the same kind of denialist campaign is already being deployed with other diseases. Christiane Northrup, a former ob-gyn and a significant figure in natural health and related conspiratorial thinking, has recently been on Telegram sharing an old lie that a German court ruled the measles virus “does not exist.” (Northrup did not respond to a request for comment.)
On its own, if it were just bunk HIV theories recirculating, “I wouldn’t be as worried about it,” Smith says. “But in this broader anti-covid, anti-vaccine, and everything about germ theory being denied—that’s what worries me.”
By trying to effectively decouple cause and effect—claiming that HIV doesn’t cause AIDS, that measles isn’t caused by a virus and is instead a vitamin deficiency or caused by the MMR (measles, mumps, and rubella) vaccine itself—these movements discourage people from treating or trying to prevent serious and contagious illnesses. They try to sow doubt about the very nature of viruses themselves, a global gesture toward doubt, distrust, and minimization of serious diseases. Even the much stranger and more esoteric “terrain theory” seems to be making a modest comeback in alternative online spaces; the idea is that germs don’t cause illness in a healthy person whose “terrain” is sound thanks to vitamins, exercise, and sunlight.
These kinds of false claims, Smith points out, are resurging at a particularly inopportune time, when the public health world is already trying to prepare for the next pandemic. “We’re out of the emergency mode of the covid pandemic and trying to repair some of the damage to public health,” she says, “and thinking about another one.”
Curran also has a larger, more existential concern when he considers the lessons of the AIDS and covid pandemics: “The problem is, if you bad-mouth Fauci and his successors so much, the next epidemic people come around and they say, ‘Why should we trust these people?’ And the question is, who do we trust?
“When bird flu gets out of cows and goes to humans, are we going to go to Joe Rogan for the answers?”
A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating.
Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch when her skin was pinched. She needed a tracheostomy tube in her neck to breathe and a feeding tube to deliver nutrition directly to her stomach, because she couldn’t swallow. Where should her medical care go from there?
This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members, recalls Holland Kaplan, an internal-medicine physician at Baylor College of Medicine who was involved in Sophie’s care. But the family couldn’t agree. Sophie’s daughter was adamant that her mother would want to stop having medical treatments and be left to die in peace. Another family member vehemently disagreed and insisted that Sophie was “a fighter.” The situation was distressing for everyone involved, including Sophie’s doctors.
End-of-life decisions can be extremely upsetting for surrogates, the people who have to make those calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation.
The tool hasn’t been built yet. But Wendler plans to train it on a person’s own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.
Wendler, along with bioethicist Brian Earp at the University of Oxford and their colleagues, hopes to start building the tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI.
Chest compressions administered to a failing heart might extend a person’s life. But the treatment might lead to a broken sternum and ribs, and by the time the person comes around—if ever—significant brain damage may have developed. Keeping the heart and lungs functioning with a machine might maintain a supply of oxygenated blood to the other organs—but recovery is no guarantee, and the person could develop numerous infections in the meantime. A terminally ill person might want to continue trying hospital-administered medications and procedures that could offer a few more weeks or months. But someone else might want to forgo those interventions and be more comfortable at home.
The decisions themselves can also be extremely distressing, Wendler adds. While some surrogates feel a sense of satisfaction from having supported their loved ones, others struggle with the emotional burden and can feel guilty for months or even years afterwards. Some fear they ended the life of their loved ones too early. Others worry they unnecessarily prolonged their suffering. “It’s really bad for a lot of people,” says Wendler. “People will describe this as one of the worst things they’ve ever had to do.”
In 2007, Wendler and his colleagues built a “very basic,” preliminary version of this tool based on a small amount of data. That simplistic tool did “at least as well as next-of-kin surrogates” in predicting what kind of care people would want, says Wendler.
Now Wendler, Earp and their colleagues are working on a new idea. Instead of being based on crude characteristics, the new tool the researchers plan to build will be personalized. The team proposes using AI and machine learning to predict a patient’s treatment preferences on the basis of personal data such as medical history, along with emails, personal messages, web browsing history, social media posts, or even Facebook likes. The result would be a “digital psychological twin” of a person—a tool that doctors and family members could consult to guide a person’s medical care. It’s not yet clear what this would look like in practice, but the team hopes to build and test the tool before refining it.
The researchers call their tool a personalized patient preference predictor, or P4 for short. In theory, if it works as they hope, it could be more accurate than the previous version of the tool—and more accurate than human surrogates, says Wendler. It could be more reflective of a patient’s current thinking than an advance directive, which might have been signed a decade beforehand, says Earp.
A better bet?
A tool like the P4 could also help relieve the emotional burden surrogates feel in making such significant life-or-death decisions about their family members, which can sometimes leave people with symptoms of post-traumatic stress disorder, says Jennifer Blumenthal-Barby, a medical ethicist at Baylor College of Medicine in Texas.
Some surrogates experience “decisional paralysis” and might opt to use the tool to help steer them through a decision-making process, says Kaplan. In cases like these, the P4 could help ease some of the burden surrogates might be experiencing, without necessarily giving them a black-and-white answer. It might, for example, suggest that a person was “likely” or “unlikely” to feel a certain way about a treatment, or give a percentage score indicating how likely the answer is to be right or wrong.
Kaplan can imagine a tool like the P4 being helpful in cases like Sophie’s, where various family members might have different opinions on a person’s medical care. In those cases, the tool could be offered to these family members, ideally to help them reach a decision together.
It could also help guide decisions about care for people who don’t have surrogates. Kaplan is an internal-medicine physician at Ben Taub Hospital in Houston, a “safety net” hospital that treats patients whether or not they have health insurance. “A lot of our patients are undocumented, incarcerated, homeless,” she says. “We take care of patients who basically can’t get their care anywhere else.”
These patients are often in dire straits and at the end stages of diseases by the time Kaplan sees them. Many of them aren’t able to discuss their care, and some don’t have family members to speak on their behalf. Kaplan says she could imagine a tool like the P4 being used in situations like these, to give doctors a little more insight into what the patient might want. In such cases, it might be difficult to find the person’s social media profile, for example. But other information might prove useful. “If something turns out to be a predictor, I would want it in the model,” says Wendler. “If it turns out that people’s hair color or where they went to elementary school or the first letter of their last name turns out to [predict a person’s wishes], then I’d want to add them in.”
This approach is backed by preliminary research from Earp and his colleagues, who have started running surveys to find out how individuals might feel about using the P4. This research is ongoing, but early responses suggest that people would be willing to try the model if there were no human surrogates available. Earp says he feels the same way. He also says that if the P4 and a surrogate were to give different predictions, “I’d probably defer to the human that knows me, rather than the model.”
Not a human
Earp’s feelings betray a gut instinct many others will share: that these huge decisions should ideally be made by a human. “The question is: How do we want end-of-life decisions to be made, and by whom?” says Georg Starke, a researcher at the Swiss Federal Institute of Technology Lausanne. He worries about the potential of taking a techno-solutionist approach and turning intimate, complex, personal decisions into “an engineering issue.”
Bryanna Moore, an ethicist at the University of Rochester, says her first reaction to hearing about the P4 was: “Oh, no.” Moore is a clinical ethicist who offers consultations for patients, family members, and hospital staff at two hospitals. “So much of our work is really just sitting with people who are facing terrible decisions … they have no good options,” she says. “What surrogates really need is just for you to sit with them and hear their story and support them through active listening and validating [their] role … I don’t know how much of a need there is for something like this, to be honest.”
Moore accepts that surrogates won’t always get it right when deciding on the care of their loved ones. Even if we were able to ask the patients themselves, their answers would probably change over time. Moore calls this the “then self, now self” problem.
And she doesn’t think a tool like the P4 will necessarily solve it. Even if a person’s wishes were made clear in previous notes, messages, and social media posts, it can be very difficult to know how you’ll feel about a medical situation until you’re in it. Kaplan recalls treating an 80-year-old man with osteoporosis who had been adamant that he wanted to receive chest compressions if his heart were to stop beating. But when the moment arrived, his bones were too thin and brittle to withstand the compressions. Kaplan remembers hearing his bones cracking “like a toothpick,” and the man’s sternum detaching from his ribs. “And then it’s like, what are we doing? Who are we helping? Could anyone really want this?” says Kaplan.
There are other concerns. For a start, an AI trained on a person’s social media posts may not end up being all that much of a “psychological twin.” “Any of us who have a social media presence know that often what we put on our social media profile doesn’t really represent what we truly believe or value or want,” says Blumenthal-Barby. And even if we did, it’s hard to know how these posts might reflect our feelings about end-of-life care—many people find it hard enough to have these discussions with their family members, let alone on public platforms.
As things stand, AI doesn’t always do a great job of coming up with answers to human questions. Even subtly altering the prompt given to an AI model can leave you with an entirely different response. “Imagine this happening for a fine-tuned large language model that’s supposed to tell you what a patient wants at the end of their life,” says Starke. “That’s scary.”
On the other hand, humans are fallible, too. Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine, thinks the P4 is a good idea, provided it is rigorously tested. “We shouldn’t hold these technologies to a higher standard than we hold ourselves,” she says.
Earp and Wendler acknowledge the challenges ahead of them. They hope the tool they build can capture useful information that might reflect a person’s wishes without violating privacy. They want it to be a helpful guide that patients and surrogates can choose to use, but not a default way to give black-and-white final answers on a person’s care.
Even if they do succeed on those fronts, they might not be able to control how such a tool is ultimately used. Take a case like Sophie’s, for example. If the P4 were used, its prediction might only serve to further fracture family relationships that are already under pressure. And if it is presented as the closest indicator of a patient’s own wishes, there’s a chance that a patient’s doctors might feel legally obliged to follow the output of the P4 over the opinions of family members, says Blumenthal-Barby. “That could just be very messy, and also very distressing, for the family members,” she says.
“What I’m most worried about is who controls it,” says Wendler. He fears that hospitals could misuse tools like the P4 to avoid undertaking costly procedures, for example. “There could be all kinds of financial incentives,” he says.
Everyone contacted by MIT Technology Review agrees that the use of a tool like the P4 should be optional, and that it won’t appeal to everyone. “I think it has the potential to be helpful for some people,” says Earp. “I think there are lots of people who will be uncomfortable with the idea that an artificial system should be involved in any way with their decision making with the stakes being what they are.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
This week, I’ve been working on a piece about an AI-based tool that could help guide end-of-life care. We’re talking about the kinds of life-and-death decisions that come up for very unwell people: whether to perform chest compressions, for example, or start grueling therapies, or switch off life support.
Often, the patient isn’t able to make these decisions—instead, the task falls to a surrogate, usually a family member, who is asked to try to imagine what the patient might choose if able. It can be an extremely difficult and distressing experience.
A group of ethicists have an idea for an AI tool that they believe could help make things easier. The tool would be trained on information about the person, drawn from things like emails, social media activity, and browsing history. And it could predict, from those factors, what the patient might choose. The team describe the tool, which has not yet been built, as a “digital psychological twin.”
There are lots of questions that need to be answered before we introduce anything like this into hospitals or care settings. We don’t know how accurate it would be, or how we can ensure it won’t be misused. But perhaps the biggest question is: Would anyone want to use it?
To answer this question, we first need to address who the tool is being designed for. The researchers behind the personalized patient preference predictor, or P4, had surrogates in mind—they want to make things easier for the people who make weighty decisions about the lives of their loved ones. But the tool is essentially being designed for patients. It will be based on patients’ data and aims to emulate these people and their wishes.
This is important. In the US, patient autonomy is king. Anyone who is making decisions on behalf of another person is asked to use “substituted judgment”—essentially, to make the choices that the patient would make if able. Clinical care is all about focusing on the wishes of the patient.
If that’s your priority, a tool like the P4 makes a lot of sense. Research suggests that even close family members aren’t great at guessing what type of care their loved ones might choose. If an AI tool is more accurate, it might be preferable to the opinions of a surrogate.
But while this line of thinking suits American sensibilities, it might not apply the same way in all cultures. In some cases, families might want to consider the impact of an individual’s end-of-life care on family members, or the family unit as a whole, rather than just the patient.
“I think sometimes accuracy is less important than surrogates,” Bryanna Moore, an ethicist at the University of Rochester in New York, told me. “They’re the ones who have to live with the decision.”
Moore has worked as a clinical ethicist in hospitals in both Australia and the US, and she says she has noticed a difference between the two countries. “In Australia there’s more of a focus on what would benefit the surrogates and the family,” she says. And that’s a distinction between two English-speaking countries that are somewhat culturally similar. We might see greater differences in other places.
Moore says her position is controversial. When I asked Georg Starke at the Swiss Federal Institute of Technology Lausanne for his opinion, he told me that, generally speaking, “the only thing that should matter is the will of the patient.” He worries that caregivers might opt to withdraw life support if the patient becomes too much of a “burden” on them. “That’s certainly something that I would find appalling,” he told me.
The way we weigh a patient’s own wishes and those of their family members might depend on the situation, says Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine in Houston, Texas. Perhaps the opinions of surrogates might matter more when the case is more medically complex, or if medical interventions are likely to be futile.
Rahimzadeh has herself acted as a surrogate for two close members of her immediate family. She hadn’t had detailed discussions about end-of-life care with either of them before their crises struck, she told me.
Would a tool like the P4 have helped her through it? Rahimzadeh has her doubts. An AI trained on social media or internet search history couldn’t possibly have captured all the memories, experiences, and intimate relationships she had with her family members, which she felt put her in good stead to make decisions about their medical care.
“There are these lived experiences that are not well captured in these data footprints, but which have incredible and profound bearing on one’s actions and motivations and behaviors in the moment of making a decision like that,” she told me.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
You can read the full article about the P4, and its many potential benefits and flaws, here.
AI is infiltrating health care in lots of other ways. We shouldn’t let it make all the decisions—AI paternalism could put patient autonomy at risk, as we explored in a previous edition of The Checkup.
When is someone deemed “too male” or “too female” to compete in the Olympics? A new podcast called Tested dives into the long, fascinating, and infuriating history of testing and excluding athletes on the basis of their gender and sex. (Sequencer)
There’s a dirty secret among Olympic swimmers: Everyone pees in the pool. “I’ve probably peed in every single pool I’ve swam in,” said Lilly King, a three-time Olympian for Team USA. “That’s just how it goes.” (Wall Street Journal)
When saxophonist Joey Berkley developed a movement disorder that made his hands twist into pretzel shapes, he volunteered for an experimental treatment that involved inserting an electrode deep into his brain. That was three years ago. Now he’s releasing a new suite about his experience, including a frenetic piece inspired by the surgery itself. (NPR)
After a case of mononucleosis, Jason Werbeloff started to see the people around him in an entirely new way—literally. He’s one of a small number of people for whom people’s faces morph into monstrous shapes, with bulging sides and stretching teeth, because of a rare condition called prosopometamorphopsia. (The New Yorker)
How young are you feeling today? Your answer might depend on how active you’ve been, and how sunny it is. (Innovation in Aging)
This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.
Back in 2018, it was my colleague Antonio Regalado, senior editor for biomedicine, who broke the story that a Chinese scientist named He Jiankui had used CRISPR to edit the genes of live human embryos, leading to the first gene-edited babies in the world. The news made He (or JK, as he prefers to be called) a controversial figure worldwide, and just a year later he was sentenced to three years in prison by a Chinese court, which found him guilty of illegal medical practices.
Last Thursday, JK, who was released from prison in 2022, sat down with Antonio and Mat Honan, our editor in chief, for a live broadcast conversation on the experiment, his current situation, and his plans for the future.
If you subscribe to MIT Technology Review, you can watch a recording of the conversation or read the transcript here. But if you don’t yet subscribe (and do consider it—I’m biased, but it’s worth it), allow me to recap some of the highlights of what JK shared.
His life has been eventful since he came out of prison. JK sought to live in Hong Kong but was rejected by its government; he publicly declared he would set up a nonprofit lab in Beijing, but that hasn’t happened yet; he was hired to lead a genetic-medicine research institution at Wuchang University of Technology, a private university in Wuhan, but he appears to have since been let go. Now, according to Stat News, he has relocated to Hainan, China’s southernmost island province, and started a lab there.
During the MIT Technology Review conversation, JK confirmed that he’s currently in Hainan and working on using gene-editing technology to cure genetic diseases like Duchenne muscular dystrophy (DMD).
He’s currently funded by private donations from Chinese and American companies, though he declined to name them. Some have even offered to pay him to travel to obscure countries with lax regulations to continue his previous work, but he turned them down. JK said he would much prefer to return to academia to do research, but that he can still conduct scientific research at a private company.
For now, he’s planning to experiment only on mice, monkeys, and nonviable human embryos, JK said.
His experiment in 2018 inspired China to come out with regulations that explicitly forbid gene editing for reproductive uses. Today, implanting an edited embryo into a human is a crime subject to up to seven years in prison. JK repeatedly said all his current work will “comply with all the laws, regulations, and international ethics” but shied away from answering a question on what he thinks regulation around gene editing should look like.
However, he is hopeful that society will come around one day and accept embryo gene editing as a form of medical treatment. “As humans, we are always conservative. We are always worried about new things, and it takes time for people to accept new technology,” he said. He believes this lack of societal acceptance is the biggest obstacle to using CRISPR for embryo editing.
Besides DMD, JK is also working on gene-editing treatments for Alzheimer’s. And there’s a personal reason. “I decided to do Alzheimer’s disease because my mother has Alzheimer’s. So I’m going to have Alzheimer’s too, and maybe my daughter and my granddaughter. So I want to do something to change it,” JK said. He said his interest in embryo gene editing was never about trying to change human evolution, but about changing the lives of his family and the patients who have come to him for help.
His idea for an Alzheimer’s treatment is to modify one letter in the human DNA sequence to simulate a natural mutation found in some Icelandic and Scandinavian people, which previous research has linked to a lower risk of developing Alzheimer’s disease. JK said it would take only about two years to finish the basic research for this treatment, but he won’t go into human trials under the current regulations.
He compares these gene-editing treatments to vaccines that everyone will be able to get easily in the future. “I would say in 50 years, like in 2074, embryo gene editing will be as common as IVF babies to prevent all the genetic diseases we know today. So the babies born at that time will be free of genetic disease,” he said.
For all that he’s been through, JK seems pretty optimistic about the future of embryo gene editing. “I believe society will eventually accept that embryo gene editing is a good thing because it improves human health. So I’m waiting for society to accept that,” he said.
Do you agree with his vision of embryo gene editing as a universal medical treatment in the future? I’d love to hear your thoughts. Write to me at zeyi@technologyreview.com.
Now read the rest of China Report
Catch up with China
1. There’s a new buzz phrase in China’s latest national economy blueprint: “new productive forces.” It just means the country is still invested in technology-driven economic growth. (The Economist $)
2. For the first time ever, Chinese scientists found water in the form of hydrated minerals from lunar soil samples retrieved in 2020. (Sixth Tone)
3. In June, Chinese electric-vehicle brands accounted for 11% of the European EV market, reaching a new record. But tariffs that went into effect in July could stop that trend. (Bloomberg $)
4. Chinese companies are supplying precision parts for weapons to Russia through a Belarusian defense contractor. (Nikkei Asia $)
5. China is looking for international buyers for its first home-grown passenger jet, the C919. Airlines in Southeast Asian countries like Indonesia and Brunei are the most likely customers. (South China Morning Post $)
6. Hundreds of Temu suppliers protested at the company’s headquarters in Guangzhou. They said the platform is subjecting them to unfair penalties for consumer complaints. (Bloomberg $)
Lost in translation
Since Russia tightened its import regulations early this year, the once-lucrative business of smuggling Chinese electric vehicles has almost vanished, according to the Chinese publication Lifeweek. Previously, traders could leverage the high demand for Chinese EVs in Russia and the low tariffs in transit countries in Central Asia to reap huge profits. For example, one businessman earned 870,000 RMB (about $120,000) through one batch export of 12 cars in December.
But new policies in Russia drastically increased import duties and enforced stricter vehicle registration. Chinese carmakers like BYD and XPeng also saw the opportunity to set up licensed operations in Central Asia to cater to this market. These changes transformed a profitable business into a barely sustainable one, and traders have been forced to adapt or exit the market.
One more thing
To prevent drivers from falling asleep, some highways in China have installed laser equipment that lights up the night sky with red, blue, and green rays to attract attention and keep people awake. This looks straight out of a sci-fi novel but has been in use in over 10 Chinese provinces since 2022, according to the company that made the system.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
What does the genome do? You might have heard that it is a blueprint for an organism. Or that it’s a bit like a recipe. But building an organism is much more complex than constructing a house or baking a cake.
This week I came across an idea for a new way to think about the genome—one that borrows from the field of artificial intelligence. Two researchers are arguing that we should think about it as being more like a generative model, a form of AI that can generate new things.
You might be familiar with such AI tools—they’re the ones that can create text, images, or even films from various prompts. Do our genomes really work in the same way? It’s a fascinating idea. Let’s explore.
When I was at school, I was taught that the genome is essentially a code for an organism. It contains the instructions needed to make the various proteins we need to build our cells and tissues and keep them working. It made sense to me to think of the human genome as being something like a program for a human being.
But this metaphor falls apart once you start to poke at it, says Kevin Mitchell, a neurogeneticist at Trinity College in Dublin, Ireland, who has spent a lot of time thinking about how the genome works.
A computer program is essentially a sequence of steps executed in order. If the genome worked that way, development would follow a fixed set of instructions: first build a brain, then a head, then a neck, and so on. That’s just not how things work.
Another popular metaphor likens the genome to a blueprint for the body. But a blueprint is essentially a plan for what a structure should look like when it is fully built, with each part of the diagram representing a bit of the final product. Our genomes don’t work this way either.
It’s not as if you’ve got a gene for an elbow and a gene for an eyebrow. Multiple genes are involved in the development of multiple body parts. The functions of genes can overlap, and the same genes can work differently depending on when and where they are active. It’s far more complicated than a blueprint.
Then there’s the recipe metaphor. In some ways, this is more accurate than the analogy of a blueprint or program. It might be helpful to think about our genes as a set of ingredients and instructions, and to bear in mind that the final product is also at the mercy of variations in the temperature of the oven or the type of baking dish used, for example. Identical twins are born with the same DNA, after all, but they are often quite different by the time they’re adults.
But the recipe metaphor is too vague, says Mitchell. Instead, he and his colleague Nick Cheney at the University of Vermont are borrowing concepts from AI to capture what the genome does. Mitchell points to generative AI models like Midjourney and DALL-E, both of which can generate images from text prompts. These models work by capturing elements of existing images to create new ones.
Say you write a prompt for an image of a horse. The models have been trained on a huge number of images of horses, and these images are essentially compressed to allow the models to capture certain elements of what you might call “horsiness.” The AI can then construct a new image that contains these elements.
We can think about genetic data in a similar way. According to this model, we might consider evolution to be the training data. The genome is the compressed data—the set of information that can be used to create the new organism. It contains the elements we need, but there’s plenty of scope for variation. (There are lots more details about the various aspects of the model in the paper, which has not yet been peer-reviewed.)
Mitchell thinks it’s important to get our metaphors in order when we think about the genome. New technologies are allowing scientists to probe ever deeper into our genes and the roles they play. They can now study how all the genes are expressed in a single cell, for example, and how this varies across every cell in an embryo.
“We need to have a conceptual framework that will allow us to make sense of that,” says Mitchell. He hopes that the concept will aid the development of mathematical models that might help us better understand the intricate relationships between genes and the organisms they end up being part of—in other words, exactly how components of our genome contribute to our development.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
Last year, researchers built a new human genome reference designed to capture the diversity among us. They called it the “pangenome,” as Antonio Regalado reported.
Generative AI has taken the world by storm. Will Douglas Heaven explored six big questions that will determine the future of the technology.
A Disney director tried to use AI to generate a soundtrack in the style of Hans Zimmer. It wasn’t as good as the real thing, as Melissa Heikkilä found.
What is AI? No one can agree, as Will found in his recent deep dive on the topic.
From around the web
Evidence from more than 1,400 rape cases in Maryland, some from as far back as 1977, is set to be processed by the end of the year, thanks to a new law. The state still has more than 6,000 untested rape kits. (ProPublica)
How well is your brain aging? A new tool has been designed to estimate a person’s brain age from an MRI scan, accounting for the possible effects of traumatic brain injuries. (NeuroImage)
Iran has reported the country’s first locally acquired cases of dengue, a viral infection spread by mosquitoes. There are concerns it could spread. (WHO)
IVF is expensive, and add-ons like endometrial scratching (which literally involves scratching the lining of the uterus) are not supported by strong evidence. Is the fertility industry profiting from vulnerability? (The Lancet)
Up to 2 million Americans are getting their supply of weight loss drugs like Wegovy or Zepbound from compounding pharmacies. They’re a fraction of the price of brand-name Big Pharma drugs, but there are some safety concerns. (KFF Health News)