When someone loses part of a leg, a prosthetic can make it easier to get around. But most prosthetics are static, cumbersome, and hard to move. A new neural interface connects a bionic limb to nerve endings in the thigh, allowing the limb to be controlled by the brain. The new device, which is described today in Nature Medicine, could help people with lower-leg amputations feel as if their prosthesis is part of them.
“When you ask a patient ‘What is your body?’ they don’t include the prosthesis,” says MIT biophysicist Hugh Herr, one of the lead authors on the study. The work is personal for him: he lost both his lower legs in a climbing accident when he was 17. He says linking the brain to the prosthesis can make it feel more like part of someone’s anatomy, which can have a positive emotional impact.
Getting the neural interface hooked up to a prosthetic takes two steps. First, patients undergo surgery. Following a lower leg amputation, portions of shin and calf muscle still remain. The operation connects shin muscle, which contracts to make the ankle flex upward, to calf muscle, which counteracts this movement. The prosthetic can also be fitted at this point. Reattaching the remnants of these muscles can enable the prosthetic to move more dynamically. It can also reduce phantom limb pain, and patients are less likely to trip and fall.
“The surgery stands on its own,” says Amy Pietrafitta, a para-athlete who received it in 2018. “I feel like I have my leg back.” But natural movements are still limited when the prosthetic isn’t connected to the nervous system.
In step two, surface electrodes measure nerve activity from the brain to the calf and shin muscles, indicating an intention to move the lower leg. A small computer in the bionic leg decodes those nerve signals and moves the leg accordingly, allowing the patient to move the limb more naturally.
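Decoding pipelines like this generally filter the raw electrode signal, extract an amplitude envelope for each muscle, and map the difference between opposing muscles to a joint command. Below is a minimal sketch of that general idea, not the team's actual algorithm; the sample rate, filter settings, and gain are all invented for illustration.

```python
# Illustrative sketch of a myoelectric control loop; NOT the device's
# actual algorithm. All filter choices and gains below are hypothetical.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000  # assumed sample rate in Hz

def emg_envelope(raw, fs=FS):
    """Band-pass the surface EMG, rectify it, then low-pass to get an
    amplitude envelope that tracks muscle activation."""
    bp = butter(4, [20, 450], btype="bandpass", fs=fs, output="sos")
    lp = butter(4, 5, btype="lowpass", fs=fs, output="sos")
    return sosfilt(lp, np.abs(sosfilt(bp, raw)))

def ankle_command(shin_env, calf_env, gain=1.0):
    """Map the antagonist pair to a net ankle command: the shin muscle
    flexes the ankle upward, the calf muscle counteracts it, mirroring
    the surgically reconnected muscles."""
    return gain * (shin_env - calf_env)

# Example: one second of simulated EMG in which the shin muscle fires.
t = np.arange(FS) / FS
shin = np.random.randn(FS) * (0.5 + 0.5 * np.sin(2 * np.pi * t))
calf = np.random.randn(FS) * 0.2
torque = ankle_command(emg_envelope(shin), emg_envelope(calf))
print(f"mean commanded torque: {torque.mean():+.3f} (arbitrary units)")
```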
“If you have intact biological limbs, you can walk up and down steps, for example, and not even think about it. It’s involuntary,” says Herr. “That’s the case with our patients, but their limb is made of titanium and silicone.”
The authors compared the mobility of seven patients using a neural interface with that of patients who had not received the surgery. Patients using the neural interface could walk 41% faster and climb sloped surfaces and steps. They could also dodge obstacles more nimbly and had better balance. And they described feeling that the prosthetic was truly a part of their body rather than just a tool that they used to get around.
“It’s a very forward-thinking approach,” says Hamid Charkhkar, a biomedical engineer at Case Western Reserve University, who was not involved in the study. “Our limbs are not like shoes. They’re not worn over our bodies. They are integrally attached to our bodies via bones, muscles, and nerves.”
There are limitations. The surgery can be done during amputation or several years later, but it won’t work equally well for every patient. If it’s done later, for example, some people’s upper thigh muscles could have atrophied too severely for them to receive the full benefits.
The surgery connecting the shin and calf muscles has become the standard of care at Brigham and Women’s Hospital in Boston. But the surface electrodes that give patients full neural control of their limbs are a few years away from being clinically implemented. Plus, the neural interfaces have only been used in laboratory settings, and it will be important to know how they hold up in the real world.
Herr and his team at MIT hope to provide users with even greater control over their prosthetic limbs. In the future, their efforts will likely involve replacing the surface electrodes with magnetic spheres, which can more accurately track muscle dynamics.
“The goal that we have is to really reconstruct bodies, to rebuild bodies,” says Herr. And to fully achieve that ambition, he says, “neural integration and embodiment is our long-term goal.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
This week I’ve been thinking about thought. It was all brought on by reading my colleague Niall Firth’s recent cover story about the use of artificial intelligence in video games. The piece describes how game companies are working to incorporate AI into their products to create more immersive experiences for players.
These companies are applying large language models to generate new game characters with detailed backstories—characters that could engage with a player in any number of ways. Enter in a few personality traits, catchphrases, and other details, and you can create a background character capable of endless unscripted, never-repeating conversations with you.
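Under the hood, this usually amounts to wrapping a language model in a character description and replaying the conversation history with each turn. Here is a minimal sketch of the pattern, assuming the OpenAI Python SDK; the character sheet and model name are invented for illustration, and no particular game studio's implementation is being described.

```python
# Sketch of an LLM-backed game character; the traits and model name
# are placeholders, not any studio's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHARACTER_SHEET = """You are Brynja, a dockside fishmonger.
Personality: sarcastic, superstitious, secretly kind.
Catchphrase: 'The sea keeps its own ledger.'
Stay in character; never mention being an AI."""

history = [{"role": "system", "content": CHARACTER_SHEET}]

def talk(player_line: str) -> str:
    """Send the player's line plus prior dialogue so the character's
    replies stay consistent but are never scripted in advance."""
    history.append({"role": "user", "content": player_line})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(talk("Heard any strange rumors on the docks?"))
```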
This is what got me thinking. Neuroscientists and psychologists have long been using games as research tools to learn about the human mind. Numerous video games have been either co-opted or especially designed to study how people learn, navigate, and cooperate with others, for example. Might AI video games allow us to probe more deeply, and unravel enduring mysteries about our brains and behavior?
I decided to call up Hugo Spiers to find out. Spiers is a neuroscientist at University College London who has been using a game to study how people find their way around. In 2016, Spiers and his colleagues worked with Deutsche Telekom and the games company Glitchers to develop Sea Hero Quest, a mobile video game in which players have to navigate a sea in a boat. They have since been using the game to learn more about how people lose navigational skills in the early stages of Alzheimer’s disease.
The use of video games in neuroscientific research kicked into gear in the 1990s, Spiers tells me, following the release of 3D games like Wolfenstein 3D and Duke Nukem. “For the first time, you could have an entirely simulated world in which to test people,” he says.
Scientists could observe and study how players behaved in these games: how they explored their virtual environment, how they sought rewards, how they made decisions. And research volunteers didn’t need to travel to a lab—their gaming behavior could be observed from wherever they happened to be playing, whether that was at home, at a library, or even inside an MRI scanner.
For scientists like Spiers, one of the biggest advantages of using games in research is that people want to play them. The use of games allows scientists to explore fundamental experiences like fun and curiosity. Researchers often offer a small financial incentive to volunteers who take part in their studies. But they don’t have to pay people to play games, says Spiers.
You’re much more likely to have fun if you’re motivated. It’s just not quite the same when you’re doing something purely for the money. And not having to pay participants allows researchers to perform huge studies on smaller budgets. Spiers has been able to collect data on over 4 million people from 195 countries, all of whom have willingly played Sea Hero Quest.
AI could help researchers go even further. A rich, immersive world filled with characters that interact in realistic ways could help them study how our minds respond to various social settings and how we relate to other individuals. By observing how players interact with AI characters, scientists can learn more about how we cooperate—and compete—with others. It would be far cheaper and easier than hiring actors to engage with research volunteers, says Spiers.
Spiers himself is interested in learning how people hunt, whether for food, clothes, or a missing pet. “We still use these bits of our brain that our ancestors would have used daily, and of course some traditional communities still hunt,” he tells me. “But we know almost nothing about how the brain does this.” He envisions using AI-driven nonplayer characters to learn more about how humans cooperate for hunting.
There are other, newer questions to explore. At a time when people are growing attached to “virtual companions,” and an increasing number of AI girlfriends and boyfriends are being made available, AI video-game characters could also help us understand these novel relationships. “People are forming a relationship with an artificial agent,” says Spiers. “That’s inherently interesting. Why would you not want to study that?”
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive:
My fellow London-based colleagues had a lot of fun generating an AI game character based on Niall. He turned out to be a sarcastic, smug, and sassy monster.
Google DeepMind has developed a generative AI model that can produce a basic but playable video game from a short description, a hand-drawn sketch, or a photo, as my colleague Will Heaven wrote earlier this year. The resulting games look a bit like Super Mario Bros.
Today’s world is undeniably gamified, argues Bryan Gardiner. He explores how we got here in another article from the Play issue of the magazine.
Large language models behave in unexpected ways. And no one really knows why, as Will wrote in March.
Technologies can be used to study the brain in lots of different ways—some of which are much more invasive than others. Tech that aims to read your mind and probe your memories is already being used, as I wrote in a previous edition of The Checkup.
From around the web:
Bad night of sleep left you needing a pick-me-up? Scientists have designed an algorithm to deliver tailored sleep-and-caffeine-dosing schedules to help tired individuals “maximize the benefits of limited sleep opportunities and consume the least required amount of caffeine.” (Yes, it may have been developed with the US Army in mind, but surely we all stand to benefit?) (Sleep)
Is dog cloning a sweet way to honor the memory of a dearly departed pet, or a “frivolous and wasteful and ethically obnoxious” pursuit in which humans treat living creatures as nothing more than their own “stuff”? This feature left me leaning toward the latter view, especially after learning that people tend to like having dogs with health problems … (The New Yorker)
States that have enacted the strongest restrictions to abortion access have also seen prescriptions for oral contraceptives plummet, according to new research. (Mother Jones)
And another study has linked Texas’s 2021 ban on abortion in early pregnancy with an increase in the number of infant deaths recorded in the state. In 2022, across the rest of the US, the number of infant deaths ascribed to anomalies present at birth decreased by 3.1%. In Texas, this figure increased by 22.9%. (JAMA Pediatrics)
We are three months into the bird flu outbreak in US dairy cattle. But the country still hasn’t implemented a sufficient testing infrastructure and doesn’t fully understand how the virus is spreading. (STAT)
In a clean room in his lab, Sean Moore peers through a microscope at a bit of intestine, its dark squiggles and rounded structures standing out against a light gray background. This sample is not part of an actual intestine; rather, it’s human intestinal cells on a tiny plastic rectangle, one of 24 so-called “organs on chips” his lab bought three years ago.
Moore, a pediatric gastroenterologist at the University of Virginia School of Medicine, hopes the chips will offer answers to a particularly thorny research problem. He studies rotavirus, a common infection that causes severe diarrhea, vomiting, dehydration, and even death in young children. In the US and other rich nations, up to 98% of the children who are vaccinated against rotavirus develop lifelong immunity. But in low-income countries, only about a third of vaccinated children become immune. Moore wants to know why.
His lab uses mice for some protocols, but animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. Researchers have documented this translation gap since at least 1962. “All these pharmaceutical companies know the animal models stink,” says Don Ingber, founder of the Wyss Institute for Biologically Inspired Engineering at Harvard and a leading advocate for organs on chips. “The FDA knows they stink.”
But until recently there was no other option. Research questions like Moore’s can’t ethically or practically be addressed with a randomized, double-blinded study in humans. Now these organs on chips, also known as microphysiological systems, may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. These tissues expand and contract with the flow of fluid and air, mimicking key organ functions like breathing, blood flow, and peristalsis, the muscular contractions of the digestive system.
More than 60 companies now produce organs on chips commercially, focusing on five major organs: liver, kidney, lung, intestines, and brain. They’re already being used to understand diseases, discover and test new drugs, and explore personalized approaches to treatment.
As they continue to be refined, they could solve one of the biggest problems in medicine today. “You need to do three things when you’re making a drug,” says Lorna Ewart, a pharmacologist and chief scientific officer of Emulate, a biotech company based in Boston. “You need to show it’s safe. You need to show it works. You need to be able to make it.”
All new compounds have to pass through a preclinical phase, where they’re tested for safety and effectiveness before moving to clinical trials in humans. Until recently, those tests had to run in at least two animal species—usually rats and dogs—before the drugs were tried on people.
But in December 2022, President Biden signed the FDA Modernization Act 2.0, which amended the original FDA Act of 1938. With a few small word changes, the act opened the door for non-animal-based testing in preclinical trials. Anything that makes it faster and easier for pharmaceutical companies to identify safe and effective drugs means better, potentially cheaper treatments for all of us.
Moore, for one, is banking on it, hoping the chips help him and his colleagues shed light on the rotavirus vaccine responses that confound them. “If you could figure out the answer,” he says, “you could save a lot of kids’ lives.”
While many teams have worked on organ chips over the last 30 years, the OG in the field is generally acknowledged to be Michael Shuler, a professor emeritus of chemical engineering at Cornell. In the 1980s, Shuler was a math and engineering guy who imagined an “animal on a chip,” a cell culture base seeded with a variety of human cells that could be used for testing drugs. He wanted to position a handful of different organ cells on the same chip, linked to one another, which could mimic the chemical communication between organs and the way drugs move through the body. “This was science fiction,” says Gordana Vunjak-Novakovic, a professor of biomedical engineering at Columbia University whose lab works with cardiac tissue on chips. “There was no body on a chip. There is still no body on a chip. God knows if there will ever be a body on a chip.”
Shuler had hoped to develop a computer model of a multi-organ system, but there were too many unknowns. The living cell culture system he dreamed up was his bid to fill in the blanks. For a while he played with the concept, but the materials simply weren’t good enough to build what he imagined.
He wasn’t the only one working on the problem. Linda Griffith, a founding professor of biological engineering at MIT and a 2006 recipient of a MacArthur “genius grant,” designed a crude early version of a liver chip in the late 1990s: a flat silicon chip, just a few hundred micrometers tall, with endothelial cells, oxygen and liquid flowing in and out via pumps, silicone tubing, and a polymer membrane with microscopic holes. She put liver cells from rats on the chip, and those cells organized themselves into three-dimensional tissue. It wasn’t a liver, but it modeled a few of the things a functioning human liver could do. It was a start.
Griffith, who rides a motorcycle for fun and speaks with a soft Southern accent, suffers from endometriosis, an inflammatory condition where cells from the lining of the uterus grow throughout the abdomen. She’s endured decades of nausea, pain, blood loss, and repeated surgeries. She never took medical leaves, instead loading up on Percocet, Advil, and margaritas, keeping a heating pad and couch in her office—a strategy of necessity, as she saw no other choice for a working scientist. Especially a woman.
And as a scientist, Griffith understood that the chronic diseases affecting women tend to be under-researched, underfunded, and poorly treated. She realized that decades of work with animals hadn’t done a damn thing to make life better for women like her. “We’ve got all this data, but most of that data does not lead to treatments for human diseases,” she says. “You can force mice to menstruate, but it’s not really menstruation. You need the human being.”
Or, at least, the human cells. Shuler and Griffith, and other scientists in Europe, worked on some of those early chips, but things really kicked off around 2009, when Don Ingber’s lab in Cambridge, Massachusetts, created the first fully functioning organ on a chip. That “lung on a chip” was made from flexible silicone rubber, lined with human lung cells and capillary blood vessel cells that “breathed” like the alveoli—tiny air sacs—in a human lung. A few years later Ingber, an MD-PhD with the tidy good looks of a younger Michael Douglas, founded Emulate, one of the earliest biotech companies making microphysiological systems. Since then he’s become a kind of unofficial ambassador for in vitro technologies in general and organs on chips in particular, giving hundreds of talks, scoring millions in grant money, repping the field with scientists and laypeople. Stephen Colbert once ragged on him after the New York Times quoted him as describing a chip that “walks, talks, and quacks like a human vagina,” a quote Ingber says was taken out of context.
Ingber began his career working on cancer. But he struggled with the required animal research. “I really didn’t want to work with them anymore, because I love animals,” he says. “It was a conscious decision to focus on in vitro models.” He’s not alone; a growing number of young scientists are speaking up about the distress they feel when research protocols cause pain, trauma, injury, and death to lab animals. “I’m a master’s degree student in neuroscience and I think about this constantly. I’ve done such unspeakable, horrible things to mice all in the name of scientific progress, and I feel guilty about this every day,” wrote one anonymous student on Reddit. (Full disclosure: I switched out of a psychology major in college because I didn’t want to cause harm to animals.)
Emulate is one of the companies building organ-on-a-chip technology. The devices combine live human cells with a microenvironment designed to emulate specific tissues.
Taking an undergraduate art class led Ingber to an epiphany: mechanical forces are just as important as chemicals and genes in determining the way living creatures work. On a shelf in his office he still displays a model he built in that art class, a simple construction of sticks and fishing line, which helped him realize that cells pull and twist against each other. That realization foreshadowed his current work and helped him design dynamic microfluidic devices that incorporated shear and flow.
Ingber coauthored a 2022 paper that’s sometimes cited as a watershed in the world of organs on chips. Researchers used Emulate’s liver chips to reevaluate 27 drugs that had previously made it through animal testing and had then gone on to kill 242 people and necessitate more than 60 liver transplants. The liver chips correctly flagged problems with 22 of the 27 drugs, an 87% success rate compared with a 0% success rate for animal testing. It was the first time organs on chips had been directly pitted against animal models, and the results got a lot of attention from the pharmaceutical industry. Dan Tagle, director of the Office of Special Initiatives for the National Center for Advancing Translational Sciences (NCATS), estimates that drug failures cost around $2.6 billion globally each year. The earlier in the process failing compounds can be weeded out, the more room there is for other drugs to succeed.
“The capacity we have to test drugs is more or less fixed in this country,” says Shuler, whose company, Hesperos, also manufactures organs on chips. “There are only so many clinical trials you can do. So if you put a loser into the system, that means something that could have won didn’t get into the system. We want to change the success rate from clinical trials to a much higher number.”
In 2011, the National Institutes of Health established NCATS and started investing in organs on chips and other in vitro technologies. Other government funders, like the Defense Advanced Research Projects Agency and the Food and Drug Administration, have followed suit. For instance, NIH recently funded NASA scientists to send heart tissue on chips into space. Six months in low gravity ages the cardiovascular system 10 years, so this experiment lets researchers study some of the effects of aging without harming animals or humans.
Scientists have made liver chips, brain chips, heart chips, kidney chips, intestine chips, and even a female reproductive system on a chip (with cells from ovaries, fallopian tubes, and uteruses that release hormones and mimic an actual 28-day menstrual cycle). Each of these chips exhibits some of the specific functions of the organs in question. Cardiac chips, for instance, contain heart cells that beat just like heart muscle, making it possible for researchers to model disorders like cardiomyopathy.
Shuler thinks organs on chips will revolutionize the world of research for rare diseases. “It is a very good model when you don’t have enough patients for normal clinical trials and you don’t have a good animal model,” he says. “So it’s a way to get drugs to people that couldn’t be developed in our current pharmaceutical model.” Shuler’s own biotech company used organs on chips to test a potential drug for myasthenia gravis, a rare neurological disorder. In 2022, the FDA approved the drug for clinical trials based on that data—one of six Hesperos drugs that have so far made it to that stage.
Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body. “We try and build a physical replica of the mathematical model of what really occurs in the body,” explains Shuler. That model guides the way the chip is designed, re-creating the amount of time a fluid or chemical stays in that particular organ—what’s known as the residence time. “As long as you have the same residence time, you should get the same response in terms of chemical conversion,” he says.
Tiny channels on each chip, between 10 and 100 microns in diameter, help bring fluids and oxygen to the cells. “When you get down to less than one micron, you can’t use normal fluid dynamics,” says Shuler. And fluid dynamics matters, because if the fluid moves through the device too quickly, the cells might die; too slowly, and the cells won’t react normally.
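The arithmetic behind the residence-time matching is simple: residence time is the channel volume divided by the volumetric flow rate (tau = V/Q), so designers pick a flow rate that reproduces the time fluid would spend in the real organ. A back-of-the-envelope sketch, with channel dimensions and target time invented for illustration:

```python
# Residence time = channel volume / volumetric flow rate (tau = V/Q).
# All dimensions here are illustrative, not from any real chip design.

channel_width  = 100e-6  # m (100 microns, upper end of the range above)
channel_height = 100e-6  # m
channel_length = 0.02    # m (2 cm of channel)

volume = channel_width * channel_height * channel_length  # m^3

target_residence_time = 30.0  # seconds fluid should spend in the "organ"

flow_rate = volume / target_residence_time  # m^3/s
# Convert to microliters per minute, the usual unit for microfluidic pumps.
ul_per_min = flow_rate * 1e9 * 60
print(f"required flow: {ul_per_min:.3f} uL/min")  # ~0.4 uL/min
```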
Chip technology, while sophisticated, has some downsides. One of them is user friendliness. “We need to get rid of all this tubing and pumps and make something that’s as simple as a well plate for culturing cells,” says Vunjak-Novakovic. Her lab and others are working on simplifying the design and function of such chips so they’re easier to operate and are compatible with robots, which do repetitive tasks like pipetting in many labs.
Cost and sourcing can also be challenging. Emulate’s base model, which looks like a simple rectangular box from the outside, starts at around $100,000 and rises steeply from there. Most human cells come from commercial suppliers that arrange for donations from hospital patients. During the pandemic, when people had fewer elective surgeries, many of those sources dried up. As microphysiological systems become more mainstream, finding reliable sources of human cells will be critical.
Another challenge is that every company producing organs on chips uses its own proprietary methods and technologies. Ingber compares the landscape to the early days of personal computing, when every company developed its own hardware and software, and none of them meshed well. For instance, the microfluidic systems in Emulate’s intestine chips are fueled by micropumps, while those made by Mimetas, another biotech company, use an electronic rocker and gravity to circulate fluids and air. “This is not an academic lab type of challenge,” emphasizes Ingber. “It’s a commercial challenge. There’s no way you can get the same results anywhere in the world with individual academics making [organs on chips], so you have to have commercialization.”
Namandje Bumpus, the FDA’s chief scientist, agrees. “You can find differences [in outcomes] depending even on what types of reagents you’re using,” she says. Those differences mean research can’t be easily reproduced, which diminishes its validity and usefulness. “It would be great to have some standardization,” she adds.
On the plus side, the chip technology could help researchers address some of the most deeply entrenched health inequities in science. Clinical trials have historically recruited white men, underrepresenting people of color, women (especially pregnant and lactating women), the elderly, and other groups. And treatments derived from those trials all too often fail in members of those underrepresented groups, as in Moore’s rotavirus vaccine mystery. “With organs on a chip, you may be able to create systems by which you are very, very thoughtful—where you spread the net wider than has ever been done before,” says Moore.
This microfluidic platform, designed by MIT engineers, connects engineered tissue from up to 10 organs.
Another advantage is that chips will eventually reduce the need for animals in the lab even as they lead to better human outcomes. “There are aspects of animal research that make all of us uncomfortable, even people that do it,” acknowledges Moore. “The same values that make us uncomfortable about animal research are also the same values that make us uncomfortable with seeing human beings suffer with diseases that we don’t have cures for yet. So we always sort of balance that desire to reduce suffering in all the forms that we see it.”
Lorna Ewart, who spent 20 years at the pharma giant AstraZeneca before joining Emulate, thinks we’re entering a kind of transition time in research, in which scientists use in vitro technologies like organs on chips alongside traditional cell culture methods and animals. “As your confidence in using the chips grows, you might say, Okay, we don’t need two animals anymore—we could go with chip plus one animal,” she says.
In the meantime, Sean Moore is excited about incorporating intestine chips more and more deeply into his research. His lab has been funded by the Gates Foundation to do what he laughingly describes as a bake-off between intestine chips made by Emulate and Mimetas. They’re infecting the chips with different strains of rotavirus to try to identify the pros and cons of each company’s design. It’s too early for any substantive results, but Moore says he does have data showing that organ chips are a viable model for studying rotavirus infection. That could ultimately be a real game-changer in his lab and in labs around the world.
“There’s more players in the space right now,” says Moore. “And that competition is going to be a healthy thing.”
Harriet Brown writes about health, medicine, and science. Her most recent book is Shadow Daughter: A Memoir of Estrangement. She’s a professor of magazine, news, and digital journalism at Syracuse University’s Newhouse School.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”
His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.
But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.
Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.
More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.
Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.
Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear-cut.
“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.
The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.
Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.
Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.
Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.
“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”
In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”
But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.
Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.
I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive:
Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.
Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.
China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administrator issued detailed guidelines that include, for example, a rule to limit use of smart devices to 40 minutes a day for children under the age of eight. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.
Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.
One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”
From around the web
Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)
“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)
Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)
Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)
The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug.
The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.
But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. They’re doing it by engineering yeasts and plants with bovine genes so they make the key proteins responsible for milk’s color, satisfying taste, and nutritional punch.
The proteins they’re copying are casein, a floppy polymer that’s the most abundant protein in milk and is what makes pizza cheese stretch, and whey, a nutritious combo of essential amino acids that’s often used in energy powders.
It’s part of a larger trend of replacing animals with ingredients grown in labs, steel vessels, or plant crops. Think of the Impossible burger, the veggie patty made mouthwatering with the addition of heme, a component of blood that’s produced in the roots of genetically modified soybeans.
One of the milk innovators is Remilk, an Israeli startup founded in 2019, which has engineered yeast so it will produce beta-lactoglobulin (the main component of whey). Company cofounder Ori Cohavi says a single biotech factory of bubbling yeast vats feeding on sugar could in theory “replace 50,000 to 100,000 cows.”
Remilk has been making trial batches and is testing ways to formulate the protein with plant oils and sugar to make spreadable cheese, ice cream, and milk drinks. So yes, we’re talking “processed” food—one partner is a local Coca-Cola bottler, and advising the company are former executives of Nestlé, Danone, and PepsiCo.
But regular milk isn’t exactly so natural either. At milking time, animals stand inside elaborate robots, and it looks for all the world as if they’re being abducted by aliens. “The notion of a cow standing in some nice green scenery is very far from how we get our milk,” says Cohavi. And there are environmental effects: cattle burp methane, a potent greenhouse gas, and a lactating cow needs to drink around 40 gallons of water a day.
“There are hundreds of millions of dairy cows on the planet producing greenhouse waste, using a lot of water and land,” says Cohavi. “It can’t be the best way to produce food.”
For biotech ventures trying to displace milk, the big challenge will be keeping their own costs of production low enough to compete with cows. Dairies get government protections and subsidies, and they don’t only make milk. Dairy cows are eventually turned into gelatin, McDonald’s burgers, and the leather seats of your Range Rover. Not much goes to waste.
At Alpine Bio, a biotech company in San Francisco (also known as Nobell Foods), researchers have engineered soybeans to produce casein. While not yet cleared for sale, the beans are already being grown on USDA-sanctioned test plots in the Midwest, says Alpine’s CEO, Magi Richani.
Richani chose soybeans because they’re already a major commodity and the cheapest source of protein around. “We are working with farmers who are already growing soybeans for animal feed,” she says. “And we are saying, ‘Hey, you can grow this to feed humans.’ If you want to compete with a commodity system, you have to have a commodity crop.”
Alpine intends to crush the beans, extract the protein, and—much like Remilk—sell the ingredient to larger food companies.
Everyone agrees that cow’s milk will be difficult to displace. It holds a special place in the human psyche, and we owe civilization itself, in part, to domesticated animals. In fact, they’ve left their mark in our genes, with many of us carrying DNA mutations that make cow’s milk easier to digest.
But that’s why it might be time for the next technological step, says Richani. “We raise 60 billion animals for food every year, and that is insane. We took it too far, and we need options,” she says. “We need options that are better for the environment, that overcome the use of antibiotics, and that overcome the disease risk.”
It’s not clear yet whether the bird flu outbreak on dairy farms is a big danger to humans. But making milk without cows would definitely cut the risk that an animal virus will cause a new pandemic. As Richani says: “Soybeans don’t transmit diseases to humans.”
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
Hungry for more from the frontiers of fromage? In the Build issue of our print magazine, Andrew Rosenblum tasted a yummy brie made only from plants. Harder to swallow was the claim by developer Climax Foods that its cheese was designed using artificial intelligence.
The idea of using yeast to create food ingredients, chemicals, and even fuel via fermentation is one of the dreams of synthetic biology. But it’s not easy. In 2021, we raised questions about high-flying startup Ginkgo Bioworks. This week its stock hit an all-time low of $0.49 per share as the company struggles to make … well, anything.
This spring, I traveled to Florida to watch attempts to create life in a totally new way: using a synthetic embryo made in a lab. The action involved cattle at the animal science department of the University of Florida, Gainesville.
From around the web
How many human bird flu cases are there? No one knows, because there’s barely any testing. Scientists warn we’re flying blind as US dairy farms struggle with an outbreak. (NBC)
Moderna, one of the companies behind the covid-19 shots, is seeing early success with a cancer vaccine. It uses the same basic technology: gene messages packed into nanoparticles. (Nature)
It’s the covid-19 theory that won’t go away. This week the New York Times published an op-ed arguing that the virus was the result of a lab accident. We previously profiled the author, Alina Chan, who is a scientist with the Broad Institute. (New York Times)
Sales of potent weight loss drugs, like Ozempic, are booming. But it’s not just humans who are overweight. Now the pet care industry is dreaming of treating chubby cats and dogs, too. (Bloomberg)
On Tuesday, the FDA asked a panel of experts to weigh in on whether the evidence shows that MDMA, also known as ecstasy, is a safe and efficacious treatment for PTSD. The answer was a resounding no. Just two out of 11 panel members agreed that MDMA-assisted therapy is effective. And only one panel member thought the benefits of the therapy outweighed the risks.
The outcome came as a surprise to many, given that trial results have been positive. And it is also a blow for advocates who have been working to bring psychedelic therapy into mainstream medicine for more than two decades. This isn’t the final decision on MDMA. The FDA has until August 11 to make that ruling. But while the agency is under no obligation to follow the recommendations of its advisory committees, it rarely breaks with their decisions.
Today on The Checkup, let’s unpack the advisory committee’s vote and talk about what it means for the approval of other recreational drugs as therapies.
One of the main stumbling blocks for the committee was the design of the two efficacy studies that have been completed. Trial participants weren’t supposed to know whether they were in the treatment group, but the effects of MDMA make it pretty easy to tell whether you’ve been given a hefty dose, and most correctly guessed which group they had landed in.
In 2021, MIT Technology Review’s Charlotte Jee interviewed an MDMA trial participant named Nathan McGee. “Almost as soon as I said I didn’t think I’d taken it, it kicked in. I mean, I knew,” he told her. “I remember going to the bathroom and looking in the mirror, and seeing my pupils looking like saucers. I was like, ‘Wow, okay.’”
The Multidisciplinary Association for Psychedelic Studies, better known as MAPS, has been working with the FDA to develop MDMA as a treatment since 2001. When the organization met with the FDA in 2016 to hash out the details of its phase III trials (studies to test whether a treatment works), agency officials suggested that MAPS use an active compound for the control group to help mask whether participants had received the drug. But MAPS pushed back, and the trial forged ahead with a placebo.
No surprise, then, that about 90% of those assigned to the MDMA group and 75% of those assigned to the placebo group accurately identified which arm of the study they had landed in. And it wasn’t just participants. Therapists treating the participants also likely knew whether those under their supervision had been given the drug. It’s called “functional unblinding,” and the issue came up at the committee meeting again and again. Here’s why it’s a problem: If a participant strongly believes that MDMA will help their PTSD and they know they’ve received MDMA, this expectation bias could amplify the treatment effect. This is especially a problem when the outcome is based on subjective measures like how a person feels rather than, say, laboratory data.
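A toy simulation makes the worry concrete. In the sketch below, the drug has a genuine effect, but participants who believe they received it report extra improvement; with unblinding rates like those reported in the trial, the apparent effect comes out noticeably larger than the true one. All effect sizes here are invented for illustration.

```python
# Toy Monte Carlo of expectation bias under functional unblinding.
# Effect sizes and noise levels are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100                # participants per arm
true_effect = 5.0      # real symptom-score improvement from the drug
expectancy_boost = 4.0 # extra reported improvement from believing you got it
p_guess_drug, p_guess_placebo = 0.90, 0.75  # unblinding rates as in the trial

def arm(got_drug: bool, p_correct: float) -> np.ndarray:
    """Reported improvement = noise + real drug effect + expectancy
    effect for anyone who believes they received the drug."""
    guessed_correctly = rng.random(n) < p_correct
    scores = rng.normal(0, 10, n)
    if got_drug:
        scores += true_effect
        scores[guessed_correctly] += expectancy_boost   # know they got it
    else:
        scores[~guessed_correctly] += expectancy_boost  # wrongly think so
    return scores

diff = arm(True, p_guess_drug).mean() - arm(False, p_guess_placebo).mean()
print(f"true effect: {true_effect}, apparent effect: {diff:.1f}")
```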
Another sticking point was the therapy component of the treatment. Lykos Therapeutics (the for-profit spinoff of MAPS) asked the FDA to approve MDMA-assisted therapy: that’s MDMA administered in concert with psychotherapy. Therapists oversaw participants during the three MDMA sessions. But participants also received three therapy sessions before getting the drug, and three therapy sessions afterwards to help them process their experience.
Because the two treatments were administered together, there was no good way to tell how much of the effect was due to MDMA and how much was due to the therapy. What’s more, “the content or approach of these integrated sessions was not standardized in the treatment manuals and was mainly left up to the individual therapist,” said David Millis, a clinical reviewer for the FDA, at the committee meeting.
Several committee members also raised safety concerns. They worried that MDMA’s effects might make people more suggestible and vulnerable to abuse, and they brought up allegations of ethics violations outlined in a recent report from the Institute for Clinical and Economic Review.
Because of these issues and others, most committee members felt compelled to vote against MDMA-assisted therapy. “I felt that the large positive effect was denuded by the significant confounders,” said committee member Maryann Amirshahi, a professor of emergency medicine at Georgetown University School of Medicine, after the vote. “Although I do believe that there was a signal, it just needs to be better studied.”
Whether this decision will be a setback for the entire field remains to be seen. “To make it crystal clear: It isn’t MDMA itself that was rejected per se, but the specific, poor data set provided by Lykos Therapeutics; in my opinion, there is still a strong chance that MDMA, with a properly conducted clinical Phase 3 trial program that addresses those concerns of the FDA advisory committee, will get approved,” wrote Christian Angermayer, founder of Atai Life Sciences, a company that is also working to develop MDMA as a therapy.
If the FDA denies approval of MDMA therapy, Lykos or another company could conduct additional studies and reapply. Many of the committee members said they believed MDMA does hold promise, but that the studies conducted thus far were inadequate to demonstrate the drug’s safety and efficacy.
Psilocybin is likely to be the next psychedelic therapy considered by the FDA, and in some ways, it might have an easier path to approval. The idea behind MDMA is that it alleviates PTSD by helping facilitate psychotherapy. The therapy is a crucial component of the treatment, which is problematic because the FDA regulates drugs, not psychotherapy. With psilocybin, a therapist is present, but the drug appears to do the heavy lifting. “We are not offering therapy; we are offering psychological support that’s designed for the patient’s safety and well-being,” says Kabir Nath, CEO of Compass Pathways, the company working to bring psilocybin to market. “What we actually find during a six- to eight-hour session is most of it is silent. There’s actually no interaction.”
That could make the approval process more straightforward. “The difficult thing … is that we don’t regulate psychotherapy, and also we don’t really have any say in the design or the implementation of the particular therapy that is going to be used,” said Tiffany Farchione, director of the FDA’s division of psychiatry, at the committee meeting. “This is something unprecedented, so we certainly want to get as many opinions and as much input as we can.”
Sally Adee dives deep into the messy history of electric medicine and what the future might hold for research into electric therapies. “Instead of focusing only on the nervous system—the highway that carries electrical messages between the brain and the body—a growing number of researchers are finding clever ways to electrically manipulate cells elsewhere in the body, such as skin and kidney cells, more directly than ever before,” she writes.
MDMA does seem to have helped some individuals. Nathan McGee, who took the drug as part of a clinical trial, told Charlotte Jee that he “understands what joy is now.”
Researchers are working to design virtual-reality programs that recreate the trippy experience of taking psychedelics. Hana Kiros has the story.
– Finland is poised to become the first country to start administering bird flu vaccine—albeit to a very limited subset of people, including poultry and mink farmers, vets, and scientists who study the virus (Stat)
– What are the most pressing questions about bird flu? They revolve around what’s happening in cows, what’s happening in farm workers, and what’s happening to the virus. (Stat)
– A man in Mexico has died of H5N2, a strain of bird flu that has never before been reported in humans. (CNN)
Biodegradable, squishy sensors injected into the brain hold promise for detecting changes following a head injury or cancer treatment. (Nature)
A synthetic version of a hallucinogenic toad toxin could be a promising treatment for mental-health disorders. (Undark)
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
MDMA, sometimes called Molly or ecstasy, has been banned in the United States for more than three decades. Now this potent mind-altering drug is poised to become a badly needed therapy for PTSD.
On June 4, the Food and Drug Administration’s advisory committee will meet to discuss the risks and benefits of MDMA therapy. If the committee votes in favor of the drug, it could be approved to treat PTSD this summer. The approval would represent a momentous achievement for proponents of mind-altering drugs, who have been working toward this goal for decades. And it could help pave the way for FDA approval of other illicit drugs like psilocybin. But the details surrounding how these compounds will make the transition from illicit substances to legitimate therapies are still foggy.
Here’s what to know ahead of the upcoming hearing.
What’s the argument for legitimizing MDMA?
Studies suggest the compound can help treat mental-health disorders like PTSD and depression. Lykos, the company that has been developing MDMA as a therapy, looked at efficacy in two clinical trials that included about 200 people with PTSD. Researchers randomly assigned participants to receive psychotherapy with or without MDMA. The group that received MDMA-assisted therapy had a greater reduction in PTSD symptoms. They were also more likely to respond to treatment, to meet the criteria for PTSD remission, and to lose their diagnosis of PTSD.
But some experts question the validity of the results. With substances like MDMA, study participants almost always know whether they’ve received the drug or a placebo. That can skew the results, especially when the participants and therapists strongly believe a drug is going to help. The Institute for Clinical and Economic Review (ICER), a nonprofit research organization that evaluates the clinical and economic value of drugs, recently rated the evidence for MDMA-assisted therapy as “insufficient.”
In briefing documents published ahead of the June 4 meeting, FDA officials write that the question of approving MDMA “presents a number of complex review issues.”
The ICER report also referenced allegations of misconduct and ethical violations. Lykos (formerly the Multidisciplinary Association for Psychedelic Studies Public Benefit Corporation) acknowledges that ethical violations occurred in one particularly high-profile case. But in a rebuttal to the ICER report, more than 70 researchers involved in the trials wrote that “a number of assertions in the ICER report represent hearsay, and should be weighted accordingly.” Lykos did not respond to an interview request.
At the meeting on the 4th, the FDA has asked experts to discuss whether Lykos has demonstrated that MDMA is effective, whether the drug’s effect lasts, and what role psychotherapy plays. The committee will also discuss safety, including the drug’s potential for abuse and the risk posed by the impairment MDMA causes.
What’s stopping people from using this therapy?
MDMA is illegal. In 1985, the Drug Enforcement Administration grew concerned about growing street use of the drug and added it to its list of Schedule 1 substances—those with a high abuse potential and no accepted medical use.
MDMA boosts the brain’s production of feel-good neurotransmitters, causing a burst of euphoria and good will toward others. But the drug can also cause high blood pressure, memory problems, anxiety, irritability, and confusion. And repeated use can cause lasting changes in the brain.
If the FDA approves MDMA therapy, when will people be able to access it?
That has yet to be determined. It could take months for the DEA to reclassify the drug. After that, it’s up to individual states.
Lykos applied for approval of MDMA-assisted therapy, not just the compound itself. In the clinical trials, MDMA administration happened in the presence of licensed therapists, who then helped patients process their emotions during therapy sessions that lasted for hours.
But regulating therapy isn’t part of the FDA’s purview. The FDA approves drugs; it doesn’t oversee how they’re administered. “The agency has been clear with us,” says Kabir Nath, CEO of Compass Pathways, the company working to bring psilocybin to market. “They don’t want to regulate psychotherapy, because they see that as the practice of medicine, and that’s not their job.”
However, for drugs that carry a risk of serious side effects, the FDA can add a risk evaluation and mitigation strategy to its approval. For MDMA that might include mandating that the health-care professionals who administer the medication have certain certifications or specialized training, or requiring that the drug be dispensed only in licensed facilities.
For example, Spravato, a nasal spray approved in 2019 for depression that works much like ketamine, is available only at a limited number of health-care facilities and must be taken under the observation of a health-care provider. Having safeguards in place for MDMA makes sense, at least at the outset, says Matt Lamkin, an associate professor at the University of Tulsa College of Law who has been following the field closely: “Given the history, I think it would only take a couple of high-profile bad incidents to potentially set things back.”
What mind-altering drug is next in line for FDA approval?
Psilocybin, a.k.a. the active ingredient in magic mushrooms. This summer Compass Pathways will release the first results from one of its phase 3 trials of psilocybin to treat depression. Results from the other trial will come in the middle of 2025, which—if all goes well—puts the company on track to file for approval in the fall or winter of next year. With the FDA review and the DEA rescheduling, “it’s still kind of two to three years out,” Nath says.
Some states are moving ahead without formal approval. Oregon voters made psilocybin legal in 2020, and the drug is now accessible there at about 20 licensed centers for supervised use. “It’s an adult use program that has a therapeutic element,” says Ismail Ali, director of policy and advocacy at the Multidisciplinary Association for Psychedelic Studies (MAPS).
Colorado voted to legalize psilocybin and some other plant-based psychedelics in 2022, and the state is now working to develop a framework to guide the licensing of facilitators to administer these drugs for therapeutic purposes. More states could follow.
So would FDA approval of these compounds open the door to legal recreational use of psychedelics?
Maybe. The DEA can still prosecute physicians if they’re prescribing drugs outside of their medically accepted uses. But Lamkin does see the lines between recreational use and medical use getting blurry. “What we’re seeing is that the therapeutic uses have recreational side effects and the recreation has therapeutic side effects,” he says. “I’m interested to see how long they can keep the genie in the bottle.”
What’s the status of MDMA therapies elsewhere in the world?
Last summer, Australia became the first country to approve MDMA and psilocybin as medicines to treat psychiatric disorders, but the therapies are not yet widely available. The first clinic opened just a few months ago. The US is poised to become the second country if the FDA greenlights Lykos’s application. Health Canada told the CBC it is watching the FDA’s review of MDMA “with interest.” Europe is lagging a bit behind, but there are some signs of movement. In April, the European Medicines Agency convened a workshop to bring together a variety of stakeholders to discuss a regulatory framework for psychedelics.
In the early 2010s, electricity seemed poised for a hostile takeover of your doctor’s office. Research into how the nervous system controls the immune response was gaining traction. And that had opened the door to the possibility of hacking into the body’s circuitry and thereby controlling a host of chronic diseases, including rheumatoid arthritis, asthma, and diabetes, as if the immune system were as reprogrammable as a computer.
To do that you’d need a new class of implant: an “electroceutical,” formally introduced in an article in Nature in 2013. “What we are doing is developing devices to replace drugs,” coauthor and neurosurgeon Kevin Tracey told Wired UK. These would become a “mainstay of medical treatment.” No more messy side effects. And no more guessing whether a drug would work differently for you and someone else.
There was money behind this vision: the British pharmaceutical giant GlaxoSmithKline announced a $1 million research prize, a $50 million venture fund, and an ambitious program to fund 40 researchers who would identify neural pathways that could control specific diseases. And the company had an aggressive timeline in mind. As one GlaxoSmithKline executive put it, the goal was to have “the first medicine that speaks the electrical language of our body ready for approval by the end of this decade.”
In the 10 years or so since, around a billion dollars has accreted around the effort by way of direct and indirect funding. Some implants developed in that electroceutical push have trickled into clinical trials, and two companies affiliated with GlaxoSmithKline and Tracey are ramping up for splashy announcements later this year. We don’t know much yet about how successful the trials now underway have been. But widespread regulatory approval of the sorts of devices envisioned in 2013—devices that could be applied to a broad range of chronic diseases—is not imminent. Electroceuticals are a long way from fomenting a revolution in medical care.
At the same time, a new area of science has begun to cohere around another way of using electricity to intervene in the body. Instead of focusing only on the nervous system—the highway that carries electrical messages between the brain and the body—a growing number of researchers are finding clever ways to electrically manipulate cells elsewhere in the body, such as skin and kidney cells, more directly than ever before. Their work suggests that this approach could match the early promise of electroceuticals, yielding fast-healing bioelectric bandages, novel approaches to treating autoimmune disorders, new ways of repairing nerve damage, and even better treatments for cancer. However, such ventures have not benefited from investment largesse. Investors tend to understand the relationship between biology and electricity only in the context of the nervous system. “These assumptions come from biases and blind spots that were baked in during 100 years of neuroscience,” says Michael Levin, a bioelectricity researcher at Tufts University.
Electrical implants have already had success in targeting specific problems like epilepsy, sleep apnea, and catastrophic bowel dysfunction. But the broader vision of replacing drugs with nerve-zapping devices, especially ones that alter the immune system, has been slower to materialize. In some cases, perhaps the nervous system is not the best way in. Looking beyond this singular locus of control might open the way for a wider suite of electromedical interventions—especially if the nervous system proves less amenable to hacking than originally advertised.
How it started
GSK’s ambitious electroceutical venture was a response to an increasingly onerous problem: 90% of drugs fall down during the obstacle race through clinical trials. A new drug that does squeak by can cost $2 billion or $3 billion and take 10 to 15 years to bring to market, a galling return on investment. The flaw is in the delivery system. The way we administer healing chemicals hasn’t had much of a conceptual overhaul since the Renaissance physician Paracelsus: ingest or inject. Both approaches have built-in inefficiencies: it takes a long time for the drugs to build up in your system, and they can disperse widely before arriving in diluted form at their target, which may make them useless where they are needed and toxic elsewhere. Tracey and Kristoffer Famm, a coauthor on the Nature article who was then a VP at GlaxoSmithKline, explained on the publicity circuit that electroceuticals would solve these problems—acting more quickly and working only in the precise spot where the intervention was needed. After 500 years, finally, here was a new idea.
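The compounding behind that “galling return” is easy to sketch. In this back-of-envelope calculation—our arithmetic, where the 90% failure rate comes from above but the per-program cost split is an invented assumption—each approved drug has to pay for the programs that died along the way:

```python
# Back-of-envelope: what clinical attrition does to the cost per approved drug.
# The failure rate is from the text; per-program costs are invented assumptions.
FAILURE_RATE = 0.90
programs_per_approval = 1 / (1 - FAILURE_RATE)      # ~10 programs per approval

COST_PER_FAILED_PROGRAM = 200e6   # assume each dead end still burns $200M in trials
COST_OF_WINNING_PROGRAM = 1e9     # assume $1B to carry the winner to market

total_cost = ((programs_per_approval - 1) * COST_PER_FAILED_PROGRAM
              + COST_OF_WINNING_PROGRAM)
print(f"{programs_per_approval:.0f} programs per approval, "
      f"~${total_cost / 1e9:.1f}B all-in per approved drug")
# ~$2.8B -- squarely in the $2-3B range cited above.
```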
Well … new-ish. Electrically stimulating the nervous system had racked up promising successes since the mid-20th century. For example, the symptoms of Parkinson’s disease had been treated via deep brain stimulation, and intractable pain via spinal stimulation. However, these interventions could not be undertaken lightly: the implants had to be placed in the spine or the brain, surgery daunting enough that it was reserved for the most severe cases. A market that small would never make the idea a money spinner.
The vagus nerve runs from the brain through the body
WELLCOME COLLECTION
What got GSK excited was recent evidence that health could be more broadly controlled, and by nerves that were easier to access. By the dawn of the 21st century it had become clear you could tap the nervous system in a way that carried fewer risks and more rewards. That was because of findings suggesting that the peripheral nervous system—essentially, everything but the brain and spine—had much wider influence than previously believed.
The prevailing wisdom had long been that the peripheral nervous system had only one job: sensory awareness of the outside world. This information is ferried to the brain along many little neural tributaries that emerge from the extremities and organs, most of which converge into a single main avenue at the torso: the vagus nerve.
Starting in the 1990s, research by Linda Watkins, a neuroscientist leading a team at the University of Colorado, Boulder, suggested that this main superhighway of the peripheral nervous system was not a one-way street after all. Instead it seemed to carry message traffic in both directions, not just into the brain but from the brain back into all those organs. Furthermore, it appeared that this comms link allows the brain to exert some control over the immune system—for example, stoking a fever in response to an infection.
And unlike the brain or spinal cord, the vagus nerve is comparatively easy to access: its path to and from the brain stem runs close to the surface of the neck, along a big cable on either side. You could just pop an electrode on it—typically on the left branch—and get zapping.
Meddling with the flow of traffic up the vagus nerve in this way had successfully treated issues in the brain, specifically epilepsy and treatment-resistant depression (and electrical implants for those applications were approved by the FDA around the turn of the millennium). But the insights from Watkins’s team put the down direction in play.
It was Kevin Tracey who joined all these dots, after which it did not take long for him to become the public face of research on vagus nerve stimulation. During the 2000s, he showed that electrically stimulating the nerve calmed inflammation in animals. This “inflammatory reflex,” as he came to call it, implied that the vagus nerve could act as a switch capable of turning off a wide range of diseases, essentially hacking the immune system. In 2007, while based at what is now called the Feinstein Institutes for Medical Research, in New York, he spun his insights off into a Boston startup called SetPoint Medical. Its aim was to develop devices to flip this switch and bring relief, starting with inflammatory bowel disease and rheumatoid arthritis.
By 2012, a coordinated relationship had developed between GSK, Tracey, and US government agencies. Tracey says that Famm and others contacted him “to help them on that Nature article.” A year later the electroceuticals road map was ready to be presented to the public.
The story the researchers told about the future was elegant and simple. It was illustrated by a tale Tracey recounted frequently on the publicity circuit, of a first-in-human case study SetPoint had coordinated at the University of Amsterdam’s Academic Medical Center. That team had implanted a vagus nerve stimulator in a man suffering from rheumatoid arthritis. The stimulation triggered his spleen to release a chemical called acetylcholine. This in turn told the cells in the spleen to switch off production of inflammatory molecules called cytokines. For this man, the approach worked well enough to let him resume his job, play with his kids, and even take up his old hobbies. In fact, his overenthusiastic resumption of his former activities resulted in a sports injury, as Tracey delighted in recounting for reporters and conferences.
Such case studies opened the money spigot. The combination of a wider range of disease targets and less risky surgical targets was an investor’s love language. Where deep brain stimulation and other invasive implants had been limited to rare, obscure, and catastrophic problems, this new interface with the body promised many more customers: the chronic diseases now on the table are much more prevalent, including not only rheumatoid arthritis but diabetes, asthma, irritable bowel syndrome, lupus, and many other autoimmune disorders. GSK launched an investment arm it dubbed Action Potential Venture Capital Limited, with $50 million in the coffers to invest in the technologies and companies that would turn the futuristic vision of electroceuticals into reality. Its inaugural investment was a $5 million stake in SetPoint.
If you were superstitious, what happened next might have looked like an omen. The word “electroceutical” already belonged to someone else—a company called Ivivi Technologies had trademarked it in 2008. “I am fairly certain we sent them a letter soon after they started that campaign, to alert them of our trademark,” says Sean Hagberg, a cofounder and then chief science officer at the company. Today neither GSK nor SetPoint can officially call its tech “electroceuticals,” and both refer to the implants they are developing as “bioelectronic medicine.” However, this umbrella term encompasses a wide range of other interventions, some quite well established, including brain implants, spine implants, hypoglossal nerve stimulation for sleep apnea (which targets the motor nerve that controls the tongue), and other peripheral-nervous-system implants, including those for people with severe gastric disorders.
Kevin Tracey has been one of the leading proponents of using electrical stimulation to target inflammation in the body.
MIKE DENORA VIA WIKIPEDIA
The next problem appeared in short order: how to target the correct nerve. The vagus nerve has roughly 100,000 fibers packed tightly within it, says Kip Ludwig, who was then with the US National Institutes of Health and now co-directs the Wisconsin Institute for Translational Neuroengineering at the University of Wisconsin, Madison. These myriad fibers connect to many different organs, including the larynx and lower airways, and electrical fields are not precise enough to hit a single one without hitting many of its neighbors (as Ludwig puts it, “electric fields [are] really promiscuous”).
This explains why a wholesale zap of the entire bundle had long been associated with unpredictable “on-target effects” and unpleasant “off-target effects,” which is another way of saying it didn’t always work and could carry side effects that ranged from the irritating, like a chronic cough, to the life-altering, including headaches and a shortness of breath that is better described as air hunger. Singling out the fibers that led to the particular organ you were after was hard for another reason, too: the existing maps of the human peripheral nervous system were old and quite limited. Such a low-resolution road map wouldn’t be sufficient to get a signal from the highway all the way to a destination.
In 2014, to remedy this and generally advance the field of peripheral nerve stimulation, the NIH announced a research initiative known as SPARC—Stimulating Peripheral Activity to Relieve Conditions—with the aim of pouring $248 million into research on new ways to exploit the nervous system’s electrical pathways for medicine. “My job,” says Gene Civillico, who managed the program until 2021, “was to do a program related to electroceuticals that used the NIH policy options that were available to us to try to make something catalytic happen.” The idea was to make neural anatomical maps and sort out the consequences of following various paths. After the organs were mapped, Civillico says, the next step was to figure out which nerve circuit would stimulate them, and settle on an access point—“And the access point should be the vagus nerve, because that’s where the most interest is.”
Two years later, as SPARC began to distribute its funds, companies moved forward with plans for the first generation of implants. GSK teamed up with Verily (formerly Google Life Sciences) on a $715 million research initiative they called Galvani Bioelectronics, with Famm at its helm as president. SetPoint, which had relocated to Valencia, California, moved to an expanded location, a campus that had once housed a secret Lockheed R&D facility.
How it’s going
Ten years after electroceuticals entered (and then quickly departed) the lexicon, the SPARC program has yielded important information about the electrical particulars of the peripheral nervous system. Its maps have illuminated nodes that are both surgically attractive and medically relevant. It has funded a global constellation of academic researchers. But its insights will be useful for the next generation of implants, not those in trials today.
Today’s implants, from SetPoint and Galvani, will be in the news later this year. Though SetPoint estimates that an extended study of its phase III clinical trial will conclude in 2027, the primary outcomes will be released this summer, says Ankit Shah, a marketing VP at SetPoint. And while Galvani’s trial will conclude in 2029, Famm says, the company is “coming to an exciting point” and will publish patient data later in 2024.
The results could be interpreted as a referendum on the two companies’ different approaches. Both devices treat rheumatoid arthritis, and both target the immune system via the peripheral nervous system, but that’s where the similarities end. SetPoint’s device uses a clamshell design that cuffs around the vagus nerve at the neck. It stimulates for just one minute, once per day. SetPoint representatives say they have never seen the sorts of side effects that have resulted from using such stimulators to treat epilepsy. But if anyone did experience those described by other researchers—even vomiting and headaches—they might be tolerable if they only lasted a minute.
But why not avoid the vagus nerve entirely? Galvani is using a more precise implant that targets the “end organ” of the spleen. If the vagus nerve can be considered the main highway of the peripheral nervous system, an end organ is essentially a particular organ’s “driveway.” Galvani’s target is the point where the splenic nerve (having split off from a system connected to the vagus highway) meets the spleen.
To zero in on such a specific target, the company has sacrificed ease of access. Its implant, which is about the size of a house key, is laparoscopically injected into the body through the belly button. Famm says if this approach works for rheumatoid arthritis, then it will likely translate for all autoimmune disorders. Highlighting this clinical trial in 2022, he told Nature Reviews: “This is what makes the next 10 years exciting.”
The Galvani device and system targets the splenic nerve.
GALVANI VIA BUSINESSWIRE
Perhaps more so for researchers than for patients, however. Even as Galvani and SetPoint prepare talking points, other SPARC-funded groups are still working through research questions that suggest the best technological interface with the immune system remains up for debate. At the moment, electroceuticals are in the spotlight, but they have a long way to go, says Vaughan Macefield, a neurophysiologist at Monash University in Australia, whose work is funded by a more recent $21 million SPARC grant: “It’s an elegant idea, [but] there are conflicting views.”
Macefield doesn’t think zapping the entire bundle is a good idea. Many researchers are working on ways to get more selective about which particular fibers of the vagus nerve they stimulate. Some are designing novel electrodes that will penetrate specific fibers rather than clamping around all of them. Others are trying to hit the vagus at deeper points in the abdomen. Indeed, some aren’t sure either electricity or an implant is a necessary ingredient of the “electroceutical.” Instead, they are pivoting from electrical stimulation to ultrasound.
The sheer range of these approaches makes it pretty clear that the electroceutical’s final form is still an open research question. Macefield says we still don’t know the nitty-gritty of how vagus nerve stimulation works.
However, Tracey thinks the variety of approaches being developed doesn’t undermine the merits of the basic idea. How tech companies will make this work in the clinic, he says, is a separate business and IP question: “Can you do it with focused ultrasound? Can you do it with a device implanted with abdominal surgery? Can you do it with a device implanted in the neck? Can you do it with a device implanted in the brain, even? All of these strategies are enabled by the idea of the inflammatory reflex.” Until clinical trial data is in, he says, there’s no point arguing about the best way to manipulate the mechanism—and if one approach fails to work, that is not a referendum on the validity of the inflammatory reflex.
After stepping down from SetPoint’s board to resume a purely consulting role in 2011, Tracey focused on his lab work at the Feinstein Institutes, which he directs, to deepen understanding of this pathway. The research there is wide-ranging. Several researchers under his remit are exploring a type of noninvasive, indirect manipulation called transcutaneous auricular vagus nerve stimulation, which stimulates the skin of the ear with a wearable device. Tracey says it’s a “malapropism” to call this approach vagus nerve stimulation. “It’s just an ear buzzer,” he says. It may stimulate a sensory branch of the vagus nerve, which may engage the inflammatory reflex. “But nobody knows,” he says. Nonetheless, several clinical trials are underway.
SetPoint’s device is cuffed around the vagus nerve within the neck of a patient.
SETPOINT MEDICAL
“These things take time,” Tracey says. “It is extremely difficult to invent and develop a completely revolutionary new thing in medicine. In the history of medicine, anything that was truly new and revolutionary takes between 20 and 40 years from the time it’s invented to the time it’s widely adopted.”
“As the discoverer of this pathway,” he says, “what I want to see is multiple therapies, helping millions of people.” This vision will hinge on bigger trials conducted over many more years, and those are about as hard for devices as they are for drugs: results that look compelling in early rounds often disappoint in later ones. It will be possible, says Ludwig, “for them to pass a short-duration FDA trial yet still really not be a major improvement over the drug solutions.” Even after FDA approval, should it come, yet more studies will be needed to determine whether the implants are subject to the same issues that plague drugs, including habituation.
This vision of electroceuticals seems to have placed about a billion eggs into the single basket of the peripheral nervous system. In some ways, this makes sense. After all, the received wisdom has it that these nervous signals are the only way to exert electrical control of the other cells in the body. Those other trillions—the skin cells, the immune cells, the stem cells—are beyond the reach of direct electrical intervention.
Except in the past 20 years it’s become abundantly clear that they are not.
Other cells speak electricity
At the end of the 19th century, the German physiologist Max Verworn watched as a single-celled marine creature was drawn across the surface of his slide as if captured by a tractor beam. It had been, in a way: under the influence of an electric field, it squidged over to the cathode (the pole that attracts positive charge). Many other types of cells could be coaxed to obey the directional wiles of an electric field, a phenomenon known as galvanotaxis.
But this was too weird for biology, and charlatans already occupied too much of the space in the Venn diagram where electricity met medicine. (The association was formalized in 1910 in the Flexner Report, commissioned to improve the dismal state of American medical schools, which sent electrical medicine into exile along with the likes of homeopathy.) Everyone politely forgot about galvanotaxis until the 1970s and ’80s, when the peculiar behavior resurfaced. Yeast, fungi, bacteria, you name it—they all liked a cathode. “We were pulling every kind of cell along on petri dishes with an electric field,” says Ann Rajnicek of the University of Aberdeen in Scotland, who was among the first group of researchers who tried to discover the mechanism when scientific interest reawakened.
Galvanotaxis would have raised few eyebrows if the behavior had been confined to neurons. Those cells have evolved receptors that sense electric fields; they are a fundamental aspect of the mechanism the nervous system uses to send its information. Indeed, the reason neurons are so amenable to electrical manipulation in the first place is that electric implants hijack a relatively predictable mechanism. Zap a nerve or a muscle and you are forcing it to “speak” a language in which it is already fluent.
Non-excitable cells such as those found in skin and bone don’t share these receptors, but evidence keeps mounting that they somehow sense and respond to electric fields anyway.
Why? We keep finding more reasons. Galvanotaxis, for example, is increasingly understood to play a crucial role in wound healing. In every species studied, injury to the skin produces an instant, internally generated electric field, and there’s overwhelming evidence that it guides patch-up cells to the center of the wound to start the rebuilding process. But galvanotaxis is not the only way these cells are led by electricity. During development, immature cells seem to sense the electric properties of their neighbors, which plays a role in their future identity—whether they become neurons, skin cells, fat cells, or bone cells.
Early experiments showed that paramecia on a wet plate will orient themselves in the direction of a cathode.
PUBLIC DOMAIN
Intriguing as this all was, no one had much luck turning such insights into medicine. Even attempts to go after the lowest-hanging fruit—by exploiting galvanotaxis for novel bandages—were for many years at best hit or miss. “When we’ve come upon wounds that are intractable, resistant, and will not heal, and we apply an electric field, only 50% or so of the cases actually show any effect,” says Anthony Guiseppi-Elie, a senior fellow with the American International Institute for Medical Sciences, Engineering, and Innovation.
However, in the past few years, researchers have found ways to make electrical stimulation outside the nervous system less of a coin toss.
That’s down to steady progress in our understanding of how exactly non-neural cells pick up on electric fields, which has helped calm anxieties around the mysticism and the Frankenstein associations that have attended biological responses to electricity.
The first big win came in 2006, with the identification of specific genes in skin cells that get turned on and off by electric fields. When skin is injured, the body’s native electric field orients cells toward the center of the wound, and the physiologist Min Zhao and his colleagues found important signaling pathways that are turned on by this field and mobilized to move cells toward this natural cathode. He also found associated receptors, and other scientists added to the catalogue of changes to genes and gene regulatory networks that get switched on and off under an electric field.
What has become clear since then is that there is no simple mechanism waiting at the end of the rainbow. “There isn’t one single master protein, as far as anybody knows, that regulates responses [to an electric field],” says Daniel Cohen, a bioengineer at Princeton University. “Every cell type has a different cocktail of stuff sticking out of it.”
But recent years have brought good news, in both experimental and applied science. First, the experimental platforms to investigate gene expression are in the middle of a transformation. One advance was unveiled last year by Sara Abasi, Guiseppi-Elie, and their colleagues at Texas A&M and the Houston Methodist Research Institute: their carefully designed research platform kept track of pertinent cellular gene expression profiles and how they change under electric fields—specifically, ones tuned to closely mimic what you find in biology. They found evidence for the activation of two proteins involved in tissue growth along with increased expression of a protein called CD-144, a specific version of what’s known as a cadherin. Cadherins are important physical structures that enable cells to stick to each other, acting like little handshakes between cells. They are crucial to the cells’ ability to act en masse instead of individually.
The other big improvement is in tools that can reveal just how cells work together in the presence of electric fields.
A different kind of electroceutical
A major limit on past experiments was that they tended to test the effects of electrical fields either on single cells or on whole animals. Neither is quite the right scale to offer useful insights, explains Cohen: measuring these dynamics in animals is too “messy,” but in single cells, the dynamics are too artificial to tell you much about how cells behave collectively as they heal a wound. That behavior emerges only at relevant scales, like bird flocks, schools of fish, or road traffic. “The math is identical to describe these types of collective dynamics,” he says.
In 2020, Cohen and his team came up with a solution: an experimental setup that strikes the balance between single cell (tells you next to nothing) and animal (tells you too many things at once). The device, called SCHEEPDOG, can reveal what is going on at the tissue level, which is the relevant scale for investigating wound healing.
It uses two sets of electrodes—a bit the way you might twiddle the dials on an Etch A Sketch—placed in a closed bioreactor, which better approximates how electric fields operate in biology. With this setup, Cohen and his colleagues can precisely tune the electrical environment of tens of thousands of cells at a time to influence their behavior.
In this time-lapse, SCHEEPDOG maneuvers epithelial cells with electric fields.
COHEN ET AL
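The steering logic is simple enough to caricature in a few lines. The sketch below is our own illustration, not the lab's code, and every parameter is invented: two orthogonal electrode pairs set a field vector, and each simulated cell drifts toward the cathode on top of its random motility.

```python
# Minimal caricature of two-axis "Etch A Sketch" field steering.
# Invented parameters; not the actual SCHEEPDOG implementation.
import numpy as np

rng = np.random.default_rng(42)
cells = rng.uniform(-1.0, 1.0, size=(10_000, 2))   # positions of ~10k cells

def step(cells, field, gain=0.01, noise=0.005):
    """One time step: galvanotactic drift along the field plus random motion."""
    return cells + gain * field + noise * rng.standard_normal(cells.shape)

field_x = np.array([1.0, 0.0])   # horizontal electrode pair energized
field_y = np.array([0.0, 1.0])   # vertical electrode pair energized

for _ in range(100):             # steer the sheet to the right...
    cells = step(cells, field_x)
for _ in range(100):             # ...then upward, like twiddling the two dials
    cells = step(cells, field_y)

print("mean displacement:", cells.mean(axis=0))    # net drift right, then up
```

Even this cartoon captures the key design point: control is exerted on the population's average motion, not on any individual cell.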
Their subsequent “healing-on-a-chip” platform yielded an interesting discovery: skin cells’ response to an electric field depends on their maturity. The less mature, the easier they were to control.
The culprit? Those cadherins that Abasi and Guiseppi-Elie had also observed changing under electric fields. In mature cells, these little handshakes had become so strong that a competing electric field, instead of gently guiding the cells, caused them to rip apart. The immature skin cells followed the electric field’s directions without complaint.
After they found a way to dial down the cadherins with an antibody drug, all the cells synchronized. For Cohen, the lesson was that it’s more important to look at the system, and the collective dynamics that govern a behavior like wound healing, than at what is happening in any single cell. “This is really important because many clinical attempts at using electrical stimulation to accelerate wound healing have failed,” says Guiseppi-Elie, and it had never become clear why some worked and others didn’t.
Cohen’s team is now working to translate these findings into next-generation bioelectric plasters. They are far from alone, and the payoff is more than skin deep. A lot of work is going on, some of it open and some behind closed doors with patents being closely guarded, says Cohen.
At Stanford, the University of Arizona, and Northwestern, researchers are creating smart electric bandages that can be implanted under the skin. They can also monitor the state of the wound in real time, increasing the stimulation if healing is too slow. More challenging, says Rajnicek, are ways to interface with less accessible areas of the body. However, here too new tools are revealing intriguing creative solutions.
Electric fields don’t have to directly change cells’ gene expression to be useful. There is another way their application can be turned to medical benefit. Electric fields evoke reactive oxygen species (ROS) in biological cells. Normally, these charged molecules are a by-product of a cell’s everyday metabolic activities. If you induce them purposefully using an external DC current, however, they can be hijacked to do your bidding.
Starting in 2020, the Swiss bioengineer Martin Fussenegger and an international team of collaborators began to publish investigations into this mechanism as a way to power gene expression. He and his team engineered human kidney cells to be hypersensitive to ROS, responding to quantities too minute for normal cells to register. When those trace amounts were generated by DC electrodes, the engineered kidney cells sensed them just fine.
Using this instrument, in 2023 they were able to create a tiny, wearable insulin factory. The designer kidney cells were created with a synthetic promoter—an engineered sequence of DNA that can drive expression of a target gene—that reacted to those faint induced ROSs by activating a cascade of genetic changes that opened a tap for insulin production on demand.
Then they packaged this electrogenetic contraption into a wearable device that worked for a month in a living mouse, which had been engineered to be diabetic (Fussenegger says that “others have shown that implanted designer cells can generally be active for over a year”). The designer cells in the wearable are kept alive by an algae-based gel but are fed by the mouse’s own vascular system, permitting the exchange of nutrients and protein. The cells can’t get out, but the insulin they secrete can, seeping straight into the mouse’s bloodstream. Ten seconds a day of electrical stimulation delivered via needles connected to three AAA batteries was enough to make the implant perform like a pancreas, returning the mouse’s blood sugar to nondiabetic levels. Given how easy it would be to generalize the mechanism, Fussenegger says, there’s no reason insulin should be the only drug such a device can generate. He is quick to stress that this wearable device is very much in the proof-of-concept stage, but others outside the team are excited about its potential. It could provide a more direct electrical alternative to the solution electroceuticals promised for diabetes.
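As a rough way to picture the loop, here is a toy dynamical model. It is our construction, not the team's, and every constant is invented: a brief pulse raises ROS, the hypersensitive promoter fires while ROS stays above threshold, and the secreted insulin pulls glucose down.

```python
# Toy model of the electrogenetic loop: pulse -> ROS -> promoter -> insulin -> glucose.
# Every constant here is invented for illustration; this is not the published model.
MINUTES = 6 * 60
ros, insulin, glucose = 0.0, 0.0, 300.0     # start near a diabetic glucose level (mg/dL)

for t in range(MINUTES):
    if t == 0:
        ros = 10.0                   # the brief daily DC stimulation, idealized
    if ros > 1.0:                    # hypersensitive synthetic promoter fires
        insulin += 1.0               # engineered cells secrete insulin
    ros *= 0.5                       # ROS is cleared quickly
    insulin *= 0.99                  # hormone clears slowly (~70 min half-life)
    glucose += 0.5 - 2.0 * insulin   # glucose production vs. insulin-driven uptake
    glucose = max(glucose, 90.0)     # crude floor around normal levels
    if t % 60 == 0:
        print(f"hour {t // 60}: glucose ~ {glucose:.0f} mg/dL")
```

In the toy, glucose falls to normal within the first hour and then slowly creeps back as the insulin clears, which is why the regimen is a daily pulse rather than a one-off.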
Escaping neurochauvinism
Before the concerted push around branding electroceuticals, efforts to tap the peripheral nervous system were fragmented and did not share much data. Today, thanks to SPARC, which is winding down, data-sharing resources have been centralized. And money, both direct and indirect, for the electroceuticals project has been lavish. Therapies—especially vagus nerve stimulation—have been the subject of “a steady increase in funding and interest,” says Imran Eba, a partner at GSK’s bioelectronics investment arm Action Potential Venture Capital. Eba estimates that the initial GSK seed of $50 million at Action Potential has grown to about $200 million in assets under management.
Whether you call it bioelectronic medicine or electroceuticals, some researchers would like to see the definition take on a broader remit. “It’s been an extremely neurocentric approach,” says Daniel Cohen.
Neurostimulation has not yet shown success against cancer. Other forms of electrical stimulation, however, have proved surprisingly effective. In one study on glioblastoma, tumor-treating fields offered an electrical version of chemotherapy: an electric field blasts a brain tumor, preferentially killing only cells whose electrical identity marks them as dividing (which cancer cells do, pathologically—but neurons, being fully differentiated, do not). A study recently published in The Lancet Oncology suggests that these fields could also work in lung cancer to boost existing drugs and extend survival.
All of this points to more sophisticated interventions than a zap to a nerve. “The complex things that we need to do in medicine will be about communicating with the collective decision-making and problem-solving of the cells,” says Michael Levin. He has been working to repurpose already-approved drugs so they can be used to target the electrical communication between cells. In a funny twist, he has taken to calling these drugs electroceuticals, which has ruffled some feathers. But he would certainly find support from researchers like Cohen. “I would describe electroceuticals much more broadly as anything that manipulates cellular electrophysiology,” Cohen says.
Even interventions with the nervous system could be helped by expanding our understanding of the ways nerve cells react to electricity beyond action potentials. Kim Gokoffski, a professor of clinical ophthalmology at the University of Southern California, is working with galvanotaxis as a possible means of repairing damage to the optic nerve. In prior experiments that involve regrowing axons—the cables that carry messages out of neurons—these new nerve fibers tend to miss the target they’re meant to rejoin. Existing approaches “are all pushing the gas pedal,” she says, “but no one is controlling the steering wheel.” So her group uses electric fields to guide the regenerating axons into position. In rodent trials, this has worked well enough to partially restore sight.
And yet, Cohen says, “there’s massive social stigma around this that is significantly hampering the entire field.” That stigma has dramatically shaped research direction and funding. For Gokoffski, it has led to difficulties with publishing. She also recounts hearing a senior NIH official refer to her lab’s work on reconnecting optic nerves as “New Age–y.” It was a nasty surprise: “New Age–y has a very bad connotation.”
However, there are signs of more support for work outside the neurocentric model of bioelectric medicine. The US Defense Department funds projects in electrical wound healing (including Gokoffski’s). Action Potential’s original remit—confined to targeting peripheral nerves with electrical stimulation—has expanded. “We have a broader approach now, where energy (in any form, be it electric, electromagnetic, or acoustic) can be directed to regulate neuronal or other cellular activities in the body,” Eba wrote in an email. Three of the companies now in their portfolio focus on areas outside neurostimulation. “While we don’t have any investments targeting wound healing or regenerative medicine specifically, there is no explicit exclusion here for us,” he says.
This suggests that the “social stigma” Cohen described around electrical medicine outside the nervous system is slowly beginning to abate. But if such projects are to really flourish, the field needs to be supported, not just tolerated—perhaps with its own road map and dedicated NIH program. Whether or not bioelectric medicine ends up following anything like the original electroceuticals road map, SPARC ensured a flourishing research community, one that is in hot pursuit of promising alternatives.
The use of electricity outside the nervous system needs a SPARC program of its own. But if history is any guide, first it needs a catchy name. It can’t be “electroceuticals.” And the researchers should definitely check the trademark listings before rolling it out.
Here in the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure.
The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.
The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs. Yes, chickens. One of the birds that’s susceptible to avian flu. (Talk about putting all our eggs in one basket. #sorrynotsorry)
This week in The Checkup, let’s look at why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.
The idea to grow flu virus in fertilized chicken eggs originated with Frank Macfarlane Burnet, an Australian virologist. In 1936, he discovered that if he bored a tiny hole in the shell of a chicken egg and injected flu virus between the shell and the inner membrane, he could get the virus to replicate.
Even now, we still grow flu virus in much the same way. “I think a lot of it has to do with the infrastructure that’s already there,” says Scott Hensley, an immunologist at the University of Pennsylvania’s Perelman School of Medicine. It’s difficult for companies to pivot.
The process works like this: Health officials provide vaccine manufacturers with a candidate vaccine virus that matches circulating flu strains. That virus is injected into fertilized chicken eggs, where it replicates for several days. The virus is then harvested, killed (for most use cases), purified, and packaged.
Making flu vaccine in eggs has a couple of major drawbacks. For a start, the virus doesn’t always grow well in eggs. So the first step in vaccine development is creating a virus that does. That happens through an adaptation process that can take weeks or even months. This process is particularly tricky for bird flu: Viruses like H5N1 are deadly to birds, so the virus might end up killing the embryo before the egg can produce much virus. To avoid this, scientists have to develop a weakened version of the virus by combining genes from the bird flu virus with genes typically used to produce seasonal flu virus vaccines.
And then there’s the problem of securing enough chickens and eggs. Right now, many egg-based production lines are focused on producing vaccines for seasonal flu. They could switch over to bird flu, but “we don’t have the capacity to do both,” Amesh Adalja, an infectious disease specialist at Johns Hopkins University, told KFF Health News. The US government is so worried about its egg supply that it keeps secret, heavily guarded flocks of chickens peppered throughout the country.
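The scale of the problem is easy to put in rough numbers. In the back-of-envelope sketch below, the two-doses-per-person figure comes from above; the eggs-per-dose ratio is our assumption, based on the commonly cited rule of thumb of roughly one to two eggs per dose:

```python
# Back-of-envelope for a US bird-flu campaign using egg-based production.
# Two doses per person is from the text; eggs-per-dose is an assumed rule of thumb.
US_POPULATION = 333_000_000
doses_needed = 2 * US_POPULATION          # ~666M doses

EGGS_PER_DOSE = 1.5                       # assumed midpoint of a 1-2 egg range
eggs_needed = doses_needed * EGGS_PER_DOSE
print(f"doses: ~{doses_needed / 1e6:.0f}M, "
      f"fertilized eggs: ~{eggs_needed / 1e6:.0f}M")
```

Something on the order of a billion fertilized eggs, on top of the seasonal flu supply, is why the secret flocks are not a punchline.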
Most of the flu virus used in vaccines is grown in eggs, but there are alternatives. The seasonal flu vaccine Flucelvax, produced by CSL Seqirus, is grown in a cell line derived in the 1950s from the kidney of a cocker spaniel. The virus used in the seasonal flu vaccine FluBlok, made by Protein Sciences, isn’t grown; it’s synthesized. Scientists engineer an insect virus to carry the gene for hemagglutinin, a key component of the flu virus that triggers the human immune system to create antibodies against it. That engineered virus turns insect cells into tiny hemagglutinin production plants.
And then we have mRNA vaccines, which wouldn’t require vaccine manufacturers to grow any virus at all. There aren’t yet any approved mRNA vaccines for influenza, but many companies are fervently working on them, including Pfizer, Moderna, Sanofi, and GSK. “With the covid vaccines and the infrastructure that’s been built for covid, we now have the capacity to ramp up production of mRNA vaccines very quickly,” says Hensley. This week, the Financial Times reported that the US government will soon close a deal with Moderna to provide tens of millions of dollars to fund a large clinical trial of a bird flu vaccine the company is developing.
There are hints that egg-free vaccines might work better than egg-based vaccines. A CDC study published in January showed that people who received Flucelvax or FluBlok had more robust antibody responses than those who received egg-based flu vaccines. That may be because viruses grown in eggs sometimes acquire mutations that help them grow better in eggs. Those mutations can change the virus so much that the immune response generated by the vaccine doesn’t work as well against the actual flu virus that’s circulating in the population.
Hensley and his colleagues are developing an mRNA vaccine against bird flu. So far they’ve only tested it in animals, but the shot performed well, he claims. “All of our preclinical studies in animals show that these vaccines elicit a much stronger antibody response compared with conventional flu vaccines.”
No one can predict when we might need a pandemic flu vaccine. But just because bird flu hasn’t made the jump to a pandemic doesn’t mean it won’t. “The cattle situation makes me worried,” Hensley says. Humans are in constant contact with cows, he explains. While there have only been a couple of human cases so far, “the fear is that some of those exposures will spark a fire.” Let’s make sure we can extinguish it quickly.
I don’t have to tell you that mRNA vaccines are a big deal. In 2021, MIT Technology Review highlighted them as one of the year’s 10 breakthrough technologies. Antonio Regalado explored their massive potential to transform medicine. Jessica Hamzelou wrote about the other diseases researchers are hoping to tackle. I followed up with a story after two mRNA researchers won a Nobel Prize. And earlier this year I wrote about a new kind of mRNA vaccine that’s self-amplifying, meaning it not only works at lower doses, but also sticks around for longer in the body.
From around the web
Researchers installed a literal window into the brain, allowing for ultrasound imaging that they hope will be a step toward less invasive brain-computer interfaces. (Stat)
People who carry antibodies against the common viruses used to deliver gene therapies can mount a dangerous immune response if they’re re-exposed. That means many people are ineligible for these therapies and others can’t get a second dose. Now researchers are hunting for a solution. (Nature)
More good news about Ozempic. A new study shows that the drug can cut the risk of kidney complications, including death in people with diabetes and chronic kidney disease. (NYT)
Must read: This story, the second in a series on the denial of reproductive autonomy for people with sickle-cell disease, examines how the US medical system undermines a woman’s right to choose. (Stat)
The study didn’t garner as much media attention as previous, much smaller studies that focused on helping people with paralysis walk. Tech that allows people to type slightly faster or put their hair in a ponytail unaided just doesn’t have the same allure. “The image of a paralyzed person getting up and walking is almost biblical,” Charles Liu, director of the Neurorestoration Center at the University of Southern California, once told a reporter.
For people with spinal cord injuries, however, incremental gains can have a huge impact on quality of life.
So today in The Checkup, let’s talk about this tech and who it serves.
In 2004, Kim Anderson-Erisman, a researcher at Case Western Reserve University, who also happens to be paralyzed, surveyed more than 600 people with spinal cord injuries. Wanting to better understand their priorities, she asked them to consider seven different functions—everything from hand and arm mobility to bowel and bladder function to sexual function. She asked respondents to rank these functions according to how big an impact recovery would have on their quality of life.
Walking was one of the functions, but it wasn’t the top priority for most people. Most quadriplegics put hand and arm function at the top of the list. For paraplegics, meanwhile, the top priority was sexual function. I interviewed Anderson-Erisman for a story I wrote in 2019 about research on implantable stimulators as a way to help people with spinal cord injuries walk. For many people, “not being able to walk is the easy part of spinal cord injury,” she told me. “[If] you don’t have enough upper-extremity strength or ability to take care of yourself independently, that’s a bigger problem than not being able to walk.”
One of the research groups I focused on was at the University of Louisville. When I visited in 2019, the team had recently made the news because two people with spinal cord injuries in one of their studies had regained the ability to walk, thanks to an implanted stimulator. “Experimental device helps paralyzed man walk the length of four football fields,” one headline had trumpeted.
But when I visited one of those participants, Jeff Marquis, in his condo in Louisville, I learned that walking was something he could do only in the lab. To walk, he needed to hold onto parallel bars while other people supported him, and to wear a harness that would catch him if he fell. Even if he had extra help at home, there wasn’t enough room for the apparatus. Instead, he gets around his condo the same way he gets around outside his condo: in a wheelchair. Marquis does stand at home, but even that requires a bulky frame. And the standing he does is only for therapy. “I mostly just watch TV while I’m doing that,” he said.
That’s not to say the tech has been useless. The implant helped Marquis gain some balance, stamina, and trunk stability. “Trunk stability is kind of underrated in how much easier that makes every other activity I do,” he told me. “That’s the biggest thing that stays with me when I have [the stimulator] turned off.”
What’s exciting to me about this latest study is that the tech gave the participants skills they could use beyond the lab. And because the stimulator is external, it is likely to be more accessible and vastly cheaper. Yes, the newly enabled movements are small, but if you listen to the palpable excitement of one study participant as he demonstrates how he can move a small ball into a cup, you’ll appreciate that incremental gains are far from insignificant. “There [are] no miracles in spinal injury, but tiny gains can be life-changing,” said Melanie Reid, one of the participants in the latest trial, at a press conference last week.
Read more from MIT Technology Review’s archive
In 2017, we hailed electronic interfaces designed to reverse paralysis by reconnecting the brain and body as one of the year’s breakthrough technologies. Antonio Regalado has the story.
An implanted stimulator changed John Mumford’s life, allowing him to once again grasp objects after a spinal cord injury left him paralyzed. But when the company that made the device folded, Mumford was left with few options for keeping the device running. “Limp limbs can be reanimated by technology, but they can be quieted again by basic market economics,” wrote Brian Bergstein in 2015.
In 2014, Courtney Humphries covered some of the rat research that laid the foundation for the technological developments that have allowed paralyzed people to walk.
From around the web
Lots of bird flu news this week. A second person in the US has tested positive for the illness after working with infected livestock. (NBC)
The livestock industry, which depends on shipping tens of millions of live animals, provides some ideal conditions for the spread of pathogens, including bird flu. (NYT)
Long read: How the death of a nine-year-old boy in Cambodia triggered a global H5N1 alert. (NYT)
You’ve heard about tracking viruses via wastewater. H5N1 is the first one we’re tracking via store-bought milk. (STAT)
The first organ transplants from pigs to humans have not ended well, but scientists are learning valuable lessons about what they need to do better. (Nature)
Another long read that’s worth your time: an inside look at just how long 3M knew about the pervasiveness of “forever chemicals.” (New Yorker)