Why engineers are working to build better pulse oximeters

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Visit any health-care facility, and one of the first things they’ll do is clip a pulse oximeter to your finger. These devices, which track heart rate and blood oxygen, offer vital information about a person’s health. But they’re also flawed. For people with dark skin, pulse oximeters can overestimate just how much oxygen their blood is carrying. That means that a person with dangerously low oxygen levels might seem, according to the pulse oximeter, fine.

The US Food and Drug Administration is still trying to figure out what to do about this problem. Last week, an FDA advisory committee met to mull over better ways to evaluate the performance of these devices in people with a variety of skin tones. But engineers have been thinking about this problem too. In today’s Checkup, let’s look at the problem with pulse oximeters—why they are biased and what technological fixes might be possible.

To understand the problem, you first have to understand how pulse oximeters work. Most of these devices clamp onto some part of the body—usually a fingertip, but sometimes an earlobe or a toe. One side of the clamp contains LEDs that emit light at two different wavelengths—red and infrared. A sensor on the other side of the clamp measures how much of that light passes through the tissue. The hemoglobin in oxygenated blood and deoxygenated blood absorbs these wavelengths differently, and by calculating the ratio of the red-light measurements to the infrared-light measurements—the R value—the device can estimate blood oxygen saturation.
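The mapping from R value to saturation comes from an empirical calibration curve. Here is a minimal sketch using a commonly cited linear approximation—the constants are illustrative only, since real devices rely on calibration curves fitted to their own optics:

```python
def r_value(red: float, infrared: float) -> float:
    """Ratio of the red-light measurement to the infrared-light measurement."""
    return red / infrared

def spo2_estimate(r: float) -> float:
    """Widely cited linear approximation: saturation falls as R rises.
    Commercial devices use empirically fitted calibration curves instead."""
    return 110.0 - 25.0 * r

r = r_value(0.3, 0.6)      # R = 0.5
print(spo2_estimate(r))    # 97.5, a healthy-range reading
```

Anything that changes how much light reaches the sensor without changing blood oxygen—nail polish, tattoos, melanin—distorts the inputs to this ratio, which is exactly where the bias creeps in.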

Here’s the problem: other factors can affect how much light is absorbed. Dark nail polish, for example, can throw off the reading. Or tattoos. Or melanin. “If a person has a darker skin tone, they’re going to be absorbing more light,” says Maggie Delano, an engineer at Swarthmore College who is interested in inclusive engineering design. Imagine there are 100 photons of light going through a finger. Some get absorbed by blood, some by bone, and some by melanin in the skin. “So if someone has a darker skin tone, maybe five photons get through instead of 20,” Delano says. “If your electronics don’t compensate for that in some way, there can be errors in that result.”

Those errors can have real clinical consequences. Blood oxygen is one of the key vital signs doctors use to determine whether someone needs to receive oxygen or be admitted to the hospital.   

Engineers are working to fix this problem in a variety of ways. At Tufts, Valencia Koomson and her colleagues have developed a device that can detect when the signal quality is poor or when the user has a darker skin tone and compensate by sending more light through. “We’re dealing with very weak optical signals that have to transverse through tissues with lots of [other] elements that absorb and scatter light,” she told Inverse. “It’s very similar to when you’re riding a car and you go through a tunnel. You lose signal because of the absorption of the materials in the tunnel, such that the signal being transmitted from the cell-phone tower is too weak to be processed by your phone.”

Koomson and her colleagues are collaborating with a medical-device manufacturing company to develop a prototype for clinical trials. Because their team was named a finalist in a recent challenge by Open Oximetry, they’ll be able to validate the device for free in the Hypoxia Lab at the University of California, San Francisco.

Meanwhile, engineers at Brown University are trying to find a workaround using special LEDs that can emit polarized light beams. Jesse Jokerst, an engineer at the University of California, San Diego, is working on an oximeter that uses light and sound, and also corrects for skin tone. Another team at the University of Texas at Arlington is hoping to swap the standard red light in pulse oximeters for green light, which bounces back instead of being absorbed. At Johns Hopkins, engineers have developed a prototype pulse oximeter that factors in skin tone when calculating blood oxygen saturation.

Neal Patwari, an electrical engineer at Washington University in St. Louis, wants to keep the pulse oximeter’s hardware the same but swap out the algorithm. A pulse oximeter takes four different measurements, two at each wavelength. One measurement takes place as the heart pushes blood through the arteries, when blood flow is at a maximum, and the other happens between pulses, when blood flow is at a minimum. Those four numbers get fed into an algorithm that calculates ratios—actually, one ratio divided by another. That gives you the R value. But, “when you take two numbers and divide them, you can get some strange effects when the denominator is noisy,” Patwari says. And one of the factors that can increase noisiness is darkly pigmented skin. He hopes to find an algorithm that doesn’t rely on ratios, which could offer a less biased reading.
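To see why a noisy denominator matters, consider a toy simulation (the numbers are illustrative, not from any real device). Even zero-mean noise on the infrared measurement fails to average out once it sits in a denominator, so the mean R value drifts:

```python
import random

def r_value(red_max, red_min, ir_max, ir_min):
    # Pulsatile (AC) component relative to baseline (DC) for each
    # wavelength, then one ratio divided by the other: the R value.
    red_ratio = (red_max - red_min) / red_min
    ir_ratio = (ir_max - ir_min) / ir_min
    return red_ratio / ir_ratio

random.seed(0)
clean = r_value(1.05, 1.00, 1.08, 1.00)   # noise-free R
noisy = [
    r_value(1.05, 1.00, 1.08 + random.gauss(0, 0.01), 1.00)
    for _ in range(10_000)                # zero-mean noise on infrared
]
mean_noisy = sum(noisy) / len(noisy)

# Dividing by a noisy number biases the average upward (Jensen's
# inequality), so R—and the oxygen reading derived from it—shifts
# even though the noise itself averages to zero.
print(clean, mean_noisy)
```

An algorithm that avoids the division entirely would sidestep this source of bias, which is the idea behind Patwari’s approach.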

Whether any of these strategies will fix the bias in pulse oximeters remains to be seen. But it’s likely that by the time improved devices are up for regulatory approval, the bar for performance will be higher. At the meeting last week, committee members reviewed a proposal that would require companies to test the device in at least 24 people whose skin tones span the entirety of a 10-shade scale. The current requirement is that the trial must include 10 people, two of whom have “darkly pigmented” skin.

In the meantime, health-care workers are grappling with how to use the existing tools and whether to trust them. In the advisory committee meeting on Friday, one committee member asked a representative from Medtronic, one of the largest providers of pulse oximeters, if the company had considered a voluntary recall of its devices. “We believe with 100% certainty that our devices conform to current FDA standards,” said Sam Ajizian, Medtronic’s chief medical officer of patient monitoring. A recall “would undermine public safety because this is a foundational device in operating rooms and ICUs, ERs, and ambulances and everywhere.”

But not everyone agrees that the benefits outweigh the harms. Last fall, a community health center in Oakland, California, filed a lawsuit against some of the largest manufacturers and sellers of pulse oximeters, asking the court to prohibit the sale of the devices in California until the readings are proved accurate for people with dark skin, or until the devices carry a warning label.

“The pulse oximeter is an example of the tragic harm that occurs when the nation’s health-care industry and the regulatory agencies that oversee it prioritize white health over the realities of non-white patients,” said Noha Aboelata, CEO of Roots Community Health Center, in a statement. “The story of the making, marketing and use of racially biased pulse oximeters is an indictment of our health-care system.”

Read more from MIT Technology Review’s archive

Melissa Heikkilä’s reporting showed her just how “pale, male, and stale” the humans of AI are. Could we just ask it to do better?

No surprise that technology perpetuates racism, wrote Charlton McIlwain in 2020. That’s the way it was designed. “The question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy.”

We’ve seen that deep-learning models can perform as well as medical professionals when it comes to imaging tasks, but they can also perpetuate biases. Some researchers say the way to fix the problem is to stop training algorithms to match the experts, reported Karen Hao in 2021.

From around the web

The high lead levels found in applesauce pouches came from a single cinnamon processing plant in Ecuador. (NBC)

Alternating arms for your covid vaccines might offer an immunity boost over sticking to the same arm, according to a new study. (NYT)

Weight loss through either surgery or medication lowers blood pressure, according to new research. (CNN)

Pharma is increasingly building AI into its businesses, but don’t expect that to lead to instantaneous breakthroughs. (STAT)

The next generation of mRNA vaccines is on its way

Welcome back to The Checkup! Today I want to talk about … mRNA vaccines.

I can hear the collective groan from here, but wait—hear me out! I know you’ve heard a lot about mRNA vaccines, but Japan recently approved a new one for covid. And this one is pretty exciting. Just like the mRNA vaccines you know and love, it delivers the instructions for making the virus’s spike protein. But here’s what makes it novel: it also tells the body how to make more mRNA. Essentially, it provides instructions for making more instructions. It’s self-amplifying.

I’ll wait while your head explodes.

Self-amplifying RNA (saRNA) vaccines offer a couple of important advantages over conventional mRNA vaccines, at least in theory. First, because saRNA vaccines come with a built-in photocopier, the dose can be much lower. One team of researchers tested both an mRNA vaccine and an saRNA vaccine in mice and found that they could achieve equivalent levels of protection against influenza with just 1/64th the dose. Second, it’s possible that saRNA vaccines will induce a more durable immune response because the RNA keeps copying itself and sticks around longer. While mRNA might last a day or two, self-amplifying RNA can persist for a month.

Lest you think that this is just a tweaked version of conventional mRNA, it’s not. “saRNA is a totally different beast,” Anna Blakney, a bioengineer at the University of British Columbia, told Nature. (Blakney was one of our 35 Innovators Under 35 in 2023.)

What makes it a different beast? Conventional mRNA vaccines consist of messenger RNA that carries the genetic code for covid’s spike protein. Once that mRNA enters the body, it gets translated into proteins by the same cellular machinery that translates our own messenger RNA. 

Self-amplifying mRNA vaccines contain a gene that encodes the spike protein as well as viral genes that code for replicase, the enzyme that serves as a photocopier. So one self-amplifying mRNA molecule can produce many more. The idea of a vaccine that copies itself in the body might sound a little, well, unnerving. But there are a few things I should make clear. Although the genes that give these vaccines the ability to self-amplify come from viruses, they don’t encode the information needed to make the virus itself. So saRNA vaccines can’t produce new viruses. And just like mRNA, saRNA degrades quickly in the body. It lasts longer than mRNA, but it doesn’t amplify forever. 

Japan approved the new vaccine, called LUNAR-COV19, in late November on the basis of results from a 16,000-person trial in Vietnam. Last month researchers published results of a head-to-head comparison between LUNAR-COV19 and Comirnaty, the mRNA vaccine from Pfizer-BioNTech. In that 800-person study, vaccinated participants received either five micrograms of LUNAR-COV19 or 30 micrograms of Comirnaty as a fourth-dose booster. Reactions to both shots tended to be mild and resolve quickly. But the self-amplifying mRNA shot did elicit antibodies in a greater percentage of people than Comirnaty. And a month out, antibody levels against Omicron BA.4/5 were higher in people who received LUNAR-COV19. That could be a signal of increased durability.

The company has already filed for approval in Europe. It’s also working on a self-amplifying mRNA vaccine for flu, both seasonal and pandemic. Other companies are exploring the possibility that self-amplifying mRNA might be useful in rare genetic conditions to replace missing proteins. Arcturus, the company that co-developed LUNAR-COV19 with the global biotech CSL, is also developing self-amplifying messenger RNA to treat ornithine transcarbamylase deficiency, a rare and life-threatening genetic disease. It’s an mRNA bonanza that will hopefully lead to better vaccines and new therapies. 

Another thing

Babies and AI learn language in very different ways. The former rely on a relatively small set of experiences. The latter relies on data sets that encompass a trillion words. But this week I wrote about a new study that shows AI can learn language like a baby—at least some aspects of language. The researchers found that a neural network trained on things a single child saw and heard over the course of a year and a half could learn to match words to the objects they represent. Here’s the story. 

Read more from MIT Technology Review’s archive

mRNA vaccines helped tackle covid, but they can help with so much more—malaria, HIV, TB, Zika, even cancer. Jessica Hamzelou wrote about their potential in January, and I followed up with a story after two mRNA researchers won a Nobel Prize. 

Using self-amplifying RNA isn’t the only way to make mRNA vaccines more powerful. Researchers are tweaking them in other ways that might help boost the immune response, writes Anne Trafton.

From around the web

Elon Musk says his company Neuralink has implanted a brain chip in a person for the first time. The device is designed to allow people to control external devices like smartphones and computers with their thoughts. (Washington Post)

In August I wrote about Vertex’s quest to develop a non-opioid pain pill. This week the company announced positive results from phase 3 trials. The company expects to seek regulatory approval in the coming months, and if approved, the drug is likely to become a blockbuster. (STAT)

In some rare cases, it appears that Alzheimer’s can be transmitted from one person to another. That’s the conclusion of a new study: it found that eight people who received growth hormone from the brains of cadavers before the 1980s had sticky beta-amyloid plaques in their brains, a hallmark of the disease. The growth hormone they received also contained these proteins. And when researchers injected these proteins into mice, the mice also developed amyloid plaques. (Science)

This baby with a head camera helped teach an AI how kids learn language

Human babies are far better at learning than even the very best large language models. To be able to write in passable English, ChatGPT had to be trained on massive data sets that contain millions or even a trillion words. Children, on the other hand, have access to only a tiny fraction of that data, yet by age three they’re communicating in quite sophisticated ways.

A team of researchers at New York University wondered if AI could learn like a baby. What could an AI model do when given a far smaller data set—the sights and sounds experienced by a single child learning to talk?

A lot, it turns out. The AI model managed to match words to the objects they represent. “There’s enough data even in this blip of the child’s experience that it can do genuine word learning,” says Brenden Lake, a computational cognitive scientist at New York University and an author of the study. This work, published in Science today, not only provides insights into how babies learn but could also lead to better AI models.

For this experiment, the researchers relied on 61 hours of video from a helmet camera worn by a child who lives near Adelaide, Australia. That child, Sam, wore the camera off and on for one and a half years, from the time he was six months old until a little after his second birthday. The camera captured the things Sam looked at and paid attention to during about 1% of his waking hours. It recorded Sam’s two cats, his parents, his crib and toys, his house, his meals, and much more. “This data set was totally unique,” Lake says. “It’s the best window we’ve ever had into what a single child has access to.” 

To train the model, Lake and his colleagues used 600,000 video frames paired with the phrases that were spoken by Sam’s parents or other people in the room when the image was captured—37,500 “utterances” in all. Sometimes the words and objects matched. Sometimes they didn’t. For example, in one still, Sam looks at a shape sorter and a parent says, “You like the string.” In another, an adult hand covers some blocks and a parent says, “You want the blocks too.” 

The team gave the model two cues. When objects and words occur together, that’s a sign that they might be linked. But when an object and a word don’t occur together, that’s a sign they likely aren’t a match. “So we have this sort of pulling together and pushing apart that occurs within the model,” says Wai Keen Vong, a computational cognitive scientist at New York University and an author of the study. “Then the hope is that there are enough instances in the data where when the parent is saying the word ‘ball,’ the kid is seeing a ball,” he says.
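The “pulling together and pushing apart” the researchers describe is a contrastive objective. As a loose statistical stand-in (a toy illustration, not the authors’ actual neural network), you can capture the same co-occurrence signal with pointwise mutual information: words seen with a frame more often than chance score high, while generic words heard everywhere score low.

```python
import math
from collections import defaultdict

# Toy (frame, word) pairs standing in for video frames and utterances.
pairs = [
    ("ball_frame", "ball"), ("ball_frame", "ball"), ("ball_frame", "you"),
    ("cat_frame", "cat"), ("cat_frame", "you"),
]

joint = defaultdict(int)
frame_count = defaultdict(int)
word_count = defaultdict(int)
for frame, word in pairs:
    joint[(frame, word)] += 1
    frame_count[frame] += 1
    word_count[word] += 1

def pmi(frame, word):
    # Pointwise mutual information: log p(frame, word) / (p(frame) p(word)).
    # Positive when a pair co-occurs more often than chance would predict.
    if joint[(frame, word)] == 0:
        return float("-inf")
    return math.log(joint[(frame, word)] * len(pairs)
                    / (frame_count[frame] * word_count[word]))

print(pmi("ball_frame", "ball") > pmi("ball_frame", "you"))  # True
```

Given enough pairings, “ball” ends up more strongly associated with frames containing a ball than filler words like “you,” which appear with everything—the hope Vong describes.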

Matching words to the objects they represent may seem like a simple task, but it’s not. To give you a sense of the scope of the problem, imagine the living room of a family with young children. It has all the normal living room furniture, but also kid clutter. The floor is littered with toys. Crayons are scattered across the coffee table. There’s a snack cup on the windowsill and laundry on a chair. If a toddler hears the word “ball,” it could refer to a ball. But it could also refer to any other toy, or the couch, or a pair of pants, or the shape of an object, or its color, or the time of day. “There’s an infinite number of possible meanings for any word,” Lake says.

The problem is so intractable that some developmental psychologists have argued that children must be born with an innate understanding of how language works to be able to learn it so quickly. But the study suggests that some parts of language are learnable from a really small set of experiences even without that innate ability, says Jess Sullivan, a developmental psychologist at Skidmore College, who was part of the team that collected Sam’s helmet camera data but was not involved in the new study. “That, for me, really does shake up my worldview.”

But Sullivan points out that being able to match words to the objects they represent, though a hard learning problem, is just part of what makes up language. There are also rules that govern how words get strung together. Your dog might know the words “ball” or “walk,” but that doesn’t mean he can understand English. And it could be that whatever innate capacity for language babies possess goes beyond vocabulary. It might influence how they move through the world, or what they pay attention to, or how they respond to language. “I don’t think the study would have worked if babies hadn’t created the data set that the neural net was learning from,” she says. 

[Photo: a baby wearing a head camera, sitting in a high chair. Credit: Brenden Lake]

The next step for Lake and his colleagues is to try to figure out what they need to make the model’s learning more closely replicate early language learning in children. “There’s more work to be done to try to get a model with fully two-year-old-like abilities,” he says. That might mean providing more data. Lake’s child, who is now 18 months old, is part of the next cohort of kids who are providing that data. She wears a helmet camera for a few hours a week. Or perhaps the model needs to pay attention to the parents’ gaze, or to have some sense of the solidity of objects—something children intuitively grasp. Creating models that can learn more like children will help the researchers better understand human learning and development.

AI models that can pick up some of the ways in which humans learn language might be far more efficient at learning; they might act more like humans and less like “a lumbering statistical engine for pattern matching,” as the linguist Noam Chomsky and his colleagues once described large language models like ChatGPT. “AI systems are still brittle and lack common sense,” says Howard Shrobe, who manages the program at the US government’s Defense Advanced Research Projects Agency that helped fund Lake’s team. But AI that could learn like a child might be capable of understanding meaning, responding to new situations, and learning from new experiences. The goal is to bring AI one step closer to human intelligence.

How wastewater could offer an early warning system for measles

Measles is back with a vengeance. In the UK, where only 85% of school-age children have received two doses of the MMR vaccine, as many as 300 people have contracted the disease since October. And in the US, an outbreak has infected nine people in Philadelphia since last month. One case has been reported in Atlanta, another in Delaware. An entire family of six is infected in Washington state. 

On January 23, the World Health Organization issued a warning. “It is vital that all countries are prepared to rapidly detect and timely respond to measles outbreaks, which could endanger progress towards measles elimination,” said Hans Kluge, WHO regional director for Europe. 

Catching measles outbreaks early is tricky, though. Like many other respiratory viruses, it starts off with a cough, runny nose, fever, and achy body. The telltale rash doesn’t appear for two to four more days. By then, a person is already infectious. Very infectious, in fact. Measles is one of the most contagious diseases around.

Maybe there’s a solution. The US developed a vast wastewater sampling network to detect covid during the pandemic. Could we leverage that network to provide an early warning system for measles?

“I actually think you could make the argument that measles is even more important to [detect] than covid or influenza or any of the other pathogens that we’re looking for,” says Samuel Scarpino, an epidemiologist at Northeastern University in Boston.

Wastewater surveillance relies on standard lab tests to find genetic evidence of pathogens in sewage—DNA or RNA. When people are infected with covid, they shed SARS-CoV-2 in their stools, so it’s easy to see why it would show up in wastewater. But even viruses that don’t get pooped out can show up in the sewers. 

Although measles is a respiratory virus, people shed it in their urine. They also brush their teeth and spit in the sink. They blow their noses and throw the tissue in the toilet. “We shed these viruses and we shed bacteria and fungi in so many ways that end up in the sewer,” says Marlene Wolfe, an environmental microbiologist and epidemiologist at Emory University and one of the directors of WastewaterSCAN, a program based at Stanford that monitors infectious diseases through municipal wastewater systems. 

The literature on wastewater detection of measles is scant, but encouraging. In one study, a team of researchers in the Netherlands tested wastewater samples collected in 2013 during a measles outbreak in an orthodox Protestant community for evidence of the virus. They found measles RNA, and the positive samples matched the locations where cases had been reported. They even managed to confirm that the virus in one sample was genetically identical to the outbreak strain. But not every measles case showed up in the sewers. Some samples taken where cases had occurred didn’t harbor any measles RNA. 

In another study, researchers from Nova Scotia developed a tool to screen wastewater for four pathogens simultaneously: RSV, influenza, covid, and measles. When they tested it in Nova Scotia, they didn’t get any positive hits for measles, which didn’t surprise them, as no cases had been reported. But when they seeded the wastewater samples with a surrogate for measles, they were able to detect it at both high and low concentrations.

The real question, Wolfe says, is whether detecting measles in wastewater would have any public health value. Because measles is rarely asymptomatic and the rash is so distinctive, cases tend to get noticed. “Some of our other systems can work pretty well at identifying measles cases as they come up,” she says.

Wolfe could see value in monitoring, she says, if people really shed high quantities of the virus before those signs are visible. “Then it really could provide an early warning,” she says. But that’s not known at the moment. 

What would a wastewater surveillance program for measles look like? “If we had the ability to target places where the vaccination coverage was lower, that would be a place to prioritize resources,” Scarpino says. “Airports and other ports of entry are going to be really important as well.” Earlier this month, someone infected with measles passed through both Dulles and Ronald Reagan airports just outside Washington, DC. Finding measles RNA in airport sewage doesn’t necessarily mean a local outbreak will occur, but “it definitely means that the risk profile is there and we should be monitoring much more actively,” he says.

While measles isn’t part of wastewater surveillance yet, plenty of other pathogens are. Health officials around the globe have been testing sewage for polio since the late 1980s. Because people who contract polio shed large amounts of the virus in their feces, and because so many people are asymptomatic, “it’s like a perfect use case in a lot of ways,” Wolfe says. But wastewater surveillance didn’t really become fashionable until 2020, when covid hit. 

The National Wastewater Surveillance System, which the Centers for Disease Control and Prevention (CDC) launched in 2020 to monitor covid, now also tests for mpox. WastewaterSCAN currently tests for 10 different pathogens, including covid, mpox, RSV, influenza, norovirus, and rotavirus. The team publishes that data on a dashboard on its website and shares it with the CDC. Wolfe and her colleagues also recently worked with Miami-Dade County in Florida to assess the feasibility of testing for dengue. Even though dengue is rare in Florida, the team picked up a signal in the wastewater.

In fact, wastewater surveillance works for most of the pathogens they’ve tried, Wolfe says: “The potential for leveraging this tool to effectively support measles surveillance is absolutely possible.” 

Another thing

The complement system may be the most important immune defense you’ve never heard of. And now two teams of researchers say that this microbe-fighting protein cascade is abnormal in some people with long covid, pointing researchers toward new potential therapies. 

Read more from MIT Technology Review’s archive

Wastewater with its wealth of microbes could help researchers track the evolution of antibiotic resistance in bacteria, Jessica Hamzelou wrote last year. 

Health officials used wastewater surveillance to track the spread of mpox in 2022, and it helped scientists estimate how many people in California’s Bay Area might be affected, Hana Kiros reported.

Way back in 2021, Antonio Regalado covered some of the first efforts to track the spread of covid variants using wastewater.  

From around the web

The FDA slapped a black box warning on CAR-T cancer therapies, which rely on engineered T cells to fight the disease. The decision comes after the agency received 25 reports of new blood cancers in people who received these treatments. (NBC)

My latest for Nature is a deep dive into efforts to restore immune tolerance in people with autoimmune diseases. Researchers are finally having some success addressing the cause of these diseases and are even talking about (gasp!) the possibility of a cure. (Nature)   

An 11-year-old boy who was born deaf can now hear after receiving gene therapy as part of a clinical trial. “There’s no sound I don’t like,” he told the New York Times. “They’re all good.” (NYT)

Donated bodies are powering gene-edited organ research

Hooked up to a ventilation machine, a person can be dead in the eyes of the law, medical professionals, and loved ones, yet still alive enough to be useful for medical research. Such brain-dead people are often used for organ donation, but they are also of increasing importance to the biotech world. 

This week, we reported how surgeons at the University of Pennsylvania connected a pig liver to a brain-dead person in an experiment that lasted for three days.

The point was to determine whether the organ—which was mounted inside a special pumping device—could still do its job of cleaning up toxins from the body, and possibly lead to a new approach for helping patients with acute liver failure.

Using entire bodies in this way—as an experimental “decedent model”—remains highly unusual. But there’s been an upsurge in requests for bodies as more companies start testing animal-to-human organ transplants using tissues from specially gene-edited pigs.

“In order to get to humans, you have to go through steps. You can’t say ‘I am going to try it tomorrow,’ as you did 50 years ago,” says Abraham Shaked, the surgeon at Penn who directed the experiment.

To learn how common it is to use bodies as experimental models, I checked in with Richard Hasz, CEO of Gift of Life Donor Program, a nonprofit that arranges for organ donation in Pennsylvania, New Jersey and Delaware, and which provided Penn with the body used in the liver experiment.

“It’s definitely a new model. But sometimes we repeat things that have happened before. We have been around 50 years, and this is the second time it’s been requested,” says Hasz.

The previous time was in the 1980s, when researchers at Temple University sought out brain-dead bodies as a “no risk” way to test an early artificial heart made from plastic and metal. They wanted to see how it fit in a chest and test surgical techniques before trying the mechanical heart in a living patient.

Starting in 2021, though, donation organizations again started hearing from surgeons who needed brain-dead people, sometimes called “beating-heart cadavers.”  That was because several companies had developed gene-edited pigs and doctors were ready to start trying their organs.

According to a tally from the biotech company eGenesis, of the 10 pig-to-human transplant experiments that have taken place in the US since 2021, two have been in living people, but the other eight have involved brain-dead bodies.

The main use of such bodies is as organ donors. Although most people don’t realize it, says Hasz, only the 1% to 2% of people who experience brain death while under medical care can have their organs collected.

“It’s a big misconception that anyone who has died in a car accident or outside the hospital can be an organ donor. You have to have died in the ICU from a devastating neurological injury to your brain,” he says.

It’s that brain-dead but beating-heart state that provides the time—sometimes a day or two—to move the body to a central location, find a suitable recipient, and allow surgeons to remove the organs.

Organizations like Hasz’s are the ones that approach families, transport the bodies, and help match organs to recipients.  Last year Gift of Life helped arrange for 1,734 transplants of organs taken from 693 donors.

The family of the patient in this case—really, the “decedent,” since he’d been declared brain dead—wanted to see his organs donated. But there weren’t any takers; sometimes factors like cancer, age, or infections make organs less desirable.

So Hasz approached the family about another option. Would they agree to let his body be used in an experiment with a pig liver?  The whole concept was new to them, but they quickly agreed, he says.

“Our team tried to shepherd this family to understand all the ins and outs of what that would mean—the length of time, the goals, the fact that it would be an extracorporeal support—and provide them with all that information,” he says.

This time the experiment lasted only 72 hours, as that’s about how long a pig liver would be needed to support a real patient. Hasz says other families might be comfortable with longer experiments, but probably not anything indefinite: “We can maintain a body with mechanical support once they are declared medically and legally dead, but families have a desire for closure, funeral services, and depending on the family, they may limit it to one day or one month.”

Hasz says his team will be looking for more body donors to support further experiments with pig livers. And he expects many will agree. “We depend every day in organ transplant on the kindness of strangers who are at their worst possible moment, but they can set that aside and think of others,” he says. “Having talked to many families over the years, I am always surprised and humbled by their willingness to say yes.”

Read more from MIT Technology Review’s archive

Last year, MIT Technology Review's Mortality Issue explored how technology is sometimes blurring the line between life and death. News editor Charlotte Jee wrote my favorite story in the issue, which described how chatbots can create "digital clones" that let people speak to their dead relatives.

We said donated organs only come from brain-dead individuals. But there are some exceptions. In 2015 we wrote about a device that could revive hearts that had stopped beating, making them available for transplant.

Pig-to-human organ transplants made our 2023 list of 10 Breakthrough Technologies because they could end the organ shortage. We took a deep dive into one entrepreneur’s plans to make it happen.

Around the web

A lab in China reported experiments with a coronavirus that is 100% fatal to mice and could harm humans. It caused brain damage and turned their eyes white. Some scientists condemned the risky research as “madness.” (New York Post)

Perverse incentives, no real negotiation, and profiteering middlemen. Those are among the five key reasons drug prices in the US are nearly twice those in some European countries. (New York Times)

No one can resist a cute animal story—I think that's why efforts to test anti-aging drugs in pets get so much media attention. But now people are howling about the $7-million-a-year Dog Aging Project, whose organizers say they're about to lose their government funding. The project has been testing the effects of the drug rapamycin on dogs' life spans. (Science)

Scientists are finding signals of long covid in blood. They could lead to new treatments.

For many people, covid is an illness that blusters in and out of our lives as cases spike and recede. But for tens of millions of others, a case of covid is the beginning of a chronic and sometimes debilitating illness that persists for months or even years. What makes individuals with long covid different from those who get infected and recover? According to a new paper, an often overlooked part of the immune system is unusually active in these people.

A team of researchers from Switzerland compared protein levels in blood samples taken from patients who had never had covid, those who had recovered from covid, and those who had developed long covid. “We wanted to understand what drives long covid, what keeps long covid active,” says Onur Boyman, an immunologist at the University of Zurich and an author of the study.

The scientists found that people with long covid exhibit changes in a suite of proteins involved in the complement system, which helps the immune system destroy microbes and clear away cellular debris. The results echo what at least one other group has found. 

None of the existing research proves that these changes drive the disease. But they offer up a new avenue for treatment exploration by helping doctors pick the best people to trial certain drugs. "There aren't really any effective therapies," says Aran Singanayagam, a respiratory medicine specialist who studies lung infections at Imperial College London. "So we are quite desperate, and it's a big problem."

The researchers began by looking at levels of more than 6,500 proteins in the blood of 113 people who tested positive for SARS-CoV-2 and 39 people who had never been infected. Six months later, they took new blood samples. By that time, 73 people who had been infected had recovered, and 40 had gone on to develop long covid. Many of the proteins elevated in people with long covid were also elevated in people who had recovered from severe covid. But the markers that were unique to the long covid group pointed to abnormal activation of the complement system.

What is the complement system? Good question. "We never hear of it as non-immunologists," Boyman says. But it plays a vital role in defending the body against microorganisms. The complement system is composed of more than 30 proteins produced by the liver that travel through the bloodstream and act as an immune surveillance system. Activation of the complement system kicks off a cascade of reactions that recruits immune cells to the site of an infection, flags pathogens for destruction, or even destroys microbes by poking holes in them. The system, as its name suggests, complements the activity of antibodies. But when it goes awry, it can cause widespread inflammation and damage cells and blood vessels.

When the results pointed to abnormal activation of the complement system as a distinguishing feature of long covid, “we all of a sudden said ‘Oh, this makes so much sense,’” Boyman says. “The complement system is so central, not only communicating with the immune system but also communicating with the blood clotting system—with the endothelial cells, with platelets, with red blood cells, and going into all the organs.” That might explain why some researchers have found tiny clots in people with the disease.

Why the complement system might go awry after a covid infection isn’t clear. “To me, when you see complement activation like this, it suggests that you have ongoing infection,” says Timothy Henrich, an immunologist at the University of California, San Francisco. That residual virus could keep the complement system active. Or it’s possible that lingering tissue damage keeps the system engaged. Or maybe it’s something else entirely. “The fundamental issue that we have with long covid research right now is that we have a lot of associations, but we don’t have a lot of causations that have been proven,” Henrich says. 

This isn't the only paper to point to complement dysregulation as a feature of long covid. Back in October, Paul Morgan, an immunologist at the Cardiff University School of Medicine, and his colleagues posted research—not yet peer-reviewed—that also found abnormal complement protein levels in people with long covid. Their group wasn't able to follow patients over time, from acute covid through to the development of long covid. Both groups identified a set of markers that seem predictive of long covid, although not the same markers. Singanayagam is skeptical that any of these markers could offer a definitive diagnosis.

But if the complement system is to blame for some of the symptoms of long covid, there might be a solution. Companies already have drugs to block the system’s activation. They’re approved to treat some rare genetic and autoimmune diseases. Some of those therapies have already been tested in people with severe covid, with mixed results. But that could be because researchers didn’t have a way of including only those people with signs of complement dysregulation, Morgan says. If a company launched a trial of these therapies in people with long covid, it could use some of these markers to enroll the people who might benefit most. “Treating with anti-complement drugs might actually give us, for really the first time, an effective therapy for long covid,” he says. Morgan’s team has already started talking to companies that have developed these therapies.

But even if these drugs work—and that’s still a big “if”—they’re not likely to work for everyone. Long covid is “such a heterogeneous collection of conditions,” says Singanayagam. “It’s brain fog, fatigue, chest pain—and different patients have different degrees of each of those.” In Morgan’s study, only about a third to half of long covid patients had clear and obvious complement dysregulation. 

Henrich says the paper provides important insights. But the mystery of what drives long covid is far from solved. “This is a 1,000-piece jigsaw puzzle and you finished an edge,” he says. “That’s a good start, but it’s not the entire puzzle.”

A brain-dead man was attached to a gene-edited pig liver for three days

Surgeon Abraham Shaked thinks he has probably carried out more than 2,500 liver transplants. But in December 2023, a team he oversees at the University of Pennsylvania did something he’d never tried before. 

Working on the body of a brain-dead man, they attached his veins to a refrigerator-size machine with a pig liver mounted in the middle of it.

For three days, the man’s blood passed into the machine, through the pig liver, and back into his body.

This “extracorporeal,” or outside-the-body, liver—whose initial test was announced today by the University of Pennsylvania and a biotech company, eGenesis—is designed to help people survive acute liver failure, which can be caused by infection, poisoning, or (most commonly) too much alcohol.

A damaged liver can’t do its job removing toxins from the body, processing nutrients, and making protein. Hooking people up to an external one could buy them time. “You want to give the liver time to recover … or maintain them until transplant is available,” says Shaked.

A white machine is shown, holding a pig's liver for organ donation. Two surgeons tend to the machine.

EGENESIS AND ORGANOX

The liver test in Philadelphia is also the latest effort to experiment with organs from pigs that have been genetically engineered so their tissues are more compatible with people.

In earlier studies, at the University of Maryland, two men with terminal heart disease had their hearts replaced with hearts from pigs developed by another company, United Therapeutics.

Remarkably, each was able to live with the animal heart, but only for a short time; both died within two months of the transplant. Scientists continue to scrutinize why the hearts failed, but at least the second patient’s heart showed signs of rejection.

Now some doctors say the use of a pig organ that’s kept outside the body might prove easier to pull off, since it only needs to work for a limited time.

“If what we are doing is working in the way that we think it is, I believe this technology will be the first pig organ out there in real clinical use,” says Shaked.

The big goal of pig engineering companies, which include eGenesis, United, and Makana Therapeutics, is to create hearts, kidneys, or lungs that can keep a person alive for years.

To do that, they have all made genetic changes to pigs so that the animal tissue is cloaked from the human immune system, which would otherwise attack the organs.

A pig donor for a liver, heart and kidney.
The liver of this gene-edited Yucatan minipig was tested on a brain-dead human.
EGENESIS

But using a liver outside the body largely avoids the issue of longer-term organ rejection because it only needs to work for a few days, not years. And the gene edits made to the pigs do seem to protect the organs from severe rejection in the short term. “Here there is no complex immunology,” says Shaked. “We eliminate the rejection question because we don’t use the organ for long. It’s more like a piece of machinery.”

The idea is to use the external organ to support people with liver failure until a human liver transplant becomes available for them or until their livers bounce back, something that’s possible given the organ’s impressive ability to regenerate.

Patients who could benefit include those who overdose on painkillers or who drink too much alcohol over time and develop acute liver failure.

Mike Curtis, CEO of eGenesis, says the biotech company is also testing pig kidneys and hearts that its collaborators have transplanted into baboons. And it hopes to try those organs in humans eventually.

However, a little over a year ago, he realized an extracorporeal liver might become a product more quickly. "People were like, 'We have to do a transplant, we have got to do a transplant.' But we also have to prove there is something investible here, and liver is an acute need with few competing products," says Curtis. "It's not a transplant, and it's a little weird, but the pieces kept falling together."

For biotech companies, it’s crucial to get a product into human testing as soon as they can, as that’s when big drug company partners come knocking. And pig organ transplants have always had a reputation as a speculative technology that’s never quite succeeded.  

“There is always curiosity. Everyone takes a meeting,” says Curtis. “They do view it as the future, but is it next year, or 100 years from now?”

“With a heart transplant, you are really swinging for the fence,” he says. “Whereas with extracorporeal, it’s a little more like how products usually get developed. You can try for steady improvements.”

This is the first time an organ from one of the eGenesis pigs has been tried with a human, but Curtis says eGenesis is ready to apply for permission to start a formal trial of the liver system this year. If it gets the green light, that could make it the first formal clinical trial of any gene-edited pig organ. (The heart transplants were considered one-off bids to help patients, not a clinical trial.)

The experiment started on December 22, after the family of an elderly man who'd suffered a brain bleed agreed to let his body be used in the research. He was brain dead, but his heart was still beating.

During the experiment, the pig liver was mounted into a device from the company OrganOx that is normally used to keep donated human organs warm and perfused with blood so they will be available for transplant longer.

In this case, tubes connected to the subject's veins were routed into the machine, and the two remained attached for 72 hours.

The idea of extracorporeal organs has been tried before. In the 1990s, researchers connected several patients to livers taken from ordinary pigs, but the organs quickly deteriorated. According to eGenesis, its liver, which came from a genetically modified Yucatan minipig, was still healthy even after three days.

Unlike hearts and kidneys, pig livers probably aren’t plausible candidates for transplanting directly into a person. One of the organ’s jobs is to mass-produce proteins, fats, and glucose—and the pig versions of those molecules would probably provoke a powerful immune reaction against even a genetically modified liver.

Adam Griesemer, a surgical director for the NYU Langone Transplant Institute, says extracorporeal use is “probably the only application” for pig livers.

The innovation that gets an Alzheimer’s drug through the blood-brain barrier


Therapies to treat brain diseases share a common problem: they struggle to reach their target. The blood vessels that permeate the brain have a special lining so tightly packed with cells that only very tiny molecules can pass through. This blood-brain barrier “acts as a seal,” protecting the brain from toxins or other harmful substances, says Anne Eichmann, a molecular biologist at Yale. But it also keeps most medicines out. Researchers have been working on methods to sneak drugs past the blood-brain barrier for decades. And their hard work is finally beginning to pay off.

Last week, researchers at the West Virginia University Rockefeller Neuroscience Institute reported that by using focused ultrasound to open the blood-brain barrier, they improved delivery of a new Alzheimer's treatment and sped up by 32% the clearance of the sticky plaques that are thought to contribute to some of the cognitive and memory problems in people with Alzheimer's.

For this issue of The Checkup, we’ll explore some of the ways scientists are trying to disrupt the blood-brain barrier.

A patient surrounded by a medical team lays on the bed of an MRI machine with their head in a special focused ultrasound helmet
An Alzheimer’s patient undergoes focused ultrasound treatment
with the WVU RNI team.
WVU ROCKEFELLER NEUROSCIENCE INSTITUTE

In the West Virginia study, three people with mild Alzheimer's received monthly doses of aducanumab, a lab-made antibody that is delivered via IV. This drug, first approved in 2021, helps clear away beta-amyloid, a protein fragment that clumps up in the brains of people with Alzheimer's disease. (The drug's approval was controversial, and it's still not clear whether it actually slows progression of the disease.) After the infusion, the researchers treated specific regions of the patients' brains with focused ultrasound, but just on one side. That allowed them to use the other half of the brain as a control. PET scans revealed a greater reduction in amyloid plaques in the ultrasound-treated regions than in those same regions on the untreated side of the brain, suggesting that more of the antibody was getting into the brain on the treated side.

Aducanumab does clear plaques without ultrasound, but it takes a long time, perhaps in part because the antibody has trouble entering the brain. “Instead of using the therapy intravenously for 18 to 24 months to see the plaque reduction, we want to see if we can achieve that reduction in a few months,” says Ali Rezai, a neurosurgeon at West Virginia University Rockefeller Neuroscience Institute and an author of the new study. Cutting the amount of time needed to clear these plaques might help slow the memory loss and cognitive problems that define the disease.

The device used to target and deliver the ultrasound waves, developed by a company called Insightec, consists of an MRI machine and a helmet studded with ultrasound transducers. It’s FDA approved, but for an entirely different purpose: to help stop tremors in people with Parkinson’s by creating lesions in the brain. To open the blood-brain barrier, “we inject individuals intravenously with microbubbles,” Rezai says. These tiny gas bubbles, commonly used as a contrast agent, travel through the bloodstream. Using the MRI, the researchers can aim the ultrasound waves at very specific parts of the brain “with millimeter precision,” Rezai says. When the waves hit the microbubbles, the bubbles begin to expand and contract, physically pushing apart the tightly packed cells that line the brain’s capillaries. “This temporary opening can last up to 48 hours, which means that during those 48 hours, you can have increased penetration into the brain of therapeutics,” he says.

Focused ultrasound has been explored as a method for opening the blood-brain barrier for years. (We wrote about this technology way back in 2006.) But this is the first time it has been combined with an Alzheimer’s therapy and tested in humans.

The proof-of-concept study was too small to look at efficacy, but Rezai and his team are planning to continue their work. The next step is to repeat the study in five people with one of the newer anti-amyloid antibodies, lecanemab. Not only does that drug clear plaque, but one study showed that it slowed disease progression by about 30% after 18 months of treatment in patients with early Alzheimer’s symptoms. That’s a modest amount, but a major success in a field that has struggled with repeated failures. 

Eichmann, who is also working on disrupting the blood-brain barrier, says the new results using focused ultrasound are exciting. But she wonders about long-term effects of the technique. “I guess it remains to be seen whether over time, upon repeated use, this would be damaging to the blood-brain barrier,” she says.

Other strategies for opening the blood-brain barrier look promising too. Rather than mechanically pushing the barrier apart, Roche, a pharmaceutical company, has developed a technology called “Brainshuttle” that ferries drugs across it by binding to receptors on the cells that line the vessel walls.

The company has linked Brainshuttle to its own anti-amyloid antibody, gantenerumab, and is testing it in 44 people with Alzheimer's. At a conference in October, researchers presented initial results. The highest dose completely wiped out plaque in three of four participants. The biotech company Denali Therapeutics is working on a similar strategy to tackle Parkinson's and other neurodegenerative diseases.

Eichmann is working on a different strategy. Her team is testing an antibody that binds to a receptor that is important for maintaining the integrity of the blood-brain barrier. By blocking that receptor, they can temporarily loosen the junctions between cells, at least in lab mice.

Other groups are targeting different receptors, exploring various viral vectors, or developing nanoparticles that can slip into the brain. 

All these strategies will have different advantages and drawbacks, and it isn’t yet clear which will be safest and most effective. But Eichmann thinks some strategy is likely to be approved in the coming years: “We are indeed getting close.”

Techniques to open the blood-brain barrier could be useful in a whole host of diseases—Alzheimer’s, but also Parkinson’s disease, ALS, and brain tumors. “This really opens up a whole array of potential opportunities,” Rezai says. “It’s an exciting time.”

Read more from MIT Technology Review’s archive

Until recently, drug development in Alzheimer’s had been a dismal pursuit, marked by repeated failures. In 2017, Emily Mullin looked at how failures of some of the anti-amyloid drugs had researchers questioning whether amyloid is really the problem in Alzheimer’s. 

In 2016, Ryan Cross covered one of the first efforts to use ultrasound to open the blood-brain barrier in humans, a trial to deliver chemotherapy to patients with recurrent brain tumors. That same year, Antonio Regalado reported some of the first exciting results of the Alzheimer’s drug aducanumab. 

From around the web

Bayer’s non-hormonal drug to treat hot flashes reduced their frequency and intensity and improved sleep and quality of life. These results, coupled with other recent advances in treatment for symptoms of menopause, are a sign that these long-neglected issues have become big business. (Stat)

Covid is surging. Wastewater data is the best way we have to measure the virus’s ebb and flow, but it’s far from perfect. (NYT)

Last week the FDA approved Florida's request to import drugs from Canada to cut costs. The pharmaceutical industry is not thrilled. (Reuters) Neither is Canada. (Ars Technica)

The first gene-editing treatment: 10 Breakthrough Technologies 2024

WHO

CRISPR Therapeutics, Editas Medicine, Precision BioSciences, Vertex Pharmaceuticals

WHEN

Now

The first gene-editing cure has arrived. Grateful patients are calling it “life changing.”

It was only 11 years ago that scientists first developed the potent DNA-snipping technology called CRISPR. Now they’ve brought CRISPR out of the lab and into real medicine with a treatment that cures the symptoms of sickle-cell disease.

Sickle-cell is caused by inheriting two bad copies of one of the genes that make hemoglobin. Symptoms include bouts of intense pain, and life expectancy with the disease is just 53 years. It affects 1 in 4,000 people in the US, nearly all of them African-American. 

So how did this disease become CRISPR’s first success? A fortuitous fact of biology is part of the answer. Our bodies harbor another way to make hemoglobin that turns off when we’re born. Researchers found that a simple DNA edit to cells from the bone marrow could turn it back on.

Many CRISPR treatments are in trials, but in 2022, Vertex Pharmaceuticals, based in Boston, was the first to bring one to regulators for approval. That treatment was for sickle-cell. After their bone marrow was edited, nearly all the patients who volunteered in the trial were pain free.

Good news. But the expected price tag of the gene-editing treatment is $2 to $3 million. And Vertex has no immediate plans to offer it in Africa—where sickle-cell disease is most common, and where it still kills children.

The company says this is because the treatment regimen is so complex. It involves a hospital stay; doctors remove the bone marrow, edit the cells, and then transplant them back. In countries that still struggle to cover basic health needs, the procedure remains too demanding. So simpler, cheaper ways to deliver CRISPR could come next. 

These AI-powered apps can hear the cause of a cough


This week I came across a paper that uses AI in a way that I hadn’t heard of before. Researchers developed a smartphone app that can distinguish tuberculosis from other diseases by the sound of the patient’s cough.

The method isn't foolproof. The app failed to detect TB in about 30% of people who actually had the disease. But it's simpler and vastly cheaper than collecting phlegm to look for the bacterium that causes the disease, the gold-standard method for diagnosing TB. So it could prove especially useful in low-income countries as a screening tool, helping to catch cases and interrupt transmission.
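To put that miss rate in perspective: failing to detect 30% of true cases means the app's sensitivity is about 70%. Here's a back-of-the-envelope sketch of what that looks like when screening a population; the sensitivity comes from the article, but the specificity and TB prevalence figures are hypothetical, chosen purely for illustration.

```python
# Back-of-the-envelope screening math for a cough-based TB classifier.
# Sensitivity (~70%) reflects the article's "failed to detect TB in about
# 30% of people who actually had the disease"; the specificity and
# prevalence below are made-up illustrative values, not from the study.

def screen(population, prevalence, sensitivity, specificity):
    """Return (true positives, missed cases, false alarms) as counts."""
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity            # cases the app flags
    false_neg = sick * (1 - sensitivity)     # cases the app misses
    false_pos = healthy * (1 - specificity)  # healthy people flagged
    return true_pos, false_neg, false_pos

tp, fn, fp = screen(population=10_000, prevalence=0.01,
                    sensitivity=0.70, specificity=0.80)
print(round(tp), round(fn), round(fp))  # 70 caught, 30 missed, 1980 false alarms
```

The point of the arithmetic: a 70%-sensitive test misses too many cases to replace sputum testing, but as a cheap first pass it can flag thousands of people for confirmatory testing who would otherwise never be checked at all.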

In the new study, a team of researchers from the US and Kenya trained and tested their smartphone-based diagnostic tool on recordings of coughs collected in a Kenyan health-care center—about 33,000 spontaneous coughs and 1,200 forced coughs from 149 people with TB and 46 people with other respiratory conditions. The app’s performance wasn’t good enough to replace traditional diagnostics. But it could be used as an additional screening tool. The sooner people with active cases of TB are identified and receive treatment, the less likely they will be to spread the disease. 

This new paper is one of dozens that have come out in recent years that aim to use coughs and other body sounds as "acoustic biomarkers"—sounds that indicate changes in health. The concept has been around for at least three decades, but in the past five years, the field has exploded. What changed, says Yael Bensoussan, a laryngologist at the University of South Florida, is the growing use of AI: "With artificial intelligence, you can analyze a larger quantity of data faster."

Covid also helped drive interest in cough analysis. The pandemic gave rise to 30 or 40 startups focusing on the acoustics of cough, Bensoussan says. AudibleHealthAI launched in 2020 and began working on a mobile app designed to diagnose covid. The software, called AudibleHealth DX, is currently being reviewed by the FDA. And the company is now branching out to influenza and TB.

The Australian company ResApp Health has been working on acoustic diagnosis of respiratory diseases since 2014, well before the pandemic. But when covid emerged, the company pivoted and developed an audio-based covid-19 screening test. In 2022, the company announced that the tool correctly identified 92% of positive covid cases just from the sound of a patient's cough. Soon after, Pfizer paid $179 million to acquire ResApp.

Bensoussan is skeptical that these kinds of apps will become reliable diagnostics. But she says apps that detect coughs—any coughs—could prove to be valuable health tools even if they can't pinpoint the cause. Coughs are especially easy for smartphones to capture. "It's a sea change to have a common device, the smartphone, which everyone has sitting by their bedside or in their pocket to help observe your coughs," Jamie Rogers, product manager at Google Health, told Time magazine. Google's newest Pixel phones have cough and snore detection available.

Bensoussan also thinks cough-tracking apps could be game-changers for clinical trials where coughs are one of the things researchers are trying to measure. “It’s really hard to track cough,” she says. Researchers often rely on patients’ recall of their coughing. But an app would be far more accurate. “It’s really easy to capture the frequency of cough from a tech perspective,” she says. 

And it’s not just coughs that can reveal clues about our health status. Bensoussan is leading a $14 million project funded by the NIH to develop a massive database of voice, cough, and respiratory sounds to aid in the development of tools to diagnose cancers, respiratory illnesses, neurological and mood disorders, speech disorders, and more. The database captures a wide variety of sounds—coughing, reading sentences or vowel sounds, inhaling, exhaling, and more. 

“One of the big limitations is that a lot of these studies have private data sets that are secret,” Bensoussan says. That makes it difficult to validate the research. The database that she and her colleagues are developing will be publicly available. She expects the first data release to happen before June.

As more data becomes available, expect to see even more apps that can help alert us to health problems on the basis of cough or speech patterns. It's too soon to say whether those apps will make a significant difference in diagnosis or screening, but we'll keep an ear out for any new developments.

Read more from MIT Technology Review’s archive

Vocal cues could provide a way to diagnose PTSD, traumatic brain injuries, mood disorders, and even heart disease, Emily Mullin wrote in this story from 2017. 

AI tools might perform well in the lab but falter in the chaos of the real world. Will Douglas Heaven unpacked what happened when Google Health implemented a tool in Thailand to screen people for an eye condition linked to diabetes.

In a previous issue of The Checkup, Jessica Hamzelou outlined why we shouldn't let AI make all our health-care decisions: "Doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment."

From around the web

Safe bathrooms equipped with motion sensors have eliminated overdose deaths at a Boston clinic that serves unhoused individuals in the city’s infamous “methadone mile”—further proof that supervised consumption sites would save lives. (STAT)

Now that we've got new blockbuster weight-loss drugs, some companies are looking to develop longer-lasting treatments and preventatives. But some say an obesity-free future won't come from pharma. "We are not going to be able to treat our way out of this problem, or medicalize our way out of this problem," says William Dietz, director of the Global Center for Prevention and Wellness at George Washington University. "What we need to do is to come to terms with the kind of environmental forces which are driving obesity, and generate the political will necessary to address those factors." (STAT)

Advances in neuroscience have sparked worries that brain-computer interfaces might someday read people’s minds or hamper free will. Now “neurorights” advocates are racing against the clock to push for laws that would protect against the misuse and abuse of neurotechnology. (Undark)