We’re learning more about what vitamin D does to our bodies

It has started to get really wintry here in London over the last few days. The mornings are frosty, the wind is biting, and it’s already dark by the time I pick my kids up from school. The darkness in particular has got me thinking about vitamin D, a.k.a. the sunshine vitamin.

At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.

But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D.

Yes, it is important for bone health. But recent research is also uncovering surprising new insights into how the vitamin might influence other parts of our bodies, including our immune systems and heart health.

Vitamin D was discovered just over 100 years ago, when health professionals were looking for ways to treat what was then called “the English disease.” Today, we know that rickets, a weakening of bones in children, is caused by vitamin D deficiency. And vitamin D is best known for its importance in bone health.

That’s because it helps our bodies absorb calcium. Our bones are continually being broken down and rebuilt, and they need calcium for that rebuilding process. Without enough calcium, bones can become weak and brittle. (Depressingly, rickets is still a global health issue, which is why there is global consensus that infants should receive a vitamin D supplement at least until they are one year old.)

In the decades since its discovery, scientists have learned that vitamin D has effects beyond our bones. There’s some evidence to suggest, for example, that being deficient in vitamin D puts people at risk of high blood pressure. Daily or weekly supplements can help those individuals lower their blood pressure.

A vitamin D deficiency has also been linked to a greater risk of “cardiovascular events” like heart attacks, although it’s not clear whether supplements can reduce this risk; the evidence is pretty mixed.

Vitamin D appears to influence our immune health, too. Studies have found a link between low vitamin D levels and incidence of the common cold, for example. And other research has shown that vitamin D supplements can influence the way our genes make proteins that play important roles in the way our immune systems work.

We don’t yet know exactly how these relationships work, however. And, unfortunately, a recent study that assessed the results of 37 clinical trials found that overall, vitamin D supplements aren’t likely to stop you from getting an “acute respiratory infection.”

Other studies have linked vitamin D levels to mental health, pregnancy outcomes, and even how long people survive after a cancer diagnosis. It’s tantalizing to imagine that a cheap supplement could benefit so many aspects of our health.

But, as you might have gathered if you’ve got this far, we’re not quite there yet. The evidence on the effects of vitamin D supplementation for those various conditions is mixed at best.

In fairness to researchers, it can be difficult to run a randomized clinical trial for vitamin D supplements. That’s because most of us get the bulk of our vitamin D from sunlight. Our skin converts UVB rays into a form of the vitamin that our bodies can use. We get it in our diets, too, but not much. (The main sources are oily fish, egg yolks, mushrooms, and some fortified cereals and milk alternatives.)

The standard way to measure a person’s vitamin D status is to look at blood levels of 25-hydroxyvitamin D (25(OH)D), which is formed when the liver metabolizes vitamin D. But not everyone can agree on what the “ideal” level is.

Even if everyone did agree on a figure, it isn’t obvious how much vitamin D a person would need to consume to reach this target, or how much sunlight exposure it would take. One complicating factor is that people respond to UV rays in different ways—a lot of that can depend on how much melanin is in your skin. Similarly, if you’re sitting down to a meal of oily fish and mushrooms and washing it down with a glass of fortified milk, it’s hard to know how much more you might need.

There is more consensus on the definition of vitamin D deficiency, though. (It’s a blood level below 30 nanomoles per liter, in case you were wondering.) And until we know more about what vitamin D is doing in our bodies, our focus should be on avoiding that.
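
For readers who track their own blood-test results, here is a minimal sketch of how that cutoff plays out in practice. The 30 nmol/L deficiency threshold comes from the article; the unit conversion (US labs often report in ng/mL, and 1 ng/mL is roughly 2.5 nmol/L) and the example values are my own illustrative assumptions, not medical guidance.

```python
# A minimal sketch of the deficiency cutoff cited above. Illustrative
# only: the conversion factor is approximate, and the "ideal" level
# above deficiency is still debated, as the article notes.

NMOL_PER_NG_ML = 2.5              # approximate: 1 ng/mL of 25(OH)D is ~2.5 nmol/L
DEFICIENCY_CUTOFF_NMOL_L = 30.0   # the deficiency threshold the article cites

def vitamin_d_status(level: float, unit: str = "nmol/L") -> str:
    """Flag a 25(OH)D blood level as deficient or not."""
    if unit == "ng/mL":
        level *= NMOL_PER_NG_ML   # convert US-style units to nmol/L
    return "deficient" if level < DEFICIENCY_CUTOFF_NMOL_L else "not deficient"

print(vitamin_d_status(22.0))           # deficient
print(vitamin_d_status(15.0, "ng/mL"))  # ~37.5 nmol/L -> not deficient
```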

For me, that means topping up with a supplement. The UK government advises everyone in the country to take a 10-microgram vitamin D supplement over autumn and winter. That advice doesn’t factor in my age, my blood levels, or the amount of melanin in my skin. But it’s all I’ve got for now.

These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year, according to a strategy released on Tuesday. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030. 

The news follows similar moves by other countries. In April, the US Food and Drug Administration announced a plan to replace animal testing for monoclonal antibody therapies with “more effective, human-relevant models.” And, following a workshop in June 2024, the European Commission also began working on a “road map” to phase out animal testing for chemical safety assessments.

Animal welfare groups have been campaigning for commitments like these for decades. But a lack of alternatives has made it difficult to put a stop to animal testing. Advances in medical science and biotechnology are changing that.

Animals have been used in scientific research for thousands of years. Animal experimentation has led to many important discoveries about how the brains and bodies of animals work. And because regulators require drugs to be first tested in research animals, it has played an important role in the creation of medicines and devices for both humans and other animals.

Today, countries like the UK and the US regulate animal research and require scientists to hold multiple licenses and adhere to rules on animal housing and care. Still, millions of animals are used annually in research. Plenty of scientists don’t want to take part in animal testing. And some question whether animal research is justifiable—especially considering that around 95% of treatments that look promising in animals don’t make it to market.

In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on humans or other animals.

Take “organs on chips,” for example. Researchers have been creating miniature versions of human organs inside tiny plastic cases. These systems are designed to contain the same mix of cells you’d find in a full-grown organ and receive a supply of nutrients that keeps them alive.

Today, multiple teams have created models of livers, intestines, hearts, kidneys, and even the brain. And they are already being used in research. Heart chips have been sent into space to observe how they respond to low gravity. The FDA used lung chips to assess covid-19 vaccines. Gut chips are being used to study the effects of radiation.

Some researchers are even working to connect multiple chips to create a “body on a chip”—although this has been in the works for over a decade and no one has quite managed it yet.

In the same vein, others have been working on creating model versions of organs—and even embryos—in the lab. By growing groups of cells into tiny 3D structures, scientists can study how organs develop and work, and test drugs on them. These organoids can also be personalized: take cells from someone, and you should be able to model that person’s specific organs. Some researchers have even created organoids of developing fetuses.

The UK government strategy mentions the promise of artificial intelligence, too. Many scientists have been quick to adopt AI as a tool to help them make sense of vast databases, and to find connections between genes, proteins, and disease, for example. Others are using AI to design all-new drugs.

Those new drugs could potentially be tested on virtual humans. Not flesh-and-blood people, but digital reconstructions that live in a computer. Biomedical engineers have already created digital twins of organs. In ongoing trials, digital hearts are being used to guide surgeons on how—and where—to operate on real hearts.

When I spoke to Natalia Trayanova, the biomedical engineering professor behind this trial, she told me that her model could recommend regions of heart tissue to be burned off as part of treatment for atrial fibrillation. Her tool would normally suggest two or three regions but occasionally would recommend many more. “They just have to trust us,” she told me.

It is unlikely that we’ll completely phase out animal testing by 2030. The UK government acknowledges that animal testing is still required by lots of regulators, including the FDA, the European Medicines Agency, and the World Health Organization. And while alternatives to animal testing have come a long way, none of them perfectly capture how a living body will respond to a treatment.

At least not yet. Given all the progress that has been made in recent years, it’s not too hard to imagine a future without animal testing.

Cloning isn’t just for celebrity pets like Tom Brady’s dog

This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his dog Junie is actually a clone of Lua, a pit bull mix that died in 2023.

Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies.

While the pampered pooches of the rich and famous may dominate this week’s headlines, cloning technologies are also being used to diversify the genetic pools of inbred species and potentially bring other animals back from the brink of extinction.

Cloning itself isn’t new. The first mammal cloned from an adult cell, Dolly the sheep, was born in the 1990s. The technology has been used in livestock breeding over the decades since.

Say you’ve got a particularly large bull, or a cow that has an especially high milk yield. Those animals are valuable. You could selectively breed for those kinds of characteristics. Or you could clone the original animals—essentially creating genetic twins.

Scientists can take some of the animals’ cells, freeze them, and store them in a biobank. That opens the option to clone them in the future. It’s possible to thaw those cells, remove the DNA-containing nuclei of the cells, and insert them into donor egg cells.

Those donor egg cells, which come from another animal of the same species, have their own nuclei removed. So it’s a case of swapping out the DNA. The resulting cell is stimulated and grown in the lab until it starts to look like an embryo. Then it is transferred to the uterus of a surrogate animal—which eventually gives birth to a clone.

There are a handful of companies offering to clone pets. Viagen, which claims to have “cloned more animals than anyone else on Earth,” will clone a dog or cat for $50,000. That’s the company that cloned Streisand’s pet dog Samantha, twice.

This week, Colossal Biosciences—the “de-extinction” company that claims to have resurrected the dire wolf and created a “woolly mouse” as a precursor to reviving the woolly mammoth—announced that it had acquired Viagen, but that Viagen will “continue to operate under its current leadership.”

Pet cloning is controversial, for a few reasons. The companies themselves point out that, while the cloned animal will be a genetic twin of the original animal, it won’t be identical. One issue is mitochondrial DNA—a tiny fraction of DNA that sits outside the nucleus and is inherited from the mother. The cloned animal may inherit some of this from the surrogate.

Mitochondrial DNA is unlikely to have much of an impact on the animal itself. More important are the many, many factors thought to shape an individual’s personality and temperament. “It’s the old nature-versus-nurture question,” says Samantha Wisely, a conservation geneticist at the University of Florida. After all, human identical twins are never carbon copies of each other. Anyone who clones a pet expecting a like-for-like reincarnation is likely to be disappointed.

And some animal welfare groups are opposed to the practice of pet cloning. People for the Ethical Treatment of Animals (PETA) described it as “a horror show,” and the UK’s Royal Society for the Prevention of Cruelty to Animals (RSPCA) says that “there is no justification for cloning animals for such trivial purposes.” 

But there are other uses for cloning technology that are arguably less trivial. Wisely has long been interested in diversifying the gene pool of the critically endangered black-footed ferret, for example.

Today, there are around 10,000 black-footed ferrets that have been captively bred from only seven individuals, says Wisely. That level of inbreeding isn’t good for any species—it tends to leave organisms at risk of poor health. They are less able to reproduce or adapt to changes in their environment.

Wisely and her colleagues had access to frozen tissue samples taken from two other ferrets. Along with colleagues at the not-for-profit Revive & Restore, the team created clones of those two individuals. The first clone, Elizabeth Ann, was born in 2020. Since then, other clones have been born, and the team has started breeding the cloned animals with the descendants of the other seven ferrets, says Wisely.

The same approach has been used to clone the endangered Przewalski’s horse, using decades-old tissue samples stored by the San Diego Zoo. It’s too soon to predict the impact of these efforts. Researchers are still evaluating the cloned ferrets and their offspring to see if they behave like typical animals and could survive in the wild.

Even this practice is not without its critics. Some have pointed out that cloning alone will not save any species. After all, it doesn’t address the habitat loss or human-wildlife conflict that is responsible for the endangerment of these animals in the first place. And there will always be detractors who accuse people who clone animals of “playing God.” 

For all her involvement in cloning endangered ferrets, Wisely tells me she would not consider cloning her own pets. She currently has three rescue dogs, a rescue cat, and “geriatric chickens.” “I love them all dearly,” she says. “But there are a lot of rescue animals out there that need homes.”

Here’s why we don’t have a cold vaccine. Yet.

For those of us in the Northern Hemisphere, it’s the season of the sniffles. As the weather turns, we’re all spending more time indoors. The kids have been back at school for a couple of months. And cold germs are everywhere.

My youngest started school this year, and along with artwork and seedlings, she has also been bringing home lots of lovely bugs to share with the rest of her family. As she coughed directly into my face for what felt like the hundredth time, I started to wonder if there was anything I could do to stop this endless cycle of winter illnesses. We all got our flu jabs a month ago. Why couldn’t we get a vaccine to protect us against the common cold, too?

Scientists have been working on this for decades. It turns out that creating a cold vaccine is hard. Really hard.

But not impossible. There’s still hope. Let me explain.

Technically, colds are infections that affect your nose and throat, causing symptoms like sneezing, coughing, and generally feeling like garbage. Unlike some other infections—covid-19, for example—they aren’t defined by the specific virus that causes them.

That’s because there are a lot of viruses that cause colds, including rhinoviruses, adenoviruses, and even seasonal coronaviruses (they don’t all cause covid!). Within those virus families, there are many different variants.

Take rhinoviruses, for example. These viruses are thought to be behind most colds. They’re human viruses—over the course of evolution, they have become perfectly adapted to infecting us, rapidly multiplying in our noses and airways to make us sick. There are around 180 rhinovirus variants, says Gary McLean, a molecular immunologist at Imperial College London in the UK.

Once you factor in the other cold-causing viruses, there are around 280 variants all told. That’s 280 suspects behind the cough that my daughter sprayed into my face. It’s going to be really hard to make a vaccine that will offer protection against all of them.

The second challenge lies in the prevalence of those variants.

Scientists tailor flu and covid vaccines to whatever strain happens to be circulating. Months before flu season starts, the World Health Organization advises countries on which strains their vaccines should protect against. Early recommendations for the Northern Hemisphere can be based on which strains seem to be dominant in the Southern Hemisphere, and vice versa.

That approach wouldn’t work for the common cold, because all those hundreds of variants are circulating all the time, says McLean.

That’s not to say that people haven’t tried to make a cold vaccine. There was a flurry of interest in the 1960s and ’70s, when scientists made valiant efforts to develop vaccines for the common cold. Sadly, they all failed. And we haven’t made much progress since then.

In 2022, a team of researchers reviewed all the research that had been published up to that year. They only identified one clinical trial—and it was conducted back in 1965.

Interest has certainly died down since then, too. Some question whether a cold vaccine is even worth the effort. After all, most colds don’t require much in the way of treatment and don’t last more than a week or two. There are many, many more dangerous viruses out there we could be focusing on.

And while cold viruses do mutate and evolve, no one really expects them to cause the next pandemic, says McLean. They’ve evolved to cause mild disease in humans—something they’ve been doing successfully for a long, long time. Flu viruses—which can cause serious illness, disability, or even death—pose a much bigger risk, so they probably deserve more attention.

But colds are still irritating, disruptive, and potentially harmful. Rhinoviruses are considered to be the leading cause of human infectious disease. They can cause pneumonia in children and older adults. And once you add up doctor visits, medication, and missed work, the economic cost of colds is pretty hefty: a 2003 study put it at $40 billion per year for the US alone.

So it’s reassuring that we needn’t abandon all hope: Some scientists are making progress! McLean and his colleagues are working on ways to prepare the immune systems of people with asthma and lung diseases to potentially protect them from cold viruses. And a team at Emory University has developed a vaccine that appears to protect monkeys from around a third of rhinoviruses.

There’s still a long way to go. Don’t expect a cold vaccine to materialize in the next five years, at least. “We’re not quite there yet,” says Michael Boeckh, an infectious-disease researcher at Fred Hutch Cancer Center in Seattle, Washington. “But will it at some point happen? Possibly.”

At the end of our Zoom call, perhaps after reading the disappointed expression on my sniffling, cold-riddled face (yes, I did end up catching my daughter’s cold), McLean told me he hoped he was “positive enough.” He admitted that he used to be more optimistic about a cold vaccine. But he hasn’t given up hope. He’s even running a trial of a potential new vaccine in people, although he wouldn’t reveal the details.

“It could be done,” he said.

Here’s the latest company planning for gene-edited babies

A West Coast biotech entrepreneur says he’s secured $30 million to form a public-benefit company to study how to safely create genetically edited babies, marking the largest known investment into the taboo technology.  

The new company, called Preventive, is being formed to research so-called “heritable genome editing,” in which the DNA of embryos would be modified by correcting harmful mutations or installing beneficial genes. The goal would be to prevent disease.

Preventive was founded by the gene-editing scientist Lucas Harrington, who described his plans yesterday in a blog post announcing the venture. Preventive, he said, will not rush to try out the technique but instead will dedicate itself “to rigorously researching whether heritable genome editing can be done safely and responsibly.”

Creating genetically edited humans remains controversial, and the first scientist to do it, in China, was imprisoned for three years. The procedure remains illegal in many countries, including the US, and doubts surround its usefulness as a form of medicine.

Still, as gene-editing technology races forward, the temptation to shape the future of the species may prove irresistible, particularly to entrepreneurs keen to put their stamp on the human condition. In theory, even small genetic tweaks could create people who never get heart disease or Alzheimer’s, and who would pass those traits on to their own offspring.

According to Harrington, if the technique proves safe, it “could become one of the most important health technologies of our time.” He has estimated that editing an embryo would cost only about $5,000 and believes regulations could change in the future. 

Preventive is the third US startup this year to say it is pursuing technology to produce gene-edited babies. The first, Bootstrap Bio, based in California, is reportedly seeking seed funding and has an interest in enhancing intelligence. Another, Manhattan Genomics, is also in the formation stage but has not announced funding yet.

As of now, none of these companies have significant staff or facilities, and they largely lack any credibility among mainstream gene-editing scientists. Reached by email, Fyodor Urnov, an expert in gene editing at the University of California, Berkeley, where Harrington studied, said he believes such ventures should not move forward.

Urnov has been a pointed critic of the concept of heritable genome editing, calling it dangerous, misguided, and a distraction from the real benefits of gene editing to treat adults and children. 

In his email, Urnov said the launch of still another venture into the area made him want to “howl with pain.”  

Harrington’s venture was incorporated in Delaware in May 2025, under the name Preventive Medicine PBC. As a public-benefit corporation, it is organized to put its public mission above profits. “If our research shows [heritable genome editing] cannot be done safely, that conclusion is equally valuable to the scientific community and society,” Harrington wrote in his post.

Harrington is a cofounder of Mammoth Biosciences, a gene-editing company pursuing drugs for adults, and remains a board member there.

In recent months, Preventive has sought endorsements from leading figures in genome editing, but according to its post, it had secured only one—from Paula Amato, a fertility doctor at Oregon Health & Science University, who said she had agreed to act as an advisor to the company.

Amato is a member of a US team that has researched embryo editing since 2017, and she has promoted the technology as a way to increase IVF success. That could be the case if editing could correct abnormal embryos, making more available for use in trying to create a pregnancy.

It remains unclear where Preventive’s funding is coming from. Harrington said the $30 million was gathered from “private funders who share our commitment to pursuing this research responsibly.” But he declined to identify those investors other than SciFounders, a venture firm he runs with his personal and business partner Matt Krisiloff, the CEO of the biotech company Conception, which aims to create human eggs from stem cells.

That’s yet another technology that could change reproduction, if it works. Krisiloff is listed as a member of Preventive’s founding team.

The idea of edited babies has received growing attention from figures in the cryptocurrency business. These include Brian Armstrong, the billionaire founder of Coinbase, who has held a series of off-the-record dinners to discuss the technology (which Harrington attended). Armstrong previously argued that the “time is right” for a startup venture in the area.

Will Harborne, a crypto entrepreneur and partner at LongGame Ventures, says he’s “thrilled” to see Preventive launch. If the technology proves safe, he argues, “widespread adoption is inevitable,” calling its use a “societal obligation.”

Harborne’s fund has invested in Herasight, a company that uses genetic tests to rank IVF embryos for future IQ and other traits. That’s another hotly debated technology, but one that has already reached the market, since such testing isn’t strictly regulated. Some have begun to use the term “human enhancement companies” to refer to such ventures.

What’s still lacking is evidence that leading gene-editing specialists support these ventures. Preventive was unsuccessful in establishing a collaboration with at least one key research group, and Urnov says he had harsh words for Manhattan Genomics when that company reached out to him about working together. “I encourage you to stop,” he wrote back. “You will cause zero good and formidable harm.”

Harrington thinks Preventive could change such attitudes, if it shows that it is serious about doing responsible research. “Most scientists I speak with either accept embryo editing as inevitable or are enthusiastic about the potential but hesitate to voice these opinions publicly,” he told MIT Technology Review earlier this year. “Part of being more public about this is to encourage others in the field to discuss this instead of ignoring it.”

How conspiracy theories infiltrated the doctor’s office

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

Unfortunately, this modern impulse to “do your own research” became even more pronounced during the coronavirus pandemic.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


We asked a number of health-care professionals about how this shifting landscape is changing their profession. They told us that they are being forced to adapt how they treat patients. It’s a wide range of experiences: Some say patients tell them they just want more information about certain treatments because they’re concerned about how effective they are. Others hear that their patients just don’t trust the powers that be. Still others say patients are rejecting evidence-based medicine altogether in favor of alternative theories they’ve come across online. 

These are their stories, in their own words.

Interviews have been edited for length and clarity.


The physician trying to set shared goals 

David Scales

Internal medicine hospitalist and assistant professor of medicine,
Weill Cornell Medical College
New York City

Every one of my colleagues has stories about patients who have rejected care or had very peculiar perspectives on what their care should be. Sometimes that’s driven by religion. But I think what has changed is people, not necessarily with a religious standpoint, having very fixed beliefs that are sometimes—based on all the evidence that we have—in contradiction with their health goals. And that is a very challenging situation. 

I once treated a patient with a connective tissue disease called Ehlers-Danlos syndrome. While there’s no doubt that the illness exists, there’s a lot of doubt and uncertainty over which symptoms can be attributed to Ehlers-Danlos. This means it can fall into what social scientists call a “contested illness.” 

Contested illnesses used to be the preserve of arguably fringe movements, but they have become much more prominent since the rise of social media in the mid-2010s. Patients often search for information that resonates with their experience. 

This patient was very hesitant about various treatments, and it was clear she was getting her information from, I would say, suspect sources. She’d been following people online who were not necessarily trustworthy, so I sat down with her and we looked them up on Quackwatch, a site that lists health myths and misconduct. 

“She was extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources.”

She was still accepting of treatment, was extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources, and she held fixed beliefs that overemphasized particular things—like which symptoms might be attributable to other causes.

Physicians have the tools to work with patients who are struggling with these challenges. The first is motivational interviewing, a counseling technique that was developed for people with substance-use disorders. It’s a nonjudgmental approach that uses open-ended questions to draw out people’s motivations, and to find where there’s a mismatch between their behaviors and their beliefs. It’s highly effective in treating people who are vaccine-hesitant.

Another is an approach called shared decision-making. First we work out what the patient’s goals are and then figure out a way to align those with what we know about the evidence-based way to treat them. It’s something we use for end-of-life care, too.

What’s concerning to me is that there seems to be a dynamic of patients coming in with fixed beliefs about how their illness should be diagnosed and how their symptoms should be treated—beliefs completely divorced from the kinds of medicine you’d find in textbooks—and that the same dynamic is starting to extend to other illnesses, too.


The therapist committed to being there when the conspiracy fever breaks 

Damien Stewart

Psychologist
Warsaw, Poland

Before covid, I hadn’t really had any clients bring conspiracy theories into my practice. But once the pandemic began, those theories went from being fun or harmless to something dangerous.

In my experience, vaccines were the topic where I first really started to see some militancy—people who were looking down the barrel of losing their jobs because they wouldn’t get vaccinated. At one point, I had an out-and-out conspiracy theorist say to me, “I might as well wear a yellow star like the Jews during the Holocaust, because I won’t get vaccinated.” 

I felt pure anger, and I reached a point in my therapeutic journey I didn’t know would ever occur—I’d found that I had a line that could be crossed by a client that I could not tolerate. I spoke in a very direct manner he probably wasn’t used to and challenged his conspiracy theory. He got very angry and hung up the call.  

It made me figure out how I was going to deal with this in future, and to develop an approach—which was to not challenge the conspiracy theory, but to gently talk through it, to provide alternative points of view and ask questions. I try to find the therapeutic value in the information, in the conversations we’re having. My belief—and the evidence seems to show this—is that people believe in conspiracy theories because there’s something wrong in their life that is inexplicable, and they need something to explain what’s happening to them. And even if I have no belief or agreement whatsoever in what they’re saying, I think I need to sit here and have this conversation, because one day this person might snap out of it, and I need to be here when that happens.

As a psychologist, you have to remember that these people who believe in these things are extremely vulnerable. So my anger around these conspiracy theories has changed from being directed toward the deliverer—the person sitting in front of me saying these things—to the people driving the theories.


The emergency room doctor trying to get patients to reconnect with the evidence

Luis Aguilar Montalvan

Attending emergency medicine physician 
Queens, New York

The emergency department is essentially the pulse of what is happening in society. That’s what really attracted me to it. And I think the job of the emergency doctor, particularly amid shifting political views and beliefs about Western medicine, is to try to reconnect with someone. To create the kind of experience that primes someone to reconsider their relationship with evidence-based medicine.

When I was working in the pediatrics emergency department a few years ago, we saw a resurgence of diseases we thought we had eradicated, like measles. I typically framed it by saying to the child’s caregiver: “This is a disease we typically use vaccines for, and it can prevent it in the majority of people.” 

“The doctor is now more like a consultant or a customer service provider than the authority. … The power dynamic has changed.”

The sentiment among my adult patients who are reluctant to get vaccinated or take certain medications seems to be from a mistrust of the government or “The System” rather than from anything Robert F. Kennedy Jr. says directly, for example. I’m definitely seeing more patients these days asking me what they can take to manage a condition or pain that’s not medication. I tell them that the knowledge I have is based on science, and explain the medications I’d typically give other people in their situation. I try to give them autonomy while reintroducing the idea of sticking with the evidence, and for the most part they’re appreciative and courteous.

The role of doctor has changed in recent years—there’s been a cultural change. My understanding is that back in the day, what the doctor said, the patient did. Some doctors used to shame parents who hadn’t vaccinated their kids. Now we’re shifting away from that, and the doctor is now more like a consultant or a customer service provider than the authority. I think that could be because we’ve seen a lot of bad actors in medicine, so the power dynamic has changed.  

I think if we had a more unified approach at a national level, if they had an actual unified and transparent relationship with the population, that would set us up right. But I’m not sure we’ve ever had it.


The psychologist who supported severely mentally ill patients through the pandemic 

Michelle Sallee

Psychologist, board certified in serious mental illness psychology
Oakland, California

I’m a clinical psychologist who only works with people who have been in the hospital three or more times in the last 12 months. I do both individual therapy and a lot of group work, and several years ago during the pandemic, I wrote a 10-week program for patients about how to cope with sheltering in place, following safety guidelines, and their concerns about vaccines.

My groups were very structured around evidence-based practice, and I had rules for the groups. First, I would tell people that the goal was not to talk them out of their conspiracy theory; my goal was not to talk them into a vaccination. My goal was to provide a safe place for them to be able to talk about things that were terrifying to them. We wanted to reduce anxiety, depression, thoughts of suicide, and the need for psychiatric hospitalizations. 

Half of the group was pro–public health requirements, and their paranoia and fear for safety centered on people who wouldn’t get vaccinated; the other half were strongly opposed to anyone other than themselves deciding they needed a vaccination or a mask. Both sides were fearing for their lives—but from each other.

I wanted to make sure everybody felt heard, and it was really important to be able to talk about what they believed—like, some people felt like the government was trying to track us and even kill us—without any judgment from other people. My theory is that if you allow people to talk freely about what’s on their mind without blocking them with your own opinions or judgment, they will find their way eventually. And a lot of times that works. 

People have been stuck on their conspiracy theory, or their paranoia has been stuck on it, for a long time because they’re always fighting with people about it; everyone’s telling them that it’s not true. So we would just have an open discussion about these things. 

“People have been stuck on their conspiracy theory for a long time because they’re always fighting with people about it, everyone’s telling them that this is not true.”

I ran the program four times for a total of 27 people, and the thing that I remember the most was how respectful and tolerant and empathic, but still honest about their feelings and opinions, everybody was. At the end of the program, most participants reported a decrease in pandemic-related stress. Half reported a decrease in general perceived stress, and half reported no change.

I’d say that the rate of how much vaccines are talked about now is significantly lower, and covid doesn’t really come up anymore. But other medical illnesses come up—patients saying, “My doctor said I need to get this surgery, but I know who they’re working for.” Everybody has their concerns, but when a person with psychosis has concerns, it becomes delusional, paranoid, and psychotic.

I’d like to see more providers be given more training around severe mental illness. These are not people who just need to go to the hospital to get remedicated for a couple of days. There’s a whole life that needs to get looked at here, and they deserve that. I’d like to see more group settings with a combination of psychoeducation, evidence-based research, skills training, and process, because the research says that’s the combination that’s really important.

Editor’s note: Sallee works for a large HMO psychiatry department, and her account here is not on behalf of, endorsed by, or speaking for any larger organization.


The epidemiologist rethinking how to bridge differences in culture and community 

John Wright

Clinician and epidemiologist
Bradford, United Kingdom

I work in Bradford, the fifth-biggest city in the UK. It has a big South Asian population and high levels of deprivation. Before covid, I’d say there was growing awareness about conspiracies. But during the pandemic, I think that lockdown, isolation, fear of this unknown virus, and then the uncertainty about the future came together in a perfect storm to highlight people’s latent attraction to alternative hypotheses and conspiracies—it was fertile ground. I’ve been a National Health Service doctor for almost 40 years, and until recently, the NHS had a great reputation, with great trust, and great public support. The pandemic was the first time that I started seeing that erode.

It wasn’t just conspiracies about vaccines or new drugs, either—it was also an undermining of trust in public institutions. I remember an older woman who had come into the emergency department with covid. She was very unwell, but she just wouldn’t go into hospital despite all our efforts, because there were conspiracies going around that we were killing patients in hospital. So she went home, and I don’t know what happened to her.

The other big change in recent years has been social media and social networks, which have obviously amplified and accelerated alternative theories and conspiracies. That’s been the tinder that’s allowed these sorts of conspiracy theories to spread like wildfire. In Bradford, particularly among ethnic minority communities, there are stronger links between people—allowing this to spread quicker—but also a more structural distrust. 

Vaccination rates have fallen since the pandemic, and we’re seeing lower uptake of the meningitis and HPV vaccines in schools among South Asian families. Ultimately, this needs a bigger societal approach than individual clinicians putting needles in arms. We started a project called Born in Bradford in 2007 that’s following more than 13,000 families, including around 20,000 teenagers as they grow up. One of the biggest focuses for us is how they use social media and how it links to their mental health, so we’re asking them to donate their digital media to us so we can examine it in confidence. We’re hoping it could allow us to explore conspiracies and influences.

The challenge for the next generation of resident doctors and clinicians is: How do we encourage health literacy in young people about what’s right and what’s wrong without being paternalistic? We also need to get better at engaging with people as health advocates to counter some of the online narratives. The NHS website can’t compete with how engaging content on TikTok is.


The pediatrician who worries about the confusing public narrative on vaccines

Jessica Weisz

Pediatrician
Washington, DC

I’m an outpatient pediatrician, so I do a lot of preventative care, checkups, and sick visits, and treating coughs and colds—those sorts of things. I’ve had specific training in how to support families in clinical decision-making related to vaccines, and every family wants what’s best for their child, and so supporting them is part of my job.

I don’t see specific articulation of conspiracy theories, but I do think there are more questions about vaccines in conversations than I’ve typically had to have before. I’ve found that parents and caregivers do ask general questions about the risks and benefits of vaccines. We just try to reiterate that vaccines have been studied, that they are intentionally scheduled to protect an immature immune system when it’s the most vulnerable, and that we want everyone to be safe, healthy, and strong. That’s how we can provide protection.

“I think what’s confusing is that distress is being sown in headlines when most patients, families, and caregivers are motivated and want to be vaccinated.”

I feel that the narrative in the public space is unfairly confusing to families when over 90% of families still want their kids to be vaccinated. The families who are not as interested in that, or have questions—it typically takes multiple conversations to support that family in their decision-making. It’s very rarely one conversation.

I think what’s confusing is that distress is being sown in headlines when most patients, families, and caregivers are motivated and want to be vaccinated. For example, some of the headlines around recent changes the CDC is making make it sound like it’s making a huge clinical change, when it’s actually not a huge change from what people are typically doing. In my standard clinical practice, we don’t give the combined MMRV vaccine to children under four years old, and that’s been standard practice in all of the places I’ve worked on the Eastern Seaboard. [Editor’s note: In early October, the CDC updated its recommendation that young children receive the varicella vaccine separately from the combined vaccine for measles, mumps, and rubella. Many practitioners, including Weisz, already offer the shots separately.]

If you look at public surveys, pediatricians are still the most trusted [among health-care providers], and I do live in a jurisdiction with pretty strong policy about school-based vaccination. I think that people are getting information from multiple sources, but at the end of the day, in terms of both the national rates and also what I see in clinical practice, we really are seeing most families wanting vaccines.

An AI app to measure pain is here

How are you feeling?

I’m genuinely interested in the well-being of all my treasured Checkup readers, of course. But this week I’ve also been wondering how science and technology can help answer that question—especially when it comes to pain.

In the latest issue of MIT Technology Review magazine, Deena Mousa describes how an AI-powered smartphone app is being used to assess how much pain a person is in.

The app, and other tools like it, could help doctors and caregivers. They could be especially useful in the care of people who aren’t able to tell others how they are feeling.

But they are far from perfect. And they open up all kinds of thorny questions about how we experience, communicate, and even treat pain.

Pain can be notoriously difficult to describe, as almost everyone who has ever been asked to describe it will know. At a recent medical visit, my doctor asked me to rank my pain on a scale from 1 to 10. I found it incredibly difficult to do. A 10, she said, meant “the worst pain imaginable,” which brought back unpleasant memories of having appendicitis.

A short while before the problem that brought me in, I’d broken my toe in two places, which had hurt like a mother—but less than appendicitis. If appendicitis was a 10, breaking a toe was an 8, I figured. If that was the case, maybe my current pain was a 6. As a pain score, it didn’t sound as bad as I actually felt. I couldn’t help wondering if I might have given a higher score if my appendix were still intact. I wondered, too, how someone else with my medical issue might score their pain.

In truth, we all experience pain in our own unique ways. Pain is subjective, and it is influenced by our past experiences, our moods, and our expectations. The way people describe their pain can vary tremendously, too.

We’ve known this for ages. In the 1940s, the anesthesiologist Henry Beecher noted that wounded soldiers were much less likely to ask for pain relief than similarly injured people in civilian hospitals. Perhaps they were putting on a brave face, or maybe they just felt lucky to be alive, given their circumstances. We have no way of knowing how much pain they were really feeling.

Given this messy picture, I can see the appeal of a simple test that can score pain and help medical professionals understand how best to treat their patients. That’s what is being offered by PainChek, the smartphone app Deena wrote about. The app works by assessing small facial movements, such as lip raises or brow pinches. A user is then required to fill in a separate checklist to identify other signs of pain the patient might be displaying. It seems to work well, and it is already being used in hospitals and care settings.
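
To make that two-part structure concrete, here is a hypothetical sketch of how automated facial indicators and a caregiver checklist might be combined into a single count-based score. The indicator names and the simple summing rule are my own illustrative assumptions; PainChek’s actual model is more sophisticated than this.

```python
# A hypothetical sketch of combining automated facial-movement
# indicators with a caregiver-completed checklist into one score.
# Indicator names and the summing rule are invented for illustration;
# this is not PainChek's actual algorithm.

from dataclasses import dataclass

@dataclass
class PainAssessment:
    facial_indicators: dict[str, bool]  # e.g. flagged by a video model
    checklist_items: dict[str, bool]    # e.g. ticked off by a caregiver

    def score(self) -> int:
        """Count how many pain indicators were observed in total."""
        return sum(self.facial_indicators.values()) + sum(self.checklist_items.values())

assessment = PainAssessment(
    facial_indicators={"brow_pinch": True, "lip_raise": False},
    checklist_items={"groaning": True, "guarding_movement": False},
)
print(assessment.score())  # 2 of 4 indicators observed
```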

But the app is judged against subjective reports of pain. It might be useful for assessing the pain of people who can’t describe it themselves—perhaps because they have dementia, for example—but it won’t add much to assessments from people who can already communicate their pain levels.

There are other complications. Say a test could spot that a person was experiencing pain. What can a doctor do with that information? Perhaps prescribe pain relief—but most of the pain-relieving drugs we have were designed to treat acute, short-term pain. If a person is grimacing from a chronic pain condition, the treatment options are more limited, says Stuart Derbyshire, a pain neuroscientist at the National University of Singapore.

The last time I spoke to Derbyshire was back in 2010, when I covered work by researchers in London who were using brain scans to measure pain. That was 15 years ago. But pain-measuring brain scanners are yet to become a routine part of clinical care.

That brain-scan approach was also built on subjective pain reports. Those reports are, as Derbyshire puts it, “baked into the system.” It’s not ideal, but when it comes down to it, we must rely on these wobbly, malleable, and sometimes incoherent self-descriptions of pain. It’s the best we have.

Derbyshire says he doesn’t think we’ll ever have a “pain-o-meter” that can tell you what a person is truly experiencing. “Subjective report is the gold standard, and I think it always will be,” he says.

Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up. The United States is in fact facing a critical shortage of both embryologists and genetic counselors. 

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. 

Wiemer is the chief scientific officer and head of clinical affairs at Fairtility, a company that uses artificial intelligence to shed light on the viability of eggs and embryos before proceeding with IVF. The company’s algorithm, called CHLOE (for Cultivating Human Life through Optimal Embryos), has been trained on millions of embryo data points and outcomes and can quickly sift through a patient’s embryos to point the clinician to the ones with the highest potential for successful implantation. This, the company claims, will improve time to pregnancy and live births. While its effectiveness has been tested only retrospectively to date, CHLOE is the first and only FDA-approved AI tool for embryo assessment. 

Current challenge 

When a patient undergoes IVF, the goal is to make genetically normal embryos. Embryologists collect cells from each embryo and send them off for external genetic testing. The results of this biopsy can take up to two weeks, and the process can add thousands of dollars to the treatment cost. Moreover, passing the screen just means an embryo has the correct number of chromosomes. That number doesn’t necessarily reflect the overall health of the embryo. 

“An embryo has one singular function, and that is to divide,” says Wiemer. “There are millions of data points concerning embryo cell division, cell division characteristics, area and size of the inner cell mass, and the number of times the trophectoderm [the layer that contributes to the future placenta] contracts.”

The AI model allows for a group of embryos to be constantly measured against the optimal characteristics at each stage of development. “What CHLOE answers is: How well did that embryo develop? And does it have all the necessary components that are needed in order to make a healthy implantation?” says Wiemer. CHLOE produces an AI score reflecting all the analysis that’s been done within an embryo. 
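
To picture that stage-by-stage comparison, here is a hypothetical sketch of scoring an embryo’s time-lapse measurements against reference ranges. The features, ranges, and the fraction-based score are invented for illustration; CHLOE’s actual proprietary model is trained on millions of data points and is far more sophisticated.

```python
# A hypothetical illustration of measuring an embryo's development
# against "optimal" reference ranges at each stage. All features,
# ranges, and the scoring rule are invented for illustration; they
# are not CHLOE's actual model.

REFERENCE_RANGES = {
    # stage -> feature -> (low, high) illustrative bounds
    "day3": {"cell_count": (6, 10), "fragmentation_pct": (0, 10)},
    "day5": {"inner_cell_mass_area": (2000, 4000)},  # arbitrary units
}

def embryo_score(measurements: dict) -> float:
    """Fraction of measured features that fall inside their reference range."""
    checks = [
        low <= measurements[stage][feature] <= high
        for stage, features in REFERENCE_RANGES.items()
        for feature, (low, high) in features.items()
        if feature in measurements.get(stage, {})
    ]
    return sum(checks) / len(checks) if checks else 0.0

print(embryo_score({"day3": {"cell_count": 8, "fragmentation_pct": 5},
                    "day5": {"inner_cell_mass_area": 3500}}))  # 1.0
```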

In the near future, Wiemer says, reducing the percentage of abnormal embryos that IVF clinics transfer to patients should not require a biopsy: “Every embryology laboratory will be doing automatic assessments of embryo development.” 

A changing field

Wiemer, who started his career in animal science, says the difference between animal embryology and human embryology is the extent of paperwork. “Embryologists spend 40% of their time on non-embryology skills,” he adds. “AI will allow us to declutter the embryology field so we can get back to being true scientists.” This means spending more time studying the embryos, ensuring that they are developing normally, and using all that newfound information to get better at picking which embryos to transfer. 

“CHLOE is like having a virtual assistant in the lab to help with embryo selection, ensure conditions are optimal, and send out reports to patients and clinical staff,” he says. “Getting to study data and see what impacts embryo development is extremely rewarding, given that this capability was impossible a few years ago.” 

Amanda Smith is a freelance journalist and writer reporting on culture, society, human interest, and technology.

AI could predict who will have a heart attack

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups like Bunkerhill Health, Nanox.AI, and HeartLung Technologies are applying AI algorithms to screen millions of CT scans for early signs of heart disease. This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale while raising thorny questions about implementation and even how we define disease. 

Last year, an estimated 20 million Americans had chest CT scans done, after an event like a car accident or to screen for lung cancer. Frequently, these scans show evidence of coronary artery calcium (CAC), a marker for heart attack risk that is buried or goes unmentioned in a radiology report focused on ruling out bony injuries, life-threatening internal trauma, or cancer.

Dedicated testing for CAC remains an underutilized method of predicting heart attack risk. Over decades, plaque in heart arteries moves through its own life cycle, hardening from lipid-rich residue into calcium. Heart attacks themselves typically occur when younger, lipid-rich plaque unpredictably ruptures, kicking off a cascade of clotting and inflammation that ultimately blocks the heart’s blood supply. Calcified plaque is generally stable, but finding CAC suggests that younger, more rupture-prone plaque is likely present too. 

Coronary artery calcium can often be spotted on chest CTs, and its concentration can be subjectively described. Normally, quantifying a person’s CAC score involves obtaining a heart-specific CT scan. Algorithms that calculate CAC scores from routine chest CTs, however, could massively expand access to this metric. In practice, these algorithms could then be deployed to alert patients and their doctors about abnormally high scores, encouraging them to seek further care. Today, the footprint of the startups offering AI-derived CAC scores is not large, but it is growing quickly. As their use grows, these algorithms may identify high-risk patients who are traditionally missed or who are on the margins of care. 
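
For a sense of what “quantifying” means here: CAC is conventionally quantified with the Agatston method, in which each calcified lesion contributes its area multiplied by a weight based on its peak density. The simplified sketch below illustrates that arithmetic. The article doesn’t specify which scoring method the startups’ algorithms use, and real implementations work on calibrated CT slices rather than pre-extracted lesion lists.

```python
# A simplified sketch of Agatston-style CAC scoring: each calcified
# lesion contributes area (mm^2) times a weight based on its peak
# attenuation in Hounsfield units (HU). Real implementations operate
# on calibrated CT slices; this assumes lesions are already extracted.

def density_weight(peak_hu: float) -> int:
    """Standard Agatston weight for a lesion's peak attenuation."""
    if peak_hu < 130: return 0   # below the calcium detection threshold
    if peak_hu < 200: return 1
    if peak_hu < 300: return 2
    if peak_hu < 400: return 3
    return 4

def agatston_score(lesions: list) -> float:
    """lesions: (area_mm2, peak_hu) pairs found across the coronary arteries."""
    return sum(area * density_weight(hu) for area, hu in lesions)

# Two small lesions: 4 mm^2 peaking at 250 HU, and 2 mm^2 peaking at 450 HU.
print(agatston_score([(4.0, 250.0), (2.0, 450.0)]))  # 4*2 + 2*4 = 16.0
```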

Historically, CAC scans were believed to have marginal benefit and were marketed to the worried well. Even today, most insurers won’t cover them. Attitudes, though, may be shifting. More expert groups are endorsing CAC scores as a way to refine cardiovascular risk estimates and persuade skeptical patients to start taking statins. 

The promise of AI-derived CAC scores is part of a broader trend toward mining troves of medical data to spot otherwise undetected disease. But while it seems promising, the practice raises plenty of questions. For example, CAC scores haven’t proved useful as a blunt instrument for universal screening. A 2022 Danish study evaluating a population-based program showed no benefit in mortality rates for patients who had undergone CAC screening tests. If AI delivered this information automatically, would the calculus really shift? 

And with widespread adoption, abnormal CAC scores will become common. Who follows up on these findings? “Many health systems aren’t yet set up to act on incidental calcium findings at scale,” says Nishith Khandwala, the cofounder of Bunkerhill Health. Without a standard procedure for doing so, he says, “you risk creating more work than value.” 

There’s also the question of whether these AI-generated scores would actually improve patient care. For a symptomatic patient, a CAC score of zero may offer false reassurance. For an asymptomatic patient with a high CAC score, the next steps remain uncertain: beyond statins, it isn’t clear whether such patients would benefit from starting costly cholesterol-lowering drugs such as Repatha or other PCSK9 inhibitors. High scores may also encourage some to pursue unnecessary and costly downstream procedures that could even end up doing harm. Currently, AI-derived CAC scoring is not reimbursed as a separate service by Medicare or most insurers; the business case for this technology today, in effect, rests on these potentially perverse incentives.

At a fundamental level, this approach could actually change how we define disease. Adam Rodman, a hospitalist and AI expert at Beth Israel Deaconess Medical Center in Boston, has observed that AI-derived CAC scores share similarities with the “incidentaloma,” a term coined in the 1980s to describe unexpected findings on CT scans. In both cases, the normal pattern of diagnosis—in which doctors and patients deliberately embark on testing to figure out what’s causing a specific problem—was fundamentally disrupted. But, as Rodman notes, incidentalomas were still found by humans reviewing the scans.

Now, he says, we are entering an era of “machine-based nosology,” in which algorithms define diseases on their own terms. As machines make more diagnoses, they may catch things we miss. But Rodman and I began to wonder whether a two-tiered diagnostic future might emerge, in which “haves” pay for brand-name algorithms while “have-nots” settle for lesser alternatives.

For patients who have no risk factors or are detached from regular medical care, an AI-derived CAC score could potentially catch problems earlier and rewrite the script. But how these scores reach people, what is done about them, and whether they can ultimately improve patient outcomes at scale remain open questions. For now—holding the pen as they toggle between patients and algorithmic outputs—clinicians still matter. 

Vishal Khetpal is a fellow in cardiovascular disease. The views expressed in this article do not represent those of his employers. 

This retina implant lets people with vision loss do a crossword puzzle

Science Corporation—a competitor to Neuralink founded by the former president of Elon Musk’s brain-interface venture—has leapfrogged its rival after acquiring, at a fire-sale price, a vision implant that’s in advanced testing.

The implant produces a form of “artificial vision” that lets some patients read text and do crosswords, according to a report published in the New England Journal of Medicine today.

The implant is a microelectronic chip placed under the retina. Using signals from a camera mounted on a pair of glasses, the chip emits bursts of electricity in order to bypass photoreceptor cells damaged by macular degeneration, the leading cause of vision loss in elderly people.

“The magnitude of the effect is what’s notable,” says José-Alain Sahel, a University of Pittsburgh vision scientist who led testing of the system, which is called PRIMA. “There’s a patient in the UK and she is reading the pages of a regular book, which is unprecedented.”  

Until last year, the device was being developed by Pixium Vision, a French startup cofounded by Sahel, which faced bankruptcy after it couldn’t raise more cash.  

That’s when Science Corporation swooped in to purchase the company’s assets for about €4 million ($4.7 million), according to court filings.

“Science was able to buy it for very cheap just when the study was coming out, so it was good timing for them,” says Sahel. “They could quickly access very advanced technology that’s closer to the market, which is good for a company to have.”

Science was founded in 2021 by Max Hodak, the first president of Neuralink, after his sudden departure from that company. Since its founding, Science has raised around $290 million, according to the venture capital database Pitchbook, and used the money to launch broad-ranging exploratory research on brain interfaces and new types of vision treatments.

“The ambition here is to build a big, standalone medical technology company that would fit in with an Apple, Samsung, or an Alphabet,” Hodak said in an interview at Science’s labs in Alameda, California, in September. “The goal is to change the world in important ways … but we need to make money in order to invest in these programs.”

By acquiring the PRIMA implant program, Science effectively vaulted past years of development and testing. The company has requested approval to sell the eye chip in Europe and is in discussions with regulators in the US.

Unlike Neuralink’s implant, which records brain signals so paralyzed recipients can use their thoughts to move a computer mouse, the retina chip sends information into the brain to produce vision. Because the retina is an outgrowth of the brain, the chip qualifies as a type of brain-computer interface.

Artificial vision systems have been studied for years, and one, called the Argus II, even reached the market and was installed in the eyes of about 400 people. But that product was later withdrawn after it proved to be a money-loser, according to Cortigent, the company that now owns that technology.

Thirty-eight patients in Europe received a PRIMA implant in one eye. On average, the study found, they were able to read five additional lines on a vision chart—the kind with rows of letters, each smaller than the last. Some of that improvement was due to what Sahel calls “various tricks” like using a zoom function, which allows patients to zero in on text they want to read.

The type of vision loss being treated with the new implant is called geographic atrophy, in which patients have peripheral vision but can’t make out objects directly in front of them, like words or faces. According to Prevent Blindness, an advocacy organization, this type of central vision loss affects around one in 10 people over 80.  

The implant’s design dates back 20 years to Daniel Palanker, a laser expert and now a professor at Stanford University, who says his breakthrough was realizing that light beams could supply both energy and information to a chip placed under the retina. Other implants, like the Argus II, use a wire, which adds complexity.

“The chip has no brains at all. It just turns light into electrical current that flows into the tissue,” says Palanker. “Patients describe the color they see as yellowish blue or sun color.”

The system works using a wearable camera that records a scene and then blasts bright infrared light into the eye, using a wavelength humans can’t see. That light hits the chip, which is covered by “what are basically tiny solar panels,” says Palanker. “We just try to replace the photoreceptors with a photo-array.”

A diagram of how a visual scene could be represented by a retinal implant.
COURTESY SCIENCE CORPORATION

The current system produces about 400 spots of vision, which lets users make out the outlines of words and objects. Palanker says a next-generation device will have five times as many “pixels” and should let people see more: “What we discovered in the trial is that even though you stimulate individual pixels, patients perceive it as continuous. The patient says ‘I see a line,’ ‘I see a letter.’”
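The information-carrying step is easier to picture with a toy example. The Python sketch below shows the general idea of reducing a camera frame to a coarse grid of intensities that could drive a roughly 400-site photovoltaic array. It is an assumption-laden illustration, not Science’s actual processing pipeline, which has not been published.

import numpy as np

# Toy version of the camera-to-chip mapping described above. A grayscale
# frame is averaged down to a 20 x 20 grid (about 400 sites, matching the
# current implant), and each cell's brightness would set the intensity of
# the infrared light projected onto the corresponding photovoltaic pixel.

def frame_to_stimulation(frame, grid=20):
    """Reduce a camera frame to a grid x grid pattern of IR intensities."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    pattern = blocks.mean(axis=(1, 3))    # average brightness per block
    spread = pattern.max() - pattern.min()
    return (pattern - pattern.min()) / (spread if spread else 1.0)

frame = np.random.rand(480, 640)          # stand-in for one camera frame
print(frame_to_stimulation(frame).shape)  # -> (20, 20), ~400 "pixels"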

Palanker says it will be important to keep improving the system because “the market size depends on the quality of the vision produced.”

When Pixium teetered on insolvency, Palanker says, he helped search for a buyer, meeting with Hodak. “It was a fire sale, not a celebration,” he says. “But for me it’s a very lucky outcome, because it means the product is going forward. And the purchase price doesn’t really matter, because there’s a big investment needed to bring it to market. It’s going to cost money.”  

Photo of the PRIMA Glasses and Pocket Processor.
The PRIMA artificial vision system has a battery pack/controller and an eye-mounted camera.
COURTESY SCIENCE CORPORATION

During a visit to Science’s headquarters, Hodak described the company’s effort to redesign the system into something sleeker and more user-friendly. In the original design, in addition to the wearable camera, the patient has to carry around a bulky controller containing a battery and laser, as well as buttons to zoom in and out. 

But Science has already prototyped a version in which those electronics are squeezed into what look like an extra-large pair of sunglasses.

“The implant is great, but we’ll have new glasses on patients fairly shortly,” Hodak says. “This will substantially improve their ability to have it with them all day.” 

Other companies also want to treat blindness with brain-computer interfaces, but some think it might be better to send signals directly into the brain. This year, Neuralink has been touting plans for “Blindsight,” a project to send electrical signals directly into the brain’s visual cortex, bypassing the retina entirely. It has yet to test the approach in a person.