How our genome is like a generative AI model

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

What does the genome do? You might have heard that it is a blueprint for an organism. Or that it’s a bit like a recipe. But building an organism is much more complex than constructing a house or baking a cake.

This week I came across an idea for a new way to think about the genome—one that borrows from the field of artificial intelligence. Two researchers are arguing that we should think about it as being more like a generative model, a form of AI that can generate new things.

You might be familiar with such AI tools—they’re the ones that can create text, images, or even films from various prompts. Do our genomes really work in the same way? It’s a fascinating idea. Let’s explore.

When I was at school, I was taught that the genome is essentially a code for an organism. It contains the instructions needed to make the various proteins we need to build our cells and tissues and keep them working. It made sense to me to think of the human genome as being something like a program for a human being.

But this metaphor falls apart once you start to poke at it, says Kevin Mitchell, a neurogeneticist at Trinity College in Dublin, Ireland, who has spent a lot of time thinking about how the genome works.

A computer program is essentially a sequence of instructions executed in order. If the genome worked that way, development would follow a fixed script: first build a brain, then a head, then a neck, and so on. That’s just not how things work.

Another popular metaphor likens the genome to a blueprint for the body. But a blueprint is essentially a plan for what a structure should look like when it is fully built, with each part of the diagram representing a bit of the final product. Our genomes don’t work this way either.

It’s not as if you’ve got a gene for an elbow and a gene for an eyebrow. Multiple genes are involved in the development of multiple body parts. The functions of genes can overlap, and the same genes can work differently depending on when and where they are active. It’s far more complicated than a blueprint.

Then there’s the recipe metaphor. In some ways, this is more accurate than the analogy of a blueprint or program. It might be helpful to think about our genes as a set of ingredients and instructions, and to bear in mind that the final product is also at the mercy of variations in the temperature of the oven or the type of baking dish used, for example. Identical twins are born with the same DNA, after all, but they are often quite different by the time they’re adults.

But the recipe metaphor is too vague, says Mitchell. Instead, he and his colleague Nick Cheney at the University of Vermont are borrowing concepts from AI to capture what the genome does. Mitchell points to generative AI models like Midjourney and DALL-E, both of which can generate images from text prompts. These models work by capturing elements of existing images to create new ones.

Say you write a prompt for an image of a horse. The models have been trained on a huge number of images of horses, and these images are essentially compressed to allow the models to capture certain elements of what you might call “horsiness.” The AI can then construct a new image that contains these elements.

We can think about genetic data in a similar way. According to this model, we might consider evolution to be the training data. The genome is the compressed data—the set of information that can be used to create the new organism. It contains the elements we need, but there’s plenty of scope for variation. (There are lots more details about the various aspects of the model in the paper, which has not yet been peer-reviewed.)
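
To make the compression idea concrete, here is a minimal, purely illustrative sketch of a variational autoencoder in PyTorch. It is not the model from Mitchell and Cheney's paper, and the "training data" below is just random numbers standing in for horse images, but it shows the two steps the analogy rests on: squeeze the training data into a small latent code, then sample that code to generate new outputs. In the genome analogy, evolution plays the role of the training loop and the genome plays the role of the compressed code.

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    """A toy 'compress, then generate' model: the encoder squeezes data into a
    small latent code; the decoder turns codes back into data-like outputs."""
    def __init__(self, data_dim=64, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)      # center of the compressed code
        self.to_logvar = nn.Linear(32, latent_dim)  # spread of the compressed code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # sample a code
        return self.decoder(z), mu, logvar

vae = TinyVAE()
optimizer = torch.optim.Adam(vae.parameters(), lr=1e-3)
training_data = torch.randn(256, 64)  # stand-in for a pile of "horse images"

# "Training": learn to reconstruct the data through the narrow latent bottleneck.
for _ in range(200):
    recon, mu, logvar = vae(training_data)
    recon_loss = nn.functional.mse_loss(recon, training_data)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 0.01 * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Generation": draw fresh latent codes and decode them into new samples that
# share statistical elements of the training data without copying any one example.
with torch.no_grad():
    new_samples = vae.decoder(torch.randn(8, 4))
print(new_samples.shape)  # torch.Size([8, 64])
```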

Mitchell thinks it’s important to get our metaphors in order when we think about the genome. New technologies are allowing scientists to probe ever deeper into our genes and the roles they play. They can now study how all the genes are expressed in a single cell, for example, and how this varies across every cell in an embryo.

“We need to have a conceptual framework that will allow us to make sense of that,” says Mitchell. He hopes that the concept will aid the development of mathematical models that might help us better understand the intricate relationships between genes and the organisms they end up being part of—in other words, exactly how components of our genome contribute to our development.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

Last year, researchers built a new human genome reference designed to capture the diversity among us. They called it the “pangenome,” as Antonio Regalado reported.

Generative AI has taken the world by storm. Will Douglas Heaven explored six big questions that will determine the future of the technology.

A Disney director tried to use AI to generate a soundtrack in the style of Hans Zimmer. It wasn’t as good as the real thing, as Melissa Heikkilä found.

Melissa has also reported on how much energy it takes to create an image using generative AI. Turns out it’s about the same as charging your phone. 

What is AI? No one can agree, as Will found in his recent deep dive on the topic.

From around the web

Evidence from more than 1,400 rape cases in Maryland, some dating as far back as 1977, is set to be processed by the end of the year, thanks to a new law. The state still has more than 6,000 untested rape kits. (ProPublica)

How well is your brain aging? A new tool has been designed to estimate a person’s brain age from an MRI scan, and it accounts for the possible effects of traumatic brain injuries. (NeuroImage)

Iran has reported the country’s first locally acquired cases of dengue, a viral infection spread by mosquitoes. There are concerns it could spread. (WHO)

IVF is expensive, and add-ons like endometrial scratching (which literally involves scratching the lining of the uterus) are not supported by strong evidence. Is the fertility industry profiting from vulnerability? (The Lancet)

Up to 2 million Americans are getting their supply of weight loss drugs like Wegovy or Zepbound from compounding pharmacies. They’re a fraction of the price of brand-name Big Pharma drugs, but there are some safety concerns. (KFF Health News)

Why we need safeguards against genetic discrimination

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

A couple of years ago, I spat into a little plastic tube, stuck it in the post, and waited for a company to analyze markers on my DNA to estimate how biologically old I am. It’s not the first time I’ve shared my genetic data for a story. Over a decade ago, I shared a DNA sample with a company that promised to tell me about my ancestry.

Of course, I’m not the only one. Tens of millions of people have shipped their DNA off to companies offering to reveal clues about their customers’ health or ancestry, or even to generate tailored diet or exercise advice. And then there are all the people who have had genetic tests as part of their clinical care, under a doctor’s supervision. Add it all together, and there’s a hell of a lot of genetic data out there.

It isn’t always clear how secure this data is, or who might end up getting their hands on it—and how that information might affect people’s lives. I don’t want my insurance provider or my employer to make decisions about my future on the basis of my genetic test results, for example. Scientists, ethicists and legal scholars aren’t clear on the matter either. They are still getting to grips with what genetic discrimination entails—and how we can defend against it.

If we’re going to protect ourselves from genetic discrimination, we first have to figure out what it is. Unfortunately, no one has a good handle on how widespread it is, says Yann Joly, director of the Centre of Genomics and Policy at McGill University in Quebec. And that’s partly because scientists keep defining it in different ways. In a paper published last month, Joly and his colleagues listed 12 different definitions that have been used in various studies since the 1990s. So what is it?

“I see genetic discrimination as a child of eugenics practices,” says Joly. Modern eugenics, which took off in the late 19th century, was all about limiting the ability of some people to pass on their genes to future generations. Those who were considered “feeble minded” or “mentally defective” could be flung into institutions, isolated from the rest of the population, and forced or coerced into having procedures that left them unable to have children. Disturbingly, some of these practices have endured. Between fiscal years 2005-2006 and 2012-2013, 144 women in California’s prisons were sterilized—many without informed consent.

These cases are thankfully rare. In recent years, ethicists and policymakers have been more worried about the potential misuse of genetic data by health-care and insurance providers. There have been instances in which people have been refused health insurance or life insurance on the basis of a genetic result, such as one that predicts the onset of Huntington’s disease. (In the UK, where I live, life insurance providers are not meant to ask for a genetic test or use the results of one—unless the person has tested positive for Huntington’s.)

Joly is collecting reports of suspected discrimination in his role at the Genetic Discrimination Observatory, a network of researchers working on the issue. He tells me that in one recent report, a woman wrote about her experience after she had been referred to a new doctor. This woman had previously taken a genetic test that revealed she would not respond well to certain medicines. Her new doctor told her he would only take her on as a patient if she first signed a waiver releasing him of any responsibility over her welfare if she didn’t follow the advice generated by her genetic test.

“It’s unacceptable,” says Joly. “Why would you sign a waiver because of a genetic predisposition? We’re not asking people with cancer to [do so]. As soon as you start treating people differently because of genetic factors … that’s genetic discrimination.”

Many countries have established laws to protect people from these kinds of discrimination. But these laws, too, vary hugely, both in how they define genetic discrimination and in how they safeguard against it. The law in Canada focuses on DNA, RNA, and chromosome tests, for example. But you don’t always need such a test to know if you’re at risk for a genetic disease. A person might have a family history of a disease or already be showing symptoms of it.

And then there are the newer technologies. Take, for example, the kind of test that I took to measure my biological age. Many aging tests measure either chemical biomarkers in the body or epigenetic markers on the DNA—not necessarily the DNA itself. These tests are meant to indicate how close a person is to death. You might not want your life insurance provider to know or act on the results of those, either.

Joly and his colleagues have come up with a new definition. And they’ve kept it broad. “The narrower the definition, the easier it is to get around it,” he says. He wanted to avoid excluding the experiences of any people who feel they’ve experienced genetic discrimination. Here it is:

“Genetic discrimination involves an individual or a group being negatively treated, unfairly profiled or harmed, relative to the rest of the population, on the basis of actual or presumed genetic characteristics.”

It will be up to policymakers to decide how to design laws around genetic discrimination. And it won’t be simple. The laws may need to look different in different countries, depending on what technologies are available and how they are being used. Perhaps some governments will want to ensure that residents have access to these technologies, while others may choose to limit access. In some cases, a health-care provider may need to make decisions about a person’s care based on their genetic results.

In the meantime, Joly has advice for anyone worried about genetic discrimination. First, don’t let such concerns keep you from having a genetic test that you might need for your own health. As things stand, the risk of being discriminated against on the basis of these tests is still quite small.

And when it comes to consumer genetic testing, it’s worth looking closely at the company’s terms and conditions to find out how your data might be shared or used. It is also useful to look up the safeguarding laws in your own country or state, which can give you a good idea of when you’re within your rights to refuse to share your data.

Shortly after I received the results from my genetic tests, I asked the companies involved to delete my data. It’s not a foolproof approach—last year, hackers stole personal data on 6.9 million 23andMe customers—but at least it’s something. Just this week I was offered yet another genetic test. I’m still thinking on it.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

As of 2019, more than 26 million people had undertaken a consumer genetic test, as my colleague Antonio Regalado found. The number is likely to have grown significantly since then.
 
Some companies say they can build a picture of what a person looks like on the basis of DNA alone. The science is questionable, as Tate Ryan-Mosley found when she covered one such company.
 
The results of a genetic test can have profound consequences, as Golda Arthur found when a test revealed she had a genetic mutation that put her at risk of ovarian cancer. Arthur, whose mother developed the disease, decided to undergo the prophylactic removal of her ovaries and fallopian tubes. 
 
Tests that measure biological age were selected by readers as our 11th breakthrough technology of 2022. You can read more about them here.
 
The company that gave me an estimate of my biological age later reanalyzed my data (before I had deleted it). That analysis suggested that my brain and liver were older than they should be. Great.

From around the web:

Over the past few decades, doctors have implanted electrodes deep into the brains of a growing number of people, usually to treat disorders like epilepsy and Parkinson’s disease. We still don’t really know how they work, or how long they last. (Neuromodulation)

A ban on female genital mutilation will be upheld in the Gambia following a vote by the country’s National Assembly. The decision “reaffirm[s the country’s] commitments to human rights, gender equality, and protecting the health and well-being of girls and women,” directors of UNICEF, UNFPA, WHO, UN Women, and the UN High Commissioner for Human Rights said in a joint statement. (WHO)

Weight-loss drugs that work by targeting the GLP-1 receptor, like Wegovy and Saxenda, are in high demand—and there’s not enough to go around. Other countries could follow Switzerland’s lead to make the drugs more affordable and accessible, but only for the people who really need them. (JAMA Internal Medicine)

J.D. Vance, Donald Trump’s running mate, has ties to the pharmaceutical industry and has an evolving health-care agenda. (STAT)

Psilocybin, the psychedelic compound in magic mushrooms, can disrupt the way regions of our brains communicate with each other. And the effect can last for weeks. (The Guardian)

IVF alone can’t save us from a looming fertility crisis

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

I’ve just learned that July 11 is World Population Day. There are over 8 billion of us on the planet, and there’ll probably be 8.5 billion of us by 2030. We’re continually warned about the perils of overpopulation and the impact we humans are having on our planet. So it seems a bit counterintuitive to worry that, actually, we’re not reproducing enough.

But plenty of scientists are incredibly worried about just that. Improvements in health care and sanitation are helping us all lead longer lives. But we’re not having enough children to support us as we age. Fertility rates are falling in almost every country.

But wait! We have technologies to solve this problem! IVF is helping to bring more children into the world than ever, and it can help compensate for the fertility problems faced by older parents! Unfortunately, things aren’t quite so simple. Research suggests that these technologies can only take us so far. If we want to make real progress, we also need to work on gender equality.

Researchers tend to look at fertility in terms of how many children the average woman has in her lifetime. To maintain a stable population, this figure, known as the total fertility rate (TFR), needs to be around 2.1.

But this figure has been falling over the last 50 years. In Europe, for example, women born in 1939 had a TFR of 2.3—but the figure has dropped to 1.7 for women born in 1981 (who are 42 or 43 years old by now). “We can summarize [the last 50 years] in three words: ‘declining,’ ‘late,’ and ‘childlessness,’” Gianpiero Dalla Zuanna, a professor of demography at the University of Padua in Italy, told an audience at the annual meeting of the European Society of Human Reproduction and Embryology earlier this week.

There are a lot of reasons behind this decline. Around one in six people is affected by infertility, and globally, many people aren’t having as many children as they would like. On the other hand, more people are choosing to live child-free. Others are delaying starting a family, perhaps because they face soaring living costs and have been unable to afford their own homes. Some hesitate to have children because they are concerned about the future. With the ongoing threat of global wars and climate change, who can blame them? 

There are financial as well as social consequences to this fertility crisis. We’re already seeing fewer young people supporting a greater number of older ones. And it’s not sustainable.

“Europe today has 10% of the population, 20% of gross domestic product, and 50% of the welfare expense of the world,” Dalla Zuanna said at the meeting. Twenty years from now, there will be 20% fewer people of reproductive age than there are today, he warned.

It’s not just Europe that will be affected. The global TFR in 2021 was 2.2—less than half the figure in 1950, when it was 4.8. By one recent estimate, the global fertility rate is declining at a rate of 1.1% per year. Some countries are facing especially steep declines: In 2021, the TFR in South Korea was just 0.8—well below the 2.1 needed to maintain the population. If this decline continues, we can expect the global TFR to hit 1.83 by 2050 and 1.59 by 2100.

So what’s the solution? Fertility technologies like IVF and egg freezing have been touted as one potential remedy. More people than ever are using these technologies to conceive. An IVF baby is born somewhere in the world every 35 seconds. And IVF can indeed help us overcome some fertility issues, including those that can arise for people starting a family after the age of 35. IVF is already involved in 5% to 10% of births in high-income countries. “IVF has got to be our solution, you would think,” said Georgina Chambers, who directs the National Perinatal Epidemiology and Statistics Unit at UNSW Sydney in Australia, in another talk at ESHRE.

Unfortunately, technology is unlikely to solve the fertility crisis anytime soon, as Chambers’s own research shows. A handful of studies suggest that the use of assisted reproductive technologies (ART) can only increase the total fertility rate of a country by around 1% to 5%. The US sits at the lower end of this scale—it is estimated that in 2020, the use of ART increased the fertility rate by about 1.3%. In Australia, however, ART boosted the fertility rate by 5%.

Why the difference? It all comes down to accessibility. IVF can be prohibitively expensive in the US—without insurance covering the cost, a single IVF cycle can cost around half a person’s annual disposable income. Compare that to Australia, where would-be parents get plenty of government support, and an IVF cycle costs just 6% of the average annual disposable income.

In another study, Chambers and her colleagues found that ART can help restore fertility to some extent in women who try to have children later in life. It’s difficult to be precise here, because it’s hard to tell whether some of the births that followed IVF would have happened eventually without the technology.

Either way, IVF and other fertility technologies are not a cure-all. And overselling them as such risks encouraging people to further delay starting a family, says Chambers. There are other ways to address the fertility crisis.

Dalla Zuanna and his colleague Maria Castiglioni believe that countries with low fertility rates, like their home country Italy, need to boost the number of people of reproductive age. “The only possibility [of achieving this] in the next 20 years is to increase immigration,” Castiglioni told an audience at ESHRE.

Several countries have used “pronatalist” policies to encourage people to have children. Some involve financial incentives: Families in Japan are eligible for one-off payments and monthly allowances for each child, as part of a scheme that was recently extended. Australia has implemented a similar “baby bonus.”

“These don’t work,” Chambers said. “They can affect the timing and spacing of births, but they are short-lived. And they are coercive: They negatively affect gender equity and reproductive and sexual rights.”

But family-friendly policies can work. In the past, the fall in fertility rates was linked to women’s increasing participation in the workforce. That’s not the case anymore. Today, higher female employment rates are linked to higher fertility rates, according to Chambers. “Fertility rises when women combine work and family life on an equal footing with men,” she said at the meeting. Gender equality, along with policies that support access to child care and parental leave, can have a much bigger impact.

These policies won’t solve all our problems. But we need to acknowledge that technology alone won’t solve the fertility crisis. And if the solution involves improving gender equality, surely that’s a win-win.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

My colleague Antonio Regalado discussed how reproductive technology might affect population decline with Martin Varsavsky, director of the Prelude Fertility network of clinics, in a roundtable on the future of families earlier this year.

There are new fertility technologies on the horizon. I wrote about the race to generate lab-grown sperm and eggs from adult skin cells, for example. Scientists have already created artificial eggs and sperm from mouse cells and used them to create mouse pups. Artificial human sex cells are next.

Advances like these could transform the way we understand parenthood. Some researchers believe we’re not far from being able to create babies with multiple genetic parents or none at all, as I wrote in a previous edition of The Checkup.

Elizabeth Carr became America’s first IVF baby when she was born in 1981. Now she works at a company that offers genetic tests for embryos, enabling parents to choose those with the highest health scores.

Some people are already concerned about maintaining human populations beyond planet Earth. The Dutch entrepreneur Egbert Edelbroek wants to try IVF in space. “Humanity needs a backup plan,” he told Scott Solomon in October last year. “If you want to be a sustainable species, you want to be a multiplanetary species.”

We have another roundtable discussion coming up with Antonio later this month. You can join him for a discussion about CRISPR and the future of gene editing. “CRISPR Babies: Six years later” takes place on Thursday, July 25, and is a subscriber-only online event. You can register for free.

From around the web

When a Bitcoin mining facility moved into the Granbury area in Texas, local residents started complaining of strange new health problems. They believe the noisy facility might be linked to their migraines, panic attacks, heart palpitations, chest pain, and hypertension. (Time)

In the spring of 1997, 20 volunteers agreed to share their DNA for the Human Genome Project, an ambitious effort to publish a reference human genome. They were told researchers expected that “no more than 10% of the eventual DNA sequence will have been obtained from [each person’s] DNA.” But when the draft was published in 2001, nearly 75% of it came from just one person. Ashley Smart reports on the ethical questions surrounding the project. (Undark)

How can you make cultured meat taste more like the real thing? Scientists have developed “flavor scaffolds” that can release a meaty taste when cultured meat is cooked. The resulting product looks like a meaty pink jelly. Bon appétit! (Nature)

Doctors can continue their medical education by taking courses throughout their careers. Some of these are funded by big tobacco companies. They really shouldn’t be, argue these doctors from Stanford and the University of California. (JAMA)

“Skin care = brain care”? Maybe, if you believe the people behind the burgeoning industry of neurocosmetics. (The Atlantic)

AI lie detectors are better than humans at spotting lies

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Can you spot a liar? It’s a question I imagine has been on a lot of minds lately, in the wake of various televised political debates. Research has shown that we’re generally pretty bad at telling a truth from a lie.
 
Some believe that AI could help improve our odds, and do better than dodgy, old-fashioned techniques like polygraph tests. AI-based lie detection systems could one day be used to help us sift fact from fake news, evaluate claims, and potentially even spot fibs and exaggerations in job applications. The question is whether we will trust them. And whether we should.

In a recent study, Alicia von Schenk and her colleagues developed a tool that was significantly better than people at spotting lies. Von Schenk, an economist at the University of Würzburg in Germany, and her team then ran some experiments to find out how people used it. In some ways, the tool was helpful—the people who made use of it were better at spotting lies. But it also led people to make a lot more accusations.

In their study published in the journal iScience, von Schenk and her colleagues asked volunteers to write statements about their weekend plans. Half the time, people were incentivized to lie; a believable yet untrue statement was rewarded with a small financial payout. In total, the team collected 1,536 statements from 768 people.
 
They then used 80% of these statements to train an algorithm on lies and truths, using Google’s AI language model BERT. When they tested the resulting tool on the final 20% of statements, they found it could successfully tell whether a statement was true or false 67% of the time. That’s significantly better than a typical human; we usually only get it right around half the time.
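
For readers curious what that pipeline looks like in code, here is a rough, hypothetical sketch using the open-source Hugging Face transformers library. The statements and labels below are invented placeholders rather than the study's data, and the training loop is pared down to the minimum; it simply mirrors the recipe the paper describes: fine-tune BERT on 80% of labelled statements and test on the held-out 20%.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder statements (not the study's data): label 1 = truthful, 0 = lie.
statements = ["I will hike with my sister on Saturday.",
              "I am flying to the moon this weekend."] * 50
labels = [1, 0] * 50

split = int(0.8 * len(statements))  # 80% for training, 20% held out for testing
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def encode(texts, ys):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(ys)))

train_loader = DataLoader(encode(statements[:split], labels[:split]),
                          batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Fine-tune BERT as a two-class (truth vs. lie) classifier; one epoch suffices here.
model.train()
for input_ids, attention_mask, y in train_loader:
    loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=y).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluate on the held-out 20%; on real data the paper reports roughly 67% accuracy.
model.eval()
test_enc = tokenizer(statements[split:], padding=True, truncation=True,
                     return_tensors="pt")
with torch.no_grad():
    predictions = model(**test_enc).logits.argmax(dim=-1)
accuracy = (predictions == torch.tensor(labels[split:])).float().mean()
print(f"held-out accuracy: {accuracy:.2f}")
```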
 
To find out how people might make use of AI to help them spot lies, von Schenk and her colleagues split 2,040 other volunteers into smaller groups and ran a series of tests.
 
One test revealed that when people are given the option to pay a small fee to use an AI tool that can help them detect lies—and earn financial rewards—they still aren’t all that keen on using it. Only a third of the volunteers given that option decided to use the AI tool, possibly because they’re skeptical of the technology, says von Schenk. (They might also be overly optimistic about their own lie-detection skills, she adds.)
 
But that one-third of people really put their trust in the technology. “When you make the active choice to rely on the technology, we see that people almost always follow the prediction of the AI… they rely very much on its predictions,” says von Schenk.

This reliance can shape our behavior. Normally, people tend to assume others are telling the truth. That was borne out in this study—even though the volunteers knew half of the statements were lies, they only marked out 19% of them as such. But that changed when people chose to make use of the AI tool: the accusation rate rose to 58%.
 
In some ways, this is a good thing—these tools can help us spot more of the lies we come across in our lives, like the misinformation we encounter on social media.
 
But it’s not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgements is the deterioration of social bonds, is it worth it?
 
And then there’s the question of accuracy. In their study, von Schenk and her colleagues were only interested in creating a tool that was better than humans at lie detection. That isn’t too difficult, given how terrible we are at it. But she also imagines a tool like hers being used to routinely assess the truthfulness of social media posts, or to hunt for fake details in a job applicant’s résumé or interview responses. In cases like these, it’s not enough for a technology to just be “better than human” if it’s going to be making more accusations.
 
Would we be willing to accept an accuracy rate of 80%, where only four out of every five assessed statements would be correctly interpreted as true or false? Would even 99% accuracy suffice? I’m not sure.
 
It’s worth remembering the fallibility of historical lie detection techniques. The polygraph was designed to measure heart rate and other signs of “arousal” because it was thought that some signs of stress were unique to liars. They’re not. And we’ve known that for a long time. That’s why lie detector results are generally not admissible in US court cases. Despite that, polygraph tests have endured in some settings, and they have caused plenty of harm when used to hurl accusations at people who fail them on reality TV shows.
 
Imperfect AI tools stand to have an even greater impact because they are so easy to scale, says von Schenk. You can only polygraph so many people in a day. The scope for AI lie detection is almost limitless by comparison.
 
“Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies,” says von Schenk. “However, you really need to test them—you need to make sure they are substantially better than humans.” If an AI lie detector is generating a lot of accusations, we might be better off not using it at all, she says.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

AI lie detectors have also been developed to look for facial patterns of movement and “microgestures” associated with deception. As Jake Bittle puts it: “the dream of a perfect lie detector just won’t die, especially when glossed over with the sheen of AI.”
 
On the other hand, AI is also being used to generate plenty of disinformation. As of October last year, generative AI was already being used in at least 16 countries to “sow doubt, smear opponents, or influence public debate,” as Tate Ryan-Mosley reported.
 
The way AI language models are developed can heavily influence the way that they work. As a result, these models have picked up different political biases, as my colleague Melissa Heikkilä covered last year.
 
AI, like social media, has the potential for good or ill. In both cases, the regulatory limits we place on these technologies will determine which way the sword falls, argue Nathan E. Sanders and Bruce Schneier.
 
Chatbot answers are all made up. But there’s a tool that can give a reliability score to large language model outputs, helping users work out how trustworthy they are. Or, as Will Douglas Heaven put it in an article published a few months ago, a BS-o-meter for chatbots.

From around the web

Scientists, ethicists and legal experts in the UK have published a new set of guidelines for research on synthetic embryos, or, as they call them, “stem cell-based embryo models (SCBEMs).” There should be limits on how long they are grown in labs, and they should not be transferred into the uterus of a human or animal, the guidelines state. They also note that if, in the future, these structures look like they might have the potential to develop into a fetus, we should stop calling them “models” and instead refer to them as “embryos.”

Antimicrobial resistance is already responsible for 700,000 deaths every year, and could claim 10 million lives per year by 2050. Overuse of broad-spectrum antibiotics is partly to blame. Is it time to tax these drugs to limit demand? (International Journal of Industrial Organization)

Spaceflight can alter the human brain, reorganizing gray and white matter and causing the brain to shift upwards in the skull. We need to better understand these effects, and the impact of cosmic radiation on our brains, before we send people to Mars. (The Lancet Neurology)

The vagus nerve has become an unlikely star of social media, thanks to influencers who drum up the benefits of stimulating it. Unfortunately, the science doesn’t stack up. (New Scientist)

A hospital in Texas is set to become the first in the country to enable doctors to see their patients via hologram. Crescent Regional Hospital in Lancaster has installed Holobox—a system that projects a life-sized hologram of a doctor for patient consultations. (ABC News)

How AI video games can help reveal the mysteries of the human mind

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

This week I’ve been thinking about thought. It was all brought on by reading my colleague Niall Firth’s recent cover story about the use of artificial intelligence in video games. The piece describes how game companies are working to incorporate AI into their products to create more immersive experiences for players.

These companies are applying large language models to generate new game characters with detailed backstories—characters that could engage with a player in any number of ways. Enter a few personality traits, catchphrases, and other details, and you can create a background character capable of endless unscripted, never-repeating conversations with you.
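
The basic mechanics are easy to picture. Here is a generic, hypothetical sketch using the OpenAI Python client; it is not any studio's actual tooling, and the character's name, traits, and model choice are all made up for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A handful of made-up details is enough to seed a background character.
npc = {
    "name": "Mara",
    "traits": "gruff dockworker, superstitious, secretly fond of poetry",
    "catchphrase": "Tide waits for no one.",
}

system_prompt = (
    f"You are {npc['name']}, a background character in a video game. "
    f"Personality: {npc['traits']}. Favorite saying: '{npc['catchphrase']}'. "
    "Stay in character and keep every reply under two sentences."
)

history = [{"role": "system", "content": system_prompt}]

def npc_reply(player_line: str) -> str:
    """Send the player's line plus the running conversation; return the NPC's reply."""
    history.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(npc_reply("Seen anything strange on the docks tonight?"))
```

Because the running conversation is passed back in on every call, the character can keep improvising new, unscripted lines indefinitely rather than cycling through canned dialogue.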

This is what got me thinking. Neuroscientists and psychologists have long been using games as research tools to learn about the human mind. Numerous video games have been either co-opted or especially designed to study how people learn, navigate, and cooperate with others, for example. Might AI video games allow us to probe more deeply, and unravel enduring mysteries about our brains and behavior?

I decided to call up Hugo Spiers to find out. Spiers is a neuroscientist at University College London who has been using a game to study how people find their way around. In 2016, Spiers and his colleagues worked with Deutsche Telekom and the games company Glitchers to develop Sea Hero Quest, a mobile video game in which players have to navigate a sea in a boat. They have since been using the game to learn more about how people lose navigational skills in the early stages of Alzheimer’s disease.

The use of video games in neuroscientific research kicked into gear in the 1990s, Spiers tells me, following the release of 3D games like Wolfenstein 3D and Duke Nukem. “For the first time, you could have an entirely simulated world in which to test people,” he says.

Scientists could observe and study how players behaved in these games: how they explored their virtual environment, how they sought rewards, how they made decisions. And research volunteers didn’t need to travel to a lab—their gaming behavior could be observed from wherever they happened to be playing, whether that was at home, at a library, or even inside an MRI scanner.

For scientists like Spiers, one of the biggest advantages of using games in research is that people want to play them. The use of games allows scientists to explore fundamental experiences like fun and curiosity. Researchers often offer a small financial incentive to volunteers who take part in their studies. But they don’t have to pay people to play games, says Spiers.

You’re much more likely to have fun if you’re motivated. It’s just not quite the same when you’re doing something purely for the money. And not having to pay participants allows researchers to perform huge studies on smaller budgets. Spiers has been able to collect data on over 4 million people from 195 countries, all of whom have willingly played Sea Hero Quest.  

AI could help researchers go even further. A rich, immersive world filled with characters that interact in realistic ways could help them study how our minds respond to various social settings and how we relate to other individuals. By observing how players interact with AI characters, scientists can learn more about how we cooperate—and compete—with others. It would be far cheaper and easier than hiring actors to engage with research volunteers, says Spiers.

Spiers himself is interested in learning how people hunt, whether for food, clothes, or a missing pet. “We still use these bits of our brain that our ancestors would have used daily, and of course some traditional communities still hunt,” he tells me. “But we know almost nothing about how the brain does this.” He envisions using AI-driven nonplayer characters to learn more about how humans cooperate for hunting.

There are other, newer questions to explore. At a time when people are growing attached to “virtual companions,” and an increasing number of AI girlfriends and boyfriends are being made available, AI video-game characters could also help us understand these novel relationships. “People are forming a relationship with an artificial agent,” says Spiers. “That’s inherently interesting. Why would you not want to study that?”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

My fellow London-based colleagues had a lot of fun generating an AI game character based on Niall. He turned out to be a sarcastic, smug, and sassy monster.

Google DeepMind has developed a generative AI model that can produce a basic but playable video game from a short description, a hand-drawn sketch, or a photo, as my colleague Will Heaven wrote earlier this year. The resulting games look a bit like Super Mario Bros.

Today’s world is undeniably gamified, argues Bryan Gardiner. He explores how we got here in another article from the Play issue of the magazine.

Large language models behave in unexpected ways. And no one really knows why, as Will wrote in March.

Technologies can be used to study the brain in lots of different ways—some of which are much more invasive than others. Tech that aims to read your mind and probe your memories is already being used, as I wrote in a previous edition of The Checkup.

From around the web:

Bad night of sleep left you needing a pick-me-up? Scientists have designed an algorithm to deliver tailored sleep-and-caffeine-dosing schedules to help tired individuals “maximize the benefits of limited sleep opportunities and consume the least required amount of caffeine.” (Yes, it may have been developed with the US Army in mind, but surely we all stand to benefit?) (Sleep)

Is dog cloning a sweet way to honor the memory of a dearly departed pet, or a “frivolous and wasteful and ethically obnoxious” pursuit in which humans treat living creatures as nothing more than their own “stuff”? This feature left me leaning toward the latter view, especially after learning that people tend to like having dogs with health problems … (The New Yorker)

States that have enacted the strongest restrictions on abortion access have also seen prescriptions for oral contraceptives plummet, according to new research. (Mother Jones)

And another study has linked Texas’s 2021 ban on abortion in early pregnancy with an increase in the number of infant deaths recorded in the state. In 2022, across the rest of the US, the number of infant deaths ascribed to anomalies present at birth decreased by 3.1%. In Texas, this figure increased by 22.9%. (JAMA Pediatrics)

We are three months into the bird flu outbreak in US dairy cattle. But the country still hasn’t implemented a sufficient testing infrastructure and doesn’t fully understand how the virus is spreading. (STAT)

Should social media come with a health warning?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Earlier this week, the US surgeon general, also known as the “nation’s doctor,” authored an article making the case that health warnings should accompany social media. The goal: to protect teenagers from its harmful effects. “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” Vivek Murthy wrote in a piece published in the New York Times. “Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.”

His concern instinctively resonates with me. I’m in my late 30s, and even I can end up feeling a lot worse about myself after a brief stint on Instagram. I have two young daughters, and I worry about how I’ll respond when they reach adolescence and start asking for access to whatever social media site their peers are using. My children already have a fascination with cell phones; the eldest, who is almost six, will often come into my bedroom at the crack of dawn, find my husband’s phone, and somehow figure out how to blast “Happy Xmas (War Is Over)” at full volume.

But I also know that the relationship between this technology and health isn’t black and white. Social media can affect users in different ways—often positively. So let’s take a closer look at the concerns, the evidence behind them, and how best to tackle them.

Murthy’s concerns aren’t new, of course. In fact, almost any time we are introduced to a new technology, some will warn of its potential dangers. Innovations like the printing press, radio, and television all had their critics back in the day. In 2009, the Daily Mail linked Facebook use to cancer.

More recently, concerns about social media have centered on young people. There’s a lot going on in our teenage years as our brains undergo maturation, our hormones shift, and we explore new ways to form relationships with others. We’re thought to be more vulnerable to mental-health disorders during this period too. Around half of such disorders are thought to develop by the age of 14, and suicide is the fourth-leading cause of death in people aged between 15 and 19, according to the World Health Organization. Many have claimed that social media only makes things worse.

Reports have variously cited cyberbullying, exposure to violent or harmful content, and the promotion of unrealistic body standards, for example, as potential key triggers of low mood and disorders like anxiety and depression. There have also been several high-profile cases of self-harm and suicide with links to social media use, often involving online bullying and abuse. Just this week, the suicide of an 18-year-old in Kerala, India, was linked to cyberbullying. And children have died after taking part in dangerous online challenges made viral on social media, whether from inhaling toxic substances, consuming ultra-spicy tortilla chips, or choking themselves.

Murthy’s new article follows an advisory on social media and youth mental health published by his office in 2023. The 25-page document, which lays out some of the known benefits and harms of social media use as well as the “unknowns,” was intended to raise awareness of social media as a health issue. The problem is that things are not entirely clear-cut.

“The evidence is currently quite limited,” says Ruth Plackett, a researcher at University College London who studies the impact of social media on mental health in young people. A lot of the research on social media and mental health is correlational. It doesn’t show that social media use causes mental health disorders, Plackett says.

The surgeon general’s advisory cites some of these correlational studies. It also points to survey-based studies, including one looking at mental well-being among college students after the rollout of Facebook in the mid-2000s. But even if you accept the authors’ conclusion that Facebook had a negative impact on the students’ mental health, it doesn’t mean that other social media platforms will have the same effect on other young people. Even Facebook, and the way we use it, has changed a lot in the last 20 years.

Other studies have found that social media has no effect on mental health. In a study published last year, Plackett and her colleagues surveyed 3,228 children in the UK to see how their social media use and mental well-being changed over time. The children were first surveyed when they were aged between 12 and 13, and again when they were 14 to 15 years old.

Plackett expected to find that social media use would harm the young participants. But when she conducted the second round of questionnaires, she found that was not the case. “Time spent on social media was not related to mental-health outcomes two years later,” she tells me.

Other research has found that social media use can be beneficial to young people, especially those from minority groups. It can help some avoid loneliness, strengthen relationships with their peers, and find a safe space to express their identities, says Plackett. Social media isn’t only for socializing, either. Today, young people use these platforms for news, entertainment, school, and even (in the case of influencers) business.

“It’s such a mixed bag of evidence,” says Plackett. “I’d say it’s hard to draw much of a conclusion at the minute.”

In his article, Murthy calls for a warning label to be applied to social media platforms, stating that “social media is associated with significant mental-health harms for adolescents.”

But while Murthy draws comparisons to the effectiveness of warning labels on tobacco products, bingeing on social media doesn’t have the same health risks as chain-smoking cigarettes. We have plenty of strong evidence linking smoking to a range of diseases, including gum disease, emphysema, and lung cancer, among others. We know that smoking can shorten a person’s life expectancy. We can’t make any such claims about social media, no matter what was written in that Daily Mail article.

Health warnings aren’t the only way to prevent any potential harms associated with social media use, as Murthy himself acknowledges. Tech companies could go further in reducing or eliminating violent and harmful content, for a start. And digital literacy education could help inform children and their caregivers how to alter the settings on various social media platforms to better control the content children see, and teach them how to assess the content that does make it to their screens.

I like the sound of these measures. They might even help me put an end to the early-morning Christmas songs. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

Bills designed to make the internet safer for children have been popping up across the US. But individual states take different approaches, leaving the resulting picture a mess, as Tate Ryan-Mosley explored.

Dozens of US states sued Meta, the parent company of Facebook, last October. As Tate wrote at the time, the states claimed that the company knowingly harmed young users, misled them about safety features and harmful content, and violated laws on children’s privacy.  

China has been implementing increasingly tight controls over how children use the internet. In August last year, the country’s cyberspace administrator issued detailed guidelines that include, for example, a rule to limit use of smart devices to 40 minutes a day for children under the age of eight. And even that use should be limited to content about “elementary education, hobbies and interests, and liberal arts education.” My colleague Zeyi Yang had the story in a previous edition of his weekly newsletter, China Report.

Last year, TikTok set a 60-minute-per-day limit for users under the age of 18. But the Chinese domestic version of the app, Douyin, has even tighter controls, as Zeyi wrote last March.

One way that social media can benefit young people is by allowing them to express their identities in a safe space. Filters that superficially alter a person’s appearance to make it more feminine or masculine can help trans people play with gender expression, as Elizabeth Anne Brown wrote in 2022. She quoted Josie, a trans woman in her early 30s. “The Snapchat girl filter was the final straw in dropping a decade’s worth of repression,” Josie said. “[I] saw something that looked more ‘me’ than anything in a mirror, and I couldn’t go back.”

From around the web

Could gentle shock waves help regenerate heart tissue? A trial of what’s being dubbed a “space hairdryer” suggests the treatment could help people recover from bypass surgery. (BBC)

“We don’t know what’s going on with this virus coming out of China right now.” Anthony Fauci gives his insider account of the first three months of the covid-19 pandemic. (The Atlantic)

Microplastics are everywhere. It was only a matter of time before scientists found them in men’s penises. (The Guardian)

Is the singularity nearer? Ray Kurzweil believes so. He also thinks medical nanobots will allow us to live beyond 120. (Wired)

Biotech companies are trying to make milk without cows

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. They’re doing it by engineering yeasts and plants with bovine genes so they make the key proteins responsible for milk’s color, satisfying taste, and nutritional punch.

The proteins they’re copying are casein, a floppy polymer that’s the most abundant protein in milk and is what makes pizza cheese stretch, and whey, a nutritious combo of essential amino acids that’s often used in energy powders.

It’s part of a larger trend of replacing animals with ingredients grown in labs, steel vessels, or plant crops. Think of the Impossible burger, the veggie patty made mouthwatering with the addition of heme, a component of blood that’s produced in the roots of genetically modified soybeans.

One of the milk innovators is Remilk, an Israeli startup founded in 2019, which has engineered yeast so it will produce beta-lactoglobulin (the main component of whey). Company cofounder Ori Cohavi says a single biotech factory of bubbling yeast vats feeding on sugar could in theory “replace 50,000 to 100,000 cows.” 

Remilk has been making trial batches and is testing ways to formulate the protein with plant oils and sugar to make spreadable cheese, ice cream, and milk drinks. So yes, we’re talking “processed” food—one partner is a local Coca-Cola bottler, and advising the company are former executives of Nestlé, Danone, and PepsiCo.

But regular milk isn’t exactly so natural either. At milking time, animals stand inside elaborate robots, and it looks for all the world as if they’re being abducted by aliens. “The notion of a cow standing in some nice green scenery is very far from how we get our milk,” says Cohavi. And there are environmental effects: cattle burp methane, a potent greenhouse gas, and a lactating cow needs to drink around 40 gallons of water a day.

“There are hundreds of millions of dairy cows on the planet producing greenhouse waste, using a lot of water and land,” says Cohavi. “It can’t be the best way to produce food.”  

For biotech ventures trying to displace milk, the big challenge will be keeping their own costs of production low enough to compete with cows. Dairies get government protections and subsidies, and they don’t only make milk. Dairy cows are eventually turned into gelatin, McDonald’s burgers, and the leather seats of your Range Rover. Not much goes to waste.

At Alpine Bio, a biotech company in San Francisco (also known as Nobell Foods), researchers have engineered soybeans to produce casein. While not yet cleared for sale, the beans are already being grown on USDA-sanctioned test plots in the Midwest, says Alpine’s CEO, Magi Richani.

Richani chose soybeans because they’re already a major commodity and the cheapest source of protein around. “We are working with farmers who are already growing soybeans for animal feed,” she says. “And we are saying, ‘Hey, you can grow this to feed humans.’ If you want to compete with a commodity system, you have to have a commodity crop.”

Alpine intends to crush the beans, extract the protein, and—much like Remilk—sell the ingredient to larger food companies.

Everyone agrees that cow’s milk will be difficult to displace. It holds a special place in the human psyche, and we owe civilization itself, in part, to domesticated animals. In fact, they’ve left their mark in our genes, with many of us carrying DNA mutations that make cow’s milk easier to digest.

But that’s why it might be time for the next technological step, says Richani. “We raise 60 billion animals for food every year, and that is insane. We took it too far, and we need options,” she says. “We need options that are better for the environment, that overcome the use of antibiotics, and that overcome the disease risk.”

It’s not clear yet whether the bird flu outbreak on dairy farms is a big danger to humans. But making milk without cows would definitely cut the risk that an animal virus will cause a new pandemic. As Richani says: “Soybeans don’t transmit diseases to humans.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Hungry for more from the frontiers of fromage? In the Build issue of our print magazine, Andrew Rosenblum tasted a yummy brie made only from plants. Harder to swallow was the claim by developer Climax Foods that its cheese was designed using artificial intelligence.

The idea of using yeast to create food ingredients, chemicals, and even fuel via fermentation is one of the dreams of synthetic biology. But it’s not easy. In 2021, we raised questions about high-flying startup Ginkgo Bioworks. This week its stock hit an all-time low of $0.49 per share as the company struggles to make … well, anything.

This spring, I traveled to Florida to watch attempts to create life in a totally new way: using a synthetic embryo made in a lab. The action involved cattle at the animal science department of the University of Florida, Gainesville.


From around the web

How many human bird flu cases are there? No one knows, because there’s barely any testing. Scientists warn we’re flying blind as US dairy farms struggle with an outbreak. (NBC)  

Moderna, one of the companies behind the covid-19 shots, is seeing early success with a cancer vaccine. It uses the same basic technology: gene messages packed into nanoparticles. (Nature)

It’s the covid-19 theory that won’t go away. This week the New York Times published an op-ed arguing that the virus was the result of a lab accident. We previously profiled the author, Alina Chan, who is a scientist with the Broad Institute. (NYTimes)

Sales of potent weight loss drugs, like Ozempic, are booming. But it’s not just humans who are overweight. Now the pet care industry is dreaming of treating chubby cats and dogs, too. (Bloomberg)

What’s next for bird flu vaccines

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Here in the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure. 

The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.

The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs. Yes, chickens. One of the birds that’s susceptible to avian flu. (Talk about putting all our eggs in one basket. #sorrynotsorry)

This week in The Checkup, let’s look at why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.

The idea to grow flu virus in fertilized chicken eggs originated with Frank Macfarlane Burnet, an Australian virologist. In 1936, he discovered that if he bored a tiny hole in the shell of a chicken egg and injected flu virus between the shell and the inner membrane, he could get the virus to replicate.  

Even now, we still grow flu virus in much the same way. “I think a lot of it has to do with the infrastructure that’s already there,” says Scott Hensley, an immunologist at the University of Pennsylvania’s Perelman School of Medicine. It’s difficult for companies to pivot. 

The process works like this: Health officials provide vaccine manufacturers with a candidate vaccine virus that matches circulating flu strains. That virus is injected into fertilized chicken eggs, where it replicates for several days. The virus is then harvested, killed (for most use cases), purified, and packaged. 

Making flu vaccine in eggs has a couple of major drawbacks. For a start, the virus doesn’t always grow well in eggs. So the first step in vaccine development is creating a virus that does. That happens through an adaptation process that can take weeks or even months. This process is particularly tricky for bird flu: Viruses like H5N1 are deadly to birds, so the virus might end up killing the embryo before the egg can produce much virus. To avoid this, scientists have to develop a weakened version of the virus by combining genes from the bird flu virus with genes typically used to produce seasonal flu virus vaccines. 

And then there’s the problem of securing enough chickens and eggs. Right now, many egg-based production lines are focused on producing vaccines for seasonal flu. They could switch over to bird flu, but “we don’t have the capacity to do both,” Amesh Adalja, an infectious disease specialist at Johns Hopkins University, told KFF Health News. The US government is so worried about its egg supply that it keeps secret, heavily guarded flocks of chickens peppered throughout the country. 

Most of the flu virus used in vaccines is grown in eggs, but there are alternatives. The seasonal flu vaccine Flucelvax, produced by CSL Seqirus, is grown in a cell line derived in the 1950s from the kidney of a cocker spaniel. The virus used in the seasonal flu vaccine FluBlok, made by Protein Sciences, isn’t grown; it’s synthesized. Scientists engineer an insect virus to carry the gene for hemagglutinin, a key component of the flu virus that triggers the human immune system to create antibodies against it. That engineered virus turns insect cells into tiny hemagglutinin production plants.   

And then we have mRNA vaccines, which wouldn’t require vaccine manufacturers to grow any virus at all. There aren’t yet any approved mRNA vaccines for influenza, but many companies are fervently working on them, including Pfizer, Moderna, Sanofi, and GSK. “With the covid vaccines and the infrastructure that’s been built for covid, we now have the capacity to ramp up production of mRNA vaccines very quickly,” says Hensley. This week, the Financial Times reported that the US government will soon close a deal with Moderna to provide tens of millions of dollars to fund a large clinical trial of a bird flu vaccine the company is developing.

There are hints that egg-free vaccines might work better than egg-based vaccines. A CDC study published in January showed that people who received Flucelvax or FluBlok had more robust antibody responses than those who received egg-based flu vaccines. That may be because viruses grown in eggs sometimes acquire mutations that help them grow better in eggs. Those mutations can change the virus so much that the immune response generated by the vaccine doesn’t work as well against the actual flu virus that’s circulating in the population. 

Hensley and his colleagues are developing an mRNA vaccine against bird flu. So far they’ve only tested it in animals, but the shot performed well, he claims. “All of our preclinical studies in animals show that these vaccines elicit a much stronger antibody response compared with conventional flu vaccines.”

No one can predict when we might need a pandemic flu vaccine. But just because bird flu hasn’t made the jump to a pandemic doesn’t mean it won’t. “The cattle situation makes me worried,” Hensley says. Humans are in constant contact with cows, he explains. While there have only been a couple of human cases so far, “the fear is that some of those exposures will spark a fire.” Let’s make sure we can extinguish it quickly. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In a previous issue of The Checkup, Jessica Hamzelou explained what it would take for bird flu to jump to humans. And last month, after bird flu began circulating in cows, I posted an update that looked at strategies to protect people and animals.

I don’t have to tell you that mRNA vaccines are a big deal. In 2021, MIT Technology Review highlighted them as one of the year’s 10 breakthrough technologies. Antonio Regalado explored their massive potential to transform medicine. Jessica Hamzelou wrote about the other diseases researchers are hoping to tackle. I followed up with a story after two mRNA researchers won a Nobel Prize. And earlier this year I wrote about a new kind of mRNA vaccine that’s self-amplifying, meaning it not only works at lower doses, but also sticks around for longer in the body. 

From around the web

Researchers installed a literal window into the brain, allowing for ultrasound imaging that they hope will be a step toward less invasive brain-computer interfaces. (Stat)

People who carry antibodies against the common viruses used to deliver gene therapies can mount a dangerous immune response if they’re re-exposed. That means many people are ineligible for these therapies and others can’t get a second dose. Now researchers are hunting for a solution. (Nature)

More good news about Ozempic. A new study shows that the drug can cut the risk of kidney complications, including death, in people with diabetes and chronic kidney disease. (NYT)

Microplastics are everywhere. Including testicles. (Scientific American)

Must read: This story, the second in a series on the denial of reproductive autonomy for people with sickle-cell disease, examines how the US medical system undermines a woman’s right to choose. (Stat)

Splashy breakthroughs are exciting, but people with spinal cord injuries need more

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

This week, I wrote about an external stimulator that delivers electrical pulses to the spine to help improve hand and arm function in people who are paralyzed. This isn’t a cure. In many cases the gains were relatively modest. One participant said it increased his typing speed from 23 words a minute to 35. Another participant was newly able to use scissors with his right hand. A third used her left hand to release a seatbelt.

The study didn’t garner as much media attention as previous, much smaller studies that focused on helping people with paralysis walk. Tech that allows people to type slightly faster or put their hair in a ponytail unaided just doesn’t have the same allure. “The image of a paralyzed person getting up and walking is almost biblical,” Charles Liu, director of the Neurorestoration Center at the University of Southern California, once told a reporter. 

For the people who have spinal cord injuries, however, incremental gains can have a huge impact on quality of life. 

So today in The Checkup, let’s talk about this tech and who it serves.

In 2004, Kim Anderson-Erisman, a researcher at Case Western Reserve University who also happens to be paralyzed, surveyed more than 600 people with spinal cord injuries. Wanting to better understand their priorities, she asked them to consider seven different functions—everything from hand and arm mobility to bowel and bladder function to sexual function. She asked respondents to rank these functions according to how big an impact recovery would have on their quality of life.

Walking was one of the functions, but it wasn’t the top priority for most people. Most quadriplegics put hand and arm function at the top of the list. For paraplegics, meanwhile, the top priority was sexual function. I interviewed Anderson-Erisman for a story I wrote in 2019 about research on implantable stimulators as a way to help people with spinal cord injuries walk. For many people, “not being able to walk is the easy part of spinal cord injury,” she told me. “[If] you don’t have enough upper-extremity strength or ability to take care of yourself independently, that’s a bigger problem than not being able to walk.” 

One of the research groups I focused on was at the University of Louisville. When I visited in 2019, the team had recently made the news because two people with spinal cord injuries in one of their studies had regained the ability to walk, thanks to an implanted stimulator. “Experimental device helps paralyzed man walk the length of four football fields,” one headline had trumpeted.

But when I visited one of those participants, Jeff Marquis, in his condo in Louisville, I learned that walking was something he could only do in the lab. To walk, he needed to hold onto parallel bars supported by other people and wear a harness to catch him if he fell. Even if he had extra help at home, there wasn’t enough room for the apparatus. Instead, he gets around his condo the same way he gets around outside his condo: in a wheelchair. Marquis does stand at home, but even that requires a bulky frame. And the standing he does is only for therapy. “I mostly just watch TV while I’m doing that,” he said.

That’s not to say the tech has been useless. The implant helped Marquis gain some balance, stamina, and trunk stability. “Trunk stability is kind of underrated in how much easier that makes every other activity I do,” he told me. “That’s the biggest thing that stays with me when I have [the stimulator] turned off.”  

What’s exciting to me about this latest study is that the tech gave the participants skills they could use beyond the lab. And because the stimulator is external, it is likely to be more accessible and vastly cheaper than an implanted device. Yes, the newly enabled movements are small, but if you listen to the palpable excitement of one study participant as he demonstrates how he can move a small ball into a cup, you’ll appreciate that incremental gains are far from insignificant. “There [are] no miracles in spinal injury, but tiny gains can be life-changing,” said Melanie Reid, one of the participants in the latest trial, at a press conference last week.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In 2017, we hailed electronic interfaces designed to reverse paralysis by reconnecting the brain and body as one of our breakthrough technologies. Antonio Regalado has the story.

An implanted stimulator changed John Mumford’s life, allowing him to once again grasp objects after a spinal cord injury left him paralyzed. But when the company that made the device folded, Mumford was left with few options for keeping the device running. “Limp limbs can be reanimated by technology, but they can be quieted again by basic market economics,” wrote Brian Bergstein in 2015. 

In 2014, Courtney Humphries covered some of the rat research that laid the foundation for the technological developments that have allowed paralyzed people to walk. 

From around the web

Lots of bird flu news this week. A second person in the US has tested positive for the illness after working with infected livestock. (NBC)

The livestock industry, which depends on shipping tens of millions of live animals, provides some ideal conditions for the spread of pathogens, including bird flu. (NYT)

Long read: How the death of a nine-year-old boy in Cambodia triggered a global H5N1 alert. (NYT)

You’ve heard about tracking viruses via wastewater. H5N1 is the first one we’re tracking via store-bought milk. (Stat)

The first organ transplants from pigs to humans have not ended well, but scientists are learning valuable lessons about what they need to do better. (Nature)

Another long read that’s worth your time: an inside look at just how long 3M knew about the pervasiveness of “forever chemicals.” (New Yorker)

How cuddly robots could change dementia care

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, I scoured the internet in search of a robotic dog. I wanted a belated birthday present for my aunt, who was recently diagnosed with Alzheimer’s disease. Studies suggest that having a companion animal can stave off some of the loneliness, anxiety, and agitation that come with Alzheimer’s. My aunt would love a real dog, but she can’t have one.

That’s how I discovered the Golden Pup from Joy for All. It cocks its head. It sports a jaunty red bandana. It barks when you talk. It wags when you touch it. It has a realistic heartbeat. And it’s just one of the many, many robots designed for people with Alzheimer’s and dementia.

This week on The Checkup, join me as I go down a rabbit hole. Let’s look at the prospect of using robots to change dementia care.


As robots go, Golden Pup is decidedly low-tech. It retails for $140. For around $6,000 you can opt for Paro, a fluffy robotic baby seal developed in Japan, which can sense touch, light, sound, temperature, and posture. Its manufacturer says it develops its own character, remembering behaviors that led its owner to give it attention.

Golden Pup and Paro are available now. But researchers are working on much more sophisticated robots for people with cognitive disorders—devices that leverage AI to converse and play games. Researchers from Indiana University Bloomington are tweaking a commercially available robot system called QT to serve people with dementia and Alzheimer’s. The researchers’ two-foot-tall robot looks a little like a toddler in an astronaut suit. Its round white head holds a screen that displays two eyebrows, two eyes, and a mouth that together form a variety of expressions. The robot engages people in conversation, asking AI-generated questions to keep them talking.

The AI model they’re using isn’t perfect, and neither are the robot’s responses. In one awkward conversation, a study participant told the robot that she has a sister. “I’m sorry to hear that,” the robot responded. “How are you doing?”

But as large language models improve—which is happening already—so will the quality of the conversations. When the QT robot made that awkward comment, it was running OpenAI’s GPT-3, which was released in 2020. The company’s latest model, GPT-4o, released this week, is faster and allows for more seamless conversations. You can interrupt the conversation, and the model will adjust.

The idea of using robots to keep dementia patients engaged and connected isn’t always an easy sell. Some people see it as an abdication of our social responsibilities. And then there are privacy concerns. The best robotic companions are personalized. They collect information about people’s lives, learn their likes and dislikes, and figure out when to approach them. That kind of data collection can be unnerving, not just for patients but also for medical staff. Lillian Hung, creator of the Innovation in Dementia care and Aging (IDEA) lab at the University of British Columbia in Vancouver, Canada, told one reporter about an incident that happened during a focus group at a care facility. She and her colleagues popped out for lunch. When they returned, they found that staff had unplugged the robot and placed a bag over its head. “They were worried it was secretly recording them,” she said.

On the other hand, robots have some advantages over humans in talking to people with dementia. Their attention doesn’t flag. They don’t get annoyed or angry when they have to repeat themselves. They can’t get stressed. 

What’s more, there are increasing numbers of people with dementia, and too few people to care for them. According to the latest report from the Alzheimer’s Association, we’re going to need more than a million additional care workers to meet the needs of people living with dementia between 2021 and 2031. That is the largest gap between labor supply and demand for any single occupation in the United States.

Have you been in an understaffed or poorly staffed memory care facility? I have. Patients are often sedated to make them easier to deal with. They get strapped into wheelchairs and parked in hallways. We barely have enough care workers to take care of the physical needs of people with dementia, let alone provide them with social connection and an enriching environment.

“Caregiving is not just about tending to someone’s bodily concerns; it also means caring for the spirit,” writes Kat McGowan in this beautiful Wired story about her parents’ dementia and the promise of social robots. “The needs of adults with and without dementia are not so different: We all search for a sense of belonging, for meaning, for self-actualization.”

If robots can enrich the lives of people with dementia even in the smallest way, and if they can provide companionship where none exists, that’s a win.

“We are currently at an inflection point, where it is becoming relatively easy and inexpensive to develop and deploy [cognitively assistive robots] to deliver personalized interventions to people with dementia, and many companies are vying to capitalize on this trend,” write a team of researchers from the University of California, San Diego, in a 2021 article in Proceedings of We Robot. “However, it is important to carefully consider the ramifications.”

Many of the more advanced social robots may not be ready for prime time, but the low-tech Golden Pup is readily available. My aunt’s illness has been progressing rapidly, and she occasionally gets frustrated and agitated. I’m hoping that Golden Pup might provide a welcome (and calming) distraction. Maybe it will spark joy during a time that has been incredibly confusing and painful for my aunt and uncle. Or maybe not. Certainly a robotic pup isn’t for everyone. Golden Pup may not be a dog. But I’m hoping it can be a friendly companion.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Robots are cool, and with new advances in AI they might also finally be useful around the house, writes Melissa Heikkilä. 

Social robots could help make personalized therapy more affordable and accessible to kids with autism. Karen Hao has the story.

Japan is already using robots to help with elder care, but in many cases they require as much work as they save. And reactions among the older people they’re meant to serve are mixed. James Wright wonders whether the robots are “a shiny, expensive distraction from tough choices about how we value people and allocate resources in our societies.” 

From around the web

A tiny probe can work its way through arteries in the brain to help doctors spot clots and other problems. The new tool could help surgeons make diagnoses, decide on treatment strategies, and provide assurance that clots have been removed. (Stat)

Richard Slayman, the first recipient of a pig kidney transplant, has died, although the hospital that performed the transplant says the death doesn’t seem to be linked to the kidney. (Washington Post)

EcoHealth, the virus-hunting nonprofit at the center of covid lab-leak theories, has been banned from receiving federal funding. (NYT)

In a first, scientists report that they can translate brain signals into speech without any vocalization or mouth movements, at least for a handful of words. (Nature)