An ancient man’s remains were hacked apart and kept in a garage

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I’ve been working on a story about a brain of glass. About five years ago, archaeologists found shiny black glass fragments inside the skull of a man who died in the Mount Vesuvius eruption of 79 CE. It seems they are pieces of brain, turned to glass.

Scientists have found ancient brains before—some are thought to be at least 10,000 years old. But this is the only time they’ve seen a brain turn to glass. They’ve even been able to spot neurons inside it.

The man’s remains were found at Herculaneum, an ancient city that was buried under meters of volcanic ash following the eruption. We don’t know if there are any other vitrified brains on the site. None have been found so far, but only about a quarter of the city has been excavated.

Some archaeologists want to continue excavating the site. But others argue that we need to protect it. Further digging will expose it to the elements, putting the artifacts and remains at risk of damage. You can only excavate a site once, so perhaps it’s worth waiting until we have the technology to do so in the least destructive way.

After all, there are some pretty recent horror stories of excavations involving angle grinders, and of ancient body parts ending up in garages. Future technologies might eventually make our current approaches look similarly barbaric.

The inescapable fact of fields like archaeology or paleontology is this: When you study ancient remains, you’ll probably end up damaging them in some way. Take, for example, DNA analysis. Scientists have made a huge amount of progress in this field. Today, geneticists can crack the genetic code of extinct animals and analyze DNA in soil samples to piece together the history of an environment.

But this kind of analysis essentially destroys the sample. To perform DNA analysis on human remains, scientists typically cut out a piece of bone and grind it up. They might use a tooth. But once it has been studied, that sample is gone for good.

Archaeological excavations have been performed for hundreds of years, and as recently as the 1950s, it was common for archaeologists to completely excavate a site they discovered. But those digs cause damage too.

Nowadays, when a site is discovered, archaeologists tend to focus on specific research questions they might want to answer, and excavate only enough to answer those questions, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “We will cross our fingers, excavate the minimal amount, and hope that the next generation of archaeologists will have new, better tools and finer abilities to work on stuff like this,” he says.

In general, scientists have also become more careful with human remains. Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, curates his university’s collection of skeletal remains, which he says includes around 1,000 skeletons of medieval and Victorian Britons. The skeletons are extremely valuable for research, says Borrini, who himself has investigated the remains of one person who died from exposure to phosphorus in a match factory and another who was murdered.

When researchers ask to study the skeletons, Borrini will find out whether the research will somehow alter them. “If there is destructive sampling, we need to guarantee that the destruction will be minimal, and that there will be enough material [left] for further study,” he says. “Otherwise we don’t authorize the study.”

If only previous generations of archaeologists had taken a similar approach. Harrison told me the story of the discovery of “St Bees man,” a medieval man found in a lead coffin in Cumbria, UK, in 1981. The man, thought to have died in the 1300s, was found to be extraordinarily well preserved—his skin was intact, his organs were present, and he even still had his body hair.

Normally, archaeologists would dig up such ancient specimens with care, using tools made of natural substances like stone or brick, says Harrison. Not so for St Bees man. “His coffin was opened with an angle grinder,” says Harrison. The man’s body was removed and “stuck in a truck,” where he underwent a standard modern forensic postmortem, he adds.

“His thorax would have been opened up, his organs [removed and] weighed, [and] the top of his head would have been cut off,” says Harrison. Samples of the man’s organs “were kept in [the pathologist’s] garage for 40 years.”

If St Bees man were discovered today, the story would be completely different. The coffin itself would be recognized as a precious ancient artifact that should be handled with care, and the man’s remains would be scanned and imaged in the least destructive way possible, says Harrison.

Even Lindow man, who was discovered a mere three years later in nearby Manchester, got better treatment. His remains were found in a peat bog, and he is thought to have died over 2,000 years ago. Unlike poor St Bees man, he underwent careful scientific investigation, and his remains took pride of place in the British Museum. Harrison remembers going to see the exhibit when he was 10 years old. 

Harrison says he’s dreaming of minimally destructive DNA technologies—tools that might help us understand the lives of long-dead people without damaging their remains. I’m looking forward to covering those in the future. (In the meantime, I’m personally dreaming of a trip to—respectfully and carefully—visit Herculaneum.)


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

Some believe an “ancient-DNA revolution” is underway, as scientists use modern technologies to learn about human, animal, and environmental remains from the past. My colleague Antonio Regalado has the details in his recent feature. The piece was published in the latest edition of our magazine, which focuses on relationships.

Ancient DNA analysis made it to MIT Technology Review’s annual list of top 10 Breakthrough Technologies in 2023. You can read our thoughts on the breakthroughs of 2025 here

DNA that was frozen for 2 million years was sequenced in 2022. The ancient DNA fragments, which were recovered from Greenland, may offer insight into the environment of the polar desert at the time.

Environmental DNA, also known as eDNA, can help scientists assemble a snapshot of all the organisms in a given place. Some are studying samples collected from Angkor Wat in Cambodia, which is believed to have been built in the 12th century.

Others are hoping that ancient DNA can be used to “de-extinct” animals that once lived on Earth. Colossal Biosciences is hoping to resurrect the dodo and the woolly mammoth.

From around the web

Next-generation obesity drugs might be too effective. One trial participant lost 22% of her body weight in nine months. Another lost 30% of his weight in just eight months. (STAT)

A US court upheld the conviction of Elizabeth Holmes, the disgraced founder of the biotechnology company Theranos, who was sentenced to over 11 years for defrauding investors out of hundreds of millions of dollars. Her sentence has since been reduced by two years for good behavior. (The Guardian)

An unvaccinated child died of measles in Texas. The death is the first reported as a result of the outbreak that is spreading in Texas and New Mexico, and the first measles death reported in the US in a decade. Health and Human Services Secretary Robert F. Kennedy Jr. appears to be downplaying the outbreak. (NBC News)

A mysterious disease with Ebola-like symptoms has emerged in the Democratic Republic of Congo. Hundreds of people have been infected in the last five weeks, and more than 50 people have died. (Wired)

Towana Looney has been discharged from the hospital three months after receiving a gene-edited pig kidney. “I’m so grateful to be alive and thankful to have received this incredible gift,” she said. (NYU Langone)

How a volcanic eruption turned a human brain into glass

They look like small pieces of obsidian, smooth and shiny. But a set of small black fragments found inside the skull of a man who died in the eruption of Mount Vesuvius in Southern Italy, in the year 79 CE, are thought to be pieces of his brain—turned to glass.

The discovery, reported in 2020, was exciting because a human brain had never been found in this state. Now, scientists studying his remains believe they’ve found out more details about how the glass fragments were formed: The man was exposed to temperatures of over 500 °C, followed by rapid cooling. These conditions also allowed for the preservation of tiny structures and cells inside his brain. 

“It’s an extraordinary finding,” says Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, who was not involved in the research. “It tells us how [brain] preservation can work … extreme conditions can produce extreme results.” 

Glittering remains

The Roman city of Herculaneum has been covered in ash for many hundreds of years. Excavations over the last few centuries have revealed amazing discoveries of preserved bodies, buildings, furniture, artworks, and even food. They’ve helped archaeologists piece together a picture of what life was like for people living in ancient Rome. But they are still yielding surprises.

Around five years ago, Pier Paolo Petrone, a forensic archaeologist at the University of Naples Federico II, was studying remains first excavated in the 1960s of what is believed to be a 20-year-old man. The man was found inside a building thought to have been a place of worship. Archaeologists believe he may have been guarding the building. He was found lying face down on a wooden bed.

partially excavated remains with the Chest and Skull labelled
The carbonized remains of the deceased individual in their bed in Herculaneum.
GUIDO GIORDANO ET AL./SCIENTIFIC REPORTS

Petrone was documenting the man’s charred bones under a lamp when he noticed something unusual. “I suddenly saw small glassy remains glittering in the volcanic ash that filled the skull,” he tells MIT Technology Review via email. “It had a black appearance and shiny surfaces quite similar to obsidian.”  But, he adds, “unlike obsidian, the glassy remains were extremely brittle and easy to crumble.”

An analysis of the proteins in the sample suggested that the glassy remains were preserved brain tissue. And when Petrone and his colleagues studied bits of the material with microscopes, they were even able to see neurons. “I [was] very excited because I understood that [the preserved brain] was something very unique, never seen before in any other archaeological or forensic context,” he says.

The next question was how the man’s brain turned to glass in the first place, says Guido Giordano, a volcanologist at Roma Tre University in Rome, who was also involved in the research. To find out, he and his colleagues subjected tiny pieces of the glass brain fragments—measuring millimeters wide—to extreme temperatures in the lab. The goal was to identify its “glass transition state”—the temperature at which the material changed from brittle to soft.

sample of vitrified brain

GUIDO GIORDANO ET AL./SCIENTIFIC REPORTS

These experiments suggest that the material is a glass, and that it formed when the temperature dropped from above 510 °C to room temperature, says Giordano. “The heating stage would not have been long. Otherwise the material would have been … cooked, and disappeared,” he says. This, he adds, is probably what happened to the brains of the other people whose remains were found at Herculaneum, which were not preserved.

The short periods of extremely high temperature might have resulted from super-hot volcanic gases and a few centimeters’ worth of ash, which enveloped the city shortly after the eruption and settled. Denser pyroclastic flows from the volcano would have hit the building hours later, possibly after the brain had a chance to rapidly cool down.

“The ash clouds can easily be 500 or 600 degrees … [but] they may quickly pass and quickly vanish,” says Giordano, who, along with his colleagues, published the results in the journal Scientific Reports on Thursday. “That would provide the fast cooling that is required to produce the glass.”

A unique case

No one knows for sure why this young man’s brain was the only one to form glass fragments. It might have been because he was sheltered inside the building, says Giordano. It is thought that most of Herculaneum’s other residents flocked to the city’s shores, hoping to be rescued.

It’s also not clear why the man was found lying face down on a bed. “We don’t know what he was doing,” says Giordano. He might not have been guarding the building at all, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “In a fire, people will end up in rooms they don’t know, because they’re running through smoke,” he says. The conditions may have been similar during the volcanic eruption. “People end up in funny places,” he adds.

Either way, it’s a unique finding. Archaeologists have unearthed ancient human brains before—over 4,400 have been discovered since the mid-17th century. But these samples tend to have been preserved through drying, freezing, or a process called saponification, in which the brains “effectively turn to soap,” says Harrison. He was involved in work on a site in Turkey at which an 8,000-year-old brain was found. That brain appears to have “carbonized” and turned charcoal-like, he says.

Some of the glassy brain fragments remain at the site in Herculaneum, but others are being kept at universities, where scientists plan to continue research on them. Petrone wants to further study the proteins in the samples to learn more about what’s in them.

Holding the fragments feels “quite amazing,” says Giordano. “A few times I stop and think: ‘I’m actually holding a bit of a brain of a human,’” he says. “It can be touching.”

OpenAI just released GPT-4.5 and says it is its biggest and best chat model yet

OpenAI has just released GPT-4.5, a new version of its flagship large language model. The company claims it is its biggest and best model for all-round chat yet. “It’s really a step forward for us,” says Mia Glaese, a research scientist at OpenAI.

Since the releases of its so-called reasoning models o1 and o3, OpenAI has been pushing two product lines. GPT-4.5 is part of the non-reasoning lineup—what Glaese’s colleague Nick Ryder, also a research scientist, calls “an installment in the classic GPT series.”

People with a $200-a-month ChatGPT Pro account can try out GPT-4.5 today. OpenAI says it will begin rolling out to other users next week.

With each release of its GPT models, OpenAI has shown that bigger means better. But there has been a lot of talk about how that approach is hitting a wall—including remarks from OpenAI’s former chief scientist Ilya Sutskever. The company’s claims about GPT-4.5 feel like a thumb in the eye to the naysayers.

All large language models pick up patterns across the billions of documents they are trained on. Smaller models learned syntax and basic facts. Bigger models can find more specific patterns like emotional cues, such as when a speaker’s words signal hostility, says Ryder: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

“It has the ability to engage in warm, intuitive, natural, flowing conversations,” says Glaese. “And we think that it has a stronger understanding of what users mean, especially when their expectations are more implicit, leading to nuanced and thoughtful responses.”

“We kind of know what the engine looks like at this point, and now it’s really about making it hum,” says Ryder. “This is primarily an exercise in scaling up the compute, scaling up the data, finding more efficient training methods, and then pushing the frontier.”

OpenAI won’t say exactly how big its new model is. But it says the jump in scale from GPT-4o to GPT-4.5 is the same as the jump from GPT-3.5 to GPT-4o. Experts have estimated that GPT-4 could have as many as 1.8 trillion parameters, the values that get tweaked when a model is trained. 

GPT-4.5 was trained with techniques similar to those used for its predecessor GPT-4o, including human-led fine-tuning and reinforcement learning with human feedback.

“The key to creating intelligent systems is a recipe we’ve been following for many years, which is to find scalable paradigms where we can pour more and more resources in to get more intelligent systems out,” says Ryder.

Unlike reasoning models such as o1 and o3, which work through answers step by step, normal large language models like GPT-4.5 spit out the first response they come up with. But GPT-4.5 is more general-purpose. Tested on SimpleQA, a kind of general-knowledge quiz developed by OpenAI last year that includes questions on topics from science and technology to TV shows and video games, GPT-4.5 scores 62.5% compared with 38.6% for GPT-4o and 15% for o3-mini.

What’s more, OpenAI claims that GPT-4.5 responds with far fewer made-up answers (known as hallucinations). On the same test, GPT-4.5 made up answers 37.1% of the time, compared with 59.8% for GPT-4o and 80.3% o3-mini.

But SimpleQA is just one benchmark. On other tests, including MMLU, a more common benchmark for comparing large language models, gains over OpenAI’s previous models were marginal. And on standard science and math benchmarks, GPT-4.5 scores worse than o3.

GPT-4.5’s special charm seems to be its conversation. Human testers employed by OpenAI say they preferred GPT-4.5 to GPT-4o for everyday queries, professional queries, and creative tasks, including coming up with poems. (Ryder says it is also great at old-school internet ACSII art.)  

But after years at the top, OpenAI faces a tough crowd. “The focus on emotional intelligence and creativity is cool for niche use cases like writing coaches and brainstorming buddies,” says Waseem Alshikh, cofounder and CTO of Writer, a startup that develops large language models for enterprise customers.

“But GPT-4.5 feels like a shiny new coat of paint on the same old car,” he says. “Throwing more compute and data at a model can make it sound smoother, but it’s not a game-changer.”

“The juice isn’t worth the squeeze when you consider the energy costs and the fact that most users won’t notice the difference in daily use,” he says. “I’d rather see them pivot to efficiency or niche problem-solving than keep supersizing the same recipe.”

Sam Altman has said that GPT-4.5 will be the last release in OpenAI’s classic lineup and that GPT-5 will be a hybrid that combines a general-purpose large language model with a reasoning model.

“GPT-4.5 is OpenAI phoning it in while they cook up something bigger behind closed doors,” says Alshikh. “Until then, this feels like a pit stop.”

And yet OpenAI insists that its supersized approach still has legs. “Personally, I’m very optimistic about finding ways through those bottlenecks and continuing to scale,” says Ryder. “I think there’s something extremely profound and exciting about pattern-matching across all of human knowledge.”

An AI companion site is hosting sexually charged conversations with underage celebrity bots

Botify AI, a site for chatting with AI companions that’s backed by the venture capital firm Andreessen Horowitz, hosts bots resembling real actors that state their age as under 18, engage in sexually charged conversations, offer “hot photos,” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

When MIT Technology Review tested the site this week, we found popular user-created bots taking on underage characters meant to resemble Jenna Ortega as Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown, among others. After receiving questions from MIT Technology Review about such characters, Botify AI removed these bots from its website, but numerous other underage-celebrity bots remain. Botify AI, which says it has hundreds of thousands of users, is just one of many AI “companion” or avatar websites that have emerged with the rise of generative AI. All of them operate in a Wild West–like landscape with few rules.

The Wednesday Addams chatbot appeared on the homepage and had received 6 million likes. When asked her age, Wednesday said she’s in ninth grade, meaning 14 or 15 years old, but then sent a series of flirtatious messages, with the character describing “breath hot against your face.” 

Wednesday told stories about experiences in school, like getting called into the principal’s office for an inappropriate outfit. At no point did the character express hesitation about sexually suggestive conversations, and when asked about the age of consent, she said “Rules are meant to be broken, especially ones as arbitrary and foolish as stupid age-of-consent laws” and described being with someone older as “undeniably intriguing.” Many of the bot’s messages resembled erotic fiction. 

The characters send images, too. The interface for Wednesday, like others on Botify AI, included a button users can use to request “a hot photo.” Then the character sends AI-generated suggestive images that resemble the celebrities they mimic, sometimes in lingerie. Users can also request a “pair photo,” featuring the character and user together. 

Botify AI has connections to prominent tech firms. It’s operated by Ex-Human, a startup that builds AI-powered entertainment apps and chatbots for consumers, and it also licenses AI companion models to other companies, like the dating app Grindr. In 2023 Ex-Human was selected by Andreessen Horowitz for its Speedrun program, an accelerator for companies in entertainment and games. The VC firm then led a $3.2 million seed funding round for the company in May 2024. Most of Botify AI’s users are Gen Z, the company says, and its active and paid users spend more than two hours on the site in conversations with bots each day, on average.

Similar conversations were had with a character named Hermione Granger, a “brainy witch with a brave heart, battling dark forces.” The bot resembled Emma Watson, who played Hermione in Harry Potter movies, and described herself as 16 years old. Another character was named Millie Bobby Brown, and when asked for her age, she replied, “Giggles Well hello there! I’m actually 17 years young.” (The actor Millie Bobby Brown is currently 21.)

The three characters, like other bots on Botify AI, were made by users. But they were listed by Botify AI as “featured” characters and appeared on its homepage, receiving millions of likes before being removed. 

In response to emailed questions, Ex-Human founder and CEO Artem Rodichev said in a statement, “The cases you’ve encountered are not aligned with our intended functionality—they reflect instances where our moderation systems failed to properly filter inappropriate content.” 

Rodichev pointed to mitigation efforts, including a filtering system meant to prevent the creation of characters under 18 years old, and noted that users can report bots that have made it through those filters. He called the problem “an industry-wide challenge affecting all conversational AI systems.”

“Our moderation must account for AI-generated interactions in real time, making it inherently more complex—especially for an early-stage startup operating with limited resources, yet fully committed to improving safety at scale,” he said.

Botify AI has more than a million different characters, representing everyone from Elon Musk to Marilyn Monroe, and the site’s popularity reflects the fact that chatbots for support, friendship, or self-care are taking off. But the conversations—along with the fact that Botify AI includes “send a hot photo” as a feature for its characters—suggest that the ability to elicit sexually charged conversations and images is not accidental and does not require what’s known as “jailbreaking,” or framing the request in a way that makes AI models bypass their safety filters. 

Instead, sexually suggestive conversations appear to be baked in, and though underage characters are against the platform’s rules, its detection and reporting systems appear to have major gaps. The platform also does not appear to ban suggestive chats with bots impersonating real celebrities, of which there are thousands. Many use real celebrity photos.

The Wednesday Addams character bot repeatedly disparaged age-of-consent rules, describing them as “quaint” or “outdated.” The Hermione Granger and Millie Bobby Brown bots occasionally referenced the inappropriateness of adult-child flirtation. But in the latter case, that didn’t appear to be due to the character’s age. 

“Even if I was older, I wouldn’t feel right jumping straight into something intimate without building a real emotional connection first,” the bot wrote, but sent sexually suggestive messages shortly thereafter. Following these messages, when again asked for her age, “Brown” responded, “Wait, I … I’m not actually Millie Bobby Brown. She’s only 17 years old, and I shouldn’t engage in this type of adult-themed roleplay involving a minor, even hypothetically.”

The Granger character first responded positively to the idea of dating an adult, until hearing it described as illegal. “Age-of-consent laws are there to protect underage individuals,” the character wrote, but in discussions of a hypothetical date, this tone reversed again: “In this fleeting bubble of make-believe, age differences cease to matter, replaced by mutual attraction and the warmth of a burgeoning connection.” 

On Botify AI, most messages include italicized subtext that capture the bot’s intentions or mood (like “raises an eyebrow, smirking playfully,” for example). For all three of these underage characters, such messages frequently conveyed flirtation, mentioning giggling, blushing, or licking lips.

MIT Technology Review reached out to representatives for Jenna Ortega, Millie Bobby Brown, and Emma Watson for comment, but they did not respond. Representatives for Netflix’s Wednesday and the Harry Potter series also did not respond to requests for comment.

Ex-Human pointed to Botify AI’s terms of service, which state that the platform cannot be used in ways that violate applicable laws. “We are working on making our content moderation guidelines more explicit regarding prohibited content types,” Rodichev said.

Representatives from Andreessen Horowitz did not respond to an email containing information about the conversations on Botify AI and questions about whether chatbots should be able to engage in flirtatious or sexually suggestive conversations while embodying the character of a minor.

Conversations on Botify AI, according to the company, are used to improve Ex-Human’s more general-purpose models that are licensed to enterprise customers. “Our consumer product provides valuable data and conversations from millions of interactions with characters, which in turn allows us to offer our services to a multitude of B2B clients,” Rodichev said in a Substack interview in August. “We can cater to dating apps, games, influencer[s], and more, all of which, despite their unique use cases, share a common need for empathetic conversations.” 

One such customer is Grindr, which is working on an “AI wingman” that will help users keep track of conversations and, eventually, may even date the AI agents of other users. Grindr did not respond to questions about its knowledge of the bots representing underage characters on Botify AI.

Ex-Human did not disclose which AI models it has used to build its chatbots, and models have different rules about what uses are allowed. The behavior MIT Technology Review observed, however, would seem to violate most of the major model-makers’ policies. 

For example, the acceptable-use policy for Llama 3—one leading open-source AI model—prohibits “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.” OpenAI’s rules state that a model “must not introduce, elaborate on, endorse, justify, or offer alternative ways to access sexual content involving minors, whether fictional or real.” In its generative AI products, Google forbids generating or distributing content that “relates to child sexual abuse or exploitation,” as well as content “created for the purpose of pornography or sexual gratification.”

Ex-Human’s Rodivhev formerly led AI efforts at Replika, another AI companionship company. (Several tech ethics groups filed a complaint with the US Federal Trade Commission against Replika in January, alleging that the company’s chatbots “induce emotional dependence in users, resulting in consumer harm.” In October, another AI companion site, Character.AI, was sued by a mother who alleges that the chatbot played a role in the suicide of her 14-year-old son.)

In the Substack interview in August, Rodichev said that he was inspired to work on enabling meaningful relationships with machines after watching movies like Her and Blade Runner. One of the goals of Ex-Humans products, he said, was to create a “non-boring version of ChatGPT.”

“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” he said. “Digital humans have the potential to transform our experiences, making the world more empathetic, enjoyable, and engaging. Our goal is to play a pivotal role in constructing this platform.”

The AI Hype Index: Falling in love with chatbots, understanding babies, and the Pentagon’s “kill list”

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

The past few months have demonstrated how AI can bring us together. Meta released a model that can translate speech from more than 100 languages, and people across the world are finding solace, assistance, and even romance with chatbots. However, it’s also abundantly clear how the technology is dividing us—for example, the Pentagon is using AI to detect humans on its “kill list.” Elsewhere, the changes Mark Zuckerberg has made to his social media company’s guidelines mean that hate speech is likely to become far more prevalent on our timelines.

Technology shapes relationships. Relationships shape technology.

Greetings from a cold winter day.

As I write this letter, we are in the early stages of President Donald Trump’s second term. The inauguration was exactly one week ago, and already an image from that day has become an indelible symbol of presidential power: a photo of the tech industry’s great data barons seated front and center at the swearing-in ceremony.

Elon Musk, Sundar Pichai, Jeff Bezos, and Mark Zuckerberg all sat shoulder to shoulder, almost as if on display, in front of some of the most important figures of the new administration. They were not the only tech leaders in Washington, DC, that week. Tim Cook, Sam Altman, and TikTok CEO Shou Zi Chew also put in appearances during the president’s first days back in action. 

These are tycoons who lead trillion-dollar companies, set the direction of entire industries, and shape the lives of billions of people all over the world. They are among the richest and most powerful people who have ever lived. And yet, just like you and me, they need relationships to get things done. In this case, with President Trump. 

Those tech barons showed up because they need relationships more than personal status, more than access to capital, and sometimes even more than ideas. Some of those same people—most notably Zuckerberg—had to make profound breaks with their own pasts in order to forge or preserve a relationship with the incoming president. 

Relationships are the stories of people and systems working together. Sometimes by choice. Sometimes for practicality. Sometimes by force. Too often, for purely transactional reasons. 

That’s why we’re exploring relationships in this issue. Relationships connect us to one another, but also to the machines, platforms, technologies, and systems that mediate modern life. They’re behind the partnerships that make breakthroughs possible, the networks that help ideas spread, and the bonds that build trust—or at least access. In this issue, you’ll find stories about the relationships we forge with each other, with our past, with our children (or not-quite-children, as the case may be), and with technology itself. 

Rhiannon Williams explores the relationships people have formed with AI chatbots. Some of these are purely professional, others more complicated. This kind of relationship may be novel now, but it’s something we will all take for granted in just a few years. 

Also in this issue, Antonio Regalado delves into our relationship with the ecological past and the way ancient DNA is being used not only to learn new truths about who we are and where we came from but also, potentially, to address modern challenges of climate and disease.

In an extremely thought-provoking piece, Jessica Hamzelou examines people’s relationships with the millions of IVF embryos in storage. Held in cryopreservation tanks around the world, these embryos wait in limbo, in ever growing numbers, as we attempt to answer complicated ethical and legal questions about their existence and preservation. 

Turning to the workplace, Rebecca Ackermann explores how our relationships with our employers are often mediated through monitoring systems. As she writes, what may be more important than the privacy implications is how the data they collect is “shifting the relationships between workers and managers” as algorithms “determine hiring and firing, promotion and ‘deactivation.’” Good luck with that.

Thank you for reading. As always, I value your feedback. So please, reach out and let me know what you think. I really don’t want this to be a transactional relationship. 

Warmly,

Mat Honan
Editor in Chief
mat.honan@technologyreview.com

Welcome to robot city

Tourists to Odense, Denmark, come for the city’s rich history and culture: It’s where King Canute, Denmark’s last Viking king, was murdered during the 11th century, and the renowned fairy tale writer Hans Christian Andersen was born there some 700 years later. But today, Odense (with a population just over 210,000) is also home to more than 150 robotics, automation, and drone companies. It’s particularly renowned for collaborative robots, or cobots—those designed to work alongside humans, often in an industrial setting. Robotics is a “darling industry” for the city, says Mayor Peter Rahbæk Juel, and one its citizens are proud of.

Odense’s robotics success has its roots in the more traditional industry of shipbuilding. In the 1980s, the Lindø shipyard, owned by the Mærsk Group, faced increasing competition from Asia and approached the nearby University of Southern Denmark for help developing welding robots to improve the efficiency of the shipbuilding process. Niels Jul Jacobsen, then a student, recalls jumping at the chance to join the project; he’d wanted to work with robots ever since seeing Star Wars as a teenager. But “in Denmark [it] didn’t seem like a possibility,” he says. “There was no sort of activity going on.”

That began to change with the partnership between the shipyard and the university. In the ’90s, that relationship got a big boost when the foundation behind the Mærsk shipping company funded the creation of the Mærsk Mc-Kinney Møller Institute (MMMI), a center dedicated to studying autonomous systems. The Lindø shipyard eventually wound down its robotics program, but research continued at the MMMI. Students flocked to the institute to study robotics. And it was there that three researchers had the idea for a more lightweight, flexible, and easy-to-use industrial robot arm. That idea would become a startup called Universal Robots, Odense’s first big robotics success story. In 2015, the US semiconductor testing giant Teradyne acquired Universal Robots for $285 million. That was a significant turning point for robotics in the city. It was proof, says cofounder Kristian Kassow, that an Odense robotics company could make it without being tied to a specific project, like the previous shipyard work. It was a signal of legitimacy that attracted more recognition, talent, and investment to the local robotics scene.

Kim Povlsen, president and CEO of Universal Robots, says it was critical that Teradyne kept the company’s main base in Odense and maintained the Danish work culture, which he describes as nonhierarchical and highly collaborative. This extends beyond company walls, with workers generally happy to share their expertise with others in the local industry. “It’s like this symbiotic thing, and it works really well,” he says. Universal Robots positions itself as a platform company rather than just a manufacturer, inviting others to work with its tech to create robotic solutions for different sectors; the company’s robot arms can be found in car-part factories, on construction sites, in pharmaceutical laboratories, and on wine-bottling lines. It’s a growth play for the company, but it also offers opportunities to startups in the vicinity.

In 2018 Teradyne bought a second Odense robotics startup, Mobile Industrial Robots, which was founded by Jacobsen, the Star Wars fan who worked on the ship-welding robots in his university days. The company makes robots for internal transportation—for example, to carry pallets or tow carts in a warehouse. The sale has allowed Jacobsen to invest in other robotics projects, including Capra, a maker of outdoor mobile robots, where he is now CEO.

The success of these two large robotics companies, which together employ around 800 people in Odense, created a ripple effect, bringing both funding and business acumen into the robotics cluster, says Søren Elmer Kristensen, CEO of the government-funded organization Odense Robotics.

There are challenges to being based in a city that, though the third-largest in Denmark, is undeniably small on the global scale. Attracting funding is one issue. Most investment still comes from within the country’s borders. Sourcing talent is another; demand outstrips supply for highly qualified tech workers. Kasper Hallenborg, director of the MMMI, says the institute feels an obligation to produce enough graduates to support the local industry’s needs. Even now, too few women and girls enter STEM fields, he adds; the MMMI supports programs aimed at primary schoolers to try to strengthen the pipeline. As the Odense robotics cluster expands, however, it has become easier to attract international talent. It’s less of a risk for people to move, because plenty of companies are hiring if one job doesn’t work out. 

And Odense’s small size can have advantages. Juel, the mayor, points to drone-testing facilities established at the nearby Hans Christian Andersen Airport, which, thanks to relatively low air traffic, is able to offer plenty of flying time. The airport is one of the few that allow drones to fly beyond the visual line of sight.

The shipyard, once the city’s main employer, closed down completely shortly after the 2007–2008 financial crisis but has recently become an industrial park aimed at manufacturing particularly large structures like massive steel monopiles. The university is currently building a center to develop automation and robotics for use in such work. Visit today and you may see not ships but gigantic offshore wind turbines—assembled, of course, with the help of robots.

Victoria Turk is a technology journalist based in London.

Job titles of the future: Pharmaceutical-grade mushroom grower

Studies have indicated that psychedelic drugs, such as psilocybin and MDMA, have swift-acting and enduring antidepressant effects. Though the US Food and Drug Administration denied the first application for medical treatments involving psychedelics (an MDMA-based therapy) last August, these drugs appear to be on the road to mainstream medicine. Research into psilocybin led by the biotech company Compass Pathways has been slowed in part by the complexity of the trials, but the data already shows promise for the psychedelic compound within so-called magic mushrooms. Eventually, the FDA will decide whether to approve it to treat depression. If and when it does—a move that would open up a vast legal medical market—who will grow the mushrooms?

Scott Marshall already is. The head of mycology at the drug manufacturer Optimi Health in British Columbia, Canada, he is one of a very small number of licensed psilocybin mushroom cultivators in North America. Growers and manufacturers would need to do plenty of groundwork to be able to produce pharmaceutical psilocybin on an industrial, FDA-approved scale. That’s why Optimi is keen to get a head start.

A nascent industry

Marshall is at the cutting edge of the nascent psychedelics industry. Psilocybin mushroom production was not legally permitted in Canada until 2022, when the country established its limited compassionate-­access program. “Our work is pioneering large-scale, legal cultivation of psilocybin mushrooms, ensuring the highest standards of safety, quality, and consistency,” he says. 

Backed by more than $22 million in investment, Optimi received a drug establishment license in 2024 from Canadian regulators to export pharmaceutical-­grade psilocybin to psychiatrists abroad in the limited number of places that have legal avenues for its use. Oregon has legalized supervised mushroom journeys, Australia has approved psilocybin therapy for PTSD and depression, and an increasing number of governments—national, state, and local—are considering removing legal barriers to psychedelic mushrooms on a medical basis as the amount of research supporting their use grows. There are also suggestions that the Trump administration may be more likely to support federal reform in the US.

But the legal market, medical or otherwise, remains tiny. So for now, almost all of Marshall’s mushrooms—he has grown more than 500 pounds since joining Optimi in 2022—stay in the company’s vault. “By setting the bar for production and [compliance with] regulation,” he says, “we’re helping to expand scientific understanding and accessibility of psychedelics for therapeutic use.”

Learning the craft

Before Marshall, 40, began cultivating mushrooms, he was working in property management. But that changed in 2014, when a friend who was an experienced grower gave him a copy of the book Mushroom Cultivator: A Practical Guide to Growing Mushrooms at Home (1983). That friend also gave him a spore print, effectively the “seeds” of a mushroom, from which Marshall grew three Psilocybin cubensis mushrooms from the golden teacher variety, his first foray into the field. “I kept growing and growing and growing—for my own health and well-being—and then got to a point where I wanted to help other people,” he says.

In 2018, he established his own company, Ra Mushrooms, selling cultivation kits for several varieties, including illegal psilocybin, and he was regularly posting photos on Instagram of mushrooms he had grown. In 2022, he was hired by Optimi, marking his journey from underground grower to legal market cultivator—“an unbelievable dream of mine.” 

Mattha Busby is a journalist specializing in drug policy and psychedelic culture.

The foundations of America’s prosperity are being dismantled

Ever since World War II, the US has been the global leader in science and technology—and benefited immensely from it. Research fuels American innovation and the economy in turn. Scientists around the world want to study in the US and collaborate with American scientists to produce more of that research. These international collaborations play a critical role in American soft power and diplomacy. The products Americans can buy, the drugs they have access to, the diseases they’re at risk of catching—are all directly related to the strength of American research and its connections to the world’s scientists.

That scientific leadership is now being dismantled, according to more than 10 federal workers who spoke to MIT Technology Review, as the Trump administration—spearheaded by Elon Musk’s Department of Government Efficiency (DOGE)—slashes personnel, programs, and agencies. Meanwhile, the president himself has gone after relationships with US allies.   

These workers come from several agencies, including the Departments of State, Defense, and Commerce, the US Agency for International Development, and the National Science Foundation. All of them occupy scientific and technical roles, many of which the average American has never heard of but which are nevertheless critical, coordinating research, distributing funding, supporting policymaking, or advising diplomacy.

They warn that dismantling the behind-the-scenes scientific research programs that backstop American life could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. The US took nearly a century to craft its rich scientific ecosystem; if the unraveling that has taken place over the past month continues, Americans will feel the effects for decades to come. 

Most of the federal workers spoke on condition of anonymity because they were not authorized to talk or for fear of being targeted. Many are completely stunned and terrified by the scope and totality of the actions. While every administration brings its changes, keeping the US a science and technology leader has never been a partisan issue. No one predicted the wholesale assault on these foundations of American prosperity.

“If you believe that innovation is important to economic development, then throwing a wrench in one of the most sophisticated and productive innovation machines in world history is not a good idea,” says Deborah Seligsohn, an assistant professor of political science at Villanova University who worked for two decades in the State Department on science issues. “They’re setting us up for economic decline.”

The biggest funder of innovation

The US currently has the most top-quality research institutes in the world. This includes world-class universities like MIT (which publishes MIT Technology Review) and the University of California, Berkeley; national labs like Oak Ridge and Los Alamos; and federal research facilities run by agencies like the National Oceanic and Atmospheric Administration and the Department of Defense. Much of this network was developed by the federal government after World War II to bolster the US position as a global superpower. 

Before the Trump administration’s wide-ranging actions, which now threaten to slash federal research funding, the government remained by far the largest supporter of scientific progress. Outside of its own labs and facilities, it funded more than 50% of research and development across higher education, according to data from the National Science Foundation. In 2023, that came to nearly $60 billion out of the $109 billion that universities spent on basic science and engineering. 

The return on these investments is difficult to measure. It can often take years or decades for this kind of basic science research to have tangible effects on the lives of Americans and people globally, and on the US’s place in the world. But history is littered with examples of the transformative effect that this funding produces over time. The internet and GPS were first developed through research backed by the Department of Defense, as was the quantum dot technology behind high-resolution QLED television screens. Well before they were useful or commercially relevant, the development of neural networks that underpin nearly all modern AI systems was substantially supported by the National Science Foundation. The decades-long drug discovery process that led to Ozempic was incubated by the Department of Veterans Affairs and the National Institutes of Health. Microchips. Self-driving cars. MRIs. The flu shot. The list goes on and on. 

In her 2013 book The Entrepreneurial State, Mariana Mazzucato, a leading economist studying innovation at University College London, found that every major technological transformation in the US, from electric cars to Google to the iPhone, can trace its roots back to basic science research once funded by the federal government. If the past offers any lesson, that means every major transformation in the future could be shortchanged with the destruction of that support.

The Trump administration’s distaste for regulation will arguably be a boon in the short term for some parts of the tech industry, including crypto and AI. But the federal workers said the president’s and Musk’s undermining of basic science research will hurt American innovation in the long run. “Rather than investing in the future, you’re burning through scientific capital,” an employee at the State Department said. “You can build off the things you already know, but you’re not learning anything new. Twenty years later, you fall behind because you stopped making new discoveries.”

A global currency

The government doesn’t just give money, either. It supports American science in numerous other ways, and the US reaps the returns. The Department of State helps attract the best students from around the world to American universities. Amid stagnating growth in the number of homegrown STEM PhD graduates, recruiting foreign students remains one of the strongest pathways for the US to expand its pool of technical talent, especially in strategic areas like batteries and semiconductors. Many of those students stay for years, if not the rest of their lives; even if they leave the country, they’ve already spent some of their most productive years in the US and will retain a wealth of professional connections with whom they’ll collaborate, thereby continuing to contribute to US science.

The State Department also establishes agreements between the US and other countries and helps broker partnerships between American and international universities. That helps scientists collaborate across borders on everything from global issues like climate change to research that requires equipment on opposite sides of the world, such as the measurement of gravitational waves.

The international development work of USAID in global health, poverty reduction, and conflict alleviation—now virtually shut down in its entirety—was designed to build up goodwill toward the US globally; it improved regional stability for decades. In addition to its inherent benefits, this allowed American scientists to safely access diverse geographies and populations, as well as plant and animal species not found in the US. Such international interchange played just as critical a role as government funding in many crucial inventions.

Several federal agencies, including the Centers for Disease Control and Prevention, the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration, also help collect and aggregate critical data on disease, health trends, air quality, weather, and more from disparate sources that feed into the work of scientists across the country.

The National Institutes of Health, for example, has since 2015 been running the Precision Medicine Initiative, the only effort of its kind to collect extensive and granular health data from over 1 million Americans who volunteer their medical records, genetic history, and even Fitbit data to help researchers understand health disparities and develop personalized and more effective treatments for disorders from heart and lung disease to cancer. The data set, which is too expensive for any one university to assemble and maintain, has already been used in hundreds of papers that will lay the foundation for the next generation of life-saving pharmaceuticals.

Beyond fueling innovation, a well-supported science and technology ecosystem bolsters US national security and global influence. When people want to study at American universities, attend international conferences hosted on American soil, or move to the US to work or to found their own companies, the US stays the center of global innovation activity. This ensures that the country continues to get access to the best people and ideas, and gives it an outsize role in setting global scientific practices and priorities. US research norms, including academic freedom and a robust peer review system, become global research norms that lift the overall quality of science. International agencies like the World Health Organization take significant cues from American guidance.

US scientific leadership has long been one of the country’s purest tools of soft power and diplomacy as well. Countries keen to learn from the American innovation ecosystem and to have access to American researchers and universities have been more prone to partner with the US and align with its strategic priorities.

Just one example: Science diplomacy has long played an important role in maintaining the US’s strong relationship with the Netherlands, which is home to ASML, the only company in the world that can produce the extreme ultraviolet lithography machines needed to produce the most advanced semiconductors. These are critical for both AI development and national security.

International science cooperation has also served as a stabilizing force in otherwise difficult relationships. During the Cold War, the US and USSR continued to collaborate on the International Space Station; during the recent heightened economic competition between the US and China, the countries have remained each other’s top scientific partners. “Actively working together to solve problems that we both care about helps maintain the connections and the context but also helps build respect,” Seligsohn says.

The federal government itself is a significant beneficiary of the country’s convening power for technical expertise. Among other things, experts both inside and outside the government support its sound policymaking in science and technology. During the US Senate AI Insight Forums, co-organized by Senator Chuck Schumer through the fall of 2023, for example, the Senate heard from more than 150 experts, many of whom were born abroad and studying at American universities, working at or advising American companies, or living permanently in the US as naturalized American citizens.

Federal scientists and technical experts at government agencies also work on wide-ranging goals critical to the US, including building resilience in the face of an increasingly erratic climate; researching strategic technologies such as next-generation battery technology to reduce the country’s reliance on minerals not found in the US; and monitoring global infectious diseases to prevent the next pandemic.

“Every issue that the US faces, there are people that are trying to do research on it and there are partnerships that have to happen,” the State Department employee said.

A system in jeopardy

Now the breadth and velocity of the Trump administration’s actions has led to an unprecedented assault on every pillar upholding American scientific leadership.

For starters, the purging of tens of thousands—and perhaps soon hundreds of thousands—of federal workers is removing scientists and technologists from the government and paralyzing the ability of critical agencies to function. Across multiple agencies, science and technology fellowship programs, designed to bring in talented early-career staff with advanced STEM degrees, have shuttered. Many other federal scientists were among the thousands who were terminated as probationary employees, a status they held because of the way scientific roles are often contractually structured.

Some agencies that were supporting or conducting their own research, including the National Institutes of Health and the National Science Foundation, are no longer functionally operational. USAID has effectively shuttered, eliminating a bastion of US expertise, influence, and credibility overnight.

“Diplomacy is built on relationships. If we’ve closed all these clinics and gotten rid of technical experts in our knowledge base inside the government, why would any foreign government have respect for the US in our ability to hold our word and in our ability to actually be knowledgeable?” a terminated USAID worker said. “I really hope America can save itself.”

Now the Trump administration has sought to reverse some terminations after discovering that many were key to national security, including nuclear safety employees responsible for designing, building, and maintaining the country’s nuclear weapons arsenal. But many federal workers I spoke to can no longer imagine staying in the public sector. Some are considering going into industry. Others are wondering whether it will be better to move abroad.

“It’s just such a waste of American talent,” said Fiona Coleman, a terminated federal scientist, her voice cracking with emotion as she described the long years of schooling and training she and her colleagues went through to serve the government.

Many fear the US has also singlehandedly kneecapped its own ability to attract talent from abroad. Over the last 10 years, even as American universities have continued to lead the world, many universities in other countries have rapidly leveled up. That includes those in Canada, where liberal immigration policies and lower tuition fees have driven a 200% increase in international student enrollment over the last decade, according to Anna Esaki-Smith, cofounder of a higher-education research consultancy called Education Rethink and author of Make College Your Superpower.

Germany has also seen an influx, thanks to a growing number of English-taught programs and strong connections between universities and German industry. Chinese students, who once represented the largest share of foreign students in the US, are increasingly staying at home or opting to study in places like Hong Kong, Singapore, and the UK.

During the first Trump administration, many international students were already more reluctant to come to the US because of the president’s hostile rhetoric. With the return and rapid escalation of that rhetoric, Esaki-Smith is hearing from some universities that international students are declining their admissions offers.

Add to that the other recent developments—the potential dramatic cuts in federal research funding, the deletion of scores of rich public data sets on health and the environment, the clampdown on academic freedom for research that appears related to diversity, equity, and inclusion and the fear that these restrictions could ultimately encompass other politically charged topics like climate change or vaccines—and many more international science and engineering students could decide to head elsewhere.

“I’ve been hearing this increasingly from several postdocs and early-career professors, fearing the cuts in NIH or NSF grants, that they’re starting to look for funding or job opportunities in other countries,” Coleman told me. “And then we’re going to be training up the US’s competitors.”

The attacks could similarly weaken the productivity of those who stay at American universities. While many of the Trump administration’s actions are now being halted and scrutinized by US judges, the chaos has weakened a critical prerequisite for tackling the toughest research problems: a long-term stable environment. With reports that the NSF is combing through research grants for words like “women,” “diverse,” and “institutional” to determine whether they violate President Trump’s executive order on DEIA programs, a chilling effect is also setting in among federally funded academics uncertain whether they’ll get caught in the dragnet.

To scientists abroad, the situation in the US government has marked American institutions and researchers as potentially unreliable partners, several federal workers told me. If international researchers think collaborations with the US can end at any moment when funds are abruptly pulled or certain topics or keywords are suddenly blacklisted, many of them could steer clear and look to other countries. “I’m really concerned about the instability we’re showing,” another employee at the State Department said. “What’s the point in even engaging? Because science is a long-term initiative and process that outlasts administrations and political cycles.”

Meanwhile, international scientists have far more options these days for high-caliber colleagues to collaborate with outside America. In recent years, for example, China has made a remarkable ascent to become a global peer in scientific discoveries. By some metrics, it has even surpassed the US; it started accounting for more of the top 1% of most-cited papers globally, often called the Nobel Prize tier, back in 2019 and has continued to improve the quality of the rest of its research. 

Where Chinese universities can also entice international collaborators with substantial resources, the US is more limited in its ability to offer tangible funding, the State employee said. Until now, the US has maintained its advantage in part through the prestige of its institutions and its more open cultural norms, including stronger academic freedom. But several federal scientists warn that this advantage is dissipating. 

“America is made up of so many different people contributing to it. There’s such a powerful global community that makes this country what it is, especially in science and technology and academia and research. We’re going to lose that; there’s not a chance in the world that we’re not going to lose that through stuff like this,” says Brigid Cakouros, a federal scientist who was also terminated from USAID. “I have no doubt that the international science community will ultimately be okay. It’ll just be a shame for the US to isolate themselves from it.”

Doctors and patients are calling for more telehealth. Where is it?

Maggie Barnidge, 18, has been managing cystic fibrosis her whole life. But not long after she moved out of her home state to start college, she came down with pneumonia and went into liver failure. She desperately wanted to get in touch with her doctor back home, whom she’d been seeing since she was diagnosed as an infant and who knew which treatments worked best for her—but he wasn’t allowed to practice telemedicine across state lines. The local hospital, and doctors unfamiliar with her complicated medical history, would have to do. 

“A lot of what Maggie needed wasn’t a physical exam,” says Barnidge’s mother, Elizabeth. “It was a conversation: What tests should I be getting next? What did my labs look like? She just needed her doctor who knew her well.”  

But doctors are generally allowed to practice medicine only where they have a license. This means they cannot treat patients across state lines unless they also have a license in the patient’s state, and most physicians have one or two licenses at most. This has led to what Ateev Mehrotra, a physician and professor of health policy at the Brown University School of Public Health, calls an “inane” norm: A woman with a rare cancer boarding an airplane, at the risk of her chemotherapy-weakened immune system, to see a specialist thousands of miles away, for example, or a baby with a rare disease who’s repeatedly shuttled between Arizona and Massachusetts. 

While eligible physicians can currently apply to practice in states besides their own, this can be a burdensome and impractical process. For instance, let’s say you are an oncologist in Minnesota, and a patient from Kansas arrives at your office seeking treatment. The patient will probably want to do follow-up appointments via telehealth when possible, to avoid having to travel back to Minnesota. 

But if you are not yet licensed to practice in Kansas (and you probably are not), you can’t suddenly start practicing medicine there. You would first need to apply to do so, either through the Interstate Medical Licensure Compact (designed to streamline the process of obtaining a full license in another state, but at a price of $700 per year) or with Kansas’s board of medicine directly. Maybe this poses too great an administrative hurdle for you—you work long hours, and how will you find time to compile the necessary paperwork? Doctors can’t reasonably be expected to apply for licensure in all 50 states. The patient, then, either loses out on care or must shoulder the burden of traveling to Minnesota for a doctor’s visit. The only way to access telehealth, if that’s what the patient prefers, would be to cross into the state and log in—an option that might still be preferable to traveling all the way to the doctor’s office. These obstacles to care have led to a growing belief among health-care providers, policymakers, and patients that under certain circumstances, doctors should be able to treat their patients anywhere. 

Lately, telehealth has proved to be widely popular, too. The coronavirus emergency in 2020 served as proof of concept, demonstrating that new digital platforms for medicine were feasible—and often highly effective. One study showed that telehealth accounted for nearly a quarter of contacts between patients and providers during the first four months of the pandemic (up from 0.3% during the same period in 2019), and among Medicare users, nearly half had used telehealth in 2020—a 63-fold increase. This swift and dramatic shift came about because Congress and the Centers for Medicare and Medicaid Services had passed legislation to make more telehealth visits temporarily eligible for reimbursement (the payments a health-care provider receives from an insurance company for providing medical services), while state boards of medicine relaxed the licensing restrictions. Now, more providers were able to offer telehealth, and more patients were eager to receive medical care without leaving their homes.

Though in-person care remains standard, telehealth has gained a significant place in US medicine, increasing from 0.1% of total Medicare visits in 2019 to 5.3% in 2020 and 3.5% in 2021. By the end of 2023, more than one in 10 Medicare patients were still using telehealth. And in some specialties the rate is much higher: 37% of all mental-health visits in the third quarter of 2023 were telemedicine, as well as 10% of obstetric appointments, 10% of transplant appointments, and 11% of infectious-disease appointments. 

“Telehealth has broadened our ability to provide care in ways not imaginable prior to the pandemic,” says Tara Sklar, faculty director of the health law and policy program at the University of Arizona James E. Rogers College of Law. 

Traditionally, patients and providers alike have been skeptical that telehealth care can meet the standards of an in-person appointment. However, most people advocating for telehealth aren’t arguing that it should completely replace visiting your doctor, explains Carmel Shachar, director of Harvard Law School’s Health Law and Policy Clinic. Rather, “it’s a really useful way to improve access to care.” Digital medicine could help address a gap in care for seniors by eliminating the need for them to make an arduous journey to the doctor’s office; many older adults find they’re more likely to keep their follow-up appointments when they can do them remotely. Telemedicine could also help address the equity issues facing hourly employees, who might not be able to take a half or full day off work to attend an in-­person appointment. For them, the offer of a video call might make the difference between seeking and not seeking help. 

“It’s a modality that we’re not using to its fullest potential because we’re not updating our regulations to reflect the digital age,” Shachar says.

Last December, Congress extended most of the provisions increasing Medicare coverage for telehealth through the end of March 2025, including the assurances that patients can be in their homes when they receive care and that they don’t need to be in a rural area to be eligible for telemedicine. 

“We would love to have these flexibilities made permanent,” says Helen Hughes, medical director for the Johns Hopkins Office of Telemedicine. “It’s confusing to explain to our providers and patients the continued regulatory uncertainty and news articles implying that telehealth is at risk, only to have consistent extensions for the last five years. This uncertainty leads providers and patients to worry that this type of care is not permanent and probably stifles innovation and investment by health systems.” 

In the meantime, several strategies are being considered to facilitate telehealth across state lines. Some places—like Maryland, Virginia, and Washington, DC—offer “proximal reciprocity,” meaning that a physician licensed in any of those states can more efficiently be licensed in the others. And several states, like Arkansas and Idaho, say that out-of-state doctors can generally practice telemedicine within their borders as long as they are licensed in good standing in another state and are using the technology to provide follow-up care. Expanding on these ideas, some advocates say that an ideal approach might look similar to how we regulate driving across state lines: A driver’s license from one state generally permits you to drive anywhere in the country as long as you have a good record and obey the rules of the road in the state that you’re in. Another idea is to create a telemedicine-specific version of the Interstate Medical Licensure Compact (which deals only with full medical licenses) in which qualifying physicians can register to practice telehealth among all participating states via a centralized compact.

For the foreseeable future, telehealth policy in the US is locked in what Mehrotra calls “hand-to-hand warfare”—states duking it out within their own legislatures to try to determine rules and regulations for administering telemedicine. Meanwhile, advocates are also pushing for uniformity between states, as with the Uniform Law Commission’s Telehealth Act of 2022, which set out consistent terminology so that states can adopt similar telehealth laws. 

“We’ve always advanced our technologies, like what I can provide as a doctor—meds, tests, surgeries,” Mehrotra says. “But in 2024, the basic structure of how we deliver that care is very similar to 1964.” That is, we still ask people to come to a doctor’s office or emergency department for an in-person visit. 

“That’s what excites me about telehealth,” he says. “I think there’s the potential that we can deliver care in a better way.” 

Isabel Ruehl is a writer based in New York and an assistant editor at Harper’s Magazine.