Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review.

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
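For the technically curious, that control mechanism can be caricatured in a few lines of code. Below is a minimal sketch in Python of an idealized transistor treated as a voltage-controlled switch; the threshold and logic levels are illustrative stand-ins, not a physical device model.

```python
# Idealized n-type field-effect transistor: the gate voltage determines
# whether the channel conducts. Real devices behave continuously and
# nonlinearly; this keeps only the on/off logic digital circuits rely on.

V_THRESHOLD = 0.7          # volts; an illustrative threshold, not a device spec
V_HIGH, V_LOW = 1.0, 0.0   # logic levels, also illustrative

def channel_conducts(gate_voltage: float) -> bool:
    """True if the channel carries current (the switch is 'on')."""
    return gate_voltage > V_THRESHOLD

def nand(a: bool, b: bool) -> bool:
    """A NAND gate built from two switches in series; since any logic
    circuit can be composed from NAND gates, billions of these switches
    are enough to build a processor."""
    top = channel_conducts(V_HIGH if a else V_LOW)
    bottom = channel_conducts(V_HIGH if b else V_LOW)
    return not (top and bottom)

print(nand(True, True), nand(True, False))  # False True
```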

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture-resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The White House’s newly proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—a leading AI architecture today inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who was awarded the Nobel the same year, this foundational research seeded the explosion of deep-learning technologies we see today.
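The core of Hopfield’s 1982 model is compact enough to sketch in code. Below is a minimal Hopfield network in Python (the pattern size and random seeds are illustrative): a pattern is stored in a symmetric weight matrix, and a corrupted version is cleaned up by updates that only ever lower the network’s energy, the same mathematics physicists use for disordered magnets.

```python
import numpy as np

# Minimal Hopfield network: store a binary (+1/-1) pattern in a symmetric
# weight matrix via the Hebbian rule, then recover it from a corrupted cue
# with asynchronous updates that never increase the network's energy.

def train(patterns: np.ndarray) -> np.ndarray:
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)   # Hebbian rule: co-active neurons strengthen ties
    np.fill_diagonal(W, 0)    # no self-connections
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, steps: int = 300) -> np.ndarray:
    state = state.copy()
    rng = np.random.default_rng(1)
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1  # flip toward lower energy
    return state

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)
W = train(pattern[None, :])
cue = pattern.copy()
cue[:8] *= -1                                   # corrupt a quarter of the bits
print(np.array_equal(recall(W, cue), pattern))  # usually True: memory restored
```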

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that make today’s most efficient transistors possible. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am an editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

Putin says organ transplants could grant immortality. Not quite.

This week I’m writing from Manchester, where I’ve been attending a conference on aging. Wednesday was full of talks and presentations by scientists who are trying to understand the nitty-gritty of aging—all the way down to the molecular level. Once we can understand the complex biology of aging, we should be able to slow or prevent the onset of age-related diseases, they hope.

Then my editor forwarded me a video of the leaders of Russia and China talking about immortality. “These days at 70 years old you are still a child,” China’s Xi Jinping, 72, was translated as saying, according to footage livestreamed by CCTV to multiple media outlets.

“With the developments of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Russia’s Vladimir Putin, also 72, is reported to have replied.

[Photo: Russian President Vladimir Putin, Chinese President Xi Jinping, and North Korean leader Kim Jong Un walk side by side. Sergei Bobylev/Sputnik/Kremlin pool photo via AP]

There’s a striking contrast between that radical vision and the incremental longevity science presented at the meeting. Repeated rounds of organ transplantation surgery aren’t likely to help anyone radically extend their lifespan anytime soon.

First, back to Putin’s proposal: the idea of continually replacing aged organs to stay young. It’s a simplistic way to think about aging. After all, aging is so complicated that researchers can’t agree on what causes it, why it occurs, or even how to define it, let alone “treat” it.

Having said that, there may be some merit to the idea of repairing worn-out body parts with biological or synthetic replacements. Replacement therapies—including bioengineered organs—are being developed by multiple research teams. Some have already been tested in people. This week, let’s take a look at the idea of replacement therapies.

No one fully understands why our organs start to fail with age. On the face of it, replacing them seems like a good idea. After all, we already know how to do organ transplants. They’ve been a part of medicine since the 1950s and have been used to save hundreds of thousands of lives in the US alone.

And replacing old organs with young ones might have more broadly beneficial effects. When a young mouse is stitched to an old one, the older mouse benefits from the arrangement, and its health seems to improve.

The problem is that we don’t really know why. We don’t know what it is about young body tissues that makes them health-promoting. We don’t know how long these effects might last in a person. We don’t know how different organ transplants will compare, either. Might a young heart be more beneficial than a young liver? No one knows.

And that’s before you consider the practicalities of organ transplantation. There is already a shortage of donor organs—thousands of people die on waiting lists. Transplantation requires major surgery and, typically, a lifetime of prescription drugs that damp down the immune system, leaving a person more susceptible to certain infections and diseases.

So the idea of repeated organ transplantation isn’t a particularly appealing one. “I don’t think that’s going to happen anytime soon,” says Jesse Poganik, who studies aging at Brigham and Women’s Hospital in Boston and is also in Manchester for the meeting.

Poganik has been collaborating with transplant surgeons in his own research. “The surgeries are good, but they’re not simple,” he tells me. And they come with real risks. His own 24-year-old cousin developed a form of cancer after a liver and heart transplant. She died a few weeks ago, he says.

So when it comes to replacing worn-out organs, scientists are looking for both biological and synthetic alternatives.  

We’ve been replacing body parts for centuries. Wooden toes were used as far back as the 15th century. Joint replacements have been around for more than a hundred years. And major innovations over the last 70 years have given us devices like pacemakers, hearing aids, brain implants, and artificial hearts.

Scientists are exploring other ways to make tissues and organs, too. There are different approaches here, but they include everything from injecting stem cells to seeding “scaffolds” with cells in a lab.

In 1999, researchers used volunteers’ own cells to seed bladder-shaped collagen scaffolds. The resulting bioengineered bladders went on to be transplanted into seven people in an initial trial.

Now scientists are working on more complicated organs. Jean Hébert, a program manager at the US government’s Advanced Research Projects Agency for Health, has been exploring ways to gradually replace the cells in a person’s brain. The idea is that, eventually, the recipient will end up with a young brain.

Hébert showed my colleague Antonio Regalado how, in his early experiments, he removed parts of mice’s brains and replaced them with embryonic stem cells. That work seems a world away from the biochemical studies being presented at the British Society for Research on Ageing annual meeting in Manchester, where I am now.

On Wednesday, one scientist described how he’d been testing potential longevity drugs on the tiny nematode worm C. elegans. These worms live for only about 15 to 40 days, and his team can perform tens of thousands of experiments with them. About 40% of the drugs that extend lifespan in C. elegans also help mice live longer, he told us.

To me, that’s not an amazing hit rate. And we don’t know how many of those drugs will work in people. Probably less than 40% of that 40%, which would put the overall rate below 16%.

Other scientists presented work on chemical reactions happening at the cellular level. It was deep, basic science, and my takeaway was that there’s a lot aging researchers still don’t fully understand.

It will take years—if not decades—to get the full picture of aging at the molecular level. And if we rely on a series of experiments in worms, and then mice, and then humans, we’re unlikely to make progress for a really long time. In that context, the idea of replacement therapy feels like a shortcut.

“Replacement is a really exciting avenue because you don’t have to understand the biology of aging as much,” says Sierra Lore, who studies aging at the University of Copenhagen in Denmark and the Buck Institute for Research on Aging in Novato, California.

Lore says she started her research career studying aging at the molecular level, but she soon changed course. She now plans to focus her attention on replacement therapies. “I very quickly realized we’re decades away [from understanding the molecular processes that underlie aging],” she says. “Why don’t we just take what we already know—replacement—and try to understand and apply it better?”

So perhaps Putin’s straightforward approach to delaying aging holds some merit. Whether it will grant him immortality is another matter.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How Trump is helping China extend its massive lead in clean energy 

On a spring day in 1954, Bell Labs researchers showed off the first practical solar panels at a press conference in Murray Hill, New Jersey, using sunlight to spin a toy Ferris wheel before a stunned crowd.

The solar future looked bright. But in the race to commercialize the technology it invented, the US would lose resoundingly. Last year, China exported $40 billion worth of solar panels and modules, while America shipped just $69 million, according to the New York Times. It was a stunning forfeit of a huge technological lead. 

And now the US seems determined to repeat the mistake. In its quest to prop up aging fossil-fuel industries, the Trump administration has slashed federal support for the emerging cleantech sector, handing the nation’s chief economic rival the most generous of gifts: an unobstructed path to locking in its control of emerging energy technologies, and a leg up in inventing the industries of the future.

China’s dominance of solar was no accident. In the late 2000s, the government simply determined that the sector was a national priority. Then it leveraged deep subsidies, targeted policies, and price wars to scale up production, drive product improvements, and slash costs. It’s made similar moves in batteries, electric vehicles, and wind turbines. 

Meanwhile, President Donald Trump has set to work unraveling hard-won clean-energy achievements in the US, snuffing out the gathering momentum to rebuild the nation’s energy sector in cleaner, more sustainable ways.

The tax and spending bill that Trump signed into law in early July wound down the subsidies for solar and wind power contained in the Inflation Reduction Act of 2022. The legislation also cut off federal support for cleantech projects that rely too heavily on Chinese materials—a hamfisted bid to punish Chinese industries that will instead make many US projects financially unworkable.

Meanwhile, the administration has slashed federal funding for science and attacked the financial foundations of premier research universities, pulling up the roots of future energy innovations and industries.

A driving motivation for many of these policies is the quest to protect the legacy energy industry based on coal, oil, and natural gas, all of which the US is geologically blessed with. But this strategy amounts to the innovator’s dilemma playing out at a national scale—a country clinging to its declining industries rather than investing in the ones that will define the future.

It does not particularly matter whether Trump believes in or cares about climate change. The economic and international security imperatives to invest in modern, sustainable industries are every bit as indisputable as the chemistry of greenhouse gases.

Without sustained industrial policies that reward innovation, American entrepreneurs and investors won’t risk money and time creating new businesses, developing new products, or building first-of-a-kind projects here. Indeed, venture capitalists have told me that numerous US climate-tech companies are already looking overseas, seeking markets where they can count on government support. Some fear that many other companies will fail in the coming months as subsidies disappear, developments stall, and funding flags. 

All of which will help China extend an already massive lead.

The nation has installed nearly three times as many wind turbines as the US, and it generates more than twice as much solar power. It boasts five of the 10 largest EV companies in the world, and the three largest wind turbine manufacturers. China absolutely dominates the battery market, producing the vast majority of the anodes, cathodes, and battery cells that increasingly power the world’s vehicles, grids, and gadgets.

China harnessed the clean-energy transition to clean up its skies, upgrade its domestic industries, create jobs for its citizens, strengthen trade ties, and build new markets in emerging economies. In turn, it’s using those business links to accrue soft power and extend its influence—all while the US turns its back on global institutions.

These widening relationships increasingly insulate China from external pressures, including those threatened by Trump’s go-to tactic: igniting or inflaming trade wars. 

But stiff tariffs and tough talk aren’t what built the world’s largest economy and established the US as the global force in technology for more than a century. What did was deep, sustained federal investment into education, science, and research and development—the very budget items that Trump and his party have been so eager to eliminate. 

Another thing

Earlier this summer, the EPA announced plans to revoke the Obama-era “endangerment finding,” the legal foundation for regulating the nation’s greenhouse-gas pollution. 

The agency’s argument leans heavily on a report that rehashes decades-old climate-denial talking points to assert that rising emissions haven’t produced the harms that scientists expected. It’s a wild, Orwellian plea for you to reject the evidence of your eyes and ears in a summer that saw record heat waves in the Midwest and East and is now blanketing the West in wildfire smoke.

Over the weekend, more than 85 scientists sent a point-by-point, 459-page rebuttal to the federal government, highlighting myriad ways in which the report “is biased, full of errors, and not fit to inform policy making,” as Bob Kopp, a climate scientist at Rutgers, put it on Bluesky.

“The authors reached these flawed conclusions through selective filtering of evidence (‘cherry picking’), overemphasis of uncertainties, misquoting peer-reviewed research, and a general dismissal of the vast majority of decades of peer-reviewed research,” the dozens of reviewers found.

The Trump administration handpicked researchers who would write the report it wanted to support its quarrel with thermometers and justify its preordained decision to rescind the endangerment finding. But it’s legally bound to hear from others as well, notes Karen McKinnon, a climate researcher at the University of California, Los Angeles.

“Luckily, there is time to take action,” McKinnon said in a statement. “Comment on the report, and contact your representatives to let them know we need to take action to bring back the tolerable summers of years past.”

You can read the full report here, or NPR’s take here. And be sure to read Casey Crownhart’s earlier piece in The Spark on the endangerment finding.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

Earlier this summer, I walked through the glassy lobby of a fancy office in London, into an elevator, and then along a corridor into a clean, carpeted room. Natural light flooded in through its windows, and a large pair of umbrella-like lighting rigs made the room even brighter. I tried not to squint as I took my place in front of a tripod equipped with a large camera and a laptop displaying an autocue. I took a deep breath and started to read out the script.

I’m not a newsreader or an actor auditioning for a movie—I was visiting the AI company Synthesia to give it what it needed to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me. 

When Synthesia launched in 2017, its primary purpose was to match AI versions of real human faces—for example, the former footballer David Beckham—with dubbed voices speaking in different languages. A few years later, in 2020, it started giving the companies that signed up for its services the opportunity to make professional-level presentation videos starring either AI versions of staff members or consenting actors. But the technology wasn’t perfect. The avatars’ body movements could be jerky and unnatural, their accents sometimes slipped, and the emotions indicated by their voices didn’t always match their facial expressions.

Now Synthesia’s avatars have been updated with more natural mannerisms and movements, as well as expressive voices that better preserve the speaker’s accent—making them appear more humanlike than ever before. For Synthesia’s corporate clients, these avatars will make for slicker presenters of financial results, internal communications, or staff training videos.

I found the video demonstrating my avatar to be as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was. This demonstration shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?

The creation process

When my former colleague Melissa visited Synthesia’s London studio to create an avatar of herself last year, she had to go through a long process of calibrating the system, reading out a script in different emotional states, and mouthing the sounds needed to help her avatar form vowels and consonants. As I stand in the brightly lit room 15 months later, I’m relieved to hear that the creation process has been significantly streamlined. Josh Baker-Mendoza, Synthesia’s technical supervisor, encourages me to gesture and move my hands as I would during natural conversation, while simultaneously warning me not to move too much. I duly repeat an overly glowing script that’s designed to encourage me to speak emotively and enthusiastically. The result is a bit as if Steve Jobs had been resurrected as a blond British woman with a low, monotonous voice.

It also has the unfortunate effect of making me sound like an employee of Synthesia. “I am so thrilled to be with you today to show off what we’ve been working on. We are on the edge of innovation, and the possibilities are endless,” I parrot eagerly, trying to sound lively rather than manic. “So get ready to be part of something that will make you go, ‘Wow!’ This opportunity isn’t just big—it’s monumental.”

Just an hour later, the team has all the footage it needs. A couple of weeks later I receive two avatars of myself: one powered by the previous Express-1 model and the other made with the latest Express-2 technology. The latter, Synthesia claims, makes its synthetic humans more lifelike and true to the people they’re modeled on, complete with more expressive hand gestures, facial movements, and speech. You can see the results for yourself below. 

[Video courtesy of Synthesia]

Last year, Melissa found that her Express-1-powered avatar failed to match her transatlantic accent. Its range of emotions was also limited—when she asked her avatar to read a script angrily, it sounded more whiny than furious. In the months since, Synthesia has improved Express-1, but the version of my avatar made with the same technology blinks furiously and still struggles to synchronize body movements with speech.

By way of contrast, I’m struck by just how much my new Express-2 avatar looks like me: Its facial features mirror my own perfectly. Its voice is spookily accurate too, and although it gesticulates more than I do, its hand movements generally marry up with what I’m saying. 

But the tiny telltale signs of AI generation are still there if you know where to look. The palms of my hands are bright pink and as smooth as putty. Strands of hair hang stiffly around my shoulders instead of moving with me. Its eyes stare glassily ahead, rarely blinking. And although the voice is unmistakably mine, there’s something slightly off about my digital clone’s intonations and speech patterns. “This is great!” my avatar randomly declares, before slipping back into a saner register.

Anna Eiserbeck, a postdoctoral psychology researcher at the Humboldt University of Berlin who has studied how humans react to perceived deepfake faces, says she isn’t sure she’d have been able to identify my avatar as a deepfake at first glance.

But she would eventually have noticed something amiss. It’s not just the small details that give it away—my oddly static earring, the way my body sometimes moves in small, abrupt jerks. It’s something that runs much deeper, she explains.

“Something seemed a bit empty. I know there’s no actual emotion behind it— it’s not a conscious being. It does not feel anything,” she says. Watching the video gave her “this kind of uncanny feeling.” 

My digital clone, and Eiserbeck’s reaction to it, make me wonder how realistic these avatars really need to be. 

I realize that part of the reason I feel disconcerted by my avatar is that it behaves in a way I rarely have to. Its oddly upbeat register is completely at odds with how I normally speak; I’m a die-hard cynical Brit who finds it difficult to inject enthusiasm into my voice even when I’m genuinely thrilled or excited. It’s just the way I am. Plus, watching the videos on a loop makes me question if I really do wave my hands about that way, or move my mouth in such a weird manner. If you thought being confronted with your own face on a Zoom call was humbling, wait until you’re staring at a whole avatar of yourself. 

When Facebook was first taking off in the UK almost 20 years ago, my friends and I thought illicitly logging into each other’s accounts and posting the most outrageous or rage-inducing status updates imaginable was the height of comedy. I wonder if the equivalent will soon be getting someone else’s avatar to say something truly embarrassing: expressing support for a disgraced politician or (in my case) admitting to liking Ed Sheeran’s music. 

Express-2 remodels every person it’s presented with into a polished professional speaker with the body language of a hyperactive hype man. And while this makes perfect sense for a company focused on making glossy business videos, watching my avatar doesn’t feel like watching me at all. It feels like something else entirely.

How it works

The real technical challenge these days has less to do with creating avatars that match our appearance than with getting them to replicate our behavior, says Björn Schuller, a professor of artificial intelligence at Imperial College London. “There’s a lot to consider to get right; you have to have the right micro gesture, the right intonation, the sound of voice and the right word,” he says. “I don’t want an AI [avatar] to frown at the wrong moment—that could send an entirely different message.”

To achieve an improved level of realism, Synthesia developed a number of new audio and video AI models. The team created a voice cloning model to preserve the human speaker’s accent, intonation, and expressiveness—unlike other voice models, which can flatten speakers’ distinctive accents into generically American-sounding voices.

When a user uploads a script to Express-1, its system analyzes the words to infer the correct tone to use. That information is then fed into a diffusion model, which renders the avatar’s facial expressions and movements to match the speech. 

Alongside the voice model, Express-2 uses three other models to create and animate the avatars. The first generates an avatar’s gestures to accompany the speech fed into it by the Express-Voice model. A second evaluates how closely the input audio aligns with the multiple versions of the corresponding generated motion before selecting the best one. Then a final model renders the avatar with that chosen motion. 
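Described that way, the pipeline is a generate-score-select loop followed by a rendering step. Here is a toy sketch of that control flow in Python; every function here is a hypothetical stand-in for illustration, not Synthesia’s actual models or API.

```python
import random

# Rough structural sketch of the described pipeline: one model proposes
# candidate motions for the speech audio, a second scores audio-motion
# alignment and keeps the best candidate, a third renders the avatar with
# that motion. All functions are hypothetical stand-ins.

def gesture_model(audio: str, seed: int) -> list[float]:
    # Stand-in for the gesture-generation model.
    rng = random.Random(f"{audio}:{seed}")
    return [rng.random() for _ in range(10)]

def alignment_score(audio: str, motion: list[float]) -> float:
    # Stand-in for the model that scores how well a motion fits the audio.
    return -abs(sum(motion) - len(audio) / 10)

def render(avatar: str, motion: list[float], audio: str) -> str:
    # Stand-in for the rendering model that produces the final video.
    return f"<video: {avatar}, {len(audio)} chars of speech, motion {motion[0]:.2f}...>"

def animate(avatar: str, audio: str, n_candidates: int = 4) -> str:
    candidates = [gesture_model(audio, seed=i) for i in range(n_candidates)]
    best = max(candidates, key=lambda m: alignment_score(audio, m))
    return render(avatar, best, audio)

print(animate("reporter-avatar", "Welcome to today's training video."))
```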

This third rendering model is significantly more powerful than its Express-1 predecessor. Whereas the previous model had a few hundred million parameters, Express-2’s rendering model’s parameters number in the billions. This means it takes less time to create the avatar, says Youssef Alami Mejjati, Synthesia’s head of research and development:

“With Express-1, it needed to first see someone expressing emotions to be able to render them. Now, because we’ve trained it on much more diverse data and much larger data sets, with much more compute, it just learns these associations automatically without needing to see them.” 

Narrowing the uncanny valley

Although humanlike AI-generated avatars have been around for years, the recent boom in generative AI is making it ever easier and more affordable to create lifelike synthetic humans—and they’re already being put to work. Synthesia isn’t alone: AI avatar companies like Yuzu Labs, Creatify, Arcdads, and Vidyard give businesses the tools to quickly generate and edit videos starring either AI actors or artificial versions of members of staff, promising cost-effective ways to make compelling ads that audiences connect with. Similarly, AI-generated clones of livestreamers have exploded in popularity across China in recent years, partly because they can sell products 24/7 without getting tired or needing to be paid.

For now at least, Synthesia is “laser focused” on the corporate sphere. But it’s not ruling out expanding into new sectors such as entertainment or education, says Peter Hill, the company’s chief technical officer. In an apparent step toward this, Synthesia recently partnered with Google to integrate Google’s powerful new generative video model Veo 3 into its platform, allowing users to directly generate and embed clips into Synthesia’s videos. It suggests that in the future, these hyperrealistic artificial humans could take up starring roles in detailed universes with ever-changeable backdrops. 

At present this could, for example, involve using Veo 3 to generate a video of meat-processing machinery, with a Synthesia avatar next to the machines talking about how to use them safely. But future versions of Synthesia’s technology could result in educational videos customizable to an individual’s level of knowledge, says Alex Voica, head of corporate affairs and policy at Synthesia. For example, a video about the evolution of life on Earth could be tweaked for someone with a biology degree or someone with high-school-level knowledge. “It’s going to be such a much more engaging and personalized way of delivering content that I’m really excited about,” he says. 

The next frontier, according to Synthesia, will be avatars that can talk back, “understanding” conversations with users and responding in real time. Think ChatGPT, but with a lifelike digital human attached.

Synthesia has already added an interactive element by letting users click through on-screen questions during quizzes presented by its avatars. But it’s also exploring making them truly interactive: Future users could ask their avatar to pause and expand on a point, or ask it a question. “We really want to make the best learning experience, and that means through video that’s entertaining but also personalized and interactive,” says Alami Mejjati. “This, for me, is the missing part in online learning experiences today. And I know we’re very close to solving that.”

We already know that humans can—and do—form deep emotional bonds with AI systems, even with basic text-based chatbots. Combining agentic technology—which is already capable of navigating the web, coding, and playing video games unsupervised—with a realistic human face could usher in a whole new kind of AI addiction, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab.  

“If you make the system too realistic, people might start forming certain kinds of relationships with these characters,” he says. “We’ve seen many cases where AI companions have influenced dangerous behavior even when they are basically texting. If an avatar had a talking head, it would be even more addictive.”

Schuller agrees that avatars in the near future will be perfectly optimized to adjust their projected levels of emotion and charisma so that their human audiences will stay engaged for as long as possible. “It will be very hard [for humans] to compete with charismatic AI of the future; it’s always present, always has an ear for you, and is always understanding,” he says. “AI will change that human-to-human connection.”

As I pause and replay my Express-2 avatar, I imagine holding conversations with it—this uncanny, permanently upbeat, perpetually available product of pixels and algorithms that looks like me and sounds like me, but fundamentally isn’t me. Virtual Rhiannon has never laughed until she’s cried, or fallen in love, or run a marathon, or watched the sun set in another country. 

But, I concede, she could deliver a damned good presentation about why Ed Sheeran is the greatest musician ever to come out of the UK. And only my closest friends and family would know that it’s not the real me.

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

“Suddenly, I was watching him use ChatGPT,” says Declan, 31, who lives in Los Angeles. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”

Declan was so shocked he didn’t say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist. 

“I became the best patient ever,” he says, “because ChatGPT would be like, ‘Well, do you consider that your way of thinking might be a little too black and white?’ And I would be like, ‘Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I’m sure it was his dream session.”

Among the questions racing through Declan’s mind was, “Is this legal?” When Declan raised the incident with his therapist at the next session—“It was super awkward, like a weird breakup”—the therapist cried. He explained he had felt they’d hit a wall and had begun looking for answers elsewhere. “I was still charged for that session,” Declan says, laughing.

The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.

Suspicious sentiments

Declan is not alone, as I can attest from personal experience. When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email.

On closer inspection, though, her email seemed a little strange. It was in a new font, and the text displayed several AI “tells,” including liberal use of the Americanized em dash (we’re both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.

My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized ChatGPT likely had a hand in drafting the message—which my therapist confirmed when I asked her.

Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed. I also couldn’t entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.

When I took to the internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiqués from their therapists. Many, including Declan, had taken to Reddit to solicit emotional support and advice.

So had Hope, 25, who lives on the east coast of the US, and had direct-messaged her therapist about the death of her dog. She soon received a message back. It would have been consoling and thoughtful—expressing how hard it must be “not having him by your side right now”—were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.”

Hope says she felt “honestly really surprised and confused.” “It was just a very strange feeling,” she says. “Then I started to feel kind of betrayed. … It definitely affected my trust in her.” This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues.”

Hope had believed her therapist to be competent and empathetic, and therefore “never would have suspected her to feel the need to use AI.” Her therapist was apologetic when confronted, and she explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment. 

A disclosure dilemma 

Betrayal or not, there may be some merit to the argument that AI could help therapists better communicate with their clients. A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy. Not only was a panel of 830 participants unable to distinguish between the human and AI responses, but AI responses were rated as conforming better to therapeutic best practice. 

However, when participants suspected responses to have been written by ChatGPT, they ranked them lower. (Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.) 

Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI. The mere suspicion of its use was found to rapidly sour goodwill.

“People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”

In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing responses generated by GPT-3 with ones drafted by humans. The company discovered that users tended to rate the AI-generated responses more positively. The revelation that users had unwittingly been experimented on, however, sparked outrage.

The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses. In a Medium post, photographer Brendan Keen said his BetterHelp therapist admitted to using AI in their replies, leading to “an acute sense of betrayal” and persistent worry, despite reassurances, that his data privacy had been breached. He ended the relationship thereafter. 

A BetterHelp spokesperson told us the company “prohibits therapists from disclosing any member’s personal or health information to third-party artificial intelligence, or using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.”

All these examples relate to undisclosed AI usage. Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential. “We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale,” he says. People then receive AI-generated messages with that prior context, rather than assuming their therapist is “trying to be sneaky.”

Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to 2023 research conducted by the American Psychological Association. That context makes the appeal of AI-powered tools obvious. 

But lack of disclosure risks permanently damaging trust. Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated. “But I always thought about the AI Incident whenever I saw her,” she says.

Risking patient privacy

Beyond the transparency issue, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.

“I think these tools might be really valuable for learning,” she says, noting that therapists should continue developing their expertise over the course of their career. “But I think we have to be super careful about patient data.” Morris calls Declan’s experience “alarming.” 

Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context. (HIPAA is a set of US federal regulations that protect people’s sensitive health information.)

“This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI,” she says.

In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool. “I expect some therapists may share this misconception,” she says.

As a relatively open person, Declan says, he wasn’t completely distraught to learn how his therapist was using ChatGPT. “Personally, I am not thinking, ‘Oh, my God, I have deep, dark secrets,’” he said. But it did still feel violating: “I can imagine that if I was suicidal, or on drugs, or cheating on my girlfriend … I wouldn’t want that to be put into ChatGPT.”

When using AI to help with email, “it’s not as simple as removing obvious identifiers such as names and addresses,” says Emami-Naeini. “Sensitive information can often be inferred from seemingly nonsensitive details.”

She adds, “Identifying and rephrasing all potential sensitive data requires time and expertise, which may conflict with the intended convenience of using AI tools. In all cases, therapists should disclose their use of AI to patients and seek consent.” 
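That inference risk is easy to illustrate. The toy redactor below (a sketch for illustration only; not a real safeguard, and nothing like HIPAA compliance) strips names, emails, and phone numbers, yet the quasi-identifiers it leaves behind could still single a patient out.

```python
import re

# Toy redactor of the kind a time-pressed therapist might imagine is
# sufficient before pasting a message into a chatbot. Illustrative only;
# NOT a privacy safeguard and NOT HIPAA compliance.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.?\s[A-Z][a-z]+\b"), "[NAME]"),
]

def redact(text: str) -> str:
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

message = ("Dr. Shaw, my client, the only oboist in the city orchestra, "
           "just back from a stay at the Mayo Clinic, wrote to me at "
           "oboe.fan@example.com about her father's bankruptcy.")
print(redact(message))
# The name and email disappear, but "only oboist in the city orchestra"
# plus the clinic stay could still identify her: exactly the inference
# risk Emami-Naeini describes.
```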

A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialized tools to therapists, such as AI-assisted note-taking, training, and transcription services. These companies say they are HIPAA compliant and store data securely using encryption and pseudonymization where necessary. But many therapists are still wary of the privacy implications—particularly of services that necessitate the recording of entire sessions.

“Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data,” says Emami-Naeini.

A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients’ treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people’s experiences of child abuse and addiction problems.

What therapists stand to lose

In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.”

“I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”

Can an AI doppelgänger help me do my job?

Everywhere I look, I see AI clones. On X and LinkedIn, “thought leaders” and influencers offer their followers a chance to ask questions of their digital replicas. OnlyFans creators are having AI models of themselves chat, for a price, with followers. “Virtual human” salespeople in China are reportedly outselling real humans. 

Digital clones—AI models that replicate a specific person—package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention. But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that ‘thinks’ like you do. 

Who are they for? Delphi, a startup that recently raised $16 million from funders including Anthropic and actor/director Olivia Wilde’s venture capital firm, Proximity Ventures, helps famous people create replicas that can speak with their fans in both chat and voice calls. It feels like MasterClass—the platform for instructional seminars led by celebrities—vaulted into the AI age. On its website, Delphi writes that modern leaders “possess potentially life-altering knowledge and wisdom, but their time is limited and access is constrained.”

It has a library of official clones created by famous figures that you can speak with. Arnold Schwarzenegger, for example, told me, “I’m here to cut the crap and help you get stronger and happier,” before informing me cheerily that I’ve now been signed up to receive the Arnold’s Pump Club newsletter. Even if his or other celebrities’ clones fall short of Delphi’s lofty vision of spreading “personalized wisdom at scale,” they at least seem to serve as a funnel to find fans, build mailing lists, or sell supplements.

But what about for the rest of us? Could well-crafted clones serve as our stand-ins? I certainly feel stretched thin at work sometimes, wishing I could be in two places at once, and I bet you do too. I could see a replica popping into a virtual meeting with a PR representative, not to trick them into thinking it’s the real me, but simply to take a brief call on my behalf. A recording of this call might summarize how it went. 

To find out, I tried making a clone. Tavus, a Y Combinator alum that raised $18 million last year, will build a video avatar of you (plans start at $59 per month) that can be coached to reflect your personality and can join video calls. These clones have the “emotional intelligence of humans, with the reach of machines,” according to the company. “Reporter’s assistant” does not appear on the company’s site as an example use case, but it does mention therapists, physician’s assistants, and other roles that could benefit from an AI clone.

For Tavus’s onboarding process, I turned on my camera, read through a script to help it learn my voice (which also acted as a waiver, with me agreeing to lend my likeness to Tavus), and recorded one minute of me just sitting in silence. Within a few hours, my avatar was ready. Upon meeting this digital me, I found it looked and spoke like I do (though I hated its teeth). But faking my appearance was the easy part. Could it learn enough about me and what topics I cover to serve as a stand-in with minimal risk of embarrassing me?

Via a helpful chatbot interface, Tavus walked me through how to craft my clone’s personality, asking what I wanted the replica to do. It then helped me formulate instructions that became its operating manual. I uploaded three dozen of my stories that it could use to reference what I cover. It may have benefited from having more of my content—interviews, reporting notes, and the like—but I would never share that data for a host of reasons, not the least of which being that the other people who appear in it have not consented to their sides of our conversations being used to train an AI replica.

So in the realm of AI—where models learn from entire libraries of data—I didn’t give my clone all that much to learn from, but I was still hopeful it had enough to be useful. 

Alas, conversationally it was a wild card. It acted overly excited about story pitches I would never pursue. It repeated itself, and it kept saying it was checking my schedule to set up a meeting with the real me, which it could not do as I never gave it access to my calendar. It spoke in loops, with no way for the person on the other end to wrap up the conversation. 

These are common early quirks, Tavus’s cofounder Quinn Favret told me. The clones typically rely on Meta’s Llama model, which “often aims to be more helpful than it truly is,” Favret says, and developers building on top of Tavus’s platform are often the ones who set instructions for how the clones finish conversations or access calendars.

For my purposes, it was a bust. To be useful to me, my AI clone would need to show at least some basic instincts for understanding what I cover, and at the very least not creep out whoever’s on the other side of the conversation. My clone fell short.

Such a clone could be helpful in other jobs, though. If you’re an influencer looking for ways to engage with more fans, or a salesperson for whom work is a numbers game and a clone could give you a leg up, it might just work. You run the risk that your replica could go off the rails or embarrass the real you, but the tradeoffs might be reasonable. 

Favret told me some of Tavus’s bigger customers are companies using clones for health-care intake and job interviews. Replicas are also being used in corporate role-play, for practicing sales pitches or having HR-related conversations with employees, for example.

But companies building clones are promising that they will be much more than cold-callers or telemarketing machines. Delphi says its clones will offer “meaningful, personal interactions at infinite scale,” and Tavus says its replicas have “a face, a brain, and memories” that enable “meaningful face-to-face conversations.” Favret also told me a growing number of Tavus’s customers are building clones for mentorship and even decision-making, like AI loan officers who use clones to qualify and filter applicants.

Which is sort of the crux of it. Teaching an AI clone discernment, critical thinking, and taste—never mind the quirks of a specific person—is still the stuff of science fiction. That’s all fine when the person chatting with a clone is in on the bit (most of us know that Schwarzenegger’s replica, for example, will not coach us to be better athletes).

But as companies polish clones with “human” features and exaggerate their capabilities, I worry that people chasing efficiency will start using their replicas at best for roles that are cringeworthy, and at worst for making decisions they should never be entrusted with. In the end, these models are designed for scale, not fidelity. They can flatter us, amplify us, even sell for us—but they can’t quite become us.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Here’s how we picked this year’s Innovators Under 35

Next week, we’ll publish our 2025 list of Innovators Under 35, highlighting smart and talented people who are working in many areas of emerging technology. This new class features 35 accomplished founders, hardware engineers, roboticists, materials scientists, and others who are already tackling tough problems and making big moves in their careers. All are under the age of 35. 

One is developing a technology to reduce emissions from shipping, while two others are improving fertility treatments and creating new forms of contraception. Another is making it harder for people to maliciously share intimate images online. And quite a few are applying artificial intelligence to their respective fields in novel ways. 

We’ll also soon reveal our 2025 Innovator of the Year, whose technical prowess is helping physicians diagnose and treat critically ill patients more quickly. What’s more (here’s your final hint), our winner even set a world record as a result of this work. 

MIT Technology Review first published a list of Innovators Under 35 in 1999. It’s a grand tradition for us, and we often follow the work of various featured innovators for years, even decades, after they appear on the list. So before the big announcement, I want to take a moment to explain how we select the people we recognize each year. 

Step 1: Call for nominations

Our process begins with a call for nominations, which typically goes out in the final months of the previous year and is open to anyone, anywhere in the world. We encourage people to nominate themselves, which takes just a few minutes. This method helps us discover people doing important work that we might not otherwise encounter. 

This year we had 420 nominations. Two-thirds of our candidates were put forward by someone else and one-third nominated themselves. We received nominations for people located in about 40 countries. Nearly 70% were based in the United States, with the UK, Switzerland, China, and the United Arab Emirates having the next-highest concentrations, in that order.

After nominations close, a few editors spend several weeks reviewing the nominees and selecting semifinalists. During this phase, we look for people who have developed practical solutions to societal issues or made important scientific advances that could translate into new technologies. Their work should have the potential for broad impact—it can’t be niche or incremental. And what’s unique about their approach must be clear.

Step 2: Semifinalist applications 

This year, we winnowed our initial list of hundreds of nominees to 108 semifinalists. Then we asked those entrants for more information to help us get to know them better and evaluate their work. 

We request three letters of reference and a résumé from each semifinalist, and we ask all of them to answer a few short questions about their work. We also give them the option to share a video or pass along relevant journal articles or other links to help us learn more about what they do.

Step 3: Expert judges weigh in

Next, we bring in dozens of experts to vet the semifinalists. This year, 38 judges evaluated and scored the applications. We match the contenders with judges who work in similar fields whenever possible. At least two judges review each entrant, though most are seen by three. 

All these judges volunteer their time, and some return to help year after year. A few of our longtime judges include materials scientists Yet-Ming Chiang (MIT) and Julia Greer (Caltech), MIT neuroscientist Ed Boyden, and computer scientist Ben Zhao of the University of Chicago. 

John Rogers, a materials scientist and biomedical engineer at Northwestern University, has been a judge for more than a decade (and was featured on our very first Innovators list, in 1999). Here’s what he had to say about why he stays involved: “This award is compelling because it recognizes young people with scientific achievements that are not only of fundamental interest but also of practical significance, at the highest levels.” 

Step 4: Editors make the final calls 

In a final layer of vetting, editors who specialize in covering biotechnology, climate and energy, and artificial intelligence review the semifinalists whom judges scored highly in their respective areas. Staff editors and reporters can also nominate people they’ve come across in their coverage, and we add them to the mix for consideration. 

Last, a small team of senior editors reviews all the semifinalists and the judges’ scores, as well as our own staff’s recommendations, and selects 35 honorees. We aim for a good combination of people from a variety of disciplines working in different regions of the world. And we take a staff vote to pick an Innovator of the Year—someone whose work we particularly admire. 

In the end, it’s impossible to include every deserving individual on our list. But by incorporating both external nominations and outside expertise from our judges, we aim to make the evaluation process as rigorous and open as possible.  

So who made the cut this year? Come back on September 8 to find out.

RFK Jr’s plan to improve America’s diet is missing the point

A lot of Americans don’t eat well. And they’re paying for it with their health. A diet high in sugar, sodium, and saturated fat can increase the risk of problems like diabetes, heart disease, and kidney disease, to name a few. And those are among the leading causes of death in the US.

This is hardly news. But this week Robert F. Kennedy Jr., who heads the US Department of Health and Human Services, floated a new solution to the problem. Kennedy and education secretary Linda McMahon think that teaching medical students more about the role of nutrition in health could help turn things around.

“I’m working with Linda on forcing medical schools … to put nutrition into medical school education,” Kennedy said during a cabinet meeting on August 26. The next day, HHS released a statement calling for “increased nutrition education” for medical students.

“We can reverse the chronic-disease epidemic simply by changing our diets and lifestyles,” Kennedy said in an accompanying video statement. “But to do that, we need nutrition to be a basic part of every doctor’s training.”

It certainly sounds like a good idea. If more Americans ate a healthier diet, we could expect to see a decrease in those diseases. But this framing of America’s health crisis is overly simplistic, especially given that plenty of the administration’s other actions have directly undermined health in multiple ways—including by canceling a vital nutrition education program.

At any rate, there are other, more effective ways to tackle the chronic-disease crisis.

The biggest killers, heart disease and stroke, are responsible for more than a third of deaths, according to the US Centers for Disease Control and Prevention. A healthy diet can reduce your risk of developing those conditions. And it makes total sense to educate the future doctors of America about nutrition.

Medical bodies are on board with the idea, too. “The importance of nutrition in medical education is increasingly clear, and we support expanded, evidence-based instruction to better equip physicians to prevent and manage chronic disease and improve patient outcomes,” David H. Aizuss, chair of the American Medical Association’s board of trustees, said in a statement.

But it’s not as though medical students aren’t getting any nutrition education. And that training has increased in the last five years, according to surveys carried out by the Association of American Medical Colleges.

Kennedy has referred to a 2021 survey suggesting that medical students in the US get only around one hour of nutrition education per year. But the AAMC argues that nutrition education increasingly happens through “integrated experiences” rather than stand-alone lectures.

“Medical schools understand the critical role that nutrition plays in preventing, managing, and treating chronic health conditions, and incorporate significant nutrition education across their required curricula,” Alison J. Whelan, AAMC’s chief academic officer, said in a statement.

That’s not to say there isn’t room for improvement. Gabby Headrick, a food systems dietitian and associate director of food and nutrition policy at George Washington University’s Institute for Food Safety & Nutrition Security, thinks nutritionists could take a more prominent role in patient care, too.

But it’s somewhat galling for the administration to choose medical education as its focus given the recent cuts in federal funding that will affect health. For example, funding for the National Diabetes Prevention Program, which offers support and guidance to help thousands of people adopt healthy diets and exercise routines, was canceled by the Trump administration in March.

The focus on medical schools also overlooks one of the biggest factors behind poor nutrition in the US: access to healthy food. A recent survey by the Pew Research Center found that increased costs make it harder for most Americans to eat well. Twenty percent of the people surveyed acknowledged that their diets were not healthy.

“So many people know what a healthy diet is, and they know what should be on their plate every night,” says Headrick, who has researched this issue. “But the vast majority of folks just truly do not have the money or the time to get the food on the plate.”

The Supplemental Nutrition Assistance Program (SNAP) has been helping low-income Americans afford some of those healthier foods. It supported over 41 million people in 2024. But under the Trump administration’s tax and spending bill, the program is set to lose around $186 billion in funding over the next 10 years.

Kennedy’s focus is on education. And it just so happens that there is a nutrition education program in place—one that helps people of all ages learn not only what healthy foods are, but how to source them on a budget and use them to prepare meals.

SNAP-Ed, as it’s known, has already provided this support to millions of Americans. Under the Trump administration, it is set to be eliminated.

It is difficult to see how these actions are going to help people adopt healthier diets. What might be a better approach? I put the question to Headrick: If she were in charge, what policies would she enact?

“Universal health care,” she told me. Being able to access health care without risking financial hardship not only improves health outcomes and life expectancy; it also spares people from medical debt—something that affects around 40% of adults in the US, according to a recent survey.

And the Trump administration’s plans to cut federal health spending by about a trillion dollars over the next decade certainly aren’t going to help with that. All told, around 16 million people could lose their health insurance by 2034, according to estimates by the Congressional Budget Office.

“The evidence suggests that if we cut folks’ social benefit programs, such as access to health care and food, we are going to see detrimental impacts,” says Headrick. “And it’s going to cause an increased burden of preventable disease.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This American nuclear company could help India’s thorium dream

For just the second time in nearly two decades, the United States has granted an export license to an American company planning to sell nuclear technology to India, MIT Technology Review has learned. The decision to greenlight Clean Core Thorium Energy’s license is a major step toward closer cooperation between the two countries on atomic energy and marks a milestone in the development of thorium as an alternative to uranium for fueling nuclear reactors. 

With the license issued last week, thorium fuel produced by the Chicago-based company can be shipped to India, where it could be loaded into the cores of existing reactors. Once Clean Core receives final approval from Indian regulators, it will become one of the first American companies to sell nuclear technology to India, just as the world’s most populous nation has started relaxing strict rules that have long kept the US private sector from entering its atomic power industry.

“This license marks a turning point, not just for Clean Core but for the US-India civil nuclear partnership,” says Mehul Shah, the company’s chief executive and founder. “It places thorium at the center of the global energy transformation.”

Thorium has long been seen as a good alternative to uranium because it’s more abundant, produces both smaller amounts of long-lived radioactive waste and fewer byproducts with centuries-long half-lives, and reduces the risk that materials from the fuel cycle will be diverted into weapons manufacturing. 

But at least some uranium fuel is needed to make thorium atoms split, making it an imperfect replacement. It’s also less well suited for use in the light-water reactors that power the vast majority of commercial nuclear plants worldwide. And in any case, the complex, highly regulated nuclear industry is extremely resistant to change.

For India, which has scant uranium reserves but abundant deposits of thorium, the latter metal has been part of a long-term strategy for reducing dependence on imported fuels. The nation started negotiating a nuclear export treaty with the US in the early 2000s, and a 123 Agreement—a special, Senate-approved treaty the US requires with another country before sending it any civilian nuclear products—was approved in 2008.

A new approach

While most thorium advocates have envisioned new reactors designed to run on this fuel, which would mean rebuilding the nuclear industry from the ground up, Shah and his team took a different approach. Clean Core created a new type of fuel that blends thorium with a more concentrated type of uranium called HALEU (high-assay low-enriched uranium). This blended fuel can be used in India’s pressurized heavy-water reactors, which make up the bulk of the country’s existing fleet and many of the new units under development now. 

Thorium isn’t a fissile material itself, meaning its atoms aren’t inherently unstable enough for an extra neutron to easily split the nuclei and release energy. But the metal has what’s known as “fertile properties,” meaning it can absorb neutrons and transform into the fissile material uranium-233. Uranium-233 produces fewer long-lived radioactive isotopes than the uranium-235 that makes up the fissionable part of traditional fuel pellets. Most commercial reactors run on low-enriched uranium, which is about 5% U-235. When the fuel is spent, roughly 95% of the energy potential is left in the metal. And what remains is a highly toxic cocktail of long-lived radioactive isotopes such as cesium-137 and plutonium-239, which keep the waste dangerous for tens of thousands of years. Another concern is that the plutonium could be extracted for use in weapons. 
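
Concretely, the breeding sequence the article alludes to runs through two beta decays (the half-lives are standard nuclear data):

    Th-232 + neutron → Th-233
    Th-233 → Pa-233 + beta particle   (half-life: about 22 minutes)
    Pa-233 → U-233 + beta particle    (half-life: about 27 days)

It is this bred U-233, not the thorium itself, that fissions and releases energy.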

Enriched up to 20%, HALEU allows reactors to extract more of the available energy and thus reduce the volume of waste. Clean Core’s fuel goes further: The HALEU provides the initial spark to ignite fertile thorium and triggers a reaction that can burn much hotter and utilize the vast majority of the material in the core, as a study published last year in the journal Nuclear Engineering and Design showed.
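
A back-of-the-envelope calculation shows how higher burnup translates into less waste: the mass of spent fuel discharged per unit of energy scales inversely with burnup. The burnup figures below are illustrative assumptions chosen for the arithmetic, not numbers from Clean Core or the study.

    # Spent-fuel mass per unit of energy scales inversely with burnup.
    # Burnup values below are illustrative assumptions, not figures from the article.
    natural_uranium_phwr_burnup = 7.0   # GWd per tonne of fuel (assumed typical PHWR value)
    thorium_haleu_burnup = 50.0         # GWd per tonne (assumed for a high-burnup core)

    # Tonnes of spent fuel produced per gigawatt-day of energy
    waste_per_gwd_before = 1.0 / natural_uranium_phwr_burnup
    waste_per_gwd_after = 1.0 / thorium_haleu_burnup

    reduction = 1.0 - waste_per_gwd_after / waste_per_gwd_before
    print(f"Spent-fuel volume reduction: {reduction:.0%}")  # prints 86%

Under these assumed numbers, the reduction lands near the "more than 85%" the company claims, though the real comparison also depends on which isotopes end up in the waste.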

“Thorium provides attributes needed to achieve higher burnups,” says Koroush Shirvan, an MIT professor of nuclear science and engineering who helped design Clean Core’s fuel assemblies. “It is enabling technology to go to higher burnups, which reduces your spent fuel volume, increases your fuel efficiency, and reduces the amount of uranium that you need.” 

Compared with traditional uranium fuel, Clean Core says, its fuel reduces waste by more than 85% while avoiding the most problematic isotopes produced during fission. “The result is a safer, more sustainable cycle that reframes nuclear power not as a source of millennia-long liabilities but as a pathway to cleaner energy and a viable future fuel supply,” says Milan Shah, Clean Core’s chief operating officer and Mehul’s son.

Pressurized heavy-water reactors are particularly well suited to thorium because heavy water—a version of H2O in which the ordinary hydrogen atoms are replaced by deuterium, a hydrogen isotope with an extra neutron—absorbs fewer neutrons during the fission process, increasing efficiency by allowing more neutrons to be captured by the thorium.

There are 46 so-called PHWRs operating worldwide: 17 in Canada, 19 in India, three each in Argentina and South Korea, and two each in China and Romania, according to data from the International Atomic Energy Agency. In 1954, India set out a three-stage development plan for nuclear power that involved eventually phasing thorium into the fuel cycle for its fleet. 

Yet in the 56 years since India built its first commercial nuclear plant, its state-controlled industry has remained relatively shut off to the private sector and the rest of the world. When the US signed the 123 Agreement with India in 2008, the moment heralded an era in which the subcontinent could become a testing ground for new American reactor designs. 

In 2010, however, India passed the Civil Liability for Nuclear Damage Act. The legislation was based on what lawmakers saw as legal shortcomings exposed by the 1984 Bhopal chemical factory disaster, when the Indian subsidiary of the American industrial giant Union Carbide (later acquired by Dow Chemical) avoided major payouts to the victims of a catastrophe that killed thousands. Under this law, responsibility for an accident at an Indian nuclear plant would fall on suppliers. The statute effectively killed any exports to India, since few companies could shoulder that burden. Only Russia’s state-owned Rosatom charged ahead with exporting reactors to India.

But things are changing. In a joint statement issued after a February 2025 summit, Prime Minister Narendra Modi and President Donald Trump “announced their commitment to fully realise the US-India 123 Civil Nuclear Agreement by moving forward with plans to work together to build US-designed nuclear reactors in India through large scale localisation and possible technology transfer.” 

In March 2025, US federal officials gave the nuclear developer Holtec International an export license to sell Indian companies its as-yet-unbuilt small modular reactors, which are based on the light-water reactor design used in the US. In April, the Indian government suggested it would reform the nuclear liability law to relax rules on foreign companies in hopes of drawing more overseas developers. Last month, a top minister confirmed that the Modi administration would overhaul the law. 

“For India, the thing they need to do is get another international vendor in the marketplace,” says Chris Gadomski, the chief nuclear analyst at the consultancy BloombergNEF.

Path of least resistance

But Shah sees larger potential for Clean Core. Unlike Holtec, whose export license was endorsed by the two Mumbai-based industrial giants Larsen & Toubro and Tata Consulting Engineers, Clean Core had its permit approved by two of India’s atomic regulators and its main state-owned nuclear company. By focusing on fuel rather than new reactors, Clean Core could become a vendor to the majority of the existing plants already operating in India. 

Its technology diverges not only from that of other US nuclear companies but also from the approach used in China. Last year, China made waves by bringing its first thorium-fueled reactor online. This enabled it to establish a new foothold in a technology the US had invented and then abandoned, and it gave Beijing another leg up in atomic energy.

But scaling that technology will require building out a whole new kind of reactor. That comes at a cost. A recent Johns Hopkins University study found that China’s success in building nuclear reactors stemmed in large part from standardization and repetition of successful designs, virtually all of which have been light-water reactors. Using thorium in existing heavy-water reactors lowers the bar for popularizing the fuel, according to the younger Shah. 

“We think ours is the path of least resistance,” Milan Shah says. “Maybe not being completely revolutionary in the way you look at nuclear today, but incredibly evolutionary to progress humanity forward.” 

The company has plans to go beyond pressurized heavy-water reactors. Within two years, the elder Shah says, Clean Core plans to design a version of its fuel that could work in the light-water reactors that make up the entire US fleet of 94. But it’s not a simple conversion. For starters, there’s the size: While the PHWR fuel rods are about 50 centimeters in length, the rods that go into light-water reactors are roughly four meters long. Then there’s the history of challenges posed by light water’s absorption of neutrons that could otherwise be captured by the thorium to breed fissile uranium-233.

For Anil Kakodkar, the former chairman of India’s Atomic Energy Commission and a mentor to Shah, popularizing thorium could help rectify one of the darker chapters in his country’s nuclear development. In 1974, India became the first country to successfully test an atomic weapon after the signing of the global Treaty on the Non-Proliferation of Nuclear Weapons. New Delhi was never a signatory to the pact. But the milestone prompted neighboring Pakistan to develop its own weapons.

In response, President Jimmy Carter tried to demonstrate Washington’s commitment to reversing the Cold War arms race by sacrificing the first US effort to commercialize nuclear waste recycling, since the technology to separate plutonium and other radioisotopes from uranium in spent fuel was widely seen as a potential new source of weapons-grade material. By running its own reactors on thorium, Kakodkar says, India can chart a new path for newcomer nations that want to harness the power of the atom without stoking fears that nuclear weapons capability will spread. 

“The proliferation concerns will be dismissed to a significant extent, allowing more rapid growth of nuclear power in emerging countries,” he says. “That will be a good thing for the world at large.” 

Alexander C. Kaufman is a reporter who has covered energy, climate change, pollution, business, and geopolitics for more than a decade.