Meet the Ethiopian entrepreneur who is reinventing ammonia production

Iwnetim Abate is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

“I’m the only one who wears glasses and has eye problems in the family,” Iwnetim Abate says with a smile as sun streams in through the windows of his MIT office. “I think it’s because of the candles.”

In the small town in Ethiopia where he grew up, Abate’s family had electricity, but it was unreliable. So, for several days each week when they were without power, Abate would finish his homework by candlelight.

Today, Abate, 32, is an assistant professor at MIT in the department of materials science and engineering. Part of his research focuses on sodium-ion batteries, which could be cheaper than the lithium-based ones that typically power electric vehicles and grid installations. He’s also pursuing a new research path, examining how to harness the heat and pressure under the Earth’s surface to make ammonia, a chemical used in fertilizer and considered a potential green fuel.

Growing up without the ubiquitous access to electricity that many people take for granted shaped the way Abate thinks about energy issues, he says. He recalls rushing to dry out his school uniform over a fire before he left in the morning. One of his chores was preparing cow dung to burn as fuel—the key is strategically placing holes to ensure proper drying, he says.

Abate’s desire to devote his attention to energy crystallized in a high school chemistry class on fuel cells. “It was like magic,” he says, to learn it’s possible to basically convert water into energy. “Sometimes science is magic, right?”

Abate scored the highest of any student in Ethiopia on the national exam the year he took it, and he knew he wanted to go to the US to further his education. But actually getting there proved to be a challenge. 

Abate applied to US colleges for three years before he was granted admission to Concordia College Moorhead, a small liberal arts college, with a partial scholarship. To raise the remaining money, he reached out to various companies and wealthy people across Ethiopia. He received countless rejections but didn’t let that faze him. He laughs recalling how guards would chase him off when he dropped by prospects’ homes in person. Eventually, a family friend agreed to help.

When Abate finally made it to the Minnesota college, he walked into a room in his dorm building and the lights turned on automatically. “I both felt happy to have all this privilege and I felt guilty at the same time,” he says.

Lab notes

His college wasn’t a research institute, so Abate quickly set out to get into a laboratory. He reached out to Sossina Haile, then at the California Institute of Technology, to ask about a summer research position.

Haile, now at Northwestern University, recalls thinking that Abate was particularly eager. As a visible Ethiopian scientist, she gets a lot of email requests, but his stood out. “No obstacle was going to stand in his way,” she says. It was risky to take on a young student with no research experience who’d only been in the US for a year, but she offered him a spot in her lab.

Abate spent the summer working on materials for use in solid oxide fuel cells. He returned for the following summer, then held a string of positions in energy-materials research, including at IBM and Los Alamos National Lab, before completing his graduate degree at Stanford and postdoctoral work at the University of California, Berkeley.


He joined the MIT faculty in 2023 and set out to build a research group of his own. Today, there are two major focuses of his lab. One is sodium-ion batteries, which are a popular alternative to the lithium-based cells used in EVs and grid storage installations. Sodium-ion batteries don’t require the kinds of critical minerals lithium-ion batteries do, which can be both expensive and tied up by geopolitics.  

One major stumbling block for sodium-ion batteries is their energy density. It’s possible to improve energy density by operating at higher voltages, but some of the materials used tend to degrade quickly at high voltages. That limits the total energy density of the battery, so it’s a problem for applications like electric vehicles, where a low energy density would restrict range.

Abate’s team is developing materials that could extend the lifetime of sodium-ion batteries while avoiding the need for nickel, which is considered a critical mineral in the US. The team is examining additives and testing materials-engineering techniques to help the batteries compete with lithium-ion cells.

Irons in the fire

Another vein of Abate’s work is in some ways a departure from his history in batteries and fuel cells. In January, his team published research describing a process to make ammonia underground, using naturally occurring heat and pressure to drive the necessary chemical reactions.

Today, making ammonia generates between 1% and 2% of global greenhouse gas emissions. It’s primarily used to fertilize crops, but it’s also being considered as a fuel for sectors like long-distance shipping.

Abate cofounded a company called Addis Energy to commercialize the research, alongside MIT serial entrepreneur Yet-Ming Chiang and a pair of oil industry experts. (Addis means “new” in Amharic, the official language of Ethiopia.) For an upcoming pilot, the company aims to build an underground reactor that can produce ammonia. 

When he’s not tied up in research or the new startup, Abate runs programs for African students. In 2017, he cofounded an organization called Scifro, which runs summer school programs in Ethiopia and plans to expand to other countries, including Rwanda. The programs focus on providing mentorship and educating students about energy and medical devices, the latter being the specialty of his cofounder.

While Abate holds a position at one of the world’s most prestigious universities and serves as chief science officer of a buzzy startup, he’s quick to give credit to those around him. “It takes a village to build something, and it’s not just me,” he says.

Abate often thinks about his friends, family, and former neighbors in Ethiopia as he works on new energy solutions. “Of course, science is beautiful, and we want to make an impact,” he says. “Being good at what you do is important, but ultimately, it’s about people.”

How Yichao “Peak” Ji became a global AI app hitmaker

Yichao “Peak” Ji is one of MIT Technology Review’s 2025 Innovators Under 35. Meet the rest of this year’s honorees. 

When Yichao Ji—also known as “Peak”—appeared in a launch video for Manus in March, he didn’t expect it to go viral. Speaking in fluent English, the 32-year-old introduced the AI agent built by Chinese startup Butterfly Effect, where he serves as chief scientist. 

The video was not an elaborate production—it was directed by cofounder Zhang Tao and filmed in a corner of their Beijing office. But something about Ji’s delivery, and the vision behind the product, cut through the noise. The product, then still an early preview available only through invite codes, spread from the Chinese internet to the rest of the world in a matter of days. Within a week of its debut, Manus had attracted a waiting list of around 2 million people.

At first glance, Manus works like most chatbots: Users can ask it questions in a chat window. However, besides providing answers, it can also carry out tasks (for example, finding an apartment that meets specified criteria within a certain budget). It does this by breaking tasks down into steps, then using a cloud-based virtual machine equipped with a browser and other tools to execute them—perusing websites, filling in forms, and so on.
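In engineering terms, that is a plan-and-execute agent loop. Manus’s actual code and interfaces are not public, so the short Python sketch below is purely illustrative—every function name here is an invented stand-in, not anything from the real system:

# Hypothetical sketch of the agent loop described above; none of these
# names or functions come from Manus itself.

def plan(task: str) -> list[str]:
    # Stand-in for a language-model call that decomposes the task.
    return [
        f"search listings matching: {task}",
        "filter results by budget and criteria",
        "summarize the best matches",
    ]

def run_step(step: str) -> str:
    # Stand-in for tool use (browsing, form-filling) in a cloud VM.
    return f"completed: {step}"

def run_agent(task: str) -> list[str]:
    # Break the task into steps, then execute each one with tools.
    return [run_step(step) for step in plan(task)]

for result in run_agent("two-bedroom apartment under $2,500"):
    print(result)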

Ji is the technical core of the team. Now based in Singapore, he leads product and infrastructure development as the company pushes forward with its global expansion. 

Despite his relative youth, Ji has over a decade of experience building products that merge technical complexity with real-world usability. That earned him credibility among both engineers and investors—and put him at the forefront of a rising class of Chinese technologists with AI products and global ambitions. 

Serial builder

The son of a professor and an IT professional, Ji moved to Boulder, Colorado, at age four for his father’s visiting scholar post, returning to Beijing in second grade.

His fluent English set him apart early on, but it was an elementary school robotics team that sparked his interest in programming. By high school, he was running the computer club, teaching himself how to build operating systems, and drawing inspiration from Bill Gates, Linux, and open-source culture. He describes himself as a lifelong Apple devotee, and it was Apple’s launch of the App Store in 2008 that ignited his passion for development.

In 2010, as a high school sophomore, Ji created the Mammoth browser, a customizable third-party iPhone browser. It quickly became the most-downloaded third-party browser developed by an individual in China and earned him the Macworld Asia Grand Prize in 2011. International tech site AppAdvice called it a product that “redefined the way you browse the internet.” At age 20, he was on the cover of Forbes magazine and made its “30 Under 30” list. 


During his teenage years, Ji developed several other iOS apps, including a budgeting tool designed for Hasbro’s Monopoly game, which sold well—until it attracted a legal notice for using the trademarked name. But that early brush with a multinational legal team didn’t put Ji off a career in tech. If anything, he says, it sharpened his instincts for both product and risk.

In 2012, Ji launched his own company, Peak Labs, and later led the development of Magi, a search engine. The tool extracted information from across the web to answer queries—conceptually similar to today’s AI-powered search, but powered by a custom language model. 

Magi was briefly popular, drawing millions of users in its first month, but consumer adoption didn’t stick. It did, however, attract enterprise interest, and Ji adapted it for B2B use before selling it in 2022.

AI acumen 

Manus would become his next act—and a more ambitious one. His cofounders, Zhang Tao and Xiao Hong, complement Ji’s technical core with product know-how, storytelling, and organizational savvy. Both Xiao and Ji are serial entrepreneurs who have been backed by venture capital firm ZhenFund multiple times. Together, they represent the kind of long-term collaboration and international ambition that increasingly defines China’s next wave of entrepreneurs.


People who have worked with Ji describe him as a clear thinker, a fast talker, and a tireless, deeply committed builder who thinks in systems, products, and user flows. He represents a new generation of Chinese technologists: equally at home coding or in pitch meetings, fluent in both building and branding. He’s also a product of open-source culture, and remains an active contributor whose projects regularly garner attention—and GitHub stars—across developer communities.

With new funding led by US venture capital firm Benchmark, Ji and his team are taking Manus to the wider world, relocating operations outside of China, to Singapore, and actively targeting consumers around the world. The product is built on US-based infrastructure, drawing on technologies like Claude Sonnet, Microsoft Azure, and open-source tools such as Browser Use. It’s a distinctly global setup: an AI agent developed by a Chinese team, powered by Western platforms, and designed for international users. That isn’t incidental; it reflects the more fluid nature of AI entrepreneurship today, where talent, infrastructure, and ambition move across borders just as quickly as the technology itself.

For Ji, the goal isn’t just building a global company—it’s building a legacy. “I hope Manus is the last product I’ll ever build,” Ji says. “Because if I ever have another wild idea—[I’ll just] leave it to Manus!”

How Trump’s policies are affecting early-career scientists—in their own words

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

Every year MIT Technology Review celebrates accomplished young scientists, entrepreneurs, and inventors from around the world in our Innovators Under 35 list. We’ve just published the 2025 edition. This year, though, the context is pointedly different: The US scientific community finds itself in an unprecedented position, with the very foundation of its work under attack.

Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities individually and academia more broadly, and made substantial funding cuts to the country’s science and technology infrastructure. It has also upended longstanding rights and norms related to free speech, civil rights, and immigration—all of which further affects the overall environment for research and innovation in science and technology. 

We wanted to understand how these changes are affecting the careers and work of our most recent classes of innovators. The US government is the largest source of research funding at US colleges and universities, and many of our honorees are new professors and current or recent graduate or PhD students, while others work with government-funded entities in other ways. Meanwhile, about 16% of those in US graduate programs are international students. 

We sent surveys to the six most recent cohorts, which include 210 people. We asked people about both positive and negative impacts of the administration’s new policies and invited them to tell us more in an optional interview. Thirty-seven completed our survey, and we spoke with 14 of them in follow-up calls. Most respondents are academic researchers (about two-thirds) and are based in the US (81%); 11 work in the private sector (six of whom are entrepreneurs). Their responses provide a glimpse into the complexities of building their labs, companies, and careers in today’s political climate. 

Twenty-six people told us that their work has been affected by the Trump administration’s changes; only one of them described those effects as “mostly positive.” The other 25 reported primarily negative effects. While a few agreed to be named in this story, most asked to be identified only by their job titles and general areas of work, or wished to remain anonymous, for fear of retaliation. “I would not want to flag the ire of the US government,” one interviewee told us. 

Across interviews and surveys, certain themes appeared repeatedly: the loss of jobs, funding, or opportunities; restrictions on speech and research topics; and limits on who can carry out that research. These shifts have left many respondents deeply concerned about the “long-term implications in IP generation, new scientists, and spinout companies in the US,” as one respondent put it. 

One of the things we heard most consistently is that the uncertainty of the current moment is pushing people to take a more risk-averse approach to their scientific work—either by selecting projects that require fewer resources or that seem more in line with the administration’s priorities, or by erring on the side of hiring fewer people. “We’re not thinking so much about building and enabling … we’re thinking about surviving,” said one respondent. 

Ultimately, many are worried that all the lost opportunities will result in less innovation overall—and caution that it will take time to grasp the full impact. 

“We’re not going to feel it right now, but in like two to three years from now, you will feel it,” said one entrepreneur with a PhD who started his company directly from his area of study. “There are just going to be fewer people that should have been inventing things.”

The money: “Folks are definitely feeling the pressure”

The most immediate impact has been financial. Already, the Trump administration has pulled back support for many areas of science—ending more than a thousand awards by the National Institutes of Health and over 100 grants for climate-related projects by the National Science Foundation. The rate of new awards granted by both agencies has slowed, and the NSF has cut the number of graduate fellowships it’s funding by half for this school year. 

The administration has also cut or threatened to cut funding from a growing number of universities, including Harvard, Columbia, Brown, and UCLA, for supposedly not doing enough to combat antisemitism.

As a result, our honorees said that finding funding to support their work has gotten much harder—and it was already a big challenge before. 

A biochemist at a public university told us she’d lost a major NIH grant. Since it was terminated earlier this year, she’s been spending less time in the lab and more on fundraising. 

Others described uncertainty about the status of grants from a wide range of agencies, including NSF, the Advanced Research Projects Agency for Health, the Department of Energy, and the Centers for Disease Control and Prevention, which collectively could pay out more than $44 million to the researchers we’ve recognized. Several had waited months for news on an application’s status or updates on when funds they had already won would be disbursed. One AI researcher who studies climate-related issues is concerned that her multiyear grant may not be renewed, even though renewal would have been “fairly standard” in the past.

Two individuals lamented the cancellation of 24 awards in May by the DOE’s Office of Clean Energy Demonstrations, including grants for carbon capture projects and a clean cement plant. One said the decision had “severely disrupted the funding environment for climate-tech startups” by creating “widespread uncertainty,” “undermining investor confidence,” and “complicating strategic planning.” 

Climate research and technologies have been a favorite target of the Trump administration: The recently passed tax and spending bill put stricter timelines in place that make it harder for wind and solar installations to qualify for tax credits via the Inflation Reduction Act. Already, at least 35 major commercial climate-tech projects have been canceled or downsized this year. 

In response to a detailed list of questions, a DOE spokesperson said, “Secretary [Chris] Wright and President Trump have made it clear that unleashing American scientific innovation is a top priority.” They pointed to “robust investments in science” in the president’s proposed budget and the spending bill and cited special areas of focus “to maintain America’s global competitiveness,” including nuclear fusion, high-performance computing, quantum computing, and AI. 

Other respondents cited tighter budgets brought on by a change in how the government calculates indirect costs, which are funds included in research grants to cover equipment, institutional overhead, and in some cases graduate students’ salaries. In February, the NIH instituted a 15% cap on indirect costs—which ran closer to 28% of the research funds the NIH awarded in 2023. The DOE, the Department of Defense, and NSF all soon proposed similar caps. This collective action has sparked lawsuits, and indirect costs remain in limbo. (MIT, which owns MIT Technology Review, is involved in several of these lawsuits; MIT Technology Review is editorially independent from the university.)

Looking ahead, an academic at a public university in Texas, where the money granted for indirect costs funds student salaries, said he plans to hire fewer students for his own lab. “It’s very sad that I cannot promise [positions] at this point because of this,” he told us, adding that the cap could also affect the competitiveness of public universities in Texas, since schools elsewhere may fund their student researchers differently. 

At the same time, two people with funding through the Defense Department—which could see a surge of investment under the president’s proposed budget—said their projects were moving forward as planned. A biomedical engineer at a public university in the Midwest expressed excitement about what he perceives as a fresh surge of federal interest in industrial and defense applications of synthetic biology. Still, he acknowledged colleagues working on different projects don’t feel as optimistic: “Folks are definitely feeling the pressure.”

Many who are affected by cuts or delays are now looking for new funding sources in a bid to become less reliant on the federal government. Eleven people said they are pursuing or plan to pursue philanthropic and foundation funding or to seek out industry support. However, the amount of private funding available can’t begin to make up the difference in federal funds lost, and investors often focus more on low-risk, short-term applications than on open scientific questions. 

The NIH responded to a detailed list of questions with a statement pointing to unspecified investments in early-career researchers. “Recent updates to our priorities and processes are designed to broaden scientific opportunity rather than restrict it, ensuring that taxpayer-funded research is rigorous, reproducible, and relevant to all Americans,” it reads. The NSF declined a request for comment from MIT Technology Review.

Further complicating this financial picture are tariffs—some of which are already in effect, and many more of which have been threatened. Nine people who responded to our survey said their work is already being affected by these taxes imposed on goods imported into the US. For some scientists, this has meant higher operating costs for their labs: An AI researcher said tariffs are making computational equipment more expensive, while the Texas academic said the cost of buying microscopes from a German firm had gone up by thousands of dollars since he first budgeted for them. (Neither the White House press office nor the White House Office of Science and Technology Policy responded to requests for comment.) 

One cleantech entrepreneur saw a positive impact on his business as more US companies reevaluated their supply chains and sought to incorporate more domestic suppliers. The entrepreneur’s firm, which is based in the US, has seen more interest in its services from potential customers seeking “tariff-proof vendors.”

“Everybody is proactive on tariffs and we’re one of these solutions—we’re made in America,” he said. 

Another person, who works for a European firm, is factoring potential tariffs into decisions about where to open new production facilities. Though the Trump administration has said the taxes are meant to reinvigorate US manufacturing, she’s now less inclined to build out a significant presence in the US because, she said, tariffs may drive up the costs of importing raw materials that are required to make the company’s product. 

What’s more, financial backers have encouraged her company to stay rooted abroad because of the potential impact of tariffs for US-based facilities: “People who invest worldwide—they are saying it’s reassuring for them right now to consider investing in Europe,” she said.

The climate of fear: “It will impact the entire university if there is retaliation” 

Innovators working in both academia and the private sector described new concerns about speech and the politicization of science. Many have changed how they describe their work in order to better align with the administration’s priorities—fearing funding cuts, job terminations, immigration action, and other potential retaliation. 

This is particularly true for those who work at universities. The Trump administration has reached deals with some institutions, including Columbia and Brown, that would restore part of the funding it slashed—but only after the universities agreed to pay hefty fines and abide by terms that, critics say, hand over an unprecedented level of oversight to administration officials. 

Some respondents had received guidance from program managers at their funding agencies, from their universities, or from investors about what they could or couldn’t say; others had not received any official guidance but made personal decisions about what to say and share publicly based on recent news of grant cancellations.

Both on and off campus, there is substantial pressure on diversity, equity, and inclusion (DEI) initiatives, which have been hit particularly hard as the administration seeks to eliminate what it called “illegal and immoral discrimination programs” in one of the first executive orders of President Trump’s second term.  

One respondent, whose work focuses on fighting child sexual abuse materials, recalled rewriting a grant abstract “3x to remove words banned” by Senator Ted Cruz of Texas, an administration ally; back in February, Cruz identified 3,400 NSF grants as “woke DEI” research advancing “neo-Marxist class warfare propaganda.” (His list includes grants to research self-driving cars and solar eclipses. His office did not respond to a request for comment.) 

Many other researchers we spoke with are also taking steps to avoid being put in the DEI bucket. A technologist at a Big Tech firm whose work used to include efforts to provide more opportunities for marginalized communities to get into computing has stopped talking about those recruiting efforts. One biologist described hearing that grant applications for the NIH now have to avoid words like “cell type diversity” for “DEI reasons”—no matter that “cell type diversity” is, she said, a common and “neutral” scientific term in microbiology. (In its statement, the NIH said: “To be clear, no scientific terms are banned, and commonly used terms like ‘cell type diversity’ are fully acceptable in applications and research proposals.”) 

Plenty of other research has also gotten caught up in the storm.

One person who works in climate technology said that she now talks about “critical minerals,” “sovereignty,” and “energy independence” or “dominance” rather than “climate” or “industrial decarbonization.” (Trump’s Energy Department has boosted investment in critical minerals, pledging nearly $1 billion to support related projects.) Another individual working in AI said she has been instructed to talk less about “regulation,” “safety,” or “ethics” as they relate to her work. One survey respondent described the language shift as “definitely more red-themed.”

Some said that shifts in language won’t change the substance of their work, but others feared they will indeed affect the research itself. 

Emma Pierson, an assistant professor of computer science at the University of California, Berkeley, worried that AI companies may kowtow to the administration, which could in turn “influence model development.” While she noted that this fear is speculative, the Trump administration’s AI Action Plan contains language that directs the federal government to purchase large language models that generate “truthful responses” (by the administration’s definition), with a goal of “preventing woke AI in the federal government.” 

And one biomedical researcher fears that the administration’s effective ban on DEI will force an end to outreach “favoring any one community” and hurt efforts to improve the representation of women and people of color in clinical trials. The NIH and the Food and Drug Administration had been working for years to address the historic underrepresentation of these groups through approaches including specific funding opportunities to address health disparities; many of these efforts have recently been cut.

Respondents from both academia and the private sector told us they’re aware of the high stakes of speaking out. 

“As an academic, we have to be very careful about how we voice our personal opinion because it will impact the entire university if there is retaliation,” one engineering professor told us. 

“I don’t want to be a target,” said one cleantech entrepreneur, who worries not only about reprisals from the current administration but also about potential blowback from Democrats if he cooperates with it. 

“I’m not a Trumper!” he said. “I’m just trying not to get fined by the EPA.” 

The people: “The adversarial attitude against immigrants … is posing a brain drain”

Immigrants are crucial to American science, but what one respondent called a broad “persecution of immigrants,” and an increasing climate of racism and xenophobia, are matters of growing concern. 

Some people we spoke with feel vulnerable, particularly those who are immigrants themselves. The Trump administration has revoked 6,000 international student visas (causing federal judges to intervene in some cases) and threatened to “aggressively” revoke the visas of Chinese students in particular. In recent months, the Justice Department has prioritized efforts to denaturalize certain citizens, while similar efforts to revoke green cards granted decades ago were shut down by court order. One entrepreneur who holds a green card told us, “I find myself definitely being more cognizant of what I’m saying in public and certainly try to stay away from anything political as a result of what’s going on, not just in science but in the rest of the administration’s policies.” 

On top of all this, federal immigration raids and other enforcement actions—authorities have turned away foreign academics upon arrival to the US and detained others with valid academic visas, sometimes because of their support for Palestine—have created a broad climate of fear.  

Four respondents said they were worried about their own immigration status, while 16 expressed concerns about their ability to attract or retain talent, including international students. More than a million international students studied in the US last year, with nearly half of those enrolling in graduate programs, according to the Institute of International Education.

“The adversarial attitude against immigrants, especially those from politically sensitive countries, is posing a brain drain,” an AI researcher at a large public university on the West Coast told us. 

This attack on immigration in the US can be compounded by state-level restrictions. Texas and Florida both restrict international collaborations with and recruitment of scientists from countries including China, even though researchers told us that international collaborations could help mitigate the impacts of decreased domestic funding. “I cannot collaborate at this point because there’s too many restrictions and Texas also can limit us from visiting some countries,” the Texas academic said. “We cannot share results. We cannot visit other institutions … and we cannot give talks.”

All this is leading to more interest in positions outside the United States. One entrepreneur, whose business is multinational, said that their company has received a much higher share of applications from US-based candidates to openings in Europe than it did a year ago, despite the lower salaries offered there. 

“It is becoming easier to hire good people in the UK,” confirmed Karen Sarkisyan, a synthetic biologist based in London. 

At least one US-based respondent, an academic in climate technology, accepted a tenured position in the United Kingdom. Another said that she was looking for positions in other countries, despite her current job security and “very good” salary. “I can tell more layoffs are coming, and the work I do is massively devalued. I can’t stand to be in a country that treats their scientists and researchers and educated people like this,” she told us. 

Some professors reported in our survey and interviews that their current students are less interested in pursuing academic careers because graduate and PhD students are losing offers and opportunities as a result of grant cancellations. So even as the number of international students dwindles, there may also be “shortages in domestic grad students,” one mechanical engineer at a public university said, and “research will fall behind.”  

Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

In the end, this will affect not just academic research but also private-sector innovation. One biomedical entrepreneur told us that academic collaborators frequently help his company generate lots of ideas: “We hope that some of them will pan out and become very compelling areas for us to invest in.” Particularly for small startups without large research budgets, having fewer academics to work with will mean that “we just invest less, we just have fewer options to innovate,” he said. “The level of risk that industry is willing to take is generally lower than academia, and you can’t really bridge that gap.” 

Despite it all, a number of researchers and entrepreneurs who generally expressed frustration about the current political climate said they still consider the US the best place to do science. 

Pierson, the AI researcher at Berkeley, described staying committed to her research into social inequities despite the political backlash: “I’m an optimist. I do believe this will pass, and these problems are not going to pass unless we work on them.” 

And a biotech entrepreneur pointed out that US-based scientists can still command more resources than those in most other countries. “I think the US still has so much going for it. Like, there isn’t a comparable place to be if you’re trying to be on the forefront of innovation—trying to build a company or find opportunities,” he said.

Several academics and founders who came to the US to pursue scientific careers spoke about still being drawn to America’s spirit of invention and the chance to advance on their own merits. “For me, I’ve always been like, the American dream is something real,” said one. They said they’re holding fast to those ideals—for now.

Why basic science deserves our boldest investment

In December 1947, three physicists at Bell Telephone Laboratories—John Bardeen, William Shockley, and Walter Brattain—built a compact electronic device using thin gold wires and a piece of germanium, a material known as a semiconductor. Their invention, later named the transistor (for which they were awarded the Nobel Prize in 1956), could amplify and switch electrical signals, marking a dramatic departure from the bulky and fragile vacuum tubes that had powered electronics until then.

Its inventors weren’t chasing a specific product. They were asking fundamental questions about how electrons behave in semiconductors, experimenting with surface states and electron mobility in germanium crystals. Over months of trial and refinement, they combined theoretical insights from quantum mechanics with hands-on experimentation in solid-state physics—work many might have dismissed as too basic, academic, or unprofitable.

Their efforts culminated in a moment that now marks the dawn of the information age. Transistors don’t usually get the credit they deserve, yet they are the bedrock of every smartphone, computer, satellite, MRI scanner, GPS system, and artificial-intelligence platform we use today. With their ability to modulate (and route) electrical current at astonishing speeds, transistors make modern and future computing and electronics possible.

This breakthrough did not emerge from a business plan or product pitch. It arose from open-ended, curiosity-driven research and enabling development, supported by an institution that saw value in exploring the unknown. It took years of trial and error, collaborations across disciplines, and a deep belief that understanding nature—even without a guaranteed payoff—was worth the effort.

After the first successful demonstration in late 1947, the invention of the transistor remained confidential while Bell Labs filed patent applications and continued development. It was publicly announced at a press conference on June 30, 1948, in New York City. The scientific explanation followed in a seminal paper published in the journal Physical Review.

How do they work? At their core, transistors are made of semiconductors—materials like germanium and, later, silicon—that can either conduct or resist electricity depending on subtle manipulations of their structure and charge. In a typical transistor, a small voltage applied to one part of the device (the gate) either allows or blocks the electric current flowing through another part (the channel). It’s this simple control mechanism, scaled up billions of times, that lets your phone run apps, your laptop render images, and your search engine return answers in milliseconds.
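To make that control mechanism concrete, here is a toy model in Python. It is an illustration only: real transistors are analog devices with far subtler behavior, and the threshold voltage below is an arbitrary assumed value, not a real device parameter.

# Toy model: a transistor as a voltage-controlled switch.
THRESHOLD_V = 0.7  # assumed switching threshold, for illustration only

def channel_conducts(gate_voltage: float) -> bool:
    # The gate voltage either allows or blocks current in the channel.
    return gate_voltage > THRESHOLD_V

def nand(gate_a: float, gate_b: float) -> bool:
    # Two switches in series pull the output low only when both are on.
    # NAND is a universal gate: any digital logic can be built from it.
    return not (channel_conducts(gate_a) and channel_conducts(gate_b))

print(nand(1.0, 1.0))  # False: both switches conduct, output pulled low
print(nand(1.0, 0.0))  # True: one switch is off, output stays high

Scaled up to billions of such switches, compositions of gates like this one become adders, memories, and processors.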

Though early devices used germanium, researchers soon discovered that silicon—more thermally stable, moisture resistant, and far more abundant—was better suited for industrial production. By the late 1950s, the transition to silicon was underway, making possible the development of integrated circuits and, eventually, the microprocessors that power today’s digital world.

A modern chip the size of a human fingernail now contains tens of billions of silicon transistors, each measured in nanometers—smaller than many viruses. These tiny switches turn on and off billions of times per second, controlling the flow of electrical signals involved in computation, data storage, audio and visual processing, and artificial intelligence. They form the fundamental infrastructure behind nearly every digital device in use today. 

The global semiconductor industry is now worth over half a trillion dollars. Devices that began as experimental prototypes in a physics lab now underpin economies, national security, health care, education, and global communication. But the transistor’s origin story carries a deeper lesson—one we risk forgetting.

Much of the fundamental understanding that moved transistor technology forward came from federally funded university research. Nearly a quarter of transistor research at Bell Labs in the 1950s was supported by the federal government. Much of the rest was subsidized by revenue from AT&T’s monopoly on the US phone system, which flowed into industrial R&D.

Inspired by the 1945 report “Science: The Endless Frontier,” authored by Vannevar Bush at the request of President Truman, the US government began a long-standing tradition of investing in basic research. These investments have paid steady dividends across many scientific domains—from nuclear energy to lasers, and from medical technologies to artificial intelligence. Trained in fundamental research, generations of students have emerged from university labs with the knowledge and skills necessary to push existing technology beyond its known capabilities.

And yet, funding for basic science—and for the education of those who can pursue it—is under increasing pressure. The White House’s newly proposed federal budget includes deep cuts to the Department of Energy and the National Science Foundation (though Congress may deviate from those recommendations). Already, the National Institutes of Health has canceled or paused more than $1.9 billion in grants, while NSF STEM education programs suffered more than $700 million in terminations.

These losses have forced some universities to freeze graduate student admissions, cancel internships, and scale back summer research opportunities—making it harder for young people to pursue scientific and engineering careers. In an age dominated by short-term metrics and rapid returns, it can be difficult to justify research whose applications may not materialize for decades. But those are precisely the kinds of efforts we must support if we want to secure our technological future.

Consider John McCarthy, the mathematician and computer scientist who coined the term “artificial intelligence.” In the late 1950s, while at MIT, he led one of the first AI groups and developed Lisp, a programming language still used today in scientific computing and AI applications. At the time, practical AI seemed far off. But that early foundational work laid the groundwork for today’s AI-driven world.

After the initial enthusiasm of the 1950s through the ’70s, interest in neural networks—a leading AI architecture today inspired by the human brain—declined during the so-called “AI winters” of the late 1990s and early 2000s. Limited data, inadequate computational power, and theoretical gaps made it hard for the field to progress. Still, researchers like Geoffrey Hinton and John Hopfield pressed on. Hopfield, now a 2024 Nobel laureate in physics, first introduced his groundbreaking neural network model in 1982, in a paper published in Proceedings of the National Academy of Sciences of the USA. His work revealed the deep connections between collective computation and the behavior of disordered magnetic systems. Together with the work of colleagues including Hinton, who was awarded the Nobel the same year, this foundational research seeded the explosion of deep-learning technologies we see today.
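To give a flavor of the idea, here is a minimal Hopfield-style associative memory in Python—a sketch of the general concept, not the 1982 paper’s exact formulation. Patterns are stored in symmetric connection weights via a Hebbian rule, and recall lets units flip one at a time until the network settles into a stored state:

import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    # Hebbian learning: strengthen connections between co-active units.
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)  # no self-connections
    return weights / len(patterns)

def recall(weights: np.ndarray, state: np.ndarray, steps: int = 200) -> np.ndarray:
    # Asynchronous updates: each flip moves the network "downhill" in
    # energy, toward the nearest stored pattern.
    s = state.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])  # one pattern of +/-1 units
weights = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                # corrupt two units
print(recall(weights, noisy))  # converges back to the stored pattern

The "downhill in energy" dynamic is exactly the connection to disordered magnetic systems mentioned above: each unit behaves like a spin settling into a low-energy configuration.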

One reason neural networks now flourish is the graphics processing unit, or GPU—originally designed for gaming but now essential for the matrix-heavy operations of AI. These chips themselves rely on decades of fundamental research in materials science and solid-state physics: high-k dielectric materials, strained silicon alloys, and other advances that make it possible to produce ever more efficient transistors. We are now entering another frontier, exploring memristors, phase-change and 2D materials, and spintronic devices.

If you’re reading this on a phone or laptop, you’re holding the result of a gamble someone once made on curiosity. That same curiosity is still alive in university and research labs today—in often unglamorous, sometimes obscure work quietly laying the groundwork for revolutions that will infiltrate some of the most essential aspects of our lives 50 years from now. At the leading physics journal where I am editor, my collaborators and I see the painstaking work and dedication behind every paper we handle. Our modern economy—with giants like Nvidia, Microsoft, Apple, Amazon, and Alphabet—would be unimaginable without the humble transistor and the passion for knowledge fueling the relentless curiosity of scientists like those who made it possible.

The next transistor may not look like a switch at all. It might emerge from new kinds of materials (such as quantum, hybrid organic-inorganic, or hierarchical types) or from tools we haven’t yet imagined. But it will need the same ingredients: solid fundamental knowledge, resources, and freedom to pursue open questions driven by curiosity, collaboration—and most importantly, financial support from someone who believes it’s worth the risk.

Julia R. Greer is a materials scientist at the California Institute of Technology. She is a judge for MIT Technology Review’s Innovators Under 35 and a former honoree (in 2008).

Putin says organ transplants could grant immortality. Not quite.

This week I’m writing from Manchester, where I’ve been attending a conference on aging. Wednesday was full of talks and presentations by scientists who are trying to understand the nitty-gritty of aging—all the way down to the molecular level. Once we can understand the complex biology of aging, we should be able to slow or prevent the onset of age-related diseases, they hope.

Then my editor forwarded me a video of the leaders of Russia and China talking about immortality. “These days at 70 years old you are still a child,” China’s Xi Jinping, 72, was translated as saying, according to footage livestreamed by CCTV to multiple media outlets.

“With the developments of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Russia’s Vladimir Putin, also 72, is reported to have replied.

Russian President Vladimir Putin, Chinese President Xi Jinping, and North Korean leader Kim Jong Un walk side by side. SERGEI BOBYLEV, SPUTNIK, KREMLIN POOL PHOTO VIA AP

There’s a striking contrast between that radical vision and the incremental longevity science presented at the meeting. Repeated rounds of organ transplantation surgery aren’t likely to help anyone radically extend their lifespan anytime soon.

First, back to Putin’s proposal: the idea of continually replacing aged organs to stay young. It’s a simplistic way to think about aging. After all, aging is so complicated that researchers can’t agree on what causes it, why it occurs, or even how to define it, let alone “treat” it.

Having said that, there may be some merit to the idea of repairing worn-out body parts with biological or synthetic replacements. Replacement therapies—including bioengineered organs—are being developed by multiple research teams. Some have already been tested in people. This week, let’s take a look at the idea of replacement therapies.

No one fully understands why our organs start to fail with age. On the face of it, replacing them seems like a good idea. After all, we already know how to do organ transplants. They’ve been a part of medicine since the 1950s and have been used to save hundreds of thousands of lives in the US alone.

And replacing old organs with young ones might have more broadly beneficial effects. When a young mouse is stitched to an old one, the older mouse benefits from the arrangement, and its health seems to improve.

The problem is that we don’t really know why. We don’t know what it is about young body tissues that makes them health-promoting. We don’t know how long these effects might last in a person. We don’t know how different organ transplants will compare, either. Might a young heart be more beneficial than a young liver? No one knows.

And that’s before you consider the practicalities of organ transplantation. There is already a shortage of donor organs—thousands of people die on waiting lists. Transplantation requires major surgery and, typically, a lifetime of prescription drugs that damp down the immune system, leaving a person more susceptible to certain infections and diseases.

So the idea of repeated organ transplantations shouldn’t really be a particularly appealing one. “I don’t think that’s going to happen anytime soon,” says Jesse Poganik, who studies aging at Brigham and Women’s Hospital in Boston and is also in Manchester for the meeting.

Poganik has been collaborating with transplant surgeons in his own research. “The surgeries are good, but they’re not simple,” he tells me. And they come with real risks. His own 24-year-old cousin developed a form of cancer after a liver and heart transplant. She died a few weeks ago, he says.

So when it comes to replacing worn-out organs, scientists are looking for both biological and synthetic alternatives.  

We’ve been replacing body parts for centuries. Wooden toes were used as far back as the 15th century. Joint replacements have been around for more than a hundred years. And major innovations over the last 70 years have given us devices like pacemakers, hearing aids, brain implants, and artificial hearts.

Scientists are exploring other ways to make tissues and organs, too. There are different approaches here, but they include everything from injecting stem cells to seeding “scaffolds” with cells in a lab.

In 1999, researchers used volunteers’ own cells to seed bladder-shaped collagen scaffolds. The resulting bioengineered bladders went on to be transplanted into seven people in an initial trial.

Now scientists are working on more complicated organs. Jean Hébert, a program manager at the US government’s Advanced Research Projects Agency for Health, has been exploring ways to gradually replace the cells in a person’s brain. The idea is that, eventually, the recipient will end up with a young brain.

Hébert showed my colleague Antonio Regalado how, in his early experiments, he removed parts of mice’s brains and replaced them with embryonic stem cells. That work seems a world away from the biochemical studies being presented at the British Society for Research on Ageing annual meeting in Manchester, where I am now.

On Wednesday, one scientist described how he’d been testing potential longevity drugs on the tiny nematode worm C. elegans. These worms live for only about 15 to 40 days, and his team can perform tens of thousands of experiments with them. About 40% of the drugs that extend lifespan in C. elegans also help mice live longer, he told us.

To me, that’s not an amazing hit rate. And we don’t know how many of those drugs will work in people. Probably less than 40% of that 40%—which would put the overall success rate under 16%.

Other scientists presented work on chemical reactions happening at the cellular level. It was deep, basic science, and my takeaway was that there’s a lot aging researchers still don’t fully understand.

It will take years—if not decades—to get the full picture of aging at the molecular level. And if we rely on a series of experiments in worms, and then mice, and then humans, we’re unlikely to make progress for a really long time. In that context, the idea of replacement therapy feels like a shortcut.

“Replacement is a really exciting avenue because you don’t have to understand the biology of aging as much,” says Sierra Lore, who studies aging at the University of Copenhagen in Denmark and the Buck Institute for Research on Aging in Novato, California.

Lore says she started her research career studying aging at the molecular level, but she soon changed course. She now plans to focus her attention on replacement therapies. “I very quickly realized we’re decades away [from understanding the molecular processes that underlie aging],” she says. “Why don’t we just take what we already know—replacement—and try to understand and apply it better?”

So perhaps Putin’s straightforward approach to delaying aging holds some merit. Whether it will grant him immortality is another matter.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How Trump is helping China extend its massive lead in clean energy 

On a spring day in 1954, Bell Labs researchers showed off the first practical solar panels at a press conference in Murray Hill, New Jersey, using sunlight to spin a toy Ferris wheel before a stunned crowd.

The solar future looked bright. But in the race to commercialize the technology it invented, the US would lose resoundingly. Last year, China exported $40 billion worth of solar panels and modules, while America shipped just $69 million, according to the New York Times. It was a stunning forfeit of a huge technological lead. 

And now the US seems determined to repeat the mistake. In its quest to prop up aging fossil-fuel industries, the Trump administration has slashed federal support for the emerging cleantech sector, handing the nation’s chief economic rival the most generous of gifts: an unobstructed path to locking in its control of emerging energy technologies, and a leg up in inventing the industries of the future.

China’s dominance of solar was no accident. In the late 2000s, the government simply determined that the sector was a national priority. Then it leveraged deep subsidies, targeted policies, and price wars to scale up production, drive product improvements, and slash costs. It’s made similar moves in batteries, electric vehicles, and wind turbines. 

Meanwhile, President Donald Trump has set to work unraveling hard-won clean-energy achievements in the US, snuffing out the gathering momentum to rebuild the nation’s energy sector in cleaner, more sustainable ways.

The tax and spending bill that Trump signed into law in early July wound down the subsidies for solar and wind power contained in the Inflation Reduction Act of 2022. The legislation also cut off federal support for cleantech projects that rely too heavily on Chinese materials—a ham-fisted bid to punish Chinese industries that will instead make many US projects financially unworkable.

Meanwhile, the administration has slashed federal funding for science and attacked the financial foundations of premier research universities, pulling up the roots of future energy innovations and industries.

A driving motivation for many of these policies is the quest to protect the legacy energy industry based on coal, oil, and natural gas, all of which the US is geologically blessed with. But this strategy amounts to the innovator’s dilemma playing out at a national scale—a country clinging to its declining industries rather than investing in the ones that will define the future.

It does not particularly matter whether Trump believes in or cares about climate change. The economic and international security imperatives to invest in modern, sustainable industries are every bit as indisputable as the chemistry of greenhouse gases.

Without sustained industrial policies that reward innovation, American entrepreneurs and investors won’t risk money and time creating new businesses, developing new products, or building first-of-a-kind projects here. Indeed, venture capitalists have told me that numerous US climate-tech companies are already looking overseas, seeking markets where they can count on government support. Some fear that many other companies will fail in the coming months as subsidies disappear, developments stall, and funding flags. 

All of which will help China extend an already massive lead.

The nation has installed nearly three times as many wind turbines as the US, and it generates more than twice as much solar power. It boasts five of the 10 largest EV companies in the world, and the three largest wind turbine manufacturers. China absolutely dominates the battery market, producing the vast majority of the anodes, cathodes, and battery cells that increasingly power the world’s vehicles, grids, and gadgets.

China harnessed the clean-energy transition to clean up its skies, upgrade its domestic industries, create jobs for its citizens, strengthen trade ties, and build new markets in emerging economies. In turn, it’s using those business links to accrue soft power and extend its influence—all while the US turns its back on global institutions.

These widening relationships increasingly insulate China from external pressures, including those threatened by Trump’s go-to tactic: igniting or inflaming trade wars. 

But stiff tariffs and tough talk aren’t what built the world’s largest economy and established the US as the global force in technology for more than a century. What did was deep, sustained federal investment into education, science, and research and development—the very budget items that Trump and his party have been so eager to eliminate. 

Another thing

Earlier this summer, the EPA announced plans to revoke the Obama-era “endangerment finding,” the legal foundation for regulating the nation’s greenhouse-gas pollution. 

The agency’s argument leans heavily on a report that rehashes decades-old climate-denial talking points to assert that rising emissions haven’t produced the harms that scientists expected. It’s a wild, Orwellian plea for you to reject the evidence of your eyes and ears in a summer that saw record heat waves in the Midwest and East and has now blanketed the West in wildfire smoke.

Over the weekend, more than 85 scientists sent a point-by-point, 459-page rebuttal to the federal government, highlighting myriad ways in which the report “is biased, full of errors, and not fit to inform policy making,” as Bob Kopp, a climate scientist at Rutgers, put it on Bluesky.

“The authors reached these flawed conclusions through selective filtering of evidence (‘cherry picking’), overemphasis of uncertainties, misquoting peer-reviewed research, and a general dismissal of the vast majority of decades of peer-reviewed research,” the dozens of reviewers found.

The Trump administration handpicked researchers who would write the report it wanted to support its quarrel with thermometers and justify its preordained decision to rescind the endangerment finding. But it’s legally bound to hear from others as well, notes Karen McKinnon, a climate researcher at the University of California, Los Angeles.

“Luckily, there is time to take action,” McKinnon said in a statement. “Comment on the report, and contact your representatives to let them know we need to take action to bring back the tolerable summers of years past.”

You can read the full report here, or NPR’s take here. And be sure to read Casey Crownhart’s earlier piece in The Spark on the endangerment finding.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

Earlier this summer, I walked through the glassy lobby of a fancy office in London, into an elevator, and then along a corridor into a clean, carpeted room. Natural light flooded in through its windows, and a large pair of umbrella-like lighting rigs made the room even brighter. I tried not to squint as I took my place in front of a tripod equipped with a large camera and a laptop displaying an autocue. I took a deep breath and started to read out the script.

I’m not a newsreader or an actor auditioning for a movie—I was visiting the AI company Synthesia to give it what it needed to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me. 

When Synthesia launched in 2017, its primary purpose was to match AI versions of real human faces—for example, the former footballer David Beckham—with dubbed voices speaking in different languages. A few years later, in 2020, it started giving the companies that signed up for its services the opportunity to make professional-level presentation videos starring either AI versions of staff members or consenting actors. But the technology wasn’t perfect. The avatars’ body movements could be jerky and unnatural, their accents sometimes slipped, and the emotions indicated by their voices didn’t always match their facial expressions.

Now Synthesia’s avatars have been updated with more natural mannerisms and movements, as well as expressive voices that better preserve the speaker’s accent—making them appear more humanlike than ever before. For Synthesia’s corporate clients, these avatars will make for slicker presenters of financial results, internal communications, or staff training videos.

I found the video demonstration of my avatar as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was. It shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?

The creation process

When my former colleague Melissa visited Synthesia’s London studio to create an avatar of herself last year, she had to go through a long process of calibrating the system, reading out a script in different emotional states, and mouthing the sounds needed to help her avatar form vowels and consonants. As I stand in the brightly lit room 15 months later, I’m relieved to hear that the creation process has been significantly streamlined. Josh Baker-Mendoza, Synthesia’s technical supervisor, encourages me to gesture and move my hands as I would during natural conversation, while simultaneously warning me not to move too much. I duly repeat an overly glowing script that’s designed to encourage me to speak emotively and enthusiastically. The result is a bit as if Steve Jobs had been resurrected as a blond British woman with a low, monotonous voice.

It also has the unfortunate effect of making me sound like an employee of Synthesia. “I am so thrilled to be with you today to show off what we’ve been working on. We are on the edge of innovation, and the possibilities are endless,” I parrot eagerly, trying to sound lively rather than manic. “So get ready to be part of something that will make you go, ‘Wow!’ This opportunity isn’t just big—it’s monumental.”

Just an hour later, the team has all the footage it needs. A couple of weeks later I receive two avatars of myself: one powered by the previous Express-1 model and the other made with the latest Express-2 technology. The latter, Synthesia claims, makes its synthetic humans more lifelike and true to the people they’re modeled on, complete with more expressive hand gestures, facial movements, and speech. You can see the results for yourself below. 

[Video courtesy of Synthesia]

Last year, Melissa found that her Express-1-powered avatar failed to match her transatlantic accent. Its range of emotions was also limited—when she asked her avatar to read a script angrily, it sounded more whiny than furious. In the months since, Synthesia has improved Express-1, but the version of my avatar made with the same technology blinks furiously and still struggles to synchronize body movements with speech.

By way of contrast, I’m struck by just how much my new Express-2 avatar looks like me: Its facial features mirror my own perfectly. Its voice is spookily accurate too, and although it gesticulates more than I do, its hand movements generally marry up with what I’m saying. 

But the tiny telltale signs of AI generation are still there if you know where to look. The palms of my hands are bright pink and as smooth as putty. Strands of hair hang stiffly around my shoulders instead of moving with me. Its eyes stare glassily ahead, rarely blinking. And although the voice is unmistakably mine, there’s something slightly off about my digital clone’s intonations and speech patterns. “This is great!” my avatar randomly declares, before slipping back into a saner register.

Anna Eiserbeck, a postdoctoral psychology researcher at the Humboldt University of Berlin who has studied how humans react to perceived deepfake faces, says she isn’t sure she’d have been able to identify my avatar as a deepfake at first glance.

But she would eventually have noticed something amiss. It’s not just the small details that give it away—my oddly static earring, the way my body sometimes moves in small, abrupt jerks. It’s something that runs much deeper, she explains.

“Something seemed a bit empty. I know there’s no actual emotion behind it—it’s not a conscious being. It does not feel anything,” she says. Watching the video gave her “this kind of uncanny feeling.”

My digital clone, and Eiserbeck’s reaction to it, make me wonder how realistic these avatars really need to be. 

I realize that part of the reason I feel disconcerted by my avatar is that it behaves in a way I rarely have to. Its oddly upbeat register is completely at odds with how I normally speak; I’m a die-hard cynical Brit who finds it difficult to inject enthusiasm into my voice even when I’m genuinely thrilled or excited. It’s just the way I am. Plus, watching the videos on a loop makes me question if I really do wave my hands about that way, or move my mouth in such a weird manner. If you thought being confronted with your own face on a Zoom call was humbling, wait until you’re staring at a whole avatar of yourself. 

When Facebook was first taking off in the UK almost 20 years ago, my friends and I thought illicitly logging into each other’s accounts and posting the most outrageous or rage-inducing status updates imaginable was the height of comedy. I wonder if the equivalent will soon be getting someone else’s avatar to say something truly embarrassing: expressing support for a disgraced politician or (in my case) admitting to liking Ed Sheeran’s music. 

Express-2 remodels every person it’s presented with into a polished professional speaker with the body language of a hyperactive hype man. And while this makes perfect sense for a company focused on making glossy business videos, watching my avatar doesn’t feel like watching me at all. It feels like something else entirely.

How it works

The real technical challenge these days has less to do with creating avatars that match our appearance than with getting them to replicate our behavior, says Björn Schuller, a professor of artificial intelligence at Imperial College London. “There’s a lot to consider to get right; you have to have the right micro gesture, the right intonation, the sound of voice and the right word,” he says. “I don’t want an AI [avatar] to frown at the wrong moment—that could send an entirely different message.”

To achieve an improved level of realism, Synthesia developed a number of new audio and video AI models. The team created a voice cloning model to preserve the human speaker’s accent, intonation, and expressiveness—unlike other voice models, which can flatten speakers’ distinctive accents into generically American-sounding voices.

When a user uploads a script to Express-1, its system analyzes the words to infer the correct tone to use. That information is then fed into a diffusion model, which renders the avatar’s facial expressions and movements to match the speech. 

Alongside the voice model, Express-2 uses three other models to create and animate the avatars. The first generates an avatar’s gestures to accompany the speech fed into it by the Express-Voice model. A second evaluates how closely the input audio aligns with the multiple versions of the corresponding generated motion before selecting the best one. Then a final model renders the avatar with that chosen motion. 
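The description above maps onto a simple generate-score-render loop. Here is a minimal sketch in Python of how such a pipeline might be wired together; every class, function, and parameter name below is a hypothetical stand-in, since Synthesia hasn’t published its implementation, and the stubs only mimic the shape of the real models:

```python
# Hypothetical sketch of the Express-2-style pipeline described above.
# None of these names come from Synthesia; the stubs stand in for
# large models so that the overall data flow is visible.

from dataclasses import dataclass
import random


@dataclass
class Audio:
    script: str  # stands in for cloned speech audio


@dataclass
class Motion:
    seed: int  # stands in for a generated motion sequence


def voice_model(script: str) -> Audio:
    # Stand-in for a voice-cloning model that preserves the speaker's
    # accent, intonation, and expressiveness.
    return Audio(script=script)


def gesture_model(audio: Audio, n_candidates: int = 4) -> list[Motion]:
    # Stand-in for the model that generates several candidate gesture
    # sequences to accompany the speech.
    return [Motion(seed=i) for i in range(n_candidates)]


def alignment_score(audio: Audio, motion: Motion) -> float:
    # Stand-in for the model that evaluates how closely the input audio
    # aligns with each candidate motion. Here: a deterministic dummy score.
    rng = random.Random(motion.seed + len(audio.script))
    return rng.random()


def render(audio: Audio, motion: Motion) -> str:
    # Stand-in for the billion-parameter rendering model that turns the
    # chosen motion and audio into video.
    return f"video(motion={motion.seed}, speech={audio.script!r})"


def make_avatar_video(script: str) -> str:
    audio = voice_model(script)
    candidates = gesture_model(audio)
    best = max(candidates, key=lambda m: alignment_score(audio, m))
    return render(audio, best)


print(make_avatar_video("This opportunity isn't just big, it's monumental."))
```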

This third rendering model is significantly more powerful than its Express-1 predecessor. Whereas the previous model had a few hundred million parameters, Express-2’s rendering model’s parameters number in the billions. This means it takes less time to create the avatar, says Youssef Alami Mejjati, Synthesia’s head of research and development:

“With Express-1, it needed to first see someone expressing emotions to be able to render them. Now, because we’ve trained it on much more diverse data and much larger data sets, with much more compute, it just learns these associations automatically without needing to see them.” 

Narrowing the uncanny valley

Although humanlike AI-generated avatars have been around for years, the recent boom in generative AI is making it easier and more affordable than ever to create lifelike synthetic humans—and they’re already being put to work. Synthesia isn’t alone: AI avatar companies like Yuzu Labs, Creatify, Arcdads, and Vidyard give businesses the tools to quickly generate and edit videos starring either AI actors or artificial versions of members of staff, promising cost-effective ways to make compelling ads that audiences connect with. Similarly, AI-generated clones of livestreamers have exploded in popularity across China in recent years, partly because they can sell products 24/7 without getting tired or needing to be paid.

For now at least, Synthesia is “laser focused” on the corporate sphere. But it’s not ruling out expanding into new sectors such as entertainment or education, says Peter Hill, the company’s chief technical officer. In an apparent step toward this, Synthesia recently partnered with Google to integrate Google’s powerful new generative video model Veo 3 into its platform, allowing users to directly generate and embed clips into Synthesia’s videos. It suggests that in the future, these hyperrealistic artificial humans could take up starring roles in detailed universes with ever-changeable backdrops. 

At present this could, for example, involve using Veo 3 to generate a video of meat-processing machinery, with a Synthesia avatar next to the machines talking about how to use them safely. But future versions of Synthesia’s technology could result in educational videos customizable to an individual’s level of knowledge, says Alex Voica, head of corporate affairs and policy at Synthesia. For example, a video about the evolution of life on Earth could be tweaked for someone with a biology degree or someone with high-school-level knowledge. “It’s going to be such a much more engaging and personalized way of delivering content that I’m really excited about,” he says. 

The next frontier, according to Synthesia, will be avatars that can talk back, “understanding” conversations with users and responding in real time. Think ChatGPT, but with a lifelike digital human attached.

Synthesia has already added an interactive element by letting users click through on-screen questions during quizzes presented by its avatars. But it’s also exploring making them truly interactive: Future users could ask their avatar to pause and expand on a point, or ask it a question. “We really want to make the best learning experience, and that means through video that’s entertaining but also personalized and interactive,” says Alami Mejjati. “This, for me, is the missing part in online learning experiences today. And I know we’re very close to solving that.”

We already know that humans can—and do—form deep emotional bonds with AI systems, even with basic text-based chatbots. Combining agentic technology—which is already capable of navigating the web, coding, and playing video games unsupervised—with a realistic human face could usher in a whole new kind of AI addiction, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab.  

“If you make the system too realistic, people might start forming certain kinds of relationships with these characters,” he says. “We’ve seen many cases where AI companions have influenced dangerous behavior even when they are basically texting. If an avatar had a talking head, it would be even more addictive.”

Schuller agrees that avatars in the near future will be perfectly optimized to adjust their projected levels of emotion and charisma so that their human audiences will stay engaged for as long as possible. “It will be very hard [for humans] to compete with charismatic AI of the future; it’s always present, always has an ear for you, and is always understanding,” he says. “AI will change that human-to-human connection.”

As I pause and replay my Express-2 avatar, I imagine holding conversations with it—this uncanny, permanently upbeat, perpetually available product of pixels and algorithms that looks like me and sounds like me, but fundamentally isn’t me. Virtual Rhiannon has never laughed until she’s cried, or fallen in love, or run a marathon, or watched the sun set in another country. 

But, I concede, she could deliver a damned good presentation about why Ed Sheeran is the greatest musician ever to come out of the UK. And only my closest friends and family would know that it’s not the real me.

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

“Suddenly, I was watching him use ChatGPT,” says Declan, 31, who lives in Los Angeles. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”

Declan was so shocked he didn’t say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist. 

“I became the best patient ever,” he says, “because ChatGPT would be like, ‘Well, do you consider that your way of thinking might be a little too black and white?’ And I would be like, ‘Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I’m sure it was his dream session.”

Among the questions racing through Declan’s mind was, “Is this legal?” When Declan raised the incident with his therapist at the next session—“It was super awkward, like a weird breakup”—the therapist cried. He explained he had felt they’d hit a wall and had begun looking for answers elsewhere. “I was still charged for that session,” Declan says, laughing.

The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.

Suspicious sentiments

Declan is not alone, as I can attest from personal experience. When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email.

On closer inspection, though, her email seemed a little strange. It was in a new font, and the text displayed several AI “tells,” including liberal use of the Americanized em dash (we’re both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.

My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized ChatGPT likely had a hand in drafting the message—which my therapist confirmed when I asked her.

Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed. I also couldn’t entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.

When I took to the internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiqués from their therapists. Many, including Declan, had taken to Reddit to solicit emotional support and advice.

So had Hope, 25, who lives on the east coast of the US, and had direct-messaged her therapist about the death of her dog. She soon received a message back. It would have been consoling and thoughtful—expressing how hard it must be “not having him by your side right now”—were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.”

Hope says she felt “honestly really surprised and confused.” “It was just a very strange feeling,” she says. “Then I started to feel kind of betrayed. … It definitely affected my trust in her.” This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues.”

Hope had believed her therapist to be competent and empathetic, and therefore “never would have suspected her to feel the need to use AI.” Her therapist was apologetic when confronted, and she explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment. 

A disclosure dilemma 

Betrayal or not, there may be some merit to the argument that AI could help therapists better communicate with their clients. A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy. Not only was a panel of 830 participants unable to distinguish between the human and AI responses, but AI responses were rated as conforming better to therapeutic best practice. 

However, when participants suspected responses to have been written by ChatGPT, they ranked them lower. (Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.) 

Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI. The mere suspicion of its use was found to rapidly sour goodwill.

“People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”

In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing in responses generated by GPT-3 with ones drafted by humans. The company found that users tended to rate the AI-generated responses more positively. The revelation that users had unwittingly been experimented on, however, sparked outrage.

The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses. In a Medium post, photographer Brendan Keen said his BetterHelp therapist admitted to using AI in their replies, leading to “an acute sense of betrayal” and persistent worry, despite reassurances, that his data privacy had been breached. He ended the relationship thereafter. 

A BetterHelp spokesperson told us the company “prohibits therapists from disclosing any member’s personal or health information to third-party artificial intelligence, or using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.”

All these examples relate to undisclosed AI usage. Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential. “We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale,” he says. People then receive AI-generated messages with that prior context, rather than assuming their therapist is “trying to be sneaky.”

Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to 2023 research conducted by the American Psychological Association. That context makes the appeal of AI-powered tools obvious. 

But lack of disclosure risks permanently damaging trust. Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated. “But I always thought about the AI Incident whenever I saw her,” she says.

Risking patient privacy

Beyond the transparency issue, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.

“I think these tools might be really valuable for learning,” she says, noting that therapists should continue developing their expertise over the course of their career. “But I think we have to be super careful about patient data.” Morris calls Declan’s experience “alarming.” 

Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context. (HIPAA is a set of US federal regulations that protect people’s sensitive health information.)

“This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI,” she says.

In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool. “I expect some therapists may share this misconception,” she says.

As a relatively open person, Declan says, he wasn’t completely distraught to learn how his therapist was using ChatGPT. “Personally, I am not thinking, ‘Oh, my God, I have deep, dark secrets,’” he says. But it did still feel violating: “I can imagine that if I was suicidal, or on drugs, or cheating on my girlfriend … I wouldn’t want that to be put into ChatGPT.”

When using AI to help with email, “it’s not as simple as removing obvious identifiers such as names and addresses,” says Emami-Naeini. “Sensitive information can often be inferred from seemingly nonsensitive details.”

She adds, “Identifying and rephrasing all potential sensitive data requires time and expertise, which may conflict with the intended convenience of using AI tools. In all cases, therapists should disclose their use of AI to patients and seek consent.” 

A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialized tools to therapists, such as AI-assisted note-taking, training, and transcription services. These companies say they are HIPAA compliant and store data securely using encryption and pseudonymization where necessary. But many therapists are still wary of the privacy implications—particularly of services that necessitate the recording of entire sessions.

“Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data,” says Emami-Naeini.

A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients’ treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people’s experiences of child abuse and addiction problems.

What therapists stand to lose

In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.

Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.

However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.”

“I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.

Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”

Can an AI doppelgänger help me do my job?

Everywhere I look, I see AI clones. On X and LinkedIn, “thought leaders” and influencers offer their followers a chance to ask questions of their digital replicas. OnlyFans creators are having AI models of themselves chat, for a price, with followers. “Virtual human” salespeople in China are reportedly outselling real humans. 

Digital clones—AI models that replicate a specific person—package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention. But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that ‘thinks’ like you do. 

Who are they for? Delphi, a startup that recently raised $16 million from funders including Anthropic and actor/director Olivia Wilde’s venture capital firm, Proximity Ventures, helps famous people create replicas that can speak with their fans in both chat and voice calls. It feels like MasterClass—the platform for instructional seminars led by celebrities—vaulted into the AI age. On its website, Delphi writes that modern leaders “possess potentially life-altering knowledge and wisdom, but their time is limited and access is constrained.”

It has a library of official clones created by famous figures that you can speak with. Arnold Schwarzenegger, for example, told me, “I’m here to cut the crap and help you get stronger and happier,” before informing me cheerily that I’ve now been signed up to receive the Arnold’s Pump Club newsletter. Even if his or other celebrities’ clones fall short of Delphi’s lofty vision of spreading “personalized wisdom at scale,” they at least seem to serve as a funnel to find fans, build mailing lists, or sell supplements.

But what about for the rest of us? Could well-crafted clones serve as our stand-ins? I certainly feel stretched thin at work sometimes, wishing I could be in two places at once, and I bet you do too. I could see a replica popping into a virtual meeting with a PR representative, not to trick them into thinking it’s the real me, but simply to take a brief call on my behalf. A recording of this call might summarize how it went. 

To find out, I tried making a clone. Tavus, a Y Combinator alum that raised $18 million last year, will build a video avatar of you (plans start at $59 per month) that can be coached to reflect your personality and can join video calls. These clones have the “emotional intelligence of humans, with the reach of machines,” according to the company. “Reporter’s assistant” does not appear on the company’s site as an example use case, but it does mention therapists, physician’s assistants, and other roles that could benefit from an AI clone.

For Tavus’s onboarding process, I turned on my camera, read through a script to help it learn my voice (which also acted as a waiver, with me agreeing to lend my likeness to Tavus), and recorded one minute of me just sitting in silence. Within a few hours, my avatar was ready. Upon meeting this digital me, I found it looked and spoke like I do (though I hated its teeth). But faking my appearance was the easy part. Could it learn enough about me and what topics I cover to serve as a stand-in with minimal risk of embarrassing me?

Via a helpful chatbot interface, Tavus walked me through how to craft my clone’s personality, asking what I wanted the replica to do. It then helped me formulate instructions that became its operating manual. I uploaded three dozen of my stories that it could use to reference what I cover. It may have benefited from having more of my content—interviews, reporting notes, and the like—but I would never share that data for a host of reasons, not the least of which being that the other people who appear in it have not consented to their sides of our conversations being used to train an AI replica.

So in the realm of AI—where models learn from entire libraries of data—I didn’t give my clone all that much to learn from, but I was still hopeful it had enough to be useful. 

Alas, conversationally it was a wild card. It acted overly excited about story pitches I would never pursue. It repeated itself, and it kept saying it was checking my schedule to set up a meeting with the real me, which it could not do as I never gave it access to my calendar. It spoke in loops, with no way for the person on the other end to wrap up the conversation. 

These are common early quirks, Tavus’s cofounder Quinn Favret told me. The clones typically rely on Meta’s Llama model, which “often aims to be more helpful than it truly is,” Favret says, and developers building on top of Tavus’s platform are often the ones who set instructions for how the clones finish conversations or access calendars.

For my purposes, it was a bust. To be useful to me, my AI clone would need to show at least some basic instincts for understanding what I cover, and at the very least not creep out whoever’s on the other side of the conversation. My clone fell short.

Such a clone could be helpful in other jobs, though. If you’re an influencer looking for ways to engage with more fans, or a salesperson for whom work is a numbers game, a clone could give you a leg up, and it might just work. You run the risk that your replica could go off the rails or embarrass the real you, but the tradeoffs might be reasonable.

Favret told me some of Tavus’s bigger customers are companies using clones for health-care intake and job interviews. Replicas are also being used in corporate role-play, for practicing sales pitches or having HR-related conversations with employees, for example.

But companies building clones are promising that they will be much more than cold-callers or telemarketing machines. Delphi says its clones will offer “meaningful, personal interactions at infinite scale,” and Tavus says its replicas have “a face, a brain, and memories” that enable “meaningful face-to-face conversations.” Favret also told me a growing number of Tavus’s customers are building clones for mentorship and even decision-making, like AI loan officers who use clones to qualify and filter applicants.

Which is sort of the crux of it. Teaching an AI clone discernment, critical thinking, and taste—never mind the quirks of a specific person—is still the stuff of science fiction. That’s all fine when the person chatting with a clone is in on the bit (most of us know that Schwarzenegger’s replica, for example, will not coach me to be a better athlete).

But as companies polish clones with “human” features and exaggerate their capabilities, I worry that people chasing efficiency will start using their replicas at best for roles that are cringeworthy, and at worst for making decisions they should never be entrusted with. In the end, these models are designed for scale, not fidelity. They can flatter us, amplify us, even sell for us—but they can’t quite become us.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Here’s how we picked this year’s Innovators Under 35

Next week, we’ll publish our 2025 list of Innovators Under 35, highlighting smart and talented people who are working in many areas of emerging technology. This new class features 35 accomplished founders, hardware engineers, roboticists, materials scientists, and others who are already tackling tough problems and making big moves in their careers. All are under the age of 35. 

One is developing a technology to reduce emissions from shipping, while two others are improving fertility treatments and creating new forms of contraception. Another is making it harder for people to maliciously share intimate images online. And quite a few are applying artificial intelligence to their respective fields in novel ways. 

We’ll also soon reveal our 2025 Innovator of the Year, whose technical prowess is helping physicians diagnose and treat critically ill patients more quickly. What’s more (here’s your final hint), our winner even set a world record as a result of this work. 

MIT Technology Review first published a list of Innovators Under 35 in 1999. It’s a grand tradition for us, and we often follow the work of various featured innovators for years, even decades, after they appear on the list. So before the big announcement, I want to take a moment to explain how we select the people we recognize each year. 

Step 1: Call for nominations

Our process begins with a call for nominations, which typically goes out in the final months of the previous year and is open to anyone, anywhere in the world. We encourage people to nominate themselves, which takes just a few minutes. This method helps us discover people doing important work that we might not otherwise encounter. 

This year we had 420 nominations. Two-thirds of our candidates were put forward by someone else and one-third nominated themselves. We received nominations for people located in about 40 countries. Nearly 70% were based in the United States, with the UK, Switzerland, China, and the United Arab Emirates, respectively, having the next-highest concentrations. 

After nominations close, a few editors then spend several weeks reviewing the nominees and selecting semifinalists. During this phase, we look for people who have developed practical solutions to societal issues or made important scientific advances that could translate into new technologies. Their work should have the potential for broad impact—it can’t be niche or incremental. And what’s unique about their approach must be clear. 

Step 2: Semifinalist applications 

This year, we winnowed our initial list of hundreds of nominees to 108 semifinalists. Then we asked those entrants for more information to help us get to know them better and evaluate their work. 

We request three letters of reference and a résumé from each semifinalist, and we ask all of them to answer a few short questions about their work. We also give them the option to share a video or pass along relevant journal articles or other links to help us learn more about what they do.

Step 3: Expert judges weigh in

Next, we bring in dozens of experts to vet the semifinalists. This year, 38 judges evaluated and scored the applications. We match the contenders with judges who work in similar fields whenever possible. At least two judges review each entrant, though most are seen by three. 

All these judges volunteer their time, and some return to help year after year. A few of our longtime judges include materials scientists Yet-Ming Chiang (MIT) and Julia Greer (Caltech), MIT neuroscientist Ed Boyden, and computer scientist Ben Zhao of the University of Chicago. 

John Rogers, a materials scientist and biomedical engineer at Northwestern University, has been a judge for more than a decade (and was featured on our very first Innovators list, in 1999). Here’s what he had to say about why he stays involved: “This award is compelling because it recognizes young people with scientific achievements that are not only of fundamental interest but also of practical significance, at the highest levels.” 

Step 4: Editors make the final calls 

In a final layer of vetting, editors who specialize in covering biotechnology, climate and energy, and artificial intelligence review the semifinalists whom judges scored highly in their respective areas. Staff editors and reporters can also nominate people they’ve come across in their coverage, and we add them to the mix for consideration. 

Last, a small team of senior editors reviews all the semifinalists and the judges’ scores, as well as our own staff’s recommendations, and selects 35 honorees. We aim for a good combination of people from a variety of disciplines working in different regions of the world. And we take a staff vote to pick an Innovator of the Year—someone whose work we particularly admire. 

In the end, it’s impossible to include every deserving individual on our list. But by incorporating both external nominations and outside expertise from our judges, we aim to make the evaluation process as rigorous and open as possible.  

So who made the cut this year? Come back on September 8 to find out.