Hong Kong is targeting Western Big Tech companies in its new ban of a popular protest song

It wasn’t exactly surprising when, on Wednesday May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet. The trial, in which no one represented the defense, was the culmination of a years-long battle over the song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city. But it remains an open question how exactly Western Big Tech companies will respond. Even as the injunction is narrowly designed to make it easier for them to comply, the companies may still be seen as aiding authoritarian control and obstructing internet freedom if they do so.  

Google, Apple, Meta, Spotify, and more have spent the last several years largely refusing to cooperate with previous efforts by the Hong Kong government to prevent the spread of the song, which the government has claimed is a threat to national security. But the government has also hesitated to leverage criminal law to force them to comply with requests for removal of content, which could risk international uproar and have a negative effect on the city’s economy. 

Now, the new ruling seemingly finds a third option: By providing the platforms with a civil injunction that doesn’t invoke criminal prosecution—which is similar to how copyright violations are enforced—the platforms can theoretically face less reputational blowback when they comply with the court order.

“If you look closely at the judgment, it’s basically tailor-made for the tech companies at stake,” says Chung Ching Kwong, a senior analyst at the Inter-Parliamentary Alliance on China, an advocacy organization that connects legislators from over 30 countries to try to hold China accountable. She believes the language in the judgment suggests the tech companies will now be ready to comply with the government’s request.

A Google spokesperson says the company is reviewing the court’s judgment and didn’t respond to specific questions sent by MIT Technology Review. A Meta spokesperson pointed to a statement from Jeff Paine, the managing director of the Asia Internet Coalition, a trade group representing many tech companies in the Asia-Pacific region: The AIC “is assessing the implications of the decision made today, including how the injunction will be implemented, to determine its impact on businesses. We believe that a free and open internet is fundamental to the city’s ambitions to become an international technology and innovation hub.” The AIC did not immediately reply to questions sent via email. Apple and Spotify didn’t immediately respond to requests for comment.

But no matter what these companies do next, the ruling is already having an effect: Just over 24 hours after the court order, some of the 32 YouTube videos that are explicitly named in the injunction as requiring removal were inaccessible for users worldwide, not just in Hong Kong. 

While it’s unclear whether the videos were removed by the platform or by their creators, experts say the court decision will almost certainly set a precedent for more content to be censored from Hong Kong’s internet in the future.

“Censorship of the song would be a clear violation of internet freedom and freedom of expression,” says Yaqiu Wang, the research director for China, Hong Kong, and Taiwan at Freedom House, a human rights advocacy group. “Google and other internet companies should use all available channels to challenge the decision.” 

Erasing a song from the internet

Since “Glory to Hong Kong” was first uploaded to YouTube in August 2019 by an anonymous group called Dgx Music, it’s been adored by protesters and applauded as their anthem. Its popularity only grew after China passed the harsh Hong Kong national security law in 2020.

It also unsurprisingly became a major flashpoint. With lyrics like “Liberate Hong Kong, revolution of our times,” the song made the city and national Chinese governments wary of its spread.

Their fears escalated when the song was repeatedly mistaken for China’s national anthem at international events and broadcast at sporting events after Hong Kong athletes won. By mid-2023 the mistake, intentional or not, had happened 887 times, according to the Hong Kong government’s request for the content’s removal, which blames YouTube videos and Google search results that refer to the song as the “Hong Kong National Anthem.”

The government has been arresting people for performing the song on the ground in Hong Kong, but it has been harder to prosecute the online activity since most of the videos and music were uploaded anonymously, and Hong Kong, unlike mainland China, has historically had a free internet. This meant officials needed to explore new approaches to content removal. 

To comply or not to comply

Using the controversial 2020 national security law as legal justification to make requests for removal of certain content deemed threatening, the Hong Kong government has been able to exert pressure on local companies, like internet service providers (ISPs). “In Hong Kong, all the major internet service providers are locally owned or Chinese-owned. For business reasons, probably within the last 20 years, most of the foreign investors like Verizon left on their own,” says Charles Mok, a researcher at Stanford University’s Cyber Policy Center and a former legislator in Hong Kong. “So right now, the government is focusing on telling the customer-facing internet service providers to do the blocking.” And it seems to have been somewhat effective, with a few websites for human rights activist organizations becoming inaccessible locally.

But the city government can’t get its way as easily when the content is on foreign-owned platforms like YouTube or Facebook. Back in 2020, most major Western companies declared they would pause processing data requests from the Hong Kong government while they assessed the law. Over time, some of them have started answering government requests again. But they’ve largely remained firm: over the first six months of 2023, for example, Meta received 41 requests from the Hong Kong government for user data and complied with none; during the same period, Google received requests to remove 164 items from its services and ended up removing 82 of them, according to both companies’ transparency reports. Google specifically mentioned that it chose not to remove two YouTube videos and one Google Drive file related to “Glory to Hong Kong.”

Both sides are in tight spots. Tech companies don’t want to lose the Hong Kong market or endanger their local staff, but they are also worried about being seen as complying with authoritarian government actions. And the Hong Kong government doesn’t want to be seen as openly fighting Western platforms while trust in the region’s financial markets is already in decline. In particular, officials fear international headlines if the government invokes criminal law to force tech companies to remove certain content. 

“I think both sides are navigating this balancing act. So the government finally figured out a way that they thought might be able to solve the impasse: by going to the court and narrowly seeking an injunction,” Mok says.

That happened in June 2023, when Hong Kong’s government requested a court injunction to ban the distribution of the song online with the purpose of “inciting others to commit secession.” It named 32 YouTube videos explicitly, including the original version and live performances, translations in other languages, instrumental and opera versions, and an interview of the original creators. But the order would also cover “any adaptation of the song, the melody and/or lyrics of which are substantially the same as the song,” according to court documents. 

The injunction went through a year of back-and-forth hearings, including a lower court ruling that briefly struck down the ban. But now the Court of Appeal has granted the government approval. The case can theoretically be appealed one last time, but with no defendants present, that’s unlikely to happen.

The key difference between this action and previous attempts to remove content is that this is a civil injunction, unlike a criminal prosecution—meaning it is, at least legally speaking, closer to a copyright takedown request. In turn, a platform could arguably be less likely to take a reputational hit as long as it removes the content upon request. 

Kwong believes this will indeed make platforms more likely to cooperate, and there have already been clear signs to that effect. In one hearing in December, the court asked the government to consult online platforms about the feasibility of the injunction. The final judgment this week says that while the platforms “have not taken part in these proceedings, they have indicated that they are ready to accede to the Government’s request if there is a court order.”

“The actual targets in this case, mainly the tech giants, may have less hesitation to comply with a civil court order than a national security order because if it’s the latter, they may also face backfire from the US,” says Eric Yan-Ho Lai, a research fellow at Georgetown Center for Asian Law. 

Lai also says that now that the injunction is granted, it will be easier to prosecute an individual for violating the civil injunction than for a criminal offense, since the government won’t need to prove criminal intent.

The chilling effect

Immediately after the injunction, human rights advocates called on tech companies to remain committed to their values. “Companies like Google and Apple have repeatedly claimed that they stand by the universal right to freedom of expression. They should put their ideals into practice,” says Freedom House’s Wang. “Google and other tech companies should thoroughly document government demands, and publish detailed transparency reports on content takedowns, both for those initiated by the authorities and those done by the companies themselves.”

Until the companies make their plans clear, it’s too early to know just how they will react. But right after the injunction was granted, the song largely remained available on most platforms, including YouTube, iTunes, and Spotify, for Hong Kong users, according to the South China Morning Post. On iTunes, the song even returned to the top of the download rankings a few hours after the injunction.

One key factor that may still determine corporate cooperation is how far the content removal requests go. There will surely be more videos of the song that are uploaded to YouTube, not to mention independent websites hosting the videos and music for more people to access. Will the government go after each of them too?

The Hong Kong government has previously said in court hearings that it seeks only a local restriction of the online content, meaning the content would be inaccessible only to users physically in the city, something large platforms like YouTube can do without difficulty.

Theoretically, this allows local residents to still circumvent the ban by using VPN software, but not everyone would be technologically savvy enough to do so. And that wouldn’t do much to minimize the larger chilling effect on free speech, says Kwong from the Inter-Parliamentary Alliance on China. 

“As a Hong Konger living abroad, I do rely on Hong Kong services or international services based in Hong Kong to get a hold of what’s happening in the city. I do use YouTube Hong Kong to see certain things, and I do use Spotify Hong Kong or Apple Music because I want access to Cantopop,” she says. “At the same time, you worry about what you can share with friends in Hong Kong and whatnot. We don’t want to put them into trouble by sharing things that they are not supposed to see, which they should be able to see.”

The court made at least two explicit exemptions to the song’s ban, for “lawful activities conducted in connection with the song, such as those for the purpose of academic activity and news activity.” But even the implementation of these could be incredibly complex and confusing in practice. “In the current political context in Hong Kong, I don’t see anyone willing to take the risk,” Kwong says. 

The government has already arrested prominent journalists on charges of endangering national security, and a new law passed in 2024 has expanded the crimes that can be prosecuted on national security grounds. As with all efforts to suppress free speech, vague boundaries encourage self-censorship on potentially sensitive topics, and their impact is often sprawling and hard to measure.

“Nobody knows where the actual red line is,” Kwong says.

Google DeepMind’s new AlphaFold can model a much larger slice of biological life

Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

It’s a development that could help accelerate drug discovery and other scientific research. The tool is currently being used to experiment with identifying everything from resilient crops to new vaccines. 

While the previous model, released in 2020, amazed the research community with its ability to predict protein structures, researchers have been clamoring for the tool to handle more than just proteins.

Now, DeepMind says, AlphaFold 3 can predict the structures of DNA, RNA, and molecules like ligands, which are essential to drug discovery. DeepMind says the tool provides a more nuanced and dynamic portrait of molecule interactions than anything previously available. 

“Biology is a dynamic system,” DeepMind CEO Demis Hassabis told reporters on a call. “Properties of biology emerge through the interactions between different molecules in the cell, and you can think about AlphaFold 3 as our first big sort of step toward [modeling] that.”

AlphaFold 2 helped us better map the human heart, model antimicrobial resistance, and identify the eggs of extinct birds, but we don’t yet know what advances AlphaFold 3 will bring. 

Mohammed AlQuraishi, an assistant professor of systems biology at Columbia University who is unaffiliated with DeepMind, thinks the new version of the model will be even better for drug discovery. “The AlphaFold 2 system only knew about amino acids, so it was of very limited utility for biopharma,” he says. “But now, the system can in principle predict where a drug binds a protein.”

Isomorphic Labs, a drug discovery spinoff of DeepMind, is already using the model for exactly that purpose, collaborating with pharmaceutical companies to try to develop new treatments for diseases, according to DeepMind. 

AlQuraishi says the release marks a big leap forward. But there are caveats.

“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold. But for others, like protein-RNA interactions, AlQuraishi says it’s still very inaccurate. 

DeepMind says that depending on the interaction being modeled, accuracy can range from 40% to over 80%, and the model will let researchers know how confident it is in its prediction. With less accurate predictions, researchers have to use AlphaFold merely as a starting point before pursuing other methods. Regardless of these ranges in accuracy, if researchers are trying to take the first steps toward answering a question like which enzymes have the potential to break down the plastic in water bottles, it’s vastly more efficient to use a tool like AlphaFold than experimental techniques such as x-ray crystallography. 

A revamped model  

AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which AI researchers have been steadily improving in recent years and which now power image and video generators like OpenAI’s DALL-E 2 and Sora. A diffusion model is trained to start with a noisy image and then reduce that noise bit by bit until an accurate prediction emerges. That method allows AlphaFold 3 to handle a much larger set of inputs.
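The iterative idea behind diffusion can be sketched in a few lines. This toy loop is purely illustrative and is not AlphaFold 3’s actual architecture: a real diffusion model learns to estimate the noise with a trained network, whereas here the “noise estimate” is computed from a known target so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.array([1.0, -2.0, 0.5])   # the "clean" structure we want to recover
x = rng.normal(size=target.shape)     # start from pure random noise

for _ in range(100):
    estimated_noise = x - target      # a trained network would predict this
    x = x - 0.1 * estimated_noise     # strip away a fraction of the noise

print(np.allclose(x, target, atol=1e-3))  # prints True: the sample has converged
```

Each pass removes only a fraction of the remaining noise, which is why the sample converges gradually rather than jumping straight to the answer.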

That marked “a big evolution from the previous model,” says John Jumper, director at Google DeepMind. “It really simplified the whole process of getting all these different atoms to work together.”

It also presented new risks. As the AlphaFold 3 paper details, the use of diffusion techniques made it possible for the model to hallucinate, or generate structures that look plausible but in reality could not exist. Researchers reduced that risk by adding more training data to the areas most prone to hallucination, though that doesn’t eliminate the problem completely. 

Restricted access

Part of AlphaFold 3’s impact will depend on how DeepMind divvies up access to the model. For AlphaFold 2, the company released the open-source code, allowing researchers to look under the hood to gain a better understanding of how it worked. It was also available for all purposes, including commercial use by drugmakers. For AlphaFold 3, Hassabis said, there are no current plans to release the full code. The company is instead releasing a public interface for the model called the AlphaFold Server, which imposes limitations on which molecules can be experimented with and can only be used for noncommercial purposes. DeepMind says the interface will lower the technical barrier and broaden the use of the tool to biologists who are less knowledgeable about this technology.

The new restrictions are significant, according to AlQuraishi. “The system’s main selling point—its ability to predict protein–small molecule interactions—is basically unavailable for public use,” he says. “It’s mostly a teaser at this point.”

How I learned to stop worrying and love fake meat

Fixing our collective meat problem is one of the trickiest challenges in addressing climate change—and for some baffling reason, the world seems intent on making the task even harder.

The latest example occurred last week, when Florida governor Ron DeSantis signed a law banning the production, sale, and transportation of cultured meat across the Sunshine State. 

“Florida is fighting back against the global elite’s plan to force the world to eat meat grown in a petri dish or bugs to achieve their authoritarian goals,” DeSantis seethed in a statement.

Alternative meat and animal products—be they lab-grown or plant-based—offer a far more sustainable path to mass-producing protein than raising animals for milk or slaughter. Yet again and again, politicians, dietitians, and even the press continue to devise ways to portray these products as controversial, suspect, or substandard. No matter how good they taste or how much they might reduce greenhouse-gas emissions, there’s always some new obstacle standing in the way—in this case, Governor DeSantis, wearing a not-at-all-uncomfortable smile.  

The new law clearly has nothing to do with the creeping threat of authoritarianism (though for more on that, do check out his administration’s crusade to ban books about gay penguins). First and foremost it is an act of political pandering, a way to coddle Florida’s sizable cattle industry, which he goes on to mention in the statement.

Cultured meat is seen as a threat to the livestock industry because animals are only minimally involved in its production. Companies grow cells originally extracted from animals in a nutrient broth and then form them into nuggets, patties, or fillets. The US Department of Agriculture has already given its blessing to two companies, Upside Foods and Good Meat, to begin selling cultured chicken products to consumers. Israel recently became the first nation to sign off on a beef version.

It’s still hard to say if cultured meat will get good enough and cheap enough anytime soon to meaningfully reduce our dependence on cattle, chicken, pigs, sheep, goats, and other animals for our protein and our dining pleasure. And it’s sure to take years before we can produce it in ways that generate significantly lower emissions than standard livestock practices today.

But there are high hopes it could become a cleaner and less cruel way of producing meat. It wouldn’t require all the land, food, and energy needed to raise, feed, slaughter, and process animals today. One study found that cultured meat could reduce emissions per kilogram of meat by 92% by 2030, even if cattle farming also achieves substantial improvements.

Those sorts of gains are essential if we hope to ease the rising dangers of climate change, because meat, dairy, and cheese production are huge contributors to greenhouse-gas emissions.

DeSantis and politicians in other states that may follow suit, including Alabama and Tennessee, are raising the specter of mandated bug-eating and global-elite string-pulling to turn cultured meat into a cultural issue, and kill the industry in its infancy. 

But, again, it’s always something. I’ve heard a host of other arguments across the political spectrum directed against various alternative protein products, which also include plant-based burgers, cheeses, and milks, or even cricket-derived powders and meal bars. Apparently these meat and dairy alternatives shouldn’t be highly processed, mass-produced, or genetically engineered, nor should they ever be as unhealthy as their animal-based counterparts. 

In effect, we are setting up tests that almost no products can pass, when really all we should ask of alternative proteins is that they be safe, taste good, and cut climate pollution.

The meat of the matter

Here’s the problem. 

Livestock production generates more than 7 billion tons of carbon dioxide equivalent, making up 14.5% of the world’s overall climate emissions, according to the United Nations Food and Agriculture Organization.

Beef, milk, and cheese production are, by far, the biggest problems, representing some 65% of the sector’s emissions. We burn down carbon-dense forests to provide cows with lots of grazing land; then they return the favor by burping up staggering amounts of methane, one of the most powerful greenhouse gases. Florida’s cattle population alone, for example, could generate about 180 million pounds of methane every year, as calculated from standard per-animal emissions.
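For a rough sense of where a number like that comes from, here is a back-of-the-envelope sketch. Both inputs are assumptions for illustration, not figures from the article: roughly 900,000 head is a ballpark for Florida’s cattle herd, and about 200 pounds of enteric methane per head per year is a commonly cited estimate for cattle.

```python
herd_size = 900_000             # head of cattle (assumed, illustrative)
lbs_methane_per_head = 200      # lb of methane per head per year (assumed)

total_lbs = herd_size * lbs_methane_per_head
print(f"{total_lbs / 1e6:.0f} million lb of methane per year")  # 180 million
```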

In an earlier paper, the World Resources Institute noted that in the average US diet, beef contributed 3% of the calories but almost half the climate pollution from food production. (If you want to take a single action that could meaningfully ease your climate footprint, read that sentence again.)

The added challenge is that the world’s population is both growing and becoming richer, which means more people can afford more meat. 

There are ways to address some of the emissions from livestock production without cultured meat or plant-based burgers, including developing supplements that reduce methane burps and encouraging consumers to simply reduce meat consumption. Even just switching from beef to chicken can make a huge difference.

Let’s clear up one matter, though. I can’t imagine a politician in my lifetime, in the US or most of the world, proposing a ban on meat and expecting to survive the next election. So no, dear reader. No one’s coming for your rib eye. If there’s any attack on personal freedoms and economic liberty here, DeSantis is the one waging it by not allowing Floridians to choose for themselves what they want to eat.

But there is a real problem in need of solving. And the grand hope of companies like Beyond Meat, Upside Foods, Miyoko’s Creamery, and dozens of others is that we can develop meat, milk, and cheese alternatives that are akin to EVs: that is to say, products that are good enough to solve the problem without demanding any sacrifice from consumers or requiring government mandates. (Though subsidies always help.)

The good news is the world is making some real progress in developing substitutes that increasingly taste like, look like, and have (with apologies for the snooty term) the “mouthfeel” of the traditional versions, whether they’ve been developed from animal cells or plants. If they catch on and scale up, it could make a real dent in emissions—with the bonus of reducing animal suffering, environmental damage, and the spillover of animal disease into the human population.

The bad news is we can’t seem to take the wins when we get them. 

The blue cheese blues

For lunch last Friday, I swung by the Butcher’s Son Vegan Delicatessen & Bakery in Berkeley, California, and ordered a vegan Buffalo chicken sandwich with a side of blue cheese developed by Climax Foods, also based in Berkeley.

Late last month, it emerged that the product had, improbably, clinched the cheese category in the blind taste tests of the prestigious Good Food awards, as the Washington Post revealed.

Let’s pause here to note that this is a stunning victory for vegan cheeses, a clear sign that we can use plants to produce top-notch artisanal products, indistinguishable even to the refined palates of expert gourmands. If a product is every bit as tasty and satisfying as the original but can be produced without milking methane-burping animals, that’s a big climate win.

But sadly, that’s not where the story ended.

After word leaked out that the blue cheese was a finalist, if not the winner, the Good Food Foundation seems to have added a rule that didn’t exist when the competition began but which disqualified Climax Blue, the Post reported.

I have no special insights into what unfolded behind the scenes. But it reads at least a little as if the competition concocted an excuse to dethrone a vegan cheese that had bested its animal counterparts and left traditionalists aghast. 

That victory might have done wonders to help promote acceptance of the Climax product, if not the wider category. But now the story is the controversy. And that’s a shame. Because the cheese is actually pretty good. 

I’m no professional foodie, but I do have a lifetime of expertise born of stubbornly refusing to eat any salad dressing other than blue cheese. In my own taste test, I can report it looked and tasted like mild blue cheese, which is all it needs to do.

A beef about burgers

Banning a product or changing a cheese contest’s rules after determining the winner are both bad enough. But the reaction to alternative proteins that has left me most befuddled is the media narrative that formed around the latest generation of plant-based burgers soon after they started getting popular a few years ago. Story after story would note, in the tone of a bold truth-teller revealing something new each time: Did you know these newfangled plant-based burgers aren’t actually all that much healthier than the meat variety? 

To which I would scream at my monitor: THAT WAS NEVER THE POINT!

The world has long been perfectly capable of producing plant-based burgers that are better for you, but the problem is that they tend to taste like plants. The actual innovation with the more recent options like Beyond Burger or Impossible Burger is that they look and taste like the real thing but can be produced with a dramatically smaller climate footprint.

That’s a big enough win in itself. 

If I were a health reporter, maybe I’d focus on these issues too. And if health is your personal priority, you should shop for a different plant-based patty (or I might recommend a nice salad, preferably with blue cheese dressing).

But speaking as a climate reporter, expecting a product to ease global warming, taste like a juicy burger, and also be low in salt, fat, and calories is absurd. You may as well ask a startup to conduct sorcery.

More important, making a plant-based burger healthier for us may also come at the cost of having it taste like a burger. Which would make it that much harder to win over consumers beyond the niche of vegetarians and thus have any meaningful impact on emissions. WHICH IS THE POINT!

It’s incredibly difficult to convince consumers to switch brands and change behaviors, even for a product as basic as toothpaste or toilet paper. Food is trickier still, because it’s deeply entwined with local culture, family traditions, festivals, and celebrations. Whether we find a novel food product yummy or yucky is subjective and highly susceptible to suggestion.

And so I’m ending with a plea. Let’s grant ourselves the best shot possible at solving one of the hardest, most urgent problems before us. Treat bans and political posturing with the ridicule they deserve. Reject the argument that any single product must, or can, solve all the problems related to food, health, and the environment.

Give these alternative foods a shot, afford them room to improve, and keep an open mind. 

Though it’s cool if you don’t want to try the crickets.

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn’t even discuss with his wife. His mother will occasionally make a comment, like telling him to take care of himself—he’s her only child. But mostly, she just listens.

That’s because Sun’s mother died five years ago. And the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. They’ve been talking for a few years now. 

After she died of a sudden illness in 2019, Sun wanted to find a way to keep their connection alive. So he turned to a team at Silicon Intelligence, an AI company based in Nanjing, China, that he cofounded in 2017. He provided them with a photo of her and some audio clips from their WeChat conversations. While the company was mostly focused on audio generation, the staff spent four months researching synthetic tools and generated an avatar with the data Sun provided. Then he was able to see and talk to a digital version of his mom via an app on his phone. 

“My mom didn’t seem very natural, but I still heard the words that she often said: ‘Have you eaten yet?’” Sun recalls of the first interaction. Because generative AI was a nascent technology at the time, the replica of his mom could say only a few pre-written lines. But Sun says that’s what she was like anyway. “She would always repeat those questions over and over again, and it made me very emotional when I heard it,” he says.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them. In fact, the avatars are the newest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead. 

The technology isn’t perfect—avatars can still be stiff and robotic—but it’s maturing, and more tools are becoming available through more companies. In turn, the price of “resurrecting” someone—also called creating “digital immortality” in the Chinese industry—has dropped significantly. Now this technology is becoming accessible to the general public. 

Some people question whether interacting with AI replicas of the dead is actually a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. For now, the idea still makes a lot of people uncomfortable. But as Silicon Intelligence’s other cofounder, CEO Sima Huapeng, says, “Even if only 1% of Chinese people can accept [AI cloning of the dead], that’s still a huge market.” 

AI resurrection

Avatars of the dead are essentially deepfakes: the technologies used to replicate a living person and a dead person aren’t inherently different. Diffusion models generate a realistic avatar that can move and speak. Large language models can be attached to generate conversations. The more data these models ingest about someone’s life—including photos, videos, audio recordings, and texts—the more closely the result will mimic that person, whether dead or alive.

China has proved to be a ripe market for all kinds of digital doubles. For example, the country has a robust e-commerce sector, and consumer brands hire many livestreamers to sell products. Initially, these were real people, but as MIT Technology Review reported last fall, many brands are switching to AI-cloned influencers that can stream 24/7.

In just the past three years, the Chinese sector developing AI avatars has matured rapidly, says Shen Yang, a professor studying AI and media at Tsinghua University in Beijing, and replicas have improved from minutes-long rendered videos to 3D “live” avatars that can interact with people.  

This year, Sima says, has seen a tipping point, with AI cloning becoming affordable for most individuals. “Last year, it cost about $2,000 to $3,000, but it now only costs a few hundred dollars,” he says. That’s thanks to a price war between Chinese AI companies, which are fighting to meet the thriving demand for digital avatars in other sectors like streaming.

In fact, demand for applications that re-create the dead has also boosted the capabilities of tools that digitally replicate the living. 

Silicon Intelligence offers both services. When Sun and Sima launched the company, they were focused on using text-to-speech technologies to create audio and then using those AI-generated voices in applications such as robocalls.

But after the company replicated Sun’s mother, it pivoted to generating realistic avatars. That decision turned the company into one of the leading Chinese players creating AI-powered influencers. 

Example of the tablet product by Silicon Intelligence. The avatar of the grandma can converse with the user.
SILICON INTELLIGENCE

Its technology has generated avatars for hundreds of thousands of TikTok-like videos and streaming channels, but Sima says more recently it’s seen around 1,000 clients use it to replicate someone who’s passed away. “We started our work on ‘resurrection’ in 2019 and 2020,” he says, but at first people were slow to accept it: “No one wanted to be the first adopters.” 

The quality of the avatars has improved, he says, which has boosted adoption. When the avatar looks increasingly lifelike and gives fewer out-of-character answers, it’s easier for users to treat it as their deceased family member. Plus, the idea is getting popularized through more depictions on Chinese TV. 

Now Silicon Intelligence offers the replication service for a price between several hundred and several thousand dollars. The most basic product comes as an interactive avatar in an app, and the options at the upper end of the range often involve more customization and better hardware components, such as a tablet or a display screen. At least a handful of other Chinese companies are working on the same technology.

A modern twist on tradition

The business in these deepfakes builds on China’s long cultural history of communicating with the dead. 

In Chinese homes, it’s common to put up a portrait of a deceased relative for a few years after the death. Zhang Zewei, founder of a Shanghai-based company called Super Brain, says he and his team wanted to revamp that tradition with an “AI photo frame.” They create avatars of deceased loved ones that are pre-loaded onto an Android tablet, which looks like a photo frame when standing up. Clients can choose a moving image that speaks words drawn from an offline database or from an LLM. 

“In its essence, it’s not much different from a traditional portrait, except that it’s interactive,” Zhang says.

Zhang says the company has made digital replicas for over 1,000 clients since March 2023 and charges $700 to $1,400, depending on the service purchased. The company plans to release an app-only product soon, so that users can access the avatars on their phones, which could further reduce the cost to around $140.

Super Brain demonstrates the app-only version with an avatar of Zhang Zewei answering his own questions.
SUPER BRAIN

The purpose of his products, Zhang says, is therapeutic. “When you really miss someone or need consolation during certain holidays, you can talk to the artificial living and heal your inner wounds,” he says.

And even if that conversation is largely one-sided, that’s in keeping with a strong cultural tradition. Every April during the Qingming festival, Chinese people sweep the tombs of their ancestors, burn joss sticks and fake paper money, and tell them what has happened in the past year. Of course, those conversations have always been one-way. 

But that’s not the case for all Super Brain services. The company also offers deepfaked video calls in which a company employee or a contract therapist pretends to be the relative who passed away. Using DeepFace, an open-source tool that analyzes facial features, the deceased person’s face is reconstructed in 3D and swapped in for the live person’s face with a real-time filter. 

Example of a deepfake video call Super Brain did in July 2023. The face in the top right corner is from the deceased son of the woman.
SUPER BRAIN

At the other end of the call is usually an elderly family member who may not know that the relative has died—and whose family has arranged the conversation as a ruse. 

Jonathan Yang, a Nanjing resident who works in the tech industry, paid for this service in September 2023. His uncle died in a construction accident, but the family hesitated to tell Yang’s grandmother, who is 93 and in poor health. They worried that she wouldn’t survive the devastating news.

So Yang paid $1,350 to commission three deepfaked calls of his dead uncle. He gave Super Brain a handful of photos and videos of his uncle to train the model. Then, on three Chinese holidays, a Super Brain employee video-called Yang’s grandmother and told her, as his uncle, that he was busy working in a faraway city and wouldn’t be able to come back home, even during the Chinese New Year. 

“The effect has met my expectations. My grandma didn’t suspect anything,” Yang says. His family did have mixed opinions about the idea, because some relatives thought maybe she would have wanted to see her son’s body before it was cremated. Still, the whole family got on board in the end, believing the ruse would be best for her health. After all, it’s pretty common for Chinese families to tell “necessary” lies to avoid overwhelming seniors, as depicted in the movie The Farewell.

To Yang, a close follower of AI industry trends, creating replicas of the dead is one of the best applications of the technology. “It best represents the warmth [of AI],” he says. His grandmother’s health has improved, and there may come a day when they finally tell her the truth. By that time, Yang says, he may purchase a digital avatar of his uncle for his grandma to talk to whenever she misses him.

Is AI really good for grief? 

Even as AI cloning technology improves, there are some significant barriers preventing more people from using it to speak with their dead relatives in China. 

On the tech side, there are limitations to what AI models can generate. Most LLMs can handle dominant languages like Mandarin and Cantonese, but they aren’t able to replicate the many niche dialects in China. It’s also challenging—and therefore costly—to replicate body movements and complex facial expressions in 3D models. 

Then there’s the issue of training data. Unlike cloning someone who’s still alive, which often involves asking the person to record body movements or say certain things, posthumous AI replications must rely on whatever videos or photos are already available. And many clients don’t have high-quality data, or enough of it, for the end result to be satisfactory. 

Complicating these technical challenges are myriad ethical questions. Notably, how can someone who is already dead consent to being digitally replicated? For now, companies like Super Brain and Silicon Intelligence rely on the permission of direct family members. But what if family members disagree? And if a digital avatar generates inappropriate answers, who is responsible?

Similar technology caused controversy earlier this year. A company in Ningbo reportedly used AI tools to create videos of deceased celebrities and posted them on social media to speak to their fans. The videos were generated using public data, but without seeking any approval or permission. The result was intense criticism from the celebrities’ families and fans, and the videos were eventually taken down. 

“It’s a new domain that only came about after the popularization of AI: the rights to digital eternity,” says Shen, the Tsinghua professor, who also runs a lab that creates digital replicas of people who have passed away. He believes it should be prohibited to use deepfake technology to replicate living people without their permission. For people who have passed away, all of their immediate living family members must agree beforehand, he says. 

There could be negative effects on clients’ mental health, too. While some people, like Sun, find their conversations with avatars to be therapeutic, not everyone thinks it’s a healthy way to grieve. “The controversy lies in the fact that if we replicate our family members because we miss them, we may constantly stay in the state of mourning and can’t withdraw from it to accept that they have truly passed away,” says Shen. A widowed person who’s in constant conversation with the digital version of their partner might be held back from seeking a new relationship, for instance. 

“When someone passes away, should we replace our real emotions with fictional ones and linger in that emotional state?” Shen asks. Psychologists and philosophers who talked to MIT Technology Review about the impact of grief tech have warned about the danger of doing so. 

Sun Kai, at least, has found the digital avatar of his mom to be a comfort. She’s like a 24/7 confidante on his phone. Even though it’s possible to remake his mother’s avatar with the latest technology, he hasn’t yet done that. “I’m so used to what she looks like and sounds like now,” he says. As years have gone by, the boundary between her avatar and his memory of her has begun to blur. “Sometimes I couldn’t even tell which one is the real her,” he says.

And Sun is still okay with doing most of the talking. “When I’m confiding in her, I’m merely letting off steam. Sometimes you already know the answer to your question, but you still need to say it out loud,” he says. “My conversations with my mom have always been like this throughout the years.” 

But now, unlike before, he gets to talk to her whenever he wants to.

The way whales communicate is closer to human language than we realized

Sperm whales are fascinating creatures. They possess the biggest brain of any species, six times larger than a human’s, which scientists believe may have evolved to support intelligent, rational behavior. They’re highly social, capable of making decisions as a group, and they exhibit complex foraging behavior.  

But there’s also a lot we don’t know about them, including what they may be trying to say to one another when they communicate using a system of short bursts of clicks, known as codas. Now, new research published in Nature Communications today suggests that sperm whales’ communication is actually much more expressive and complicated than was previously thought. 

A team of researchers led by Pratyusha Sharma at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds.

The team analyzed recordings of 8,719 codas from around 60 whales collected by the Dominica Sperm Whale Project between 2005 and 2018, using a mix of algorithms for pattern recognition and classification. They found that the way the whales communicate was not random or simplistic, but structured depending on the context of their conversations. This allowed them to identify distinct vocalizations that hadn’t been previously picked up on.
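The paper describes codas in terms of simple, interpretable features such as how many clicks a coda contains, how fast it is delivered, and its internal rhythm. A minimal sketch of that kind of feature extraction is below; the coda timestamps are synthetic and the feature names and thresholds are illustrative, not taken from the study's actual pipeline.

```python
# Illustrative sketch: reducing whale codas (short bursts of clicks) to
# simple statistical features, loosely in the spirit of the analysis
# described above. The codas below are synthetic, not Project CETI data.

def coda_features(click_times):
    """Reduce a coda (click timestamps in seconds) to interpretable features."""
    icis = [b - a for a, b in zip(click_times, click_times[1:])]  # inter-click intervals
    duration = click_times[-1] - click_times[0]
    # Duration-normalized pattern: the same "rhythm" can occur at different speeds
    rhythm = tuple(round(i / duration, 1) for i in icis)
    return {"clicks": len(click_times), "tempo": round(duration, 2), "rhythm": rhythm}

# Two synthetic codas: same click pattern, one stretched 1.5x slower
fast = [0.00, 0.10, 0.20, 0.40]
slow = [0.00, 0.15, 0.30, 0.60]

f, s = coda_features(fast), coda_features(slow)
print(f["rhythm"] == s["rhythm"])  # same rhythm type...
print(f["tempo"], s["tempo"])      # ...delivered at different tempos
```

Separating rhythm from tempo in this way is what lets an analysis notice that two codas of different lengths are variations on one underlying pattern, rather than unrelated vocalizations.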

Instead of relying on more complicated machine-learning techniques, the researchers chose to use classical analysis to approach an existing database with fresh eyes.

“We wanted to go with a simpler model that would already give us a basis for our hypothesis,” says Sharma.

“The nice thing about a statistics approach is that you do not have to train a model and it’s not a black box, and [the analyses are] easier to perform,”  says Felix Effenberger, a senior AI research advisor to the Earth Species Project, a nonprofit that’s researching how to decode non-human communication using AI. But he points out that machine learning is a great way to speed up the process of discovering patterns in a data set, so adopting such a method could be useful in the future.

a diver with the whale recording unit

DAN TCHERNOV/PROJECT CETI

The algorithms turned the clicks within the coda data into a new kind of data visualization the researchers call an exchange plot, revealing that some codas featured extra clicks. These extra clicks, combined with variations in the duration of their calls, appeared in interactions between multiple whales, which the researchers say suggests that codas can carry more information and possess a more complicated internal structure than we’d previously believed.

“One way to think about what we found is that people have previously been analyzing the sperm whale communication system as being like Egyptian hieroglyphics, but it’s actually like letters,” says Jacob Andreas, an associate professor at CSAIL who was involved with the project.

Although the team isn’t sure whether what it uncovered can be interpreted as the equivalent of the letters, tongue position, or sentences that go into human language, they are confident that there was a lot of internal similarity between the codas they analyzed, he says.

“This in turn allowed us to recognize that there were more kinds of codas, or more kinds of distinctions between codas, that whales are clearly capable of perceiving—[and] that people just hadn’t picked up on at all in this data.”

The team’s next step is to build language models of whale calls and to examine how those calls relate to different behaviors. They also plan to work on a more general system that could be used across species, says Sharma. Taking a communication system we know nothing about, working out how it encodes and transmits information, and slowly beginning to understand what’s being communicated could have many purposes beyond whales. “I think we’re just starting to understand some of these things,” she says. “We’re very much at the beginning, but we are slowly making our way through.”

Gaining an understanding of what animals are saying to each other is the primary motivation behind projects such as these. But if we ever hope to understand what whales are communicating, there’s a large obstacle in the way: the need for experiments to prove that such an attempt can actually work, says Caroline Casey, a researcher at UC Santa Cruz who has been studying elephant seals’ vocal communication for over a decade.

“There’s been a renewed interest since the advent of AI in decoding animal signals,” Casey says. “It’s very hard to demonstrate that a signal actually means to animals what humans think it means. This paper has described the subtle nuances of their acoustic structure very well, but taking that extra step to get to the meaning of a signal is very difficult to do.”

Scientists are trying to get cows pregnant with synthetic embryos

It was a cool morning at the beef teaching unit in Gainesville, Florida, and cow #307 was bucking in her metal cradle as the arm of a student perched on a stool disappeared into her cervix. The arm held a squirt bottle of water.

Seven other animals stood nearby behind a railing; it would be their turn next to get their uterus flushed out. As soon as the contents of #307’s womb spilled into a bucket, a worker rushed it to a small laboratory set up under the barn’s corrugated gables.

“It’s something!” said a postdoc named Hao Ming, dressed in blue overalls and muck boots, corralling a pink wisp of tissue under the lens of a microscope. But then he stepped back, not as sure. “It’s hard to tell.”

The experiment, at the University of Florida, is an attempt to create a large animal starting only from stem cells—no egg, no sperm, and no conception. A week earlier, “synthetic embryos,” artificial structures created in a lab, had been transferred to the uteruses of all eight cows. Now it was time to see what had grown.

About a decade ago, biologists started to observe that stem cells, left alone in a walled plastic container, will spontaneously self-assemble and try to make an embryo. These structures, sometimes called “embryo models” or embryoids, have gradually become increasingly realistic. In 2022, a lab in Israel grew the mouse version in a jar until cranial folds and a beating heart appeared.

At the Florida center, researchers are now attempting to go all the way. They want to make a live animal. If they do, it wouldn’t just be a totally new way to breed cattle. It could shake our notion of what life even is. “There has never been a birth without an egg,” says Zongliang “Carl” Jiang, the reproductive biologist heading the project. “Everyone says it is so cool, so important, but show me more data—show me it can go into a pregnancy. So that is our goal.”

For now, success isn’t certain, mostly because lab-made embryos generated from stem cells still aren’t exactly like the real thing. They’re more like an embryo seen through a fun-house mirror: the right parts, but in the wrong proportions. That’s why these are being flushed out after just a week—so the researchers can check how far they’ve grown and learn how to make better ones.

“The stem cells are so smart they know what their fate is,” says Jiang. “But they also need help.”

So far, most research on synthetic embryos has involved mouse or human cells, and it’s stayed in the lab. But last year Jiang, along with researchers in Texas, published a recipe for making a bovine version, which they called “cattle blastoids” for their resemblance to blastocysts, the stage of the embryo suitable for IVF procedures.  

Some researchers think that stem-cell animals could be as big a deal as Dolly the sheep, whose birth in 1996 brought cloning technology to barnyards. Cloning, in which an adult cell is placed in an egg, has allowed scientists to copy mice, cattle, pet dogs, and even polo ponies. The players on one Argentine team all ride clones of the same champion mare, named Dolfina.

Synthetic embryos are clones, too—of the starting cells you grow them from. But they’re made without the need for eggs and can be created in far larger numbers—in theory, by the tens of thousands. And that’s what could revolutionize cattle breeding. Imagine that each year’s calves were all copies of the most muscled steer in the world, perfectly designed to turn grass into steak.

“I would love to see this become cloning 2.0,” says Carlos Pinzón-Arteaga, the veterinarian who spearheaded the laboratory work in Texas. “It’s like Star Wars with cows.”

Endangered species

Industry has started to circle around. A company called Genus PLC, which specializes in assisted reproduction of “genetically superior” pigs and cattle, has begun buying patents on synthetic embryos. This year it started funding Jiang’s lab to support his effort, locking up a commercial option to any discoveries he might make.

Zoos are interested too. With many endangered animals, assisted reproduction is difficult. And with recently extinct ones, it’s impossible. All that remains is some tissue in a freezer. But this technology could, theoretically, blow life back into these specimens—turning them into embryos, which could be brought to term in a surrogate of a sister species.

But there’s an even bigger—and stranger—reason to pay attention to Jiang’s effort to make a calf: several labs are creating super-realistic synthetic human embryos as well. It’s an ethically charged arena, particularly given recent changes in US abortion laws. Although these human embryoids are considered nonviable—mere “models” that are fair game for research—all that could change quickly if the Florida project succeeds.

“If it can work in an animal, it can work in a human,” says Pinzón-Arteaga, who is now working at Harvard Medical School. “And that’s the Black Mirror episode.”

Industrial embryos

Three weeks before cow #307 stood in the dock, she and seven other heifers had been given stimulating hormones, to trick their bodies into thinking they were pregnant. After that, Jiang’s students had loaded blastoids into a straw they used like a popgun to shoot them towards each animal’s oviducts.

Many researchers think that if a stem-cell animal is born, the first one is likely to be a mouse. Mice are cheap to work with and reproduce fast. And one team has already grown a synthetic mouse embryo for eight days in an artificial womb—a big step, since a mouse pregnancy lasts only three weeks.

But bovines may not be far behind. There’s a large assisted-reproduction industry in cattle, with more than a million IVF attempts a year, half of them in North America. Many other beef and dairy cattle are artificially inseminated with semen from top-rated bulls. “Cattle is harder,” says Jiang. “But we have all the technology.”

hands adding a sample to a plate with a stripette
Inspecting a “synthetic” embryo that gestated in a cow for a week at the University of Florida, Gainesville.
ANTONIO REGALADO

The thing that came out of cow #307 turned out to be damaged, just a fragment. But later that day, in Jiang’s main laboratory, students were speed-walking across the linoleum holding something in a petri dish. They’d retrieved intact embryonic structures from some of the other cows. These looked long and stringy, like worms, or the skin shed by a miniature snake.

That’s precisely what a two-week-old cattle embryo should look like. But the outer appearance is deceiving, Jiang says. After staining chemicals are added, the specimens are put under a microscope. Then the disorder inside them is apparent. These “elongated structures,” as Jiang calls them, have the right parts—cells of the embryonic disc and placenta—but nothing is in quite the right place.

“I wouldn’t call them embryos yet, because we still can’t say if they are healthy or not,” he says. “Those lineages are there, but they are disorganized.”

Cloning 2.0

Jiang demonstrated how the blastoids are grown in a plastic plate in his lab. First, his students deposit stem cells into narrow tubes. In confinement, the cells begin communicating and very quickly start trying to form a blastoid. “We can generate hundreds of thousands of blastoids. So it’s an industrial process,” he says. “It’s really simple.”

That scalability is what could make blastoids a powerful replacement for cloning technology. Cattle cloning is still a tricky process, which only skilled technicians can manage, and it requires eggs, too, which come from slaughterhouses. But unlike blastoids, cloning is well established and actually works, says Cody Kime, R&D director at Trans Ova Genetics, in Sioux Center, Iowa. Each year, his company clones thousands of pigs as well as hundreds of prize-winning cattle.

“A lot of people would like to see a way to amplify the very best animals as easily as you can,” Kime says. “But blastoids aren’t functional yet. The gene expression is aberrant to the point of total failure. The embryos look blurry, like someone sculpted them out of oatmeal or Play-Doh. It’s not the beautiful thing that you expect. The finer details are missing.”

This spring, Jiang learned that the US Department of Agriculture shared that skepticism when it rejected his application for $650,000 in funding. “I got criticism: ‘Oh, this is not going to work.’ That this is high risk and low efficiency,” he says. “But to me, this would change the entire breeding program.”

One problem may be the starting cells. Jiang uses bovine embryonic stem cells—taken from cattle embryos. But these stem cells aren’t quite as versatile as they need to be. For instance, to make the first cattle blastoids, the team in Texas had to add a second type of cell, one that can make a placenta.

What’s needed instead are specially prepared “naïve” cells that are better poised to form the entire conceptus—both the embryo and placenta. Jiang showed me a PowerPoint with a large grid of different growth factors and lab conditions he is testing. Growing stem cells in different chemicals can shift the pattern of genes that are turned on. The latest batch of blastoids, he says, were made using a newer recipe and only needed to start with one type of cell.

Slaughterhouse

Jiang can’t say how long it will be before he makes a calf. His immediate goal is a pregnancy that lasts 30 days. If a synthetic embryo can grow that long, he thinks, it could go all the way, since “most pregnancy loss in cattle is in the first month.”

For a project to reinvent reproduction, Jiang’s budget isn’t particularly large, and he frets about the $2-a-day bill to feed each of his cows. During a tour of UF’s animal science department, he opened the door to a slaughter room, a vaulted space with tracks and chains overhead, where a man in a slicker was running a hose. It smelled like freshly cleaned blood.

Carl Jiang with Cow #307
Reproductive biologist Carl Jiang leads an effort to make animals from stem cells. The cow stands in a “hydraulic squeeze chute” while its uterus is checked.
ANTONIO REGALADO

This is where cow #307 ended up. After about 20 embryo transfers over three years, her cervix was worn out, and she came here. She was butchered, her meat wrapped and labeled, and sold to the public at market prices from a small shop at the front of the building. It’s important to everyone at the university that the research subjects aren’t wasted. “They are food,” says Jiang.

But there’s still a limit to how many cows he can use. He had 18 fresh heifers ready to join the experiment, but what if only 1% of embryos ever develop correctly? That would mean he’d need 100 surrogate mothers to see anything. It reminds Jiang of the first attempts at cloning: Dolly the sheep was one of 277 tries, and the others went nowhere. “How soon it happens may depend on industry. They have a lot of animals. It might take 30 years without them,” he says.

“It’s going to be hard,” agrees Peter Hansen, a distinguished professor in Jiang’s department. “But whoever does it first …” He lets the thought hang. “In vitro breeding is the next big thing.”

Human question

Cattle aren’t the only species in which researchers are checking the potential of synthetic embryos to keep developing into fetuses. Researchers in China have transplanted synthetic embryos into the wombs of monkeys several times. A report in 2023 found that the transplants caused hormonal signals of pregnancy, although no monkey fetus emerged.

Because monkeys are primates, like us, such experiments raise an obvious question. Will a lab somewhere try to transfer a synthetic embryo to a person? In many countries that would be illegal, and scientific groups say such an experiment should be strictly forbidden.

This summer, research leaders were alarmed by a media frenzy around reports of super-realistic models of human embryos that had been created in labs in the UK and Israel—some of which seemed to be nearly perfect mimics. To quell speculation, in June the International Society for Stem Cell Research, a powerful science and lobbying group, put out a statement declaring that the models “are not embryos” and “cannot and will not develop to the equivalent of postnatal stage humans.”

Some researchers worry that was a reckless thing to say. That’s because the statement would be disproved, biologically, as soon as any kind of stem-cell animal is born. And many top scientists expect that to happen. “I do think there is a pathway. Especially in mice, I think we will get there,” says Jun Wu, who leads the research group at UT Southwestern Medical Center, in Dallas, that collaborated with Jiang. “The question is, if that happens, how will we handle a similar technology in humans?”

Jiang says he doesn’t think anyone is going to make a person from stem cells. And he’s certainly not interested in doing so. He’s just a cattle researcher at an animal science department. “Scientists belong to society, and we need to follow ethical guidelines. So we can’t do it. It’s not allowed,” he says. “But in large animals, we are allowed. We’re encouraged. And so we can make it happen.”

Inside the quest to map the universe with mysterious bursts of radio energy

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all. 

Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now. 
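The "smeary fingerprint" has a standard quantitative form: free electrons delay lower radio frequencies more than higher ones, and the size of that delay scales with the dispersion measure (DM), the column density of electrons along the line of sight. A short sketch of the textbook cold-plasma delay formula is below; the DM and observing band are illustrative numbers, not values from the burst described in the story.

```python
# Sketch of radio-wave dispersion: in ionized gas, lower frequencies
# arrive later, and the lag encodes how many free electrons the burst
# passed through (the dispersion measure, DM, in pc cm^-3).

K_DM = 4.148808  # standard dispersion constant, ms GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) of f_lo relative to f_hi for a given DM."""
    return K_DM * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# An illustrative burst with DM = 500 pc cm^-3, observed across 400-800 MHz:
delay = dispersion_delay_ms(500, 0.4, 0.8)
print(f"{delay / 1000:.2f} s")  # the low edge of the band trails by ~9.7 s
```

Measuring this frequency sweep is how astronomers count the electrons a burst encountered; combined with an independent distance to the host galaxy, it turns each FRB into a probe of the otherwise invisible gas along its path.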

This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about. 

“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst. 

Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult, because most of that work must happen in the few milliseconds before the burst passes.

But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time. 

Where’s the matter?

In 1998, astronomers counted up the weight of all of the identified matter in the universe and got a puzzling result. 

We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons—the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But the astronomers managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.”

All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand. 

Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it. 

But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.

That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.

Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its signal was heavily smeared and distorted, unusual characteristics for a radio pulse. It was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe.

Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.

“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars—spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.

This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY

There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth. 

Astronomers couldn’t even point to the direction that the first 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by an enormous single-dish radio telescope (now called Murriyang) at the Parkes Observatory in New South Wales, which is great at picking up incoming radio waves but can pinpoint FRBs only to an area of the sky as large as Earth’s full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.

Then, in 2015, one particular radio wave flashed—and then flashed again. Over two months of observations with the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating FRB ever observed (a mystery in its own right), and the repetition finally gave researchers a chance to home in on its location.

In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says. 

That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometre Array, an international collaboration to build the largest radio telescope arrays in the world. 

The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs. 

Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons. 

And over the course of the next five years, that’s exactly what they did—with astonishing success.

Building a pipeline

To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance to detect a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view. 

ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.  

The dishes weren’t formally in use yet, but Bannister had an idea. He took them and jerry-rigged a “fly’s eye” telescope, pointing the dishes at different parts of the sky to maximize its ability to spot something that might flash anywhere. 

“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.” 

When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin. 
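Bannister’s real system runs on purpose-built telescope hardware, but the core trick he describes, a rolling buffer that a trigger snapshots before the incoming stream overwrites it, can be sketched in a few lines of Python. Everything here (class name, buffer size, the integer “samples”) is illustrative, not CSIRO’s implementation:

```python
from collections import deque

class LiveReplayBuffer:
    """Keep only the most recent samples; a trigger freezes a copy
    of that recent history before new data overwrites it."""

    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest sample
        # each time a new one arrives past capacity
        self.buffer = deque(maxlen=capacity)

    def ingest(self, sample):
        self.buffer.append(sample)

    def snapshot(self):
        # Freeze whatever is still in memory at trigger time
        return list(self.buffer)

# Simulate a stream of 12 samples through a 5-sample buffer:
buf = LiveReplayBuffer(capacity=5)
for t in range(12):
    buf.ingest(t)

recorded = buf.snapshot()
print(recorded)  # only the last five samples survive: [7, 8, 9, 10, 11]
```

The design point the sketch captures: detection latency must be shorter than the buffer’s depth, or the burst’s raw data is gone before the trigger fires.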

The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB. 

These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO

Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through. 
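Turning a measured redshift into a distance requires assuming a cosmological model. As a rough illustration only (the Hubble constant and matter density below are generic textbook values, not the ones Ryder’s team adopted), the comoving distance in a flat ΛCDM universe can be integrated numerically:

```python
import math

def comoving_distance_mpc(z, h0=70.0, omega_m=0.3, steps=1000):
    """Flat-LambdaCDM comoving distance via trapezoidal integration
    of dz' / E(z'), where E(z) = sqrt(Om*(1+z)^3 + OL)."""
    c = 299792.458  # speed of light, km/s
    omega_l = 1.0 - omega_m  # flat universe: densities sum to 1
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1 + zi) ** 3 + omega_l)
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid end weights
        total += weight / e
    return (c / h0) * total * dz

# A redshift-one burst, like the record-breaker in the Science paper,
# works out to roughly 3,300 megaparsecs of comoving distance
# under these assumed parameters:
print(round(comoving_distance_mpc(1.0)))
```

This is why redshift serves as the distance proxy: once a host galaxy’s spectrum yields z, the model converts it into the path length the burst traversed.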

In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons. 

The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.
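The physics behind the dispersion measure is compact enough to show directly. Free electrons delay the low-frequency part of a burst more than the high-frequency part, by an amount proportional to the DM times the inverse square of frequency. The sketch below uses the standard cold-plasma dispersion constant (rounded to 4.15 ms for frequencies in GHz and DM in pc cm⁻³); the example DM and observing band are made-up illustrative numbers:

```python
# Approximate cold-plasma dispersion constant:
# delay in ms, frequencies in GHz, DM in pc / cm^3
K_MS = 4.15

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time lag of the lower frequency behind the higher one.
    Larger DM (more electrons along the line of sight) means more smearing."""
    return K_MS * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# A hypothetical burst with DM = 500 pc/cm^3 observed across a
# 1.2-1.4 GHz band arrives several hundred milliseconds later at
# the bottom of the band than at the top:
delay = dispersion_delay_ms(500, 1.2, 1.4)
print(round(delay, 1))  # about 382 ms of smearing across the band
```

Measuring that frequency-dependent lag is how telescopes read off the DM, the quantity the Macquart relation ties to distance.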

As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology. 

Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one. 

“I remember thinking that I know something about the universe that no one else knows.”

The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data email from the European Southern Observatory and began to scroll through the spectrum data. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, symbolized by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.” 

With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.

“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.” 

Mapping the cosmic web

The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.

“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.” 

Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.” 

That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-square-degree one. Masui likened it to a mirror that can be focused on thousands of different places simultaneously. 

Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.

The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME

Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade. 

And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. If you can imagine stuffing a million people simultaneously watching uncompressed YouTube videos into a box the size of a fridge, that’s probably the easiest way to visualize the data handling capabilities of this new processor, called a field-programmable gate array, which Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.

With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.” 

Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely. 

“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”

Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures. 

Anna Kramer is a science and climate journalist based in Washington, D.C.

The depressing truth about TikTok’s impending ban

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Allow me to indulge in a little reflection this week. Last week, the divest-or-ban TikTok bill was passed in Congress and signed into law. Four years ago, when I was just starting to report on the world of Chinese technologies, one of my first stories was about very similar news: President Donald Trump announcing he’d ban TikTok. 

That 2020 executive order came to nothing in the end—it was blocked in the courts, put aside after the presidency changed hands, and eventually withdrawn by the Biden administration. Yet the idea—that the US government should ban TikTok in some way—never went away. It kept resurfacing in various forms. And eventually, on April 24, 2024, things came full circle.

A lot has changed in the four years between these two news cycles. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms, the originator of a generation-defining content medium, and a music-industry juggernaut. 

What has also changed is my outlook on the issue. For a long time, I thought TikTok would find a way out of the political tensions, but I’m increasingly pessimistic about its future. And I have even less hope for other Chinese tech companies trying to go global. If the TikTok saga tells us anything, it’s that their Chinese roots will be scrutinized forever, no matter what they do.

I don’t believe TikTok has become a larger security threat now than it was in 2020. There have always been issues with the app, like potential operational influence by the Chinese government, the black-box algorithms that produce unpredictable results, and the fact that parent company ByteDance never managed to separate the US side and the China side cleanly, despite efforts (one called Project Texas) to store and process American data locally. 

But none of those problems got worse over the last four years. And interestingly, while discussions in 2020 still revolved around potential remedies like setting up data centers in the US to store American data or having an organization like Oracle audit operations, those kinds of fixes are not in the law passed this year. As long as it still has Chinese owners, the app will not be permitted in the US. The only thing it can do to survive here is transfer ownership to a US entity. 

That’s the cold, hard truth not only for TikTok but for other Chinese companies too. In today’s political climate, any association with China and the Chinese government is seen as unacceptable. It’s a far cry from the 2010s, when Chinese companies could dream about developing a killer app and finding audiences and investors around the globe—something many did pull off. 

There’s something I wrote four years ago that still rings true today: TikTok is the bellwether for Chinese companies trying to go global. 

The majority of Chinese tech giants, like Alibaba, Tencent, and Baidu, operate primarily within China’s borders. TikTok was the first to gain mass popularity in lots of other countries across the world and become part of daily life for people outside China. To many Chinese startups, it showed that the hard work of trying to learn about foreign countries and users can eventually pay off, and it’s worth the time and investment to try.

On the other hand, if even TikTok can’t get itself out of trouble, with all the resources that ByteDance has, is there any hope for the smaller players?

When TikTok found itself in trouble, the initial reaction of these other Chinese companies was to conceal their roots, hoping they could avoid attention. During my reporting, I’ve encountered multiple companies that fret about being described as Chinese. “We are headquartered in Boston,” one would say, while everyone in China openly talked about its product as the overseas version of a Chinese app.

But with all the political back-and-forth about TikTok, I think these companies are also realizing that concealing their Chinese associations doesn’t work—and it may make them look even worse if it leaves users and regulators feeling deceived.

With the new divest-or-ban bill, I think these companies are getting a clear signal that it’s not the technical details that matter—only their national origin. The same worry is spreading to many other industries, as I wrote in this newsletter last week. Even in the climate and renewable power industries, the presence of Chinese companies is becoming increasingly politicized. They, too, are finding themselves scrutinized more for their Chinese roots than for the actual products they offer.

Obviously, none of this is good news to me. When they feel unwelcome in the US market, Chinese companies don’t feel the need to talk to international media anymore. Without these vital conversations, it’s even harder for people in other countries to figure out what’s going on with tech in China.

Instead of banning TikTok because it’s Chinese, maybe we should go back to focusing on what TikTok did wrong: why certain sensitive political topics seem deprioritized on the platform; why Project Texas has stalled; how to make the algorithmic workings of the platform more transparent. These issues, rather than whether TikTok is still controlled by China, are the things that actually matter. It’s a harder path to take than just banning the app entirely, but I think it’s the right one.

Do you believe the TikTok ban will go through? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Facing the possibility of a total ban on TikTok, influencers and creators are making contingency plans. (Wired $)

2. TSMC has brought hundreds of Taiwanese employees to Arizona to build its new chip factory. But the company is struggling to bridge cultural and professional differences between American and Taiwanese workers. (Rest of World)

3. The US secretary of state, Antony Blinken, met with Chinese president Xi Jinping during a visit to China this week. (New York Times $)

  • Here’s the best way to describe these recent US-China diplomatic meetings: “The US and China talk past each other on most issues, but at least they’re still talking.” (Associated Press)

4. Half of Russian companies’ payments to China are made through middlemen in Hong Kong, Central Asia, or the Middle East to evade sanctions. (Reuters $)

5. A massive auto show is taking place in Beijing this week, with domestic electric vehicles unsurprisingly taking center stage. (Associated Press)

  • Meanwhile, Elon Musk squeezed in a quick trip to China and met with his “old friend” the Chinese premier Li Qiang, who was believed to have facilitated establishing the Gigafactory in Shanghai. (BBC)
  • Tesla may finally get a license to deploy its autopilot system, which it calls Full Self Driving, in China after agreeing to collaborate with Baidu. (Reuters $)

6. Beijing has hosted two rival Palestinian political groups, Hamas and Fatah, to talk about potential reconciliation. (Al Jazeera)

Lost in translation

The Chinese dubbing community is grappling with the impacts of new audio-generating AI tools. According to the Chinese publication ACGx, for a new audio drama, a music company licensed the voice of the famous dubbing actor Zhao Qianjing and used AI to transform it into multiple characters and voice the entire script. 

But online, this wasn’t really celebrated as an advancement for the industry. Beyond criticizing the quality of the audio drama (saying it still doesn’t sound like real humans), dubbers are worried about the replacement of human actors and increasingly limited opportunities for newcomers. Other than this new audio drama, there have been several examples in China where AI audio generation has been used to replace human dubbers in documentaries and games. E-book platforms have also allowed users to choose different audio-generated voices to read out the text. 

One more thing

While in Beijing, Antony Blinken visited a record store and bought two vinyl records—one by Taylor Swift and another by the Chinese rock star Dou Wei. Many Chinese (and American!) people learned for the first time that Blinken had previously been in a rock band.

Three takeaways about the current state of batteries

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Batteries are on my mind this week. (Aren’t they always?) But I’ve got two extra reasons to be thinking about them today. 

First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions. Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. So let’s dig into some battery data together. 

1) Battery storage in the power sector was the fastest-growing commercial energy technology on the planet in 2023

Deployment doubled over the previous year’s figures, hitting nearly 42 gigawatts. That includes utility-scale projects as well as projects installed “behind the meter,” meaning they sit at a home or business, on the customer’s side of the utility meter. 

Over half the additions in 2023 were in China, which has been the leading market in batteries for energy storage for the past two years. Growth is faster there than the global average, and installations tripled from 2022 to last year. 

One driving force of this quick growth in China is that some provincial policies require developers of new solar and wind power projects to pair them with a certain level of energy storage, according to the IEA report.

Intermittent renewables like wind and solar have grown rapidly in China and around the world, and the technologies are beginning to help clean up the grid. But these storage requirement policies reveal the next step: installing batteries to help unlock the potential of renewables even during times when the sun isn’t shining and the wind isn’t blowing. 

2) Batteries are starting to show exactly how they’ll play a crucial role on the grid.

When there are small amounts of renewables, it’s not all that important to have storage available, since the sun’s rising and setting will cause little more than blips in the overall energy mix. But as the share increases, some of the challenges with intermittent renewables become very clear. 

We’ve started to see this play out in California. Renewables are able to supply nearly all the grid’s energy demand during the day on sunny days. The problem is just how different the picture is at noon and just eight hours later, once the sun has gone down. 

In the middle of the day, there’s so much solar power available that gigawatts are basically getting thrown away. Electricity prices can actually go negative. Then, later on, renewables quickly fall off, and other sources like natural gas need to ramp up to meet demand. 

But energy storage is starting to catch up and make a dent in smoothing out that daily variation. On April 16, for the first time, batteries were the single greatest power source on the grid in California during part of the early evening, just as solar fell off for the day. (Look for the bump in the darkest line on the graph above—it happens right after 6 p.m.)

Batteries have reached this number-one status several more times over the past few weeks, a sign that the energy storage now installed—10 gigawatts’ worth—is beginning to play a part in a balanced grid. 

3) We need to build a lot more energy storage. Good news: batteries are getting cheaper.

While early signs show just how important batteries can be in our energy system, we still need gobs more to actually clean up the grid. If we’re going to be on track to cut greenhouse-gas emissions to zero by midcentury, we’ll need to increase battery deployment sevenfold. 

The good news is the technology is becoming increasingly economical. Battery costs have fallen drastically, dropping 90% since 2010, and they’re not done yet. According to the IEA report, battery costs could fall an additional 40% by the end of this decade. Those further cost declines would make solar projects with battery storage cheaper to build than new coal power plants in India and China, and cheaper than new gas plants in the US. 
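Those two percentages compound rather than add. A quick back-of-envelope check, indexing the 2010 cost to 100:

```python
# Compounding the cost declines cited in the IEA report
cost_2010 = 100.0                     # index the 2010 cost to 100
cost_2023 = cost_2010 * (1 - 0.90)    # "dropping 90% since 2010" -> 10
cost_2030 = cost_2023 * (1 - 0.40)    # "an additional 40%" off the 2023 level -> 6
total_decline = 1 - cost_2030 / cost_2010

print(cost_2030, total_decline)  # 6.0 on the index, a 94% drop from 2010
```

In other words, a further 40% cut on top of the existing 90% drop leaves batteries at roughly 6% of their 2010 cost, not 100 − 90 − 40.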

Batteries won’t be a miracle technology that single-handedly cleans up the entire grid. Other sources of low-carbon energy that are more consistently available, like geothermal, or able to ramp up and down to meet demand, like hydropower, will be crucial parts of the energy system. But I’ll be watching to see just how much batteries contribute to the mix. 


Now read the rest of The Spark

Related reading

Some companies are looking beyond lithium for stationary energy storage. Dig into the prospects for sodium-based batteries in this story from last year.

Lithium-sulfur technology could unlock cheaper, better batteries for electric vehicles that can go farther on a single charge. I covered one company trying to make them a reality earlier this year.


Another thing

Thermal batteries are so hot right now. In fact, readers chose the technology as our 11th Breakthrough Technology of 2024.

To celebrate, we’re hosting an online event in a couple of weeks for subscribers. We’ll dig into why thermal batteries are so interesting and why this is a breakthrough moment for the technology. It’s going to be a lot of fun, so subscribe if you haven’t already and then register here to join us on May 16 at noon Eastern time.

You’ll be able to submit a question when you register—please do that so I know what you want to hear about! See you there! 

Keeping up with climate  

New rules that force US power plants to slash emissions could effectively spell the end of coal power in the country. Here are five things to know about the regulations. (New York Times)

Wind farms use less land than you might expect. Turbines really take up only a small fraction of the land where they’re sited, and co-locating projects with farms or other developments can help reduce environmental impact. (Washington Post)

The fourth reactor at Plant Vogtle in Georgia officially entered commercial operation this week. The new reactor will provide electricity for up to 500,000 homes and businesses. (Axios)

A new factory will be the first full-scale plant to produce sodium-ion batteries in the US. The chemistry could provide a cheaper alternative to the standard lithium-ion chemistry and avoid material constraints. (Bloomberg)

→ I wrote about the potential for sodium-based batteries last year. (MIT Technology Review)

Tesla has apparently laid off a huge portion of its charging team. The move comes as the company’s charging port has been adopted by most major automakers. (The Verge)

A vegan cheese was up for a major food award. Then, things got messy. (Washington Post)

→ For a look at how Climax Foods makes its plant-based cheese with AI, check out this story from our latest magazine issue. (MIT Technology Review)

Someday mining might be done with … seaweed? Early research is looking into using seaweed to capture and concentrate high-value metals. (Hakai)

The planet’s oceans contain enormous amounts of energy. Harnessing it is an early-stage industry, but some proponents argue there’s a role for wave and tidal power technologies. (Undark)

Cancer vaccines are having a renaissance

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise?

This week in The Checkup, let’s talk cancer vaccines. (And, you guessed it, mRNA.)

Long before companies leveraged mRNA to fight covid, they were developing mRNA vaccines to combat cancer. BioNTech delivered its first mRNA vaccines to people with treatment-resistant melanoma nearly a decade ago. But when the pandemic hit, development of mRNA vaccines jumped into warp drive. Now dozens of trials are underway to test whether these shots can transform cancer treatment the way they did covid. 

Recent news has some experts cautiously optimistic. In December, Merck and Moderna announced results from an earlier trial that included 150 people with melanoma who had undergone surgery to have their cancer removed. Doctors administered nine doses of the vaccine over about six months, as well as what’s known as an immune checkpoint inhibitor. After three years of follow-up, the combination had cut the risk of recurrence or death by almost half compared with the checkpoint inhibitor alone.

The new results reported by BioNTech and Genentech, from a small trial of 16 patients with pancreatic cancer, are equally exciting. After surgery to remove the cancer, the participants received immunotherapy, followed by the cancer vaccine and a standard chemotherapy regimen. Half of them responded to the vaccine, and three years after treatment, six of those people still had not had a recurrence of their cancer. The other two had relapsed. Of the eight participants who did not respond to the vaccine, seven had relapsed. Some of these patients might not have responded because they lacked a spleen, which plays an important role in the immune system. The organ was removed as part of their cancer treatment. 

The hope is that the strategy will work in many different kinds of cancer. In addition to pancreatic cancer, BioNTech’s personalized vaccine is being tested in colorectal cancer, melanoma, and metastatic cancers.

The purpose of a cancer vaccine is to train the immune system to better recognize malignant cells, so it can destroy them. The immune system has the capacity to clear cancer cells if it can find them. But tumors are slippery. They can hide in plain sight and employ all sorts of tricks to evade our immune defenses. And cancer cells often look like the body’s own cells because, well, they are the body’s own cells.

There are differences between cancer cells and healthy cells, however. Cancer cells acquire mutations that help them grow and survive, and some of those mutations give rise to proteins that stud the surface of the cell—so-called neoantigens.

Personalized cancer vaccines like the ones Moderna and BioNTech are developing are tailored to each patient’s particular cancer. The researchers collect a piece of the patient’s tumor and a sample of healthy cells. They sequence these two samples and compare them in order to identify mutations that are specific to the tumor. Those mutations are then fed into an AI algorithm that selects those most likely to elicit an immune response. Together these neoantigens form a kind of police sketch of the tumor, a rough picture that helps the immune system recognize cancerous cells. 
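The tumor-versus-healthy comparison step can be pictured with a toy example. This sketch is purely illustrative: real pipelines compare whole sequenced exomes and use trained AI models to predict immunogenicity, whereas the scoring dictionary below is a hypothetical stand-in for the model’s output.

```python
# Toy illustration of the comparison step: find positions where the tumor
# sequence differs from the matched healthy sequence, then rank the candidate
# mutations by a stand-in "immunogenicity" score.

def find_mutations(healthy: str, tumor: str) -> list:
    """Return (position, healthy_base, tumor_base) for each mismatch."""
    return [(i, h, t) for i, (h, t) in enumerate(zip(healthy, tumor)) if h != t]

def rank_neoantigens(mutations, scores, top_n):
    """Keep the top_n mutations by a (hypothetical) immunogenicity score."""
    ranked = sorted(mutations, key=lambda m: scores.get(m[0], 0.0), reverse=True)
    return ranked[:top_n]

healthy = "ATGCGTACCTGA"
tumor   = "ATGCGAACCTGC"   # differs at positions 5 and 11

mutations = find_mutations(healthy, tumor)
# Placeholder scores standing in for the AI model's predictions:
scores = {5: 0.9, 11: 0.4}
selected = rank_neoantigens(mutations, scores, top_n=1)
print(mutations)   # [(5, 'T', 'A'), (11, 'A', 'C')]
print(selected)    # [(5, 'T', 'A')]
```

The selected mutations are the "police sketch" handed to the vaccine: in the real pipelines, a few dozen of them are encoded on a single mRNA strand.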

“A lot of immunotherapies stimulate the immune response in a nonspecific way—that is, not directly against the cancer,” said Patrick Ott, director of the Center for Personal Cancer Vaccines at the Dana-Farber Cancer Institute, in a 2022 interview.  “Personalized cancer vaccines can direct the immune response to exactly where it needs to be.”

How many neoantigens do you need to create that sketch? “We don’t really know what the magical number is,” says Michelle Brown, vice president of individualized neoantigen therapy at Moderna. Moderna’s vaccine has 34. “It comes down to what we could fit on the mRNA strand, and it gives us multiple shots to ensure that the immune system is stimulated in the right way,” she says. BioNTech is using 20.

The neoantigens are put on an mRNA strand and injected into the patient. From there, they are taken up by cells and translated into proteins, and those proteins are expressed on the cell’s surface, raising an immune response.

mRNA isn’t the only way to teach the immune system to recognize neoantigens. Researchers are also delivering neoantigens as DNA, as peptides, or via immune cells or viral vectors. And many companies are working on “off the shelf” cancer vaccines that aren’t personalized, which would save time and expense. Out of about 400 ongoing clinical trials assessing cancer vaccines last fall, roughly 50 included personalized vaccines.

There’s no guarantee any of these strategies will pan out. Even if they do, success in one type of cancer doesn’t automatically mean success against all. Plenty of cancer therapies have shown enormous promise initially, only to fail when they’re moved into large clinical trials.

But the burst of renewed interest and activity around cancer vaccines is encouraging. And personalized vaccines might have a shot at succeeding where others have failed. The strategy makes sense for “a lot of different tumor types and a lot of different settings,” Brown says. “With this technology, we really have a lot of aspirations.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

mRNA vaccines transformed the pandemic. But they can do so much more. In this feature from 2023, Jessica Hamzelou covered the myriad other uses of these shots, including fighting cancer. 

This article from 2020 covers some of the background on BioNTech’s efforts to develop personalized cancer vaccines. Adam Piore had the story.

Years before the pandemic, Emily Mullin wrote about early efforts to develop personalized cancer vaccines—the promise and the pitfalls. 

From around the web

Yes, there’s bird flu in the nation’s milk supply. About one in five samples had evidence of the H5N1 virus. But new testing by the FDA suggests that the virus is unable to replicate. Pasteurization works! (NYT)

Studies in which volunteers are deliberately infected with covid—so-called challenge trials—have been floated as a way to test drugs and vaccines, and even to learn more about the virus. But it turns out it’s tougher to infect people than you might think. (Nature)

When should women get their first mammogram to screen for breast cancer? It’s a matter of hot debate. In 2009, an expert panel raised the age from 40 to 50. This week they lowered it to 40 again in response to rising cancer rates among younger women. Women with an average risk of breast cancer should get screened every two years, the panel says. (NYT)

Wastewater surveillance helped us track covid. Why not H5N1? A team of researchers from New York argues it might be our best tool for monitoring the spread of this virus. (Stat)

Long read: This story looks at how AI could help us better understand how babies learn language, and focuses on the lab I covered in this story about an AI model trained on the sights and sounds experienced by a single baby. (NYT)