Scientists are trying to get cows pregnant with synthetic embryos

It was a cool morning at the beef teaching unit in Gainesville, Florida, and cow #307 was bucking in her metal cradle as the arm of a student perched on a stool disappeared into her cervix. The arm held a squirt bottle of water.

Seven other animals stood nearby behind a railing; it would be their turn next to get their uterus flushed out. As soon as the contents of #307’s womb spilled into a bucket, a worker rushed it to a small laboratory set up under the barn’s corrugated gables.

“It’s something!” said a postdoc named Hao Ming, dressed in blue overalls and muck boots, corralling a pink wisp of tissue under the lens of a microscope. But then he stepped back, not as sure. “It’s hard to tell.”

The experiment, at the University of Florida, is an attempt to create a large animal starting only from stem cells—no egg, no sperm, and no conception. A week earlier, “synthetic embryos,” artificial structures created in a lab, had been transferred to the uteruses of all eight cows. Now it was time to see what had grown.

About a decade ago, biologists started to observe that stem cells, left alone in a walled plastic container, will spontaneously self-assemble and try to make an embryo. These structures, sometimes called “embryo models” or embryoids, have gradually become increasingly realistic. In 2022, a lab in Israel grew the mouse version in a jar until cranial folds and a beating heart appeared.

At the Florida center, researchers are now attempting to go all the way. They want to make a live animal. If they do, it wouldn’t just be a totally new way to breed cattle. It could shake our notion of what life even is. “There has never been a birth without an egg,” says Zongliang “Carl” Jiang, the reproductive biologist heading the project. “Everyone says it is so cool, so important, but show me more data—show me it can go into a pregnancy. So that is our goal.”

For now, success isn’t certain, mostly because lab-made embryos generated from stem cells still aren’t exactly like the real thing. They’re more like an embryo seen through a fun-house mirror: the right parts, but in the wrong proportions. That’s why these are being flushed out after just a week—so the researchers can check how far they’ve grown and learn how to make better ones.

“The stem cells are so smart they know what their fate is,” says Jiang. “But they also need help.”

So far, most research on synthetic embryos has involved mouse or human cells, and it’s stayed in the lab. But last year Jiang, along with researchers in Texas, published a recipe for making a bovine version, which they called “cattle blastoids” for their resemblance to blastocysts, the stage of the embryo suitable for IVF procedures.  

Some researchers think that stem-cell animals could be as big a deal as Dolly the sheep, whose birth in 1996 brought cloning technology to barnyards. Cloning, in which an adult cell is placed in an egg, has allowed scientists to copy mice, cattle, pet dogs, and even polo ponies. The players on one Argentine team all ride clones of the same champion mare, named Dolfina.

Synthetic embryos are clones, too—of the starting cells you grow them from. But they’re made without the need for eggs and can be created in far larger numbers—in theory, by the tens of thousands. And that’s what could revolutionize cattle breeding. Imagine that each year’s calves were all copies of the most muscled steer in the world, perfectly designed to turn grass into steak.

“I would love to see this become cloning 2.0,” says Carlos Pinzón-Arteaga, the veterinarian who spearheaded the laboratory work in Texas. “It’s like Star Wars with cows.”

Endangered species

Industry has started to circle around. A company called Genus PLC, which specializes in assisted reproduction of “genetically superior” pigs and cattle, has begun buying patents on synthetic embryos. This year it started funding Jiang’s lab to support his effort, locking up a commercial option to any discoveries he might make.

Zoos are interested too. With many endangered animals, assisted reproduction is difficult. And with recently extinct ones, it’s impossible. All that remains is some tissue in a freezer. But this technology could, theoretically, blow life back into these specimens—turning them into embryos, which could be brought to term in a surrogate of a sister species.

But there’s an even bigger—and stranger—reason to pay attention to Jiang’s effort to make a calf: several labs are creating super-realistic synthetic human embryos as well. It’s an ethically charged arena, particularly given recent changes in US abortion laws. Although these human embryoids are considered nonviable—mere “models” that are fair game for research—all that could change quickly if the Florida project succeeds.

“If it can work in an animal, it can work in a human,” says Pinzón-Arteaga, who is now working at Harvard Medical School. “And that’s the Black Mirror episode.”

Industrial embryos

Three weeks before cow #307 stood in the dock, she and seven other heifers had been given stimulating hormones, to trick their bodies into thinking they were pregnant. After that, Jiang’s students had loaded blastoids into a straw they used like a popgun to shoot them towards each animal’s oviducts.

Many researchers think that if a stem-cell animal is born, the first one is likely to be a mouse. Mice are cheap to work with and reproduce fast. And one team has already grown a synthetic mouse embryo for eight days in an artificial womb—a big step, since a mouse pregnancy lasts only three weeks.

But bovines may not be far behind. There’s a large assisted-reproduction industry in cattle, with more than a million IVF attempts a year, half of them in North America. Many other beef and dairy cattle are artificially inseminated with semen from top-rated bulls. “Cattle is harder,” says Jiang. “But we have all the technology.”

Inspecting a “synthetic” embryo that gestated in a cow for a week at the University of Florida, Gainesville.
ANTONIO REGALADO

The thing that came out of cow #307 turned out to be damaged, just a fragment. But later that day, in Jiang’s main laboratory, students were speed-walking across the linoleum holding something in a petri dish. They’d retrieved intact embryonic structures from some of the other cows. These looked long and stringy, like worms, or the skin shed by a miniature snake.

That’s precisely what a two-week-old cattle embryo should look like. But the outer appearance is deceiving, Jiang says. After staining chemicals are added, the specimens are put under a microscope. Then the disorder inside them is apparent. These “elongated structures,” as Jiang calls them, have the right parts—cells of the embryonic disc and placenta—but nothing is in quite the right place.

“I wouldn’t call them embryos yet, because we still can’t say if they are healthy or not,” he says. “Those lineages are there, but they are disorganized.”

Cloning 2.0

Jiang demonstrated how the blastoids are grown in a plastic plate in his lab. First, his students deposit stem cells into narrow tubes. In confinement, the cells begin communicating and very quickly start trying to form a blastoid. “We can generate hundreds of thousands of blastoids. So it’s an industrial process,” he says. “It’s really simple.”

That scalability is what could make blastoids a powerful replacement for cloning technology. Cattle cloning is still a tricky process, which only skilled technicians can manage, and it requires eggs, too, which come from slaughterhouses. But unlike blastoids, cloning is well established and actually works, says Cody Kime, R&D director at Trans Ova Genetics, in Sioux Center, Iowa. Each year, his company clones thousands of pigs as well as hundreds of prize-winning cattle.

“A lot of people would like to see a way to amplify the very best animals as easily as you can,” Kime says. “But blastoids aren’t functional yet. The gene expression is aberrant to the point of total failure. The embryos look blurry, like someone sculpted them out of oatmeal or Play-Doh. It’s not the beautiful thing that you expect. The finer details are missing.”

This spring, Jiang learned that the US Department of Agriculture shared that skepticism, when it rejected his application for $650,000 in funding. “I got criticism: ‘Oh, this is not going to work.’ That this is high risk and low efficiency,” he says. “But to me, this would change the entire breeding program.”

One problem may be the starting cells. Jiang uses bovine embryonic stem cells—taken from cattle embryos. But these stem cells aren’t quite as versatile as they need to be. For instance, to make the first cattle blastoids, the team in Texas had to add a second type of cell, one that can make a placenta.

What’s needed instead are specially prepared “naïve” cells that are better poised to form the entire conceptus—both the embryo and placenta. Jiang showed me a PowerPoint with a large grid of different growth factors and lab conditions he is testing. Growing stem cells in different chemicals can shift the pattern of genes that are turned on. The latest batch of blastoids, he says, was made using a newer recipe and needed to start with only one type of cell.

Slaughterhouse

Jiang can’t say how long it will be before he makes a calf. His immediate goal is a pregnancy that lasts 30 days. If a synthetic embryo can grow that long, he thinks, it could go all the way, since “most pregnancy loss in cattle is in the first month.”

For a project to reinvent reproduction, Jiang’s budget isn’t particularly large, and he frets about the $2-a-day bill to feed each of his cows. During a tour of the university’s animal science department, he opened the door to a slaughter room, a vaulted space with tracks and chains overhead, where a man in a slicker was running a hose. It smelled like freshly cleaned blood.

Reproductive biologist Carl Jiang leads an effort to make animals from stem cells. The cow stands in a “hydraulic squeeze chute” while its uterus is checked.
ANTONIO REGALADO

This is where cow #307 ended up. After about 20 embryo transfers over three years, her cervix was worn out, and she came here. She was butchered, her meat wrapped and labeled, and sold to the public at market prices from a small shop at the front of the building. It’s important to everyone at the university that the research subjects aren’t wasted. “They are food,” says Jiang.

But there’s still a limit to how many cows he can use. He had 18 fresh heifers ready to join the experiment, but what if only 1% of embryos ever develop correctly? That would mean he’d need 100 surrogate mothers to see anything. It reminds Jiang of the first attempts at cloning: Dolly the sheep was one of 277 tries, and the others went nowhere. “How soon it happens may depend on industry. They have a lot of animals. It might take 30 years without them,” he says.

“It’s going to be hard,” agrees Peter Hansen, a distinguished professor in Jiang’s department. “But whoever does it first …” He lets the thought hang. “In vitro breeding is the next big thing.”

Human question

Cattle aren’t the only species in which researchers are checking the potential of synthetic embryos to keep developing into fetuses. Researchers in China have transplanted synthetic embryos into the wombs of monkeys several times. A report in 2023 found that the transplants caused hormonal signals of pregnancy, although no monkey fetus emerged.

Because monkeys are primates, like us, such experiments raise an obvious question. Will a lab somewhere try to transfer a synthetic embryo to a person? In many countries that would be illegal, and scientific groups say such an experiment should be strictly forbidden.

This summer, research leaders were alarmed by a media frenzy around reports of super-realistic models of human embryos that had been created in labs in the UK and Israel—some of which seemed to be nearly perfect mimics. To quell speculation, in June the International Society for Stem Cell Research, a powerful science and lobbying group, put out a statement declaring that the models “are not embryos” and “cannot and will not develop to the equivalent of postnatal stage humans.”

Some researchers worry that was a reckless thing to say. That’s because the statement would be disproved, biologically, as soon as any kind of stem-cell animal is born. And many top scientists expect that to happen. “I do think there is a pathway. Especially in mice, I think we will get there,” says Jun Wu, who leads the research group at UT Southwestern Medical Center, in Dallas, that collaborated with Jiang. “The question is, if that happens, how will we handle a similar technology in humans?”

Jiang says he doesn’t think anyone is going to make a person from stem cells. And he’s certainly not interested in doing so. He’s just a cattle researcher at an animal science department. “Scientists belong to society, and we need to follow ethical guidelines. So we can’t do it. It’s not allowed,” he says. “But in large animals, we are allowed. We’re encouraged. And so we can make it happen.”

Inside the quest to map the universe with mysterious bursts of radio energy

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all. 

Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now. 

This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about. 

“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst. 

Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult, because most of that work must happen in the few milliseconds before the burst passes.

But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time. 

Where’s the matter?

In 1998, astronomers counted up the weight of all of the identified matter in the universe and got a puzzling result. 

We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons— the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But the astronomers managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.” 

All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand. 

Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it. 

But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.

That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.

Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its spectral lines were very smeared and distorted, unusual characteristics for a radio pulse that was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe. 

Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.

“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars— spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.

This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY

There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth. 

Astronomers couldn’t even point to the direction that the first 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by an enormous single-dish radio telescope at the Parkes Observatory in New South Wales (the dish has since been given the name Murriyang), which is great at picking up incoming radio waves but can pinpoint FRBs only to an area of the sky as large as Earth’s full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.

Then, in 2015, one particular radio wave flashed—and then flashed again. Over the course of two months of observation from the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating FRB ever observed (a mystery in its own right), and it gave researchers a chance to home in on where the radio waves had begun.

In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says. 

That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometer Array, an international collaboration to build the largest radio telescope arrays in the world. 

The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs. 

Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons. 

And over the course of the next five years, that’s exactly what they did—with astonishing success.

Building a pipeline

To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance to detect a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view.

ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.  
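As a rough rule of thumb (the numbers here are approximate and purely illustrative), an interferometer’s resolving power is set by the ratio of the observing wavelength to the longest separation between its dishes:

    \theta \approx \frac{\lambda}{B_{\max}} \approx \frac{0.21\ \text{m}}{6{,}000\ \text{m}} \approx 3.5\times 10^{-5}\ \text{rad} \approx 7\ \text{arcseconds}

At ASKAP’s roughly 20-centimeter observing wavelength and kilometers-long baselines, that is tight enough, with careful calibration, to tie a millisecond flash to an individual galaxy.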

The dishes weren’t formally in use yet, but Bannister had an idea. He took them and jerry-rigged a “fly’s eye” telescope, pointing the dishes at different parts of the sky to maximize its ability to spot something that might flash anywhere. 

“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.” 

When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin. 
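The specifics of Bannister’s software aren’t spelled out here, but the general pattern he describes (keep a rolling buffer of raw telescope data, trigger on a burst, then freeze and save the buffer before it is overwritten) can be sketched in a few lines of Python. Everything below, including the buffer length, the threshold, and the function names, is illustrative rather than ASKAP’s actual code:

    import collections

    BUFFER_SAMPLES = 3_000          # illustrative: a few seconds of millisecond-scale samples
    ring = collections.deque(maxlen=BUFFER_SAMPLES)

    def looks_like_frb(sample, threshold=10.0):
        # Stand-in for the real detection step: a dedispersed signal-to-noise test
        # that has to run within a hundredth of a second of the data arriving.
        return sample["snr"] > threshold

    def watch(stream, save):
        for sample in stream:        # raw data arriving in real time
            ring.append(sample)      # the newest sample silently overwrites the oldest
            if looks_like_frb(sample):
                # "Reach upstream": snapshot the buffer before it is overwritten, so the
                # burst and the data around it can be replayed and combined across all
                # 36 dishes offline to reconstruct where on the sky it came from.
                save(list(ring))

That is the whole trick: the detection has to be fast and cheap, while the expensive work of reconstructing a position happens later, on the saved replay.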

The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB. 

These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO

Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through. 

In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons. 

The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.
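In simplified form (the exact prefactors, and the scatter around the average, are what the team actually has to measure), the dispersion measure is the column density of free electrons along the line of sight; it shows up as a frequency-dependent arrival delay, and its sky-averaged value grows with redshift in a way set by the density of ordinary matter:

    \mathrm{DM} = \int n_e \, dl, \qquad
    \Delta t \approx 4.15\ \mathrm{ms}\,\left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)\left(\frac{\nu}{\mathrm{GHz}}\right)^{-2}, \qquad
    \langle \mathrm{DM}_{\mathrm{cosmic}}(z) \rangle \propto \Omega_b \int_0^z \frac{(1+z')}{E(z')}\, dz'

Measure the delay across frequencies and you get the dispersion measure; measure the redshift of the host galaxy and you get the distance; fit many such pairs and the proportionality tells you how many baryons the light passed through on the way.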

As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology. 

Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one. 

“I remember thinking that I know something about the universe that no one else knows.”

The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data email from the European Southern Observatory and began to scroll through the spectrum data. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, symbolized by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.” 

With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.

“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.” 

Mapping the cosmic web

The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.

“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.” 

Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of the invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.”

That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-square-degree one. Masui likened it to a mirror that can be focused on thousands of different places simultaneously.

Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.

The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME

Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade. 

And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. If you can imagine stuffing a million people simultaneously watching uncompressed YouTube videos into a box the size of a fridge, that’s probably the easiest way to visualize the data handling capabilities of this new processor, called a field-programmable gate array, which Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.

With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.” 

Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely. 

“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”

Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures. 

Anna Kramer is a science and climate journalist based in Washington, D.C.

The depressing truth about TikTok’s impending ban

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Allow me to indulge in a little reflection this week. Last week, the divest-or-ban TikTok bill was passed in Congress and signed into law. Four years ago, when I was just starting to report on the world of Chinese technologies, one of my first stories was about very similar news: President Donald Trump announcing he’d ban TikTok. 

That 2020 executive order came to nothing in the end—it was blocked in the courts, put aside after the presidency changed hands, and eventually withdrawn by the Biden administration. Yet the idea—that the US government should ban TikTok in some way—never went away. It would repeatedly be suggested in different forms and shapes. And eventually, on April 24, 2024, things came full circle.

A lot has changed in the four years between these two news cycles. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms, the originator of a generation-defining content medium, and a music-industry juggernaut. 

What has also changed is my outlook on the issue. For a long time, I thought TikTok would find a way out of the political tensions, but I’m increasingly pessimistic about its future. And I have even less hope for other Chinese tech companies trying to go global. If the TikTok saga tells us anything, it’s that their Chinese roots will be scrutinized forever, no matter what they do.

I don’t believe TikTok has become a larger security threat now than it was in 2020. There have always been issues with the app, like potential operational influence by the Chinese government, the black-box algorithms that produce unpredictable results, and the fact that parent company ByteDance never managed to separate the US side and the China side cleanly, despite efforts (one called Project Texas) to store and process American data locally. 

But none of those problems got worse over the last four years. And interestingly, while discussions in 2020 still revolved around potential remedies like setting up data centers in the US to store American data or having an organization like Oracle audit operations, those kinds of fixes are not in the law passed this year. As long as it still has Chinese owners, the app is not permissible in the US. The only thing it can do to survive here is transfer ownership to a US entity. 

That’s the cold, hard truth not only for TikTok but for other Chinese companies too. In today’s political climate, any association with China and the Chinese government is seen as unacceptable. It’s a far cry from the 2010s, when Chinese companies could dream about developing a killer app and finding audiences and investors around the globe—something many did pull off. 

There’s something I wrote four years ago that still rings true today: TikTok is the bellwether for Chinese companies trying to go global. 

The majority of Chinese tech giants, like Alibaba, Tencent, and Baidu, operate primarily within China’s borders. TikTok was the first to gain mass popularity in lots of other countries across the world and become part of daily life for people outside China. To many Chinese startups, it showed that the hard work of trying to learn about foreign countries and users can eventually pay off, and it’s worth the time and investment to try.

On the other hand, if even TikTok can’t get itself out of trouble, with all the resources that ByteDance has, is there any hope for the smaller players?

When TikTok found itself in trouble, the initial reaction of these other Chinese companies was to conceal their roots, hoping they could avoid attention. During my reporting, I’ve encountered multiple companies that fret about being described as Chinese. “We are headquartered in Boston,” one would say, while everyone in China openly talked about its product as the overseas version of a Chinese app.

But with all the political back-and-forth about TikTok, I think these companies are also realizing that concealing their Chinese associations doesn’t work—and it may make them look even worse if it leaves users and regulators feeling deceived.

With the new divest-or-ban bill, I think these companies are getting a clear signal that it’s not the technical details that matter—only their national origin. The same worry is spreading to many other industries, as I wrote in this newsletter last week. Even in the climate and renewable power industries, the presence of Chinese companies is becoming increasingly politicized. They, too, are finding themselves scrutinized more for their Chinese roots than for the actual products they offer.

Obviously, none of this is good news to me. When they feel unwelcome in the US market, Chinese companies don’t feel the need to talk to international media anymore. Without these vital conversations, it’s even harder for people in other countries to figure out what’s going on with tech in China.

Instead of banning TikTok because it’s Chinese, maybe we should go back to focus on what TikTok did wrong: why certain sensitive political topics seem deprioritized on the platform; why Project Texas has stalled; how to make the algorithmic workings of the platform more transparent. These issues, instead of whether TikTok is still controlled by China, are the things that actually matter. It’s a harder path to take than just banning the app entirely, but I think it’s the right one.

Do you believe the TikTok ban will go through? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Facing the possibility of a total ban on TikTok, influencers and creators are making contingency plans. (Wired $)

2. TSMC has brought hundreds of Taiwanese employees to Arizona to build its new chip factory. But the company is struggling to bridge cultural and professional differences between American and Taiwanese workers. (Rest of World)

3. The US secretary of state, Antony Blinken, met with Chinese president Xi Jinping during a visit to China this week. (New York Times $)

  • Here’s the best way to describe these recent US-China diplomatic meetings: “The US and China talk past each other on most issues, but at least they’re still talking.” (Associated Press)

4. Half of Russian companies’ payments to China are made through middlemen in Hong Kong, Central Asia, or the Middle East to evade sanctions. (Reuters $)

5. A massive auto show is taking place in Beijing this week, with domestic electric vehicles unsurprisingly taking center stage. (Associated Press)

  • Meanwhile, Elon Musk squeezed in a quick trip to China and met with his “old friend” the Chinese premier Li Qiang, who was believed to have facilitated establishing the Gigafactory in Shanghai. (BBC)
  • Tesla may finally get a license to deploy its autopilot system, which it calls Full Self-Driving, in China after agreeing to collaborate with Baidu. (Reuters $)

6. Beijing has hosted two rival Palestinian political groups, Hamas and Fatah, to talk about potential reconciliation. (Al Jazeera)

Lost in translation

The Chinese dubbing community is grappling with the impacts of new audio-generating AI tools. According to the Chinese publication ACGx, for a new audio drama, a music company licensed the voice of the famous dubbing actor Zhao Qianjing and used AI to transform it into multiple characters and voice the entire script. 

But online, this wasn’t really celebrated as an advancement for the industry. Beyond criticizing the quality of the audio drama (saying it still doesn’t sound like real humans), dubbers are worried about the replacement of human actors and increasingly limited opportunities for newcomers. Other than this new audio drama, there have been several examples in China where AI audio generation has been used to replace human dubbers in documentaries and games. E-book platforms have also allowed users to choose different audio-generated voices to read out the text. 

One more thing

While in Beijing, Antony Blinken visited a record store and bought two vinyl records—one by Taylor Swift and another by the Chinese rock star Dou Wei. Many Chinese (and American!) people learned for the first time that Blinken had previously been in a rock band.

Three takeaways about the current state of batteries

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Batteries are on my mind this week. (Aren’t they always?) But I’ve got two extra reasons to be thinking about them today. 

First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions. Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. So let’s dig into some battery data together. 

1) Battery storage in the power sector was the fastest-growing commercial energy technology on the planet in 2023

Deployment doubled over the previous year’s figures, hitting nearly 42 gigawatts. That includes utility-scale projects as well as projects installed “behind the meter,” meaning they sit at a home or business, on the customer’s side of the utility meter.

Over half the additions in 2023 were in China, which has been the leading market in batteries for energy storage for the past two years. Growth is faster there than the global average, and installations tripled from 2022 to last year. 

One driving force of this quick growth in China is that some provincial policies require developers of new solar and wind power projects to pair them with a certain level of energy storage, according to the IEA report.

Intermittent renewables like wind and solar have grown rapidly in China and around the world, and the technologies are beginning to help clean up the grid. But these storage requirement policies reveal the next step: installing batteries to help unlock the potential of renewables even during times when the sun isn’t shining and the wind isn’t blowing. 

2) Batteries are starting to show exactly how they’ll play a crucial role on the grid.

When there are small amounts of renewables, it’s not all that important to have storage available, since the sun’s rising and setting will cause little more than blips in the overall energy mix. But as the share increases, some of the challenges with intermittent renewables become very clear. 

We’ve started to see this play out in California. Renewables are able to supply nearly all the grid’s energy demand during the day on sunny days. The problem is just how different the picture is at noon and just eight hours later, once the sun has gone down. 

In the middle of the day, there’s so much solar power available that gigawatts are basically getting thrown away. Electricity prices can actually go negative. Then, later on, renewables quickly fall off, and other sources like natural gas need to ramp up to meet demand. 

But energy storage is starting to catch up and make a dent in smoothing out that daily variation. On April 16, for the first time, batteries were the single greatest power source on the grid in California during part of the early evening, just as solar fell off for the day. (Look for the bump in the darkest line on the graph above—it happens right after 6 p.m.)

Batteries have reached this number-one status several more times over the past few weeks, a sign that the energy storage now installed—10 gigawatts’ worth—is beginning to play a part in a balanced grid. 

3) We need to build a lot more energy storage. Good news: batteries are getting cheaper.

While early signs show just how important batteries can be in our energy system, we still need gobs more to actually clean up the grid. If we’re going to be on track to cut greenhouse-gas emissions to zero by midcentury, we’ll need to increase battery deployment sevenfold. 

The good news is the technology is becoming increasingly economical. Battery costs have fallen drastically, dropping 90% since 2010, and they’re not done yet. According to the IEA report, battery costs could fall an additional 40% by the end of this decade. Those further cost declines would make solar projects with battery storage cheaper to build than new coal power plants in India and China, and cheaper than new gas plants in the US. 
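Stacking those two figures together (a rough calculation, and one that assumes the further 40% decline is measured against today’s prices rather than 2010’s):

    c_{2030} \;\approx\; c_{2010}\times(1-0.90)\times(1-0.40) \;=\; 0.06\,c_{2010}

In other words, batteries would end the decade at roughly 6% of their 2010 cost, a total decline of about 94%.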

Batteries won’t be the magic miracle technology that cleans up the entire grid. Other sources of low-carbon energy that are more consistently available, like geothermal, or able to ramp up and down to meet demand, like hydropower, will be crucial parts of the energy system. But I’m interested to keep watching just how batteries contribute to the mix. 


Now read the rest of The Spark

Related reading

Some companies are looking beyond lithium for stationary energy storage. Dig into the prospects for sodium-based batteries in this story from last year.

Lithium-sulfur technology could unlock cheaper, better batteries for electric vehicles that can go farther on a single charge. I covered one company trying to make them a reality earlier this year.

Two engineers in lab coats monitor the thermal battery powering a conveyor belt of bottles

SIMON LANDREIN

Another thing

Thermal batteries are so hot right now. In fact, readers chose the technology as our 11th Breakthrough Technology of 2024.

To celebrate, we’re hosting an online event in a couple of weeks for subscribers. We’ll dig into why thermal batteries are so interesting and why this is a breakthrough moment for the technology. It’s going to be a lot of fun, so subscribe if you haven’t already and then register here to join us on May 16 at noon Eastern time.

You’ll be able to submit a question when you register—please do that so I know what you want to hear about! See you there! 

Keeping up with climate  

New rules that force US power plants to slash emissions could effectively spell the end of coal power in the country. Here are five things to know about the regulations. (New York Times)

Wind farms use less land than you might expect. Turbines really take up only a small fraction of the land where they’re sited, and co-locating projects with farms or other developments can help reduce environmental impact. (Washington Post)

The fourth reactor at Plant Vogtle in Georgia officially entered commercial operation this week. The new reactor will provide electricity for up to 500,000 homes and businesses. (Axios)

A new factory will be the first full-scale plant to produce sodium-ion batteries in the US. The chemistry could provide a cheaper alternative to the standard lithium-ion chemistry and avoid material constraints. (Bloomberg)

→ I wrote about the potential for sodium-based batteries last year. (MIT Technology Review)

Tesla has apparently laid off a huge portion of its charging team. The move comes as the company’s charging port has been adopted by most major automakers. (The Verge)

A vegan cheese was up for a major food award. Then, things got messy. (Washington Post)

→ For a look at how Climax Foods makes its plant-based cheese with AI, check out this story from our latest magazine issue. (MIT Technology Review)

Someday mining might be done with … seaweed? Early research is looking into using seaweed to capture and concentrate high-value metals. (Hakai)

The planet’s oceans contain enormous amounts of energy. Harnessing it is an early-stage industry, but some proponents argue there’s a role for wave and tidal power technologies. (Undark)

Cancer vaccines are having a renaissance

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise?

This week in The Checkup, let’s talk cancer vaccines. (And, you guessed it, mRNA.)

Long before companies leveraged mRNA to fight covid, they were developing mRNA vaccines to combat cancer. BioNTech delivered its first mRNA vaccines to people with treatment-resistant melanoma nearly a decade ago. But when the pandemic hit, development of mRNA vaccines jumped into warp drive. Now dozens of trials are underway to test whether these shots can transform cancer the way they did covid. 

Recent news has some experts cautiously optimistic. In December, Merck and Moderna announced results from an earlier trial that included 150 people with melanoma who had undergone surgery to have their cancer removed. Doctors administered nine doses of the vaccine over about six months, as well as  what’s known as an immune checkpoint inhibitor. After three years of follow-up, the combination had cut the risk of recurrence or death by almost half compared with the checkpoint inhibitor alone.

The new results reported by BioNTech and Genentech, from a small trial of 16 patients with pancreatic cancer, are equally exciting. After surgery to remove the cancer, the participants received immunotherapy, followed by the cancer vaccine and a standard chemotherapy regimen. Half of them responded to the vaccine, and three years after treatment, six of those people still had not had a recurrence of their cancer. The other two had relapsed. Of the eight participants who did not respond to the vaccine, seven had relapsed. Some of these patients might not have responded  because they lacked a spleen, which plays an important role in the immune system. The organ was removed as part of their cancer treatment. 

The hope is that the strategy will work in many different kinds of cancer. In addition to pancreatic cancer, BioNTech’s personalized vaccine is being tested in colorectal cancer, melanoma, and metastatic cancers.

The purpose of a cancer vaccine is to train the immune system to better recognize malignant cells, so it can destroy them. The immune system has the capacity to clear cancer cells if it can find them. But tumors are slippery. They can hide in plain sight and employ all sorts of tricks to evade our immune defenses. And cancer cells often look like the body’s own cells because, well, they are the body’s own cells.

There are differences between cancer cells and healthy cells, however. Cancer cells acquire mutations that help them grow and survive, and some of those mutations give rise to proteins that stud the surface of the cell—so-called neoantigens.

Personalized cancer vaccines like the ones Moderna and BioNTech are developing are tailored to each patient’s particular cancer. The researchers collect a piece of the patient’s tumor and a sample of healthy cells. They sequence these two samples and compare them in order to identify mutations that are specific to the tumor. Those mutations are then fed into an AI algorithm that selects those most likely to elicit an immune response. Together these neoantigens form a kind of police sketch of the tumor, a rough picture that helps the immune system recognize cancerous cells. 
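Conceptually, the selection step is a filter-and-rank pipeline. The Python sketch below is illustrative only: the variant calling, the immunogenicity-scoring model, and the helper names stand in for the companies’ proprietary machinery, and the slot count of 34 is the number Moderna mentions below.

    def pick_neoantigens(tumor_variants, normal_variants, score_immunogenicity, n_slots=34):
        """Toy sketch of how targets for a personalized cancer vaccine might be chosen."""
        # Keep only mutations found in the tumor but not in the patient's healthy cells.
        somatic = set(tumor_variants) - set(normal_variants)

        # Rank the tumor-specific mutations by how likely the resulting neoantigen is to
        # be displayed on the cell surface and noticed by the immune system
        # (this is the step the AI model performs).
        ranked = sorted(somatic, key=score_immunogenicity, reverse=True)

        # The top candidates form the "police sketch" encoded on the mRNA strand.
        return ranked[:n_slots]

Fed a tumor sample, a matched healthy sample, and a scoring model, a routine along these lines returns the short list of mutations that ends up on the vaccine.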

“A lot of immunotherapies stimulate the immune response in a nonspecific way—that is, not directly against the cancer,” said Patrick Ott, director of the Center for Personal Cancer Vaccines at the Dana-Farber Cancer Institute, in a 2022 interview.  “Personalized cancer vaccines can direct the immune response to exactly where it needs to be.”

How many neoantigens do you need to create that sketch?  “We don’t really know what the magical number is,” says Michelle Brown, vice president of individualized neoantigen therapy at Moderna. Moderna’s vaccine has 34. “It comes down to what we could fit on the mRNA strand, and it gives us multiple shots to ensure that the immune system is stimulated in the right way,” she says. BioNTech is using 20.

The neoantigens are put on an mRNA strand and injected into the patient. From there, they are taken up by cells and translated into proteins, and those proteins are expressed on the cell’s surface, raising an immune response.

mRNA isn’t the only way to teach the immune system to recognize neoantigens. Researchers are also delivering neoantigens as DNA, as peptides, or via immune cells or viral vectors. And many companies are working on “off the shelf” cancer vaccines that aren’t personalized, which would save time and expense. Out of about 400 ongoing clinical trials assessing cancer vaccines last fall, roughly 50 included personalized vaccines.

There’s no guarantee any of these strategies will pan out. Even if they do, success in one type of cancer doesn’t automatically mean success against all. Plenty of cancer therapies have shown enormous promise initially, only to fail when they’re moved into large clinical trials.

But the burst of renewed interest and activity around cancer vaccines is encouraging. And personalized vaccines might have a shot at succeeding where others have failed. The strategy makes sense for “a lot of different tumor types and a lot of different settings,” Brown says. “With this technology, we really have a lot of aspirations.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

mRNA vaccines transformed the pandemic. But they can do so much more. In this feature from 2023, Jessica Hamzelou covered the myriad other uses of these shots, including fighting cancer. 

This article from 2020 covers some of the background on BioNTech’s efforts to develop personalized cancer vaccines. Adam Piore had the story.

Years before the pandemic, Emily Mullin wrote about early efforts to develop personalized cancer vaccines—the promise and the pitfalls. 

From around the web

Yes, there’s bird flu in the nation’s milk supply. About one in five samples had evidence of the H5N1 virus. But new testing by the FDA suggests that the virus is unable to replicate. Pasteurization works! (NYT)

Studies in which volunteers are deliberately infected with covid—so-called challenge trials—have been floated as a way to test drugs and vaccines, and even to learn more about the virus. But it turns out it’s tougher to infect people than you might think. (Nature)

When should women get their first mammogram to screen for breast cancer? It’s a matter of hot debate. In 2009, an expert panel raised the age from 40 to 50. This week they lowered it to 40 again in response to rising cancer rates among younger women. Women with an average risk of breast cancer should get screened every two years, the panel says. (NYT)

Wastewater surveillance helped us track covid. Why not H5N1? A team of researchers from New York argues it might be our best tool for monitoring the spread of this virus. (Stat)

Long read: This story looks at how AI could help us better understand how babies learn language, and focuses on the lab I covered in this story about an AI model trained on the sights and sounds experienced by a single baby. (NYT)

Sam Altman says helpful agents are poised to become AI’s killer function

A number of moments from my brief sit-down with Sam Altman brought the OpenAI CEO’s worldview into clearer focus. The first was when he pointed to my iPhone SE (the one with the home button that’s mostly hated) and said, “That’s the best iPhone.” More revealing, though, was the vision he sketched for how AI tools will become even more enmeshed in our daily lives than the smartphone.

“What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to. 

It’s a leap from OpenAI’s current offerings. Its leading applications, like DALL-E, Sora, and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next), have wowed us with their ability to generate convincing text and surreal videos and images. But they mostly remain tools we use for isolated tasks, and they have limited capacity to learn about us from our conversations with them. 

In the new paradigm, as Altman sees it, the AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. 

Altman on AI hardware’s future 

I asked Altman if we’ll need a new piece of hardware to get to this future. Though smartphones are extraordinarily capable, and their designers are already incorporating more AI-driven features, some entrepreneurs are betting that the AI of the future will require a device that’s more purpose built. Some of these devices are already beginning to appear in his orbit. There is the (widely panned) wearable AI Pin from Humane, for example (Altman is an investor in the company but has not exactly been a booster of the device). He is also rumored to be working with former Apple designer Jony Ive on some new type of hardware. 

But Altman says there’s a chance we won’t necessarily need a device at all. “I don’t think it will require a new piece of hardware,” he told me, noting that the type of app envisioned could exist in the cloud. But he quickly added that even if this AI paradigm shift won’t require consumers to buy new hardware, “I think you’ll be happy to have [a new device].”

Though Altman says he thinks AI hardware devices are exciting, he also implied he might not be best suited to take on the challenge himself: “I’m very interested in consumer hardware for new technology. I’m an amateur who loves it, but this is so far from my expertise.”

On the hunt for training data

Upon hearing his vision for powerful AI-driven agents, I wondered how it would square with the industry’s current scarcity of training data. To build GPT-4 and other models, OpenAI has scoured internet archives, newspapers, and blogs for training data, since scaling laws have long shown that making models bigger also makes them better. But finding more data to train on is a growing problem. Much of the internet has already been slurped up, and access to private or copyrighted data is now mired in legal battles. 

Altman is optimistic this won’t be a problem for much longer, though he didn’t articulate the specifics. 

“I believe, but I’m not certain, that we’re going to figure out a way out of this thing of you always just need more and more training data,” he says. “Humans are existence proof that there is some other way to [train intelligence]. And I hope we find it.”

On who will be poised to create AGI

OpenAI’s central vision has long revolved around the pursuit of artificial general intelligence (AGI), or an AI that can reason as well as or better than humans. Its stated mission is to ensure such a technology “benefits all of humanity.” It is far from the only company pursuing AGI, however. So in the race for AGI, what are the most important tools? I asked Altman if he thought the entity that marshals the largest amount of chips and computing power will ultimately be the winner. 

There will be “several different versions [of AGI] that are better and worse at different things,” Altman suspects. “You’ll have to be over some compute threshold, I would guess. But even then I wouldn’t say I’m certain.”

On when we’ll see GPT-5

You thought he’d answer that? When another reporter in the room asked Altman if he knew when the next version of GPT is slated to be released, he gave a calm response. “Yes,” he replied, smiling, and said nothing more. 

An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary

I’m stressed and running late, because what do you wear for the rest of eternity? 

This makes it sound like I’m dying, but it’s the opposite. I am, in a way, about to live forever, thanks to the AI video startup Synthesia. For the past several years, the company has produced AI-generated avatars, but today it launches a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything I’ve ever seen. While today’s release means almost anyone will now be able to make a digital double, on this early April afternoon, before the technology goes public, they’ve agreed to make one of me. 

When I finally arrive at the company’s stylish studio in East London, I am greeted by Tosin Oshinyemi, the company’s production lead. He is going to guide and direct me through the data collection process—and by “data collection,” I mean the capture of my facial features, mannerisms, and more—much like he normally does for actors and Synthesia’s customers. 

In this AI-generated footage, synthetic “Melissa” gives a performance of Hamlet’s famous soliloquy. (The magazine had no role in producing this video.)
SYNTHESIA

He introduces me to a waiting stylist and a makeup artist, and I curse myself for wasting so much time getting ready. Their job is to ensure that people have the kind of clothes that look good on camera and that they look consistent from one shot to the next. The stylist tells me my outfit is fine (phew), and the makeup artist touches up my face and tidies my baby hairs. The dressing room is decorated with hundreds of smiling Polaroids of people who have been digitally cloned before me. 

Apart from the small supercomputer whirring in the corridor, which processes the data generated at the studio, this feels more like going into a news studio than entering a deepfake factory. 

I joke that Oshinyemi has what MIT Technology Review might call a job title of the future: “deepfake creation director.” 

“We like the term ‘synthetic media’ as opposed to ‘deepfake,’” he says. 

It’s a subtle but, some would argue, notable difference in semantics. Both mean AI-generated videos or audio recordings of people doing or saying something that didn’t necessarily happen in real life. But deepfakes have a bad reputation. Since deepfakes emerged nearly a decade ago, the term has come to signal something unethical, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. Think of sexual content produced without consent, or political campaigns that spread disinformation or propaganda.

“Synthetic media is the more benign, productive version of that,” he argues. And Synthesia wants to offer the best version of that version.  

Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley. 

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words. 

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences. 

“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.” 

Tosin Oshinyemi, the company’s production lead, guides and directs actors and customers through the data collection process.
DAVID VINTINER

So while I was excited for Synthesia to make my digital double, I also wondered if the distinction between synthetic media and deepfakes is fundamentally meaningless. Even if the former centers a creator’s intent and, critically, a subject’s consent, is there really a way to make AI avatars safely if the end result is the same? And do we really want to get out of the uncanny valley if it means we can no longer grasp the truth?

But more urgently, it was time to find out what it’s like to see a post-truth version of yourself.

Almost the real thing

A month before my trip to the studio, I visited Synthesia CEO Victor Riparbelli at his office near Oxford Circus. As Riparbelli tells it, Synthesia’s origin story stems from his experiences exploring avant-garde, geeky techno music while growing up in Denmark. The internet allowed him to download software and produce his own songs without buying expensive synthesizers. 

“I’m a huge believer in giving people the ability to express themselves in the way that they can, because I think that that provides for a more meritocratic world,” he tells me. 

He saw the possibility of doing something similar with video when he came across research on using deep learning to transfer expressions from one human face to another on screen. 

“What that showcased was the first time a deep-learning network could produce video frames that looked and felt real,” he says. 

That research was conducted by Matthias Niessner, a professor at the Technical University of Munich, who cofounded Synthesia with Riparbelli in 2017, alongside University College London professor Lourdes Agapito and Steffen Tjerrild, whom Riparbelli had previously worked with on a cryptocurrency project. 

Initially the company built lip-synching and dubbing tools for the entertainment industry, but it found that the bar for this technology’s quality was very high and there wasn’t much demand for it. Synthesia changed direction in 2020 and launched its first generation of AI avatars for corporate clients. That pivot paid off. In 2023, Synthesia achieved unicorn status, meaning it was valued at over $1 billion—making it one of the relatively few European AI companies to do so. 

That first generation of avatars looked clunky, with looped movements and little variation. Subsequent iterations started looking more human, but they still struggled to say complicated words, and things were slightly out of sync. 

The challenge is that people are used to looking at other people’s faces. “We as humans know what real humans do,” says Jonathan Starck, Synthesia’s CTO. Since infancy, “you’re really tuned in to people and faces. You know what’s right, so anything that’s not quite right really jumps out a mile.” 

These earlier AI-generated videos, like deepfakes more broadly, were made using generative adversarial networks, or GANs—an older technique for generating images and videos that uses two neural networks that play off one another. It was a laborious and complicated process, and the technology was unstable. 

But in the generative AI boom of the last year or so, the company has found it can create much better avatars using generative neural networks that produce higher-quality output more consistently. The more data these models are fed, the better they learn. Synthesia uses both large language models and diffusion models to do this; the former help the avatars react to the script, and the latter generate the pixels.
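To illustrate that division of labor, here is a heavily simplified sketch, in Python, of the two-stage idea: a stand-in for the language model plans the delivery, and a stand-in for the diffusion model renders the frames. Every function name and rule here is invented; this is not Synthesia’s architecture or API.

```python
# Highly simplified sketch of the two-stage pipeline described above.
# All names and logic are invented for illustration; this is not Synthesia's API.

def plan_performance(script):
    """Stage 1 (language-model role): decide, line by line, what tone the avatar should convey."""
    cues = []
    for line in script.splitlines():
        tone = "upbeat" if "great" in line.lower() else "neutral"  # stand-in for an LLM's judgment
        cues.append({"text": line, "tone": tone})
    return cues

def render_frames(cues, avatar_id):
    """Stage 2 (diffusion-model role): turn each cue into video frames of the avatar.
    Here we just return placeholder frame descriptions."""
    return [f"[{avatar_id} | {cue['tone']}] {cue['text']}" for cue in cues]

script = "Hey, everyone, welcome back.\nIt's great to have you here."
for frame in render_frames(plan_performance(script), avatar_id="melissa"):
    print(frame)
```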

Despite the leap in quality, the company is still not pitching itself to the entertainment industry. Synthesia continues to see itself as a platform for businesses. Its bet is this: As people spend more time watching videos on YouTube and TikTok, there will be more demand for video content. Young people are already skipping traditional search and defaulting to TikTok for information presented in video form. Riparbelli argues that Synthesia’s tech could help companies convert their boring corporate comms and reports and training materials into content people will actually watch and engage with. He also suggests it could be used to make marketing materials. 

He claims Synthesia’s technology is used by 56% of the Fortune 100, with the vast majority of those companies using it for internal communication. The company lists Zoom, Xerox, Microsoft, and Reuters as clients. Services start at $22 a month.

This, the company hopes, will be a cheaper and more efficient alternative to video from a professional production company—and one that may be nearly indistinguishable from it. Riparbelli tells me its newest avatars could easily fool a person into thinking they are real. 

“I think we’re 98% there,” he says. 

For better or worse, I am about to see it for myself. 

Don’t be garbage

In AI research, there is a saying: Garbage in, garbage out. If the data that went into training an AI model is trash, that will be reflected in the outputs of the model. The more data points the AI model has captured of my facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be. 

Back in the studio, I’m trying really hard not to be garbage. 

I am standing in front of a green screen, and Oshinyemi guides me through the initial calibration process, where I have to move my head and then eyes in a circular motion. Apparently, this will allow the system to understand my natural colors and facial features. I am then asked to say the sentence “All the boys ate a fish,” which will capture all the mouth movements needed to form vowels and consonants. We also film footage of me “idling” in silence.

The more data points the AI system has on facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be.
DAVID VINTINER

He then asks me to read a script for a fictitious YouTuber in different tones, directing me on the spectrum of emotions I should convey. First I’m supposed to read it in a neutral, informative way, then in an encouraging way, an annoyed and complain-y way, and finally an excited, convincing way. 

“Hey, everyone—welcome back to Elevate Her with your host, Jess Mars. It’s great to have you here. We’re about to take on a topic that’s pretty delicate and honestly hits close to home—dealing with criticism in our spiritual journey,” I read off the teleprompter, simultaneously trying to visualize ranting about something to my partner during the complain-y version. “No matter where you look, it feels like there’s always a critical voice ready to chime in, doesn’t it?” 

Don’t be garbage, don’t be garbage, don’t be garbage. 

“That was really good. I was watching it and I was like, ‘Well, this is true. She’s definitely complaining,’” Oshinyemi says, encouragingly. Next time, maybe add some judgment, he suggests.   

We film several takes featuring different variations of the script. In some versions I’m allowed to move my hands around. In others, Oshinyemi asks me to hold a metal pin between my fingers as I do. This is to test the “edges” of the technology’s capabilities when it comes to communicating with hands, Oshinyemi says. 

Historically, making AI avatars look natural and matching mouth movements to speech has been a very difficult challenge, says David Barber, a professor of machine learning at University College London who is not involved in Synthesia’s work. That is because the problem goes far beyond mouth movements; you have to think about eyebrows, all the muscles in the face, shoulder shrugs, and the numerous different small movements that humans use to express themselves. 

The motion capture process uses reference patterns to help align footage captured from multiple angles around the subject.
DAVID VINTINER

Synthesia has worked with actors to train its models since 2020, and their doubles make up the 225 stock avatars that are available for customers to animate with their own scripts. But to train its latest generation of avatars, Synthesia needed more data; it has spent the past year working with around 1,000 professional actors in London and New York. (Synthesia says it does not sell the data it collects, although it does release some of it for academic research purposes.)

The actors previously got paid each time their avatar was used, but now the company pays them an up-front fee to train the AI model. Synthesia uses their avatars for three years, at which point actors are asked if they want to renew their contracts. If so, they come into the studio to make a new avatar. If not, the company will delete their data. Synthesia’s enterprise customers can also generate their own custom avatars by sending someone into the studio to do much of what I’m doing.

The initial calibration process allows the system to understand the subject’s natural colors and facial features.
Synthesia also collects voice samples. In the studio, I read a passage indicating that I explicitly consent to having my voice cloned.

Between takes, the makeup artist comes in and does some touch-ups to make sure I look the same in every shot. I can feel myself blushing because of the lights in the studio, but also because of the acting. After the team has collected all the shots it needs to capture my facial expressions, I go downstairs to read more text aloud for voice samples. 

This process requires me to read a passage indicating that I explicitly consent to having my voice cloned, and that it can be used on Voica’s account on the Synthesia platform to generate videos and speech. 

Consent is key

This process is very different from the way many AI avatars, deepfakes, or synthetic media—whatever you want to call them—are created. 

Most deepfakes aren’t created in a studio. Studies have shown that the vast majority of deepfakes online are nonconsensual sexual content, usually using images stolen from social media. Generative AI has made the creation of these deepfakes easy and cheap, and there have been several high-profile cases in the US and Europe of children and women being abused in this way. Experts have also raised alarms that the technology can be used to spread political disinformation, a particularly acute threat given the record number of elections happening around the world this year. 

Synthesia’s policy is to not create avatars of people without their explicit consent. But it hasn’t been immune from abuse. Last year, researchers found pro-China misinformation that was created using Synthesia’s avatars and packaged as news, which the company said violated its terms of service. 

Since then, the company has put more rigorous verification and content moderation systems in place. It applies a watermark with information on where and how the AI avatar videos were created. Where it once had four in-house content moderators, people doing this work now make up 10% of its 300-person staff. It also hired an engineer to build better AI-powered content moderation systems. These filters help Synthesia vet every single thing its customers try to generate. Anything suspicious or ambiguous, such as content about cryptocurrencies or sexual health, gets forwarded to the human content moderators. Synthesia also keeps a record of all the videos its system creates.
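As a rough illustration of how such a filtering pipeline might be structured, here is a minimal sketch in Python. The keyword lists and rules are invented for illustration; Synthesia has not published its actual moderation logic.

```python
# Minimal sketch of the moderation flow described above: automated filters screen every
# request, ambiguous topics go to human moderators, and current-affairs content is limited
# to accredited news organizations. Keyword lists and rules are invented placeholders.

SENSITIVE_TOPICS = ("cryptocurrency", "sexual health")

def route_request(script, is_accredited_news_org, covers_current_affairs):
    """Return 'approve', 'human_review', or 'reject' for a generation request."""
    if covers_current_affairs and not is_accredited_news_org:
        return "reject"  # only accredited news organizations may create current-affairs content
    if any(topic in script.lower() for topic in SENSITIVE_TOPICS):
        return "human_review"  # suspicious or ambiguous content is forwarded to moderators
    return "approve"

print(route_request("Welcome to our quarterly training video.", False, False))      # approve
print(route_request("Today we cover the EU's new sanctions on Iran.", False, True))  # reject
```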

And while anyone can join the platform, many features aren’t available until people go through an extensive vetting system similar to that used by the banking industry, which includes talking to the sales team, signing legal contracts, and submitting to security auditing, says Voica. Entry-level customers are limited to producing strictly factual content, and only enterprise customers using custom avatars can generate content that contains opinions. On top of this, only accredited news organizations are allowed to create content on current affairs.

“We can’t claim to be perfect. If people report things to us, we take quick action, [such as] banning or limiting individuals or organizations,” Voica says. But he believes these measures work as a deterrent, which means most bad actors will turn to freely available open-source tools instead. 

I put some of these limits to the test when I head to Synthesia’s office for the next step in my avatar generation process. In order to create the videos that will feature my avatar, I have to write a script. Using Voica’s account, I decide to use passages from Hamlet, as well as previous articles I have written. I also use a new feature on the Synthesia platform, which is an AI assistant that transforms any web link or document into a ready-made script. I try to get my avatar to read news about the European Union’s new sanctions against Iran. 

Voica immediately texts me: “You got me in trouble!” 

The system has flagged his account for trying to generate content that is restricted.

AI-powered content filters help Synthesia vet every single thing its customers try to generate. Only accredited news organizations are allowed to create content on current affairs.
COURTESY OF SYNTHESIA

Offering services without these restrictions would be “a great growth strategy,” Riparbelli grumbles. But “ultimately, we have very strict rules on what you can create and what you cannot create. We think the right way to roll out these technologies in society is to be a little bit over-restrictive at the beginning.” 

Still, even if these guardrails operated perfectly, the ultimate result would nevertheless be an internet where everything is fake. And my experiment makes me wonder how we could possibly prepare ourselves. 

Our information landscape already feels very murky. On the one hand, there is heightened public awareness that AI-generated content is flourishing and could be a powerful tool for misinformation. But on the other, it is still unclear whether deepfakes are used for misinformation at scale and whether they’re broadly moving the needle to change people’s beliefs and behaviors. 

If people become too skeptical about the content they see, they might stop believing in anything at all, which could enable bad actors to take advantage of this trust vacuum and lie about the authenticity of real content. Researchers have called this the “liar’s dividend.” They warn that politicians, for example, could claim that genuinely incriminating information was fake or created using AI. 

Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI, says she worries that growing awareness of this gap will make it easier to “plausibly deny and cast doubt on real material or media as evidence in many different contexts, not only in the news, [but] also in the courts, in the financial services industry, and in many of our institutions.” She tells me she’s heartened by the resources Synthesia has devoted to content moderation and consent but says that process is never flawless.

Even Riparbelli admits that in the short term, the proliferation of AI-generated content will probably cause trouble. While people have been trained not to believe everything they read, they still tend to trust images and videos, he adds. He says people now need to test AI products for themselves to see what is possible, and should not trust anything they see online unless they have verified it. 

Never mind that AI regulation is still patchy, and the tech sector’s efforts to verify content provenance are still in their early stages. Can consumers, with their varying degrees of media literacy, really fight the growing wave of harmful AI-generated content through individual action? 

Watch out, PowerPoint

The day after my final visit, Voica emails me the videos with my avatar. When the first one starts playing, I am taken aback. It’s as painful as seeing yourself on camera or hearing a recording of your voice. Then I catch myself. At first I thought the avatar was me. 

The more I watch videos of “myself,” the more I spiral. Do I really squint that much? Blink that much? And move my jaw like that? Jesus. 

It’s good. It’s really good. But it’s not perfect. “Weirdly good animation,” my partner texts me. 

“But the voice sometimes sounds exactly like you, and at other times like a generic American and with a weird tone,” he adds. “Weird AF.” 

He’s right. The voice is sometimes me, but in real life I umm and ahh more. What’s remarkable is that it picked up on an irregularity in the way I talk. My accent is a transatlantic mess, confused by years spent living in the UK, watching American TV, and attending international school. My avatar sometimes says the word “robot” in a British accent and other times in an American accent. It’s something that probably nobody else would notice. But the AI did. 

My avatar’s range of emotions is also limited. It delivers Shakespeare’s “To be or not to be” speech very matter-of-factly. I had guided it to be furious when reading a story I wrote about Taylor Swift’s nonconsensual nude deepfakes; the avatar is complain-y and judgy, for sure, but not angry. 

This isn’t the first time I’ve made myself a test subject for new AI. Not too long ago, I tried generating AI avatar images of myself, only to get a bunch of nudes. That experience was a jarring example of just how biased AI systems can be. But this experience—and this particular way of being immortalized—was definitely on a different level.

Carl Öhman, an assistant professor at Uppsala University who has studied digital remains and is the author of a new book, The Afterlife of Data, calls avatars like the ones I made “digital corpses.” 

“It looks exactly like you, but no one’s home,” he says. “It would be the equivalent of cloning you, but your clone is dead. And then you’re animating the corpse, so that it moves and talks, with electrical impulses.” 

That’s kind of how it feels. The little, nuanced ways I don’t recognize myself are enough to put me off. Then again, the avatar could quite possibly fool anyone who doesn’t know me very well. It really shines when presenting a story I wrote about how the field of robotics could be getting its own ChatGPT moment; the virtual AI assistant summarizes the long read into a decent short video, which my avatar narrates. It is not Shakespeare, but it’s better than many of the corporate presentations I’ve had to sit through. I think if I were using this to deliver an end-of-year report to my colleagues, maybe that level of authenticity would be enough. 

And that is the sell, according to Riparbelli: “What we’re doing is more like PowerPoint than it is like Hollywood.”

Once a likeness has been generated, Synthesia is able to generate video presentations quickly from a script. In this video, synthetic “Melissa” summarizes an article real Melissa wrote about Taylor Swift deepfakes.
SYNTHESIA

The newest generation of avatars certainly aren’t ready for the silver screen. They’re still stuck in portrait mode, only showing the avatar front-facing and from the waist up. But in the not-too-distant future, Riparbelli says, the company hopes to create avatars that can communicate with their hands and have conversations with one another. It is also planning for full-body avatars that can walk and move around in a space that a person has generated. (The rig to enable this technology already exists; in fact it’s where I am in the image at the top of this piece.)

But do we really want that? It feels like a bleak future where humans are consuming AI-generated content presented to them by AI-generated avatars and using AI to repackage that into more content, which will likely be scraped to generate more AI. If nothing else, this experiment made clear to me that the technology sector urgently needs to step up its content moderation practices and ensure that content provenance techniques such as watermarking are robust. 

Even if Synthesia’s technology and content moderation aren’t yet perfect, they’re significantly better than anything I have seen in the field before, and this is after only a year or so of the current boom in generative AI. AI development moves at breakneck speed, and it is both exciting and daunting to consider what AI avatars will look like in just a few years. Maybe in the future we will have to adopt safewords to indicate that you are in fact communicating with a real human, not an AI. 

But that day is not today. 

I found it weirdly comforting that in one of the videos, my avatar rants about nonconsensual deepfakes and says, in a sociopathically happy voice, “The tech giants? Oh! They’re making a killing!” 

I would never. 

Want less mining? Switch to clean energy.

Political fights over mining and minerals are heating up, and there are growing environmental and sociological concerns about how to source the materials the world needs to build new energy technologies. 

But low-emissions energy sources, including wind, solar, and nuclear power, have a smaller mining footprint than coal and natural gas, according to a new report from the Breakthrough Institute released today.

The report’s findings add to a growing body of evidence that technologies used to address climate change will likely lead to a future with less mining than a world powered by fossil fuels. However, experts point out that oversight will be necessary to minimize harm from the mining needed to transition to lower-emission energy sources. 

“In many ways, we talk so much about the mining of clean energy technologies, and we forget about the dirtiness of our current system,” says Seaver Wang, an author of the report and co-director of Climate and Energy at the Breakthrough Institute, an environmental research center.  

In the new analysis, Wang and his colleagues considered the total mining footprint of different energy technologies, including the amount of material needed for these energy sources and the total amount of rock that needs to be moved to extract that material.

Many minerals appear in small concentrations in source rock, so the process of extracting them has a large footprint relative to the amount of final product. A mining operation would need to move about seven kilograms of rock to get one kilogram of aluminum, for instance. For copper, the ratio is much higher, at over 500 to one. Taking these ratios into account allows for a more direct comparison of the total mining required for different energy sources. 

With this adjustment, it becomes clear that the energy source with the highest mining burden is coal. Generating one gigawatt-hour of electricity with coal requires 20 times the mining footprint of generating the same electricity with low-carbon power sources like wind and solar. Producing the same electricity with natural gas requires moving about twice as much rock.
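To make the accounting concrete, here is a back-of-the-envelope sketch, in Python, of that ratio adjustment. The aluminum and copper ratios are the ones cited above; the example quantities are hypothetical and are not taken from the report.

```python
# Back-of-the-envelope illustration of converting "material needed" into "rock moved."
# The ~7:1 (aluminum) and ~500:1 (copper) ratios come from the article; the example
# quantities below are invented placeholders, not figures from the Breakthrough report.

ROCK_MOVED_PER_KG = {"aluminum": 7, "copper": 500}  # kg of rock moved per kg of refined metal

def total_rock_moved(materials_kg):
    """Total kilograms of rock moved to obtain the given quantities of each material."""
    return sum(kg * ROCK_MOVED_PER_KG[metal] for metal, kg in materials_kg.items())

# Hypothetical bill of materials for a piece of generating equipment:
example = {"aluminum": 1_000, "copper": 200}
print(f"{total_rock_moved(example):,.0f} kg of rock moved")  # 7,000 + 100,000 = 107,000 kg
```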

Tallying up the amount of rock moved is an imperfect approximation of the potential environmental and sociological impact of mining related to different technologies, Wang says, but the report’s results allow researchers to draw some broad conclusions. One is that we’re on track for less mining in the future. 

Other researchers have projected a decrease in mining accompanying a move to low-emissions energy sources. “We mine so many fossil fuels today that the sum of mining activities decreases even when we assume an incredibly rapid expansion of clean energy technologies,” Joey Nijnens, a consultant at Monitor Deloitte and author of another recent study on mining demand, said in an email.

That being said, potentially moving less rock around in the future “hardly means that society shouldn’t look for further opportunities to reduce mining impacts throughout the energy transition,” Wang says.

There’s already been progress in cutting down on the material required for technologies like wind and solar. Solar modules have gotten more efficient, so the same amount of material can yield more electricity generation. Recycling can help further cut material demand in the future, and it will be especially crucial to reduce the mining needed to build batteries.  

Resource extraction may decrease overall, but it’s also likely to increase in some places as our demands change, researchers pointed out in a 2021 study. Between 32% and 40% of the mining increase in the future could occur in countries with weak, poor, or failing resource governance, where mining is more likely to harm the environment and may fail to benefit people living near the mining projects. 

“We need to ensure that the energy transition is accompanied by responsible mining that benefits local communities,” Takuma Watari, a researcher at the National Institute for Environmental Studies and an author of the study, said via email. Otherwise, the shift to lower-emissions energy sources could lead to a reduction of carbon emissions in the Global North “at the expense of increasing socio-environmental risks in local mining areas, often in the Global South.” 

Strong oversight and accountability are crucial to make sure that we can source minerals in a responsible way, Wang says: “We want a rapid energy transition, but we also want an energy transition that’s equitable.”

My biotech plants are dead

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Six weeks ago, I pre-ordered the “Firefly Petunia,” a houseplant engineered with genes from bioluminescent fungi so that it glows in the dark. 

After years of writing about anti-GMO sentiment in the US and elsewhere, I felt it was time to have some fun with biotech. These plants are among the first direct-to-consumer GM organisms you can buy, and they certainly seem like the coolest.

But when I unboxed my two petunias this week, they were in bad shape, with rotted leaves. And in a day, they were dead crisps. My first attempt to do biotech at home is a total bust, and it cost me $84, shipping included.

My plants did arrive in a handsome black box with neon lettering that alerted me to the living creature within. The petunias, about five inches tall, were each encased in a see-through plastic pod to keep them upright. Government warnings on the back of the box assured me they were free of Japanese beetles, sweet potato weevils, the snail Helix aspersa, and gypsy moths.

The problem was when I opened the box. As it turns out, I left for a week’s vacation in Florida the same day that Light Bio, the startup selling the petunia, sent me an email saying “Glowing plants headed your way,” with a UPS tracking number. I didn’t see the email, and even if I had, I wasn’t there to receive them. 

That meant my petunias sat in darkness for seven days. The box became their final sarcophagus.

My fault? Perhaps. But I had no idea when Light Bio would ship my order. And others have had similar experiences. Mat Honan, the editor in chief of MIT Technology Review, told me his petunia arrived the day his family flew to Japan. Luckily, a house sitter feeding his lizard eventually opened the box, and Mat reports the plant is still clinging to life in his yard.

One of the ill-fated petunia plants and its sarcophagus.
ANTONIO REGALADO

But what about the glow? How strong is it? 

Mat says so far, he doesn’t notice any light coming from the plant, even after carrying it into a pitch-dark bathroom. But buyers may have to wait a bit to see anything. It’s the flowers that glow most brightly, and you may need to tend your petunia for a couple of weeks before you get blooms and see the mysterious effect.  

“I had two flowers when I opened mine, but sadly they dropped and I haven’t got to see the brightness yet. Hoping they will bloom again soon,” says Kelsey Wood, a postdoctoral researcher at the University of California, Davis. 

She would like to use the plants in classes she teaches at the university. “It’s been a dream of synthetic biologists for so many years to make a bioluminescent plant,” she says. “But they couldn’t get it bright enough to see with the naked eye.”

Others are having success right out of the box. That’s the case with Tharin White, publisher of EYNTK.info, a website about theme parks. “It had a lot of protection around it and a booklet to explain what you needed to do to help it,” says White. “The glow is strong, if you are [in] total darkness. Just being in a dark room, you can’t really see it. That being said, I didn’t expect a crazy glow, so [it] meets my expectations.”

That’s no small recommendation coming from White, who has been a “cast member” at Disney parks and an operator of the park’s Avatar ride, named after the movie whose action takes place on a planet where the flora glows. “I feel we are leaps closer to Pandora—The World of Avatar being reality,” White posted to his X account.

Chronobiologist Brian Hodge also found success by resettling his petunia immediately into a larger eight-inch pot, giving it flower food and a good soaking, and putting it in the sunlight. “After a week or so it really started growing fast, and the buds started to show up around day 10. Their glow is about what I expected. It is nothing like a neon light but more of a soft gentle glow,” says Hodge, a staff scientist at the University of California, San Francisco.

In his daily work, Hodge has handled bioluminescent beings before—bacteria mostly—and says he always needed photomultiplier tubes to see anything. “My experience with bioluminescent cells is that the light they would produce was pretty hard to see with the naked eye,” he says. “So I was happy with the amount of light I was seeing from the plants. You really need to turn off all the lights for them to really pop out at you.”

Hodge posted a nifty snapshot of his petunia, but only after setting his iPhone for a two-second exposure.

Light Bio’s CEO Keith Wood didn’t respond to an email about how my plants died, but in an interview last month he told me sales of the biotech plant had been “viral” and that the company would probably run out of its initial supply. To generate new ones, it hires commercial greenhouses to place clippings in water, where they’ll sprout new roots after a couple of weeks. According to Wood, the plant is “a rare example where the benefits of GM technology are easily recognized and experienced by the public.”

Hodge says he got interested in the plants after reading an article about combating light pollution by using bioluminescent flora instead of streetlamps. As a biologist who studies how day and night affect life, he’s worried that city lights and computer screens are messing with natural cycles.

“I just couldn’t pass up being one of the first to own one,” says Hodge. “Once you flip the lights off, the glow is really beautiful … and it sorta feels like you are witnessing something out of a futuristic sci-fi movie!” 

It makes me tempted to try again. 


Now read the rest of The Checkup

From the archives 

We’re not sure if rows of glowing plants can ever replace streetlights, but there’s no doubt light pollution is growing. Artificial light emissions on Earth grew by about 50% between 1992 and 2017—and as much as 400% in some regions. That’s according to Shel Evergreen, in his story on the switch to bright LED streetlights.

It’s taken a while for scientists to figure out how to make plants glow brightly enough to interest consumers. In 2016, I looked at a failed Kickstarter that promised glow-in-the-dark roses but couldn’t deliver.  

Another thing 

Cassandra Willyard is updating us on the case of Lisa Pisano, a 54-year-old woman who is feeling “fantastic” two weeks after surgeons gave her a kidney from a genetically modified pig. It’s the latest in a series of extraordinary animal-to-human organ transplants—a technology, known as xenotransplantation, that may end the organ shortage.

From around the web

Taiwan’s government is considering steps to ease restrictions on the use of IVF. The country has an ultra-low birth rate, but it bans surrogacy, limiting options for male couples. One Taiwanese pair spent $160,000 to have a child in the United States.  (CNN)

Communities in Appalachia are starting to get settlement payments from synthetic-opioid makers like Johnson & Johnson, which along with other drug vendors will pay out $50 billion over several years. But the money, spread over thousands of jurisdictions, is “a feeble match for the scale of the problem.” (Wall Street Journal)

A startup called Climax Foods claims it has used artificial intelligence to formulate vegan cheese that tastes “smooth, rich, and velvety,” according to writer Andrew Rosenblum. He relates the results of his taste test in the new “Build” issue of MIT Technology Review. But one expert Rosenblum spoke to warns that computer-generated cheese is “significantly” overhyped.

AI hype continued this week in medicine when a startup claimed it has used “generative AI” to quickly discover new versions of CRISPR, the powerful gene-editing tool. But new gene-editing tricks won’t conquer the main obstacle, which is how to deliver these molecules where they’re needed in the bodies of patients. (New York Times)

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

After weeks of drawn-out congressional debate over how much the United States should spend on conflicts abroad, President Joe Biden signed a $95.3 billion aid package into law on Wednesday.

The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. It’s also sparked renewed calls for stronger crackdowns on Iranian-produced drones. 

Though much of the money will go toward replenishing fairly standard munitions and supplies, the spending bill provides a window into US strategies around four key defense technologies that continue to reshape how today’s major conflicts are being fought.

For a closer look at the military technology at the center of the aid package, I spoke with Andrew Metrick, a fellow with the defense program at the Center for a New American Security, a think tank.

Ukraine and the role of long-range missiles

Ukraine has long sought the Army Tactical Missile System (ATACMS), a long-range ballistic missile made by Lockheed Martin. It made its combat debut in Operation Desert Storm in Iraq in 1991; the missile is 13 feet long, two feet wide, and weighs over 3,600 pounds. It can use GPS to accurately hit targets 190 miles away.

Last year, President Biden was apprehensive about sending such missiles to Ukraine, as US stockpiles of the weapons were relatively low. In October, the administration changed tack. The US sent shipments of ATACMS, a move celebrated by President Volodymyr Zelensky of Ukraine, but they came with restrictions: the missiles were older models with a shorter range, and Ukraine was instructed not to fire them into Russian territory, only Ukrainian territory. 

This week, just hours before the new aid package was signed, multiple news outlets reported that the US had secretly sent more powerful long-range ATACMS to Ukraine several weeks before. They were used on Tuesday, April 23, to target a Russian airfield in Crimea and Russian troops in Berdiansk, 50 miles southwest of Mariupol.

The long range of the weapons has proved essential for Ukraine, says Metrick. “It allows the Ukrainians to strike Russian targets at ranges for which they have very few other options,” he says. That means being able to hit locations like supply depots, command centers, and airfields behind Russia’s front lines in Ukraine. This capacity has grown more important as Ukraine’s troop numbers have waned, Metrick says.

Replenishing Israel’s Iron Dome

On April 13, Iran launched its first-ever direct attack on Israeli soil. In the attack, which Iran says was retaliation for Israel’s airstrike on its embassy in Syria, hundreds of missiles were lobbed into Israeli airspace. Many of them were neutralized by the web of cutting-edge missile launchers dispersed throughout Israel that can automatically detonate incoming strikes before they hit land. 

One of those systems is Israel’s Iron Dome, in which radar systems detect projectiles and then signal units to launch defensive missiles that detonate the target high in the sky before it strikes populated areas. Israel’s other system, called David’s Sling, works in a similar way but can identify rockets coming from a greater distance, upwards of 180 miles.

Both systems are hugely costly to research and build, and the new US aid package allocates $15 billion to replenish their missile stockpile. The missiles can cost anywhere from $100,000 to $10 million each, and a system like Iron Dome might fire them daily during intense periods of conflict. 

The aid comes as funding for Israel has grown more contentious amid the dire conditions faced by displaced Palestinians in Gaza. While the spending bill worked its way through Congress, increasing numbers of Democrats sought to put conditions on the military aid to Israel, particularly after an Israeli air strike on April 1 killed seven aid workers from World Central Kitchen, an international food charity. The funding package does provide $9 billion in humanitarian assistance for the conflict, but the efforts to impose conditions for Israeli military aid failed. 

Taiwan and underwater defenses against China

A rising concern for the US defense community—and a subject of “wargaming” simulations that Metrick has carried out—is an amphibious invasion of Taiwan from China. The rising risk of that scenario has driven the US to build and deploy larger numbers of advanced submarines, Metrick says. A bigger fleet of these submarines would be more likely to keep attacks from China at bay, thereby protecting Taiwan.

The trouble is that the US shipbuilding effort, experts say, is too slow. It’s been hampered by budget cuts and labor shortages, but the new aid bill aims to jump-start it. It will provide $3.3 billion to do so, specifically for the production of Columbia-class submarines, which carry nuclear weapons, and Virginia-class submarines, which carry conventional weapons. 

Though these funds aim to support Taiwan by building up the US supply of submarines, the package also includes more direct support, like $2 billion to help it purchase weapons and defense equipment from the US. 

The US’s Iranian drone problem 

Shahed drones are used almost daily on the Russia-Ukraine battlefield, and Iran launched more than 100 against Israel earlier this month. Produced by Iran and resembling model planes, the drones are fast, cheap, and lightweight, capable of being launched from the back of a pickup truck. They’re used frequently for potent one-way attacks, where they detonate upon reaching their target. US experts say the technology is tipping the scales toward Russian and Iranian military groups and their allies. 

The trouble with combating them is partly one of cost. Shooting down the drones, which can be bought for as little as $40,000, can cost millions in ammunition.

“Shooting down Shaheds with an expensive missile is not, in the long term, a winning proposition,” Metrick says. “That’s what the Iranians, I think, are banking on. They can wear people down.”

This week’s aid package renewed White House calls for stronger sanctions aimed at curbing production of the drones. The United Nations previously passed rules restricting any drone-related material from entering or leaving Iran, but those expired in October. The US now wants them reinstated. 

Even if that happens, it’s unlikely the rules would do much to contain the Shahed’s dominance. The components of the drones are not all that complex or hard to obtain to begin with, but experts also say that Iran has built a sprawling global supply chain to acquire the materials needed to manufacture them and has worked with Russia to build factories. 

“Sanctions regimes are pretty dang leaky,” Metrick says. “They [Iran] have friends all around the world.”