MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
MDMA, sometimes called Molly or ecstasy, has been banned in the United States for more than three decades. Now this potent mind-altering drug is poised to become a badly needed therapy for PTSD.
On June 4, the Food and Drug Administration’s advisory committee will meet to discuss the risks and benefits of MDMA therapy. If the committee votes in favor of the drug, it could be approved to treat PTSD this summer. The approval would represent a momentous achievement for proponents of mind-altering drugs, who have been working toward this goal for decades. And it could help pave the way for FDA approval of other illicit drugs like psilocybin. But the details surrounding how these compounds will make the transition from illicit substances to legitimate therapies are still foggy.
Here’s what to know ahead of the upcoming hearing.
What’s the argument for legitimizing MDMA?
Studies suggest the compound can help treat mental-health disorders like PTSD and depression. Lykos, the company that has been developing MDMA as a therapy, looked at efficacy in two clinical trials that included about 200 people with PTSD. Researchers randomly assigned participants to receive psychotherapy with or without MDMA. The group that received MDMA-assisted therapy had a greater reduction in PTSD symptoms. They were also more likely to respond to treatment, to meet the criteria for PTSD remission, and to lose their diagnosis of PTSD.
But some experts question the validity of the results. With substances like MDMA, study participants almost always know whether they’ve received the drug or a placebo. That can skew the results, especially when the participants and therapists strongly believe a drug is going to help. The Institute for Clinical and Economic Review (ICER), a nonprofit research organization that evaluates the clinical and economic value of drugs, recently rated the evidence for MDMA-assisted therapy as “insufficient.”
In briefing documents published ahead of the June 4 meeting, FDA officials write that the question of approving MDMA “presents a number of complex review issues.”
The ICER report also referenced allegations of misconduct and ethical violations. Lykos (formerly the Multidisciplinary Association for Psychedelic Studies Public Benefit Corporation) acknowledges that ethical violations occurred in one particularly high-profile case. But in a rebuttal to the ICER report, more than 70 researchers involved in the trials wrote that “a number of assertions in the ICER report represent hearsay, and should be weighted accordingly.” Lykos did not respond to an interview request.
At the meeting on the 4th, the FDA has asked experts to discuss whether Lykos has demonstrated that MDMA is effective, whether the drug’s effect lasts, and what role psychotherapy plays. The committee will also discuss safety, including the drug’s potential for abuse and the risk posed by the impairment MDMA causes.
What’s stopping people from using this therapy?
MDMA is illegal. In 1985, the Drug Enforcement Administration grew concerned about growing street use of the drug and added it to its list of Schedule 1 substances—those with a high abuse potential and no accepted medical use.
MDMA boosts the brain’s production of feel-good neurotransmitters, causing a burst of euphoria and good will toward others. But the drug can also cause high blood pressure, memory problems, anxiety, irritability, and confusion. And repeated use can cause lasting changes in the brain.
If the FDA approves MDMA therapy, when will people be able to access it?
That has yet to be determined. It could take months for the DEA to reclassify the drug. After that, it’s up to individual states.
Lykos applied for approval of MDMA-assisted therapy, not just the compound itself. In the clinical trials, MDMA administration happened in the presence of licensed therapists, who then helped patients process their emotions during therapy sessions that lasted for hours.
But regulating therapy isn’t part of the FDA’s purview. The FDA approves drugs; it doesn’t oversee how they’re administered. “The agency has been clear with us,” says Kabir Nath, CEO of Compass Pathways, the company working to bring psilocybin to market. “They don’t want to regulate psychotherapy, because they see that as the practice of medicine, and that’s not their job.”
However, for drugs that carry a risk of serious side effects, the FDA can add a risk evaluation and mitigation strategy to its approval. For MDMA that might include mandating that the health-care professionals who administer the medication have certain certifications or specialized training, or requiring that the drug be dispensed only in licensed facilities.
For example, Spravato, a nasal spray approved in 2019 for depression that works much like ketamine, is available only at a limited number of health-care facilities and must be taken under the observation of a health-care provider. Having safeguards in place for MDMA makes sense, at least at the outset, says Matt Lamkin, an associate professor at the University of Tulsa College of Law who has been following the field closely: “Given the history, I think it would only take a couple of high-profile bad incidents to potentially set things back.”
What mind-altering drug is next in line for FDA approval?
Psilocybin, a.k.a. the active ingredient in magic mushrooms. This summer Compass Pathways will release the first results from one of its phase 3 trials of psilocybin to treat depression. Results from the other trial will come in the middle of 2025, which—if all goes well—puts the company on track to file for approval in the fall or winter of next year. With the FDA review and the DEA rescheduling, “it’s still kind of two to three years out,” Nath says.
Some states are moving ahead without formal approval. Oregon voters made psilocybin legal in 2020, and the drug is now accessible there at about 20 licensed centers for supervised use. “It’s an adult use program that has a therapeutic element,” says Ismail Ali, director of policy and advocacy at the Multidisciplinary Association for Psychedelic Studies (MAPS).
Colorado voted to legalize psilocybin and some other plant-based psychedelics in 2022, and the state is now working to develop a framework to guide the licensing of facilitators to administer these drugs for therapeutic purposes. More states could follow.
So would FDA approval of these compounds open the door to legal recreational use of psychedelics?
Maybe. The DEA can still prosecute physicians if they’re prescribing drugs outside of their medically accepted uses. But Lamkin does see the lines between recreational use and medical use getting blurry. “What we’re seeing is that the therapeutic uses have recreational side effects and the recreation has therapeutic side effects,” he says. “I’m interested to see how long they can keep the genie in the bottle.”
What’s the status of MDMA therapies elsewhere in the world?
Last summer, Australia became the first country to approve MDMA and psilocybin as medicines to treat psychiatric disorders, but the therapies are not yet widely available. The first clinic opened just a few months ago. The US is poised to become the second country if the FDA greenlights Lykos’s application. Health Canada told the CBC it is watching the FDA’s review of MDMA “with interest.” Europe is lagging a bit behind, but there are some signs of movement. In April, the European Medicines Agency convened a workshop to bring together a variety of stakeholders to discuss a regulatory framework for psychedelics.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Here in the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure.
The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.
The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs. Yes, chickens. One of the birds that’s susceptible to avian flu. (Talk about putting all our eggs in one basket. #sorrynotsorry)
This week in The Checkup, let’s look at why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.
The idea to grow flu virus in fertilized chicken eggs originated with Frank Macfarlane Burnet, an Australian virologist. In 1936, he discovered that if he bored a tiny hole in the shell of a chicken egg and injected flu virus between the shell and the inner membrane, he could get the virus to replicate.
Even now, we still grow flu virus in much the same way. “I think a lot of it has to do with the infrastructure that’s already there,” says Scott Hensley, an immunologist at the University of Pennsylvania’s Perelman School of Medicine. It’s difficult for companies to pivot.
The process works like this: Health officials provide vaccine manufacturers with a candidate vaccine virus that matches circulating flu strains. That virus is injected into fertilized chicken eggs, where it replicates for several days. The virus is then harvested, killed (for most use cases), purified, and packaged.
Making flu vaccine in eggs has a couple of major drawbacks. For a start, the virus doesn’t always grow well in eggs. So the first step in vaccine development is creating a virus that does. That happens through an adaptation process that can take weeks or even months. This process is particularly tricky for bird flu: Viruses like H5N1 are deadly to birds, so the virus might end up killing the embryo before the egg can produce much virus. To avoid this, scientists have to develop a weakened version of the virus by combining genes from the bird flu virus with genes typically used to produce seasonal flu virus vaccines.
And then there’s the problem of securing enough chickens and eggs. Right now, many egg-based production lines are focused on producing vaccines for seasonal flu. They could switch over to bird flu, but “we don’t have the capacity to do both,” Amesh Adalja, an infectious disease specialist at Johns Hopkins University, told KFF Health News. The US government is so worried about its egg supply that it keeps secret, heavily guarded flocks of chickens peppered throughout the country.
Most of the flu virus used in vaccines is grown in eggs, but there are alternatives. The seasonal flu vaccine Flucelvax, produced by CSL Seqirus, is grown in a cell line derived in the 1950s from the kidney of a cocker spaniel. The virus used in the seasonal flu vaccine FluBlok, made by Protein Sciences, isn’t grown; it’s synthesized. Scientists engineer an insect virus to carry the gene for hemagglutinin, a key component of the flu virus that triggers the human immune system to create antibodies against it. That engineered virus turns insect cells into tiny hemagglutinin production plants.
And then we have mRNA vaccines, which wouldn’t require vaccine manufacturers to grow any virus at all. There aren’t yet any approved mRNA vaccines for influenza, but many companies are fervently working on them, including Pfizer, Moderna, Sanofi, and GSK. “With the covid vaccines and the infrastructure that’s been built for covid, we now have the capacity to ramp up production of mRNA vaccines very quickly,” says Hensley. This week, the Financial Times reported that the US government will soon close a deal with Moderna to provide tens of millions of dollars to fund a large clinical trial of a bird flu vaccine the company is developing.
There are hints that egg-free vaccines might work better than egg-based vaccines. A CDC study published in January showed that people who received Flucelvax or FluBlok had more robust antibody responses than those who received egg-based flu vaccines. That may be because viruses grown in eggs sometimes acquire mutations that help them grow better in eggs. Those mutations can change the virus so much that the immune response generated by the vaccine doesn’t work as well against the actual flu virus that’s circulating in the population.
Hensley and his colleagues are developing an mRNA vaccine against bird flu. So far they’ve only tested it in animals, but the shot performed well, he claims. “All of our preclinical studies in animals show that these vaccines elicit a much stronger antibody response compared with conventional flu vaccines.”
No one can predict when we might need a pandemic flu vaccine. But just because bird flu hasn’t made the jump to a pandemic doesn’t mean it won’t. “The cattle situation makes me worried,” Hensley says. Humans are in constant contact with cows, he explains. While there have only been a couple of human cases so far, “the fear is that some of those exposures will spark a fire.” Let’s make sure we can extinguish it quickly.
I don’t have to tell you that mRNA vaccines are a big deal. In 2021, MIT Technology Review highlighted them as one of the year’s 10 breakthrough technologies. Antonio Regalado explored their massive potential to transform medicine. Jessica Hamzelou wrote about the other diseases researchers are hoping to tackle. I followed up with a story after two mRNA researchers won a Nobel Prize. And earlier this year I wrote about a new kind of mRNA vaccine that’s self-amplifying, meaning it not only works at lower doses, but also sticks around for longer in the body.
From around the web
Researchers installed a literal window into the brain, allowing for ultrasound imaging that they hope will be a step toward less invasive brain-computer interfaces. (Stat)
People who carry antibodies against the common viruses used to deliver gene therapies can mount a dangerous immune response if they’re re-exposed. That means many people are ineligible for these therapies and others can’t get a second dose. Now researchers are hunting for a solution. (Nature)
More good news about Ozempic. A new study shows that the drug can cut the risk of kidney complications, including death, in people with diabetes and chronic kidney disease. (NYT)
Must read: This story, the second in a series on the denial of reproductive autonomy for people with sickle-cell disease, examines how the US medical system undermines a woman’s right to choose. (Stat)
When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast.
The first generative video models appeared only toward the end of 2022. Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. Runway’s latest models can produce short clips that rival those made by blockbuster animation studios. Midjourney and Stability AI, the firms behind two of the most popular text-to-image models, are now working on video as well.
A number of companies are racing to make a business on the back of these breakthroughs. Most are figuring out what that business is as they go. “I’ll routinely scream, ‘Holy cow, that is wicked cool’ while playing with these tools,” says Gary Lipkowitz, CEO of Vyond, a firm that provides a point-and-click platform for putting together short animated videos. “But how can you use this at work?”
Whatever the answer to that question, it will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix.
As we get to grips with what’s ahead—good and bad—here are four things to think about. We’ve also curated a selection of the best videos filmmakers have made using this technology, including an exclusive reveal of “Somme Requiem,” an experimental short film by Los Angeles–based production company Myles. Read on for a taste of where AI moviemaking is headed.
1. Sora is just the start
OpenAI’s Sora is currently head and shoulders above the competition in video generation. But other companies are working hard to catch up. The market is going to get extremely crowded over the next few months as more firms refine their technology and start rolling out Sora’s rivals.
The UK-based startup Haiper came out of stealth this month. It was founded in 2021 by former Google DeepMind and TikTok researchers who wanted to work on technology called neural radiance fields, or NeRF, which can transform 2D images into 3D virtual environments. They thought a tool that turned snapshots into scenes users could step into would be useful for making video games.
But six months ago, Haiper pivoted from virtual environments to video clips, adapting its technology to fit what CEO Yishu Miao believes will be an even bigger market than games. “We realized that video generation was the sweet spot,” says Miao. “There will be a super-high demand for it.”
“Air Head” is a short film made by Shy Kids, a pop band and filmmaking collective based in Toronto, using Sora.
Like OpenAI’s Sora, Haiper’s generative video tech uses a diffusion model to manage the visuals and a transformer (the component in large language models like GPT-4 that makes them so good at predicting what comes next) to manage the consistency between frames. “Videos are sequences of data, and transformers are the best model to learn sequences,” says Miao.
Consistency is a big challenge for generative video and the main reason existing tools produce just a few seconds of video at a time. Transformers for video generation can boost the quality and length of the clips. The downside is that transformers make stuff up, or hallucinate. In text, this is not always obvious. In video, it can result in, say, a person with multiple heads. Keeping transformers on track requires vast silos of training data and warehouses full of computers.
That’s why Irreverent Labs, founded by former Microsoft researchers, is taking a different approach. Like Haiper, Irreverent Labs started out generating environments for games before switching to full video generation. But the company doesn’t want to follow the herd by copying what OpenAI and others are doing. “Because then it’s a battle of compute, a total GPU war,” says David Raskino, Irreverent’s cofounder and CTO. “And there’s only one winner in that scenario, and he wears a leather jacket.” (He’s talking about Jensen Huang, CEO of the trillion-dollar chip giant Nvidia.)
Instead of using a transformer, Irreverent’s tech combines a diffusion model with a model that predicts what’s in the next frame on the basis of common-sense physics, such as how a ball bounces or how water splashes on the floor. Raskino says this approach reduces both training costs and the number of hallucinations. The model still produces glitches, but they are distortions of physics (like a bouncing ball not following a smooth curve, for example) with known mathematical fixes that can be applied to the video after it is generated, he says.
Which approach will last remains to be seen. Miao compares today’s technology to large language models circa GPT-2. Five years ago, OpenAI’s groundbreaking early model amazed people because it showed what was possible. But it took several more years for the technology to become a game-changer.
It’s the same with video, says Miao: “We’re all at the bottom of the mountain.”
2. What will people do with generative video?
Video is the medium of the internet. YouTube, TikTok, newsreels, ads: expect to see synthetic video popping up everywhere there’s video already.
The marketing industry is one of the most enthusiastic adopters of generative technology. Two-thirds of marketing professionals have experimented with generative AI in their jobs, according to a recent survey Adobe carried out in the US, with more than half saying they have used the technology to produce images.
Generative video is next. A few marketing firms have already put out short films to demonstrate the technology’s potential. The latest example is the 2.5-minute-long “Somme Requiem,” made by Myles. You can watch the film below in an exclusive reveal from MIT Technology Review.
“Somme Requiem” is a short film made by Los Angeles production company Myles. Every shot was generated using Runway’s Gen 2 model. The clips were then edited together by a team of video editors at Myles.
“Somme Requiem” depicts snowbound soldiers during the World War I Christmas ceasefire in 1914. The film is made up of dozens of different shots that were produced using a generative video model from Runway, then stitched together, color-corrected, and set to music by human video editors at Myles. “The future of storytelling will be a hybrid workflow,” says founder and CEO Josh Kahn.
Kahn picked the period wartime setting to make a point. He notes that the Apple TV+ series Masters of the Air, which follows a group of World War II airmen, cost $250 million. The team behind Peter Jackson’s World War I documentary They Shall Not Grow Old spent four years curating and restoring more than 100 hours of archival film. “Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” says Kahn.
“Independent filmmaking has been kind of dying,” he adds. “I think this will create an incredible resurgence.”
Raskino hopes so. “The horror movie genre is where people test new things, to try new things until they break,” he says. “I think we’re going to see a blockbuster horror movie created by, like, four people in a basement somewhere using AI.”
So is generative video a Hollywood-killer? Not yet. The scene-setting shots in “Somme Requiem”—empty woods, a desolate military camp—look great. But the people in it are still afflicted with mangled fingers and distorted faces, hallmarks of the technology. Generative video is best at wide-angle pans or lingering close-ups, which creates an eerie atmosphere but little action. If “Somme Requiem” were any longer it would get dull.
But scene-setting shots pop up all the time in feature-length movies. Most are just a few seconds long, but they can take hours to film. Raskino suggests that generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.
Michal Pechoucek, CTO at Gen Digital, the cybersecurity giant behind a range of antivirus brands including Norton and Avast, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”
We’re not there quite yet. A big problem with generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is even more risky.
“Right now it’s still fun, you get a-ha moments,” says Miao. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”
That’s why Vyond’s Lipkowitz thinks the technology isn’t yet ready for most corporate clients. These users want a lot more control over the look of a video than current tools give them, he says.
Thousands of companies around the world, including around 65% of the Fortune 500 firms, use Vyond’s platform to create animated videos for in-house communications, training, marketing, and more. Vyond draws on a range of generative models, including text-to-image and text-to-voice, but provides a simple drag-and-drop interface that lets users put together a video by hand, piece by piece, rather than generate a full clip with a click.
Running a generative model is like rolling dice, says Lipkowitz. “This is a hard no for most video production teams, particularly in the enterprise sector where everything must be pixel-perfect and on brand,” he says. “If the video turns out bad—maybe the characters have too many fingers, or maybe there is a company logo that is the wrong color—well, unlucky, that’s just how gen AI works.”
The solution? More data, more training, repeat. “I wish I could point to some sophisticated algorithms,” says Miao. “But no, it’s just a lot more learning.”
3. Misinformation isn’t new, but deepfakes will make it worse.
Online misinformation has been undermining our faith in the media, in institutions, and in each other for years. Some fear that adding fake video to the mix will destroy whatever pillars of shared reality we have left.
“We are replacing trust with mistrust, confusion, fear, and hate,” says Pechoucek. “Society without ground truth will degenerate.”
Pechoucek is especially worried about the malicious use of deepfakes in elections. During last year’s elections in Slovakia, for example, attackers shared a fake video that showed the leading candidate discussing plans to manipulate voters. The video was low quality and easy to spot as a deepfake. But Pechoucek believes it was enough to turn the result in favor of the other candidate.
“Adventurous Puppies” is a short clip made by OpenAI using Sora.
John Wissinger, who leads the strategy and innovation teams at Blackbird AI, a firm that tracks and manages the spread of misinformation online, believes fake video will be most persuasive when it blends real and fake footage. Take two videos showing President Joe Biden walking across a stage. In one he stumbles, in the other he doesn’t. Who is to say which is real?
“Let’s say an event actually occurred, but the way it’s presented to me is subtly different,” says Wissinger. “That can affect my emotional response to it.” As Pechoucek noted, a fake video doesn’t even need to be that good to make an impact. A bad fake that fits existing biases will do more damage than a slick fake that doesn’t, says Wissinger.
That’s why Blackbird focuses on who is sharing what with whom. In some sense, whether something is true or false is less important than where it came from and how it is being spread, says Wissinger. His company already tracks low-tech misinformation, such as social media posts showing real images out of context. Generative technologies make things worse, but the problem of people presenting media in misleading ways, deliberately or otherwise, is not new, he says.
Throw bots into the mix, sharing and promoting misinformation on social networks, and things get messy. Just knowing that fake media is out there will sow seeds of doubt into bad-faith discourse. “You can see how pretty soon it could become impossible to discern between what’s synthesized and what’s real anymore,” says Wissinger.
4. We are facing a new online reality.
Fakes will soon be everywhere, from disinformation campaigns, to ad spots, to Hollywood blockbusters. So what can we do to figure out what’s real and what’s just fantasy? There are a range of solutions, but none will work by themselves.
The tech industry is working on the problem. Most generative tools try to enforce certain terms of use, such as preventing people from creating videos of public figures. But there are ways to bypass these filters, and open-source versions of the tools may come with more permissive policies.
Companies are also developing standards for watermarking AI-generated media and tools for detecting it. But not all tools will add watermarks, and watermarks can be stripped from a video’s metadata. No reliable detection tool exists either. Even if such tools worked, they would become part of a cat-and-mouse game of trying to keep up with advances in the models they are designed to police.
Online platforms like X and Facebook have poor track records when it comes to moderation. We should not expect them to do better once the problem gets harder. Miao used to work at TikTok, where he helped build a moderation tool that detects video uploads that violate TikTok’s terms of use. Even he is wary of what’s coming: “There’s real danger out there,” he says. “Don’t trust things that you see on your laptop.”
Blackbird has developed a tool called Compass, which lets you fact check articles and social media posts. Paste a link into the tool and a large language model generates a blurb drawn from trusted online sources (these are always open to review, says Wissinger) that gives some context for the linked material. The result is very similar to the community notes that sometimes get attached to controversial posts on sites like X, Facebook, and Instagram. The company envisions having Compass generate community notes for anything. “We’re working on it,” says Wissinger.
But people who put links into a fact-checking website are already pretty savvy—and many others may not know such tools exist, or may not be inclined to trust them. Misinformation also tends to travel far wider than any subsequent correction.
In the meantime, people disagree on whose problem this is in the first place. Pechoucek says tech companies need to open up their software to allow for more competition around safety and trust. That would also let cybersecurity firms like his develop third-party software to police this tech. It’s what happened 30 years ago when Windows had a malware problem, he says: “Microsoft let antivirus firms in to help protect Windows. As a result, the online world became a safer place.”
But Pechoucek isn’t too optimistic. “Technology developers need to build their tools with safety as the top objective,” he says. “But more people think about how to make the technology more powerful than worry about how to make it more safe.”
Made by OpenAI using Sora.
There’s a common fatalistic refrain in the tech industry: change is coming, deal with it. “Generative AI is not going to get uninvented,” says Raskino. “This may not be very popular, but I think it’s true: I don’t think tech companies can bear the full burden. At the end of the day, the best defense against any technology is a very well-educated public. There’s no shortcut.”
Miao agrees. “It’s inevitable that we will massively adopt generative technology,” he says. “But it’s also the responsibility of the whole of society. We need to educate people.”
“Technology will move forward, and we need to be prepared for this change,” he adds. “We need to remind our parents, our friends, that the things they see on their screen might not be authentic.” This is especially true for older generations, he says: “Our parents need to be aware of this kind of danger. I think everyone should work together.”
We’ll need to work together quickly. When Sora came out a month ago, the tech world was stunned by how quickly generative video had progressed. But the vast majority of people have no idea this kind of technology even exists, says Wissinger: “They certainly don’t understand the trend lines that we’re on. I think it’s going to catch the world by storm.”
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.
In 2023, it almost felt as if the promise of robotaxis was about to be fulfilled. Hailing a robotaxi briefly became the trendy thing to do in San Francisco, as simple and everyday as ordering a delivery through an app. But that dream crashed and burned in October, when a serious accident in downtown San Francisco involving a vehicle belonging to Cruise, one of the leading US robotaxi companies, ignited public distrust and cast a long shadow over the technology’s future.
Following that and another accident, the state of California suspended Cruise’s operations there indefinitely, and the National Highway Traffic Safety Administration launched an investigation of the company. Since then, Cruise has pulled all its vehicles from the road and laid off 24% of its workforce.
Despite that, other robotaxi companies are still forging ahead. In half a dozen cities in the US and China, fleets of robotaxis run by companies such as Waymo and Baidu are still serving anyone who would like to try them. Regulators in places like San Francisco, Phoenix, Beijing, and Shanghai now allow these vehicles to drive without human safety operators.
However, other perils loom. Robotaxi companies need to make a return on the vast sums that have been invested in getting them up and running. Until robotaxis become cheaper, they can’t meaningfully compete with conventional taxis and Uber. Yet if companies try to increase adoption too fast, they risk following in Cruise’s footsteps. Waymo, another major robotaxi operator, has been moving more slowly and cautiously. But no one is immune to accidents.
“If they have an accident, it’s going to be big news, and it will hurt everyone,” says Missy Cummings, a professor and director of the Mason Autonomy and Robotics Center at George Mason University. “That’s the big lesson of this year. The whole industry is on thin ice.”
MIT Technology Review talked to experts about how to understand the challenges facing the robotaxi industry. Here’s how they expect it to change in 2024.
Money, money, money
After years of testing robotaxis on the road, companies have demonstrated that a version of the autonomous driving technology is ready today, though with some heavy asterisks. They operate only within strict, pre-set geographical boundaries; while some cars no longer have a human operator in the driver’s seat, they still require remote operators to take control in case of emergencies; and they are limited to warmer climates, because snow can be challenging for the cars’ cameras and sensors.
“From what has been disclosed publicly, these systems still rely on some remote human supervision to operate safely. This is why I am calling them automated rather than autonomous,” says Ramanarayan Vasudevan, an associate professor of robotics and mechanical engineering at the University of Michigan.
The problem is that this version of automated driving is much more costly than traditional taxis. A robotaxi ride can be “several orders of magnitude more expensive than what it costs other taxi companies,” he says. “Unfortunately I don’t think the technology will dramatically change in the coming year to really drive down that cost.”
That higher ticket price will inevitably suppress demand. If robotaxi companies want to keep customers—not just riders curious to try the service for the first time—they need to make it cheaper than other forms of transportation.
Bryant Walker Smith, an associate professor of law at the University of South Carolina, echoes this concern. “These companies are competing with an Uber driver who, in any estimate, makes less than minimum wage, has a midpriced car, and probably maintains it themselves,” he says.
By contrast, robotaxis are expensive vehicles packed with cameras, sensors, and advanced software, and they require constant monitoring and help from humans. It’s almost impossible for them to compete with ride-sharing services, at least until many more robotaxis hit the road.
And as robotaxi companies keep burning the cash from investors, concerns are growing that they are not getting enough in return for their vast expenditure, says Smith. That means even more pressure to produce results, while balancing the potential revenues and costs.
The resistance to scaling up
In the US, there are currently four cities where people can take a robotaxi: San Francisco, Phoenix, Los Angeles, and Las Vegas.
The terms differ by city. Some require you to sign up for a waitlist first, which could take months to clear, while others only operate the vehicles in a small area.
Expanding robotaxi services into a new city involves a huge upfront effort and cost: the new area has to be thoroughly mapped (and that map has to be kept up to date), and the operator has to buy more autonomous vehicles to keep up with demand.
Also, cars whose autonomous systems are geared toward, say, San Francisco have a limited ability to adapt to Austin, says Cummings, who’s researching how to measure this type of adaptability. “If I’m looking at that as a basic research question, it probably means the companies haven’t learned something important yet,” she says.
These factors have combined to cause renewed concern about robotaxis’ profitability. Even after Cruise removed its vehicles from the road, Waymo, the other major robotaxi company in the US, hasn’t jumped in to fill the vacuum. Since each robotaxi ride currently costs the company more money than it makes, there’s hardly an appetite for endless expansion.
Worldwide development
It’s not just the US where robotaxis are being researched, tested, and even deployed.
China is the other leader right now, and it is proceeding on roughly the same timeline as the US. In 2023, a few cities in China, including Beijing and Shanghai, received government clearance to run robotaxis on the road without any safety operators. However, the cars can run only in certain small and relatively remote areas of the cities, making the service hard for most people to access.
The Middle East is also quickly gaining a foothold in the sector, with the help of Chinese and American companies. Saudi Arabia invested $100 million in the Chinese robotaxi startup Pony.AI to bring its cars to Neom, the futuristic city it is constructing, which is supposed to showcase all the latest technologies. Meanwhile, Dubai and Abu Dhabi are competing with each other to become the first city in the Middle East to pilot driverless vehicles on the road, with vehicles made by Cruise and the Chinese company WeRide.
Chinese robotaxi companies face the same central challenge as their US peers: proving their profitability. A push to monetize permeated the Chinese industry in 2023 and launched a new trend: Chinese self-driving companies are now racing to sell their autopilot systems to other companies. This lets them make some quick cash by repackaging their technologies into less advanced but more in-demand services, like urban autopilot systems that can be sold to carmakers.
Meanwhile, robotaxi development in Europe has lagged behind, partly because countries there prefer deploying autonomous vehicles in mass transit. While Germany, the UK, and France have seen robotaxis running road tests, commercial operations remain a distant hope.
Lessons from Cruise’s fiasco
Cruise’s dreadful experience points to one major remaining roadblock for robotaxis: they still sometimes behave erratically. When a human driver (in a non-autonomous vehicle) hit a pedestrian in San Francisco in October and drove away from the scene, a passing Cruise car then ran over the victim and dragged her 20 feet before stopping.
“We are deeply concerned that more people will be killed, more first responders will be obstructed, more sudden stops will happen,” says Cathy Chase, president of Advocates for Highway and Auto Safety, an activist group based in Washington, DC. “We are not against autonomous vehicles. We are concerned about the unsafe deployment and a rush to the market at the expense of the traveling public.”
These companies are simply not reporting enough data to show us how safe their vehicles are, she says. While they are required to submit data to the National Highway Traffic Safety Administration, the data is heavily redacted in the name of protecting trade secrets before it’s released to the public. Some federal bills proposed in the last year, which haven’t passed, could even lighten these reporting requirements, Chase says.
“If there’s a silver lining in this accident, it’s that people were forced to reckon with the fact that these operations are not simple and not that straightforward,” Cummings says. It will likely cause the industry to rely more on remote human operators, something that could have changed the Cruise vehicle’s response in the October accident. But introducing more humans will further tip the balance away from profitability.
Meanwhile, Cruise was accused by the California Public Utilities Commission of misleading the public and regulators about its involvement in the incident. “If we cannot trust these companies, then they have no business on our roads,” says Smith.
A Cruise spokesperson told MIT Technology Review the company has no updates to share currently but pointed to a blog post from November saying it had hired third-party law firms and technology consultants to review the accident and Cruise’s responses to the regulators. In a settlement proposal to CPUC, Cruise also offered to share more data, including “collision reporting as well as regular reports detailing incidents involving stopped AVs.”
The future of Cruise remains unclear, and so does the company’s original plan to launch operations in several more cities soon. Meanwhile, though, Waymo is applying to expand its services in Los Angeles while taking its vehicles to the highways of Phoenix. Zoox, an autonomous-driving startup owned by Amazon, could launch commercial service in Las Vegas this year. For residents of these cities, more and more robotaxis may be on the road in 2024.
Correction: The story has been updated to clarify that Cruise’s October 2 accident was not fatal. The victim was hospitalized with serious injuries but survived.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.
It’s a turbulent time for offshore wind power.
Large groups of turbines installed along coastlines can harness the powerful, consistent winds that blow offshore. Given that 40% of the global population lives within 60 miles of the ocean, offshore wind farms can be a major boon to efforts to clean up the electricity supply around the world.
But in recent months, projects around the world have been delayed or even canceled as costs have skyrocketed and supply chain disruptions have swelled. These setbacks could spell trouble for efforts to cut the greenhouse-gas emissions that cause climate change.
The coming year and beyond will likely be littered with more delayed and canceled projects, but the industry is also seeing new starts and continuing technological development. The question is whether current troubles are more like a speed bump or a sign that 2024 will see the industry run off the road. Here’s what’s next for offshore wind power.
Speed bumps and setbacks
Wind giant Ørsted cited rising interest rates, high inflation, and supply chain bottlenecks in late October when it canceled its highly anticipated Ocean Wind 1 and Ocean Wind 2 projects. The two projects would have supplied just over 2.2 gigawatts to the New Jersey grid—enough to power over a million homes. Ørsted is one of the world’s leading offshore wind developers, and the company was included in MIT Technology Review’s list of 15 Climate Tech Companies to Watch in 2023.
The shuttered projects are far from the only setback for offshore wind in the US today—over 12 gigawatts’ worth of contracts were either canceled or targeted for renegotiation in 2023, according to analysis by BloombergNEF, an energy research group.
Part of the problem lies in how projects are typically built and financed, says Chelsea Jean-Michel, a wind analyst at BloombergNEF. After securing a place to build a wind farm, a developer sets up contracts to sell the electricity that will be generated by the turbines. That price gets locked in years before the project is finished. For projects getting underway now, contracts were generally negotiated in 2019 or 2020.
A lot has changed in just the past five years. Prices for steel, one of the most important materials in turbine construction, increased by over 50% from January 2019 through the end of 2022 in North America and northern Europe, according to a 2023 report from the American Clean Power Association.
Inflation has also increased the price for other materials, and higher interest rates mean that borrowing money is more expensive too. So now, developers are arguing that the prices they agreed to previously aren’t reasonable anymore.
China stands out in an otherwise struggling landscape. The country is now the world’s largest offshore wind market, accounting for nearly half of installed capacity globally. Quick development and rising competition have actually led to falling prices for some projects there.
Growing pains
While many projects around the world have seen setbacks over the last year, the problems are most concentrated in newer markets, including the US. Problems have continued since the New Jersey cancellations—in the first weeks of 2024, developers of several New York projects asked to renegotiate their contracts, which could delay progress even if those developments end up going ahead.
While over 10% of electricity in the US comes from wind power, the vast majority is generated by land-based turbines. The offshore wind market in the US is at least a decade behind the more established ones in countries like the UK and Denmark, says Walt Musial, chief engineer of offshore wind energy at the US National Renewable Energy Laboratory.
One open question over the next year will be how quickly the industry can increase the capacity to build and install wind turbines in the US. “The supply chain in the US for offshore wind is basically in its infancy. It doesn’t really exist,” Jean-Michel says.
That’s been a problem for some projects, especially when it comes to the ships needed to install wind turbines. One of the reasons Ørsted gave for canceling its New Jersey projects was a lack of these vessels.
The troubles have been complicated by a single century-old law, which mandates that only ships built and operated by US entities can operate from US ports. Projects in the US have worked around this restriction by operating from European ports and using large US barges offshore, but that can slow construction times significantly, Musial says.
Tax credits are providing extra incentive to build out the offshore wind supply chain in the US. The Inflation Reduction Act extends and expands existing credits for offshore wind projects, covering as much as 40% of the cost of building a new wind farm. However, to qualify for the full credit, projects will need to use domestically sourced materials. Strengthening the supply chain for those materials will be a long process, and the industry is still adjusting to existing conditions.
Still, there are some significant signs of progress for US offshore wind. The nation’s second large-scale offshore wind farm began producing electricity in early January. Several areas of seafloor are expected to go up for auction for new development in 2024, including sites in the central Atlantic and off the coast of Oregon. Sites off the coast of Maine are expected to be offered up the following year.
But even that forward momentum may not be enough for the nation to meet its offshore wind goals. While the Biden administration has set a target of 30 gigawatts of offshore wind capacity installed by the end of the decade, BloombergNEF’s projection is that the country will likely install around half that, with 16.4 gigawatts of capacity expected by 2030.
Technological transformation
While economic considerations will likely be a limiting factor in offshore wind this year, we’re also going to be on the lookout for technological developments in the industry.
Wind turbines still follow the same blueprint from decades ago, but they are being built bigger and bigger, and that trend is expected to continue. That’s because bigger turbines tend to be more efficient, capturing more energy at a lower cost.
A decade ago, the average offshore wind turbine had a capacity of around 4 megawatts. In 2022, that number was just under 8 MW. Now the major turbine manufacturers are making models in the 15 MW range. These monstrous structures are starting to rival major landmarks in size, with recent installations nearing the height of the Eiffel Tower.
In 2023, the wind giant Vestas tested a 15 MW model, which earned the distinction of being the world’s most powerful wind turbine. The company received certification for the design at the end of the year, and it will be used in a Danish wind farm that’s expected to begin construction in 2024.
In addition, we’ll likely see more development of the technology for floating offshore wind turbines. While most offshore turbines are fixed to the seabed, some areas, like the west coast of the US, have water too deep for that approach.
There’s a wide variety of platform designs for floating turbines, including versions resembling camera tripods, broom handles, and tires. It’s possible the industry will start to converge on one in the coming years, since standardization will help bring prices down, says BloombergNEF’s Jean-Michel. But whether that will be enough to continue the growth of this nascent industry will depend on how economic factors shake out. And it’s likely that floating projects will continue to make up less than 5% of offshore wind power installations, even a decade from now.
The winds of change are blowing for renewable energy around the world. Even with economic uncertainty ahead, offshore wind power will certainly be a technology to watch in 2024.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.
In 2023, AI policy and regulation went from a niche, nerdy topic to front-page news. This is partly thanks to OpenAI’s ChatGPT, which helped AI go mainstream, but which also exposed people to how AI systems work—and don’t work. It has been a monumental year for policy: we saw the first sweeping AI law agreed upon in the European Union, Senate hearings and executive orders in the US, and specific rules in China for things like recommender algorithms.
If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Here’s what to expect.
The United States
AI really entered the political conversation in the US in 2023. But it wasn’t just debate. There was also action, culminating in President Biden’s executive order on AI at the end of October—a sprawling directive calling for more transparency and new standards.
Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.
Next year will build on the momentum of 2023, and many items detailed in Biden’s executive order will be enacted. We’ll also be hearing a lot about the new US AI Safety Institute, which will be responsible for executing most of the policies called for in the order.
From a congressional standpoint, it’s not clear what exactly will happen. Senate Majority Leader Chuck Schumer recently signaled that new laws may be coming in addition to the executive order. There are already several legislative proposals in play that touch various aspects of AI, such as transparency, deepfakes, and platform accountability. But it’s not clear which, if any, of these already proposed bills will gain traction next year.
What we can expect, though, is an approach that grades types and uses of AI by how much risk they pose—a framework similar to the EU’s AI Act. The National Institute of Standards and Technology has already proposed such a framework that each sector and agency will now have to put into practice, says Chris Meserole, executive director of the Frontier Model Forum, an industry lobbying body.
Another thing is clear: the US presidential election in 2024 will color much of the discussion on AI regulation. As we have seen with generative AI’s impact on social media platforms and misinformation, the debate around preventing harms from this technology will likely be shaped by what happens during election season.
Europe
The European Union has just agreed on the AI Act, the world’s first sweeping AI law.
After intense technical tinkering and official approval by European countries and the EU Parliament in the first half of 2024, the AI Act will kick in fairly quickly. In the most optimistic scenario, bans on certain AI uses could apply as soon as the end of the year.
This all means 2024 will be a busy year for the AI sector as it prepares to comply with the new rules. Although most AI applications will get a free pass from the AI Act, companies developing foundation models and applications that are considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards. In Europe, police will not be allowed to use real-time facial recognition in public places unless they first get court approval for specific purposes such as fighting terrorism, preventing human trafficking, or finding a missing person.
Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. The AI Act will require companies to be more transparent about how they develop their models, and it will make them, and organizations using high-risk AI systems, more accountable for any harms that result.
Companies developing foundation models—models such as GPT-4, upon which other AI products are based—will have to comply with the law within one year of its entry into force. Other tech companies have two years to implement the rules.
To meet the new requirements, AI companies will have to be more thoughtful about how they build their systems and document their work more rigorously so it can be audited. The law will require companies to be more transparent about how their models have been trained, and it will ensure that AI systems deemed high-risk are trained and tested on sufficiently representative data sets, in order to minimize biases, for example.
The EU believes that the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, could pose a “systemic” risk to citizens and thus need additional work to meet EU standards. Companies must take steps to assess and mitigate risks and ensure that the systems are secure, and they will be required to report serious incidents and share details on their energy consumption. It will be up to companies to assess whether their models are powerful enough to fall into this category.
Open-source AI companies are exempt from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. Companies that fail to comply could face steep fines or have their products blocked from the EU.
The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year.
Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc.
Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI.
China
So far, AI regulation in China has been deeply fragmented and piecemeal. Rather than regulating AI as a whole, the country has released individual pieces of legislation whenever a new AI product becomes prominent. That’s why China has one set of rules for algorithmic recommendation services (TikTok-like apps and search engines), another for deepfakes, and yet another for generative AI.
The strength of this approach is that it allows Beijing to react quickly to risks emerging from advances in the technology—both for users and for the government. The problem is that it prevents a more long-term, panoramic perspective from developing.
That could change next year. In June 2023, China’s state council, the top governing body, announced that “an artificial intelligence law” is on its legislative agenda. This law would cover everything—like the AI Act for Europe. Because of its ambitious scope, it’s hard to say how long the legislative process will take. We might see a first draft in 2024, but it might take longer. In the interim, it won’t be surprising if Chinese internet regulators introduce new rules to deal with popular new AI tools or types of content that emerge next year.
So far, very little information about it has been released, but one document could help us predict the new law: scholars from the Chinese Academy of Social Sciences, a state-owned research institute, released an “expert suggestion” version of the Chinese AI law in August. This document proposes a “national AI office” to oversee the development of AI in China, demands a yearly independent “social responsibility report” on foundation models, and sets up a “negative list” of AI areas with higher risks, which companies can’t even research without government approval.
Chinese AI companies are already subject to plenty of regulation. In fact, any foundation model needs to be registered with the government before it can be released to the Chinese public (as of the end of 2023, 22 companies had registered their AI models).
This means that AI in China is no longer a Wild West environment. But exactly how these regulations will be enforced remains uncertain. In the coming year, generative-AI companies will have to try to figure out the compliance reality, especially around safety reviews and IP infringement.
At the same time, since foreign AI companies haven’t received any approval to release their products in China (and likely won’t in the future), the resulting domestic commercial environment protects Chinese companies. It may help them gain an edge against Western AI companies, but it may also stifle competition and reinforce China’s control of online speech.
The rest of the world
We’re likely to see more AI regulations introduced in other parts of the world throughout the next year. One region to watch will be Africa. The African Union is likely to release an AI strategy for the continent early in 2024, meant to establish policies that individual countries can replicate to compete in AI and protect African consumers from Western tech companies, says Melody Musoni, a policy officer at the European Centre for Development Policy Management.
Some countries, like Rwanda, Nigeria, and South Africa, have already drafted national AI strategies and are working to develop education programs, computing power, and industry-friendly policies to support AI companies. Global bodies like the UN, OECD, G20, and regional alliances have started to create working groups, advisory boards, principles, standards, and statements about AI. Groups like the OECD may prove useful in creating regulatory consistency across different regions, which could ease the burden of compliance for AI companies.
Geopolitically, we’re likely to see growing differences between how democratic and authoritarian countries foster—and weaponize—their AI industries. It will be interesting to see to what extent AI companies prioritize global expansion or domestic specialization in 2024. They might have to make some tough decisions.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.
It can be difficult to wrap your brain around the number-crunching capability of the world’s fastest supercomputer. But computer scientist Jack Dongarra, of the University of Tennessee, puts it this way: “If everybody on Earth were to do one calculation per second, it would take four years to equal what that computer can do in one second.”
The supercomputer in question is called Frontier. It takes up the space of two tennis courts at Oak Ridge National Laboratory in the eastern Tennessee hills, where it was unveiled in May 2022.
Here are some more specs: Frontier uses approximately 50,000 processors, compared with the most powerful laptop’s 16 or 24. It consumes 20 million watts, compared with a laptop’s 65 or so. It cost $600 million to build.
When Frontier came online, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10^18) floating-point operations a second. Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.
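Dongarra’s comparison is easy to check with back-of-envelope arithmetic. The sketch below assumes a world population of roughly 8 billion, a figure that is not in the article:

```python
# Sanity check on Jack Dongarra's comparison: an exascale machine performs
# 10^18 floating-point operations per second. If everyone on Earth did one
# calculation per second, how long would it take to match one Frontier-second?
# The ~8 billion world population is an assumed figure, not from the article.

EXAFLOP = 1e18                    # operations per second for an exascale machine
POPULATION = 8e9                  # rough world population (assumption)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

seconds_needed = EXAFLOP / POPULATION            # ~1.25e8 seconds of human effort
years_needed = seconds_needed / SECONDS_PER_YEAR

print(f"{years_needed:.1f} years")  # prints: 4.0 years
```

The result lands almost exactly on the four years Dongarra cites.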
But speed itself isn’t the endgame. Researchers are building exascale computers to explore previously inaccessible science and engineering questions in biology, climate, astronomy, and other fields. In the next few years, scientists will use Frontier to run the most complicated computer simulations humans have ever devised. They hope to pursue yet unanswered questions about nature and to design new technologies in areas from transportation to medicine.
Evan Schneider of the University of Pittsburgh, for example, is using Frontier to run simulations of how our galaxy has evolved over time. In particular, she’s interested in the flow of gas in and out of the Milky Way. A galaxy breathes, in a way: gas flows into it, coalescing via gravity into stars, but gas also flows out—for example, when stars explode and release matter. Schneider studies the mechanisms by which galaxies exhale. “We can compare the simulations to the real observed universe, and that gives us a sense of whether we’re getting the physics right,” Schneider says.
Schneider is using Frontier to build a computer model of the Milky Way with high enough resolution to zoom in on individual exploding stars. That means the model must capture large-scale properties of our galaxy at 100,000 light-years, as well as properties of the supernovas at about 10 light-years across. “That really hasn’t been done,” she says. To get a sense of what that resolution means, it would be analogous to creating a physically accurate model of a can of beer along with the individual yeast cells within it, and the interactions at each scale in between.
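The analogy holds up numerically: the galaxy model and the beer can both span roughly four orders of magnitude between their largest and smallest scales. The can and yeast-cell sizes below are rough assumed values, not figures from the article:

```python
# Compare the dynamic range (largest scale / smallest scale) of the galaxy
# simulation with that of the beer-can analogy.
# Can height (~12 cm) and yeast cell diameter (~10 micrometers) are
# rough assumptions for illustration.

galaxy_ly = 100_000     # Milky Way extent, in light-years
supernova_ly = 10       # supernova region, in light-years

can_m = 0.12            # beer can height, in meters (assumption)
yeast_m = 10e-6         # yeast cell diameter, in meters (assumption)

print(f"galaxy dynamic range:   {galaxy_ly / supernova_ly:.0e}")  # 1e+04
print(f"beer-can dynamic range: {can_m / yeast_m:.0e}")           # 1e+04
```

Both ratios come out around 10,000 to 1, which is what makes the simulation so demanding: every factor of 10 in resolved scale multiplies the work.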
Stephan Priebe, a senior engineer at GE, is using Frontier to simulate the aerodynamics of the next generation of airplane designs. To increase fuel efficiency, GE is investigating an engine design known as an “open fan architecture.” Jet engines use fans to generate thrust, and larger fans mean higher efficiency. To make fans even larger, engineers have proposed removing the outer structural frame, known as the nacelle, so that the blades are exposed as in a pinwheel. “The simulations allow us to obtain a detailed view of the aerodynamic performance early in the design phase,” says Priebe. They give engineers insight into how to shape the fan blades for better aerodynamics, for example, or to make them quieter.
Frontier will particularly benefit Priebe’s studies of turbulence, the chaotic motion of a disturbed fluid—in this case, air—around the fan. Turbulence is a common phenomenon. We see it in the crashing of ocean waves and in the curl of smoke rising from an extinguished candle. But scientists still struggle to predict how exactly a turbulent fluid will flow. That is because it moves in response to both macroscopic influences, such as pressure and temperature changes, and microscopic influences, such as the rubbing of individual molecules of nitrogen in the air against one another. The interplay of forces on multiple scales complicates the motion.
“In graduate school, [a professor] once told me, ‘Bronson, if anybody tells you that they understand turbulence, you should put one hand on your wallet and back out of the room, because they’re trying to sell you something,’” says astrophysicist Bronson Messer, the director of science at Oak Ridge Leadership Computing Facility, which houses Frontier. “Nobody understands turbulence. It really is the last great classical physics problem.”
These scientific studies illustrate the distinct forte of supercomputers: simulating physical objects at multiple scales simultaneously. Other applications echo this theme. Frontier enables more accurate climate models, which have to simulate weather at different spatial scales across the entire planet and also on both long and short time scales. Physicists can also simulate nuclear fusion, the turbulent process in which the sun generates energy by pushing atoms together to form different elements. They want to better understand the process in order to develop fusion as a clean energy technology. While these sorts of multi-scale simulations have been a staple of supercomputing for many years, Frontier can incorporate a wider range of different scales than ever before.
To use Frontier, approved scientists log in to the supercomputer remotely, submitting their jobs over the internet. To make the most of the machine, Oak Ridge aims to have around 90% of the supercomputer’s processors running computations 24 hours a day, seven days a week. “We enter this sort of steady state where we’re constantly doing scientific simulations for a handful of years,” says Messer. Users keep their data at Oak Ridge in a data storage facility that can store up to 700 petabytes, the equivalent of about 700,000 portable hard drives.
While Frontier is the first exascale supercomputer, more are coming down the line. In the US, researchers are currently installing two machines that will be capable of more than two exaflops: Aurora, at Argonne National Laboratory in Illinois, and El Capitan, at Lawrence Livermore National Laboratory in California. Beginning in early 2024, scientists plan to use Aurora to create maps of neurons in the brain and search for catalysts that could make industrial processes such as fertilizer production more efficient. El Capitan, also slated to come online in 2024, will simulate nuclear weapons in order to help the government to maintain its stockpile without weapons testing. Meanwhile, Europe plans to deploy its first exascale supercomputer, Jupiter, in late 2024.
China purportedly has exascale supercomputers as well, but it has not released results from standard benchmark tests of their performance, so the computers do not appear on the TOP500, a semiannual list of the fastest supercomputers. “The Chinese are concerned about the US imposing further limits in terms of technology going to China, and they’re reluctant to disclose how many of these high-performance machines are available,” says Dongarra, who designed the benchmark that supercomputers must run for TOP500.
The hunger for more computing power doesn’t stop with the exascale. Oak Ridge is already considering the next generation of computers, says Messer. These would have three to five times the computational power of Frontier. But one major challenge looms: the massive energy footprint. The power that Frontier draws, even when it is idling, is enough to run thousands of homes. “It’s probably not sustainable for us to just grow machines bigger and bigger,” says Messer.
As Oak Ridge has built progressively larger supercomputers, engineers have worked to improve the machines’ efficiency with innovations including a new cooling method. Summit, the predecessor to Frontier that is still running at Oak Ridge, expends about 10% of its total energy usage to cool itself. By comparison, 3% to 4% of Frontier’s energy consumption is for cooling. This improvement came from using water at ambient temperature to cool the supercomputer, rather than chilled water.
Next-generation supercomputers would be able to simulate even more scales simultaneously. For example, with Frontier, Schneider’s galaxy simulation has resolution down to the tens of light-years. That’s still not quite enough to get down to the scale of individual supernovas, so researchers must simulate the individual explosions separately. A future supercomputer may be able to unite all these scales.
By simulating the complexity of nature and technology more realistically, these supercomputers push the limits of science. A more realistic galaxy simulation brings the vastness of the universe to scientists’ fingertips. A precise model of air turbulence around an airplane fan circumvents the need to build a prohibitively expensive wind tunnel. Better climate models allow scientists to predict the fate of our planet. In other words, they give us a new tool to prepare for an uncertain future.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.
It can be difficult to wrap your brain around the number-crunching capability of the world’s fastest supercomputer. But computer scientist Jack Dongarra, of the University of Tennessee, puts it this way: “If everybody on Earth were to do one calculation per second, it would take four years to equal what that computer can do in one second.”
The supercomputer in question is called Frontier. It takes up the space of two tennis courts at Oak Ridge National Laboratory in the eastern Tennessee hills, where it was unveiled in May 2022.
Here are some more specs: Frontier uses approximately 50,000 processors, compared with the most powerful laptop’s 16 or 24. It consumes 20 million watts, compared with a laptop’s 65 or so. It cost $600 million to build.
When Frontier came online, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10¹⁸) floating point operations a second. Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.
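Dongarra's comparison is easy to sanity-check with back-of-envelope arithmetic. A quick sketch, assuming a world population of roughly 8 billion (a figure not stated in the article):

```python
# Sanity check of Dongarra's comparison: everyone on Earth doing one
# calculation per second for four years vs. one second of exascale computing.
WORLD_POPULATION = 8e9            # people; rough assumption
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# One calculation per second, per person, for four years.
human_calcs = WORLD_POPULATION * 4 * SECONDS_PER_YEAR

# One exaflop-second: a quintillion floating point operations.
exaflop = 1e18

print(f"Humanity over 4 years: {human_calcs:.2e} calculations")
print(f"Frontier in 1 second:  {exaflop:.2e} operations")
```

The two figures agree to within a few percent, which is why the four-year framing works so neatly.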
But speed itself isn’t the endgame. Researchers are building exascale computers to explore previously inaccessible science and engineering questions in biology, climate, astronomy, and other fields. In the next few years, scientists will use Frontier to run the most complicated computer simulations humans have ever devised. They hope to pursue yet unanswered questions about nature and to design new technologies in areas from transportation to medicine.
Evan Schneider of the University of Pittsburgh, for example, is using Frontier to run simulations of how our galaxy has evolved over time. In particular, she’s interested in the flow of gas in and out of the Milky Way. A galaxy breathes, in a way: gas flows into it, coalescing via gravity into stars, but gas also flows out—for example, when stars explode and release matter. Schneider studies the mechanisms by which galaxies exhale. “We can compare the simulations to the real observed universe, and that gives us a sense of whether we’re getting the physics right,” Schneider says.
Schneider is using Frontier to build a computer model of the Milky Way with high enough resolution to zoom in on individual exploding stars. That means the model must capture large-scale properties of our galaxy at 100,000 light-years, as well as properties of the supernovas at about 10 light-years across. “That really hasn’t been done,” she says. To get a sense of what that resolution means, it would be analogous to creating a physically accurate model of a can of beer along with the individual yeast cells within it, and the interactions at each scale in between.
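The difficulty here is dynamic range: the simulation must span both scales at once. A rough calculation, using the article's galaxy figures plus assumed sizes for the beer can and yeast cell (which are illustrative, not from the article), shows why the analogy holds:

```python
# Dynamic range of Schneider's simulation: galaxy scale vs. supernova scale.
galaxy_ly = 100_000        # Milky Way diameter in light-years (from the article)
supernova_ly = 10          # supernova scale in light-years (from the article)
galaxy_range = galaxy_ly / supernova_ly   # ~10,000x per spatial dimension

# Beer-can analogy; these sizes are rough assumptions:
can_m = 0.12               # a can is roughly 12 cm tall
yeast_m = 10e-6            # a yeast cell is roughly 10 micrometers across
can_range = can_m / yeast_m               # ~12,000x

print(f"Galaxy-to-supernova ratio: {galaxy_range:,.0f}")
print(f"Can-to-yeast ratio:        {can_range:,.0f}")
```

Both span roughly four orders of magnitude per dimension, and resolving every intermediate scale in three dimensions is what demands exascale hardware.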
Stephan Priebe, a senior engineer at GE, is using Frontier to simulate the aerodynamics of the next generation of airplane designs. To increase fuel efficiency, GE is investigating an engine design known as an “open fan architecture.” Jet engines use fans to generate thrust, and larger fans mean higher efficiency. To make fans even larger, engineers have proposed removing the outer structural frame, known as the nacelle, so that the blades are exposed as in a pinwheel. “The simulations allow us to obtain a detailed view of the aerodynamic performance early in the design phase,” says Priebe. They give engineers insight into how to shape the fan blades for better aerodynamics, for example, or to make them quieter.
Frontier will particularly benefit Priebe’s studies of turbulence, the chaotic motion of a disturbed fluid—in this case, air—around the fan. Turbulence is a common phenomenon. We see it in the crashing of ocean waves and in the curl of smoke rising from an extinguished candle. But scientists still struggle to predict how exactly a turbulent fluid will flow. That is because it moves in response to both macroscopic influences, such as pressure and temperature changes, and microscopic influences, such as the rubbing of individual molecules of nitrogen in the air against one another. The interplay of forces on multiple scales complicates the motion.
“In graduate school, [a professor] once told me, ‘Bronson, if anybody tells you that they understand turbulence, you should put one hand on your wallet and back out of the room, because they’re trying to sell you something,’” says astrophysicist Bronson Messer, the director of science at Oak Ridge Leadership Computing Facility, which houses Frontier. “Nobody understands turbulence. It really is the last great classical physics problem.”
These scientific studies illustrate the distinct forte of supercomputers: simulating physical objects at multiple scales simultaneously. Other applications echo this theme. Frontier enables more accurate climate models, which have to simulate weather at different spatial scales across the entire planet and also on both long and short time scales. Physicists can also simulate nuclear fusion, the turbulent process in which the sun generates energy by pushing atoms together to form different elements. They want to better understand the process in order to develop fusion as a clean energy technology. While these sorts of multi-scale simulations have been a staple of supercomputing for many years, Frontier can incorporate a wider range of different scales than ever before.
To use Frontier, approved scientists log in to the supercomputer remotely, submitting their jobs over the internet. To make the most of the machine, Oak Ridge aims to have around 90% of the supercomputer’s processors running computations 24 hours a day, seven days a week. “We enter this sort of steady state where we’re constantly doing scientific simulations for a handful of years,” says Messer. Users keep their data at Oak Ridge in a data storage facility that can store up to 700 petabytes, the equivalent of about 700,000 portable hard drives.
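The storage comparison implies a drive size. A minimal sketch, assuming 1 TB portable drives (the capacity the article's arithmetic suggests, though it isn't stated):

```python
# Back-of-envelope check: 700 petabytes vs. 700,000 portable hard drives.
PB_TO_TB = 1_000                 # 1 petabyte = 1,000 terabytes (decimal units)
DRIVE_TB = 1                     # assumed capacity of one portable drive

capacity_tb = 700 * PB_TO_TB     # Oak Ridge storage facility, in terabytes
drives = capacity_tb / DRIVE_TB

print(f"{drives:,.0f} drives")   # 700,000
```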
While Frontier is the first exascale supercomputer, more are coming down the line. In the US, researchers are currently installing two machines that will be capable of more than two exaflops: Aurora, at Argonne National Laboratory in Illinois, and El Capitan, at Lawrence Livermore National Laboratory in California. Beginning in early 2024, scientists plan to use Aurora to create maps of neurons in the brain and search for catalysts that could make industrial processes such as fertilizer production more efficient. El Capitan, also slated to come online in 2024, will simulate nuclear weapons in order to help the government maintain its stockpile without weapons testing. Meanwhile, Europe plans to deploy its first exascale supercomputer, Jupiter, in late 2024.
China purportedly has exascale supercomputers as well, but it has not released results from standard benchmark tests of their performance, so the computers do not appear on the TOP500, a semiannual list of the fastest supercomputers. “The Chinese are concerned about the US imposing further limits in terms of technology going to China, and they’re reluctant to disclose how many of these high-performance machines are available,” says Dongarra, who designed the benchmark that supercomputers must run for TOP500.
The hunger for more computing power doesn’t stop with the exascale. Oak Ridge is already considering the next generation of computers, says Messer. These would have three to five times the computational power of Frontier. But one major challenge looms: the massive energy footprint. The power that Frontier draws, even when it is idling, is enough to run thousands of homes. “It’s probably not sustainable for us to just grow machines bigger and bigger,” says Messer.
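The scale of that footprint follows from numbers earlier in the article. A rough estimate, using Frontier's stated 20-megawatt draw and an assumed average continuous household draw of about 1.2 kilowatts (a typical US figure, not from the article):

```python
# Rough check of "enough to run thousands of homes".
FRONTIER_WATTS = 20e6        # Frontier's full power draw (from the article)
AVG_HOME_WATTS = 1.2e3       # assumed average continuous US household draw

homes_at_full_power = FRONTIER_WATTS / AVG_HOME_WATTS
print(f"~{homes_at_full_power:,.0f} homes at full power")
```

Even if an idling Frontier draws only a fraction of its full power, that fraction still amounts to thousands of homes.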
As Oak Ridge has built progressively larger supercomputers, engineers have worked to improve the machines’ efficiency with innovations including a new cooling method. Summit, the predecessor to Frontier that is still running at Oak Ridge, expends about 10% of its total energy usage to cool itself. By comparison, 3% to 4% of Frontier’s energy consumption is for cooling. This improvement came from using water at ambient temperature to cool the supercomputer, rather than chilled water.
Next-generation supercomputers would be able to simulate even more scales simultaneously. For example, with Frontier, Schneider’s galaxy simulation has resolution down to the tens of light-years. That’s still not quite enough to get down to the scale of individual supernovas, so researchers must simulate the individual explosions separately. A future supercomputer may be able to unite all these scales.
By simulating the complexity of nature and technology more realistically, these supercomputers push the limits of science. A more realistic galaxy simulation brings the vastness of the universe to scientists’ fingertips. A precise model of air turbulence around an airplane fan circumvents the need to build a prohibitively expensive wind tunnel. Better climate models allow scientists to predict the fate of our planet. In other words, they give us a new tool to prepare for an uncertain future.