What’s next for generative video

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast. 

The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long.

Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. Runway’s latest models can produce short clips that rival those made by blockbuster animation studios. Midjourney and Stability AI, the firms behind two of the most popular text-to-image models, are now working on video as well.

A number of companies are racing to build businesses on the back of these breakthroughs. Most are still figuring out what those businesses are as they go. “I’ll routinely scream, ‘Holy cow, that is wicked cool’ while playing with these tools,” says Gary Lipkowitz, CEO of Vyond, a firm that provides a point-and-click platform for putting together short animated videos. “But how can you use this at work?”

Whatever the answer to that question, it will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix.

As we continue to get to grips with what’s ahead—good and bad—here are four things to think about. We’ve also curated a selection of the best videos filmmakers have made using this technology, including an exclusive reveal of “Somme Requiem,” an experimental short film by Los Angeles–based production company Myles. Read on for a taste of where AI moviemaking is headed.

1. Sora is just the start

OpenAI’s Sora is currently head and shoulders above the competition in video generation. But other companies are working hard to catch up. The market is going to get extremely crowded over the next few months as more firms refine their technology and start rolling out Sora’s rivals.

The UK-based startup Haiper came out of stealth this month. It was founded in 2021 by former Google DeepMind and TikTok researchers who wanted to work on technology called neural radiance fields, or NeRF, which can transform 2D images into 3D virtual environments. They thought a tool that turned snapshots into scenes users could step into would be useful for making video games.

But six months ago, Haiper pivoted from virtual environments to video clips, adapting its technology to fit what CEO Yishu Miao believes will be an even bigger market than games. “We realized that video generation was the sweet spot,” says Miao. “There will be a super-high demand for it.”

“Air Head” is a short film made by Shy Kids, a pop band and filmmaking collective based in Toronto, using Sora.

Like OpenAI’s Sora, Haiper’s generative video tech uses a diffusion model to manage the visuals and a transformer (the component in large language models like GPT-4 that makes them so good at predicting what comes next) to manage the consistency between frames. “Videos are sequences of data, and transformers are the best model to learn sequences,” says Miao.
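
To make that division of labor concrete, here is a minimal sketch of the idea in PyTorch: a transformer attends across a sequence of frame latents (the consistency part), while a diffusion-style head is trained to predict the noise added to each frame (the visuals part). Everything here—the shapes, the names, the single fixed noise level—is our own invention for illustration, not Haiper’s or OpenAI’s actual architecture.

```python
# Toy video-denoising model: a transformer over frame latents for temporal
# consistency, trained diffusion-style to predict per-frame noise.
# All dimensions and names here are invented for illustration.
import torch
import torch.nn as nn

class ToyVideoDenoiser(nn.Module):
    def __init__(self, latent_dim=64, n_frames=16):
        super().__init__()
        # A learned embedding gives each frame a notion of its place in time.
        self.time_pos = nn.Parameter(torch.zeros(1, n_frames, latent_dim))
        # Self-attention across frames is what ties consecutive frames together.
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=4,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # The head predicts the noise that was added to each frame latent.
        self.noise_head = nn.Linear(latent_dim, latent_dim)

    def forward(self, noisy_latents):  # shape: (batch, frames, latent_dim)
        h = self.temporal(noisy_latents + self.time_pos)
        return self.noise_head(h)

# One diffusion-style training step on random stand-in data.
model = ToyVideoDenoiser()
clean = torch.randn(8, 16, 64)   # pretend these are encoded video frames
noise = torch.randn_like(clean)
noisy = clean + 0.5 * noise      # a single fixed noise level, for simplicity
loss = nn.functional.mse_loss(model(noisy), noise)
loss.backward()
print(f"training loss: {loss.item():.3f}")
```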

Consistency is a big challenge for generative video and the main reason existing tools produce just a few seconds of video at a time. Transformers for video generation can boost the quality and length of the clips. The downside is that transformers make stuff up, or hallucinate. In text, this is not always obvious. In video, it can result in, say, a person with multiple heads. Keeping transformers on track requires vast silos of training data and warehouses full of computers.

That’s why Irreverent Labs, founded by former Microsoft researchers, is taking a different approach. Like Haiper, Irreverent Labs started out generating environments for games before switching to full video generation. But the company doesn’t want to follow the herd by copying what OpenAI and others are doing. “Because then it’s a battle of compute, a total GPU war,” says David Raskino, Irreverent’s cofounder and CTO. “And there’s only one winner in that scenario, and he wears a leather jacket.” (He’s talking about Jensen Huang, CEO of the trillion-dollar chip giant Nvidia.)

Instead of using a transformer, Irreverent’s tech combines a diffusion model with a model that predicts what’s in the next frame on the basis of common-sense physics, such as how a ball bounces or how water splashes on the floor. Raskino says this approach reduces both training costs and the number of hallucinations. The model still produces glitches, but they are distortions of physics (like a bouncing ball not following a smooth curve, for example) with known mathematical fixes that can be applied to the video after it is generated, he says.
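
For a flavor of what such a fix can look like, here is a minimal sketch of our own (not Irreverent Labs’ actual method): between bounces, a ball’s height under constant gravity should trace a parabola, so a jittery generated trajectory can be snapped onto the closest quadratic with a least-squares fit.

```python
# Post-hoc physics fix, toy version: project a glitchy ball trajectory onto
# the nearest parabola, since free flight under gravity is quadratic in time.
import numpy as np

t = np.linspace(0.0, 1.0, 30)                  # frame timestamps in seconds
ideal = 5.0 * t - 4.9 * t**2                   # a clean ballistic arc
generated = ideal + np.random.normal(0, 0.05, t.size)  # the model's jittery output

coeffs = np.polyfit(t, generated, deg=2)       # least-squares quadratic fit
fixed = np.polyval(coeffs, t)                  # smooth, physically plausible arc

print(f"largest per-frame correction: {np.max(np.abs(fixed - generated)):.3f}")
```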

Which approach will last remains to be seen. Miao compares today’s technology to large language models circa GPT-2. Five years ago, OpenAI’s groundbreaking early model amazed people because it showed what was possible. But it took several more years for the technology to become a game-changer.

It’s the same with video, says Miao: “We’re all at the bottom of the mountain.”

2. What will people do with generative video? 

Video is the medium of the internet. YouTube, TikTok, newsreels, ads: expect to see synthetic video popping up everywhere there’s video already.

The marketing industry is one of the most enthusiastic adopters of generative technology. Two-thirds of marketing professionals have experimented with generative AI in their jobs, according to a recent survey Adobe carried out in the US, with more than half saying they have used the technology to produce images.

Generative video is next. A few marketing firms have already put out short films to demonstrate the technology’s potential. The latest example is the 2.5-minute-long “Somme Requiem,” made by Myles. You can watch the film below in an exclusive reveal from MIT Technology Review.

“Somme Requiem” is a short film made by Los Angeles production company Myles. Every shot was generated using Runway’s Gen 2 model. The clips were then edited together by a team of video editors at Myles.

“Somme Requiem” depicts snowbound soldiers during the World War I Christmas ceasefire in 1914. The film is made up of dozens of different shots that were produced using a generative video model from Runway, then stitched together, color-corrected, and set to music by human video editors at Myles. “The future of storytelling will be a hybrid workflow,” says founder and CEO Josh Kahn.

Kahn picked the period wartime setting to make a point. He notes that the Apple TV+ series Masters of the Air, which follows a group of World War II airmen, cost $250 million. The team behind Peter Jackson’s World War I documentary They Shall Not Grow Old spent four years curating and restoring more than 100 hours of archival film. “Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” says Kahn.

“Independent filmmaking has been kind of dying,” he adds. “I think this will create an incredible resurgence.”

Raskino hopes so. “The horror movie genre is where people test new things, to try new things until they break,” he says. “I think we’re going to see a blockbuster horror movie created by, like, four people in a basement somewhere using AI.”

So is generative video a Hollywood-killer? Not yet. The scene-setting shots in “Somme Requiem”—empty woods, a desolate military camp—look great. But the people in it are still afflicted with mangled fingers and distorted faces, hallmarks of the technology. Generative video is best at wide-angle pans or lingering close-ups, which create an eerie atmosphere but little action. If “Somme Requiem” were any longer, it would get dull.

But scene-setting shots pop up all the time in feature-length movies. Most are just a few seconds long, but they can take hours to film. Raskino suggests that generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.

Michal Pechoucek, CTO at Gen Digital, the cybersecurity giant behind a range of antivirus brands including Norton and Avast, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”

We’re not there quite yet. A big problem with generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is even more risky.

“Right now it’s still fun, you get a-ha moments,” says Miao. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”

That’s why Vyond’s Lipkowitz thinks the technology isn’t yet ready for most corporate clients. These users want a lot more control over the look of a video than current tools give them, he says.

Thousands of companies around the world, including around 65% of the Fortune 500 firms, use Vyond’s platform to create animated videos for in-house communications, training, marketing, and more. Vyond draws on a range of generative models, including text-to-image and text-to-voice, but provides a simple drag-and-drop interface that lets users put together a video by hand, piece by piece, rather than generate a full clip with a click.

Running a generative model is like rolling dice, says Lipkowitz. “This is a hard no for most video production teams, particularly in the enterprise sector where everything must be pixel-perfect and on brand,” he says. “If the video turns out bad—maybe the characters have too many fingers, or maybe there is a company logo that is the wrong color—well, unlucky, that’s just how gen AI works.”

The solution? More data, more training, repeat. “I wish I could point to some sophisticated algorithms,” says Miao. “But no, it’s just a lot more learning.”

3. Misinformation isn’t new, but deepfakes will make it worse

Online misinformation has been undermining our faith in the media, in institutions, and in each other for years. Some fear that adding fake video to the mix will destroy whatever pillars of shared reality we have left.

“We are replacing trust with mistrust, confusion, fear, and hate,” says Pechoucek. “Society without ground truth will degenerate.”

Pechoucek is especially worried about the malicious use of deepfakes in elections. During last year’s elections in Slovakia, for example, attackers shared a fake video that showed the leading candidate discussing plans to manipulate voters. The video was low quality and easy to spot as a deepfake. But Pechoucek believes it was enough to turn the result in favor of the other candidate.

“Adventurous Puppies” is a short clip made by OpenAI using Sora.

John Wissinger, who leads the strategy and innovation teams at Blackbird AI, a firm that tracks and manages the spread of misinformation online, believes fake video will be most persuasive when it blends real and fake footage. Take two videos showing President Joe Biden walking across a stage. In one he stumbles, in the other he doesn’t. Who is to say which is real?

“Let’s say an event actually occurred, but the way it’s presented to me is subtly different,” says Wissinger. “That can affect my emotional response to it.” As Pechoucek noted, a fake video doesn’t even need to be that good to make an impact. A bad fake that fits existing biases will do more damage than a slick fake that doesn’t, says Wissinger.

That’s why Blackbird focuses on who is sharing what with whom. In some sense, whether something is true or false is less important than where it came from and how it is being spread, says Wissinger. His company already tracks low-tech misinformation, such as social media posts showing real images out of context. Generative technologies make things worse, but the problem of people presenting media in misleading ways, deliberately or otherwise, is not new, he says.

Throw bots into the mix, sharing and promoting misinformation on social networks, and things get messy. Just knowing that fake media is out there will sow seeds of doubt into bad-faith discourse. “You can see how pretty soon it could become impossible to discern between what’s synthesized and what’s real anymore,” says Wissinger.

4. We are facing a new online reality

Fakes will soon be everywhere, from disinformation campaigns to ad spots to Hollywood blockbusters. So what can we do to figure out what’s real and what’s just fantasy? There is a range of solutions, but none will work on its own.

The tech industry is working on the problem. Most generative tools try to enforce certain terms of use, such as preventing people from creating videos of public figures. But there are ways to bypass these filters, and open-source versions of the tools may come with more permissive policies.

Companies are also developing standards for watermarking AI-generated media and tools for detecting it. But not all tools will add watermarks, and watermarks can be stripped from a video’s metadata. No reliable detection tool exists either. Even if such tools worked, they would become part of a cat-and-mouse game of trying to keep up with advances in the models they are designed to police.

Online platforms like X and Facebook have poor track records when it comes to moderation. We should not expect them to do better once the problem gets harder. Miao used to work at TikTok, where he helped build a moderation tool that detects video uploads that violate TikTok’s terms of use. Even he is wary of what’s coming: “There’s real danger out there,” he says. “Don’t trust things that you see on your laptop.” 

Blackbird has developed a tool called Compass, which lets you fact-check articles and social media posts. Paste a link into the tool and a large language model generates a blurb drawn from trusted online sources (these are always open to review, says Wissinger) that gives some context for the linked material. The result is very similar to the community notes that sometimes get attached to controversial posts on sites like X, Facebook, and Instagram. The company envisions having Compass generate community notes for anything. “We’re working on it,” says Wissinger.

But people who put links into a fact-checking website are already pretty savvy—and many others may not know such tools exist, or may not be inclined to trust them. Misinformation also tends to travel far wider than any subsequent correction.

In the meantime, people disagree on whose problem this is in the first place. Pechoucek says tech companies need to open up their software to allow for more competition around safety and trust. That would also let cybersecurity firms like his develop third-party software to police this tech. It’s what happened 30 years ago when Windows had a malware problem, he says: “Microsoft let antivirus firms in to help protect Windows. As a result, the online world became a safer place.”

But Pechoucek isn’t too optimistic. “Technology developers need to build their tools with safety as the top objective,” he says. “But more people think about how to make the technology more powerful than worry about how to make it more safe.”

Made by OpenAI using Sora.

There’s a common fatalistic refrain in the tech industry: change is coming, deal with it. “Generative AI is not going to get uninvented,” says Raskino. “This may not be very popular, but I think it’s true: I don’t think tech companies can bear the full burden. At the end of the day, the best defense against any technology is a very well-educated public. There’s no shortcut.”

Miao agrees. “It’s inevitable that we will massively adopt generative technology,” he says. “But it’s also the responsibility of the whole of society. We need to educate people.” 

“Technology will move forward, and we need to be prepared for this change,” he adds. “We need to remind our parents, our friends, that the things they see on their screen might not be authentic.” This is especially true for older generations, he says: “Our parents need to be aware of this kind of danger. I think everyone should work together.”

We’ll need to work together quickly. When Sora came out a month ago, the tech world was stunned by how quickly generative video had progressed. But the vast majority of people have no idea this kind of technology even exists, says Wissinger: “They certainly don’t understand the trend lines that we’re on. I think it’s going to catch the world by storm.”

What’s next for robotaxis in 2024

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

In 2023, it almost felt as if the promise of robotaxis was soon to be fulfilled. Hailing a robotaxi had briefly become the new trendy thing to do in San Francisco, as simple and everyday as ordering a delivery via app. However, that dream crashed and burned in October, when a serious accident in downtown San Francisco involving a vehicle belonging to Cruise, one of the leading US robotaxi companies, ignited distrust, casting a long shadow over the technology’s future. 

Following that and another accident, the state of California suspended Cruise’s operations there indefinitely, and the National Highway Traffic Safety Administration launched an investigation of the company. Since then, Cruise has pulled all its vehicles from the road and laid off 24% of its workforce.

Despite that, other robotaxi companies are still forging ahead. In half a dozen cities in the US and China, fleets of robotaxis run by companies such as Waymo and Baidu are still serving anyone who would like to try them. Regulators in places like San Francisco, Phoenix, Beijing, and Shanghai now allow these vehicles to drive without human safety operators. 

However, other perils loom. Robotaxi companies need to make a return on the vast sums that have been invested into getting them up and running. Until robotaxis become cheaper, they can’t meaningfully compete with conventional taxis and Uber. Yet at the same time, if companies try to increase adoption too fast, they risk following in Cruise’s footsteps. Waymo, another major robotaxi operator, has been going more slowly and cautiously. But no one is immune to accidents. 

“If they have an accident, it’s going to be big news, and it will hurt everyone,” says Missy Cummings, a professor and director of the Mason Autonomy and Robotics Center at George Mason University. “That’s the big lesson of this year. The whole industry is on thin ice.”

MIT Technology Review talked to experts about how to understand the challenges facing the robotaxi industry. Here’s how they expect it to change in 2024.

Money, money, money

After years of testing robotaxis on the road, companies have demonstrated that a version of the autonomous driving technology is ready today, though with some heavy asterisks. They operate only within strict, pre-set geographical boundaries; while some cars no longer have a human operator in the driver’s seat, they still require remote operators to take control in case of emergencies; and they are limited to warmer climates, because snow can be challenging for the cars’ cameras and sensors. 

“From what has been disclosed publicly, these systems still rely on some remote human supervision to operate safely. This is why I am calling them automated rather than autonomous,” says Ramanarayan Vasudevan, an associate professor of robotics and mechanical engineering at the University of Michigan.

The problem is that this version of automated driving is much more costly than traditional taxis. A robotaxi ride can be “several orders of magnitude more expensive than what it costs other taxi companies,” he says. “Unfortunately I don’t think the technology will dramatically change in the coming year to really drive down that cost.”

That higher ticket price will inevitably suppress demand. If robotaxi operators want to keep customers—not just riders curious to try the service for the first time—they need to make it cheaper than other forms of transportation.

Bryant Walker Smith, an associate professor of law at the University of South Carolina, echoes this concern. “These companies are competing with an Uber driver who, in any estimate, makes less than minimum wage, has a midpriced car, and probably maintains it themselves,” he says. 

By way of contrast, robotaxis are expensive vehicles packed full of cameras, sensors, and advanced software systems, and they require constant monitoring and help from humans. It’s almost impossible for them to compete with ride-sharing services yet, at least until a lot more robotaxis can hit the road.

And as robotaxi companies keep burning the cash from investors, concerns are growing that they are not getting enough in return for their vast expenditure, says Smith. That means even more pressure to produce results, while balancing the potential revenues and costs. 

The resistance to scaling up

In the US, there are currently four cities where people can take a robotaxi: San Francisco, Phoenix, Los Angeles, and Las Vegas. 

The terms differ by city. Some require you to sign up for a waitlist first, which could take months to clear, while others only operate the vehicles in a small area.

Expanding robotaxi services into a new city involves a huge upfront effort and cost: the new area has to be thoroughly mapped (and that map has to be kept up to date), and the operator has to buy more autonomous vehicles to keep up with demand. 

Also, cars whose autonomous systems are geared toward, say, San Francisco have a limited ability to adapt to Austin, says Cummings, who’s researching how to measure this type of adaptability. “If I’m looking at that as a basic research question, it probably means the companies haven’t learned something important yet,” she says. 

These factors have combined to cause renewed concern about robotaxis’ profitability. Even after Cruise removed its vehicles from the road, Waymo, the other major robotaxi company in the US, hasn’t jumped in to fill the vacuum. Since each robotaxi ride currently costs the company more money than it makes, there’s hardly an appetite for endless expansion.

Worldwide development

It’s not just the US where robotaxis are being researched, tested, and even deployed. 

China is the other leader right now, and it is proceeding on roughly the same timeline as the US. In 2023, a few cities in China, including Beijing and Shanghai, received government clearance to run robotaxis on the road without any safety operators. However, the cars can only run in certain small and relatively remote areas of the cities, making the service tricky to access for most people.

The Middle East is also quickly gaining a foothold in the sector, with the help of Chinese and American companies. Saudi Arabia invested $100 million in the Chinese robotaxi startup Pony.AI to bring its cars to Neom, the futuristic city it is building, which is supposed to showcase all the latest technologies. Meanwhile, Dubai and Abu Dhabi are competing with each other to become the first city in the Middle East to pilot driverless vehicles on the road, with vehicles made by Cruise and the Chinese company WeRide.

Chinese robotaxi companies face the same central challenge as their US peers: proving their profitability. A push to monetize permeated the Chinese industry in 2023 and launched a new trend: Chinese self-driving companies are now racing to sell their autopilot systems to other companies. This lets them make some quick cash by repackaging their technologies into less advanced but more in-demand services, like urban autopilot systems that can be sold to carmakers.

Meanwhile, robotaxi development in Europe has lagged behind, partly because countries there prefer deploying autonomous vehicles in mass transit. While Germany, the UK, and France have seen robotaxis running road tests, commercial operations remain a distant hope. 

Lessons from Cruise’s fiasco

Cruise’s dreadful experience points to one major remaining roadblock for robotaxis: they still sometimes behave erratically. When a human driver (in a non-autonomous vehicle) hit a pedestrian in San Francisco in October and drove away from the scene, a passing Cruise car then ran over the victim and dragged her 20 feet before stopping. 

“We are deeply concerned that more people will be killed, more first responders will be obstructed, more sudden stops will happen,” says Cathy Chase, president of Advocates for Highway and Auto Safety, an activist group based in Washington, DC. “We are not against autonomous vehicles. We are concerned about the unsafe deployment and a rush to the market at the expense of the traveling public.”

These companies are simply not reporting enough data to show us how safe their vehicles are, she says. While they are required to submit data to the National Highway Traffic Safety Administration, the data is heavily redacted in the name of protecting trade secrets before it’s released to the public. Some federal bills proposed in the last year, which haven’t passed, could even lighten these reporting requirements, Chase says.

“If there’s a silver lining in this accident, it’s that people were forced to reckon with the fact that these operations are not simple and not that straightforward,” Cummings says. It will likely cause the industry to rely more on remote human operators, something that could have changed the Cruise vehicle’s response in the October accident. But introducing more humans will further tip the balance away from profitability.

Meanwhile, Cruise was accused by the California Public Utilities Commission of misleading the public and regulators about its involvement in the incident. “If we cannot trust these companies, then they have no business on our roads,” says Smith.

A Cruise spokesperson told MIT Technology Review the company has no updates to share currently but pointed to a blog post from November saying it had hired third-party law firms and technology consultants to review the accident and Cruise’s responses to the regulators. In a settlement proposal to CPUC, Cruise also offered to share more data, including “collision reporting as well as regular reports detailing incidents involving stopped AVs.”

The future of Cruise remains unclear, and so does the company’s original plan to launch operations in several more cities soon. Meanwhile, though, Waymo is applying to expand its services in Los Angeles while taking its vehicles to the highways of Phoenix. Zoox, an autonomous-driving startup owned by Amazon, could launch commercial service in Las Vegas this year. For residents of these cities, more and more robotaxis may be on the road in 2024.

Correction: The story has been updated to clarify that Cruise’s October 2 accident was not fatal. The victim was hospitalized with serious injuries but survived.

What’s next for offshore wind

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

It’s a turbulent time for offshore wind power.

Large groups of turbines installed along coastlines can harness the powerful, consistent winds that blow offshore. Given that 40% of the global population lives within 60 miles of the ocean, offshore wind farms can be a major boon to efforts to clean up the electricity supply around the world. 

But in recent months, projects around the world have been delayed or even canceled as costs have skyrocketed and supply chain disruptions have swelled. These setbacks could spell trouble for efforts to cut the greenhouse-gas emissions that cause climate change.

The coming year and beyond will likely be littered with more delayed and canceled projects, but the industry is also seeing new starts and continuing technological development. The question is whether current troubles are more like a speed bump or a sign that 2024 will see the industry run off the road. Here’s what’s next for offshore wind power.

Speed bumps and setbacks

Wind giant Ørsted cited rising interest rates, high inflation, and supply chain bottlenecks in late October when it canceled its highly anticipated Ocean Wind 1 and Ocean Wind 2 projects. The two projects would have supplied just over 2.2 gigawatts to the New Jersey grid—enough energy to power over a million homes. Ørsted is one of the world’s leading offshore wind developers, and the company was included in MIT Technology Review’s list of 15 Climate Tech Companies to Watch in 2023. 

The shuttered projects are far from the only setback for offshore wind in the US today—over 12 gigawatts’ worth of contracts were either canceled or targeted for renegotiation in 2023, according to analysis by BloombergNEF, an energy research group.

Part of the problem lies in how projects are typically built and financed, says Chelsea Jean-Michel, a wind analyst at BloombergNEF. After securing a place to build a wind farm, a developer sets up contracts to sell the electricity that will be generated by the turbines. That price gets locked in years before the project is finished. For projects getting underway now, contracts were generally negotiated in 2019 or 2020.

A lot has changed in just the past five years. Prices for steel, one of the most important materials in turbine construction, increased by over 50% from January 2019 through the end of 2022 in North America and northern Europe, according to a 2023 report from the American Clean Power Association.

Inflation has also increased the price for other materials, and higher interest rates mean that borrowing money is more expensive too. So now, developers are arguing that the prices they agreed to previously aren’t reasonable anymore.

Economic trouble for the industry is global. The UK’s last auction for offshore wind leases yielded no bidders. In addition, a major project that had been planned for the North Sea was canceled by the developer in July. Japanese developers that had jumped into projects in Taiwan are suddenly pulling out as costs shoot up in that still-developing market.

China stands out in an otherwise struggling landscape. The country is now the world’s largest offshore wind market, accounting for nearly half of installed capacity globally. Quick development and rising competition have actually led to falling prices for some projects there.

Growing pains

While many projects around the world have seen setbacks over the last year, the problems are most concentrated in newer markets, including the US. Problems have continued since the New Jersey cancellations—in the first weeks of 2024, developers of several New York projects asked to renegotiate their contracts, which could delay progress even if those developments end up going ahead.

While over 10% of electricity in the US comes from wind power, the vast majority is generated by land-based turbines. The offshore wind market in the US is at least a decade behind the more established ones in countries like the UK and Denmark, says Walt Musial, chief engineer of offshore wind energy at the US National Renewable Energy Laboratory.

One open question over the next year will be how quickly the industry can increase the capacity to build and install wind turbines in the US. “The supply chain in the US for offshore wind is basically in its infancy. It doesn’t really exist,” Jean-Michel says.

That’s been a problem for some projects, especially when it comes to the ships needed to install wind turbines. One of the reasons Ørsted gave for canceling its New Jersey project was a lack of these vessels.

The troubles have been complicated by a single century-old law, which mandates that only ships built and operated by the US can operate from US ports. Projects in the US have worked around this restriction by operating from European ports and using large US barges offshore, but that can slow construction times significantly, Musial says. 

One of the biggest developments in 2024 could be the completion of a single US-built ship that can help with turbine installation. The ship is under construction in Texas, and Dominion Energy has spent over $600 million on it so far. After delays, it’s scheduled to be completed in late 2024. 

Tax credits are providing extra incentive to build out the offshore wind supply chain in the US. Existing credits for offshore wind projects are being extended and expanded by the Inflation Reduction Act, with as much as 40% available on the cost of building a new wind farm. However, to qualify for the full tax credit, projects will need to use domestically sourced materials. Strengthening the supply chain for those materials will be a long process, and the industry is still trying to adjust to existing conditions. 

Still, there are some significant signs of progress for US offshore wind. The nation’s second large-scale offshore wind farm began producing electricity in early January. Several areas of seafloor are expected to go up for auction for new development in 2024, including sites in the central Atlantic and off the coast of Oregon. Sites off the coast of Maine are expected to be offered up the following year. 

But even that forward momentum may not be enough for the nation to meet its offshore wind goals. While the Biden administration has set a target of 30 gigawatts of offshore wind capacity installed by the end of the decade, BloombergNEF’s projection is that the country will likely install around half that, with 16.4 gigawatts of capacity expected by 2030.

Technological transformation

While economic considerations will likely be a limiting factor in offshore wind this year, we’re also going to be on the lookout for technological developments in the industry.

Wind turbines still follow the same blueprint from decades ago, but they are being built bigger and bigger, and that trend is expected to continue. That’s because bigger turbines tend to be more efficient, capturing more energy at a lower cost.

A decade ago, the average offshore wind turbine produced an output of around 4 megawatts. In 2022, that number was just under 8 MW. Now, the major turbine manufacturers are making models in the 15 MW range. These monstrous structures are starting to rival the size of major landmarks, with recent installations nearing the height of the Eiffel Tower.

In 2023, the wind giant Vestas tested a 15 MW model, which earned the distinction of being the world’s most powerful wind turbine. The company received certification for the design at the end of the year, and it will be used in a Danish wind farm that’s expected to begin construction in 2024. 

In addition, we’ll likely see more developments in the technology for floating offshore wind turbines. While most turbines deployed offshore are fixed to the seabed, some areas, like the west coast of the US, have deep water offshore, making this impossible.

Floating turbines could solve that problem, and several pilot projects are underway around the world, including Hywind Tampen in Norway, which launched in mid-2023, and WindFloat Atlantic in Portugal.

There’s a wide variety of platform designs for floating turbines, including versions resembling camera tripods, broom handles, and tires. It’s possible the industry will start to converge on one in the coming years, since standardization will help bring prices down, says BloombergNEF’s Jean-Michel. But whether that will be enough to continue the growth of this nascent industry will depend on how economic factors shake out. And it’s likely that floating projects will continue to make up less than 5% of offshore wind power installations, even a decade from now. 

The winds of change are blowing for renewable energy around the world. Even with economic uncertainty ahead, offshore wind power will certainly be a technology to watch in 2024.

What’s next for AI regulation in 2024? 

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

In 2023, AI policy and regulation went from a niche, nerdy topic to front-page news. This is partly thanks to OpenAI’s ChatGPT, which helped AI go mainstream, but which also exposed people to how AI systems work—and don’t work. It has been a monumental year for policy: we saw the first sweeping AI law agreed upon in the European Union, Senate hearings and executive orders in the US, and specific rules in China for things like recommender algorithms. 

If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Here’s what to expect. 

The United States

AI really entered the political conversation in the US in 2023. But it wasn’t just debate. There was also action, culminating in President Biden’s executive order on AI at the end of October—a sprawling directive calling for more transparency and new standards. 

Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently. 

Next year will build on the momentum of 2023, and many items detailed in Biden’s executive order will be enacted. We’ll also be hearing a lot about the new US AI Safety Institute, which will be responsible for executing most of the policies called for in the order. 

From a congressional standpoint, it’s not clear what exactly will happen. Senate Majority Leader Chuck Schumer recently signaled that new laws may be coming in addition to the executive order. There are already several legislative proposals in play that touch various aspects of AI, such as transparency, deepfakes, and platform accountability. But it’s not clear which, if any, of these already proposed bills will gain traction next year.

What we can expect, though, is an approach that grades types and uses of AI by how much risk they pose—a framework similar to the EU’s AI Act. The National Institute of Standards and Technology has already proposed such a framework that each sector and agency will now have to put into practice, says Chris Meserole, executive director of the Frontier Model Forum, an industry lobbying body. 

Another thing is clear: the US presidential election in 2024 will color much of the discussion on AI regulation. As we have already seen with generative AI’s impact on social media platforms and misinformation, we can expect the debate around how we prevent harms from this technology to be shaped by what happens during election season.

Europe

The European Union has just agreed on the AI Act, the world’s first sweeping AI law. 

After intense technical tinkering and official approval by European countries and the EU Parliament in the first half of 2024, the AI Act will kick in fairly quickly. In the most optimistic scenario, bans on certain AI uses could apply as soon as the end of the year. 

This all means 2024 will be a busy year for the AI sector as it prepares to comply with the new rules. Although most AI applications will get a free pass from the AI Act, companies developing foundation models and applications that are considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards. In Europe, the police will not be allowed to use real-time facial recognition in public places unless they get court approval first for specific purposes such as fighting terrorism, preventing human trafficking, or finding a missing person.

Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. The AI Act will require companies to be more transparent about how they develop their models, and it will make them, and organizations using high-risk AI systems, more accountable for any harms that result. 

Companies developing foundation models—models like GPT-4, on which other AI products are built—will have to comply with the law within one year of the time it enters into force. Other tech companies have two years to implement the rules.

To meet the new requirements, AI companies will have to be more thoughtful about how they build their systems, and document their work more rigorously so it can be audited. The law will require companies to be more transparent about how their models have been trained and will ensure that AI systems deemed high-risk are trained and tested with sufficiently representative data sets in order to minimize biases, for example. 

The EU believes that the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, could pose a “systemic” risk to citizens and thus need additional work to meet EU standards. Companies must take steps to assess and mitigate risks and ensure that the systems are secure, and they will be required to report serious incidents and share details on their energy consumption. It will be up to companies to assess whether their models are powerful enough to fall into this category. 

Open-source AI companies are exempted from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. Companies that fail to comply with the rules risk steep fines or could have their products blocked from the EU.

The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year. 

Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc. 

Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI. 

China

So far, AI regulation in China has been deeply fragmented and piecemeal. Rather than regulating AI as a whole, the country has released individual pieces of legislation whenever a new AI product becomes prominent. That’s why China has one set of rules for algorithmic recommendation services (TikTok-like apps and search engines), another for deepfakes, and yet another for generative AI. 

The strength of this approach is that it allows Beijing to react quickly to risks emerging from advances in the technology—both risks to users and risks to the government. The problem is that it prevents a more long-term, panoramic perspective from developing.

That could change next year. In June 2023, China’s state council, the top governing body, announced that “an artificial intelligence law” is on its legislative agenda. This law would cover everything—like the AI Act for Europe. Because of its ambitious scope, it’s hard to say how long the legislative process will take. We might see a first draft in 2024, but it might take longer. In the interim, it won’t be surprising if Chinese internet regulators introduce new rules to deal with popular new AI tools or types of content that emerge next year. 

So far, very little information about it has been released, but one document could help us predict the new law: scholars from the Chinese Academy of Social Sciences, a state-owned research institute, released an “expert suggestion” version of the Chinese AI law in August. This document proposes a “national AI office” to oversee the development of AI in China, demands a yearly independent “social responsibility report” on foundation models, and sets up a “negative list” of AI areas with higher risks, which companies can’t even research without government approval.

Currently, Chinese AI companies are already subject to plenty of regulations. In fact, any foundation model needs to be registered with the government before it can be released to the Chinese public (as of the end of 2023, 22 companies have registered their AI models). 

This means that AI in China is no longer a Wild West environment. But exactly how these regulations will be enforced remains uncertain. In the coming year, generative-AI companies will have to try to figure out the compliance reality, especially around safety reviews and IP infringement. 

At the same time, since foreign AI companies haven’t received any approval to release their products in China (and likely won’t in the future), the resulting domestic commercial environment protects Chinese companies. It may help them gain an edge against Western AI companies, but it may also stifle competition and reinforce China’s control of online speech.

The rest of the world

We’re likely to see more AI regulations introduced in other parts of the world throughout the next year. One region to watch will be Africa. The African Union is likely to release an AI strategy for the continent early in 2024, meant to establish policies that individual countries can replicate to compete in AI and protect African consumers from Western tech companies, says Melody Musoni, a policy officer at the European Centre for Development Policy Management.

Some countries, like Rwanda, Nigeria, and South Africa, have already drafted national AI strategies and are working to develop education programs, computing power, and industry-friendly policies to support AI companies. Global bodies like the UN, OECD, G20, and regional alliances have started to create working groups, advisory boards, principles, standards, and statements about AI. Groups like the OECD may prove useful in creating regulatory consistency across different regions, which could ease the burden of compliance for AI companies. 

Geopolitically, we’re likely to see growing differences between how democratic and authoritarian countries foster—and weaponize—their AI industries. It will be interesting to see to what extent AI companies prioritize global expansion or domestic specialization in 2024. They might have to make some tough decisions.

What’s next for the world’s fastest supercomputers

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

It can be difficult to wrap your brain around the number-crunching capability of the world’s fastest supercomputer. But computer scientist Jack Dongarra, of the University of Tennessee, puts it this way: “If everybody on Earth were to do one calculation per second, it would take four years to equal what that computer can do in one second.”
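
The arithmetic behind that comparison is easy to check, assuming a world population of roughly eight billion:

```python
# Back-of-envelope check of Dongarra's comparison (the population figure is
# our rough assumption).
people = 8e9                                   # ~8 billion people
seconds = 4 * 365.25 * 24 * 3600               # four years of seconds
print(f"{people * seconds:.2e} calculations")  # ~1.0e18: one exaflop-second
```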

The supercomputer in question is called Frontier. It takes up the space of two tennis courts at Oak Ridge National Laboratory in the eastern Tennessee hills, where it was unveiled in May 2022. 

Here are some more specs: Frontier uses approximately 50,000 processors, compared with the most powerful laptop’s 16 or 24. It consumes 20 million watts, compared with a laptop’s 65 or so. It cost $600 million to build. 

When Frontier came online, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10¹⁸) floating-point operations a second. Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.

But speed itself isn’t the endgame. Researchers are building exascale computers to explore previously inaccessible science and engineering questions in biology, climate, astronomy, and other fields. In the next few years, scientists will use Frontier to run the most complicated computer simulations humans have ever devised. They hope to pursue yet unanswered questions about nature and to design new technologies in areas from transportation to medicine.

Evan Schneider of the University of Pittsburgh, for example, is using Frontier to run simulations of how our galaxy has evolved over time. In particular, she’s interested in the flow of gas in and out of the Milky Way. A galaxy breathes, in a way: gas flows into it, coalescing via gravity into stars, but gas also flows out—for example, when stars explode and release matter. Schneider studies the mechanisms by which galaxies exhale. “We can compare the simulations to the real observed universe, and that gives us a sense of whether we’re getting the physics right,” Schneider says. 

Schneider is using Frontier to build a computer model of the Milky Way with high enough resolution to zoom in on individual exploding stars. That means the model must capture large-scale properties of our galaxy at 100,000 light-years, as well as properties of the supernovas at about 10 light-years across. “That really hasn’t been done,” she says. To get a sense of what that resolution means, it would be analogous to creating a physically accurate model of a can of beer along with the individual yeast cells within it, and the interactions at each scale in between.
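
The numbers behind that analogy roughly line up, taking a can to be about 12 centimeters tall and a yeast cell about 10 micrometers across (our own order-of-magnitude figures):

```python
# Ratio of scales in the simulation vs. the beer-can analogy.
galaxy_ly, supernova_ly = 100_000, 10   # light-years
can_m, yeast_m = 0.12, 10e-6            # meters (rough assumed sizes)
print(galaxy_ly / supernova_ly)         # 10,000x range in the simulation
print(can_m / yeast_m)                  # ~12,000x range in the analogy
```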

Stephan Priebe, a senior engineer at GE, is using Frontier to simulate the aerodynamics of the next generation of airplane designs. To increase fuel efficiency, GE is investigating an engine design known as an “open fan architecture.” Jet engines use fans to generate thrust, and larger fans mean higher efficiency. To make fans even larger, engineers have proposed removing the outer structural frame, known as the nacelle, so that the blades are exposed as in a pinwheel. “The simulations allow us to obtain a detailed view of the aerodynamic performance early in the design phase,” says Priebe. They give engineers insight into how to shape the fan blades for better aerodynamics, for example, or to make them quieter.

Frontier will particularly benefit Priebe’s studies of turbulence, the chaotic motion of a disturbed fluid—in this case, air—around the fan. Turbulence is a common phenomenon. We see it in the crashing of ocean waves and in the curl of smoke rising from an extinguished candle. But scientists still struggle to predict how exactly a turbulent fluid will flow. That is because it moves in response to both macroscopic influences, such as pressure and temperature changes, and microscopic influences, such as the rubbing of individual molecules of nitrogen in the air against one another. The interplay of forces on multiple scales complicates the motion. 

“In graduate school, [a professor] once told me, ‘Bronson, if anybody tells you that they understand turbulence, you should put one hand on your wallet and back out of the room, because they’re trying to sell you something,’” says astrophysicist Bronson Messer, the director of science at Oak Ridge Leadership Computing Facility, which houses Frontier. “Nobody understands turbulence. It really is the last great classical physics problem.” 

These scientific studies illustrate the distinct forte of supercomputers: simulating physical objects at multiple scales simultaneously. Other applications echo this theme. Frontier enables more accurate climate models, which have to simulate weather at different spatial scales across the entire planet and also on both long and short time scales. Physicists can also simulate nuclear fusion, the turbulent process in which the sun generates energy by pushing atoms together to form different elements. They want to better understand the process in order to develop fusion as a clean energy technology. While these sorts of multi-scale simulations have been a staple of supercomputing for many years, Frontier can incorporate a wider range of different scales than ever before.

To use Frontier, approved scientists log in to the supercomputer remotely, submitting their jobs over the internet. To make the most of the machine, Oak Ridge aims to have around 90% of the supercomputer’s processors running computations 24 hours a day, seven days a week. “We enter this sort of steady state where we’re constantly doing scientific simulations for a handful of years,” says Messer. Users keep their data at Oak Ridge in a data storage facility that can store up to 700 petabytes, the equivalent of about 700,000 portable hard drives.
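
That hard-drive equivalence assumes a typical 1-terabyte portable drive:

```python
# 700 petabytes expressed as 1 TB portable drives (drive size is our assumption).
storage_pb = 700
tb_per_pb = 1_000
print(storage_pb * tb_per_pb, "drives")  # 700,000
```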

While Frontier is the first exascale supercomputer, more are coming down the line. In the US, researchers are currently installing two machines that will be capable of more than two exaflops: Aurora, at Argonne National Laboratory in Illinois, and El Capitan, at Lawrence Livermore National Laboratory in California. Beginning in early 2024, scientists plan to use Aurora to create maps of neurons in the brain and search for catalysts that could make industrial processes such as fertilizer production more efficient. El Capitan, also slated to come online in 2024, will simulate nuclear weapons in order to help the government maintain its stockpile without weapons testing. Meanwhile, Europe plans to deploy its first exascale supercomputer, Jupiter, in late 2024.

China purportedly has exascale supercomputers as well, but it has not released results from standard benchmark tests of their performance, so the computers do not appear on the TOP500, a semiannual list of the fastest supercomputers. “The Chinese are concerned about the US imposing further limits in terms of technology going to China, and they’re reluctant to disclose how many of these high-performance machines are available,” says Dongarra, who designed the benchmark that supercomputers must run for TOP500.

The hunger for more computing power doesn’t stop with the exascale. Oak Ridge is already considering the next generation of computers, says Messer. These would have three to five times the computational power of Frontier. But one major challenge looms: the massive energy footprint. The power that Frontier draws, even when it is idling, is enough to run thousands of homes. “It’s probably not sustainable for us to just grow machines bigger and bigger,” says Messer. 

As Oak Ridge has built progressively larger supercomputers, engineers have worked to improve the machines’ efficiency with innovations including a new cooling method. Summit, the predecessor to Frontier that is still running at Oak Ridge, expends about 10% of its total energy usage to cool itself. By comparison, 3% to 4% of Frontier’s energy consumption is for cooling. This improvement came from using water at ambient temperature to cool the supercomputer, rather than chilled water.

Next-generation supercomputers would be able to simulate even more scales simultaneously. For example, with Frontier, Schneider’s galaxy simulation has resolution down to the tens of light-years. That’s still not quite enough to get down to the scale of individual supernovas, so researchers must simulate the individual explosions separately. A future supercomputer may be able to unite all these scales.

By simulating the complexity of nature and technology more realistically, these supercomputers push the limits of science. A more realistic galaxy simulation brings the vastness of the universe to scientists’ fingertips. A precise model of air turbulence around an airplane fan circumvents the need to build a prohibitively expensive wind tunnel. Better climate models allow scientists to predict the fate of our planet. In other words, they give us a new tool to prepare for an uncertain future.

What’s next for the world’s fastest supercomputers

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

It can be difficult to wrap your brain around the number-crunching capability of the world’s fastest supercomputer. But computer scientist Jack Dongarra, of the University of Tennessee, puts it this way: “If everybody on Earth were to do one calculation per second, it would take four years to equal what that computer can do in one second.”
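
Dongarra's comparison is easy to sanity-check with a few lines of arithmetic (the population figure below is our assumption, roughly the 8 billion people alive today):

```python
# Back-of-the-envelope check of Dongarra's comparison.
population = 8e9                 # assumed: ~8 billion people, 1 calculation/second each
exaflop = 1e18                   # Frontier's rate: a quintillion operations per second

seconds_needed = exaflop / population            # seconds for humanity to match 1 s of Frontier
years_needed = seconds_needed / (365.25 * 24 * 3600)
print(f"{years_needed:.1f} years")               # ~4.0 years
```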

The supercomputer in question is called Frontier. It takes up the space of two tennis courts at Oak Ridge National Laboratory in the eastern Tennessee hills, where it was unveiled in May 2022. 

Here are some more specs: Frontier uses approximately 50,000 processors, compared with the most powerful laptop’s 16 or 24. It consumes 20 million watts, compared with a laptop’s 65 or so. It cost $600 million to build. 

When Frontier came online, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10^18) floating point operations a second. Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.

But speed itself isn’t the endgame. Researchers are building exascale computers to explore previously inaccessible science and engineering questions in biology, climate, astronomy, and other fields. In the next few years, scientists will use Frontier to run the most complicated computer simulations humans have ever devised. They hope to pursue yet unanswered questions about nature and to design new technologies in areas from transportation to medicine.

Evan Schneider of the University of Pittsburgh, for example, is using Frontier to run simulations of how our galaxy has evolved over time. In particular, she’s interested in the flow of gas in and out of the Milky Way. A galaxy breathes, in a way: gas flows into it, coalescing via gravity into stars, but gas also flows out—for example, when stars explode and release matter. Schneider studies the mechanisms by which galaxies exhale. “We can compare the simulations to the real observed universe, and that gives us a sense of whether we’re getting the physics right,” Schneider says. 

Schneider is using Frontier to build a computer model of the Milky Way with high enough resolution to zoom in on individual exploding stars. That means the model must capture large-scale properties of our galaxy at 100,000 light-years, as well as properties of the supernovas at about 10 light-years across. “That really hasn’t been done,” she says. To get a sense of what that resolution means, it would be analogous to creating a physically accurate model of a can of beer along with the individual yeast cells within it, and the interactions at each scale in between.
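
A rough calculation suggests the analogy is apt: both systems span about four orders of magnitude (the can and yeast dimensions below are approximate assumptions):

```python
# Dynamic range of the simulation vs. the beer-can analogy (approximate sizes).
galaxy_ly, supernova_ly = 100_000, 10     # scales the model must capture, in light-years
can_m, yeast_m = 0.12, 10e-6              # assumed: ~12 cm can, ~10 micrometer yeast cell

print(galaxy_ly / supernova_ly)           # 10,000x between largest and smallest scale
print(can_m / yeast_m)                    # ~12,000x: the same order of magnitude
```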

Stephan Priebe, a senior engineer at GE, is using Frontier to simulate the aerodynamics of the next generation of airplane designs. To increase fuel efficiency, GE is investigating an engine design known as an “open fan architecture.” Jet engines use fans to generate thrust, and larger fans mean higher efficiency. To make fans even larger, engineers have proposed removing the outer structural frame, known as the nacelle, so that the blades are exposed as in a pinwheel. “The simulations allow us to obtain a detailed view of the aerodynamic performance early in the design phase,” says Priebe. They give engineers insight into how to shape the fan blades for better aerodynamics, for example, or to make them quieter.

Frontier will particularly benefit Priebe’s studies of turbulence, the chaotic motion of a disturbed fluid—in this case, air—around the fan. Turbulence is a common phenomenon. We see it in the crashing of ocean waves and in the curl of smoke rising from an extinguished candle. But scientists still struggle to predict how exactly a turbulent fluid will flow. That is because it moves in response to both macroscopic influences, such as pressure and temperature changes, and microscopic influences, such as the rubbing of individual molecules of nitrogen in the air against one another. The interplay of forces on multiple scales complicates the motion. 

“In graduate school, [a professor] once told me, ‘Bronson, if anybody tells you that they understand turbulence, you should put one hand on your wallet and back out of the room, because they’re trying to sell you something,’” says astrophysicist Bronson Messer, the director of science at Oak Ridge Leadership Computing Facility, which houses Frontier. “Nobody understands turbulence. It really is the last great classical physics problem.” 

These scientific studies illustrate the distinct forte of supercomputers: simulating physical objects at multiple scales simultaneously. Other applications echo this theme. Frontier enables more accurate climate models, which have to simulate weather at different spatial scales across the entire planet and also on both long and short time scales. Physicists can also simulate nuclear fusion, the turbulent process in which the sun generates energy by pushing atoms together to form different elements. They want to better understand the process in order to develop fusion as a clean energy technology. While these sorts of multi-scale simulations have been a staple of supercomputing for many years, Frontier can incorporate a wider range of different scales than ever before.

To use Frontier, approved scientists log in to the supercomputer remotely, submitting their jobs over the internet. To make the most of the machine, Oak Ridge aims to have around 90% of the supercomputer’s processors running computations 24 hours a day, seven days a week. “We enter this sort of steady state where we’re constantly doing scientific simulations for a handful of years,” says Messer. Users keep their data at Oak Ridge in a data storage facility that can store up to 700 petabytes, the equivalent of about 700,000 portable hard drives.

While Frontier is the first exascale supercomputer, more are coming down the line. In the US, researchers are currently installing two machines that will be capable of more than two exaflops: Aurora, at Argonne National Laboratory in Illinois, and El Capitan, at Lawrence Livermore National Laboratory in California. Beginning in early 2024, scientists plan to use Aurora to create maps of neurons in the brain and search for catalysts that could make industrial processes such as fertilizer production more efficient. El Capitan, also slated to come online in 2024, will simulate nuclear weapons to help the government maintain its stockpile without weapons testing. Meanwhile, Europe plans to deploy its first exascale supercomputer, Jupiter, in late 2024.

China purportedly has exascale supercomputers as well, but it has not released results from standard benchmark tests of their performance, so the computers do not appear on the TOP500, a semiannual list of the fastest supercomputers. “The Chinese are concerned about the US imposing further limits in terms of technology going to China, and they’re reluctant to disclose how many of these high-performance machines are available,” says Dongarra, who designed the benchmark that supercomputers must run for TOP500.

The hunger for more computing power doesn’t stop with the exascale. Oak Ridge is already considering the next generation of computers, says Messer. These would have three to five times the computational power of Frontier. But one major challenge looms: the massive energy footprint. The power that Frontier draws, even when it is idling, is enough to run thousands of homes. “It’s probably not sustainable for us to just grow machines bigger and bigger,” says Messer. 
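
The "thousands of homes" figure is easy to pencil out from Frontier's 20-megawatt draw (the average household figure below is our assumption, not Oak Ridge's, and the calculation uses peak rather than idle power):

```python
# Rough conversion of Frontier's power draw into household equivalents.
frontier_watts = 20e6      # Frontier's peak draw, from its published specs
avg_home_watts = 1.2e3     # assumed: average US household draws ~1.2 kW

print(int(frontier_watts / avg_home_watts))   # ~16,666 homes at peak draw
```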

As Oak Ridge has built progressively larger supercomputers, engineers have worked to improve the machines’ efficiency with innovations including a new cooling method. Summit, the predecessor to Frontier that is still running at Oak Ridge, expends about 10% of its total energy usage to cool itself. By comparison, 3% to 4% of Frontier’s energy consumption is for cooling. This improvement came from using water at ambient temperature to cool the supercomputer, rather than chilled water.

Next-generation supercomputers would be able to simulate even more scales simultaneously. For example, with Frontier, Schneider’s galaxy simulation has resolution down to the tens of light-years. That’s still not quite enough to get down to the scale of individual supernovas, so researchers must simulate the individual explosions separately. A future supercomputer may be able to unite all these scales.

By simulating the complexity of nature and technology more realistically, these supercomputers push the limits of science. A more realistic galaxy simulation brings the vastness of the universe to scientists’ fingertips. A precise model of air turbulence around an airplane fan circumvents the need to build a prohibitively expensive wind tunnel. Better climate models allow scientists to predict the fate of our planet. In other words, they give us a new tool to prepare for an uncertain future.

What’s next for China’s digital currency?

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

China’s digital yuan was seemingly born out of a desire to centralize a payment system dominated by the tech companies Alibaba and Tencent. According to its central bank, the digital currency, also known as the e-CNY, is both a risk-free alternative to these commercial platforms and a replacement for physical cash, which is becoming obsolete. 

Almost three years into the pilot, though, it seems the government is still struggling to find compelling applications for it, and adoption has been minimal. Now the goal may be shifting, or at least broadening. China appears to be charging ahead with plans to use the e-CNY outside its borders, for international trade. 

If it’s successful, it could challenge the US dollar’s position as the world’s dominant reserve currency—and in the process shake up the global geopolitical order.  

The (public) rationale

From the outside looking in, it is impossible to fully ascertain the government’s plans for the e-CNY. Though the People’s Bank of China (PBOC) has not been shy about its central bank digital currency (CBDC) project, it has revealed few specific details about how the e-CNY actually works—or how it ultimately intends to use it.

One thing we do know is that it’s been a long time in the making. 

Alibaba and Tencent launched their digital payment systems in 2004 and 2005, respectively. China began researching digital currency technology in 2014, hoping to create a centralized alternative, and in 2016 it launched a research institute devoted to the concept. Then in 2019, after Meta (then called Facebook) proposed its own global digital currency, PBOC officials expressed concern that the coin, called Libra, might undermine the monetary sovereignty of China’s currency, the yuan. The next year the bank started the e-CNY pilot phase, which is still ongoing.

According to Mu Changchun, director general of the PBOC’s Digital Currency Institute, the e-CNY project has three main goals: to improve the efficiency of the central bank’s payment system, provide a backup for the retail payment system, and “enhance financial inclusion.”

“Now we can provide 24/7 services to the general public,” he said during a talk he gave via Zoom for an event hosted last year by the Atlantic Council, a foreign policy think tank in Washington, DC. Mu added that the e-CNY will broaden access to the PBOC’s payment system—extending it to, among others, more private-sector firms, including fintech companies and telecom operators.

Mu said e-CNY will also serve as a necessary backup to the popular mobile payment apps Alipay and WeChat Pay, which dominate China’s daily retail transactions. Most people in China don’t use cash or credit cards but rely on their phones to buy things, so these commercial platforms have become “significantly important financial infrastructure,” Mu said. If something ever goes wrong with them, “that will bring a very significant negative impact to the financial stability of China,” he said.

On top of that, between 10% and 20% of people in China don’t have bank accounts and can’t access the commercial financial system, said Mu. Visitors to China from other countries also often have difficulty participating in the mobile-dominated payment system, where many vendors no longer take cash or even cards. They could use e-CNY instead, according to Mu. 

The e-CNY is accessible through commercial bank apps, but also through an app run by the PBOC itself. The central bank’s choice to connect directly with retail customers through its own app is remarkable, because central banks typically deal only with other banks.

“Managed anonymity”

China is at the forefront of an increasingly global push to adopt CBDCs. More than 100 countries around the world are exploring possible CBDC designs, and a big question they are wrestling with is how directly the central bank should be involved versus letting the currency be run by private-sector intermediaries. For example, to prevent money laundering and other financial crimes, traditional banks require users to verify their identities. Most central banks don’t want to have to do this kind of admin for millions of people, says Ananya Kumar, an associate director for digital currency research at the Atlantic Council.

But the PBOC’s desire to do just that explains why some civil liberties activists oppose the idea of CBDCs. Around the world, retail transactions are going cashless, and if cash becomes obsolete, governments will use CBDCs as tools for surveillance and control, argues Alex Gladstein, chief strategy officer at the Human Rights Foundation. 

“The Chinese government wants more control over payments,” says Gladstein. Though it already has a firm grip on the two commercial payment giants, direct control and oversight over a digital currency would provide much better data and the power to deny people access, he says. 

The PBOC, on the other hand, says the e-CNY will protect people’s privacy thanks to a policy that it calls “managed anonymity.” In short, it’s possible to get an e-CNY account using only a mobile number. Balances for these accounts are capped. In his talk last year, Mu said the cap was 10,000 yuan (around $1,400 at the time of this writing) and that users of such software wallets can spend up to 2,000 yuan per transaction and 5,000 yuan per day. 
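
As a rough illustration of how those caps constrain spending, here is a minimal sketch; the function names and logic are hypothetical, since the PBOC has not published its implementation:

```python
# Hypothetical sketch of the "managed anonymity" wallet caps Mu described.
CAPS = {"balance": 10_000, "per_txn": 2_000, "per_day": 5_000}  # yuan

def can_spend(amount: float, spent_today: float, balance: float) -> bool:
    """Check a proposed payment against the software-wallet caps."""
    return (amount <= CAPS["per_txn"]
            and spent_today + amount <= CAPS["per_day"]
            and amount <= balance)

def can_top_up(amount: float, balance: float) -> bool:
    """The 10,000-yuan cap applies to the wallet's total balance."""
    return balance + amount <= CAPS["balance"]

print(can_spend(1_500, spent_today=4_000, balance=6_000))  # False: would breach the daily cap
print(can_top_up(5_000, balance=6_000))                    # False: balance would exceed 10,000
```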

Mu dismissed the idea that the government could determine users’ real identities from the mobile numbers. China’s new Personal Information Protection Law will prevent telecom operators from sharing identifying information with the central bank or other e-CNY operators, he said.

One particularly advanced feature of e-CNY is the ability to transfer money between two people using devices that are not connected to the internet. During the Atlantic Council talk, Mu showed a video of people using this offline payment function with smartphones as well as plastic cards with e-ink displays.

Feeling around for a fit

But why should people adopt the e-CNY? It seems the government is still trying to figure that out. The PBOC has been piloting the currency for almost three years, testing a wide variety of potential uses. 

Kumar and her colleagues have documented 30 different test applications, ranging from bank loans to cards that combine e-CNY wallets with other functions. Examples include an “elderly care card,” which integrates health-care information and location data with an emergency service system; a “smart student ID”; and a card that pays e-CNY rewards for using low-carbon transportation. There are also several pilots focused on online commerce in rural areas. And in April, Changshu, a city of 1.5 million people, said it would start paying public employees in e-CNY.

“All kinds of little nooks and crannies of the payment system are getting reached by e-CNY,” says Darrell Duffie, a professor of finance at Stanford’s graduate school of business. 

Still, hardly anyone is using it. Though the system has been tested in 25 cities, and 260 million unique wallets hold a total of 13.61 billion RMB ($1.9 billion), last year e-CNY accounted for only 0.13% of the supply of central bank reserves and cash in circulation. “That’s very small after two years of piloting,” says Duffie. He says the only reason it’s still called a pilot is that it hasn’t taken off.
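
The arithmetic behind that skepticism is stark: averaged over every wallet, the holdings are pocket change (the dollar conversion uses the exchange rate implied by the figures above):

```python
# Average e-CNY holdings per wallet, using the figures above.
total_rmb, total_usd = 13.61e9, 1.9e9
wallets = 260e6

avg_rmb = total_rmb / wallets
print(f"{avg_rmb:.0f} RMB")                          # ~52 RMB per wallet
print(f"${avg_rmb * total_usd / total_rmb:.2f}")     # ~$7.31 at the implied rate
```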

Officials have said there is no set timetable for the formal launch of the e-CNY, and that the bank is focused on improving the user experience and testing the security and resilience of the network rather than increasing adoption. But some close observers think the government underestimated how hard it would be to create a retail payment network from scratch.

“A lot of the ambition for this project has proven more difficult to achieve than they thought, and the timeline has been longer than people anticipated,” says Martin Chorzempa, a senior fellow at the Peterson Institute for International Economics, a think tank in DC. What’s been especially difficult, he says, has been signing up enough merchants and creating a rich enough “ecosystem” to make the e-CNY as useful as established payment methods. 

“The e-CNY has to be as useful as Alipay and WeChat Pay for it to actually have a user base, and right now there really is not a use case,” says Chorzempa, who has written a book about China’s payment system. “People just get a red envelope, they spend it, and they generally don’t open the e-CNY app again,” he says, referring to the electronic icon the government uses when it doles out digital money to pilot participants. Chorzempa speculates that the challenges the PBOC has had in getting traction for the e-CNY inside China may be contributing to its increased focus on international uses. 

And that has put the e-CNY on a collision course with the US dollar.

e-CNY vs. USD

Though the government may be struggling to find compelling applications in China, it may have found one elsewhere, in the form of large cross-border payments between banks. The international payment system, which consists of a network of so-called correspondent banks, can be cumbersome and slow. CBDCs could be faster and more efficient.

For China, there could also be a geopolitical upside: an alternative set of international payment rails that the United States does not control.

Because the dollar is the world’s dominant reserve currency, a country that wants to do business with others typically needs dollars. That means the US can effectively expel an adversary from the global financial system. For example, when Russia invaded Ukraine in early 2022, the US, with help from its allies, imposed sanctions on Russia’s biggest banks that targeted assets and barred them from US-controlled global financial infrastructure, hampering their ability to raise capital.

This geopolitical advantage could be undermined if currencies like the Chinese yuan become more prevalent in global trade, a process that the e-CNY could accelerate. That’s why an e-CNY pilot called Project mBridge is important, says Chorzempa.

The project is a test of infrastructure for a “wholesale” CBDC led by China, which would be used for large-value cross-border transfers between banks, says Chorzempa. “That’s really where you get into actual potential competition with the US dollar,” he says. 

Today, banks typically execute such transactions using what’s called the correspondent banking system. A correspondent bank is a third party that serves as an intermediary between domestic banks in different countries. 

According to the Bank for International Settlements (BIS), however, this system “has not kept pace” with the “rapid growth in global economic integration” that has occurred in recent decades. Correspondent banks duplicate processes and steps, making cross-border payments costly, slow, and operationally complex—with “limited access and low transparency,” according to the BIS. Researchers at the BIS believe that a CBDC-based system could make cross-border payments more efficient and cheaper.

That’s part of the BIS’s rationale for working with the PBOC—along with the central banks of Hong Kong, the United Arab Emirates, and Thailand—on Project mBridge. Over the course of six weeks last August and September, 20 commercial banks used a custom blockchain to settle $22 million in cross-border transactions using CBDCs. 
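
The BIS reports don't disclose mBridge's protocol in detail, but the core idea, settling both legs of a cross-border trade atomically so that neither bank bears settlement risk (known as payment versus payment), can be sketched. Everything below, including the currency labels and bank names, is a hypothetical illustration:

```python
# Toy "payment versus payment" settlement: both legs apply, or neither does.
ledgers = {
    "e-CNY": {"bank_cn": 1_000_000, "bank_th": 0},
    "e-THB": {"bank_cn": 0, "bank_th": 5_000_000},
}

def settle(leg_a, leg_b):
    """Each leg is (currency, payer, payee, amount). All-or-nothing."""
    for cur, payer, _, amt in (leg_a, leg_b):
        if ledgers[cur][payer] < amt:
            return False                 # one leg would fail, so settle nothing
    for cur, payer, payee, amt in (leg_a, leg_b):
        ledgers[cur][payer] -= amt
        ledgers[cur][payee] += amt
    return True                          # both legs settled as a single step

print(settle(("e-CNY", "bank_cn", "bank_th", 700_000),
             ("e-THB", "bank_th", "bank_cn", 3_500_000)))   # True
```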

That’s only a “drop in the bucket,” acknowledges the Atlantic Council’s Kumar. Still, it’s important because it’s the first time multiple jurisdictions have settled CBDC transactions, she says: “Not can, not potential—have actually done it.” It’s also a big deal in the context of the ongoing global discussion about “de-dollarization” and the internationalization of other currencies, says Kumar. 

Given the “weaponization of the dollar” via sanctions, China and other countries are trying to develop new ways to settle trades, she says. For example, in April, Bangladesh used yuan to pay off a loan from Russian lenders, using an interbank payment system that China developed called the Cross-border Interbank Payment System, or CIPS. “It did that partly because a) its dollar reserves aren’t very high and b) it wants to actually transact with Russia,” says Kumar. Sanctioned Russian banks are blocked from using the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the platform that facilitates most wholesale cross-border transactions.

In Kumar’s view, however, mBridge is about more than currency: “The more important part there is about the technology that is getting internationalized and used by other countries because there’s a geopolitical motivation for them to do that.” She has hypothesized that what might emerge if China is successful is “a set of technical and regulatory standards built in the image of the e-CNY.” 

It’s likely no coincidence that the US Federal Reserve is now researching CBDC systems for large-value cross-border transactions too. Last November, the New York Fed released results from the first phase of what it calls Project Cedar: “a multiphase research effort to develop a technical framework for a theoretical wholesale central bank digital currency.” In May, it published results from Phase II, a collaboration with Singapore’s central bank. According to the New York Fed, the project demonstrated that distributed ledger technology could support “enhancements to multi-currency payments and settlements.”

It will be “a long, long time” before the e-CNY might become a geopolitical problem for the US, says Stanford’s Duffie, not only because the technology is complicated but also because the legal and governance issues for cross-border payments are so complex.

Chorzempa agrees that it would take a long time for the e-CNY to significantly disrupt the dollar’s power, if it ever happens. China is betting that many other countries will adopt “tokenized” central bank payment systems, he says, but it’s still not clear if the technology offers advantages over conventional payment systems. Nevertheless, he says, “I would not write it off.”

What’s next for the moon

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

We’re going back to the moon. And back. And back. And back again.

It’s been more than 50 years since humans last walked on the lunar surface, but starting this year, an array of missions from private companies and national space agencies plan to take us back, sending everything from small robotic probes to full-fledged human landers.

The ultimate goal? Getting humans living and working on the moon, and then using it as a way station for possible later missions into deep space.

Here’s what’s next for the moon.

Robotic missions are leading the charge

More than a dozen robotic vehicles are scheduled to land on the moon in the 2020s.

On July 14, India launched its Chandrayaan-3 mission, the country’s second attempt to land on the surface of the moon after Chandrayaan-2 crashed there in 2019. The landing attempt will come in August. 

Hot on its heels are two private companies in the US, Astrobotic and Intuitive Machines, both partly funded by NASA to begin moon landings this year. Astrobotic’s Peregrine One lander is scheduled to carry a suite of instruments (some from NASA) to the moon’s northern hemisphere later this year to study the surface, including a sensor to hunt for water ice and a small rover to explore. And Intuitive Machines’ Nova-C lander will attempt a lunar first.

“Our primary objective is to land softly on the south pole region of the moon, which has never been done before,” said Steve Altemus, the company’s CEO, after NASA recently asked the company to change the original planned landing site. The mission will include a telescope to image the Milky Way’s center from the moon, another first, and some demonstration lunar data centers. The launch on a SpaceX Falcon 9 rocket is provisionally set for September.

Both companies have bigger ambitions. In 2024, Astrobotic hopes to send a NASA rover called VIPER to drive into some of the moon’s permanently shadowed craters and hunt for water ice. Intuitive Machines’ second mission, meanwhile, will deploy a small hopping vehicle that will jump into one of these pitch-black craters and carry a drill for NASA.

“There’s quite a lot of excitement around that,” says Xavier Orr, the CEO of the Australian firm Advanced Navigation, which will provide the landing navigation system for Nova-C and the hopper. The craters, he adds, are thought to be “the most likely places of finding ice on the moon.”

These private companies are backed by millions of dollars in government money, driven by NASA’s desire to return humans to the moon as part of its Artemis program. NASA wants to expand commercial moon activity in the same way it has helped fund commercial activity in Earth orbit with companies such as SpaceX.

“The goal is we return to the moon, open up a lunar economy, and continue exploring to Mars,” says Nujoud Merancy, chief of NASA’s Exploration Mission Planning Office at the Johnson Space Center in Texas. The ultimate plan, Merancy says, is to foster a “permanent settlement on the moon.”

Not all are convinced, especially when it comes to how companies will make money on lunar missions outside of funding from NASA. “What is the GDP of lunar activities?” says Sinead O’Sullivan, a former senior researcher at Harvard Business School’s Institute for Strategy and Competitiveness. “Some commercial economy may evolve, but it’s kind of hard to tell.”

Humans are going back, too

In November 2024, if all goes to plan, the Artemis II mission will send a crew of four astronauts—three American and one Canadian—around the moon on a 10-day mission in NASA’s Orion spacecraft, launched by the agency’s mighty new Space Launch System rocket.

Humans have not traveled to the moon since Apollo 17 in 1972. The goal, however, is “not just returning, but staying and exploring,” says Merancy. Artemis II “is really ensuring that the vehicles are ready for longer-duration missions in the future.”

Also in November 2024, a SpaceX Falcon Heavy rocket is scheduled to carry the first modules of NASA’s new space station near the moon, called the Lunar Gateway. Gateway is meant to support Artemis missions to the moon, although the exact relationship is still somewhat murky. The first humans back on the moon are due to land in 2025, aboard a SpaceX Starship vehicle as part of Artemis III.

Much work remains to be done, however, not least proving Starship can launch from Earth (following a botched test flight in April 2023) and be refueled in space. This leaves some in doubt of the 2025 time frame. “A landing in 2029 would be really optimistic,” says Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics in Massachusetts.

NASA, meanwhile, has contracted both SpaceX and, more recently, Jeff Bezos’s competing Blue Origin for its planned landings at the moon’s south pole. Astronauts there will prospect for water ice, which can be used as drinking water and perhaps processed into rocket fuel, a resource that could make the moon a staging point for missions to more distant destinations in the solar system, such as Mars.

But the goal “isn’t just Mars,” says Teasel Muir-Harmony, a curator at the National Air and Space Museum in Washington, DC. “It’s learning how to live and work in deep space and creating a sustained presence further than Earth orbit.”

Moon laws need updating

International laws will need to be updated to cope with this uptick in lunar activity. At the moment, such activities are largely governed by the Outer Space Treaty, signed in 1967, but many of its particulars are vague.

“We are getting into areas like private space platforms and lunar mining facilities, for which there really is no clear government precedent,” says Scott Pace, a space policy expert at George Washington University and former executive secretary of the National Space Council in the US. “We have to be responsible for activities in space.”

Chris Johnson, space law advisor for the Secure World Foundation in the US, expects to see discussions at the United Nations over the next five or so years to iron out some of the issues. “We’re going to need norms for radio quiet zones, lunar roadways between valleys and craters, and landing pads on the moon,” he says. Or perhaps if emergencies break out with astronauts from different countries on the moon, “everyone has to take shelter at the nearest shelter, whether it’s yours or another’s,” he says.

NASA has begun tentative steps toward this goal, getting countries to sign up to its Artemis Accords, a set of guidelines about lunar activities. But they are not legally binding. “We only have a set of principles,” says Johnson.

Lunar missions could come thick and fast while these discussions take place, potentially moving us into a new dawn of space travel. “With the International Space Station, we learned how to live and work in low Earth orbit,” says Muir-Harmony. “Now there’s this opportunity to learn how to do that on another celestial body, and then travel to Mars—and perhaps other locations.”

What’s next for quantum computing

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.

In 2023, progress in quantum computing will be defined less by big hardware announcements than by researchers consolidating years of hard work, getting chips to talk to one another, and shifting away from trying to make do with noise as the field gets ever more international in scope.

For years, quantum computing’s news cycle was dominated by headlines about record-setting systems. Researchers at Google and IBM have had spats over who achieved what—and whether it was worth the effort. But the time for arguing over who’s got the biggest processor seems to have passed: firms are heads-down and preparing for life in the real world. Suddenly, everyone is behaving like grown-ups.

As if to emphasize how much researchers want to get off the hype train, IBM is expected to announce a processor in 2023 that bucks the trend of putting ever more quantum bits, or “qubits,” into play. Qubits, the processing units of quantum computers, can be built from a variety of technologies, including superconducting circuitry, trapped ions, and photons, the quantum particles of light. 

IBM has long pursued superconducting qubits, and over the years the company has been making steady progress in increasing the number it can pack on a chip. In 2021, for example, IBM unveiled one with a record-breaking 127 of them. In November, it debuted its 433-qubit Osprey processor, and the company aims to release a 1,121-qubit processor called Condor in 2023. 

But this year IBM is also expected to debut its Heron processor, which will have just 133 qubits. It might look like a backwards step, but as the company is keen to point out, Heron’s qubits will be of the highest quality. And, crucially, each chip will be able to connect directly to other Heron processors, heralding a shift from single quantum computing chips toward “modular” quantum computers built from multiple processors connected together—a move that is expected to help quantum computers scale up significantly. 

Heron is a signal of larger shifts in the quantum computing industry. Thanks to some recent breakthroughs, aggressive roadmapping, and high levels of funding, we may see general-purpose quantum computers earlier than many would have anticipated just a few years ago, some experts suggest. “Overall, things are certainly progressing at a rapid pace,” says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo. 

Here are a few areas where experts expect to see progress.

Stringing quantum computers together

IBM’s Heron project is just a first step into the world of modular quantum computing. The chips will be connected with conventional electronics, so they will not be able to maintain the “quantumness” of information as it moves from processor to processor. But the hope is that such chips, ultimately linked together with quantum-friendly fiber-optic or microwave connections, will open the path toward distributed, large-scale quantum computers with as many as a million connected qubits. That may be how many are needed to run useful, error-corrected quantum algorithms. “We need technologies that scale both in size and in cost, so modularity is key,” says Jerry Chow, director at IBM Quantum Hardware System Development.
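
A naive count gives a sense of what that modular path implies, ignoring the many extra qubits that interconnects and error correction would demand:

```python
# Naive chip count for a million-qubit machine built from Heron-scale modules.
import math

target_qubits = 1_000_000
qubits_per_chip = 133            # Heron's announced qubit count

print(math.ceil(target_qubits / qubits_per_chip))   # 7,519 chips, before any overhead
```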

Other companies are beginning similar experiments. “Connecting stuff together is suddenly a big theme,” says Peter Shadbolt, chief scientific officer of PsiQuantum, which uses photons as its qubits. PsiQuantum is putting the finishing touches on a silicon-based modular chip. Shadbolt says the last piece it requires—an extremely fast, low-loss optical switch—will be fully demonstrated by the end of 2023. “That gives us a feature-complete chip,” he says. Then warehouse-scale construction can begin: “We’ll take all of the silicon chips that we’re making and assemble them together in what is going to be a building-scale, high-performance computer-like system.” 

The desire to shuttle qubits among processors means that a somewhat neglected quantum technology will come to the fore now, according to Jack Hidary, CEO of SandboxAQ, a quantum technology company that was spun out of Alphabet last year. Quantum communications, where coherent qubits are transferred over distances as large as hundreds of kilometers, will be an essential part of the quantum computing story in 2023, he says.

“The only pathway to scale quantum computing is to create modules of a few thousand qubits and start linking them to get coherent linkage,” Hidary told MIT Technology Review. “That could be in the same room, but it could also be across campus, or across cities. We know the power of distributed computing from the classical world, but for quantum, we have to have coherent links: either a fiber-optic network with quantum repeaters, or some fiber that goes to a ground station and a satellite network.”

Many of these communication components have been demonstrated in recent years. In 2017, for example, China’s Micius satellite showed that coherent quantum communications could be accomplished between nodes separated by 1,200 kilometers. And in March 2022, an international group of academic and industrial researchers demonstrated a quantum repeater that effectively relayed quantum information over 600 kilometers of fiber optics. 

Taking on the noise

At the same time that the industry is linking up qubits, it is also moving away from an idea that came into vogue in the last five years—that chips with just a few hundred qubits might be able to do useful computing, even though noise easily disrupts their operations. 

This notion, called “noisy intermediate-scale quantum” (NISQ), would have been a way to see some short-term benefits from quantum computing, potentially years before reaching the ideal of large-scale quantum computers with many hundreds of thousands of qubits devoted to correcting errors. But optimism about NISQ seems to be fading. “The hope was that these computers could be used well before you did any error correction, but the emphasis is shifting away from that,” says Joe Fitzsimons, CEO of Singapore-based Horizon Quantum Computing.

Some companies are taking aim at the classic form of error correction, using some qubits to correct errors in others. Last year, both Google Quantum AI and Quantinuum, a new company formed by Honeywell and Cambridge Quantum Computing, issued papers demonstrating that qubits can be assembled into error-correcting ensembles that outperform the underlying physical qubits.
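
A toy classical analogue shows the principle at work: store one "logical" bit as several noisy copies and decode by majority vote, and the ensemble fails far less often than any single copy. Real quantum codes are far more sophisticated, but the logic of redundancy is the same:

```python
# Majority-vote "logical" bit: an ensemble of noisy copies beats a single copy.
import random

def logical_error_rate(p_physical: float, n_copies: int, trials: int = 100_000) -> float:
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(n_copies))
        if flips > n_copies // 2:        # majority of copies flipped: decoding fails
            errors += 1
    return errors / trials

print(logical_error_rate(0.05, 1))   # ~0.05   (a bare physical bit)
print(logical_error_rate(0.05, 5))   # ~0.001  (the ensemble outperforms it)
```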

Other teams are trying to see if they can find a way to make quantum computers “fault tolerant” without as much overhead. IBM, for example, has been exploring characterizing the error-inducing noise in its machines and then programming in a way to subtract it (similar to what noise-canceling headphones do). It’s far from a perfect system—the algorithm works from a prediction of the noise that is likely to occur, not what actually shows up. But it does a decent job, Chow says: “We can build an error-correcting code, with a much lower resource cost, that makes error correction approachable in the near term.”
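
IBM's exact recipe isn't spelled out here, but a closely related mitigation idea, zero-noise extrapolation, is easy to sketch: run the circuit with the noise deliberately amplified, measure how the answer degrades, and extrapolate the trend back to zero noise. The measured values below are hypothetical:

```python
# Sketch of zero-noise extrapolation (the measurements here are made up).
import numpy as np

noise_scale = np.array([1.0, 1.5, 2.0])   # run the same circuit at amplified noise levels
measured = np.array([0.82, 0.73, 0.64])   # hypothetical noisy expectation values

slope, intercept = np.polyfit(noise_scale, measured, 1)  # fit the degradation trend
print(round(intercept, 2))                # 1.0: the extrapolated zero-noise estimate
```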

Maryland-based IonQ, which is building trapped-ion quantum computers, is doing something similar. “The majority of our errors are imposed by us as we poke at the ions and run programs,” says Chris Monroe, chief scientist at IonQ. “That noise is knowable, and different types of mitigation have allowed us to really push our numbers.”

Getting serious about software

For all the hardware progress, many researchers feel that more attention needs to be given to programming. “Our toolbox is definitely limited, compared to what we need to have 10 years down the road,” says Michal Stechly of Zapata Computing, a quantum software company based in Boston. 

The way code runs on a cloud-accessible quantum computer is generally “circuit-based,” which means the data is put through a specific, predefined series of quantum operations before a final quantum measurement is made, giving the output. That’s problematic for algorithm designers, Fitzsimons says. Conventional programming routines tend to involve looping some steps until a desired output is reached, and then moving into another subroutine. In circuit-based quantum computing, getting an output generally ends the computation: there is no option for going round again.
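
A minimal statevector simulation makes the constraint concrete. The fixed gate sequence below prepares an entangled Bell pair; once the final measurement produces an output, the computation is over, with no way to loop back on an intermediate result:

```python
# The circuit model in miniature: predefined gates, then one final measurement.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])           # controlled-NOT gate

state = np.array([1, 0, 0, 0], dtype=complex)           # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state                   # step 1: H on the first qubit
state = CNOT @ state                                    # step 2: entangle the pair

probs = np.abs(state) ** 2                              # final measurement ends the run
print(probs.round(2))                                   # [0.5 0. 0. 0.5]: |00> or |11>
```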

Horizon Quantum Computing is one of the companies that have been building programming tools to allow these flexible computation routines. “That gets you to a different regime in terms of the kinds of things you’re able to run, and we’ll start rolling out early access in the coming year,” Fitzsimons says.

Helsinki-based Algorithmiq is also innovating in the programming space. “We need nonstandard frameworks to program current quantum devices,” says CEO Sabrina Maniscalco. Algorithmiq’s newly launched drug discovery platform, Aurora, combines the results of a quantum computation with classical algorithms. Such “hybrid” quantum computing is a growing area, and it’s widely acknowledged as the way the field is likely to function in the long term. The company says it expects to achieve a useful quantum advantage—a demonstration that a quantum system can outperform a classical computer on real-world, relevant calculations—in 2023. 

Competition around the world

Change is likely coming on the policy front as well. Government representatives including Alan Estevez, US undersecretary of commerce for industry and security, have hinted that trade restrictions surrounding quantum technologies are coming. 

Tony Uttley, COO of Quantinuum, says that he is in active dialogue with the US government about making sure this doesn’t adversely affect what is still a young industry. “About 80% of our system is components or subsystems that we buy from outside the US,” he says. “Putting a control on them doesn’t help, and we don’t want to put ourselves at a disadvantage when competing with other companies in other countries around the world.”

And there are plenty of competitors. Last year, the Chinese search company Baidu opened access to a 10-superconducting-qubit processor that it hopes will help researchers make forays into applying quantum computing to fields such as materials design and pharmaceutical development. The company says it has recently completed the design of a 36-qubit superconducting quantum chip. “Baidu will continue to make breakthroughs in integrating quantum software and hardware and facilitate the industrialization of quantum computing,” a spokesman for the company told MIT Technology Review. The tech giant Alibaba also has researchers working on quantum computing with superconducting qubits.

In Japan, Fujitsu is working with the Riken research institute to offer companies access to the country’s first home-grown quantum computer in the fiscal year starting April 2023. It will have 64 superconducting qubits. “The initial focus will be on applications for materials development, drug discovery, and finance,” says Shintaro Sato, head of the quantum laboratory at Fujitsu Research.

Not everyone is following the well-trodden superconducting path, however. In 2020, the Indian government pledged to spend 80 billion rupees ($1.12 billion when the announcement was made) on quantum technologies. A good chunk will go to photonics technologies—for satellite-based quantum communications, and for innovative “qudit” photonics computing.

Qudits expand the data encoding scope of qubits—they offer three, four, or more dimensions, as opposed to just the traditional binary 0 and 1, without necessarily increasing the scope for errors to arise. “This is the kind of work that will allow us to create a niche, rather than competing with what has already been going on for several decades elsewhere,” says Urbasi Sinha, who heads the quantum information and computing laboratory at the Raman Research Institute in Bangalore, India.
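
The capacity gain is simple to quantify: a d-level qudit carries log2(d) bits' worth of distinguishable states, so a single four-level qudit stands in for two qubits:

```python
# Bits of state per d-level qudit.
import math

for d in (2, 3, 4, 8):
    print(d, round(math.log2(d), 2))   # 2 -> 1.0, 3 -> 1.58, 4 -> 2.0, 8 -> 3.0
```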

Though things are getting serious and internationally competitive, quantum technology remains largely collaborative—for now. “The nice thing about this field is that competition is fierce, but we all recognize that it’s necessary,” Monroe says. “We don’t have a zero-sum-game mentality: there are different technologies out there, at different levels of maturity, and we all play together right now. At some point there’s going to be some kind of consolidation, but not yet.”

Michael Brooks is a freelance science journalist based in the UK.

What’s next for mRNA vaccines

Cast your mind back to 2020, if you can bear it. As the year progressed, so did the impact of covid-19. We were warned that wearing face coverings, disinfecting everything we touched, and keeping away from other people were some of the only ways we could protect ourselves from the potentially fatal disease.

Thankfully, a more effective form of protection was in the works. Scientists were developing all-new vaccines at rapid speed. The virus behind covid-19 was sequenced in January, and clinical trials of vaccines using messenger RNA started in March. By the end of the year, the US Food and Drug Administration issued emergency-use authorization for these vaccines, and vaccination efforts took off. 

As things stand today, over 670 million doses of the vaccines have been delivered to people in the US.

This is an astonishingly fast turnaround for any new drug. But it follows years of research on the core technology. Scientists and companies have been working on mRNA-based treatments and vaccines for decades. The first experimental treatments were tested in rodents back in the 1990s, for diseases including diabetes and cancer. 

These vaccines don’t rely on injecting part of a virus into a person, like many other vaccines do. Instead, they deliver genetic code that our bodies can use to make the relevant piece of viral protein ourselves. The entire process is much quicker and simpler: it sidesteps the need, for example, to grow viruses in a lab and purify the proteins they make. 

But while the first approved mRNA vaccines are for covid-19, similar vaccines are now being explored for a whole host of other diseases. Malaria, HIV, tuberculosis, and Zika are just some of the potential targets. mRNA vaccines might also be used in cancer treatments tailored to individual people. Here, the idea is to trigger a specific response by the immune system—one that is designed to attack tumor cells in the body.

Moderna, the biotech company behind one of the two approved mRNA vaccines for covid-19, is developing mRNA vaccines for RSV (respiratory syncytial virus), HIV, Zika, Epstein-Barr virus, and more. BioNTech, which partnered with Pfizer on the other approved mRNA-based covid-19 vaccine, is exploring vaccines for tuberculosis, malaria, HIV, shingles, and flu. Both companies are working on treatments for cancer. And many other companies and academic labs are getting in on the action.

Self-made vaccines

Messenger RNA is a strand of genetic code that your cells’ protein-making machinery can read and use to build proteins. The lab-made mRNA used in vaccines codes for a specific protein—one that we’d like to train our immune systems to recognize. In the case of covid-19 vaccines, the code is for the spike protein found on the outer shell of the SARS-CoV-2 virus, which causes the disease. The mRNA itself is packaged up in lipid nanoparticles—tiny envelopes that help it survive the journey into your body.
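
As a toy sketch of that decoding step, the snippet below reads an mRNA string three letters (one codon) at a time and strings together the corresponding amino acids. The sequence and the five-entry codon table are tiny illustrative fragments; real spike-protein mRNA runs to thousands of nucleotides and uses the full 64-codon genetic code:

```python
# Toy translation of an mRNA sequence into an amino-acid chain.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP"}

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):      # step through the code one codon at a time
        amino = CODONS[mrna[i:i + 3]]
        if amino == "STOP":                   # a stop codon ends the protein
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCAAAUAA"))           # ['Met', 'Phe', 'Gly', 'Lys']
```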

The vaccines are cheap, quick, and easy to make, says Katalin Karikó, an adjunct professor at the University of Pennsylvania who has pioneered research into the use of mRNA for vaccines. They are also very efficient. “You put [the mRNA] in cells, and half an hour later, they are already producing the protein,” she says.

The idea is that once your immune system has been exposed to such a protein, it is better placed to mount a strong response should it ever encounter the virus itself. In the case of covid-19, this is thought to be largely due to the production of antibodies—proteins that protect us against infections. Trained-up immune cells play an important role, too.

In theory, we could make mRNA for pretty much any protein—and potentially target any infectious disease. It’s an exciting time for mRNA vaccine technology, and vaccines for plenty of infectious diseases are currently making their way through clinical trials.

Universal protection

It’s tricky to predict exactly which mRNA vaccines might be the next to make it into health clinics. But hopes are high for a flu vaccine. Potentially, a universal vaccine could protect against multiple strains of flu, while protecting against the coronavirus at the same time.

The current flu vaccine works by introducing a protein from the virus to your immune system, which should mount a response and learn how to defeat the virus. But it takes months to grow the virus in eggs to make this protein. The production process has to start in February in order to have a vaccine ready for October, says Anna Blakney, who studies RNA at the University of British Columbia in Vancouver, Canada. Every year, scientists in the Northern Hemisphere guess which strain of flu is likely to take off there by looking at what has happened in the Southern Hemisphere.

These guesses aren’t always spot on, and the flu virus can mutate over time, even while it is in the eggs. As a result, “it’s a notoriously underperforming vaccine,” says Blakney. The flu vaccine used in the US in 2019-2020 was 39% effective, but the one used in the 2004-2005 flu season was only 10% effective, according to estimates from the US Centers for Disease Control and Prevention.

mRNA vaccines, on the other hand, are relatively quick to make. “You could imagine having a one-month turnaround for an RNA vaccine,” says Blakney. By September, scientists should have a much better idea of which flu strain is likely to take off in October and be better placed to target it.

There’s another potential benefit. Scientists can make mRNA vaccines that encode for more than one viral protein—which could allow us to create vaccines that protect against multiple strains of flu. Norbert Pardi at the University of Pennsylvania and his colleagues are working on a universal flu vaccine—one that Pardi believes would protect against every type of flu that can make humans sick. His team recently showed that the vaccine could protect mice and ferrets from 20 flu subtypes. Other labs are working on mRNA vaccines that protect against all coronaviruses.

If we can include the code for several proteins, there’s the possibility to protect against multiple diseases in one shot. Moderna’s vaccine for covid, flu, and RSV is already in clinical trials, for example. In the future, we could go even further—just one or two shots could, in theory, protect you from 20 different viruses, says Karikó.

Cancer vaccines

Before anyone had started developing mRNA vaccines for the coronavirus that causes covid-19, researchers were trying to find ways to use mRNA to treat cancer. Here the approach is slightly different—the mRNA would be working as a “vaccine therapeutic.”

In the same way that we can train our immune systems to recognize viral proteins, we could also train them to recognize proteins on cancer cells. In theory, this approach could be totally personalized—scientists could study the cells of a specific person’s tumor and create a custom-made treatment that would help that individual’s own immune system defeat the cancer. “It’s a fantastic application of RNA,” says Blakney. “I think there’s huge potential there.”

Cancer vaccines have been trickier to make, partly because there’s often no clear protein target. We can make mRNA for a protein on the outer shell of a virus, such as the spike protein on the virus that causes covid-19. But when our own cells form tumors, there’s often no such obvious target, says Karikó.

Cancer cells probably require a different kind of immune response from that required to protect against a coronavirus, adds Pardi: “We will need to come up with slightly different mRNA vaccines.” Several clinical trials are underway, but “the breakthrough hasn’t happened yet,” he adds.

The next pandemic

Despite their huge promise, mRNA vaccines are unlikely to prevent or treat every disease out there, at least as the technology stands today. For a start, some of these vaccines need to be stored in low-temperature freezers, says Karin Loré, an immunologist at the Karolinska Institute in Stockholm, Sweden. That just isn’t an option in some parts of the world.

And some diseases pose more of a challenge than others. To protect against an infectious disease, the mRNA in a vaccine will need to code for a relevant protein—a key signal that will give the immune system something to recognize and defend against. For some diseases, like covid-19, finding such a protein is quite straightforward.

But it’s not so easy for others. It might be harder to find good targets for vaccines that protect us against bacterial infections, for example, says Blakney. HIV has also been difficult. “They’ve never found that form of the protein that induces an immune response that works really well for HIV,” says Blakney.

“I don’t want to give the impression that mRNA vaccines will be the solution for everything,” says Loré. Blakney agrees. “We’ve seen the effects that these vaccines can [have], and it’s really exciting,” she says. “But I don’t think that, overnight, all vaccines are going to become RNA vaccines.”

Still, there’s plenty to look forward to. In 2023, we can expect an updated covid-19 vaccine. And researchers are hopeful we’ll see more mRNA vaccines enter clinics in the near future. “I really hope that in the next couple of years, we will have other approved mRNA vaccines against infectious disease,” says Pardi.

He is planning ahead for the next global disease outbreak, which may well involve a flu virus. We don’t know when the next pandemic will hit, “but we have to be ready for it,” he says. “It’s crystal clear that if you start vaccine development in the middle of a pandemic, it’s already too late.”

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.