2024 Innovator of the Year: Shawn Shan builds tools to help artists fight back against exploitative AI

Shawn Shan is one of MIT Technology Review’s 2024 Innovators Under 35. Meet the rest of this year’s honorees. 

When image-generating models such as DALL-E 2, Midjourney, and Stable Diffusion kick-started the generative AI boom in early 2022, artists started noticing odd similarities between AI-generated images and those they’d created themselves. Many found that their work had been scraped into massive data sets and used to train AI models, which then produced knockoffs in their creative style. Many also lost work when potential clients used AI tools to generate images instead of hiring artists, and others were asked to use AI themselves and received lower rates. 

Now artists are fighting back. And some of the most powerful tools they have were built by Shawn Shan, 26, a PhD student in computer science at the University of Chicago (and MIT Technology Review’s 2024 Innovator of the Year). 

Shan got his start in AI security and privacy as an undergraduate there and participated in a project that built Fawkes, a tool to protect faces from facial recognition technology. But it was conversations with artists who had been hurt by the generative AI boom that propelled him into the middle of one of the biggest fights in the field. Soon after learning about the impact on artists, Shan and his advisors Ben Zhao (who made our Innovators Under 35 list in 2006) and Heather Zheng (who was on the 2005 list) decided to build a tool to help. They gathered input from more than a thousand artists to learn what they needed and how they would use any protective technology. 

Shawn Shan - Innovator of the Year 2024

CLARISSA BONET

Shan coded the algorithm behind Glaze, a tool that lets artists mask their personal style from AI mimicry. Glaze came out in early 2023, and last October, Shan and his team introduced another tool called Nightshade, which adds an invisible layer of “poison” to images to hinder image-generating AI models that attempt to incorporate those images into their data sets. If enough poisoned images are drawn into a model’s training data, they could permanently break the model and make its outputs unpredictable. Both algorithms work by adding invisible changes to the pixels of images that disrupt the way machine-learning models interpret them.
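The published Glaze and Nightshade papers describe carefully optimized perturbations; the toy sketch below shows only the imperceptibility constraint they work within. Every pixel shifts by at most a tiny budget, so the image looks unchanged to a human, even though a carefully chosen pattern of such shifts (random here, optimized against real feature extractors in the actual tools) can steer how a model interprets the image.

```python
import random

def perturb_pixels(pixels, epsilon=2.0, seed=0):
    """Shift each pixel intensity by at most +/- epsilon levels (out of 255).

    Toy illustration only: Glaze and Nightshade optimize the pattern of
    shifts against feature extractors, while this version uses a random
    pattern just to show how small the per-pixel budget is.
    """
    rng = random.Random(seed)
    perturbed = []
    for value in pixels:
        shifted = value + rng.uniform(-epsilon, epsilon)
        # Keep the result a valid pixel intensity.
        perturbed.append(min(255.0, max(0.0, shifted)))
    return perturbed

original = [0.0, 64.0, 128.0, 255.0]
cloaked = perturb_pixels(original)
# No pixel moves by more than 2 intensity levels -- invisible to the eye.
```

The key point is the budget, not the pattern: a human can't see a two-level shift, but a model reading the image as numbers can be pushed toward a very different interpretation.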

The response to Glaze was both “overwhelming and stressful,” Shan says. The team received backlash from generative AI boosters on social media, and there were several attempts to break the protections.  

But artists loved it. Glaze has been downloaded nearly 3.5 million times (and Nightshade over 700,000). It has also been integrated into the popular new art platform Cara, allowing artists to embed its protection in their work when they upload their images. And Glaze received a distinguished paper award and the Internet Defense Prize at the USENIX Security Symposium, a top computer security conference.

Shan’s work has also allowed artists to be creative online again, says Karla Ortiz, an artist who has worked with him and the team to build Glaze and is part of a class action lawsuit against generative AI companies for copyright violation. 


“They do it because they’re passionate about a community that’s been … taken advantage of [and] exploited, and they’re just really invested in it,” says Ortiz. 

It was Shan, Zhao says, who first understood what kinds of protections artists were looking for and realized that the work they did together on Fawkes could help them build Glaze. Zhao describes Shan’s technical abilities as some of the strongest he’s ever seen, but what really sets him apart, he says, is his ability to connect dots across disciplines. “These are the kinds of things that you really can’t train,” Zhao adds.  

Shan says he wants to tilt the power balance back from large corporations to people. 


“Right now, the AI powerhouses are all private companies, and their job is not to protect people and society,” he says. “Their job is to make shareholders happy.” He aims to show, through his work on Glaze and Nightshade, that AI companies can collaborate with artists and help them benefit from AI or empower them to opt out. Some firms are looking into how they could use the tools to protect their intellectual property. 

Next, Shan wants to build tools to help regulators audit AI models and enforce laws. He also plans to further develop Glaze and Nightshade in ways that could make them easier to apply to other industries, such as gaming, music, or journalism. “I will be in [this] project for life,” he says.

Watch Shan talk about what’s next for his work in a recent interview by Amy Nordrum, MIT Technology Review’s executive editor.

This story has been updated.

To be more useful, robots need to become lazier

Robots perceive the world around them very differently from the way humans do. 

When we walk down the street, we know what we need to pay attention to—passing cars, potential dangers, obstacles in our way—and what we don’t, like pedestrians walking in the distance. Robots, on the other hand, treat all the information they receive about their surroundings with equal importance. Driverless cars, for example, have to continuously analyze data about things around them whether or not they are relevant. This keeps drivers and pedestrians safe, but it draws on a lot of energy and computing power. What if there’s a way to cut that down by teaching robots what they should prioritize and what they can safely ignore?

That’s the principle underpinning “lazy robotics,” a field of study championed by René van de Molengraft, a professor at Eindhoven University of Technology in the Netherlands. He believes that teaching all kinds of robots to be “lazier” with their data could help pave the way for machines that are better at interacting with things in their real-world environments, including humans. Essentially, the more efficient a robot can be with information, the better.

Van de Molengraft’s lazy robotics is just one approach researchers and robotics companies are now taking as they train their robots to complete actions successfully, flexibly, and in the most efficient manner possible.

Teaching robots to sift through the data they gather more intelligently, and to deprioritize anything that is safe to overlook, will help make them safer and more reliable—a long-standing goal of the robotics community.

Simplifying tasks in this way is necessary if robots are to become more widely adopted, says Van de Molengraft, because their current energy usage won’t scale—it would be prohibitively expensive and harmful to the environment. “I think that the best robot is a lazy robot,” he says. “They should be lazy by default, just like we are.”

Learning to be lazier

Van de Molengraft has hit upon a fun way to test these efforts out: teaching robots to play soccer. He recently led his university’s autonomous robot soccer team, Tech United, to victory at RoboCup, an annual international robotics and AI competition that tests robots’ skills on the soccer field. Soccer is a tough challenge for robots, because both scoring and blocking goals require quick, controlled movements, strategic decision-making, and coordination. 

Learning to focus and tune out distractions around them, much as the best human soccer players do, will make them not only more energy efficient (especially for robots powered by batteries) but more likely to make smarter decisions in dynamic, fast-moving situations.

Tech United’s robots used several “lazy” tactics to give them an edge over their opponents during the RoboCup. One approach involved creating a “world model” of a soccer pitch that identifies and maps out its layout and line markings—things that remain the same throughout the game. This frees the battery-powered robots from constantly scanning their surroundings, which would waste precious power. Each robot also shares what its camera is capturing with its four teammates, creating a broader view of the pitch to help keep track of the fast-moving ball. 
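The world-model tactic can be sketched in a few lines. The names below are illustrative, since Tech United's actual software isn't described in detail here; the idea is simply that detections matching the pre-mapped, unchanging parts of the pitch are filtered out, leaving only dynamic objects for per-frame processing.

```python
# Static features are mapped once, before the match begins.
STATIC_MAP = {"center_circle", "penalty_line", "goalpost", "sideline"}

def dynamic_objects(detections, static_map=STATIC_MAP):
    """Keep only detections that are not part of the pre-built world model.

    Illustrative sketch: the robot skips re-analyzing line markings and
    other fixed features every frame, spending its limited compute on
    the ball and the players instead.
    """
    return [d for d in detections if d not in static_map]

frame = ["ball", "goalpost", "opponent", "penalty_line", "teammate"]
print(dynamic_objects(frame))  # ['ball', 'opponent', 'teammate']
```

The savings compound: every fixed feature dropped from per-frame analysis is battery power and compute freed up for tracking the fast-moving ball.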

Previously, the robots needed a precise, pre-coded trajectory to move around the pitch. Now Van de Molengraft and his team are experimenting with having them choose their own paths to a specified destination. This saves the energy needed to track a specific journey and helps the robots cope with obstacles they may encounter along the way.

The group also successfully taught the squad to execute “penetrating passes”—where a robot shoots toward an open region in the field and communicates to the best-positioned member of its team to receive it—and skills such as receiving or passing the ball within configurations such as triangles. Giving the robots access to world models built using data from the surrounding environment allows them to execute their skills anywhere on the pitch, instead of just in specific spots.

Beyond the soccer pitch

While soccer is a fun way to test how successful these robotics methods are, other researchers are also working on the problem of efficiency—and dealing with much higher stakes.

Making robots that work in warehouses better at prioritizing different data inputs is essential to ensuring that they can operate safely around humans and be relied upon to complete tasks, for example. If the machines can’t manage this, companies could end up with a delayed shipment, damaged goods, an injured human worker—or worse, says Chris Walti, the former head of Tesla’s robotics division. 

Walti left the company to set up his own firm after witnessing how challenging it was to get robots to simply move materials around. His startup, Mytra, designs fully autonomous machines that use computer vision and an AI reinforcement-learning system to make them aware of the robots closest to them, and to help them reason and collaborate to complete tasks (like moving a broken pallet) in much more computationally efficient ways. 

The majority of mobile robots in warehouses today are controlled by a single central “brain” that dictates the paths they follow, meaning a robot has to wait for instructions before it can do anything. Not only is this approach difficult to scale, but it consumes a lot of central computing power and requires very dependable communication links.

Mytra believes it’s hit upon a significantly more efficient approach, which acknowledges that individual robots don’t really need to know what hundreds of other robots are doing on the other side of the warehouse. Its machine-learning system cuts down on this unnecessary data, and the computing power it would take to process it, by simulating the optimal route each robot can take through the warehouse to perform its task. This enables them to act much more autonomously. 

“In the context of soccer, being efficient allows you to score more goals. In the context of manufacturing, being efficient is even more important because it means a system operates more reliably,” he says. “By providing robots with the ability to act and think autonomously and efficiently, you’re also optimizing the efficiency and the reliability of the broader operation.”

While simplifying the types of information that robots need to process is a major challenge, inroads are being made, says Daniel Polani, a professor from the University of Hertfordshire in the UK who specializes in replicating biological processes in artificial systems. He’s also a fan of the RoboCup challenge—in fact, he leads his university’s Bold Hearts robot soccer team, which made it to the second round of the humanoid league at this year’s RoboCup.

“Organisms try not to process information that they don’t need to because that processing is very expensive, in terms of metabolic energy,” he says. Polani is interested in applying these lessons from biology to the vast networks that power robots to make them more efficient with their information. Simply reducing the amount of information a robot is allowed to process will weaken it at some tasks, he says. Instead, robots should learn to use the data they have in more intelligent ways.

Simplifying software

Amazon, which has more than 750,000 robots, the largest such fleet in the world, is also interested in using AI to help its machines make smarter, safer, and more efficient decisions. Amazon’s robots mostly fall into two categories: mobile robots that move stock, and robotic arms designed to handle objects. The AI systems that power these machines collect millions of data points every day to help train them to complete their tasks. For example, they must learn which item to grasp and move from a pile, or how to safely avoid human warehouse workers. These processes require a lot of computing power, which the new techniques can help minimize.

Generally, robotic arms and similar “manipulation” robots use machine learning to figure out how to identify objects, for example. Then they follow hard-coded rules or algorithms to decide how to act. With generative AI, these same robots can predict the outcome of an action before even attempting it, so they can choose the action most likely to succeed or determine the best possible approach to grasping an object that needs to be moved. 

These learning systems are much more scalable than traditional methods of training robots, and the combination of generative AI and massive data sets helps streamline the sequencing of a task and cut out layers of unnecessary analysis. That’s where the savings in computing power come in. “We can simplify the software by asking the models to do more,” says Michael Wolf, a principal scientist at Amazon Robotics. “We are entering a phase where we’re fundamentally rethinking how we build autonomy for our robotic systems.”

Achieving more by doing less

This year’s RoboCup competition may be over, but Van de Molengraft isn’t resting on his laurels after his team’s resounding success. “There’s still a lot of computational activities going on in each of the robots that are not per se necessary at each moment in time,” he says. He’s already starting work on new ways to make his robotic team even lazier to gain an edge on its rivals next year.  

Although current robots are still nowhere near able to match the energy efficiency of humans, he’s optimistic that researchers will continue to make headway and that we’ll start to see a lot more lazy robots that are better at their jobs. But it won’t happen overnight. “Increasing our robots’ awareness and understanding so that they can better perform their tasks, be it football or any other task in basically any domain in human-built environments—that’s a continuous work in progress,” he says.

A brief guide to the greenhouse gases driving climate change

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

For the last week or so, I’ve been obsessed with a gas that I’d never given much thought to before. Sulfur hexafluoride (SF6) is used in high-voltage equipment on the grid. It’s also, somewhat inconveniently, a monster greenhouse gas. 

Greenhouse gases are those that trap heat in the atmosphere. SF6 and other fluorinated gases can be thousands of times more powerful at warming the planet than carbon dioxide, and yet, because they tend to escape in relatively small amounts, we hardly ever talk about them. Taken alone, their effects might be minor compared with those of carbon dioxide, but together, these gases add significantly to the challenge of addressing climate change. 

For more on the specifics of sulfur hexafluoride, check out my story from earlier this week. And in the meantime, here’s a quick cheat sheet on the most important greenhouse gases you need to know about. 

Carbon dioxide: The leading actor

I couldn’t in good conscience put together a list of greenhouse gases and not at least mention the big one. Human activities released 37.4 billion tons of carbon dioxide into the atmosphere in 2023. It’s the most abundant greenhouse gas we emit, and the most significant one driving climate change. 

It’s difficult to nail down exactly how long CO2 stays in the atmosphere, since the gas participates in a global carbon cycle—some will immediately be soaked up by oceans, forests, or other ecosystems, while the rest lingers in the atmosphere for centuries. 

Carbon dioxide comes from nearly every corner of our economy—the largest source is power plants, followed by transportation and then industrial activities. 

Methane: The flash in the pan

Methane is also a powerful contributor to climate change, making up about 30% of the warming we’ve experienced to date, even though carbon dioxide is roughly 200 times more abundant in the atmosphere. 

What’s most different about methane is that the gas is very short-lived, having a lifetime of somewhere around a decade in the atmosphere before it breaks down. But in that time, methane can cause about 86 times more warming than an equivalent amount of carbon dioxide. (Quick side note: Comparisons of greenhouse gases are usually made over a specific period of time, since gases all have different lifetimes and there’s no one number that can represent the complexity of atmospheric chemistry and physics.)
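That side note can be made concrete with a little arithmetic. Emissions are often reported as CO2-equivalent: the mass of gas emitted multiplied by its global warming potential (GWP) over a stated time horizon. The sketch below uses the rough 20-year methane figure cited above; these are the article's approximate numbers, not official inventory values.

```python
def co2_equivalent(mass_tons, gwp):
    """CO2-equivalent emissions: mass of gas x its global warming potential.

    GWP is always tied to a time horizon, because gases have different
    atmospheric lifetimes; the same emission looks larger or smaller
    depending on the window chosen.
    """
    return mass_tons * gwp

# Rough 20-year figures from the text.
GWP_20_YEARS = {"carbon_dioxide": 1, "methane": 86}

# One ton of methane warms like roughly 86 tons of CO2 over 20 years.
print(co2_equivalent(1.0, GWP_20_YEARS["methane"]))  # 86.0
```

Over a 100-year horizon the same ton of methane gets a much smaller multiplier, since most of it has broken down by then; that sensitivity to the time window is exactly why the horizon must always be stated.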

Methane’s largest sources are the fossil-fuel industry, agriculture, and waste. Cutting down leaks from the process of extracting oil and gas is one of the most straightforward and currently available ways to slim down methane emissions. There’s a growing movement to track methane more accurately—with satellites, among other techniques—and hold accountable the oil and gas companies that are releasing the most. 

Nitrous oxide: No laughing matter

You may have come across nitrous oxide at the dentist, where it might be called “laughing gas.” But its effects on climate change are serious, as the gas makes up about 6% of warming to date. 

Nitrous oxide emissions come almost entirely from agriculture. Applying certain nitrogen-based fertilizers can release the gas as bacteria break those chemicals down. Emissions can also come from burning certain agricultural wastes. 

Nitrous oxide emissions grew roughly 40% from 1980 to 2020. The gas lasts in the atmosphere for roughly a century, and over that time it traps more than 200 times as much heat as carbon dioxide does in the same period. 

Cutting down on these emissions will largely require careful adjustment of soil management practices in agriculture. Decreasing use of synthetic fertilizers, applying the fertilizer we do use more efficiently, and choosing products that eliminate as many emissions as possible will be the main levers we can pull.

Fluorinated gases: The quiet giants

Last but certainly not least, fluorinated gases are some of the most powerful greenhouse gases that we emit. A variety of them fall under this umbrella, including hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and SF6. Some last for centuries (or even millennia) in the atmosphere and have eye-popping effects, with the most potent carrying more than 10,000 times the global warming potential of carbon dioxide. 

HFCs are refrigerants, used in air conditioners, refrigerators, and similar appliances. One major area of research in heat pumps seeks alternative refrigerants that don’t have the same potential to warm the planet. The chemicals are also used in aerosol cans (think hair spray), as well as in fire retardants and solvents. 

SF6 is used in high-voltage power equipment, and it’s the single most potent greenhouse gas evaluated by the Intergovernmental Panel on Climate Change, clocking in at 23,500 times more powerful than carbon dioxide over the course of a century. Scientists are trying to find alternatives, but it’s turning out to be a difficult switch—as you’ll see if you read my latest story.

The good news is that we know change is possible when it comes to fluorinated gases. We’ve already moved away from one category, chlorofluorocarbons (CFCs). These were generally used in the same industries that use HFCs today, but they had the nasty habit of tearing a hole in the ozone layer. The 1987 Montreal Protocol successfully spurred a phaseout of CFCs, and we would be on track for significantly more warming without the change.


Now read the rest of The Spark

Related reading

Some scientists want to speed up or encourage chemical reactions that remove methane from the atmosphere, including researchers and companies who aim to spray iron particles above the ocean. 

Methane can come from food waste, and some companies want to capture that gas and use it for energy instead of allowing it to escape into the atmosphere.

Carbon dioxide emissions from aviation are only one source of the industry’s climate impact. Planes also produce contrails, clouds that form when water vapor condenses around emitted particulate matter, and these are a huge cause of the warming from air travel. Rerouting planes could help. 

Another thing

We’re inching closer to climate tipping points, thresholds where ecosystems and planetary processes can create feedback loops or rapid shifts. A UK research agency just launched a $106 million effort to develop early warning systems that could alert us if we get dangerously close to these tipping points. 

The agency will focus on two main areas: the melting of the Greenland Ice Sheet and the weakening of the North Atlantic Subpolar Gyre. Read more about the program’s goals in my colleague James Temple’s latest story.

Keeping up with climate  

Volkswagen has thrown over $20 billion at EV, battery, and software startups over the past six years. Experts aren’t sure this shotgun approach is helping the automaker compete on electric cars. (The Information)

We’re finally starting to understand how clouds affect climate change. Clouds reflect light back into space, but they also trap heat in the atmosphere. Researchers are starting to puzzle out how this will add up in our future climate. (New Scientist)

Vehicles in the US just keep getting bigger, and the trend is deadly. Larger vehicles are safer for their occupants but more dangerous for everyone around them. (The Economist)

→ Big cars can also be a problem for climate change, since they require bigger batteries and more power to get around. (MIT Technology Review)

The plant-based-meat industry has had trouble converting consumers in the US, and sales are on the decline. Now advocates are appealing to Congress for help. (Vox)

Last Energy wants to build small nuclear reactors, and the startup just secured $40 million in funding. The company is claiming that it can meet aggressive timelines and says it’ll bring its first reactor online as early as 2026 in Europe. (Canary Media)

There could be 43 million tons of wind turbine blades in landfills by 2050. Researchers say they’ve found alternative materials for the blades that could make them recyclable. (New York Times)

→ Other research aims to recycle the fiberglass in current blades using chemical methods. (MIT Technology Review)

The last coal-fired power plant in the UK is set to shut down at the end of the month. The facility just accepted its final fuel delivery. (BBC)

How plants could mine metals from the soil

Nickel may not grow on trees—but there’s a chance it could someday be mined using plants. Many plant species naturally soak up metal and concentrate it in their tissues, and new funding will support research on how to use that trait for plant-based mining, or phytomining. 

Seven phytomining projects just received $9.9 million in funding from the US Department of Energy’s Advanced Research Projects Agency for Energy (ARPA-E). The goal is to better understand which plants could help with mining and determine how researchers can tweak them to get our hands on all the critical metals we’ll need in the future.

Metals like nickel, crucial for the lithium-ion batteries used in electric vehicles, are in high demand. But building new mines to meet that demand can be difficult because the mining industry has historically faced community backlash, often over environmental concerns. New mining technologies could help diversify the supply of crucial metals and potentially offer alternatives to traditional mines.  

“Everyone wants to talk about opening a new gigafactory, but no one wants to talk about opening a new mine,” says Philseok Kim, program director at ARPA-E for the phytomining project. The agency saw a need for sustainable, responsible new mining technologies, even if they’re a major departure from what’s currently used in the industry. Phytomining is a prime example. “It’s a crazy idea,” Kim says.

Roughly 750 species of plants are known to be hyperaccumulators, meaning they soak up large amounts of metals and hold them within their tissues, Kim says. The plants, which tend to absorb these metals along with other nutrients in the soil, have adapted to tolerate them.

Of the species known to take in and concentrate metals, more than two-thirds do so with nickel. While nickel is generally toxic to plants at high concentrations, these species have evolved to thrive in nickel-rich soils, which are common in some parts of the world where geologic processes have brought the metal to the surface. 

Even in hyperaccumulators, the overall level of nickel in a plant’s tissues would still be relatively small—something like one milligram of metal for every gram of dried plant material. But burning a dried plant (which largely removes the organic material) can result in ash that’s roughly 25% nickel or even higher.

The sheer number of nickel-tolerant plants, plus the metal’s importance for energy technologies, made it the natural focus for early research, Kim says.

But while plants already have a head start on nickel mining, it wouldn’t be feasible to start commercial operations with them today. The most efficient known hyperaccumulators might be able to produce 50 to 100 kilograms of nickel per hectare of land each year, Kim says. That would yield enough of the metal for just two to four EV batteries, on average, from a plot larger than a typical soccer field. The research program will aim to boost that yield to at least 250 kilograms per hectare in an attempt to improve the prospects for economical mining.
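The arithmetic behind those figures is straightforward. The article's numbers imply roughly 25 kilograms of nickel per EV battery (real packs vary widely by chemistry and size), which is the assumption in this back-of-envelope sketch:

```python
# Implied by the article's figures: 50-100 kg/ha supports 2-4 batteries.
NICKEL_PER_BATTERY_KG = 25.0

def batteries_per_hectare(yield_kg_per_ha, per_battery_kg=NICKEL_PER_BATTERY_KG):
    """Back-of-envelope: annual nickel yield per hectare / nickel per battery."""
    return yield_kg_per_ha / per_battery_kg

# Today's best known hyperaccumulators:
print(batteries_per_hectare(100.0))  # 4.0 batteries per hectare per year

# ARPA-E's 250 kg/ha target:
print(batteries_per_hectare(250.0))  # 10.0
```

Even at the target yield, a hectare of crop supports only about ten battery packs a year, which shows why researchers see phytomining as a supplement to conventional mining, or a way to recover metal from mine waste, rather than a replacement.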

The seven projects being funded will aim to increase production in several ways. Some of the researchers are hunting for species that accumulate nickel even more efficiently than known species. One candidate is vetiver, a perennial grass that grows deep roots. It’s known to accumulate metals like lead and is often used in cleanup projects, so it could be a good prospect for soaking up other metals like nickel, says Rupali Datta, a biology researcher at Michigan Technological University and head of one of the projects.

Another awardee will examine over 100,000 herbarium samples—preserved and catalogued plant specimens. Using a technique called x-ray fluorescence scanning, the researchers will look for nickel in those plants’ tissues in the hopes of identifying new hyperaccumulator species. 

Other researchers are looking to boost the mining talents of known nickel hyperaccumulators. One problem with many of the established options is that they don’t have very high biomass—in other words, they’re small. So even if the plant has a relatively high concentration of nickel in its tissues, each plant will collect only a small amount of the metal. Researchers want to tweak the known hyperaccumulators to plump them up—for example, by giving them bigger root systems that would allow them to reach deeper into the soil for metal.

Another potential way to improve nickel uptake is to change the plants’ growth cycle. Most perennial plants will basically stop growing once they flower, says Richard Amasino, a biochemistry researcher at the University of Wisconsin–Madison. So one of his goals for the project is figuring out a way to delay flowering in Odontarrhena, a genus of plants with bright yellow flowers, so they have more time to soak up nickel before they quit growing for the season.

Researchers are also working with these known target species to make sure they won’t become invasive in the places they’re planted. For example, Odontarrhena are native to Europe, and researchers want to make sure they wouldn’t run wild and disrupt natural ecosystems if they’re brought to the US or other climates where they’d grow well.

Hyperaccumulating plants are already used in mineral exploration, but they likely won’t be able to produce the high volumes of nickel we mine today, Simon Jowitt, director of the Center for Research in Economic Geology at the University of Nevada, Reno, said in an email. But plants might be a feasible solution for dealing with mine waste, he said. 

There’s also the question of what will happen once plants suck up the metals from a given area of soil. According to Jowitt, that layer may need to be removed to access more metal from the lower layers after a crop is planted and harvested. 

In addition to identifying and altering target species, researchers on all these projects need to gain a better understanding of where plants might be grown and whether and how natural processes like groundwater movement might replenish target metals in the soil, Kim says. Scientists will also need to analyze the environmental sustainability of phytomining, he adds. For example, burning plants to produce nickel-rich ash will lead to greenhouse-gas emissions. 

Even so, addressing climate change is all about making and installing things, Kim adds, and we need lots of materials to do that. Phytomining may be able to help in the future. “This is something we believe is possible,” Kim says, “but it’s extremely hard.”

Roblox is launching a generative AI that builds 3D environments in a snap

Roblox plans to roll out a generative AI tool that will let creators make whole 3D scenes just using text prompts, it announced today. 

Once it’s up and running, developers on the hugely popular online game platform will be able to simply write “Generate a race track in the desert,” for example, and the AI will spin one up. Users will also be able to modify scenes or expand their scope—say, to change a daytime scene to night or switch the desert for a forest. 

Although developers can already create scenes like this manually in the platform’s creator studio, Roblox claims its new generative AI model will make the changes happen in a fraction of the time. It also claims that it will give developers with minimal 3D art skills the ability to craft more compelling environments. The firm didn’t give a specific date for when the tool will be live.

Developers are already excited. “Instead of sitting and doing it by hand, now you can test different approaches,” says Marcus Holmström, CEO of The Gang, a company that builds some of the top games on Roblox.  “For example, if you’re going to build a mountain, you can do different types of mountains, and on the fly, you can change it. Then we would tweak it and fix it manually so it fits. It’s going to save a lot of time.”

Roblox’s new tool works by “tokenizing” the 3D blocks that make up its millions of in-game worlds, or treating them as units that can be assigned a numerical value on the basis of how likely they are to come next in a sequence. This is similar to the way in which a large language model handles words or fractions of words. If you put “The capital of France is …” into a large language model like GPT-4, for example, it assesses what the next token is most likely to be. In this case, it would be “Paris.” Roblox’s system handles 3D blocks in much the same way to create the environment, block by most likely next block. 

Finding a way to do this has been difficult, for a couple of reasons. One, there’s far less data for 3D environments than there is for text. To train its models, Roblox has had to rely on user-generated data from creators as well as external data sets. 

“Finding high-quality 3D information is difficult,” says Anupam Singh, vice president of AI and growth engineering at Roblox. “Even if you get all the data sets that you would think of, being able to predict the next cube requires it to have literally three dimensions, X, Y, and Z.”

The lack of 3D data can create weird situations, where objects appear in unusual places—a tree in the middle of your racetrack, for example. To get around this issue, Roblox will use a second AI model that has been trained on more plentiful 2D data, pulled from open-source and licensed data sets, to check the work of the first one. 

Basically, while one AI is making a 3D environment, the 2D model will convert the new environment to 2D and assess whether or not the image is logically consistent. If the images don’t make sense and you have, say, a cat with 12 arms driving a racecar, the 3D AI generates a new block again and again until the 2D AI “approves.”
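
The generate-and-check loop described above amounts to rejection sampling. The sketch below uses hypothetical stand-ins for both models (a random "3D generator" and a rule-based "2D critic"), not Roblox’s actual systems:

```python
import random

def propose_block(scene):
    """Toy stand-in for the 3D generator: propose a candidate block."""
    return random.choice(["track", "sand", "cat_with_12_arms"])

def looks_consistent(scene, block):
    """Toy stand-in for the 2D critic: reject implausible additions."""
    return block != "cat_with_12_arms"

def generate_next(scene, max_tries=100):
    """Regenerate candidates until the critic approves one."""
    for _ in range(max_tries):
        block = propose_block(scene)
        if looks_consistent(scene, block):
            return block
    raise RuntimeError("no consistent block found")

scene = ["track", "track"]
scene.append(generate_next(scene))
print(scene[-1])  # an approved block, never the inconsistent one
```

The design point is that the checker never edits the 3D output directly; it only accepts or rejects, and the generator resamples until something passes.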

Roblox game designers will still need to be involved in crafting fun game environments for the platform’s millions of players, says Chris Totten, an associate professor in the animation game design program at Kent State University. “A lot of level generators will produce something that’s plain and flat. You need a human guiding hand,” he says. “It’s kind of like people trying to do an essay with ChatGPT for a class. It is also going to open up a conversation about what does it mean to do good, player-responsive level design?”

The new tool is part of Roblox’s push to integrate AI into all its processes. The company currently has 250 AI models live. One AI analyzes voice chat in real time and screens for bad language, instantly issuing reprimands and possible bans for repeated infractions.

Roblox plans to open-source its 3D foundation model so that it can be modified and used as a basis for innovation. “We’re doing it in open source, which means anybody, including our competitors, can use this model,” says Singh. 

Getting it into as many hands as possible also opens creative possibilities for developers who are not as skilled at creating Roblox environments. “There are a lot of developers that are working alone, and for them, this is going to be a game changer, because now they don’t have to try to find someone else to work with,” says Holmström.

The race to replace the powerful greenhouse gas that underpins the power grid

The power grid is underpinned by a single gas that is used to insulate a range of high-voltage equipment. The problem is, it’s also a super powerful greenhouse gas, a nightmare for climate change.

Sulfur hexafluoride (or SF6) is far from the most common gas that warms the planet, contributing around 1% of warming to date—carbon dioxide and methane are far more abundant and better known. However, like many other fluorinated gases, SF6 is especially potent: It traps about 20,000 times more energy than carbon dioxide does over the course of a century, and it can last in the atmosphere for 1,000 years or more.

Despite its relatively small contribution so far, emissions of the gas are ticking up, and the growth rate has been climbing every year. SF6 emissions in China nearly doubled between 2011 and 2021, accounting for more than half the world’s emissions of the gas.

Now, companies are looking to do away with equipment that relies on the gas and searching for replacements that can match its performance. Last week, Hitachi Energy announced it’s producing new equipment that replaces SF6 with other materials. And there’s momentum building to ban SF6 in the power industry, including a recently passed plan in the European Union that will phase out the gas’s use in high-voltage equipment by 2032. 

As equipment manufacturers work to produce alternatives, some researchers say that we should go even further and are trying to find solutions that avoid fluorine-containing materials entirely.

High voltage, high stakes

You probably have a circuit-breaker box in your home—if a circuit gets overloaded, the breaker flips, stopping the flow of electricity. The power grid has something similar, called switchgear.  

The difference is, it often needs to handle something like a million times more energy than your home’s equipment does, says Markus Heimbach, executive vice president and managing director of the high-voltage products business unit at Hitachi Energy. That’s because parts of the power grid operate at high voltages, allowing them to move energy around while losing as little as possible. Those high voltages require careful insulation at all times and safety measures in case something goes wrong.

Some switchgear uses the same materials as your home circuit-breaker boxes—there’s air around it to insulate it. But when it’s scaled up to handle high voltage, it ends up being gigantic and requiring a large land footprint, making it inconvenient for larger, denser cities.

The solution today is SF6, “a super gas, from a technology point of view,” Heimbach says. It’s able to insulate equipment during normal operation and help interrupt current when needed. And the whole thing has a much smaller footprint than air-insulated equipment.

The problem is, small amounts of SF6 leak out of equipment during normal operation, and more can be released during a failure or when old equipment isn’t handled properly. When the gas escapes, its strong ability to trap heat and the fact that it has such a long lifetime makes it a menace in the atmosphere.

Some governments will soon ban the gas for the power industry, which makes up the vast majority of the emissions. The European Union agreed to ban SF6-containing medium-voltage switchgear by 2030, and high-voltage switchgear that uses the gas by 2032. Several states in the US have proposed or adopted limits and phaseouts.

Making changes 

Hitachi Energy recently announced it’s producing high-voltage switchgear that can handle up to 550 kilovolts (kV). The model follows products rated for 420 kV that the company began installing in 2023—more than 250 have been booked by customers to date, Heimbach says.

Hitachi Energy’s new switchgear replaces SF6 with a gas mixture that contains mostly carbon dioxide and oxygen. It works as well as SF6 and is just as safe and reliable, but with a much lower global warming potential, trapping 99% less energy in the atmosphere, Heimbach says.

However, for some of its new equipment, Hitachi Energy still uses some C4-fluoronitriles, which help with insulation, Heimbach says. These gases are present at a low fraction, less than 5% of the mixture, and are less potent than SF6. But C4-fluoronitriles are still powerful greenhouse gases, up to a few thousand times more potent than carbon dioxide. These and other fluorinated substances could soon be in trouble too—chemical giant 3M announced in late 2022 that it would stop manufacturing all fluoropolymers, fluorinated fluids, and PFAS-additive products by 2025.

In order to eliminate the need for fluorine-containing gases, some researchers are looking into the grid’s past for alternatives. “We know that there’s no one-for-one replacement gas that has the properties of SF6,” says Lukas Graber, an associate professor in electrical engineering at Georgia Institute of Technology.

SF6 is both extremely stable and extremely electronegative, meaning it tends to grab onto free electrons, and nothing else can quite match it, Graber says. So he’s working on a research project that aims to replace SF6 gas with supercritical carbon dioxide. (Supercritical fluids are those at temperatures and pressures so high that distinct liquid and gas phases don’t quite exist.) The inspiration came from equipment that used to use oil-based materials—instead of trying to grab electrons like SF6, supercritical carbon dioxide can basically slow them down.

Graber and his research team received project funding from the US Department of Energy’s Advanced Research Projects Agency for Energy. The first small-scale prototype is nearly finished, he adds, and the plan is to test out a full-scale prototype in 2025.

Utilities are known for being conservative, since the safety and reliability of the electrical grid have high stakes, Hitachi Energy’s Heimbach says. But with more SF6 bans coming, they’ll need to find and adopt solutions that don’t rely on the gas.

How “personhood credentials” could help prove you’re a human online

As AI models become better at mimicking human behavior, it’s becoming increasingly difficult to distinguish between real human internet users and sophisticated systems imitating them. 

That’s a real problem when those systems are deployed for nefarious ends like spreading misinformation or conducting fraud, and it makes it a lot harder to trust what you encounter online.

A group of 32 researchers from institutions including OpenAI, Microsoft, MIT, and Harvard has developed a potential solution—a verification concept called “personhood credentials.” These credentials prove that their holder is a real person, without revealing any further information about the person’s identity. The team explored the idea in a non-peer-reviewed paper posted to the arXiv preprint server earlier this month.

Personhood credentials rely on the fact that AI systems still cannot bypass state-of-the-art cryptographic systems or pass as people in the offline, real world. 

To request such credentials, people would have to physically go to one of a number of issuers, like a government or some other kind of trusted organization. They would be asked to provide evidence of being a real human, such as a passport or biometric data. Once approved, they’d receive a single credential to store on their devices the way it’s currently possible to store credit and debit cards in smartphones’ wallet apps.

To use these credentials online, a user could present them to a third-party digital service provider, which could then verify them using a cryptographic protocol called a zero-knowledge proof. That would confirm the holder was in possession of a personhood credential without disclosing any further unnecessary information.
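
A real deployment would rely on a zero-knowledge proof, which is beyond a short sketch. The simplified Python flow below (all names hypothetical) captures only the shape of the idea: a trusted issuer signs a random pseudonymous token, so a service can confirm the token is valid without learning who holds it.

```python
import hashlib
import hmac
import secrets

# Key held only by the trusted issuer (e.g., a government office).
# In a real zero-knowledge scheme, no shared secret like this would
# be needed at verification time; this is a deliberate simplification.
ISSUER_KEY = secrets.token_bytes(32)

def issue_credential():
    """Issuer: after in-person identity checks, sign a random token.

    The token itself reveals nothing about the person's identity.
    """
    token = secrets.token_hex(16)
    sig = hmac.new(ISSUER_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token, sig

def verify_credential(token, sig):
    """Service: check (via the key-holding issuer) that the signature is valid."""
    expected = hmac.new(ISSUER_KEY, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

token, sig = issue_credential()
print(verify_credential(token, sig))      # True
print(verify_credential(token, "bogus"))  # False
```

The property the paper’s authors want, which this sketch only approximates, is that verification leaks nothing beyond "this holder has a valid credential", not even a stable identifier that could link the holder’s activity across services.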

The ability to filter out anyone other than verified humans on a platform could be useful in many ways. People could reject Tinder matches that don’t come with personhood credentials, for example, or choose not to see anything on social media that wasn’t definitely posted by a person. 

The authors want to encourage governments, companies, and standards bodies to consider adopting such a system in the future to prevent AI deception from ballooning out of our control. 

“AI is everywhere. There will be many issues, many problems, and many solutions,” says Tobin South, a PhD student at MIT who worked on the project. “Our goal is not to prescribe this to the world, but to open the conversation about why we need this and how it could be done.”

Possible technical options already exist. For example, a network called Idena claims to be the first blockchain proof-of-person system. It works by getting humans to solve puzzles that would be difficult for bots within a short time frame. The controversial Worldcoin program, which collects users’ biometric data, bills itself as the world’s largest privacy-preserving human identity and financial network. It recently partnered with the Malaysian government to provide proof of humanness online by scanning users’ irises, which creates a code. As in the concept of personhood credentials, each code is protected using cryptography.

However, the project has been criticized for using deceptive marketing practices, collecting more personal data than acknowledged, and failing to obtain meaningful consent from users. Regulators in Hong Kong and Spain banned Worldcoin from operating earlier this year, while its operations have been suspended in countries including Brazil, Kenya, and India. 

So fresh concepts are still needed. The rapid rise of accessible AI tools has ushered in a dangerous period in which internet users are hyper-suspicious about what is and isn’t true online, says Henry Ajder, an expert on AI and deepfakes who is an advisor to Meta and the UK government. And while ideas for verifying personhood have been around for some time, these credentials feel like one of the most substantive ideas for how to push back against encroaching skepticism, he says.

But the biggest challenge the credentials will face is getting enough platforms, digital services, and governments to adopt them, since they may feel uncomfortable conforming to a standard they don’t control. “For this to work effectively, it would have to be something which is universally adopted,” he says. “In principle the technology is quite compelling, but in practice and the messy world of humans and institutions, I think there would be quite a lot of resistance.”

Martin Tschammer, head of security at the startup Synthesia, which creates AI-generated hyperrealistic deepfakes, says he agrees with the principle driving personhood credentials: the need to verify humans online. However, he is unsure whether it’s the right solution or whether it would be practical to implement. He also expresses skepticism over who would run such a scheme.  

“We may end up in a world in which we centralize even more power and concentrate decision-making over our digital lives, giving large internet platforms even more ownership over who can exist online and for what purpose,” he says. “And given the lackluster performance of some governments in adopting digital services, and autocratic tendencies that are on the rise, is it practical or realistic to expect this type of technology to be adopted en masse and in a responsible way by the end of this decade?” 

Rather than waiting for collaboration across industries, Synthesia is currently evaluating how to integrate other personhood-proving mechanisms into its products. He says it already has several measures in place. For example, it requires businesses to prove that they are legitimate registered companies, and will ban and refuse refunds to customers found to have broken its rules. 

One thing is clear: We are in urgent need of ways to differentiate humans from bots, and encouraging discussions between stakeholders in the tech and policy worlds is a step in the right direction, says Emilio Ferrara, a professor of computer science at the University of Southern California, who was not involved in the project. 

“We’re not far from a future where, if things remain unchecked, we’re going to be essentially unable to tell apart interactions that we have online with other humans or some kind of bots. Something has to be done,” he says. “We can’t be naïve as previous generations were with technologies.”

AI’s impact on elections is being overblown

This year, close to half the world’s population has the opportunity to participate in an election. And according to a steady stream of pundits, institutions, academics, and news organizations, there’s a major new threat to the integrity of those elections: artificial intelligence. 

The earliest predictions warned that a new AI-powered world was, apparently, propelling us toward a “tech-enabled Armageddon” where “elections get screwed up”, and that “anybody who’s not worried [was] not paying attention.” The internet is full of doom-laden stories proclaiming that AI-generated deepfakes will mislead and influence voters, as well as enabling new forms of personalized and targeted political advertising. Though such claims are concerning, it is critical to look at the evidence. With a substantial number of this year’s elections concluded, it is a good time to ask how accurate these assessments have been so far. The preliminary answer seems to be not very; early alarmist claims about AI and elections appear to have been blown out of proportion.

While there will be more elections this year where AI could have an effect, the United States being one likely to attract particular attention, the trend observed thus far is unlikely to change. AI is being used to try to influence electoral processes, but these efforts have not been fruitful. Commenting on the upcoming US election, Meta’s latest Adversarial Threat Report acknowledged that AI was being used to meddle—for example, by Russia-based operations—but that “GenAI-powered tactics provide only incremental productivity and content-generation gains” to such “threat actors.” This echoes comments from the company’s president of global affairs, Nick Clegg, who earlier this year stated that “it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections.”

Far from being dominated by AI-enabled catastrophes, this election “super year” at that point was pretty much like every other election year.

While Meta has a vested interest in minimizing AI’s alleged impact on elections, it is not alone. Similar findings were also reported by the UK’s respected Alan Turing Institute in May. Researchers there studied more than 100 national elections held since 2023 and found “just 19 were identified to show AI interference.” Furthermore, the evidence did not demonstrate any “clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”

This all raises a question: Why were these initial speculations about AI-enabled electoral interference so off, and what does it tell us about the future of our democracies? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology. 

First, mass persuasion is notoriously challenging. AI tools may facilitate persuasion, but other factors are critical. When presented with new information, people generally update their beliefs accordingly; yet even in the best conditions, such updating is often minimal and rarely translates into behavioral change. Though political parties and other groups invest colossal sums to influence voters, evidence suggests that most forms of political persuasion have very small effects at best. And in most high-stakes events, such as national elections, a multitude of factors are at play, diminishing the effect of any single persuasion attempt.

Second, for a piece of content to be influential, it must first reach its intended audience. But today, a tsunami of information is published daily by individuals, political campaigns, news organizations, and others. Consequently, AI-generated material, like any other content, faces significant challenges in cutting through the noise and reaching its target audience. Some political strategists in the United States have also argued that the overuse of AI-generated content might make people simply tune out, further reducing the reach of manipulative AI content. Even if a piece of such content does reach a significant number of potential voters, it will probably not succeed in influencing enough of them to alter election results.

Third, emerging research challenges the idea that using AI to microtarget people and sway their voting behavior works as well as initially feared. Voters seem to not only recognize excessively tailored messages but actively dislike them. According to some recent studies, the persuasive effects of AI are also, at least for now, vastly overstated. This is likely to remain the case, as ever-larger AI-based systems do not automatically translate to better persuasion. Political campaigns seem to have recognized this too. If you speak to campaign professionals, they will readily admit that they are using AI, but mainly to optimize “mundane” tasks such as fundraising, get-out-the-vote efforts, and overall campaign operations rather than generating new AI-generated, highly tailored content.

Fourth, voting behavior is shaped by a complex nexus of factors. These include gender, age, class, values, identities, and socialization. Information, regardless of its veracity or origin—whether made by an AI or a human—often plays a secondary role in this process. This is because the consumption and acceptance of information are contingent on preexisting factors, like whether it chimes with the person’s political leanings or values, rather than whether that piece of content happens to be generated by AI.

Concerns about AI and democracy, and particularly elections, are warranted. The use of AI can perpetuate and amplify existing social inequalities or reduce the diversity of perspectives individuals are exposed to. The harassment and abuse of female politicians with the help of AI is deplorable. And the perception, partially co-created by media coverage, that AI has significant effects could itself be enough to diminish trust in democratic processes and sources of reliable information, and weaken the acceptance of election results. None of this is good for democracy and elections. 

However, these points should not make us lose sight of threats to democracy and elections that have nothing to do with technology: mass voter disenfranchisement; intimidation of election officials, candidates, and voters; attacks on journalists and politicians; the hollowing out of checks and balances; politicians peddling falsehoods; and various forms of state oppression (including restrictions on freedom of speech, press freedom and the right to protest). 

Of at least 73 countries holding elections this year, only 47 are classified as full (or at least flawed) democracies, according to Our World in Data/Economist Democracy Index, with the rest being hybrid or authoritarian regimes. In countries where elections are not even free or fair, and where political choice that leads to real change is an illusion, people have arguably bigger fish to fry.

And still, technology—including AI—often becomes a convenient scapegoat, singled out by politicians and public intellectuals as one of the major ills befalling democratic life. Earlier this year, Swiss president Viola Amherd warned at the World Economic Forum in Davos, Switzerland, that “advances in artificial intelligence allow … false information to seem ever more credible” and present a threat to trust. Pope Francis, too, warned that fake news could be legitimized through AI. US Deputy Attorney General Lisa Monaco said that AI could supercharge mis- and disinformation and incite violence at elections. This August, the mayor of London, Sadiq Khan, called for a review of the UK’s Online Safety Act after far-right riots across the country, arguing that “the way the algorithms work, the way that misinformation can spread very quickly and disinformation … that’s a cause to be concerned. We’ve seen a direct consequence of this.”

The motivations to blame technology are plenty and not necessarily irrational. For some politicians, it can be easier to point fingers at AI than to face scrutiny or commit to improving democratic institutions that could hold them accountable. For others, attempting to “fix the technology” can seem more appealing than addressing some of the fundamental issues that threaten democratic life. Wanting to speak to the zeitgeist might play a role, too.

Yet we should remember that there’s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed. Overly alarmist narratives about AI’s presumed effects on democracy risk fueling distrust and sowing confusion among the public—potentially further eroding already low levels of trust in reliable news and institutions in many countries. One point often raised in the context of these discussions is the need for facts. People argue that we cannot have democracy without facts and a shared reality. That is true. But we cannot bang on about needing a discussion rooted in facts when evidence against the narrative of AI turbocharging democratic and electoral doom is all too easily dismissed. Democracy is under threat, but our obsession with AI’s supposed impact is unlikely to make things better—and could even make them worse when it leads us to focus solely on the shiny new thing while distracting us from the more lasting problems that imperil democracies around the world. 

Felix M. Simon is a research fellow in AI and News at the Reuters Institute for the Study of Journalism; Keegan McBride is an assistant professor in AI, government, and policy at the Oxford Internet Institute; Sacha Altay is a research fellow in the department of political science at the University of Zurich.

Here’s how ed-tech companies are pitching AI to teachers

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

This back-to-school season marks the third year in which AI models like ChatGPT will be used by thousands of students around the globe (among them my nephews, who tell me with glee each time they ace an assignment using AI). A top concern among educators remains that when students use such models to write essays or come up with ideas for projects, they miss out on the hard and focused thinking that builds creative reasoning skills. 

But this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down the use of it in the classroom, these companies are coaching teachers how to use AI tools to cut down on time they spend on tasks like grading, providing feedback to students, or planning lessons. They’re positioning AI as a teacher’s ultimate time saver. 

One company, called Magic School, says its AI tools like quiz generators and text summarizers are used by 2.5 million educators. Khan Academy offers a digital tutor called Khanmigo, which it bills to teachers as “your free, AI-powered teaching assistant.” Teachers can use it to assist students in subjects ranging from coding to humanities. Writing coaches like Pressto help teachers provide feedback on student essays.  

The pitches from ed-tech companies often cite a 2020 report from McKinsey and Microsoft, which found teachers work an average of 50 hours per week. Many of those hours, according to the report, consist of “late nights marking papers, preparing lesson plans, or filling out endless paperwork.” The authors suggested that embracing AI tools could save teachers 13 hours per week. 

Companies aren’t the only ones making this pitch. Educators and policymakers have also spent the last year pushing for AI in the classroom. Education departments in South Korea, Japan, Singapore, and US states like North Carolina and Colorado have issued guidance for how teachers can positively and safely incorporate AI. 

But when it comes to how willing teachers are to turn over some of their responsibilities to an AI model, the answer really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.

“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”

Then there are a host of not-so-menial tasks that teachers are more skeptical AI can excel at. They often come down to two core teaching responsibilities: lesson planning and grading. A host of companies offer large language models that they say can generate lesson plans to conform to different curriculum standards. Some teachers, including in some California districts, have also used AI models to grade and provide feedback for essays. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability. 

When companies promise time savings for planning and grading, it is “a huge red flag,” he says, because “those are core parts of the profession.” He adds, “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback on creative skills like writing is controversial too: “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.” 

So how eager are teachers to adopt AI to save time? In May, a Pew Research Center poll found that only 6% of teachers think AI provides more benefit than harm in education. But with AI changing faster than ever, this school year might be when ed-tech companies start to win them over.

Now read the rest of The Algorithm


Deeper learning

How machine learning is helping us probe the secret names of animals

Until now, only humans, dolphins, elephants, and probably parrots had been known to use specific sounds to call out to other individuals. But now, researchers armed with audio recorders and pattern-recognition software are making unexpected discoveries about the secrets of animal names—at least with small monkeys called marmosets. They’ve found that the animals will adjust the sounds they make in a way that’s specific to whoever they’re “conversing” with at the time.  

Why this matters: In years past, it’s been argued that human language is unique and that animals lack both the brains and vocal apparatus to converse. But there’s growing evidence that isn’t the case, especially now that the use of names has been found in at least four distantly related species. Read more from Antonio Regalado.

Bits and bytes

How will AI change the future of sex? 

Porn and real-life sex affect each other in a loop. If people become accustomed to getting exactly what they want from erotic media, this could further affect their expectations of relationships. (MIT Technology Review)

There’s a new way to build neural networks that could make AI more understandable

The new method, studied in detail by a group led by researchers at MIT, could make it easier to understand why neural networks produce certain outputs, help verify their decisions, and even probe for bias. (MIT Technology Review)

Researchers built an “AI scientist.” What can it do?

The large language model does everything from reading the literature to writing and reviewing its own papers, but it has a limited range of applications so far. (Nature)

OpenAI is weighing changes to its corporate structure as it seeks more funding 

These discussions come as Apple, Nvidia, and Microsoft are considering a funding round that would value OpenAI at more than $100 billion. (Financial Times)

The UK is building an alarm system for climate tipping points

The UK’s new moonshot research agency just launched an £81 million ($106 million) program to develop early warning systems to sound the alarm if Earth gets perilously close to crossing climate tipping points.

A climate tipping point is a threshold beyond which certain ecosystems or planetary processes begin to shift from one stable state to another, triggering dramatic and often self-reinforcing changes in the climate system. 

The Advanced Research and Invention Agency (ARIA) will announce today that it’s seeking proposals to work on systems for two related climate tipping points. One is the accelerating melting of the Greenland Ice Sheet, which could raise sea levels dramatically. The other is the weakening of the North Atlantic Subpolar Gyre, a huge current rotating counterclockwise south of Greenland that may have played a role in triggering the Little Ice Age around the 14th century. 

The goal of the five-year program will be to reduce scientific uncertainty about when these events could occur, how they would affect the planet and the species on it, and over what period those effects might develop and persist. In the end, ARIA hopes to deliver a proof of concept demonstrating that early warning systems can be “affordable, sustainable, and justified.” No such dedicated system exists today, though there’s considerable research being done to better understand the likelihood and consequences of surpassing these and other climate tipping points.

Sarah Bohndiek, a program director for the tipping points research program, says we underappreciate the possibility that crossing these points could significantly accelerate the effects of climate change and increase the dangers, possibly within the next few decades.

By developing an early warning system, “we might be able to change the way that we think about climate change and think about our preparedness for it,” says Bohndiek, a professor of biomedical physics at the University of Cambridge. 

ARIA intends to support teams that will work toward three goals: developing low-cost sensors that can withstand harsh environments and provide more precise and needed data about the conditions of these systems; deploying those and other sensing technologies to create “an observational network to monitor these tipping systems”; and building computer models that harness the laws of physics and artificial intelligence to pick up “subtle early warning signs of tipping” in the data.
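ARIA hasn't specified what those models will look like, but one classic statistical early-warning indicator from the tipping-point literature is "critical slowing down": as a system loses resilience, its fluctuations become more sluggish and lag-1 autocorrelation rises. The sketch below is purely illustrative (all function names and the synthetic data are ours, not ARIA's), showing how that indicator can be tracked over a sliding window:

```python
import numpy as np

def lag1_autocorrelation(window):
    """Lag-1 autocorrelation of a 1-D array (biased estimator)."""
    x = window - window.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def rolling_ac1(series, window_size):
    """Lag-1 autocorrelation over a sliding window.

    A sustained rise in this value is one classic statistical
    warning sign of 'critical slowing down' before a tipping point.
    """
    return np.array([
        lag1_autocorrelation(series[i:i + window_size])
        for i in range(len(series) - window_size + 1)
    ])

# Synthetic demo: an AR(1) process whose memory (phi) slowly
# increases, mimicking a system losing resilience over time.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.95, n)  # slowly rising persistence
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac = rolling_ac1(x, window_size=400)
print(f"early AC1 = {ac[0]:.2f}, late AC1 = {ac[-1]:.2f}")
```

In real monitoring, the series would be an observed quantity such as ocean salinity or ice-melt rate, detrended first; a rising trend in the indicator is suggestive, not proof, which is why reducing the surrounding scientific uncertainty is the program's stated goal.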

But observers stress that designing precise early warning systems for either system would be no simple feat and might not be possible anytime soon. Not only do scientists have limited understanding of these systems, but the data on how they've behaved in the past is patchy and noisy, and setting up extensive monitoring tools in these environments is expensive and cumbersome. 

Still, there’s wide agreement that we need to better understand these systems and the risks that the world may face.

Unlocking breakthroughs

It is clear that the tipping of either of these systems could have huge effects on Earth and its inhabitants.

As the world warmed in recent decades, trillions of tons of ice melted off the Greenland Ice Sheet, pouring fresh water into the North Atlantic, pushing up ocean levels, and reducing the amount of heat that the snow and ice reflected back into space. 

Melting rates are increasing as Arctic warming speeds ahead of the global average and hotter ocean waters chip away at ice shelves that buttress land-based glaciers. Scientists fear that as those shelves collapse, the ice sheet will become increasingly unstable. 

The complete loss of the ice sheet would raise global sea levels by more than 20 feet (six meters), submerging coastlines and kick-starting mass climate migration around the globe.

But at any point along the way, the influx of water into the North Atlantic could also substantially slow down the convection systems that help to drive the Subpolar Gyre, because fresher water is less dense and thus less prone to sinking. (Saltier, cooler water readily sinks.)

The weakening of the Subpolar Gyre could cool parts of northwest Europe and eastern Canada, shift the jet stream northward, create more erratic weather patterns across Europe, and undermine the productivity of agriculture and fisheries, according to one study last year. 

The Subpolar Gyre may also influence the strength of the Atlantic Meridional Overturning Circulation (AMOC), a network of ocean currents that moves massive amounts of heat, salt, and carbon dioxide around the globe. The specifics of how a weakened Subpolar Gyre would affect the AMOC are still the subject of ongoing research, but a dramatic slowdown or shutdown of that system is considered one of the most dangerous climate tipping points. It could substantially cool Northern Europe, among other wide-ranging effects.  

The tipping of the AMOC itself, however, is not the focus of the ARIA research program. 

The agency, established last year to “unlock scientific and technological breakthroughs,” is a UK answer to the US’s DARPA and ARPA-E research programs. Other projects it’s funding include efforts to develop precision neurotechnologies, improve robot dexterity, and build safer and more energy-efficient AI systems. ARIA is also setting up programs for developing synthetic plants and exploring climate interventions that could cool the planet, including solar geoengineering. 

Bohndiek and the other program director of the tipping points program—Gemma Bale, an assistant professor at the University of Cambridge—are both medical physicists who previously focused on developing medical devices. At ARIA, they initially expected to work on efforts to decentralize health care.

But Bohndiek says they soon realized that “a lot of these things that need to change at the individual health level will be irrelevant if climate change truly is going to cross these big thresholds.” She adds, “If we’re going to end up in a society where the world is so much warmer … does the problem of decentralizing health care matter anymore?” 

Bohndiek and Bale stress that they hope the program will draw applications from researchers who haven’t traditionally worked on climate change. They add that any research teams proposing to work in or around Greenland must take appropriate steps to engage with local communities, governments, and other research groups.

Tipping dangers

Efforts are already underway to develop greater understanding of the Subpolar Gyre and the Greenland Ice Sheet, including the likelihood, timing, and consequences of their tipping into different states.

There are, for instance, regular field expeditions to measure and refine modeling of ice loss in Greenland. A variety of research groups have set up sensor networks that cross various points of the Atlantic to more closely monitor the shifting conditions of current systems. And several studies have already highlighted the appearance of some “early warning signals” of a potential collapse of the AMOC in the coming decades.

But the goal of the ARIA program is to accelerate such research efforts and sharpen the field’s focus on improving our ability to predict tipping events. 

William Johns, an oceanographer focused on observation of the AMOC at the University of Miami, says the field is a long way from being able to state confidently that systems like the Subpolar Gyre or AMOC will weaken beyond the bounds of normal natural fluctuations, much less say with any precision when they would do so. 

He stresses that there’s still wide disagreement between models on these sorts of questions and limited evidence of what took place before they tipped in the ancient past, all of which makes it difficult to even know what signals we should be monitoring for most closely.

Jaime Palter, an associate professor of oceanography at the University of Rhode Island, adds that she found it a “puzzling” choice to fund a research program focused on the tipping of the Subpolar Gyre. She notes that researchers believe the wind drives the system more than convection, that its connection to the AMOC isn’t well understood, and that the slowdown of the latter system is the one that more of the field is focused on—and more of the world is worried about.

But she and Johns both say that providing funds to monitor these systems more closely is critical to improve scientific understanding of how they work and the odds that they will tip.

Radical interventions

So what could the world do if ARIA or anyone else does manage to develop systems that can predict, with high confidence, that one of these systems will shift into a new state in, say, the next decade?

Bohndiek stresses that the effects of reaching a tipping point wouldn’t be immediate, and that the world would still have years or even decades to take actions that might prevent the breakdown of such systems, or begin adapting to the changes they’ll bring. In the case of runaway melting of the ice sheet, that could mean building higher seawalls or relocating cities. In the case of the Subpolar Gyre weakening, big parts of Europe might have to look to other areas of the world for their food supplies.

More reliable predictions might also alter people’s thinking about more dramatic interventions, such as massive and hugely expensive engineering projects to prop up ice shelves or to freeze glaciers more stably onto the bedrock they’re sliding upon. 

Similarly, they might shift how some people weigh the trade-offs between the dangers of climate change and the risks of interventions like solar geoengineering, which would involve releasing particles into the atmosphere that could reflect more heat back into space.

But some observers note that if enough fresh water is pouring into the Atlantic to weaken the gyre and substantially slow the broader Atlantic current system, there’s very little the world can do to stop it.

“I’m afraid I don’t really see an action you could take,” Johns says. “You can’t go vacuum up all the fresh water—it’s not going to be feasible—and you can’t stop the ice from melting on the scale we’d have to.”

Bale readily acknowledges that they’ve selected a very hard problem to solve, but she stresses that the point of ARIA research programs is to work at the “edge of the possible.” 

“We genuinely don’t know if an early warning system for these systems is possible,” she says. “But I think if it is possible, we know that it would be valuable and important for society, and that’s part of our mission.”