The world’s first industrial-scale plant for green steel promises a cleaner future

As of 2023, nearly 2 billion metric tons of steel were being produced worldwide annually, enough to cover Manhattan in a layer more than 13 feet thick. 

Making this metal produces a huge amount of carbon dioxide. Steelmaking accounts for around 8% of the world’s carbon emissions, making it one of the largest industrial emitters—a far bigger contributor than sources such as aviation. The most common manufacturing process yields about two tons of carbon dioxide for every ton of steel.

A handful of groups and companies are now making serious progress toward low- or zero-emission steel. Among them, the Swedish company Stegra stands out. (Originally named H2 Green Steel, the company renamed itself Stegra—which means “to elevate” in Swedish—in September.) The startup, formed in 2020, has raised close to $7 billion and is building a plant in Boden, a town in northern Sweden. It will be the first industrial-scale plant in the world to make green steel. Stegra says it is on track to begin production in 2026, initially producing 2.5 million metric tons per year and eventually making 4.5 million metric tons. 

The company uses so-called green hydrogen, which is produced using renewable energy, to process iron ore into steel. Located in a part of Sweden with abundant hydropower, Stegra’s plant will use hydro and wind power to drive a massive electrolyzer that splits water to make the hydrogen. The hydrogen gas will then be used to pull the oxygen out of iron ore to make metallic iron—a key step in steelmaking.  

This process of using hydrogen to make iron—and subsequently steel—has already been used at pilot plants by Midrex, an American company from which Stegra is purchasing the equipment. But Stegra will have to show that it will work in a far larger plant.

The world produces about 60,000 metric tons of steel every 15 minutes.

“We have multiple steps that haven’t really been proven at scale before,” says Maria Persson Gulda, Stegra’s chief technology officer. These steps include building one of the world’s largest electrolyzers. 

Beyond the unknowns of scaling up a new technology, Stegra also faces serious business challenges. The steel industry is a low-margin, intensely competitive sector in which companies win customers largely on price.

The startup, formed in 2020, has raised close to $7 billion in financing and expects to begin operations in 2026 at its plant in Boden.
STEGRA

Once operations begin, Stegra calculates, it can come close to producing steel at the same cost as the conventional product, largely thanks to its access to cheap electricity. But it plans to charge 20% to 30% more to cover the €4.5 billion it will take to build the plant. Gulda says the company has already sold contracts for 1.2 million metric tons to be produced in the next five to seven years. And its most recent customers—such as car manufacturers seeking to reduce their carbon emissions and market their products as green—have agreed to pay the 30% premium. 

Now the question is: Can Stegra deliver? 

The secret of hydrogen

To make steel—an alloy of iron and carbon, with a few other elements thrown in as needed—you first need to get the oxygen out of the iron ore dug from the ground. That leaves you with the purified metal.

The most common steelmaking process starts in blast furnaces, where the ore is mixed with a carbon-rich coal derivative called coke and heated. The carbon reacts with the oxygen in the ore to produce carbon dioxide; the metal left behind then enters another type of furnace, where more oxygen is forced into it under high heat and pressure. The gas reacts with remaining impurities to produce various oxides, which are then removed—leaving steel behind.

The second conventional method, which is used to make a much smaller share of the world’s steel, is a process called direct reduction. This usually employs natural gas, which is separated into hydrogen and carbon monoxide. Both gases react with the oxygen to pull it out of the iron ore, creating carbon dioxide and water as by-products. 

The iron that remains is melted in an electric arc furnace and further processed to remove impurities and create steel. Overall, this method is about 40% lower in emissions than the blast furnace technique, but it still produces over a ton of carbon dioxide for every ton of steel.

But why not just use hydrogen instead of starting with natural gas? The only by-product would be water. And if, as Stegra plans to do, you use green hydrogen made using clean power, the result is a new and promising way of making steel that can theoretically produce close to zero emissions. 

Stegra’s process is very similar to the standard direct reduction technique, except that since it uses only hydrogen, it needs a higher temperature. It’s not the only possible way to make steel with a negligible carbon footprint, but it’s the only method on the verge of being used at an industrial scale. 

Premium marketing

Stegra has laid the foundations for its plant and is putting the roof and walls on its steel mill. The first equipment has been installed in the building where electric arc furnaces will melt the iron and churn out steel, and work is underway on the facility that will house a 700-megawatt electrolyzer, the largest in Europe.

To make hydrogen, purify iron, and produce 2.5 million metric tons of green steel annually, the plant will consume 10 terawatt-hours of electricity. This is a massive amount, on par with the annual usage of a small country such as Estonia. Though the costs of electricity in Stegra’s agreements are confidential, publicly available data suggest rates around €30 ($32) per megawatt-hour or more. (At that rate, 10 terawatt-hours would cost $320 million.) 
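The cost figure above is simple arithmetic. A quick sketch reproduces it, using the $32-per-megawatt-hour rate quoted above (which, as noted, is an estimate from public data, not a confirmed contract price):

```python
# Back-of-the-envelope annual electricity cost for the Boden plant,
# using the figures quoted in the article.
consumption_twh = 10       # annual consumption, terawatt-hours
rate_usd_per_mwh = 32      # assumed rate: roughly EUR 30 (USD 32) per MWh

mwh_per_twh = 1_000_000    # 1 TWh = 1,000,000 MWh
annual_cost_usd = consumption_twh * mwh_per_twh * rate_usd_per_mwh

print(f"${annual_cost_usd / 1e6:,.0f} million per year")  # → $320 million per year
```

At these rates, electricity alone would run to hundreds of millions of dollars a year—which is why access to cheap hydropower is central to Stegra’s economics.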


Many of the buyers of the premium green steel are in the automotive industry; they include Mercedes-Benz, Porsche, BMW, Volvo Group, and Scania, a Swedish company that makes trucks and buses. Six companies that make furniture, appliances, and construction material—including Ikea—have also signed up, as have five companies that buy steel and distribute it to many different manufacturers.

Some of these automakers—including Volvo, which will buy from Stegra and rival SSAB—are marketing cars made with the green steel as “fossil-free.” And since steel accounts for only a small share of a vehicle’s total cost, paying more for it adds only a little to the price of a car or truck—perhaps a couple of hundred dollars or less, according to some estimates. Many companies have also set internal targets to reduce emissions, and buying green steel can get them closer to those goals.

Stegra’s business model is made possible in part by the unique economic conditions within the European Union. In December 2022, the European Parliament approved a tariff on imported carbon-intensive products such as steel, known as the Carbon Border Adjustment Mechanism (CBAM). As of 2024, this law requires those who import iron, steel, and other commodities to report the materials’ associated carbon emissions.

Starting in 2026, companies will have to begin paying fees designed to be proportional to the materials’ carbon footprint. Some companies are already betting that those fees will be enough to make Stegra’s 30% premium worthwhile.

Crane hoisting an I-beam next to a steel building frame.
STEGRA

Though the law could incentivize decarbonization within the EU and for those importing steel into Europe, green steelmakers will probably also need subsidies to defray the costs of scaling up, says Charlotte Unger, a researcher at the Research Institute for Sustainability in Potsdam, Germany. In Stegra’s case, it will receive €265 million from the European Commission to help build its plant; it was also granted €250 million from the European Union’s Innovation Fund.  

Meanwhile, Stegra is working to reduce costs and beef up revenues. Olof Hernell, the chief digital officer, says the company has invested heavily in digital products to improve efficiency. For example, a semi-automated system will be used to increase or decrease usage of electricity according to its fluctuating price on the grid.

Stegra realized there was no sophisticated software for keeping track of the emissions that the company is producing at every step of the steelmaking process. So it is making its own carbon accounting software, which it will soon sell as part of a new spinoff company. This type of accounting is critical to Stegra, Hernell says, since “we ask for a pretty significant premium, and that premium lives only within the promise of a low carbon footprint.”

Not for everyone

As long as CBAM stays in place, Stegra believes, there will be more than enough demand for its green steel, especially if other carbon pricing initiatives come into force. The company’s optimism is boosted by the fact that it expects to be the first to market and anticipates costs coming down over time. But for green steel to affect the market more broadly, or stay viable once several companies begin making significant quantities of it, its manufacturing costs will eventually have to be competitive with those of conventional steel.

Stegra has sold contracts for 1.2 million metric tons of steel to be produced in the next five to seven years.

Even if Stegra has a promising outlook in Europe, its hydrogen-based steelmaking scheme is unlikely to make economic sense in many other places in the world—at least in the near future. There are very few regions with such a large amount of clean electricity and easy access to the grid. What’s more, northern Sweden is also rich in high-quality ore that is easy to process using the hydrogen direct reduction method, says Chris Pistorius, a metallurgical engineer and co-director of the Center for Iron and Steelmaking Research at Carnegie Mellon University.

Green steel can be made from lower-grade ore, says Pistorius, “but it does have the negative effects of higher electricity consumption, hence slower processing.”

Given the EU incentives, other hydrogen-based steel plants are in the works in Sweden and elsewhere in Europe. Hybrit, a green steel technology developed by SSAB, the mining company LKAB, and the energy producer Vattenfall, uses a process similar to Stegra’s. LKAB hopes to finish a demonstration plant by 2028 in Gällivare, also in northern Sweden. However, progress has been delayed by challenges in getting the necessary environmental permit.

Meanwhile, a company called Boston Metal is working to commercialize a different technique to break the bonds in iron oxide by running a current through a mixture of iron ore and an electrolyte, creating extremely high heat. This electrochemical process yields a purified iron metal that can be turned into steel. The technology hasn’t been proved at scale yet, but Boston Metal hopes to license its green steel process in 2026. 

Understandably, these new technologies will cost more at first, and consumers or governments will have to foot the bill, says Jessica Allen, an expert on green steel production at the University of Newcastle in Australia. 

In Stegra’s case, both seem willing to do so. But it will be more difficult outside the EU. What’s more, producing enough green steel to make a large dent in the sector’s emissions will likely require a portfolio of different techniques to succeed. 

Still, as the first to market, Stegra is playing a vital role, Allen says, and its performance will color perceptions of green steel for years to come. “Being willing to take a risk and actually build … that’s exactly what we need,” she adds. “We need more companies like this.”

For now, Stegra’s plant—rising from the boreal forests of northern Sweden—represents the industry’s leading effort. When it begins operations in 2026, that plant will be the first demonstration that steel can be made at an industrial scale without releasing large amounts of carbon dioxide—and, just as important, that customers are willing to pay for it. 

Douglas Main is a journalist and former senior editor and writer at National Geographic.

This international surveillance project aims to protect wheat from deadly diseases

When Dave Hodson walked through wheat fields in Ethiopia in 2010, it seemed as if everything had been painted yellow. A rust fungus was in the process of infecting about one-third of the country’s wheat, and winds had carried its spores far and wide, coating everything in their path. “The fields were completely yellow. You’d walk through them and your clothes were just bright yellow,” he says.

Hodson, who was then at the UN’s Food and Agriculture Organization in Rome, had flown down to Ethiopia with colleagues to investigate the epidemic. But there was little that could be done: Though the authorities had some fungicides, by the time they realized what was happening, it was too late. Ethiopia, the biggest wheat-producing nation in sub-Saharan Africa, lost between 15% and 20% of its harvest that year. “Talking with farmers—they were just losing everything,” Hodson told MIT Technology Review. “And it’s just like, ‘Well, we should have been able to do more to help you.’”

Hodson, now a principal scientist at the international nonprofit CIMMYT, has since been working with colleagues on a plan to stop such losses in the future. Together with Maricelis Acevedo at Cornell University’s College of Agriculture and Life Sciences, he co-leads the Wheat Disease Early Warning Advisory System, known as Wheat DEWAS, an international initiative that brings together scientists from 23 organizations around the world.

The idea is to scale up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. In doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories.

The effort could not be more timely. For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe. That’s due partly to climate change, because warmer conditions are more conducive to infection. Vulnerable regions including South Asia and Africa are also under threat.

Wheat DEWAS officially launched in 2023 with $7.3 million from the Bill & Melinda Gates Foundation (now called the Gates Foundation) and the UK’s Foreign, Commonwealth & Development Office. But an earlier incarnation of the system averted disaster in 2021, when another epidemic threatened Ethiopia’s wheat fields. Early field surveys by a local agricultural research team had picked up a new strain of yellow rust. The weather conditions were “super optimal” for the development of rust in the field, Hodson says, but the team’s early warning system meant that action was taken in good time—the government deployed fungicides quickly, and the farmers had a bumper wheat harvest. 

Wheat DEWAS works by scaling up and coordinating efforts and technologies across continents. At the ground level is surveillance—teams of local pathologists who survey wheat fields, inputting data on smartphones. They gather information on which wheat varieties are growing and take photos and samples. The project is now developing a couple of apps, one of which will use AI to help identify diseases by analyzing photos.

Another arm of the system, based at the John Innes Centre in the UK, focuses on diagnostics. The group there, working with researchers at CIMMYT and the Ethiopian Institute of Agricultural Research, developed MARPLE (a loose acronym for “mobile and real-time plant disease”), which Hodson describes as a mini gene sequencer about the size of a cell phone. It can test wheat samples for the rust fungus locally and provide a result within two to three days, whereas conventional diagnostics need months.

“The beauty of it is you could pick up something new very quickly,” says Hodson. “And it’s often the new things that give the biggest problems.”

The data from the field is sent directly to a team at the Global Rust Reference Center at Aarhus University in Denmark, which combines everything into one huge database. Enabling nations and globally scattered groups to share an infrastructure is key, says Aarhus’s Jens Grønbech Hansen, who leads the data management package for Wheat DEWAS. Without collaborating and harmonizing data, he says, “technology won’t solve these problems all on its own.”

“We build up trust so that by combining the data, we can benefit from a bigger picture and see patterns we couldn’t see when it was all fragmented,” Hansen says.

Their automated system sends data to Chris Gilligan, who leads the modeling arm of Wheat DEWAS at the University of Cambridge. With his team, he works with the UK’s Met Office, using their supercomputer to model how the fungal spores at a given site might spread under specific weather conditions and what the risk is of their landing, germinating, and infecting other areas. The team drew on previous models, including work on the ash plume from the eruption of the Icelandic volcano Eyjafjallajökull, which caused havoc in Europe in 2010.

Each day, a downloadable bulletin is posted online with a seven-day forecast. Additional alerts or advisories are also sent out. Information is then disseminated from governments or national authorities to farmers. For example, in Ethiopia, immediate risks are conveyed to farmers by SMS text messaging. Crucially, if there’s likely to be a problem, the alerts offer time to respond. “You’ve got, in effect, three weeks’ grace,” says Gilligan. That is, growers may know of the risk up to a week ahead of time, enabling them to take action as the spores are landing and causing infections.

The project is currently focused on eight countries: Ethiopia, Kenya, Tanzania, and Zambia in Africa and Nepal, Pakistan, Bangladesh, and Bhutan in Asia. But the researchers hope they will get additional funding to carry the project on beyond 2026 and, ideally, to extend it in a variety of ways, including the addition of more countries. 

Gilligan says the technology may be transferable to other wheat diseases—and to other crops, like rice, that are also affected by weather-dispersed pathogens.

Dagmar Hanold, a plant pathologist at the University of Adelaide who is not involved in the project, describes it as “vital work for global agriculture.”

“Cereals, including wheat, are vital staples for people and animals worldwide,” Hanold says. Although programs have been set up to breed more pathogen-resistant crops, new pathogen strains emerge frequently. And if these combine and swap genes, she warns, they could become “even more aggressive.”

Shaoni Bhattacharya is a freelance writer and editor based in London.

These stunning images trace ships’ routes as they move

As we run, drive, bike, and fly, we leave behind telltale marks of our movements on Earth—if you know where to look. Physical tracks, thermal signatures, and chemical traces can reveal where we’ve been. But another type of trail we leave comes from the radio signals emitted by the cars, planes, trains, and boats we use.

On airplanes, technology called ADS-B (Automatic Dependent Surveillance–Broadcast) provides real-time location, identification, speed, and orientation data. For ships at sea, that function is performed by the AIS (Automatic Identification System).

Operating at 161.975 and 162.025 megahertz, AIS transmitters broadcast a ship’s identification number, name, call sign, length and beam, type, and antenna location every six minutes. Ship location, position time stamp, and direction are transmitted more frequently. The primary purpose of AIS is maritime safety—it helps prevent collisions, assists in rescues, and provides insight into the impact of ship traffic on marine life. US Coast Guard regulations say that generally, private boats under 65 feet in length are not required to use AIS, but most commercial vessels are. Unlike ADS-B in planes, AIS can be turned off only in rare circumstances. 

A variety of sectors use AIS data for many different applications, including monitoring ship traffic to avoid disruption of undersea internet cables, identifying whale strikes, and studying the footprint of underwater noise.

Using the US National Oceanic and Atmospheric Administration’s Marine Cadastre tool, you can download 16 years of detailed daily ship movements, as well as “transit count” maps generated from a year’s worth of data showing each ship’s accumulated paths. The data is collected entirely from ground-based stations along the US coasts.

I downloaded all of 2023’s transit count maps and loaded them up in geographic information system software called QGIS to visualize this year of marine traffic.

The maps are abstract and electric. With landmasses removed, the ship traces resemble long-exposure photos of sparklers, high-energy particle collisions, or strands of fiber-optic wire.

Victoria, British Columbia, and Seattle
Lake Huron
Savannah, Georgia
Louisiana
DATA: NOAA; MAPS: JON KEEGAN / BEAUTIFUL PUBLIC DATA

Zooming in on these maps, you might see strange geometric patterns of perfect circles, or lines in a grid. Some of these are fishing grounds, others are scientific surveys mapping the seafloor, and others represent boats going to and from offshore oil rigs, especially off Louisiana’s gulf coast.

Hiding in plain sight

Having a global, near-real-time system for tracking the precise movements of all ships at sea sounds like a great innovation—unless you’re trying to keep your ships’ movements and cargoes secret.

In 2023, Bloomberg investigated how Russia evaded sanctions on its oil exports after the invasion of Ukraine by “spoofing”—transmitting fake AIS data—to mislead observers. Tracking a fleet of rusting ships of questionable seaworthiness, reporters compared AIS data with what they actually saw on the sea—and discovered that the ships weren’t where the data said they were. 

Monitoring the fishing industry

Clusters of fishing vessels gravitating toward known fishing grounds create some of the most interesting patterns on the maps. 

Global Fishing Watch is an international nonprofit that uses AIS to monitor the fishing industry, seeking to protect marine life from overfishing. But it says that only 2% of fishing vessels use AIS transmitters. 

The organization, which is backed by Google, the ocean conservation group Oceana, and the satellite imagery company SkyTruth, combines AIS data with satellite imagery and uses machine learning to classify the types of fishing technology being used. 

In a press release announcing the creation of Global Fishing Watch, John Amos, the president and founder of SkyTruth, said: “So much of what happens out on the high seas is invisible, and that has been a huge barrier to understanding and showing the world what’s at stake for the ocean.” 

A version of this story appeared in Beautiful Public Data (beautifulpublicdata.com), a newsletter that curates visually interesting datasets collected by government agencies.

The humans behind the robots

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Here’s a question. Imagine that, for $15,000, you could purchase a robot to pitch in with all the mundane tasks in your household. The catch (aside from the price tag) is that for 80% of those tasks, the robot’s AI training isn’t good enough for it to act on its own. Instead, it’s aided by a remote assistant working from the Philippines to help it navigate your home and clear your table or put away groceries. Would you want one?

That’s the question at the center of my story for our magazine, published online today, on whether we will trust humanoid robots enough to welcome them into our most private spaces, particularly if they’re part of an asymmetric labor arrangement in which workers in low-wage countries perform physical tasks for us in our homes through robot interfaces. In the piece, I wrote about one robotics company called Prosper and its massive effort—bringing in former Pixar designers and professional butlers—to design a trustworthy household robot named Alfie. It’s quite a ride. Read the story here.

There’s one larger question that the story raises, though, about just how profound a shift in labor dynamics robotics could bring in the coming years. 

For decades, robots have found success on assembly lines and in other somewhat predictable environments. Then, in the last couple of years, robots started being able to learn tasks more quickly thanks to AI, and that has broadened their applications to tasks in more chaotic settings, like picking orders in warehouses. But a growing number of well-funded companies are pushing for an even more monumental shift. 

Prosper and others are betting that they don’t have to build a perfect robot that can do everything on its own. Instead, they can build one that’s pretty good, but receives help from remote operators anywhere in the world. If that works well enough, they’re hoping to bring robots into jobs that most of us would have guessed couldn’t be automated: the work of hotel housekeepers, care providers in hospitals, or domestic help. “Almost any indoor physical labor” is on the table, Prosper’s founder and CEO, Shariq Hashme, told me. 

Until now, we’ve mostly thought about automation and outsourcing as two separate forces that can affect the labor market. Jobs might be outsourced overseas or lost to automation, but not both. A job that couldn’t be sent offshore and could not yet be fully automated by machines, like cleaning a hotel room, wasn’t going anywhere. Now, advancements in robotics are promising that employers can outsource such a job to low-wage countries without needing the technology to fully automate it. 

It’s a tall order, to be clear. Robots, as advanced as they’ve gotten, may find it difficult to move around complex environments like hotels and hospitals, even with assistance. That will take years to change. However, robots will only get more nimble, as will the systems that enable them to be controlled from halfway around the world. Eventually, the bets made by these companies may pay off.

What would that mean? First, the labor movement’s battle with AI—which this year has focused its attention on automation at ports and generative AI’s theft of artists’ work—will have a whole new front. It won’t just be dock workers, delivery drivers, and actors seeking contracts to protect their jobs from automation—it will be hospitality and domestic workers too, along with many others. 

Second, our expectations of privacy would radically shift. People buying those hypothetical household robots would have to be comfortable with the idea that someone they have never met is seeing their dirty laundry—literally and figuratively. 

Some of those changes might happen sooner rather than later. For robots to learn how to navigate places effectively, they need training data, and this year has already seen a race to collect new data sets to help them learn. To achieve their ambitions for teleoperated robots, companies will expand their search for training data to hospitals, workplaces, hotels, and more. 


Now read the rest of The Algorithm

Deeper Learning

This is where the data to build AI comes from

AI developers often don’t really know or share much about the sources of the data they are using, and the Data Provenance Initiative, a group of over 50 researchers from both academia and industry, wanted to fix that. They dug into 4,000 public data sets spanning over 600 languages, 67 countries, and three decades to understand what’s feeding today’s top AI models, and how that will affect the rest of us. 

Why it matters: AI is being incorporated into everything, and what goes into the AI models determines what comes out. However, the team found that AI’s data practices risk concentrating power overwhelmingly in the hands of a few dominant technology companies, a shift from how AI models were being trained just a decade ago. Over 90% of the data sets that the researchers analyzed came from Europe and North America, and over 70% of data for both speech and image data sets comes from YouTube. This concentration means that AI models are unlikely to “capture all the nuances of humanity and all the ways that we exist,” says Sara Hooker, a researcher involved in the project. Read more from Melissa Heikkilä.

Bits and Bytes

In the shadows of Arizona’s data center boom, thousands live without power

As new research shows that AI’s emissions have soared, Arizona is expanding plans for AI data centers while rejecting plans to finally provide electricity to parts of the Navajo Nation’s land. (Washington Post)

AI is changing how we study bird migration

After decades of frustration, machine-learning tools are unlocking a treasure trove of acoustic data for ecologists. (MIT Technology Review)

OpenAI unveils a more advanced reasoning model in race with Google

The new o3 model, unveiled during a livestreamed event on Friday, spends more time computing an answer before responding to user queries, with the goal of solving more complex multi-step problems. (Bloomberg)

How your car might be making roads safer

Researchers say data from long-haul trucks and General Motors cars is critical for addressing traffic congestion and road safety. Data privacy experts have concerns. (New York Times)

Why childhood vaccines are a public health success story

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Later today, around 10 minutes after this email lands in your inbox, I’ll be holding my four-year-old daughter tight as she receives her booster dose of the MMR vaccine. This shot should protect her from a trio of nasty infections—infections that can lead to meningitis, blindness, and hearing loss. I feel lucky to be offered it.

This year marks the 50-year anniversary of an ambitious global childhood vaccination program. The Expanded Programme on Immunization was launched by the World Health Organization in 1974 with the goal of getting lifesaving vaccines to all the children on the planet.

Vaccines are estimated to have averted 154 million deaths since the launch of the EPI. That number includes 146 million children under the age of five. Vaccination efforts are estimated to have reduced infant mortality by 40%, and to have contributed an extra 10 billion years of healthy life among the global population.

Childhood vaccination is a success story. But concerns around vaccines endure. Especially, it seems, among the individuals Donald Trump has picked as his choices to lead US health agencies from January. This week, let’s take a look at their claims, and where the evidence really stands on childhood vaccines.

WHO, along with health agencies around the world, recommends a suite of vaccinations for babies and young children. Some, such as the BCG vaccine, which offers some protection against tuberculosis, are recommended from birth. Others, like the combined vaccine against diphtheria, tetanus, and pertussis (whooping cough), often administered as a single shot, are introduced at eight weeks. Other vaccinations and booster doses follow.

The idea is to protect babies as soon as possible, says Kaja Abbas of the London School of Hygiene & Tropical Medicine in the UK and Nagasaki University in Japan.

The full vaccine schedule will depend on what infections pose the greatest risks and will vary by country. In the US, the recommended schedule is determined by the Centers for Disease Control and Prevention, and individual states can opt to set vaccine mandates or allow various exemptions.

Some scientists are concerned about how these rules might change in January, when Donald Trump makes his return to the White House. Trump has already listed his picks for top government officials, including those meant to lead the country’s health agencies. These individuals must be confirmed by the Senate before they can assume these roles, but it appears that Trump intends to surround himself with vaccine skeptics.

For starters, Trump has selected Robert F. Kennedy Jr. as his pick to lead the Department of Health and Human Services. Kennedy, who has long been a prominent anti-vaxxer, has a track record of spreading false information about vaccines.

In 2005, he published an error-laden article in Salon and Rolling Stone linking thimerosal—a mercury-based preservative that was previously used in vaccines but phased out of most US childhood vaccines by 2001—to neurological disorders in children. (That article was eventually deleted in 2011. “I regret we didn’t move on this more quickly, as evidence continued to emerge debunking the vaccines and autism link,” wrote Joan Walsh, Salon’s editor at large at the time.)

Kennedy hasn’t let up since. In 2015, he made outrageous comments about childhood vaccinations at a screening of a film that linked thimerosal to autism. “They get the shot, that night they have a fever of a hundred and three, they go to sleep, and three months later their brain is gone,” Kennedy said, as reported by the Sacramento Bee. “This is a holocaust, what this is doing to our country.”

Aaron Siri, the lawyer who has been helping Kennedy pick health officials for the upcoming Trump administration, has petitioned the government to pause the distribution of multiple vaccines and to revoke approval of the polio vaccine entirely. And Dave Weldon, Trump’s pick to direct the CDC, also has a history of vaccine skepticism. He has championed the disproven link between thimerosal and autism.

These arguments aren’t new. The MMR vaccine in particular has been subject to debate, controversy, and conspiracy theories for decades. All the way back in 1998, a British doctor, Andrew Wakefield, published a paper suggesting a link between the vaccine and autism in children.

The study has since been debunked—multiple times over—and Wakefield was found to have unethically subjected children to invasive and unnecessary procedures. The paper was retracted 12 years after it was published, and the UK’s General Medical Council found Wakefield guilty of serious professional misconduct. He was struck off the medical register and is no longer allowed to practice medicine in the UK. (He continues to peddle false information, though, and directed the 2016 film Vaxxed, which Weldon appeared in.)

So it’s remarkable that his “study” still seems to be affecting public opinion. A recent Pew Research Center survey suggests that four in 10 US adults worry that “not all vaccines are necessary,” and while most Americans think the benefits outweigh any risks, some are still concerned about side effects. Views among Republicans in particular seem to have shifted over the years. In 2019, 82% supported school-based vaccine requirements. That figure dropped to 70% in 2023.

The problem is that we need more than 70% of children to be vaccinated to reach “herd immunity”—the level needed to protect communities. For a super-contagious infection like measles, 95% of the population needs to be vaccinated, according to WHO. “If [coverage drops to] 80%, we should expect outbreaks,” says Abbas.
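Thresholds like the 95% figure for measles come from a standard back-of-envelope formula: if each case would otherwise infect R0 other people, an outbreak fizzles once the immune fraction exceeds 1 − 1/R0. A minimal sketch (the R0 values here are illustrative estimates, not figures from this article):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so each case
    infects fewer than one susceptible person on average: 1 - 1/R0."""
    return 1 - 1 / r0

# Measles is often estimated at R0 of roughly 12-18; taking 15 as a midpoint:
measles = herd_immunity_threshold(15)   # ~0.93, in line with WHO's ~95% target
# A less contagious infection with R0 = 2 needs only half the population immune:
flu_like = herd_immunity_threshold(2)   # 0.5
```

The formula also shows why small coverage drops matter so much for measles: at 80–83% coverage, the immune fraction sits well below the ~93–95% threshold, which is consistent with the outbreaks described below.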

And that’s exactly what is happening. In 2023, only 83% of children got their first dose of a measles vaccine through routine health services. Nearly 35 million children are thought to have either partial protection from the disease or none at all. And over the last five years, there have been measles outbreaks in 103 countries.

Polio vaccines—the ones whose approval Siri sought to revoke—have also played a vital role in protecting children, in this case from a devastating infection that can cause paralysis. “People were so afraid of polio in the ’30s, ’40s, and ’50s here in the United States,” says William Moss, an epidemiologist at Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “When the trial results of [the first] vaccine were announced in the United States, people were dancing in the streets.”

That vaccine was licensed in the US in 1955. By 1994, polio was considered eliminated in North and South America. Today, wild forms of the virus have been eradicated in all but two countries.

But the polio vaccine story is not straightforward. There are two types of polio vaccine: an injected type that includes a “dead” form of the virus, and an oral version that includes “live” virus. This virus can be shed in feces, and in places with poor sanitation, it can spread. It can also undergo genetic changes to create a form of the virus that can cause paralysis. Although this is rare, it does happen—and today there are more cases of vaccine-derived polio than wild-type polio.

It is worth noting that since 2000, more than 10 billion doses of the oral polio vaccine have been administered to almost 3 billion children. It is estimated that more than 13 million cases of polio have been prevented through these efforts. But there have been just under 760 cases of vaccine-derived polio.

We could prevent these cases by switching to the injected vaccine, which wealthy countries have already done. But that’s not easy in countries with fewer resources and those trying to reach children in remote rural areas or war zones.

Even the MMR vaccine is not entirely risk-free. Some people will experience minor side effects, and severe allergic reactions, while rare, can occur. And neither vaccine offers 100% protection against disease. No vaccine does. “Even if you vaccinate 100% [of the population], I don’t think we’ll be able to attain herd immunity for polio,” says Abbas. It’s important to acknowledge these limitations.

While there are some small risks, though, they are far outweighed by the millions of lives being saved. “[People] often underestimate the risk of the disease and overestimate the risk of the vaccine,” says Moss.

In some ways, vaccines have become a victim of their own success. “Most of today’s parents fortunately have never seen the tragedy caused by vaccine-preventable diseases such as measles encephalitis, congenital rubella syndrome, and individuals crippled by polio,” says Kimberly Thompson, president of Kid Risk, a nonprofit that conducts research on health risks to children. “With some individuals benefiting from the propagation of scary messages about vaccines and the proliferation of social media providing reinforcement, it’s no surprise that fears may endure.”

“But most Americans recognize the benefits of vaccines and choose to get their children immunized,” she adds. Now, that is a sentiment I can relate to.


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

A couple of years ago, the polio virus was detected in wastewater in London, where I live. I immediately got my daughter (who was only one year old then!) vaccinated. 

Measles outbreaks continue to spring up in places where vaccination rates drop. Researchers hope that searching for traces of the virus in wastewater could help them develop early warning systems. 

Last year, the researchers whose work paved the way for the development of mRNA vaccines were awarded the Nobel Prize. Now, scientists are hoping to use the same technology to treat and vaccinate against a host of diseases.

Most vaccines work by priming the immune system to respond to a pathogen. Scientists are also working on “inverse vaccines” that teach the immune system to stand down. They might help treat autoimmune disorders.

From around the web

A person in the US has become the country’s first severe case of bird flu infection, the US Centers for Disease Control and Prevention shared on December 18. The case was confirmed on December 13. The person was exposed to sick and dead birds in backyard flocks in Louisiana. (CDC)

Gavin Newsom, the governor of California, declared a state of emergency as the bird flu virus moved from the Central Valley to Southern California dairy herds. Since August, 645 herds have been reported to be infected with the virus. (LA Times)

Pharmacy benefit managers control access to prescription drugs for most Americans. These middlemen were paid billions of dollars by drug companies to allow the free flow of opioids during the US’s deadly addiction epidemic, an investigation has revealed. (New York Times)

Weight-loss drugs like Ozempic have emerged as blockbuster medicines over the past couple of years. We’re learning that they may have benefits beyond weight loss. Might they also protect organ function or treat kidney disease? (Nature Medicine)

Doctors and scientists have been attempting head transplants on animals for decades. Can they do it in people? Watch this delightful cartoon to learn more about the early head transplant attempts. (Aeon)

Drugs like Ozempic now make up 5% of prescriptions in the US

US doctors write billions of prescriptions each year. During 2024, though, one type of drug stood out—“wonder drugs” known as GLP-1 agonists.

As of September, one of every 20 prescriptions written for adults was for one of these drugs, according to the health data company Truveta.

The drugs, which include Wegovy, Mounjaro, and Victoza, are used to treat diabetes, since they stimulate the release of insulin. But their popularity exploded after scientists determined the drugs tell your brain you’re not hungry. Without those hunger cues, people find they can lose 10% of their body weight, or even more.

During 2024, the drugs’ popularity hit an all-time high, according to Tricia Rodriguez, a principal applied scientist at Truveta, which studies medical records of 120 million Americans, or about a third of the population.

“Among adults, 5.4% of all prescriptions in September 2024 were for GLP-1s,” Rodriguez says. That is up from 3.5% a year earlier, in 2023, and 1% at the start of 2021.

According to Truveta’s data, people who get prescriptions for these drugs are younger, whiter, and more likely to be female. In fact, women are twice as likely as men to get a prescription.

Yet not everyone who’s prescribed the drugs ends up taking them. In fact, Rodriguez says, half the new prescriptions for obesity are going unfilled.

That’s very unusual, she says, and could be due to shortages or sticker shock over the cost of the treatment. Many insurers don’t cover weight-loss drugs, and the out-of-pocket price can be $1,300 a month, according to USA Today.

“For most medications, prescribing rates and dispensing rates are pretty much identical,” says Rodriguez. “But for GLP-1s, we see this gap, which is really unique. It’s suggestive that people are really interested in getting these medications, but for whatever reason, they are not always able to.”

It also means the number of people taking these drugs could go higher—maybe much higher—if insurers would pay. “I don’t think that we are at the saturation point, or necessarily nearing the saturation point,” says Rodriguez, noting that around 70% of Americans are overweight or obese.

Use of the drugs may also grow dramatically if new applications are found. Companies are already exploring whether they can treat addiction, or even Alzheimer’s.

Many of the clues about those potential uses are coming directly out of people’s medical records. Because so many people are on the drugs, researchers like Rodriguez have a gold mine to sift through for signs of how the drugs are affecting other health problems.

“Because we have so many patients that are on these medications, you’re certainly likely to have a good number that also have all of these other conditions,” she says. “One of the things we’re excited about is: How can real-world data help accelerate how quickly we can understand those?”

Here are some of the new uses of GLP-1 drugs that are being explored, based on hints from real-world patient records.

Alzheimer’s disease

This year, researchers poking through records of a million people found that taking semaglutide (sold as Wegovy and Ozempic) was associated with a 40% to 70% lower chance of an Alzheimer’s diagnosis.

It’s still a guess why the drugs might be helping (or whether they really do), but large international studies are underway to follow up on the lead. Doctors are recruiting people with early Alzheimer’s in more than 30 countries who will take either a placebo or semaglutide for two years. Then we’ll see how much their dementia has progressed.

Addiction

The anecdotes are everywhere: A person on a weight-loss drug finds hunger isn’t the only craving that seems to stop.

Those are the types of clues Eli Lilly’s CEO, David Ricks, says his company will pursue next year, testing whether its GLP-1 drug, tirzepatide (called Mounjaro for diabetes treatment, and Zepbound for weight loss), could help with addiction to alcohol, nicotine, and “other things we don’t think about [as being] connected to weight.”

In comments he made in December, Ricks said the drugs might be “anti-hedonics”—meaning they counteract our hedonistic pursuit of pleasure, be it from food, alcohol, or drugs. A study this year mining digital health records found that opioid addicts taking the drugs were about half as likely to have had an overdose.

Sleep apnea

This idea goes back a ways, including to a 2015 case study of a 260-pound man with diabetes and sleep apnea. When he went on the drug liraglutide, doctors noticed that his sleeping improved.

In sleep apnea, a person gasps for air at night—it’s annoying and, with time, causes health problems. This year, Eli Lilly published a study in the New England Journal of Medicine on its drug tirzepatide, finding that it caused a 50% decrease in breathing interruptions in overweight patients with sleep apnea.

Longevity

This year, the US Food and Drug Administration approved Wegovy as a cardiovascular medicine, after researchers showed the drugs could reduce heart attack and stroke in overweight people.

But that wasn’t all. The study, involving 17,000 people, found that the drug reduced the overall chance someone would die for any reason (known as “all-cause mortality”) by 19%.

That now has aging researchers paying attention. This year they named Wegovy, and drugs like it, among their top four candidates for a general life-extension drug.

AI is changing how we study bird migration

A small songbird soars above Ithaca, New York, on a September night. He is one of 4 billion birds, a great annual river of feathered migration across North America. Midair, he lets out what ornithologists call a nocturnal flight call to communicate with his flock. It’s the briefest of signals, barely 50 milliseconds long, emitted in the woods in the middle of the night. But humans have caught it nevertheless, with a microphone topped by a focusing funnel. Moments later, software called BirdVoxDetect, the result of a collaboration between New York University, the Cornell Lab of Ornithology, and École Centrale de Nantes, identifies the bird and classifies it to the species level.

Biologists like Cornell’s Andrew Farnsworth had long dreamed of snooping on birds this way. In a warming world increasingly full of human infrastructure that can be deadly to them, like glass skyscrapers and power lines, migratory birds are facing many existential threats. Scientists rely on a combination of methods to track the timing and location of their migrations, but each has shortcomings. Doppler radar, with the weather filtered out, can detect the total biomass of birds in the air, but it can’t break that total down by species. GPS tags on individual birds and careful observations by citizen-scientist birders help fill in that gap, but tagging birds at scale is an expensive and invasive proposition. And there’s another key problem: Most birds migrate at night, when it’s more difficult to identify them visually and while most birders are in bed. For over a century, acoustic monitoring has hovered tantalizingly out of reach as a method that would solve ornithologists’ woes.

In the late 1800s, scientists realized that migratory birds made species-specific nocturnal flight calls—“acoustic fingerprints.” When microphones became commercially available in the 1950s, scientists began recording birds at night. Farnsworth led some of this acoustic ecology research in the 1990s. But even then it was challenging to spot the short calls, some of which are at the edge of the frequency range humans can hear. Scientists ended up with thousands of tapes they had to scour in real time while looking at spectrograms that visualize audio. Though digital technology made recording easier, the “perpetual problem,” Farnsworth says, “was that it became increasingly easy to collect an enormous amount of audio data, but increasingly difficult to analyze even some of it.”

Then Farnsworth met Juan Pablo Bello, director of NYU’s Music and Audio Research Lab. Fresh off a project using machine learning to identify sources of urban noise pollution in New York City, Bello agreed to take on the problem of nocturnal flight calls. He put together a team including the French machine-listening expert Vincent Lostanlen, and in 2015, the BirdVox project was born to automate the process. “Everyone was like, ‘Eventually, when this nut is cracked, this is going to be a super-rich source of information,’” Farnsworth says. But in the beginning, Lostanlen recalls, “there was not even a hint that this was doable.” It seemed unimaginable that machine learning could approach the listening abilities of experts like Farnsworth.

“Andrew is our hero,” says Bello. “The whole thing that we want to imitate with computers is Andrew.”

They started by training BirdVoxDetect, a neural network, to ignore faults like low buzzes caused by rainwater damage to microphones. Then they trained the system to detect flight calls, which differ between (and even within) species and can easily be confused with the chirp of a car alarm or a spring peeper. The challenge, Lostanlen says, was similar to the one a smart speaker faces when listening for its unique “wake word,” except in this case the distance from the target noise to the microphone is far greater (which means much more background noise to compensate for). And, of course, the scientists couldn’t choose a unique sound like “Alexa” or “Hey Google” for their trigger. “For birds, we don’t really make that choice. Charles Darwin made that choice for us,” he jokes. Luckily, they had a lot of training data to work with—Farnsworth’s team had hand-annotated thousands of hours of recordings collected by the microphones in Ithaca.

With BirdVoxDetect trained to detect flight calls, another difficult task lay ahead: teaching it to classify the detected calls by species, which few expert birders can do by ear. To deal with uncertainty, and because training data doesn’t exist for every species, they decided on a hierarchical system. For example, for a given call, BirdVoxDetect might be able to identify the bird’s order and family, even if it’s not sure about the species—just as a birder might at least identify a call as that of a warbler, whether yellow-rumped or chestnut-sided. In training, the neural network was penalized less when it mixed up birds that were closer on the taxonomic tree.
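The idea of a taxonomy-aware penalty can be sketched in a few lines. This is an illustrative toy, not BirdVoxDetect’s actual code: the species list and the 0–3 penalty scale are hypothetical, chosen only to show how mistakes between close relatives cost less than mistakes across families.

```python
# Toy taxonomy mapping each species to its (family, order).
taxonomy = {
    "yellow-rumped warbler":  ("Parulidae", "Passeriformes"),
    "chestnut-sided warbler": ("Parulidae", "Passeriformes"),
    "swamp sparrow":          ("Passerellidae", "Passeriformes"),
}

def penalty(true_species: str, predicted_species: str) -> int:
    """Penalize by taxonomic distance:
    0 = correct species, 1 = same family, 2 = same order, 3 = further apart."""
    if true_species == predicted_species:
        return 0
    true_family, true_order = taxonomy[true_species]
    pred_family, pred_order = taxonomy[predicted_species]
    if true_family == pred_family:
        return 1
    if true_order == pred_order:
        return 2
    return 3

# Confusing one warbler for another costs less than
# confusing a warbler for a sparrow:
penalty("yellow-rumped warbler", "chestnut-sided warbler")  # smaller penalty
penalty("yellow-rumped warbler", "swamp sparrow")           # larger penalty
```

In a real network this distance would scale the training loss, so the model learns to at least land in the right family or order when the species itself is uncertain.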

Last August, capping off eight years of research, the team published a paper detailing BirdVoxDetect’s machine-learning algorithms. They also released the software as a free, open-source product for ornithologists to use and adapt. In a test on a full season of migration recordings totaling 6,671 hours, the neural network detected 233,124 flight calls. In a 2022 study in the Journal of Applied Ecology, the team that tested BirdVoxDetect found acoustic data as effective as radar for estimating total biomass.

BirdVoxDetect works on a subset of North American migratory songbirds. But through “few-shot” learning, it can be trained to detect other, similar birds with just a few training examples. It’s like learning a language similar to one you already speak, Bello says. With cheap microphones, the system could be expanded to places around the world without birders or Doppler radar, even in vastly different recording conditions. “If you go to a bioacoustics conference and you talk to a number of people, they all have different use cases,” says Lostanlen. The next step for bioacoustics, he says, is to create a foundation model, like the ones scientists are working on for natural-language processing and image and video analysis, that would be reconfigurable for any species—even beyond birds. That way, scientists won’t have to build a new BirdVoxDetect for every animal they want to study.

The BirdVox project is now complete, but scientists are already building on its algorithms and approach. Benjamin Van Doren, a migration biologist at the University of Illinois Urbana-Champaign who worked on BirdVox, is using Nighthawk, a new user-friendly neural network based on both BirdVoxDetect and the popular birdsong ID app Merlin, to study birds migrating over Chicago and elsewhere in North and South America. And Dan Mennill, who runs a bioacoustics lab at the University of Windsor, says he’s excited to try Nighthawk on flight calls his team currently hand-annotates after they’re recorded by microphones on the Canadian side of the Great Lakes. One weakness of acoustic monitoring is that unlike radar, a single microphone can’t detect the altitude of a bird overhead or the direction in which it is moving. Mennill’s lab is experimenting with an array of eight microphones that can triangulate to solve that problem. Sifting through recordings has been slow. But with Nighthawk, the analysis will speed dramatically.

With birds and other migratory animals under threat, Mennill says, BirdVoxDetect came at just the right time. Knowing exactly which birds are flying over in real time can help scientists keep tabs on how species are doing and where they’re going. That can inform practical conservation efforts like “Lights Out” initiatives that encourage skyscrapers to go dark at night to prevent bird collisions. “Bioacoustics is the future of migration research, and we’re really just getting to the stage where we have the right tools,” he says. “This ushers us into a new era.”

Christian Elliott is a science and environmental reporter based in Illinois.  

This is where the data to build AI comes from

AI is all about data. Reams and reams of data are needed to train algorithms to do what we want, and what goes into the AI models determines what comes out. But here’s the problem: AI developers and researchers don’t really know much about the sources of the data they are using. AI’s data collection practices are immature compared with the sophistication of AI model development. Massive data sets often lack clear information about what is in them and where it came from. 

The Data Provenance Initiative, a group of over 50 researchers from both academia and industry, wanted to fix that. They wanted to know, very simply: Where does the data to build AI come from? They audited nearly 4,000 public data sets spanning over 600 languages, 67 countries, and three decades. The data came from 800 unique sources and nearly 700 organizations. 

Their findings, shared exclusively with MIT Technology Review, show a worrying trend: AI’s data practices risk concentrating power overwhelmingly in the hands of a few dominant technology companies. 

In the early 2010s, data sets came from a variety of sources, says Shayne Longpre, a researcher at MIT who is part of the project.

The data came not just from encyclopedias and the web, but also from sources such as parliamentary transcripts, earnings calls, and weather reports. Back then, AI data sets were specifically curated and collected from different sources to suit individual tasks, Longpre says.

Then transformers, the architecture underpinning language models, were invented in 2017, and the AI sector started seeing performance get better the bigger the models and data sets were. Today, most AI data sets are built by indiscriminately hoovering material from the internet. Since 2018, the web has been the dominant source for data sets used in all media, such as audio, images, and video, and a gap between scraped data and more curated data sets has emerged and widened.

“In foundation model development, nothing seems to matter more for the capabilities than the scale and heterogeneity of the data and the web,” says Longpre. The need for scale has also boosted the use of synthetic data massively.

The past few years have also seen the rise of multimodal generative AI models, which can generate videos and images. Like large language models, they need as much data as possible, and the best source for that has become YouTube. 

For video models, as you can see in this chart, over 70% of data for both speech and image data sets comes from one source.

This could be a boon for Alphabet, Google’s parent company, which owns YouTube. Whereas text is distributed across the web and controlled by many different websites and platforms, video data is extremely concentrated in one platform.

“It gives a huge concentration of power over a lot of the most important data on the web to one company,” says Longpre. 

And because Google is also developing its own AI models, its massive advantage also raises questions about how the company will make this data available for competitors, says Sarah Myers West, the co–executive director at the AI Now Institute.

“It’s important to think about data not as though it’s sort of this naturally occurring resource, but it’s something that is created through particular processes,” says Myers West.

“If the data sets on which most of the AI that we’re interacting with reflect the intentions and the design of big, profit-motivated corporations—that’s reshaping the infrastructures of our world in ways that reflect the interests of those big corporations,” she says.

This monoculture also raises questions about how accurately the human experience is portrayed in the data set and what kinds of models we are building, says Sara Hooker, the vice president of research at the technology company Cohere, who is also part of the Data Provenance Initiative.

People upload videos to YouTube with a particular audience in mind, and the way people act in those videos is often intended for very specific effect. “Does [the data] capture all the nuances of humanity and all the ways that we exist?” says Hooker. 

Hidden restrictions

AI companies don’t usually share what data they used to train their models. One reason is that they want to protect their competitive edge. The other is that because of the complicated and opaque way data sets are bundled, packaged, and distributed, they likely don’t even know where all the data came from.

They also probably don’t have complete information about any constraints on how that data is supposed to be used or shared. The researchers at the Data Provenance Initiative found that data sets often have restrictive licenses or terms attached to them, which should limit their use for commercial purposes, for example.

“This lack of consistency across the data lineage makes it very hard for developers to make the right choice about what data to use,” says Hooker.

It also makes it almost impossible to be completely certain you haven’t trained your model on copyrighted data, adds Longpre.

More recently, companies such as OpenAI and Google have struck exclusive data-sharing deals with publishers, major forums such as Reddit, and social media platforms on the web. But this becomes another way for them to concentrate their power.

“These exclusive contracts can partition the internet into various zones of who can get access to it and who can’t,” says Longpre.

The trend benefits the biggest AI players, who can afford such deals, at the expense of researchers, nonprofits, and smaller companies, who will struggle to get access. The largest companies also have the best resources for crawling data sets.

“This is a new wave of asymmetric access that we haven’t seen to this extent on the open web,” Longpre says.

The West vs. the rest

The data that is used to train AI models is also heavily skewed to the Western world. Over 90% of the data sets that the researchers analyzed came from Europe and North America, and fewer than 4% came from Africa. 

“These data sets are reflecting one part of our world and our culture, but completely omitting others,” says Hooker.

The dominance of the English language in training data is partly explained by the fact that the internet is still over 90% in English, and there are still a lot of places on Earth where there’s really poor internet connection or none at all, says Giada Pistilli, principal ethicist at Hugging Face, who was not part of the research team. But another reason is convenience, she adds: Putting together data sets in other languages and taking other cultures into account requires conscious intention and a lot of work. 

The Western focus of these data sets becomes particularly clear with multimodal models. When an AI model is prompted for the sights and sounds of a wedding, for example, it might only be able to represent Western weddings, because that’s all that it has been trained on, Hooker says. 

This reinforces biases and could lead to AI models that push a certain US-centric worldview, erasing other languages and cultures.

“We are using these models all over the world, and there’s a massive discrepancy between the world we’re seeing and what’s invisible to these models,” Hooker says. 

AI’s search for more energy is growing more urgent

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If you drove by one of the 2,990 data centers in the United States, you’d probably think little more than “Huh, that’s a boring-looking building.” You might not even notice it at all. But these facilities underpin our entire digital world, and they are responsible for a substantial share of greenhouse-gas emissions. New research shows just how much those emissions have skyrocketed during the AI boom.

Since 2018, carbon emissions from data centers in the US have tripled, according to new research led by a team at the Harvard T.H. Chan School of Public Health. That puts data centers slightly below domestic commercial airlines as a source of this pollution.

That leaves a big problem for the world’s leading AI companies, which are caught between pressure to meet their own sustainability goals and the relentless competition in AI that’s pushing them to build ever-bigger models requiring enormous amounts of energy. The trend toward more energy-intensive models, including video generators like OpenAI’s Sora, will only send those numbers higher.

A growing coalition of companies is looking toward nuclear energy as a way to power artificial intelligence. Meta announced on December 3 it was looking for nuclear partners, and Microsoft is working to restart the Three Mile Island nuclear plant by 2028. Amazon signed nuclear agreements in October. 

However, nuclear plants take ages to come online. And though public support has increased in recent years, and president-elect Donald Trump has signaled support, only a slight majority of Americans say they favor more nuclear plants to generate electricity. 

Though OpenAI CEO Sam Altman pitched the White House in September on an unprecedented effort to build more data centers, the AI industry is looking far beyond the United States. Countries in Southeast Asia, like Malaysia, Indonesia, Thailand, and Vietnam, are all courting AI companies, hoping to be their new data center hubs. 

In the meantime, AI companies will continue to use up power from their current sources, which are far from renewable. Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The researchers found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. Read more about the new research here.


Deeper Learning

We saw a demo of the new AI system powering Anduril’s vision for war

We’re living through the first drone wars, but AI is poised to change the future of warfare even more drastically. I saw that firsthand during a visit to a test site in Southern California run by Anduril, the maker of AI-powered drones, autonomous submarines, and missiles. Anduril has built a way for the military to command much of its hardware—from drones to radars to unmanned fighter jets—from a single computer screen. 

Why it matters: Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself are increasingly adopting a new worldview: A future “great power” conflict—military jargon for a global war involving multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. The Pentagon is betting lots of energy and money that AI—despite its flaws and risks—will be what puts the US and its allies ahead in that fight. Read more here.

Bits and Bytes

Bluesky has an impersonator problem 

The platform’s rise has brought with it a surge of crypto scammers, as my colleague Melissa Heikkilä experienced firsthand. (MIT Technology Review)

Tech’s elite make large donations to Trump ahead of his inauguration 

Leaders in Big Tech, who have been lambasted by Donald Trump, have made sizable donations to his inauguration committee. (The Washington Post)

Inside the premiere of the first commercially streaming AI-generated movies

The films, according to writer Jason Koebler, showed the telltale flaws of AI-generated video: dead eyes, vacant expressions, unnatural movements, and a reliance on voice-overs, since dialogue doesn’t work well. The company behind the films is confident viewers will stomach them anyway. (404 Media)

Meta asked California’s attorney general to stop OpenAI from becoming for-profit

Meta now joins Elon Musk in alleging that OpenAI has improperly enjoyed the benefits of nonprofit status while developing its technology. (Wall Street Journal)

How Silicon Valley is disrupting democracy

Two books explore the price we’ve paid for handing over unprecedented power to Big Tech—and explain why it’s imperative we start taking it back. (MIT Technology Review)

The 8 worst technology failures of 2024

They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.

Some of the foul-ups were funny, like the "woke" AI that got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. We also reaped failures among startups that raced to expand from 2020 to 2022, a period of ultra-low interest rates. But then the economic winds shifted. Money wasn't free anymore. The result? Bankruptcy and dissolution for companies whose ambitious technological projects, from vertical farms to carbon credits, hadn't yet turned a profit and might never do so.

Read on.

Woke AI blunder

ai-generated image of a female pope

GOOGLE GEMINI VIA X.COM/END WOKENESS

People worry about bias creeping into AI. But what if you add bias on purpose? Thanks to Google, we know where that leads: Black Vikings and female popes.

Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books. Ask Google for a picture of German soldiers from World War II, and it would create a Benetton ad in Wehrmacht uniforms. 

Critics pounced and Google beat an embarrassed retreat. It paused Gemini’s ability to draw people and agreed its well-intentioned effort to be inclusive had “missed the mark.” 

The free version of Gemini still won’t create images of people. But paid versions will. When we asked for an image of 12 CEOs of public biotech companies, the software produced a photographic-quality image of middle-aged white men. Less than ideal. But closer to the truth. 

More: Is Google’s Gemini chatbot woke by accident, or by design? (The Economist), Gemini image generation got it wrong. We’ll do better. (Google)


Boeing Starliner

Boeing CST-100 Starliner

THE BOEING COMPANY VIA NASA

Boeing, we have a problem. And it's your long-delayed reusable spaceship, the Starliner, which stranded NASA astronauts Sunita "Suni" Williams and Barry "Butch" Wilmore on the International Space Station.

The June mission was meant to be a quick eight-day round trip to test Starliner before it embarked on longer missions. But, plagued by helium leaks and thruster problems, it had to come back empty. 

Now Butch and Suni won’t return to Earth until 2025, when a craft from Boeing competitor SpaceX is scheduled to bring them home. 

Credit Boeing and NASA with putting safety first. But this wasn’t Boeing’s only malfunction during 2024. The company began the year with a door blowing off one of its planes midflight, faced a worker strike, agreed to a major fine for misleading the government about the safety of its 737 Max airplane (which made our 2019 list of worst technologies), and saw its CEO step down in March.

After the Starliner fiasco, Boeing fired the chief of its space and defense unit. “At this critical juncture, our priority is to restore the trust of our customers and meet the high standards they expect of us to enable their critical missions around the world,” Boeing’s new CEO, Kelly Ortberg, said in a memo.

More: Boeing’s beleaguered space capsule is heading back to Earth without two NASA astronauts (NY Post), Boeing’s space and defense chief exits in new CEO’s first executive move (Reuters), CST-100 Starliner (Boeing)


CrowdStrike outage

MITTR / ENVATO

The motto of the cybersecurity company CrowdStrike is “We stop breaches.” And it’s true: No one can breach your computer if you can’t turn it on.

That’s exactly what happened to many people on July 19, when thousands of Windows computers at airlines, TV stations, and hospitals started displaying the “blue screen of death.” 

The cause wasn’t hackers or ransomware. Instead, those computers were stuck in a boot loop because of a bad update shipped by CrowdStrike itself. CEO George Kurtz jumped on X to say the “issue” had been identified as a “defect” in a single computer file.

So who is liable? CrowdStrike customer Delta Air Lines, which canceled 7,000 flights, is suing for $500 million. It alleges that the security firm caused a "global catastrophe" when it took "uncertified and untested shortcuts." 

CrowdStrike countersued. It says Delta’s management is to blame for its troubles and that the airline is due little more than a refund. 

More: “CrowdStrike is working with customers” (George Kurtz), How to fix a Windows PC affected by the global outage (MIT Technology Review), Delta Sues CrowdStrike Over July Operations Meltdown (WSJ)


Vertical farms

a blighted brown leaf of lettuce

MITTR / ENVATO

Grow lettuce in buildings using robots, hydroponics, and LED lights. That’s what Bowery, a “vertical farming” startup, raised over $700 million to do. But in November, Bowery went bust, making it the biggest startup failure of the year, according to the business analytics firm CB Insights. 

Bowery claimed that vertical farms were “100 times more productive” per square foot than traditional farms, since racks of plants could be stacked 40 feet high. In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.

More: How a leaf-eating pathogen, failed deals brought down Bowery Farming (Pitchbook), Vertical farming “unicorn” Bowery to shut down (Axios)


Exploding pagers

an explosion behind a pager

MITTR / ADOBE STOCK

They beeped, and then they blew up. Across Lebanon, fingers and faces were shredded in what was called Israel’s “surprise opening blow in an all-out war to try to cripple Hezbollah.” 

The deadly attack was diabolically clever. Israel set up shell companies that sold thousands of pagers packed with explosives to the Islamic faction, which was already worried that its phones were being spied on. 

A coup for Israel’s spies. But was it a war crime? A 1996 treaty prohibits intentionally manufacturing “apparently harmless objects” designed to explode. The New York Times says nine-year-old Fatima Abdullah died when her father’s booby-trapped beeper chimed and she raced to take it to him.

More: Israel conducted Lebanon pager attack… (Axios), A 9-Year-Old Girl Killed in Pager Attack Is Mourned in Lebanon (New York Times), Did Israel break international law? (Middle East Eye)


23andMe

The 23andMe logo protruding from a cardboard box of desk items held by an office worker.

MITTR / ADOBE STOCK

The company that pioneered direct-to-consumer gene testing is sinking fast. Its stock price is going toward zero, and a plan to create valuable drugs is kaput after that team got pink slips this November.

23andMe always had a celebrity aura, bathing in good press. Now, though, the press is all bad. It’s a troubled company in the grip of a controlling founder, Anne Wojcicki, after its independent directors resigned en masse this September. Customers are starting to worry about what’s going to happen to their DNA data if 23andMe goes under.

23andMe says it created “the world’s largest crowdsourced platform for genetic research.” That’s true. It just never figured out how to turn a profit. 

More: 23andMe’s fall from $6 billion to nearly $0 (Wall Street Journal), How to delete your 23andMe data (MIT Technology Review), 23andMe Financial Report, November 2024 (23andMe)


AI slop

ai-generated image of a representation of Jesus with outspread arms and body composed of shrimp parts

AUTHOR UNKNOWN VIA WIKIMEDIA COMMONS

Slop is the scraps and leftovers that pigs eat. “AI slop” is what you and I are increasingly consuming online now that people are flooding the internet with computer-generated text and pictures.  

AI slop is “dubious,” says the New York Times, and “dadaist,” according to Wired. It’s frequently weird, like Shrimp Jesus (don’t ask if you don’t know), or deceptive, like the picture of a shivering girl in a rowboat, supposedly showing the US government’s poor response to Hurricane Helene.

AI slop is often entertaining. AI slop is usually a waste of your time. AI slop is not fact-checked. AI slop exists mostly to get clicks. AI slop is that blue-check account on X posting 10-part threads on how great AI is—threads that were written by AI. 

Most of all, AI slop is very, very common. This year, researchers claimed that about half the long posts on LinkedIn and Medium were partly AI-generated.

More: First came ‘Spam.’ Now, With A.I., We’ve got ‘Slop’ (New York Times), AI Slop Is Flooding Medium (Wired)


Voluntary carbon markets

a spindly tree with a cloud of emissions hovering around it

MITTR / ENVATO

Your business creates emissions that contribute to global warming. So why not pay to have some trees planted or buy a more efficient cookstove for someone in Central America? Then you could reach net-zero emissions and help save the planet.

Neat idea, but good intentions aren’t enough. This year the carbon marketplace Nori shut down, and so did Running Tide, a firm trying to sink carbon into the ocean. “The problem is the voluntary carbon market is voluntary,” Running Tide’s CEO wrote in a farewell post, citing a lack of demand.

While companies like to blame low demand, it’s not the only issue. Sketchy technology, questionable credits, and make-believe offsets have created a credibility problem in carbon markets. In October, US prosecutors charged two men in a $100 million scheme involving the sale of nonexistent emissions savings. 

More: The growing signs of trouble for global carbon markets (MIT Technology Review), Running Tide’s ill-fated adventure in ocean carbon removal (Canary Media), Ex-carbon offsetting boss charged in New York with multimillion-dollar fraud (The Guardian)