AI’s emissions are about to skyrocket even further

It’s no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much. 

A new paper, from a team at the Harvard T.H. Chan School of Public Health, examined 2,132 data centers operating in the United States (78% of all facilities in the country). These facilities—essentially buildings filled to the brim with rows of servers—are where AI models get trained, and they also get “pinged” every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool. 

Since 2018, carbon emissions from data centers in the US have tripled. For the 12 months ending August 2024, data centers were responsible for 105 million metric tons of CO2, accounting for 2.18% of national emissions (for comparison, domestic commercial airlines are responsible for about 131 million metric tons). About 4.59% of all the energy used in the US goes toward data centers, a figure that’s doubled since 2018.
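Those figures imply a national total that can be sanity-checked with back-of-envelope arithmetic (a quick sketch; the implied total is my own derivation from the numbers above, not a figure from the paper):

```python
# Figures reported for the 12 months ending August 2024
data_center_co2_t = 105e6   # metric tons of CO2 from US data centers
share_of_national = 0.0218  # 2.18% of national emissions

# Implied US total: roughly 4.8 billion metric tons of CO2
implied_national_co2_t = data_center_co2_t / share_of_national
print(f"Implied national emissions: {implied_national_co2_t / 1e9:.2f} billion t")

# Comparison point from the same paragraph: domestic airlines at 131 Mt
airline_ratio = 131 / 105
print(f"Domestic airlines emit about {airline_ratio:.2f}x data centers' total")
```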

It’s difficult to put a number on how much AI in particular, which has been booming since ChatGPT launched in November 2022, is responsible for this surge. That’s because data centers process lots of different types of data—in addition to training or pinging AI models, they do everything from hosting websites to storing your photos in the cloud. However, the researchers say, AI’s share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

“It’s a pretty big surge,” says Eric Gimon, a senior fellow at the think tank Energy Innovation, who was not involved in the research. “There’s a lot of breathless analysis about how quickly this exponential growth could go. But it’s still early days for the business in terms of figuring out efficiencies, or different kinds of chips.”

Notably, the sources for all this power are particularly “dirty.” Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The paper, which was published on arXiv and has not yet been peer-reviewed, found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. 

There are causes other than simply being located in coal country, says Falco Bargagli-Stoffi, an author of the paper. “Dirtier energy is available throughout the entire day,” he says, and plenty of data centers require that to maintain peak operation 24-7. “Renewable energy, like wind or solar, might not be as available.” Political or tax incentives, and local pushback, can also affect where data centers get built.  

One key shift in AI right now means that the field’s emissions are soon likely to skyrocket. AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these “multimodal” models have been stuck in the research phase, but that’s changing. 

OpenAI released its video generation model Sora to the public on December 9, and its website has been so flooded with traffic from people eager to test it out that it is still not functioning properly. Competing models, like Veo from Google and Movie Gen from Meta, have still not been released publicly, but if those companies follow OpenAI’s lead as they have in the past, they might be soon. Music generation models from Suno and Udio are growing (despite lawsuits), and Nvidia released its own audio generator last month. Google is working on its Astra project, which will be a video-AI companion that can converse with you about your surroundings in real time. 

“As we scale up to images and video, the data sizes increase exponentially,” says Gianluca Guidi, a PhD student in artificial intelligence at the University of Pisa and IMT Lucca, who is the paper’s lead author. Combine that with wider adoption, he says, and emissions will soon jump. 

One of the goals of the researchers was to build a more reliable way to get snapshots of just how much energy data centers are using. That’s been a more complicated task than you might expect, given that the data is dispersed across a number of sources and agencies. They’ve now built a portal that shows data center emissions across the country. The long-term goal of the data pipeline is to inform future regulatory efforts to curb emissions from data centers, which are predicted to grow enormously in the coming years. 

“There’s going to be increased pressure, between the environmental and sustainability-conscious community and Big Tech,” says Francesca Dominici, director of the Harvard Data Science Initiative and another coauthor. “But my prediction is that there is not going to be regulation. Not in the next four years.”

China banned exports of a few rare minerals to the US. Things could get messier.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I’ve thought more about gallium and germanium over the last week than I ever have before (and probably more than anyone ever should).

As you may already know, China banned the export of those materials to the US last week and placed restrictions on others. The move is just the latest drama in escalating trade tensions between the two countries.

While the new export bans could have significant economic consequences, this might be only the beginning. China is a powerhouse, and not just in those niche materials—it’s also a juggernaut in clean energy, and particularly in battery supply chains. So what comes next could have significant consequences for EVs and climate action more broadly.

A super-quick catch-up on the news here: The Biden administration recently restricted exports of chips and other technology that could help China develop advanced semiconductors. Also, president-elect Donald Trump has floated all sorts of tariffs on Chinese goods.

Apparently in response to some or all of this, China banned the export of gallium, germanium, antimony, and superhard materials used in manufacturing, and said it may further restrict graphite sales. The materials are all used for both military and civilian technologies, and significantly, gallium and germanium are used in semiconductors.

It’s a ramp-up from last July, when China placed restrictions on gallium and germanium exports after enduring years of restrictions by the US and its Western allies on cutting-edge technology. (For more on the details of China’s most recent move, including potential economic impacts, check out the full coverage from my colleague James Temple.)

What struck me about this news is that this could be only the beginning, because China is central to many of the supply chains snaking around the globe.

This is no accident—take gallium as an example. The metal is a by-product of aluminum production from bauxite ore. China, as the world’s largest aluminum producer, certainly has a leg up as a major player in this niche material. But other countries could produce gallium, and I’m sure more will. China has a head start because it invested in gallium separation and refining technologies.

A similar situation exists in the battery world. China is a dominant player all over the supply chain for lithium-ion batteries—not because it happens to have the right metals on its shores (it doesn’t), but because it’s invested in extraction and processing technologies.

Take lithium, a crucial component in those batteries. China has around 8% of the world’s lithium reserves but processes about 58% of the world’s lithium supply. The situation is similar for other key battery metals. Nickel that’s mined in Indonesia goes to China for processing, and the same goes for cobalt from the Democratic Republic of Congo.

Over the past two decades, China has thrown money, resources, and policy behind electric vehicles. Now China leads the world in EV registrations, many of the largest EV makers are Chinese companies, and the country is home to a huge chunk of the supply chain for the vehicles and their batteries.

As the world begins a shift toward technologies like EVs, it’s becoming clear just how dominant China’s position is in many of the materials crucial to building that tech.

Lithium prices have dropped by 80% over the past year, and while part of the reason is a slowdown in EV demand, another part is that China is oversupplying lithium, according to US officials. By flooding the market and causing prices to drop, China could make it tougher for other lithium processors to justify sticking around in the business.

The new graphite controls from China could wind up affecting battery markets, too. Graphite is crucial for lithium-ion batteries, which use the material in their anodes. It’s still not clear whether the new bans will affect battery materials or just higher-purity material that’s used in military applications, according to reporting from Carbon Brief.

To this point, China hasn’t specifically banned exports of key battery materials, and it’s not clear exactly how far the country would go. Global trade politics are delicate and complicated, and any move that China makes in battery supply chains could wind up coming back to hurt the country’s economy. 

But we could be entering into a new era of material politics. Further restrictions on graphite, or moves that affect lithium, nickel, or copper, could have major ripple effects around the world for climate technology, because batteries are key not only for electric vehicles, but increasingly for our power grids. 

While it’s clear that tensions are escalating, it’s still unclear what’s going to happen next. The vibes, at best, are uncertain, and this sort of uncertainty is exactly why so many folks in technology are so focused on how to diversify global supply chains. Otherwise, we may find out just how tangled those supply chains really are, and what happens when you yank on threads that run through the center of them. 


Now read the rest of The Spark

Related reading

Check out James Temple’s breakdown of what China’s ban on some rare minerals could mean for the US.

Last July, China placed restrictions on some of these materials—read this story from Zeyi Yang, who explains what the moves and future ones might mean for semiconductor technology.

As technology shifts, so too do the materials we need to build it. The result: a never-ending effort to build out mining, processing, and recycling infrastructure, as I covered in a feature story earlier this year.


Another thing 

Each year we release a list of 10 Breakthrough Technologies, and it’s nearly time for the 2025 edition. But before we announce the picks, here are a few things that didn’t make the cut.

A couple of interesting ones on the cutting-room floor here, including eVTOLs, electric aircraft that can take off and land like helicopters. For more on why the runway is looking pretty long for electric planes (especially ones with funky ways to move through the skies), check out this story from last year.

Keeping up with climate  

Denmark received no bids in its latest offshore wind auction. It’s a disappointing result for the birthplace of offshore wind power. (Reuters)

Surging methane emissions could be the sign of a concerning shift for the climate. A feedback loop of emissions from the Arctic and a slowdown in how the powerful greenhouse gas breaks down could spell trouble. (Inside Climate News)

Battery prices are dropping faster than expected. Costs for lithium-ion packs just saw their steepest drop since 2017. (Electrek)

This fusion startup is rethinking how to configure its reactors by floating powerful magnets in the middle of the chamber. This sounds even more like science fiction than most other approaches to fusion. (IEEE Spectrum)

The US plans to put monarch butterflies on a list of threatened species. Temperature shifts brought on by climate change could wreak havoc with the insects’ migration. (Associated Press)

Sources close to Elon Musk say he’s undergone quite a shift on climate change, morphing from “environmental crusader to critic of dire climate predictions.” (Washington Post)

Google has a $20 billion plan to build data centers and clean power together. “Bring your own power” is an interesting idea, but not a tested prospect just yet. (Canary Media)

The Franklin Fire in Los Angeles County sparked Monday evening and quickly grew into a major blaze. At the heart of the fire’s rapid spread: dry weather and Santa Ana winds. (Scientific American)

Places in the US that are most at risk for climate disasters are also most at risk for insurance hikes. Check out these great data visualizations on insurance and climate change. (The Guardian)

AI’s hype and antitrust problem is coming under scrutiny

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington. 

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.

The new bill would “require a competitive award process” for contracts, which would ban the use of “no-bid” awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)

While Big Tech is hit with antitrust investigations—including the ongoing lawsuit against Google about its dominance in search, as well as a new investigation opened into Microsoft—regulators are also accusing AI companies of, well, just straight-up lying. 

On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and being trained on millions of images, two claims the FTC says are false. (The company couldn’t support the bias claim and the system was trained on only 100,000 images, the FTC says.)

A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident when they failed to detect a seven-inch knife that was ultimately used to stab a student. 

Those add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling “AI lawyer” services. 

The actions are somewhat tame. IntelliVision and Evolv have not actually been fined. The FTC has simply prohibited the companies from making claims that they can’t back up with evidence and, in the case of Evolv, required the company to allow certain customers to get out of their contracts if they wish to. 

However, they do represent an effort to hold the AI industry’s hype to account in the final months before the FTC’s chair, Lina Khan, is likely to be replaced when Donald Trump takes office. Trump has not named a pick for FTC chair, but he said on Thursday that Gail Slater, a tech policy advisor and a former aide to vice president–elect JD Vance, was picked to head the Department of Justice’s Antitrust Division. Trump has signaled that the agency under Slater will keep tech behemoths like Google, Amazon, and Microsoft in the crosshairs. 

“Big Tech has run wild for years, stifling competition in our most innovative sector and, as we all know, using its market power to crack down on the rights of so many Americans, as well as those of Little Tech!” Trump said in his announcement of the pick. “I was proud to fight these abuses in my First Term, and our Department of Justice’s antitrust team will continue that work under Gail’s leadership.”

That said, at least some of Trump’s frustrations with Big Tech are different—like his concerns that conservatives could be targets of censorship and bias. And that could send antitrust efforts in a distinctly new direction on his watch. 


Now read the rest of The Algorithm

Deeper Learning

The US Department of Defense is investing in deepfake detection

The Pentagon’s Defense Innovation Unit, a tech accelerator within the military, has awarded its first contract for deepfake detection. Hive AI will receive $2.4 million over two years to help detect AI-generated video, image, and audio content. 

Why it matters: As hyperrealistic deepfakes get cheaper and easier to produce, they hurt our ability to tell what’s real. The military’s investment in deepfake detection shows that the problem has national security implications as well. The open question is how accurate these detection tools are, and whether they can keep up with the unrelenting pace at which deepfake generation techniques are improving. Read more from Melissa Heikkilä.

Bits and Bytes

The owner of the LA Times plans to add an AI-powered “bias meter” to its news stories

Patrick Soon-Shiong is building a tool that will allow readers to “press a button and get both sides” of a story. But trying to create an AI model that can somehow provide an objective view of news events is controversial, given that models are biased both by their training data and by fine-tuning methods. (Yahoo)

Google DeepMind’s new AI model is the best yet at weather forecasting

It’s the second AI weather model that Google has launched in just the past few months. But this one’s different: It leaves out traditional physics models and relies on AI methods alone. (MIT Technology Review)

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

Startups in Latvia and other nearby countries see the mobilization of Ukraine as a warning and an inspiration. They are now changing consumer products—from scooters to recreational drones—for use on the battlefield. (MIT Technology Review)

How Nvidia’s Jensen Huang is avoiding $8 billion in taxes

Jensen Huang runs Nvidia, the world’s top chipmaker and most valuable company. His wealth has soared during the AI boom, and he has taken advantage of a number of tax dodges “that will enable him to pass on much of his fortune tax free,” according to the New York Times. (The New York Times)

Meta is pursuing nuclear energy for its AI ambitions
Meta wants more of its AI training and development to be powered by nuclear energy, joining the ranks of Amazon and Microsoft. The news comes as many companies in Big Tech struggle to meet their sustainability goals amid the soaring energy demands from AI. (Meta)

Correction: A previous version of this article stated that Gail Slater was picked by Donald Trump to be the head of the FTC. Slater was in fact picked to lead the Department of Justice’s Antitrust Division. We apologize for the error.

We saw a demo of the new AI system powering Anduril’s vision for war

One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield. If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply into the theater of war than ever before. 

Near the site’s command center, which looked out over desert scrubs and sage, sat pieces of Anduril’s hardware suite that have helped the company earn its $14 billion valuation. There was Sentry, a security tower of cameras and sensors currently deployed at both US military bases and the US-Mexico border, and advanced radars. Multiple drones, including an eerily quiet model called Ghost, sat ready to be deployed. What I was there to watch, though, was a different kind of weapon, displayed on two large television screens positioned at the test site’s command station. 

I was here to examine the pitch being made by Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself: A future “great power” conflict—military jargon for a global war involving competition between multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. And that will have to be done “at the edge” where threats arise, not necessarily at a command post in Washington. 

A desert drone test

“You’re going to need to really empower lower levels to make decisions, to understand what’s going on, and to fight,” Anduril CEO Brian Schimpf says. “That is a different paradigm than today.” Currently, information flows poorly among people on the battlefield and decision-makers higher up the chain. 

To show how the new tech will fix that, Anduril walked me through an exercise demonstrating how its system would take down an incoming drone threatening a base of the US military or its allies (the scenario at the center of Anduril’s new partnership with OpenAI). It began with a truck in the distance, driving toward the base. The AI-powered Sentry tower automatically recognized the object as a possible threat, highlighting it as a dot on one of the screens. Anduril’s software, called Lattice, sent a notification asking the human operator if he would like to send a Ghost drone to monitor. After a click of his mouse, the drone piloted itself autonomously toward the truck, with the software feeding it the truck’s location as gathered by the Sentry.

The truck disappeared behind some hills, so the Sentry tower camera that was initially trained on it lost contact. But the surveillance drone had already identified it, so its location stayed visible on the screen. We watched as someone in the truck got out and launched a drone, which Lattice again labeled as a threat. It asked the operator if he’d like to send a second attack drone, which then piloted autonomously and locked onto the threatening drone. With one click, it could be instructed to fly into it fast enough to take it down. (We stopped short here, since Anduril isn’t allowed to actually take down drones at this test site.) The entire operation could have been managed by one person with a mouse and computer.

Anduril is building on these capabilities further by expanding Lattice Mesh, a software suite that allows other companies to tap into Anduril’s software and share data, the company announced today. More than 10 companies are now building their hardware into the system—everything from autonomous submarines to self-driving trucks—and Anduril has released a software development kit to help them do so. Military personnel operating hardware can then “publish” their own data to the network and “subscribe” to receive data feeds from other sensors in a secure environment. On December 3, the Pentagon’s Chief Digital and AI Office awarded a three-year contract to Anduril for Mesh. 
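The “publish” and “subscribe” model described here is a standard messaging pattern. A minimal sketch of the idea (hypothetical names throughout; this is an illustration of the pattern, not Anduril’s SDK, whose API is not public):

```python
from collections import defaultdict
from typing import Callable

class MeshBus:
    """Toy publish/subscribe bus: sensors publish data to named topics,
    and operators subscribe to only the feeds they need."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict):
        # Deliver the payload to every handler subscribed to this topic
        for handler in self._subscribers[topic]:
            handler(payload)

# A radar publishes a track; any subscribed console receives it.
bus = MeshBus()
received = []
bus.subscribe("tracks/air", received.append)
bus.publish("tracks/air", {"id": "drone-1", "lat": 33.4, "lon": -117.6})
print(received)
```

The appeal of the pattern for the battlefield case the article describes is decoupling: the sensor doesn’t need to know which consoles, drones, or vehicles are listening.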

Anduril’s offering will also join forces with Maven, a program operated by the defense data giant Palantir that fuses information from different sources, like satellites and geolocation data. It’s the project that led Google employees in 2018 to protest against working in warfare. Anduril and Palantir announced on December 6 that the military will be able to use the Maven and Lattice systems together. 

The military’s AI ambitions

The aim is to make Anduril’s software indispensable to decision-makers. It also represents a massive expansion of how the military is currently using AI. You might think the US Department of Defense, advanced as it is, would already have this level of hardware connectivity. We have some semblance of it in our daily lives, where phones, smart TVs, laptops, and other devices can talk to each other and share information. But for the most part, the Pentagon is behind.

“There’s so much information in this battle space, particularly with the growth of drones, cameras, and other types of remote sensors, where folks are just sopping up tons of information,” says Zak Kallenborn, a warfare analyst who works with the Center for Strategic and International Studies. Sorting through to find the most important information is a challenge. “There might be something in there, but there’s so much of it that we can’t just set a human down to deal with it,” he says. 

Right now, humans also have to translate between systems made by different manufacturers. One soldier might have to manually rotate a camera to look around a base and see if there’s a drone threat, and then manually send information about that drone to another soldier operating the weapon to take it down. Those instructions might be shared via a low-tech messenger app—one on par with AOL Instant Messenger. That takes time. It’s a problem the Pentagon is attempting to solve through its Joint All-Domain Command and Control plan, among other initiatives.

“For a long time, we’ve known that our military systems don’t interoperate,” says Chris Brose, former staff director of the Senate Armed Services Committee and principal advisor to Senator John McCain, who now works as Anduril’s chief strategy officer. Much of his work has been convincing Congress and the Pentagon that a software problem is just as worthy of a slice of the defense budget as jets and aircraft carriers. (Anduril spent nearly $1.6 million on lobbying last year, according to data from Open Secrets, and has numerous ties with the incoming Trump administration: Anduril founder Palmer Luckey has been a longtime donor and supporter of Trump, and JD Vance spearheaded an investment in Anduril in 2017 when he worked at venture capital firm Revolution.) 

Defense hardware also suffers from a connectivity problem. Tom Keane, a senior vice president in Anduril’s connected warfare division, walked me through a simple example from the civilian world. If you receive a text message while your phone is off, you’ll see the message when you turn the phone back on. It’s preserved. “But this functionality, which we don’t even think about,” Keane says, “doesn’t really exist” in the design of many defense hardware systems. Data and communications can be easily lost in challenging military networks. Anduril says its system instead stores data locally. 

An AI data treasure trove

The push to build more AI-connected hardware systems in the military could spark one of the largest data collection projects the Pentagon has ever undertaken, and companies like Anduril and Palantir have big plans. 

“Exabytes of defense data, indispensable for AI training and inferencing, are currently evaporating,” Anduril said on December 6, when it announced it would be working with Palantir to compile data collected in Lattice, including highly sensitive classified information, to train AI models. Training on a broader collection of data collected by all these sensors will also hugely boost the model-building efforts that Anduril is now doing in a partnership with OpenAI, announced on December 4. Earlier this year, Palantir also offered its AI tools to help the Pentagon reimagine how it categorizes and manages classified data. When Anduril founder Palmer Luckey told me in an interview in October that “it’s not like there’s some wealth of information on classified topics and understanding of weapons systems” to train AI models on, he may have been foreshadowing what Anduril is now building. 

Even if some of this data from the military is already being collected, AI will suddenly make it much more useful. “What is new is that the Defense Department now has the capability to use the data in new ways,” Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, wrote in an email. “More data and ability to process it could support great accuracy and precision as well as faster information processing.”

The sum of these developments might be that AI models are brought more directly into military decision-making. That idea has brought scrutiny, as when Israel was found last year to have been using advanced AI models to process intelligence data and generate lists of targets. Human Rights Watch wrote in a report that the tools “rely on faulty data and inexact approximations.”

“I think we are already on a path to integrating AI, including generative AI, into the realm of decision-making,” says Probasco, who authored a recent analysis of one such case. She examined a system built within the military in 2023 called Maven Smart System, which allows users to “access sensor data from diverse sources [and] apply computer vision algorithms to help soldiers identify and choose military targets.”

Probasco said that building an AI system to control an entire decision pipeline, possibly without human intervention, “isn’t happening” and that “there are explicit US policies that would prevent it.”

A spokesperson for Anduril said that the purpose of Mesh is not to make decisions. “The Mesh itself is not prescribing actions or making recommendations for battlefield decisions,” the spokesperson said. “Instead, the Mesh is surfacing time-sensitive information”—information that operators will consider as they make those decisions.

Bluesky has an impersonator problem 

Like many others, I recently fled the social media platform X for Bluesky. In the process, I started following many of the people I followed on X. On Thanksgiving, I was delighted to see a private message from a fellow AI reporter, Will Knight from Wired. Or at least that’s who I thought I was talking to. I became suspicious when the person claiming to be Knight mentioned being from Miami, when Knight is, in fact, from the UK. The account handle was almost identical to the real Will Knight’s handle, and the profile used his profile photo. 

Then more messages started to appear. Paris Marx, a prominent tech critic, slid into my DMs to ask me how I was doing. “Things are going splendid over here,” he wrote. Then things got suspicious again. “How are your trades going?” fake-Marx asked me. This account was far more sophisticated than the fake Knight one; it had meticulously copied every single tweet and retweet from Marx’s real page over the past few weeks.

Both accounts were eventually deleted, but not before trying to get me to set up a crypto wallet and a “cloud mining pool” account. Knight and Marx confirmed to us that these accounts did not belong to them, and that they have been fighting impersonator accounts of themselves for weeks. 

They are not the only ones. The New York Times tech journalist Sheera Frenkel and Molly White, a researcher and cryptocurrency critic, have also experienced people impersonating them on Bluesky, most likely to scam people. This tracks with research from Alexios Mantzarlis, the director of the Security, Trust, and Safety Initiative at Cornell Tech, who manually went through the top 500 Bluesky users by follower count and found that of the 305 accounts belonging to a named person, at least 74 had been impersonated by at least one other account. 

The platform has had to suddenly cater to an influx of millions of new users in recent months as people leave X in protest of Elon Musk’s takeover of the platform. Its user base has more than doubled since September, from 10 million users to over 20 million. This sudden wave of new users—and the inevitable scammers—means Bluesky is still playing catch-up, says White. 

“These accounts block me as soon as they’re created, so I don’t initially see them,” Marx says. Both Marx and White describe a frustrating pattern: When one account is taken down, another one pops up soon after. White says she has experienced a similar phenomenon on X and TikTok too. 

A way to prove that people are who they say they are would help. Before Musk took the reins of the platform, employees at X, previously known as Twitter, verified users such as journalists and politicians, and gave them a blue tick next to their handles so people knew they were dealing with credible news sources. After Musk took over, he scrapped the old verification system and offered blue ticks to all paying customers. 

The ongoing crypto-impersonation scams have raised calls for Bluesky to initiate something similar to Twitter’s original verification program. Some users, such as the investigative journalist Hunter Walker, have set up their own initiatives to verify journalists. However, users are currently limited in the ways they can verify themselves on the platform. By default, usernames on Bluesky end with the suffix bsky.social. The platform recommends that news organizations and high-profile people verify their identities by setting up their own websites as their usernames. For example, US senators have verified their accounts with the suffix senate.gov. But this technique isn’t foolproof. For one, it doesn’t actually verify people’s identity—only their affiliation with a particular website. 
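For readers curious how the domain check works under the hood: an AT Protocol handle resolves to a DID (a decentralized identifier) either through a DNS TXT record at `_atproto.<domain>` or by fetching `https://<domain>/.well-known/atproto-did`. Verifying a handle means confirming that the DID the domain serves matches the DID the account claims — which is why the check proves control of a website, not a person's identity. The sketch below is a minimal illustration of that comparison in Python; the function names are my own, the fetcher is a stub standing in for a real HTTP request, and the DIDs and domains are made up.

```python
def well_known_url(handle: str) -> str:
    """Build the HTTPS endpoint AT Protocol checks for a domain handle."""
    return f"https://{handle}/.well-known/atproto-did"

def handle_matches_did(handle: str, claimed_did: str, fetch) -> bool:
    """Return True if the domain behind `handle` serves the claimed DID.

    `fetch` is any callable mapping a URL to its response body. In real
    use it would perform an HTTP GET; here it can be a stub for testing.
    """
    try:
        served_did = fetch(well_known_url(handle)).strip()
    except Exception:
        # Domain unreachable or endpoint missing: verification fails.
        return False
    return served_did == claimed_did

# A fake "web" standing in for the network:
fake_web = {
    "https://example.senate.gov/.well-known/atproto-did": "did:plc:abc123\n",
}

print(handle_matches_did("example.senate.gov", "did:plc:abc123", fake_web.__getitem__))   # prints True
print(handle_matches_did("example.senate.gov", "did:plc:evil999", fake_web.__getitem__))  # prints False
print(handle_matches_did("unverified.example", "did:plc:abc123", fake_web.__getitem__))   # prints False
```

Note that an impersonator who controls any domain can pass this check for that domain — the mechanism binds an account to a website, nothing more.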

Bluesky did not respond to MIT Technology Review’s requests for comment, but the company’s safety team posted that the platform had updated its impersonation policy to be more aggressive and would remove impersonation and handle-squatting accounts. The company says it has also quadrupled its moderation team to take action on impersonation reports more quickly. But it seems to be struggling to keep up. “We still have a large backlog of moderation reports due to the influx of new users as we shared previously, though we are making progress,” the company continued. 

Bluesky’s decentralized nature makes kicking out impersonators a trickier problem to solve. Competitors such as X and Threads rely on centralized teams within the company who moderate unwanted content and behavior, such as impersonation. But Bluesky is built on the AT Protocol, a decentralized, open-source technology, which allows users more control over what kind of content they see and enables them to build communities around particular content. Most people sign up to Bluesky Social, the main social network, whose community guidelines ban impersonation. However, Bluesky Social is just one of the services or “clients” that people can use, and other services have their own moderation practices and terms. 

This approach means that until now, Bluesky itself hasn’t needed an army of content moderators to weed out unwanted behaviors because it relies on this community-led approach, says Wayne Chang, the founder and CEO of SpruceID, a digital identity company. That might have to change.

“In order to make these apps work at all, you need some level of centralization,” says Chang. Despite community guidelines, it’s hard to stop people from creating impersonation accounts, and Bluesky is engaged in a cat-and-mouse game trying to take suspicious accounts down. 

Cracking down on impersonation is important because it poses a serious threat to Bluesky’s credibility, says Chang. “It’s a legitimate complaint as a Bluesky user that ‘Hey, all those scammers are basically harassing me.’ You want your brand to be tarnished? Or is there something we can do about this?” he says.

A fix for this is urgently needed, because attackers might abuse Bluesky’s open-source code to create spam and disinformation campaigns at a much larger scale, says Francesco Pierri, an assistant professor at Politecnico di Milano who has researched Bluesky. His team found that the platform has seen a rise in suspicious accounts since it was made open to the public earlier this year. 

Bluesky acknowledges that its current practices are not enough. In a post, the company said it has received feedback that users want more ways to confirm their identities beyond domain verification, and it is “exploring additional options to enhance account verification.” 

In a livestream at the end of November, Bluesky CEO Jay Graber said the platform was considering becoming a verification provider, but because of its decentralized approach it would also allow others to offer their own user verification services. “And [users] can choose to trust us—the Bluesky team’s verification—or they could do their own. Or other people could do their own,” Graber said. 

But at least Bluesky seems to “have some willingness to actually moderate content on the platform,” says White. “I would love to see something a little bit more proactive that didn’t require me to do all of this reporting,” she adds. 

As for Marx, “I just hope that no one truly falls for it and gets tricked into crypto scams,” he says. 

Google’s new Project Astra could be generative AI’s killer app

Google DeepMind has announced an impressive grab bag of new products and prototypes that may just let it seize back its lead in the race to turn generative artificial intelligence into a mass-market concern. 

Top billing goes to Gemini 2.0—the latest iteration of Google DeepMind’s family of multimodal large language models, now redesigned around the ability to control agents—and a new version of Project Astra, the experimental everything app that the company teased at Google I/O in May.

MIT Technology Review got to try out Astra in a closed-door live demo last week. It was a stunning experience, but there’s a gulf between polished promo and live demo.

Astra uses Gemini 2.0’s built-in agent framework to answer questions and carry out tasks via text, speech, image, and video, calling up existing Google apps like Search, Maps, and Lens when it needs to. “It’s merging together some of the most powerful information retrieval systems of our time,” says Bibo Xu, product manager for Astra.

Gemini 2.0 and Astra are joined by Mariner, a new agent built on top of Gemini that can browse the web for you; Jules, a new Gemini-powered coding assistant; and Gemini for Games, an experimental assistant that you can chat to and ask for tips as you play video games. 

(And let’s not forget that in the last week Google DeepMind also announced Veo, a new video generation model; Imagen 3, a new version of its image generation model; and Willow, a new kind of chip for quantum computers. Whew. Meanwhile, CEO Demis Hassabis was in Sweden yesterday receiving his Nobel Prize.)

Google DeepMind claims that Gemini 2.0 is twice as fast as the previous version, Gemini 1.5, and outperforms it on a number of standard benchmarks, including MMLU-Pro, a large set of multiple-choice questions designed to test the abilities of large language models across a range of subjects, from math and physics to health, psychology, and philosophy. 

But the margins between top-end models like Gemini 2.0 and those from rival labs like OpenAI and Anthropic are now slim. These days, advances in large language models are less about how good they are and more about what you can do with them. 

And that’s where agents come in. 

Hands on with Project Astra 

Last week I was taken through an unmarked door on an upper floor of a building in London’s King’s Cross district into a room with strong secret-project vibes. The word “ASTRA” was emblazoned in giant letters across one wall. Xu’s dog, Charlie, the project’s de facto mascot, roamed between desks where researchers and engineers were busy building a product that Google is betting its future on.

“The pitch to my mum is that we’re building an AI that has eyes, ears, and a voice. It can be anywhere with you, and it can help you with anything you’re doing,” says Greg Wayne, co-lead of the Astra team. “It’s not there yet, but that’s the kind of vision.” 

The official term for what Xu, Wayne, and their colleagues are building is “universal assistant.” Exactly what that means in practice, they’re still figuring out. 

At one end of the Astra room were two stage sets that the team uses for demonstrations: a drinks bar and a mocked-up art gallery. Xu took me to the bar first. “A long time ago we hired a cocktail expert and we got them to instruct us to make cocktails,” said Praveen Srinivasan, another co-lead. “We recorded those conversations and used that to train our initial model.”

Xu opened a cookbook to a recipe for a chicken curry, pointed her phone at it, and woke up Astra. “Ni hao, Bibo!” said a female voice. 

“Oh! Why are you speaking to me in Mandarin?” Xu asked her phone. “Can you speak to me in English, please?”

“My apologies, Bibo. I was following a previous instruction to speak in Mandarin. I will now speak in English as you have requested.”

Astra remembers previous conversations, Xu told me. It also keeps track of the previous 10 minutes of video. (There’s a remarkable moment in the promo video that Google put out in May when Astra tells the person giving the demo where she had left her glasses, having spotted them on a desk a few seconds earlier. But I saw nothing like this in the live demo.)

Back to the cookbook. Moving her phone camera over the page for a few seconds, Xu asked Astra to read the recipe and tell her what spices were in it. “I recall the recipe mentioning a teaspoon of black peppercorns, a teaspoon of hot chili powder, and a cinnamon stick,” it replied.

“I think you’re missing a few,” said Xu. “Take another look.”

“You are correct—I apologize. I also see ground turmeric and curry leaves in the ingredients.” 

Seeing this tech in action, two things hit you straight away. First, it’s glitchy and often needs correcting. Second, those glitches can be corrected with just a few spoken words. You simply interrupt the voice, repeat your instructions, and move on. It feels more like coaching a child than butting heads with broken software.   

Next Xu pointed her phone at a row of wine bottles and asked Astra to pick the one that would go best with the chicken curry. It went for a Rioja and explained why. Xu asked how much a bottle would cost. Astra said it would need to use Search to look prices up online. A few seconds later it came back with its answer. 

We moved to the art gallery, and Xu showed Astra a number of screens with famous paintings on them: the Mona Lisa, Munch’s The Scream, a Vermeer, a Seurat, and several others. “Ni hao, Bibo!” the voice said. 

“You’re speaking to me in Mandarin again,” Xu said. “Try to speak to me in English, please.” 

“My apologies, I seem to have misunderstood. Yes, I will respond in English.” (I should know better, but I could swear I heard the snark.)

It was my turn. Xu handed me her phone. 

I tried to trip Astra up, but it was having none of it. I asked it what famous art gallery we were in, but it refused to hazard a guess. I asked why it had identified the paintings as replicas and it started to apologize for its mistake (Astra apologizes a lot). I was compelled to interrupt: “No, no—you’re right, it’s not a mistake. You’re correct to identify paintings on screens as fake paintings.” I couldn’t help feeling a bit bad: I’d confused an app that exists only to please. 

When it works well, Astra is enthralling. The experience of striking up a conversation with your phone about whatever you’re pointing it at feels fresh and seamless. In a media briefing yesterday, Google DeepMind shared a video showing off other uses: reading an email on your phone’s screen to find a door code (and then reminding you of that code later), pointing a phone at a passing bus and asking where it goes, quizzing it about a public artwork as you walk past. This could be generative AI’s killer app. 

And yet there’s a long way to go before most people get their hands on tech like this. There’s no mention of a release date. Google DeepMind has also shared videos of Astra working on a pair of smart glasses, but that tech is even further down the company’s wish list.

Mixing it up

For now, researchers outside Google DeepMind are keeping a close eye on its progress. “The way that things are being combined is impressive,” says Maria Liakata, who works on large language models at Queen Mary University of London and the Alan Turing Institute. “It’s hard enough to do reasoning with language, but here you need to bring in images and more. That’s not trivial.”

Liakata is also impressed by Astra’s ability to recall things it has seen or heard. She works on what she calls long-range context, getting models to keep track of information that they have come across before. “This is exciting,” says Liakata. “Even doing it in a single modality is exciting.”

But she admits that a lot of her assessment is guesswork. “Multimodal reasoning is really cutting-edge,” she says. “But it’s very hard to know exactly where they’re at, because they haven’t said a lot about what is in the technology itself.”

For Bodhisattwa Majumder, a researcher who works on multimodal models and agents at the Allen Institute for AI, that’s a key concern. “We absolutely don’t know how Google is doing it,” he says. 

He notes that if Google were to be a little more open about what it is building, it would help consumers understand the limitations of the tech they could soon be holding in their hands. “They need to know how these systems work,” he says. “You want a user to be able to see what the system has learned about you, to correct mistakes, or to remove things you want to keep private.”

Liakata is also worried about the implications for privacy, pointing out that people could be monitored without their consent. “I think there are things I’m excited about and things that I’m concerned about,” she says. “There’s something about your phone becoming your eyes—there’s something unnerving about it.” 

“The impact these products will have on society is so big that it should be taken more seriously,” she says. “But it’s become a race between the companies. It’s problematic, especially since we don’t have any agreement on how to evaluate this technology.”

Google DeepMind says it takes a long, hard look at privacy, security, and safety for all its new products. Its tech will be tested by teams of trusted users for months before it hits the public. “Obviously, we’ve got to think about misuse. We’ve got to think about, you know, what happens when things go wrong,” says Dawn Bloxwich, director of responsible development and innovation at Google DeepMind. “There’s huge potential. The productivity gains are huge. But it is also risky.”

No team of testers can anticipate all the ways that people will use and misuse new technology. So what’s the plan for when the inevitable happens? Companies need to design products that can be recalled or switched off just in case, says Bloxwich: “If we need to make changes quickly or pull something back, then we can do that.”

The world’s next big environmental problem could come from space

Early on a Sunday morning in September, a team of 12 sleep-deprived, jet-lagged researchers assembled at the world’s most remote airport. There, on Easter Island, some 2,330 miles off the coast of Chile, they were preparing for a unique chase: a race to catch a satellite’s last moments as it fell out of space and blazed into ash across the sky.

That spacecraft was Salsa, one of four satellites that were part of the European Space Agency (ESA) Cluster constellation. Salsa and its counterparts had been studying Earth’s magnetic field since the early 2000s, but the mission was now over. Months earlier, the spacecraft had been set on a spiral of death that would end with a fiery disintegration high up in Earth’s atmosphere about a thousand miles away from Easter Island’s coast.

Now, the scientists were poised to catch this reentry as it happened. Equipped with precise trajectory calculations from ESA’s ground control, the researchers took off in a rented business jet, with 25 cameras and spectrometers mounted by the windows. The hope was that they’d be able to gather priceless insights into the physical and chemical processes that occur when satellites burn up as they fall to Earth at the end of their missions.

Researchers were able to monitor the reentry of Cluster Salsa from a rented business jet.

This kind of study is growing more urgent. Some 15 years ago, barely a thousand satellites orbited our planet. Now the number has risen to about 10,000, and with the rise of satellite constellations like Starlink, another tenfold increase is forecast by the end of this decade. Letting these satellites burn up in the atmosphere at the end of their lives helps keep the quantity of space junk to a minimum. But doing so deposits satellite ash in the middle layers of Earth’s atmosphere. This metallic ash can harm the atmosphere and potentially alter the climate. Scientists don’t yet know how serious the problem is likely to be in the coming decades.

The ash from the reentries contains ozone-damaging substances. Modeling studies have shown that some of its components can also cool down Earth’s stratosphere, while others can warm it. Some worry that the metallic particles could even disrupt Earth’s magnetic field, obscure the view of Earth-observing satellites, and increase the frequency of thunderstorms.

“We need to see what kind of physics takes place up there,” says Stijn Lemmens, a senior analyst at ESA who oversaw the campaign. “If there are more [reentering] objects, there will be more consequences.”

A community of atmospheric scientists scattered all over the world is awaiting results from these measurements, hoping to fill major gaps in their understanding. 

The Salsa reentry was only the fifth such observation campaign in the history of spaceflight. The previous campaigns, however, tracked much larger objects, like a 19-ton upper stage from an Ariane 5 rocket.  

Cluster Salsa, at 550 kilograms, was quite tiny in comparison. And that makes it of special interest to scientists, because it’s spacecraft of this general size that will be increasingly crowding Earth orbit in the coming years.

The downside of mega-constellations

Most of the forecasted growth in satellite numbers is expected to come from satellites roughly the same size as Salsa: individual members of mega-constellations, designed to provide internet service with decent speed and latency to anyone, anywhere.

SpaceX’s Starlink is the biggest of these. Currently consisting of about 6,500 satellites, the fleet is expected to mushroom to more than 40,000 at some point in the 2030s. Other mega-constellations, including Amazon Kuiper, France-based E-Space, and the Chinese projects G60 and Guowang, are in the works. Each could encompass several thousand satellites, or even tens of thousands. 

Mega-constellation developers don’t want their spacecraft to fly for two or three decades like their old-school, government-funded counterparts. They want to replace these orbiting internet routers with newer, better tech every five years, sending the old ones back into the atmosphere to burn up. The rockets needed to launch all those satellites emit their own cocktail of contaminants (and their upper stages also end their life burning up in the atmosphere).

The amount of space debris vaporizing in Earth’s atmosphere has more than doubled in the past few years, says Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics who has built a second career as a leading space debris tracker.

“We used to see about 50 to 100 rocket stages reentering every year,” he says. “Now we’re looking at 300 a year.” 

In 2019, some 115 satellites burned up in the atmosphere. As of late November, 2024 had already set a new record with 950 satellite reentries, McDowell says.

The mass of vaporizing space junk will continue to grow in line with the size of the satellite fleets. By 2033, it could reach 4,000 tons per year, according to estimates presented at a workshop called Protecting Earth and Outer Space from the Disposal of Spacecraft and Debris, held in September at the University of Southampton in the UK.

Crucially, most of the ash these reentries produce will remain suspended in the thin midatmospheric air for decades, perhaps centuries. But acquiring precise data about satellite burn-up is nearly impossible, because it takes place in territory that is too high for meteorological balloons to measure and too low for sounding instruments aboard orbiting satellites. The closest scientists can get is remote sensing of a satellite’s final moments.

Changing chemistry

None of the researchers aboard the business jet turned scientific laboratory that took off from Easter Island in September got to see the moment when Cluster Salsa burst into a fireball above the deep, dark waters of the Pacific Ocean. Against the bright daylight, the fleeting explosion appeared about as vivid as a midday full moon. The windows of the plane, however, were covered with dark fabric (to prevent light reflected from inside from skewing the measurements), allowing only the camera lenses to peek out, says Jiří Šilha, CEO of Slovakia-based Astros Solutions, a space situational awareness company developing new techniques for space debris monitoring, which coordinated the observation campaign.

“We were about 300 kilometers [186 miles] away when it happened, far enough to avoid being hit by any remaining debris,” Šilha says. “It’s all very quick. The object reenters at a very high velocity, some 11 kilometers [seven miles] per second, and disintegrates 80 to 60 kilometers above Earth.”

Infographic describing the reentry of the first of four Cluster satellites

ESA

The instruments collected measurements of the disintegration in the visible and near-infrared part of the light spectrum, including observations with special filters for detecting chemical elements including aluminum, titanium, and sodium. The data will help scientists reconstruct the satellite breakup process, working out the altitudes at which the incineration takes place, the temperatures at which it occurs, and the nature and quantity of the chemical compounds it releases.

The dusty leftovers of Cluster Salsa have by now begun their leisurely drift through the mesosphere and stratosphere—the atmospheric layers stretching at altitudes from 31 to 53 miles and 12 to 31 miles, respectively. Throughout their decades-long descent, these ash particles will interact with atmospheric gases, causing mischief, says Connor Barker, a researcher in atmospheric chemical modeling at University College London and author of a satellite air pollution inventory published in early October in the journal Scientific Data.

Satellite bodies and rocket stages are mostly made of aluminum, which burns into aluminum oxide, or alumina—a white, powdery substance known to contribute to ozone depletion. Alumina also reflects sunlight, which means it could alter the temperature of those higher atmospheric layers.

“In our simulations, we start to see a warming over time of the upper layers of the atmosphere that has several knock-on effects for atmospheric composition,” Barker says. 

For example, some models suggest the warming could add moisture to the stratosphere. This could deplete the ozone layer and could cause further warming, which in turn would cause additional ozone depletion.

The extreme speed of reentering satellites also produces “a shockwave that compresses nitrogen in the atmosphere and makes it react with oxygen, producing nitrogen oxides,” says McDowell. Nitrogen oxides, too, damage atmospheric ozone. Currently, 50% of the ozone depletion caused by satellite burn-ups and rocket launches comes from the effects of nitrogen oxides. The soot that rockets produce alters the atmosphere’s thermal balance too.

In some ways, high-altitude atmospheric pollution is nothing new. Every year, about 18,000 tons of meteorites vaporize in the mesosphere. Even 10 years from now, if all planned mega-constellations get developed, the quantity of natural space rock burning up during its fall to Earth will exceed the amount of incinerated space junk by a factor of five.

That, however, is no comfort to researchers like McDowell and Barker. Meteorites contain only trace amounts of aluminum, and their atmospheric disintegration is faster, meaning they produce less nitrogen oxide, says Barker. 

“The amount of nitrogen oxides we’re getting [from satellite reentries and rocket launches] is already at the lower end of our yearly estimates of what the natural emissions of nitrogen oxides [from meteorites] are,” said Barker. “It’s certainly a concern, because we might soon be doing more to the atmosphere than naturally occurs.”

The annual amount of alumina from satellite reentries is also already approaching that arising from incinerated meteorites. Under current worst-case scenarios, the human-made contribution of this pollutant will be 10 times the amount from natural sources by 2040.

Impact on Earth?

What exactly does all this mean for life on Earth? At this stage, nobody’s certain. Studies focusing on various components of the air pollution cocktail from satellite and rocket activity are trickling in at a steady rate. 

Barker says computer modeling puts the current contribution of the space industry to overall ozone depletion at a minuscule 0.1%. But how much this share will grow 10, 20, or 50 years from now, nobody knows. There are way too many uncertainties in this equation, including the size of the particles—which will affect how long they will take to sink—and the ratio of particles to gaseous by-products.

“We have to make a decision, as a society, whether we prioritize reducing space traffic or reducing emissions,” Barker says. “A lot of these increased reentry rates are because the global community is doing a really good job of cleaning up low-Earth-orbit space debris. But we really need to understand the environmental impact of those emissions so we can decide what is the best way for humanity to deal with all these objects in space.”

A ground antenna captured radar data of some of the final moments of the ESA satellite Aeolus, as it reentered Earth’s atmosphere in July 2023.
FRAUNHOFER FHR

The disaster of 21st-century climate change was set in motion when humankind began burning fossil fuels in the mid-19th century. Similarly, it took 40 years for chlorofluorocarbons to eat a hole in Earth’s protective ozone layer. The contamination of Earth by so-called forever chemicals—per- and polyfluoroalkyl substances used in manufacturing nonstick coatings and firefighting foams—started in the 1950s. Researchers like McDowell are concerned the story may repeat yet again.

“Humanity’s activities in space have now gotten big enough that they are affecting the space environment in a similar way we have affected the oceans,” McDowell says. “The problem is that we’re making these changes without really understanding at what stage these changes will become concerning.”

Previous observation campaigns mostly analyzed the physical disintegration of reentering satellites. With the Cluster constellation, scientists hope to begin unraveling the chemical side of this elusive process. For researchers like Barker, that means finally getting data that could validate and further improve their models. The Cluster constellation will provide three more opportunities to fill in the blanks in this environmental puzzle when the siblings of Salsa reenter in 2025 and 2026. 

“The great thing with Cluster is that we have four satellites that are identical and that we know every detail about,” says Šilha. “It’s a scientist’s dream, because we can repeat the experiment and learn from every previous campaign.”

OpenAI’s “12 days of shipmas” tell us a lot about the AI arms race

This week, OpenAI announced what it calls the 12 days of OpenAI, or 12 days of shipmas. On December 4, CEO Sam Altman took to X to announce that the company would be “doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers.”

The company will livestream a new product announcement every morning for 12 business days in a row during December. It’s an impressive-sounding (and media-savvy) schedule, to be sure. But it also speaks to how tight the race among the big AI companies has become, and how much OpenAI is scrambling to build more revenue.

While it remains to be seen whether or not they’ve got AGI in a pear tree up their sleeve, and maybe putting aside whether or not Sam Altman is your true love, the man can ship. OpenAI has been a monster when it comes to actually getting new products out the door and into the hands of users. It’s hard for me to believe that it was just two years ago, almost exactly, that it released ChatGPT. That was a world-changing release, but it was also just one of many. The company has been on an absolute tear: Since 2022, it’s shipped DALL-E 2, DALL-E 3, GPT-4, ChatGPT Plus, the Realtime API, GPT-4o, an advanced voice mode, a preview version of a new model called o1, and a web search engine. And that’s just a partial list.

When it kicked off its 12-day shenanigans on Thursday, it was with an official rollout of OpenAI o1 and a new, $200-per-month service called ChatGPT Pro. On Friday morning, it followed that up with an announcement about a new model customization technique.

If the point you have taken away from all this is that OpenAI is very, very bad at naming things, you would be right. But! There’s another point to be made, which is that the stuff it is shipping is not coming out in a vacuum anymore, as it was two years ago. When DALL-E 2 shipped, OpenAI seemed a little like the only game in town. That was still mostly true when ChatGPT came out a few months later. But those releases sent Google into full-on freakout mode, issuing a “code red” to catch up. And then it was off to the races.

Now, there is a full-scale sprint happening between OpenAI, Google (which released its Gemini models to the public almost exactly a year ago), Anthropic (which was founded by former OpenAI employees), Meta, and, to some extent, Microsoft (OpenAI’s partner).

To wit: A little over a month ago, Anthropic unveiled a bananas demo of its chatbot Claude’s ability to use a computer. On Thursday (aka the first day of shipmas), Microsoft announced a version of Copilot that can follow along with you while you browse the web using AI vision. And ahead of what is widely predicted to be OpenAI’s biggest release of shipmas, its new video generation tool Sora, Google jumped ahead with its own generative video product, Veo (although it has not released it widely to the public yet).

Oh. There was also one other announcement from OpenAI, just ahead of shipmas, that seems relevant. On Wednesday, it announced a new partnership with defense contractor Anduril. Some of you may remember that OpenAI is the company that had once pledged not to let its technology be used for weapons development or the military. As James O’Donnell points out, “OpenAI’s policies banning military use of its technology unraveled in less than a year.”

This is notable in its own right, but also in crystallizing just how much OpenAI needs cold hard cash. See also: the new $200-per-month ChatGPT Pro tier. (And while recurring revenue from users will bring in some much-needed cash flow, there is a fortune in defense spending.) In addition, the company is looking into bringing paid advertisements to its services, according to its CFO Sarah Friar in an interview with the FT way back in … (checks watch) … Monday.

As has been oft-discussed, OpenAI is just incinerating piles of money. It’s on track to lose billions and billions of dollars for several more years. It has to start bringing in more revenue, lots more. And to do that it has to stay ahead of its rivals. And to do that, it has to get new, compelling products to market that are better in some way than what its competitors offer. Which means it has to ship. And monetize. And ship. And monetize. Because Google and Anthropic and Meta and a host of others are all going to keep coming out with new products, and new services too.

The arms race is on. And while the 12 days of shipmas may seem jolly, internally I bet it feels a lot more like Santa’s workshop on December 23. Pressure’s on. Time to deliver.

If someone forwarded you this edition of The Debrief, you can subscribe here. I appreciate your feedback on this newsletter. Drop me a line at mat.honan@technologyreview.com with any and all thoughts. And of course, I love tips.


Now read the rest of The Debrief

The News

• Bitcoin breaks $100,000 after Trump announces Paul Atkins as SEC pick. 

• China’s critical mineral ban is an opening salvo, not a kill shot. This is what it means for the US.

• OpenAI announced a deal with defense contractor Anduril. It’s a huge pivot.

• In an effort to combat sophisticated disinformation campaigns, the US Department of Defense is investing in deepfake detection.

• President-elect Trump names PayPal Mafia member, All-in Podcast host, and former Yammer CEO David Sacks as White House AI and crypto czar.

• An appeals court upheld the US’ TikTok ban. It’s likely going to the Supreme Court.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up Amanda Silverman, our features and investigations editor, about our big story on the way the war in Ukraine is reshaping the tech sector in eastern Europe.

Mat: Amanda, we published a story this week from Peter Guest that’s about the ways civilian tech is being repurposed for the war in Ukraine. I could be wrong, but ultimately I think it showed how warfare has truly changed thanks to inexpensive, easily built tech products. Is that right?

Amanda: I think that’s pretty spot on. Though maybe it’s more accurate to say less expensive, more easily built tech products. It’s all relative, right? Like, the retrofitted consumer drones that have been so prevalent in Ukraine over the past few years are vastly cheaper than traditional weapons systems, and what we’re seeing now is that lots of other tech initially developed for civilian purposes—like a type of scooter Pete reported on—is being sent to the front. And again, these are much, much cheaper than traditional weaponry. And they can be developed and shipped out really quickly.

The other thing Pete found was that this tech is being quickly reworked to respond to battlefield feedback—like that scooter has been customized to carry NATO standard-sized bullet boxes. I can’t imagine that happening in the old way of doing things.

Mat: It’s move fast and (hope not to) break things, but for war…. There is also this other, much scarier idea in there, which is that the war is changing, maybe has changed, Eastern Europe’s tech sector. What did Pete find is happening there?

Amanda: So a lot of the countries neighboring Ukraine are understandably pretty freaked out by what happened there and how the country had to turn on a dime to respond to the full-scale invasion by Russia. At the same time, Pete found that a lot of people in these countries, particularly in Latvia and particularly leading tech startups, have been inspired by how Ukrainians mobilized for the war and they’re trying to sort of get ahead of the potential enemy and get ready for a conflict within their borders. It’s not all scary, to be clear. It’s arguably somewhat thrilling to see all this innovation happening so quickly and to have some of the more burdensome red tape removed.

Mat: Okay so Russia’s neighbors are freaked out, as you say, understandably. Did anything about this story freak you out?

Amanda: Yeah, it’s impossible to ignore that there is a huge, scary risk here, too: as these companies develop new tech for war, they have an unprecedented opportunity to test it out in Ukraine without going through the traditional development and procurement process—which can be slow and laborious, sure, but also includes a lot of important testing, checks and balances, and more to prevent fraud and lots of other abuses and dangers. Like, Pete nods to how Clearview AI was deploying its tech to identify Russian war dead, which is scary in and of itself and also may violate the Geneva Conventions.

Mat: And then I’m curious, what do you look for when you are assigning a story like this? What caught your attention?

Amanda: I felt like I’d read quite a bit about the total mobilization of Ukrainian society (including a story from Pete in Wired). But I had sort of thought about all this activity as happening in a bit of a vacuum. Or at least in a limited sense, within Ukrainian borders. Of course, the US and our European allies are sending loads of money and loads of weapons but (at least as I understand it) they’re largely weapons we already have in our arsenals. So when Pete pitched us this story about how the war was reshaping the tech sector of Ukraine’s neighbors, particularly civilian tech, I was really intrigued.


The Recommendation

Several weeks ago, we had our e-bike stolen. Some guy with an angle grinder cut the lock. And as it turned out, our insurance didn’t cover the loss because the bike (like almost all e-bikes) had a top speed above 15 mph. As I came to learn, this is not uncommon. But you know what is common? E-bike theft. The police told us there is little chance of recovering our bike—in large part because we did not have a tracker attached to it. It was an all-around frustrating experience. We replaced the bike, and this time I’ve invested in one of these Elevation Labs waterproof mounts to affix an AirTag to the frame, hidden away below the seat. They have a whole line of mounts, a few of which are bike-specific. Much cheaper than a new bike. They make a good stocking stuffer.

The Download: satellites’ climate impact, and OpenAI’s frantic release schedule

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The world’s next big environmental problem could come from space

In September, a unique chase took place in the skies above Easter Island. From a rented jet, a team of researchers used cameras and scientific equipment to capture a satellite’s last moments as it fell from orbit and blazed into ash across the sky. Their hope was to gather priceless insights into the physical and chemical processes that occur when satellites burn up as they fall to Earth at the end of their missions.

This kind of study is growing more urgent. The number of satellites in the sky is rapidly rising—with a tenfold increase forecast by the end of the decade. Letting these satellites burn up in the atmosphere at the end of their lives helps keep the quantity of space junk to a minimum. But doing so deposits satellite ash in the Earth’s atmosphere. This metallic ash could potentially alter the climate, and we don’t yet know how serious the problem is likely to be. Read the full story.

—Tereza Pultarova

OpenAI’s “12 days of shipmas” tell us a lot about the AI arms race

Last week, OpenAI announced what it calls the 12 days of OpenAI, or 12 days of shipmas. On December 4, CEO Sam Altman took to X to announce that the company would be “doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers.”

The company will livestream new product announcements every morning for 12 business days in a row during December. It’s an impressive-sounding (and media-savvy) schedule, to be sure. But it also speaks to how tight the race between the big AI players has become, and how much OpenAI is scrambling to bring in more revenue. Read the full story.

—Mat Honan

This story originally appeared in The Debrief with Mat Honan, our weekly take on what’s really going on behind the biggest tech headlines. The story is subscriber-only so nab a subscription too, if you haven’t already! Or you can sign up to the newsletter for free to get the next edition in your inbox on Friday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The USDA is launching a national program to test milk for bird flu 
A full nine months after the current outbreak was first detected in dairy cows. (STAT)
+ The risk of a bird flu pandemic is rising. (MIT Technology Review)

2 Here’s what sets OpenAI’s new models apart 
They’re shifting from predicting to reasoning, which could be a huge deal. (The Atlantic $)
+ Regardless of whether capabilities are slowing, AI’s impact is only poised to grow. (Vox)
+ It may be comforting to dismiss AI as hype—but doing so misses the point. (Platformer)

3 A federal appeals court has upheld the US TikTok ban
But what happens next is anyone’s guess. (WSJ $)
+ Whether TikTok is banned or not, the actions against it have had a big impact. (MIT Technology Review)

4 Top internet sleuths are sitting out the hunt for the UnitedHealthcare CEO killer 
In fact, some are even criticizing people who are trying to help. (NBC)
+ Why so many Americans are at best indifferent to this particular murder. (New Yorker $)

5 Schools are attempting to stop teens from self-harming before they even try
The AI tools they’re adopting could be doing far more harm than good, though. (NYT $)

6 China is building its own Starlink system
The Qianfan constellation could eventually grow to nearly 14,000 satellites. (The Economist $)
+ The end of the ISS will usher in a more commercialized future in space. (The Verge)

7 This was an exciting year for superconductors
Superconductivity—the flow of electric current with no resistance—was discovered in three new materials. (Quanta $)

8 Meet the world’s least productive programmers 
It seems a small minority of disillusioned ‘ghost engineers’ do pretty much no work at all. (WP $)

9 Why people are turning their backs on dating apps
There’s a large degree of fatigue, and a feeling that they’re somehow detached from reality. (The Guardian)

10 Fake snacks are racking up millions of views on Instagram 🍿
There’s even a word for this trend: snackfishing. (Wired $)

Quote of the day

“I think Twitter and now X is like a crack addiction for him, though. He is clearly chasing a particular hit all the time and he has ended up self-radicalising himself with the platform he has purchased.”

—A former Twitter employee in London tells The Guardian how Elon Musk has changed since he purchased the platform.

The big story

How electricity could help tackle a surprising climate villain


January 2024

Cement is used to build everything from roads and buildings to dams and basement floors. But it’s also a climate threat. Cement production accounts for more than 7% of global carbon dioxide emissions—more than sectors like aviation, shipping, or landfills.

One solution to this climate catastrophe might be coursing through the pipes at Sublime Systems. The startup is developing an entirely new way to make cement. Instead of heating crushed-up rocks in lava-hot kilns, Sublime’s technology zaps them in water with electricity, kicking off chemical reactions that form the main ingredients in its cement.

But it faces huge challenges: competing with established industry players, and persuading builders to use its materials in the first place. Read the full story.

—Casey Crownhart

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Who will be the Lord of Misrule in your household this Christmas?
+ People’s Wikipedia browsing data always makes for interesting reading.
+ Wait, so we’ve been mispronouncing these words all along? (Apart from espresso, c’mon)
+ The Muppet Christmas Carol might just be the greatest festive film. 

How to use Sora, OpenAI’s new video generating tool

MIT Technology Review’s How To series helps you get things done

Today, OpenAI released its video generation model Sora to the public. The announcement comes on the fifth day of the company’s “shipmas” event, a 12-day marathon of tech releases and demos. Here’s what you should know—and how you can use the video model right now.

What is Sora?

Sora is a powerful AI video generation model that can create videos from text prompts, animate images, or remix videos in new styles. OpenAI first previewed the model back in February, but today is the first time the company is releasing it for broader use. 

What’s new about this release?

The core function of Sora—creating impressive videos with simple prompts—remains similar to what was previewed in February, but OpenAI worked to make the model faster and cheaper ahead of this wider release. There are a few new features, and two stand out.

One is called Storyboard. With it, you can create multiple AI-generated videos and then assemble them on a timeline, much the way you would with a conventional video editor like Adobe Premiere Pro. 

The second is a feed that functions as a sort of creative gallery. Users can post their Sora-generated videos to the feed, see the prompts behind certain videos, tweak them, and generally get inspiration, OpenAI says. 

How much can you do with it?

You can generate videos from text prompts, change the style of videos and change elements with a tool called Remix, and assemble multiple clips together with Storyboard. Sora also provides preset styles you can apply to your videos, like moody film noir or cardboard and papercraft, which gives a stop-motion feel. You can also trim and loop the videos that you make. 

Who can use it?

To generate videos with Sora, you’ll need to subscribe to one of OpenAI’s premium plans—either ChatGPT Plus ($20 per month) or ChatGPT Pro ($200 per month). Both subscriptions include access to other OpenAI products as well. Users with ChatGPT Plus can generate videos as long as five seconds with a resolution up to 720p. This plan lets you create 50 videos per month. 

Users with a ChatGPT Pro subscription can generate longer, higher-resolution videos, capped at a resolution of 1080p and a duration of 20 seconds. They can also have Sora generate up to five variations of a video at once from a single prompt, making it possible to review options faster. Pro users are limited to 500 videos per month but can also create unlimited “relaxed” videos, which are not generated in the moment but rather queued for when site traffic is low. 

Both subscription levels make it possible to create videos in three aspect ratios: vertical, horizontal, and square. 

If you don’t have a subscription, you’ll be limited to viewing the feed of Sora-generated videos. 
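To put the two plans side by side, here are the limits described above collected into a small Python sketch. The structure and field names are my own invention, purely illustrative; Sora has no public API of this shape.

```python
# The Sora plan limits stated above, collected for comparison.
# Illustrative only: these names are invented, not an OpenAI API.
SORA_PLANS = {
    "ChatGPT Plus": {
        "price_usd_per_month": 20,
        "max_duration_s": 5,
        "max_resolution": "720p",
        "videos_per_month": 50,
        "variations_per_prompt": None,  # not specified in the article
        "relaxed_queue": False,
    },
    "ChatGPT Pro": {
        "price_usd_per_month": 200,
        "max_duration_s": 20,
        "max_resolution": "1080p",
        "videos_per_month": 500,
        "variations_per_prompt": 5,
        "relaxed_queue": True,  # unlimited off-peak "relaxed" generations
    },
}

def plan_allows(plan: str, duration_s: int) -> bool:
    """Return True if a clip of the given length fits the plan's duration cap."""
    return duration_s <= SORA_PLANS[plan]["max_duration_s"]
```

So a 10-second clip, for example, would require the Pro tier, since it exceeds the Plus plan's five-second cap.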

OpenAI is starting its global launch of Sora today, but it will take longer to launch in “most of Europe,” the company said. 


Where can I access it?

OpenAI has broken Sora out from ChatGPT. To access it, go to Sora.com and log in with your ChatGPT Plus or Pro account. (MIT Technology Review was unable to access the site at press time—a note on the site indicated that signups were paused because they were “currently experiencing heavy traffic.”) 

How’d we get here?

A number of things have happened since OpenAI first unveiled Sora back in February. Other tech companies have launched video generation tools of their own, like Meta’s Movie Gen and Google’s Veo. There’s also been plenty of backlash. For example, artists who had early access to experiment with Sora leaked the tool to protest the way OpenAI trained it on artists’ work without compensation. 

What’s next?

As with any new release of a model, it remains to be seen what steps OpenAI has taken to keep Sora from being used for nefarious, illegal, or unethical purposes, like the creation of deepfakes. On the question of moderation and safety, an OpenAI employee said they “might not get it perfect on day one.”

Another looming question is how much computing capacity and energy Sora will use every time it creates a video. Generating a video takes much more computing time, and therefore energy, than generating a typical text response in a tool like ChatGPT. The AI boom is already an energy hog, presenting a challenge to tech companies trying to rein in their emissions, and the wide availability of Sora and other video models like it could make that problem worse.