A woman in the US is the third person to receive a gene-edited pig kidney

Towana Looney, a 53-year-old woman from Alabama, has become the third living person to receive a kidney transplant from a gene-edited pig. 

Looney, who donated one of her kidneys to her mother back in 1999, developed kidney failure several years later following a pregnancy complication that caused high blood pressure. She started dialysis treatment in December of 2016 and was put on a waiting list for a kidney transplant soon after, in early 2017. 

But it was difficult to find a match. So Looney’s doctors recommended the experimental pig organ as an alternative. After eight years on the waiting list, Looney was authorized to receive the kidney under the US Food and Drug Administration’s expanded access program, which allows people with serious or life-threatening conditions to try experimental treatments.

The pig in question was developed by Revivicor, a United Therapeutics company. The company’s technique involves making 10 gene edits to a pig cell. The edits are made to prevent too much organ growth, curb inflammation, and, importantly, stop the recipient’s immune system from rejecting the organ. The edited pig cell is then placed into a pig egg cell that has had its nucleus removed, and the egg is transferred to the uterus of a sow, which eventually gives birth to a gene-edited piglet.


In theory, once the piglet has grown, its organs can be used for human transplantation. Pig organs are similar in size to human ones, after all. A few years ago, David Bennett Sr. became the first person to receive a heart transplant from such a pig. He died two months after the operation, and the heart was later found to have been infected with a pig virus.

Richard Slayman was the first person to get a gene-edited pig kidney, which he received in early 2024. He died two months after his surgery, although the hospital treating him said in a statement that it had “no indication that it was the result of his recent transplant.” In April, Lisa Pisano was reported to be the second person to receive such an organ. Pisano also received a heart pump alongside her kidney transplant. Her kidney failed because of an inadequate blood supply and was removed the following month. She died in July.

Looney received her pig kidney during a seven-hour operation that took place at NYU Langone Health in New York City on November 25. The surgery was led by Jayme Locke of the US Health Resources & Services Administration and Robert Montgomery of the NYU Langone Transplant Institute.

Looney was discharged from the hospital 11 days after her surgery, to an apartment in New York City. She’ll stay in New York for another three months so she can check in with doctors at the hospital for evaluations.

“It’s a blessing,” Looney said in a statement. “I feel like I’ve been given another chance at life. I cannot wait to be able to travel again and spend more quality time with my family and grandchildren.”

Looney’s doctors are hopeful that her kidney will last longer than those of her predecessors. For a start, Looney was in better health to begin with—she had chronic kidney disease and required dialysis, but unlike previous recipients, she was not close to death, Montgomery said in a briefing. He and his colleagues plan to start clinical trials within the next year.

There is a huge unmet need for organs. In the US alone, more than 100,000 people are waiting for one, and 17 people on the waiting list die every day. Researchers hope that gene-edited animals might provide a new source of organs for such individuals.

Revivicor isn’t the only company working on this. Rival company eGenesis, which has a different approach to gene editing, has used CRISPR to create pigs with around 70 gene edits.

“Transplant is one of the few therapies that can cure a complex disease overnight, yet there are too few organs to provide a cure for all in need,” Locke said in a statement. “The thought that we may now have a solution to the organ shortage crisis for others who have languished on our waiting lists invokes the most welcome of feelings: pure joy!”

Today, Looney is the only person living with a pig organ. “I am full of energy. I got an appetite I’ve never had in eight years,” she said at a briefing. “I can put my hand on this kidney and feel it buzzing.”

This story has been updated with additional information after a press briefing.

Google’s big week was a flex for the power of big tech

Last week, this space was all about OpenAI’s 12 days of shipmas. This week, the spotlight is on Google, which has been speeding toward the holiday by shipping or announcing its own flurry of products and updates. Taken together, the announcements are pretty monumental, not just for a single company but for what they say about the power of the technology industry—even if they also stir a personal wish that we could do more to harness that power and put it to nobler uses.

To start, last week Google introduced Veo, a new video generation model, and Imagen 3, a new version of its image generation model.

Then on Monday, Google announced a breakthrough in quantum computing with its Willow chip. The company claims the new machine is capable of a “standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years.” You may recall that MIT Technology Review covered some of the Willow work after researchers posted a paper preprint in August. But this week marked the big media splash. It was a stunning update that had Silicon Valley abuzz. (Seriously, I have never gotten so many quantum computing pitches as in the past few days.)

Google followed this on Wednesday with even more gifts: a Gemini 2 release, a Project Astra update, and news about forthcoming agents, including Mariner, which can browse the web, and Jules, a coding assistant.

First: Gemini 2. It’s impressive, with a lot of performance updates. But I have frankly grown a little inured to language-model performance updates, to the point of apathy. Or at least near-apathy. I want to see them do something.

So for me, the cooler update was second on the list: Project Astra, which comes across like an AI from a futuristic movie set. Google first showed a demo of Astra back in May at its developer conference, and it was the talk of the show. But, since demos offer companies chances to show off products at their most polished, it can be hard to tell what’s real and what’s just staged for the audience. Still, when my colleague Will Douglas Heaven recently got to try it out himself, live and unscripted, it largely lived up to the hype. Although he found it glitchy, he noted that those glitches can be easily corrected. He called the experience “stunning” and said it could be generative AI’s killer app.

On top of all this, Will notes that this week Demis Hassabis, CEO of Google DeepMind (the company’s AI division), was in Sweden to receive his Nobel Prize. And what did you do with your week?

Making all this even more impressive, the advances represented in Willow, Gemini, Astra, and Veo are ones that just a few years ago many, many people would have said were not possible—or at least not in this timeframe. 

A popular knock on the tech industry is that it has a tendency to over-promise and under-deliver. The phone in your pocket gives the lie to this. So too do the rides I took in Waymo’s self-driving cars this week. (Both of which arrived faster than Uber’s estimated wait time. And honestly it’s not been that long since the mere ability to summon an Uber was cool!) And while quantum has a long way to go, the Willow announcement seems like an exceptional advance; if not a tipping point exactly, then at least a real waypoint on a long road. (For what it’s worth, I’m still not totally sold on chatbots. They do offer novel ways of interacting with computers, and have revolutionized information retrieval. But whether they are beneficial for humanity—especially given energy debts, the use of copyrighted material in their training data, their perhaps insurmountable tendency to hallucinate, etc.—is debatable, and certainly is being debated. But I’m pretty floored by this week’s announcements from Google, as well as OpenAI—full stop.)

And for all the necessary and overdue talk about reining in the power of Big Tech, the ability to hit significant new milestones on so many different fronts all at once is something that only a company with the resources of a Google (or Apple or Microsoft or Amazon or Meta or Baidu or whichever other behemoth) can do. 

All this said, I don’t want us to buy more gadgets or spend more time looking at our screens. I don’t want us to become more isolated physically, socializing with others only via our electronic devices. I don’t want us to fill the air with carbon or our soil with e-waste. I do not think these things should be the price we pay to drive progress forward. It’s indisputable that humanity would be better served if more of the tech industry was focused on ending poverty and hunger and disease and war.

Yet every once in a while, in the ever-rising tide of hype and nonsense that pumps out of Silicon Valley, epitomized by the AI gold rush of the past couple of years, there are moments that make me sit back in awe and amazement at what people can achieve, and in which I become hopeful about our ability to actually solve our larger problems—if only because we can solve so many other dumber, but incredibly complicated ones. This week was one of those times for me. 


Now read the rest of The Debrief

The News

• Robotaxi adoption is hitting a tipping point

• But also, GM is shutting down its Cruise robotaxi division.

• Here’s how to use OpenAI’s new video editing tool Sora.

• Bluesky has an impersonator problem.

• The AI hype machine is coming under government scrutiny.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up James O’Donnell, who covers AI and hardware, about his story on how the startup defense contractor Anduril is bringing AI to the battlefield.

Mat: James, you got a pretty up close look at something most people probably haven’t even thought about yet, which is how the future of AI-assisted warfare might look. What did you learn on that trip that you think will surprise people?

James: Two things stand out. One, I think people would be surprised by the gulf between how technology has developed for the last 15 years for consumers versus the military. For consumers, we’ve gotten phones, computers, smart TVs and other technologies that generally do a pretty good job of talking to each other and sharing our data, even though they’re made by dozens of different manufacturers. It’s called the “internet of things.” In the military, technology has developed in exactly the opposite way, and it’s putting them in a crisis. They have stealth aircraft all over the world, but communicating about a drone threat might be done with PowerPoint slides and a chat service reminiscent of AOL Instant Messenger.

The second is just how much the Pentagon is now looking to AI to change all of this. New initiatives have surged in the current AI boom. They are spending on training new AI models to better detect threats, autonomous fighter jets, and intelligence platforms that use AI to find pertinent information. What I saw at Anduril’s test site in California is also a key piece of that: using AI to connect to and control lots of different pieces of hardware, like drones, cameras, and submarines, from a single platform. The amount being invested in AI is much smaller than for aircraft carriers and jets, but it’s growing.

Mat: I was talking with a different startup defense contractor recently, who was telling me about the difficulty of getting all these increasingly autonomous devices on the battlefield talking to each other in a coordinated way. Like Anduril, he was making the case that this has to be done at the edge, and that there is too much happening for human decision-making to process. Do you think that’s true? Why is that?

James: So many in the defense space have pointed to the war in Ukraine as a sign that warfare is changing. Drones are cheaper and more capable than they ever were in the wars in the Middle East. It’s why the Pentagon is spending $1 billion on the Replicator initiative to field thousands of cheap drones by 2025. It’s also looking to field more underwater drones as it plans for scenarios in which China may invade Taiwan.

Once you get these systems, though, the problem is having all the devices communicate with one another securely. You need to play Air Traffic Control at the same time that you’re pulling in satellite imagery and intelligence information, all in environments where communication links are vulnerable to attacks.

Mat: I guess I still have a mental image of a control room somewhere, like you might see in Dr. Strangelove or War Games (or Star Wars for that matter) with a handful of humans directing things. Are those days over?

James: I think a couple things will change. One, a single person in that control room will be responsible for a lot more than they are now. Rather than running just one camera or drone system manually, they’ll command software that does it for them, for lots of different devices. The idea that the defense tech sector is pushing is to take them out of the mundane tasks—rotating a camera around to look for threats—and instead put them in the driver’s seat for decisions that only humans, not machines, can make.

Mat: I know that critics of the industry push back on the idea of AI being empowered to make battlefield decisions, particularly when it comes to life and death, but it seems to me that we are increasingly creeping toward that and it seems perhaps inevitable. What’s your sense?

James: This is painting with broad strokes, but I think the debates about military AI fall along similar lines to what we see for autonomous vehicles. You have proponents saying that driving is not a thing humans are particularly good at, and when they make mistakes, it takes lives. Others might agree conceptually, but debate at what point it’s appropriate to fully adopt fallible self-driving technology in the real world. How much better does it have to be than humans?

In the military, the stakes are higher. There’s no question that AI is increasingly being used to sort through and surface information to decision-makers. It’s finding patterns in data, translating information, and identifying possible threats. Proponents are outspoken that that will make warfare more precise and reduce casualties. What critics are concerned about is how far across that decision-making pipeline AI is going, and how much human oversight there is.

I think where it leaves me is wanting transparency. When AI systems make mistakes, just like when human military commanders make mistakes, I think we deserve to know, and that transparency does not have to compromise national security. It took years for reporter Azmat Khan to piece together the mistakes made during drone strikes in the Middle East, because agencies were not forthcoming. That obfuscation absolutely cannot be the norm as we enter the age of military AI.

Mat: Finally, did you have a chance to hit an In-N-Out burger while you were in California?

James: Normally In-N-Out is a requisite stop for me in California, but ahead of my trip I heard lots of good things about the burgers at The Apple Pan in West LA, so I went there. To be honest, the fries were better, but for the burger I have to hand it to In-N-Out.


The Recommendation

A few weeks ago I suggested Ca7riel and Paco Amoroso’s appearance on NPR Tiny Desk. At the risk of this space becoming a Tiny Desk stan account, I’m back again with another. I was completely floored by Doechii’s Tiny Desk appearance last week. It’s so full of talent and joy and style and power. I came away completely inspired and have basically had her music on repeat on Spotify ever since. If you are already a fan of her recorded music, you will love her live. If she’s new to you, well, you’re welcome. Go check it out. Oh, and don’t worry: I’m not planning to recommend Billie Eilish’s new Tiny Desk concert in next week’s newsletter. Mostly because I’m doing so now.

How Silicon Valley is disrupting democracy

The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming “techlash,” a revolt against Silicon Valley’s rich and powerful fueled by the public’s growing realization that these “sovereigns of cyberspace” weren’t the benevolent bright-future bringers they claimed to be. 

While Wooldridge didn’t say precisely when this techlash would arrive, it’s clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact happen—and is arguably still happening. Say what you will about the legions of Elon Musk acolytes on X, but if an industry and its executives can bring together the likes of Elizabeth Warren and Lindsey Graham in shared condemnation, it’s definitely not winning many popularity contests.

To be clear, there have always been critics of Silicon Valley’s very real excesses and abuses. But for the better part of the last two decades, many of those voices of dissent were either written off as hopeless Luddites and haters of progress or drowned out by a louder and far more numerous group of techno-optimists. Today, those same critics (along with many new ones) have entered the fray once more, rearmed with popular Substacks, media columns, and—increasingly—book deals.

Two of the more recent additions to the flourishing techlash genre—Rob Lalka’s The Venture Alchemists: How Big Tech Turned Profits into Power and Marietje Schaake’s The Tech Coup: How to Save Democracy from Silicon Valley—serve as excellent reminders of why it started in the first place. Together, the books chronicle the rise of an industry that is increasingly using its unprecedented wealth and power to undermine democracy, and they outline what we can do to start taking some of that power back.

Lalka is a business professor at Tulane University, and The Venture Alchemists focuses on how a small group of entrepreneurs managed to transmute a handful of novel ideas and big bets into unprecedented wealth and influence. While the names of these demigods of disruption will likely be familiar to anyone with an internet connection and a passing interest in Silicon Valley, Lalka also begins his book with a page featuring their nine (mostly) young, (mostly) smiling faces. 

There are photos of the famous founders Mark Zuckerberg, Larry Page, and Sergey Brin; the VC funders Keith Rabois, Peter Thiel, and David Sacks; and a more motley trio made up of the disgraced former Uber CEO Travis Kalanick, the ardent eugenicist and reputed father of Silicon Valley Bill Shockley (who, it should be noted, died in 1989), and a former VC and the future vice president of the United States, JD Vance.

To his credit, Lalka takes this medley of tech titans and uses their origin stories and interrelationships to explain how the so-called Silicon Valley mindset (mind virus?) became not just a fixture in California’s Santa Clara County but also the preeminent way of thinking about success and innovation across America.

This approach to doing business, usually cloaked in a barrage of cringey innovation-speak—disrupt or be disrupted, move fast and break things, better to ask for forgiveness than permission—can often mask a darker, more authoritarian ethos, according to Lalka. 

One of the nine entrepreneurs in the book, Peter Thiel, has written that “I no longer believe that freedom and democracy are compatible” and that “competition [in business] is for losers.” Many of the others think that all technological progress is inherently good and should be pursued at any cost and for its own sake. A few also believe that privacy is an antiquated concept—even an illusion—and that their companies should be free to hoard and profit off our personal data. Most of all, though, Lalka argues, these men believe that their newfound power should be unconstrained by governments, regulators, or anyone else who might have the gall to impose some limitations.

Where exactly did these beliefs come from? Lalka points to people like the late free-market economist Milton Friedman, who famously asserted that a company’s only social responsibility is to increase profits, as well as to Ayn Rand, the author, philosopher, and hero to misunderstood teenage boys everywhere who tried to turn selfishness into a virtue. 

The Venture Alchemists: How Big Tech Turned Profits into Power, by Rob Lalka (Columbia Business School Publishing, 2024)

It’s a somewhat reductive and not altogether original explanation of Silicon Valley’s libertarian inclinations. What ultimately matters, though, is that many of these “values” were subsequently encoded into the DNA of the companies these men founded and funded—companies that today shape how we communicate with one another, how we share and consume news, and even how we think about our place in the world. 

The Venture Alchemists is strongest when it’s describing the early-stage antics and on-campus controversies that shaped these young entrepreneurs or, in many cases, simply reveal who they’ve always been. Lalka is a thorough and tenacious researcher, as the book’s 135 pages of endnotes suggest. And while nearly all these stories have been told before in other books and articles, he still manages to provide new perspectives and insights from sources like college newspapers and leaked documents. 

One thing the book is particularly effective at is deflating the myth that these entrepreneurs were somehow gifted seers of (and investors in) a future the rest of us simply couldn’t comprehend or predict. 

Sure, someone like Thiel made what turned out to be a savvy investment in Facebook early on, but he also made some very costly mistakes with that stake. As Lalka points out, Thiel’s Founders Fund dumped tens of millions of shares shortly after Facebook went public, and Thiel himself went from owning 2.5% of the company in 2012 to 0.000004% less than a decade later (around the same time Facebook hit its trillion-dollar valuation). Throw in his objectively terrible wagers in 2008, 2009, and beyond, when he effectively shorted what turned out to be one of the longest bull markets in world history, and you get the impression he’s less oracle and more ideologue who happened to take some big risks that paid off. 

One of Lalka’s favorite mantras throughout The Venture Alchemists is that “words matter.” Indeed, he uses a lot of these entrepreneurs’ own words to expose their hypocrisy, bullying, juvenile contrarianism, casual racism, and—yes—outright greed and self-interest. It is not a flattering picture, to say the least. 

Unfortunately, instead of simply letting those words and deeds speak for themselves, Lalka often feels the need to interject with his own, frequently enjoining readers against finger-pointing or judging these men too harshly even after he’s chronicled their many transgressions. Whether this is done to try to convey some sense of objectivity or simply to remind readers that these entrepreneurs are complex and complicated men making difficult decisions, it doesn’t work. At all.

For one thing, Lalka clearly has his own strong opinions about the behavior of these entrepreneurs—opinions he doesn’t try to disguise. At one point in the book he suggests that Kalanick’s alpha-male, dominance-at-any-cost approach to running Uber is “almost, but not quite” like rape, which is maybe not the comparison you’d make if you wanted to seem like an arbiter of impartiality. And if he truly wants readers to come to a different conclusion about these men, he certainly doesn’t provide many reasons for doing so. Simply telling us to “judge less, and discern more” seems worse than a cop-out. It comes across as “almost, but not quite” like victim-blaming—as if we’re somehow just as culpable as they are for using their platforms and buying into their self-mythologizing. 

Equally frustrating is the crescendo of empty platitudes that ends the book. “The technologies of the future must be pursued thoughtfully, ethically, and cautiously,” Lalka says after spending 313 pages showing readers how these entrepreneurs have willfully ignored all three adverbs. What they’ve built instead are massive wealth-creation machines that divide, distract, and spy on us. Maybe it’s just me, but that kind of behavior seems ripe not only for judgment, but also for action.

So what exactly do you do with a group of men seemingly incapable of serious self-reflection—men who believe unequivocally in their own greatness and who are comfortable making decisions on behalf of hundreds of millions of people who did not elect them, and who do not necessarily share their values?

You regulate them, of course. Or at least you regulate the companies they run and fund. In Marietje Schaake’s The Tech Coup, readers are presented with a road map for how such regulation might take shape, along with an eye-opening account of just how much power has already been ceded to these corporations over the past 20 years.

There are companies like NSO Group, whose powerful Pegasus spyware tool has been sold to autocrats, who have in turn used it to crack down on dissent and monitor their critics. Billionaires are now effectively making national security decisions on behalf of the United States and using their social media companies to push right-wing agitprop and conspiracy theories, as Musk does with his Starlink satellites and X. Ride-sharing companies use their own apps as propaganda tools and funnel hundreds of millions of dollars into ballot initiatives to undo laws they don’t like. The list goes on and on. According to Schaake, this outsize and largely unaccountable power is changing the fundamental ways that democracy works in the United States. 

“In many ways, Silicon Valley has become the antithesis of what its early pioneers set out to be: from dismissing government to literally taking on equivalent functions; from lauding freedom of speech to becoming curators and speech regulators; and from criticizing government overreach and abuse to accelerating it through spyware tools and opaque algorithms,” she writes.

Schaake, who’s a former member of the European Parliament and the current international policy director at Stanford University’s Cyber Policy Center, is in many ways the perfect chronicler of Big Tech’s power grab. Beyond her clear expertise in the realms of governance and technology, she’s also Dutch, which makes her immune to the distinctly American disease that seems to equate extreme wealth, and the power that comes with it, with virtue and intelligence. 

This resistance to the various reality-distortion fields emanating from Silicon Valley plays a pivotal role in her ability to see through the many justifications and self-serving solutions that come from tech leaders themselves. Schaake understands, for instance, that when someone like OpenAI’s Sam Altman gets in front of Congress and begs for AI regulation, what he’s really doing is asking Congress to create a kind of regulatory moat between his company and any other startups that might threaten it, not acting out of some genuine desire for accountability or governmental guardrails. 

The Tech Coup: How to Save Democracy from Silicon Valley, by Marietje Schaake (Princeton University Press, 2024)

Like Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Schaake believes that “the digital” should “live within democracy’s house”—that is, technologies should be developed within the framework of democracy, not the other way around. To accomplish this realignment, she offers a range of solutions, from banning what she sees as clearly antidemocratic technologies (like face-recognition software and other spyware tools) to creating independent teams of expert advisors to members of Congress (who are often clearly out of their depth when attempting to understand technologies and business models). 

Predictably, all this renewed interest in regulation has inspired its own backlash in recent years—a kind of “tech revanchism,” to borrow a phrase from the journalist James Hennessy. In addition to familiar attacks, such as trying to paint supporters of the techlash as somehow being antitechnology (they’re not), companies are also spending massive amounts of money to bolster their lobbying efforts. 

Some venture capitalists, like LinkedIn cofounder Reid Hoffman, who made big donations to the Kamala Harris presidential campaign, wanted to evict Federal Trade Commission chair Lina Khan, claiming that regulation is killing innovation (it isn’t) and removing the incentives to start a company (it’s not). And then of course there’s Musk, who now seems to be in a league of his own when it comes to how much influence he may exert over Donald Trump and the government that his companies have valuable contracts with.

What all these claims of victimization and subsequent efforts to buy their way out of regulatory oversight miss is that there’s actually a vast and fertile middle ground between simple techno-optimism and techno-skepticism. As the New Yorker contributor Cal Newport and others have noted, it’s entirely possible to support innovations that can significantly improve our lives without accepting that every popular invention is good or inevitable.

Regulating Big Tech will be a crucial part of leveling the playing field and ensuring that the basic duties of a democracy can be fulfilled. But as both Lalka and Schaake suggest, another battle may prove even more difficult and contentious. This one involves undoing the flawed logic and cynical, self-serving philosophies that have led us to the point where we are now. 

What if we admitted that constant bacchanals of disruption are in fact not all that good for our planet or our brains? What if, instead of “creative destruction,” we started fetishizing stability, and in lieu of putting “dents in the universe,” we refocused our efforts on fixing what’s already broken? What if—and hear me out—we admitted that technology might not be the solution to every problem we face as a society, and that while innovation and technological change can undoubtedly yield societal benefits, they don’t have to be the only measures of economic success and quality of life? 

When ideas like these start to sound less like radical concepts and more like common sense, we’ll know the techlash has finally achieved something truly revolutionary. 

Bryan Gardiner is a writer based in Oakland, California.

AI’s emissions are about to skyrocket even further

It’s no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much. 

A new paper, from a team at the Harvard T.H. Chan School of Public Health, examined 2,132 data centers operating in the United States (78% of all facilities in the country). These facilities—essentially buildings filled to the brim with rows of servers—are where AI models get trained, and they also get “pinged” every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool. 

Since 2018, carbon emissions from data centers in the US have tripled. For the 12 months ending August 2024, data centers were responsible for 105 million metric tons of CO2, accounting for 2.18% of national emissions (for comparison, domestic commercial airlines are responsible for about 131 million metric tons). About 4.59% of all the energy used in the US goes toward data centers, a figure that’s doubled since 2018.
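For context, the arithmetic behind those percentages is easy to check. The sketch below uses only the figures quoted in this article; the implied national total and the rounding are mine, not numbers from the paper.

```python
# Back-of-the-envelope check of the data-center figures quoted above.
# Inputs are the numbers reported in this article; derived values are approximate.

dc_emissions_mt = 105.0        # data-center CO2, 12 months ending August 2024, million metric tons
dc_share_of_national = 0.0218  # data centers' share of US emissions (2.18%)
airline_emissions_mt = 131.0   # domestic commercial airlines, million metric tons

# National total implied by the two data-center figures: roughly 4,800 million metric tons
implied_national_mt = dc_emissions_mt / dc_share_of_national
print(f"Implied US emissions: ~{implied_national_mt:,.0f} million metric tons of CO2")

# Data centers already emit about 80% as much as domestic aviation
print(f"Data centers vs. domestic airlines: {dc_emissions_mt / airline_emissions_mt:.0%}")
```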

It’s difficult to put a number on how much AI in particular, which has been booming since ChatGPT launched in November 2022, is responsible for this surge. That’s because data centers process lots of different types of data—in addition to training or pinging AI models, they do everything from hosting websites to storing your photos in the cloud. However, the researchers say, AI’s share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

“It’s a pretty big surge,” says Eric Gimon, a senior fellow at the think tank Energy Innovation, who was not involved in the research. “There’s a lot of breathless analysis about how quickly this exponential growth could go. But it’s still early days for the business in terms of figuring out efficiencies, or different kinds of chips.”

Notably, the sources for all this power are particularly “dirty.” Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The paper, which was published on arXiv and has not yet been peer-reviewed, found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. 

There are causes other than simply being located in coal country, says Falco Bargagli-Stoffi, an author of the paper. “Dirtier energy is available throughout the entire day,” he says, and plenty of data centers require that to maintain peak operation 24-7. “Renewable energy, like wind or solar, might not be as available.” Political or tax incentives, and local pushback, can also affect where data centers get built.  

One key shift in AI right now means that the field’s emissions are soon likely to skyrocket. AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these “multimodal” models have been stuck in the research phase, but that’s changing. 

OpenAI released its video generation model Sora to the public on December 9, and its website has been so flooded with traffic from people eager to test it out that it is still not functioning properly. Competing models, like Veo from Google and Movie Gen from Meta, have still not been released publicly, but if those companies follow OpenAI’s lead as they have in the past, they might be soon. Music generation models from Suno and Udio are growing (despite lawsuits), and Nvidia released its own audio generator last month. Google is working on its Astra project, which will be a video-AI companion that can converse with you about your surroundings in real time. 

“As we scale up to images and video, the data sizes increase exponentially,” says Gianluca Guidi, a PhD student in artificial intelligence at the University of Pisa and IMT Lucca, who is the paper’s lead author. Combine that with wider adoption, he says, and emissions will soon jump.

One of the goals of the researchers was to build a more reliable way to get snapshots of just how much energy data centers are using. That’s been a more complicated task than you might expect, given that the data is dispersed across a number of sources and agencies. They’ve now built a portal that shows data center emissions across the country. The long-term goal of the data pipeline is to inform future regulatory efforts to curb emissions from data centers, which are predicted to grow enormously in the coming years. 

“There’s going to be increased pressure, between the environmental and sustainability-conscious community and Big Tech,” says Francesca Dominici, director of the Harvard Data Science Initiative and another coauthor. “But my prediction is that there is not going to be regulation. Not in the next four years.”

China banned exports of a few rare minerals to the US. Things could get messier.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I’ve thought more about gallium and germanium over the last week than I ever have before (and probably more than anyone ever should).

As you may already know, China banned the export of those materials to the US last week and placed restrictions on others. The move is just the latest drama in escalating trade tensions between the two countries.

While the new export bans could have significant economic consequences, this might be only the beginning. China is a powerhouse, and not just in those niche materials—it’s also a juggernaut in clean energy, and particularly in battery supply chains. So what comes next could have significant consequences for EVs and climate action more broadly.

A super-quick catch-up on the news here: The Biden administration recently restricted exports of chips and other technology that could help China develop advanced semiconductors. Also, president-elect Donald Trump has floated all sorts of tariffs on Chinese goods.

Apparently in response to some or all of this, China banned the export of gallium, germanium, antimony, and superhard materials used in manufacturing, and said it may further restrict graphite sales. The materials are all used for both military and civilian technologies, and significantly, gallium and germanium are used in semiconductors.

It’s a ramp-up from last July, when China placed restrictions on gallium and germanium exports after enduring years of restrictions by the US and its Western allies on cutting-edge technology. (For more on the details of China’s most recent move, including potential economic impacts, check out the full coverage from my colleague James Temple.)

What struck me about this news is that this could be only the beginning, because China is central to many of the supply chains snaking around the globe.

This is no accident—take gallium as an example. The metal is a by-product of aluminum production from bauxite ore. China, as the world’s largest aluminum producer, certainly has a leg up to be a major player in the niche material. But other countries could produce gallium, and I’m sure more will. China has a head start because it invested in gallium separation and refining technologies.

A similar situation exists in the battery world. China is a dominant player all over the supply chain for lithium-ion batteries—not because it happens to have the right metals on its shores (it doesn’t), but because it’s invested in extraction and processing technologies.

Take lithium, a crucial component in those batteries. China has around 8% of the world’s lithium reserves but processes about 58% of the world’s lithium supply. The situation is similar for other key battery metals. Nickel that’s mined in Indonesia goes to China for processing, and the same goes for cobalt from the Democratic Republic of Congo.

Over the past two decades, China has thrown money, resources, and policy behind electric vehicles. Now China leads the world in EV registrations, many of the largest EV makers are Chinese companies, and the country is home to a huge chunk of the supply chain for the vehicles and their batteries.

As the world begins a shift toward technologies like EVs, it’s becoming clear just how dominant China’s position is in many of the materials crucial to building that tech.

Lithium prices have dropped by 80% over the past year, and while part of the reason is a slowdown in EV demand, another part is that China is oversupplying lithium, according to US officials. By flooding the market and causing prices to drop, China could make it tougher for other lithium processors to justify sticking around in the business.

The new graphite controls from China could wind up affecting battery markets, too. Graphite is crucial for lithium-ion batteries, which use the material in their anodes. It’s still not clear whether the new bans will affect battery materials or just higher-purity material that’s used in military applications, according to reporting from Carbon Brief.

To this point, China hasn’t specifically banned exports of key battery materials, and it’s not clear exactly how far the country would go. Global trade politics are delicate and complicated, and any move that China makes in battery supply chains could wind up coming back to hurt the country’s economy. 

But we could be entering into a new era of material politics. Further restrictions on graphite, or moves that affect lithium, nickel, or copper, could have major ripple effects around the world for climate technology, because batteries are key not only for electric vehicles, but increasingly for our power grids. 

While it’s clear that tensions are escalating, it’s still unclear what’s going to happen next. The vibes, at best, are uncertain, and this sort of uncertainty is exactly why so many folks in technology are so focused on how to diversify global supply chains. Otherwise, we may find out just how tangled those supply chains really are, and what happens when you yank on threads that run through the center of them. 


Now read the rest of The Spark

Related reading

Check out James Temple’s breakdown of what China’s ban on some rare minerals could mean for the US.

Last July, China placed restrictions on some of these materials—read this story from Zeyi Yang, who explains what the moves and future ones might mean for semiconductor technology.

As technology shifts, so too do the materials we need to build it. The result: a never-ending effort to build out mining, processing, and recycling infrastructure, as I covered in a feature story earlier this year.


Another thing 

Each year we release a list of 10 Breakthrough Technologies, and it’s nearly time for the 2025 edition. But before we announce the picks, here are a few things that didn’t make the cut.

A couple of interesting ones landed on the cutting-room floor here, including eVTOLs, electric aircraft that can take off and land like helicopters. For more on why the runway is looking pretty long for electric planes (especially ones with funky ways to move through the skies), check out this story from last year.

Keeping up with climate  

Denmark received no bids in its latest offshore wind auction. It’s a disappointing result for the birthplace of offshore wind power. (Reuters)

Surging methane emissions could be the sign of a concerning shift for the climate. A feedback loop of emissions from the Arctic and a slowdown in how the powerful greenhouse gas breaks down could spell trouble. (Inside Climate News)

Battery prices are dropping faster than expected. Costs for lithium-ion packs just saw their steepest drop since 2017. (Electrek)

This fusion startup is rethinking how to configure its reactors by floating powerful magnets in the middle of the chamber. This sounds even more like science fiction than most other approaches to fusion. (IEEE Spectrum)

The US plans to put monarch butterflies on a list of threatened species. Temperature shifts brought on by climate change could wreak havoc with the insects’ migration. (Associated Press)

Sources close to Elon Musk say he’s undergone quite a shift on climate change, morphing from “environmental crusader to critic of dire climate predictions.” (Washington Post)

Google has a $20 billion plan to build data centers and clean power together. “Bring your own power” is an interesting idea, but not a tested prospect just yet. (Canary Media)

The Franklin Fire in Los Angeles County sparked Monday evening and quickly grew into a major blaze. At the heart of the fire’s rapid spread: dry weather and Santa Ana winds. (Scientific American)

Places in the US that are most at risk for climate disasters are also most at risk for insurance hikes. Check out these great data visualizations on insurance and climate change. (The Guardian)

AI’s hype and antitrust problem is coming under scrutiny

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington. 

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.

The new bill would “require a competitive award process” for contracts, which would ban the use of “no-bid” awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)

While Big Tech is hit with antitrust investigations—including the ongoing lawsuit against Google about its dominance in search, as well as a new investigation opened into Microsoft—regulators are also accusing AI companies of, well, just straight-up lying. 

On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and being trained on millions of images, two claims the FTC says are false. (The company couldn’t support the bias claim and the system was trained on only 100,000 images, the FTC says.)

A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident when they failed to detect a seven-inch knife that was ultimately used to stab a student. 

Those add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling “AI lawyer” services. 

The actions are somewhat tame. IntelliVision and Evolv have not actually been fined. The FTC has simply prohibited the companies from making claims that they can’t back up with evidence, and in the case of Evolv, it requires the company to allow certain customers to get out of contracts if they wish to.

However, they do represent an effort to hold the AI industry’s hype to account in the final months before the FTC’s chair, Lina Khan, is likely to be replaced when Donald Trump takes office. Trump has not named a pick for FTC chair, but he said on Thursday that Gail Slater, a tech policy advisor and a former aide to vice president–elect JD Vance, was picked to head the Department of Justice’s Antitrust Division. Trump has signaled that the agency under Slater will keep tech behemoths like Google, Amazon, and Microsoft in the crosshairs. 

“Big Tech has run wild for years, stifling competition in our most innovative sector and, as we all know, using its market power to crack down on the rights of so many Americans, as well as those of Little Tech!” Trump said in his announcement of the pick. “I was proud to fight these abuses in my First Term, and our Department of Justice’s antitrust team will continue that work under Gail’s leadership.”

That said, at least some of Trump’s frustrations with Big Tech are different—like his concerns that conservatives could be targets of censorship and bias. And that could send antitrust efforts in a distinctly new direction on his watch. 


Now read the rest of The Algorithm

Deeper Learning

The US Department of Defense is investing in deepfake detection

The Pentagon’s Defense Innovation Unit, a tech accelerator within the military, has awarded its first contract for deepfake detection. Hive AI will receive $2.4 million over two years to help detect AI-generated video, image, and audio content. 

Why it matters: As hyperrealistic deepfakes get cheaper and easier to produce, they hurt our ability to tell what’s real. The military’s investment in deepfake detection shows that the problem has national security implications as well. The open question is how accurate these detection tools are, and whether they can keep up with the unrelenting pace at which deepfake generation techniques are improving. Read more from Melissa Heikkilä.

Bits and Bytes

The owner of the LA Times plans to add an AI-powered “bias meter” to its news stories

Patrick Soon-Shiong is building a tool that will allow readers to “press a button and get both sides” of a story. But trying to create an AI model that can somehow provide an objective view of news events is controversial, given that models are biased both by their training data and by fine-tuning methods. (Yahoo)

Google DeepMind’s new AI model is the best yet at weather forecasting

It’s the second AI weather model that Google has launched in just the past few months. But this one’s different: It leaves out traditional physics models and relies on AI methods alone. (MIT Technology Review)

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

Startups in Latvia and other nearby countries see the mobilization of Ukraine as a warning and an inspiration. They are now changing consumer products—from scooters to recreational drones—for use on the battlefield. (MIT Technology Review)

How Nvidia’s Jensen Huang is avoiding $8 billion in taxes

Jensen Huang runs Nvidia, the world’s top chipmaker and most valuable company. His wealth has soared during the AI boom, and he has taken advantage of a number of tax dodges “that will enable him to pass on much of his fortune tax free,” according to the New York Times. (The New York Times)

Meta is pursuing nuclear energy for its AI ambitions
Meta wants more of its AI training and development to be powered by nuclear energy, joining the ranks of Amazon and Microsoft. The news comes as many companies in Big Tech struggle to meet their sustainability goals amid the soaring energy demands from AI. (Meta)

Correction: A previous version of this article stated that Gail Slater was picked by Donald Trump to be the head of the FTC. Slater was in fact picked to lead the Department of Justice’s Antitrust Division. We apologize for the error.

We saw a demo of the new AI system powering Anduril’s vision for war

One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield. If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply into the theater of war than ever before. 

Near the site’s command center, which looked out over desert scrub and sage, sat pieces of Anduril’s hardware suite that have helped the company earn its $14 billion valuation. There was Sentry, a security tower of cameras and sensors currently deployed at both US military bases and the US-Mexico border, along with advanced radars. Multiple drones, including an eerily quiet model called Ghost, sat ready to be deployed. What I was there to watch, though, was a different kind of weapon, displayed on two large television screens positioned at the test site’s command station.

I was here to examine the pitch being made by Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself: A future “great power” conflict—military jargon for a global war involving competition between multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. And that will have to be done “at the edge” where threats arise, not necessarily at a command post in Washington. 

A desert drone test

“You’re going to need to really empower lower levels to make decisions, to understand what’s going on, and to fight,” Anduril CEO Brian Schimpf says. “That is a different paradigm than today.” Currently, information flows poorly among people on the battlefield and decision-makers higher up the chain. 

To show how the new tech will fix that, Anduril walked me through an exercise demonstrating how its system would take down an incoming drone threatening a base of the US military or its allies (the scenario at the center of Anduril’s new partnership with OpenAI). It began with a truck in the distance, driving toward the base. The AI-powered Sentry tower automatically recognized the object as a possible threat, highlighting it as a dot on one of the screens. Anduril’s software, called Lattice, sent a notification asking the human operator if he would like to send a Ghost drone to monitor. After a click of his mouse, the drone piloted itself autonomously toward the truck, as information on its location gathered by the Sentry was sent to the drone by the software.

The truck disappeared behind some hills, so the Sentry tower camera that was initially trained on it lost contact. But the surveillance drone had already identified it, so its location stayed visible on the screen. We watched as someone in the truck got out and launched a drone, which Lattice again labeled as a threat. It asked the operator if he’d like to send a second attack drone, which then piloted autonomously and locked onto the threatening drone. With one click, it could be instructed to fly into it fast enough to take it down. (We stopped short here, since Anduril isn’t allowed to actually take down drones at this test site.) The entire operation could have been managed by one person with a mouse and computer.

Anduril is building on these capabilities further by expanding Lattice Mesh, a software suite that allows other companies to tap into Anduril’s software and share data, the company announced today. More than 10 companies are now building their hardware into the system—everything from autonomous submarines to self-driving trucks—and Anduril has released a software development kit to help them do so. Military personnel operating hardware can then “publish” their own data to the network and “subscribe” to receive data feeds from other sensors in a secure environment. On December 3, the Pentagon’s Chief Digital and AI Office awarded a three-year contract to Anduril for Mesh. 
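As a rough illustration of the publish/subscribe pattern described here, the sketch below shows how hardware from different makers might share sensor tracks over a common bus: one system publishes an observation to a named feed, and anything subscribed to that feed receives it. The class names, topic strings, and message format are hypothetical assumptions for illustration only, not Anduril’s actual Lattice Mesh SDK.

```python
# Minimal publish/subscribe sketch of a shared sensor data mesh.
# All names (Mesh, Track, "threats") are illustrative assumptions, not a real SDK.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    sensor_id: str   # which sensor produced the observation
    object_id: str   # identifier for the tracked object
    lat: float
    lon: float
    label: str       # e.g. "possible_threat"

class Mesh:
    """In-memory stand-in for a data mesh: sensors publish tracks, consumers subscribe to topics."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Track], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Track], None]) -> None:
        # Register a handler (e.g. a drone's tasking logic) for a named data feed.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, track: Track) -> None:
        # Deliver a sensor observation to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(track)

# A sensor tower publishes a threat track; a drone controller subscribed to that feed gets tasked.
mesh = Mesh()
mesh.subscribe("threats", lambda t: print(f"Drone tasked toward {t.object_id} at ({t.lat}, {t.lon})"))
mesh.publish("threats", Track("tower-01", "truck-17", 33.45, -117.60, "possible_threat"))
```

In a real deployment the bus would be distributed and encrypted rather than an in-process dictionary, but the publish/subscribe shape of the data flow is the point.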

Anduril’s offering will also join forces with Maven, a program operated by the defense data giant Palantir that fuses information from different sources, like satellites and geolocation data. It’s the project that led Google employees in 2018 to protest against working in warfare. Anduril and Palantir announced on December 6 that the military will be able to use the Maven and Lattice systems together. 

The military’s AI ambitions

The aim is to make Anduril’s software indispensable to decision-makers. It also represents a massive expansion of how the military is currently using AI. You might think the US Department of Defense, advanced as it is, would already have this level of hardware connectivity. We have some semblance of it in our daily lives, where phones, smart TVs, laptops, and other devices can talk to each other and share information. But for the most part, the Pentagon is behind.

“There’s so much information in this battle space, particularly with the growth of drones, cameras, and other types of remote sensors, where folks are just sopping up tons of information,” says Zak Kallenborn, a warfare analyst who works with the Center for Strategic and International Studies. Sorting through to find the most important information is a challenge. “There might be something in there, but there’s so much of it that we can’t just set a human down to deal with it,” he says.

Right now, humans also have to translate between systems made by different manufacturers. One soldier might have to manually rotate a camera to look around a base and see if there’s a drone threat, and then manually send information about that drone to another soldier operating the weapon to take it down. Those instructions might be shared via a low-tech messenger app—one on par with AOL Instant Messenger. That takes time. It’s a problem the Pentagon is attempting to solve through its Joint All-Domain Command and Control plan, among other initiatives.

“For a long time, we’ve known that our military systems don’t interoperate,” says Chris Brose, former staff director of the Senate Armed Services Committee and principal advisor to Senator John McCain, who now works as Anduril’s chief strategy officer. Much of his work has been convincing Congress and the Pentagon that a software problem is just as worthy of a slice of the defense budget as jets and aircraft carriers. (Anduril spent nearly $1.6 million on lobbying last year, according to data from Open Secrets, and has numerous ties with the incoming Trump administration: Anduril founder Palmer Luckey has been a longtime donor and supporter of Trump, and JD Vance spearheaded an investment in Anduril in 2017 when he worked at venture capital firm Revolution.) 

Defense hardware also suffers from a connectivity problem. Tom Keane, a senior vice president in Anduril’s connected warfare division, walked me through a simple example from the civilian world. If you receive a text message while your phone is off, you’ll see the message when you turn the phone back on. It’s preserved. “But this functionality, which we don’t even think about,” Keane says, “doesn’t really exist” in the design of many defense hardware systems. Data and communications can be easily lost in challenging military networks. Anduril says its system instead stores data locally. 

An AI data treasure trove

The push to build more AI-connected hardware systems in the military could spark one of the largest data collection projects the Pentagon has ever undertaken, and companies like Anduril and Palantir have big plans. 

“Exabytes of defense data, indispensable for AI training and inferencing, are currently evaporating,” Anduril said on December 6, when it announced it would be working with Palantir to compile data collected in Lattice, including highly sensitive classified information, to train AI models. Training on the broader pool of data gathered by all these sensors will also hugely boost the model-building efforts that Anduril is now doing in a partnership with OpenAI, announced on December 4. Earlier this year, Palantir also offered its AI tools to help the Pentagon reimagine how it categorizes and manages classified data. When Anduril founder Palmer Luckey told me in an interview in October that “it’s not like there’s some wealth of information on classified topics and understanding of weapons systems” to train AI models on, he may have been foreshadowing what Anduril is now building.

Even if some of this data from the military is already being collected, AI will suddenly make it much more useful. “What is new is that the Defense Department now has the capability to use the data in new ways,” Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, wrote in an email. “More data and ability to process it could support great accuracy and precision as well as faster information processing.”

The sum of these developments might be that AI models are brought more directly into military decision-making. That idea has brought scrutiny, as when Israel was found last year to have been using advanced AI models to process intelligence data and generate lists of targets. Human Rights Watch wrote in a report that the tools “rely on faulty data and inexact approximations.”

“I think we are already on a path to integrating AI, including generative AI, into the realm of decision-making,” says Probasco, who authored a recent analysis of one such case. She examined a system built within the military in 2023 called Maven Smart System, which allows users to “access sensor data from diverse sources [and] apply computer vision algorithms to help soldiers identify and choose military targets.”

Probasco said that building an AI system to control an entire decision pipeline, possibly without human intervention, “isn’t happening” and that “there are explicit US policies that would prevent it.”

A spokesperson for Anduril said that the purpose of Mesh is not to make decisions. “The Mesh itself is not prescribing actions or making recommendations for battlefield decisions,” the spokesperson said. “Instead, the Mesh is surfacing time-sensitive information”—information that operators will consider as they make those decisions.

Bluesky has an impersonator problem 

Like many others, I recently fled the social media platform X for Bluesky. In the process, I started following many of the people I followed on X. On Thanksgiving, I was delighted to see a private message from a fellow AI reporter, Will Knight from Wired. Or at least that’s who I thought I was talking to. I became suspicious when the person claiming to be Knight mentioned being from Miami, when Knight is, in fact, from the UK. The account handle was almost identical to the real Will Knight’s handle, and the profile used his profile photo. 

Then more messages started to appear. Paris Marx, a prominent tech critic, slid into my DMs to ask me how I was doing. “Things are going splendid over here,” he replied to me. Then things got suspicious again. “How are your trades going?” fake-Marx asked me. This account was far more sophisticated than the fake Knight’s; it had meticulously copied every single post and repost from Marx’s real page over the past few weeks.

Both accounts were eventually deleted, but not before trying to get me to set up a crypto wallet and a “cloud mining pool” account. Knight and Marx confirmed to us that these accounts did not belong to them, and that they have been fighting impersonator accounts of themselves for weeks. 

They are not the only ones. The New York Times tech journalist Sheera Frenkel and Molly White, a researcher and cryptocurrency critic, have also experienced people impersonating them on Bluesky, most likely to scam people. This tracks with research from Alexios Mantzarlis, the director of the Security, Trust, and Safety Initiative at Cornell Tech, who manually went through the top 500 Bluesky users by follower count and found that of the 305 accounts belonging to a named person, at least 74 had been impersonated by at least one other account. 

The platform has had to cater to an influx of millions of new users in recent months as people leave X in protest of Elon Musk’s takeover. Its user base has more than doubled since September, from 10 million users to over 20 million. This sudden wave of new users—and the inevitable scammers—means Bluesky is still playing catch-up, says White. 

“These accounts block me as soon as they’re created, so I don’t initially see them,” Marx says. Both Marx and White describe a frustrating pattern: When one account is taken down, another one pops up soon after. White says she had experienced a similar phenomenon on X and TikTok too. 

A way to prove that people are who they say they are would help. Before Musk took the reins of the platform, employees at X, previously known as Twitter, verified users such as journalists and politicians, and gave them a blue tick next to their handles so people knew they were dealing with credible news sources. After Musk took over, he scrapped the old verification system and offered blue ticks to all paying customers. 

The ongoing crypto-impersonation scams have prompted calls for Bluesky to introduce something similar to Twitter’s original verification program. Some users, such as the investigative journalist Hunter Walker, have set up their own initiatives to verify journalists. For now, though, users have limited ways to verify themselves on the platform. By default, usernames on Bluesky end with the suffix bsky.social. The platform recommends that news organizations and high-profile people verify their identities by using their own website domains as their usernames; US senators, for example, have verified their accounts with the suffix senate.gov. But this technique isn’t foolproof. For one, it doesn’t actually verify people’s identity—only their affiliation with a particular website. 
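
For the technically curious, here is a rough Python sketch of how that domain-based check works under the AT Protocol, on which Bluesky is built: a domain handle is treated as valid when the domain itself serves the account’s decentralized identifier (DID) at a well-known HTTPS path (a DNS TXT record on _atproto.<handle> is the other supported route). This is an illustration of the mechanism, not Bluesky’s own code, and the function name is invented.

# Illustrative check of an AT Protocol domain handle: fetch the DID the
# domain claims to own. Hypothetical helper, not Bluesky's actual code.
from typing import Optional
import urllib.request

def did_claimed_by_domain(handle: str) -> Optional[str]:
    """Return the DID the domain advertises for this handle, or None."""
    url = f"https://{handle}/.well-known/atproto-did"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode().strip()  # e.g. "did:plc:abc123..."
    except OSError:
        return None

# The claimed DID still has to match the DID the Bluesky account actually
# uses, and even then the check only proves control of a domain, not a
# person's real-world identity, which is exactly the gap described above.
print(did_claimed_by_domain("example.com"))  # None unless the domain publishes a DID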

Bluesky did not respond to MIT Technology Review’s requests for comment, but the company’s safety team posted that the platform had updated its impersonation policy to be more aggressive and would remove impersonation and handle-squatting accounts. The company says it has also quadrupled its moderation team to take action on impersonation reports more quickly. But it seems to be struggling to keep up. “We still have a large backlog of moderation reports due to the influx of new users as we shared previously, though we are making progress,” the company continued. 

Bluesky’s decentralized nature makes kicking out impersonators a trickier problem to solve. Competitors such as X and Threads rely on centralized teams within the company that moderate unwanted content and behavior, such as impersonation. But Bluesky is built on the AT Protocol, a decentralized, open-source technology that gives users more control over what they see and lets them build communities around particular kinds of content. Most people sign up to Bluesky Social, the main social network, whose community guidelines ban impersonation. However, Bluesky Social is just one of the services, or “clients,” that people can use, and other services have their own moderation practices and terms. 

This community-led approach means that, until now, Bluesky itself hasn’t needed an army of content moderators to weed out unwanted behavior, says Wayne Chang, the founder and CEO of SpruceID, a digital identity company. That might have to change.

“In order to make these apps work at all, you need some level of centralization,” says Chang. Despite community guidelines, it’s hard to stop people from creating impersonation accounts, and Bluesky is engaged in a cat-and-mouse game trying to take suspicious accounts down. 

Cracking down on impersonation is important because it poses a serious threat to Bluesky’s credibility, says Chang. “It’s a legitimate complaint as a Bluesky user that ‘Hey, all those scammers are basically harassing me.’ You want your brand to be tarnished? Or is there something we can do about this?” he says.

A fix for this is urgently needed, because attackers might abuse Bluesky’s open-source code to create spam and disinformation campaigns at a much larger scale, says Francesco Pierri, an assistant professor at Politecnico di Milano who has researched Bluesky. His team found that the platform has seen a rise in suspicious accounts since it was made open to the public earlier this year. 

Bluesky acknowledges that its current practices are not enough. In a post, the company said it has received feedback that users want more ways to confirm their identities beyond domain verification, and it is “exploring additional options to enhance account verification.” 

In a livestream at the end of November, Bluesky CEO Jay Graber said the platform was considering becoming a verification provider, but because of its decentralized approach it would also allow others to offer their own user verification services. “And [users] can choose to trust us—the Bluesky team’s verification—or they could do their own. Or other people could do their own,” Graber said. 

But at least Bluesky seems to “have some willingness to actually moderate content on the platform,” says White. “I would love to see something a little bit more proactive that didn’t require me to do all of this reporting,” she adds. 

As for Marx, “I just hope that no one truly falls for it and gets tricked into crypto scams,” he says. 

Google’s new Project Astra could be generative AI’s killer app

Google DeepMind has announced an impressive grab bag of new products and prototypes that may just let it seize back its lead in the race to turn generative artificial intelligence into a mass-market concern. 

Top billing goes to Gemini 2.0—the latest iteration of Google DeepMind’s family of multimodal large language models, now redesigned around the ability to control agents—and a new version of Project Astra, the experimental everything app that the company teased at Google I/O in May.

MIT Technology Review got to try out Astra in a closed-door live demo last week. It was a stunning experience, but there’s a gulf between polished promo and live demo.

Astra uses Gemini 2.0’s built-in agent framework to answer questions and carry out tasks via text, speech, image, and video, calling up existing Google apps like Search, Maps, and Lens when it needs to. “It’s merging together some of the most powerful information retrieval systems of our time,” says Bibo Xu, product manager for Astra.

Gemini 2.0 and Astra are joined by Mariner, a new agent built on top of Gemini that can browse the web for you; Jules, a new Gemini-powered coding assistant; and Gemini for Games, an experimental assistant that you can chat to and ask for tips as you play video games. 

(And let’s not forget that in the last week Google DeepMind also announced Veo, a new video generation model; Imagen 3, a new version of its image generation model; and Willow, a new kind of chip for quantum computers. Whew. Meanwhile, CEO Demis Hassabis was in Sweden yesterday receiving his Nobel Prize.)

Google DeepMind claims that Gemini 2.0 is twice as fast as the previous version, Gemini 1.5, and outperforms it on a number of standard benchmarks, including MMLU-Pro, a large set of multiple-choice questions designed to test the abilities of large language models across a range of subjects, from math and physics to health, psychology, and philosophy. 

But the margins between top-end models like Gemini 2.0 and those from rival labs like OpenAI and Anthropic are now slim. These days, advances in large language models are less about how good they are and more about what you can do with them. 

And that’s where agents come in. 

Hands on with Project Astra 

Last week I was taken through an unmarked door on an upper floor of a building in London’s King’s Cross district into a room with strong secret-project vibes. The word “ASTRA” was emblazoned in giant letters across one wall. Xu’s dog, Charlie, the project’s de facto mascot, roamed between desks where researchers and engineers were busy building a product that Google is betting its future on.

“The pitch to my mum is that we’re building an AI that has eyes, ears, and a voice. It can be anywhere with you, and it can help you with anything you’re doing,” says Greg Wayne, co-lead of the Astra team. “It’s not there yet, but that’s the kind of vision.” 

The official term for what Xu, Wayne, and their colleagues are building is “universal assistant.” Exactly what that means in practice, they’re still figuring out. 

At one end of the Astra room were two stage sets that the team uses for demonstrations: a drinks bar and a mocked-up art gallery. Xu took me to the bar first. “A long time ago we hired a cocktail expert and we got them to instruct us to make cocktails,” said Praveen Srinivasan, another co-lead. “We recorded those conversations and used that to train our initial model.”

Xu opened a cookbook to a recipe for a chicken curry, pointed her phone at it, and woke up Astra. “Ni hao, Bibo!” said a female voice. 

“Oh! Why are you speaking to me in Mandarin?” Xu asked her phone. “Can you speak to me in English, please?”

“My apologies, Bibo. I was following a previous instruction to speak in Mandarin. I will now speak in English as you have requested.”

Astra remembers previous conversations, Xu told me. It also keeps track of the previous 10 minutes of video. (There’s a remarkable moment in the promo video that Google put out in May when Astra tells the person giving the demo where she had left her glasses, having spotted them on a desk a few seconds earlier. But I saw nothing like this in the live demo.)

Back to the cookbook. Moving her phone camera over the page for a few seconds, Xu asked Astra to read the recipe and tell her what spices were in it. “I recall the recipe mentioning a teaspoon of black peppercorns, a teaspoon of hot chili powder, and a cinnamon stick,” it replied.

“I think you’re missing a few,” said Xu. “Take another look.”

“You are correct—I apologize. I also see ground turmeric and curry leaves in the ingredients.” 

Seeing this tech in action, two things hit you straight away. First, it’s glitchy and often needs correcting. Second, those glitches can be corrected with just a few spoken words. You simply interrupt the voice, repeat your instructions, and move on. It feels more like coaching a child than butting heads with broken software.   

Next Xu pointed her phone at a row of wine bottles and asked Astra to pick the one that would go best with the chicken curry. It went for a rioja and explained why. Xu asked how much a bottle would cost. Astra said it would need to use Search to look prices up online. A few seconds later it came back with its answer. 

We moved to the art gallery, and Xu showed Astra a number of screens with famous paintings on them: the Mona Lisa, Munch’s The Scream, a Vermeer, a Seurat, and several others. “Ni hao, Bibo!” the voice said. 

“You’re speaking to me in Mandarin again,” Xu said. “Try to speak to me in English, please.” 

“My apologies, I seem to have misunderstood. Yes, I will respond in English.” (I should know better, but I could swear I heard the snark.)

It was my turn. Xu handed me her phone. 

I tried to trip Astra up, but it was having none of it. I asked it what famous art gallery we were in, but it refused to hazard a guess. I asked why it had identified the paintings as replicas and it started to apologize for its mistake (Astra apologizes a lot). I was compelled to interrupt: “No, no—you’re right, it’s not a mistake. You’re correct to identify paintings on screens as fake paintings.” I couldn’t help feeling a bit bad: I’d confused an app that exists only to please. 

When it works well, Astra is enthralling. The experience of striking up a conversation with your phone about whatever you’re pointing it at feels fresh and seamless. In a media briefing yesterday, Google DeepMind shared a video showing off other uses: reading an email on your phone’s screen to find a door code (and then reminding you of that code later), pointing a phone at a passing bus and asking where it goes, quizzing it about a public artwork as you walk past. This could be generative AI’s killer app. 

And yet there’s a long way to go before most people get their hands on tech like this. There’s no mention of a release date. Google DeepMind has also shared videos of Astra working on a pair of smart glasses, but that tech is even further from release.

Mixing it up

For now, researchers outside Google DeepMind are keeping a close eye on its progress. “The way that things are being combined is impressive,” says Maria Liakata, who works on large language models at Queen Mary University of London and the Alan Turing Institute. “It’s hard enough to do reasoning with language, but here you need to bring in images and more. That’s not trivial.”

Liakata is also impressed by Astra’s ability to recall things it has seen or heard. She works on what she calls long-range context, getting models to keep track of information that they have come across before. “This is exciting,” says Liakata. “Even doing it in a single modality is exciting.”

But she admits that a lot of her assessment is guesswork. “Multimodal reasoning is really cutting-edge,” she says. “But it’s very hard to know exactly where they’re at, because they haven’t said a lot about what is in the technology itself.”

For Bodhisattwa Majumder, a researcher who works on multimodal models and agents at the Allen Institute for AI, that’s a key concern. “We absolutely don’t know how Google is doing it,” he says. 

He notes that if Google were to be a little more open about what it is building, it would help consumers understand the limitations of the tech they could soon be holding in their hands. “They need to know how these systems work,” he says. “You want a user to be able to see what the system has learned about you, to correct mistakes, or to remove things you want to keep private.”

Liakata is also worried about the implications for privacy, pointing out that people could be monitored without their consent. “I think there are things I’m excited about and things that I’m concerned about,” she says. “There’s something about your phone becoming your eyes—there’s something unnerving about it.” 

“The impact these products will have on society is so big that it should be taken more seriously,” she says. “But it’s become a race between the companies. It’s problematic, especially since we don’t have any agreement on how to evaluate this technology.”

Google DeepMind says it takes a long, hard look at privacy, security, and safety for all its new products. Its tech will be tested by teams of trusted users for months before it hits the public. “Obviously, we’ve got to think about misuse. We’ve got to think about, you know, what happens when things go wrong,” says Dawn Bloxwich, director of responsible development and innovation at Google DeepMind. “There’s huge potential. The productivity gains are huge. But it is also risky.”

No team of testers can anticipate all the ways that people will use and misuse new technology. So what’s the plan for when the inevitable happens? Companies need to design products that can be recalled or switched off just in case, says Bloxwich: “If we need to make changes quickly or pull something back, then we can do that.”

The world’s next big environmental problem could come from space

Early on a Sunday morning in September, a team of 12 sleep-deprived, jet-lagged researchers assembled at the world’s most remote airport. There, on Easter Island, some 2,330 miles off the coast of Chile, they were preparing for a unique chase: a race to catch a satellite’s last moments as it fell out of space and blazed into ash across the sky.

That spacecraft was Salsa, one of four satellites that were part of the European Space Agency (ESA) Cluster constellation. Salsa and its counterparts had been studying Earth’s magnetic field since the early 2000s, but the mission was now over. Months earlier, the spacecraft had been set on a death spiral that would end with a fiery disintegration high up in Earth’s atmosphere about a thousand miles away from Easter Island’s coast.

Now, the scientists were poised to catch this reentry as it happened. Equipped with precise trajectory calculations from ESA’s ground control, the researchers took off in a rented business jet, with 25 cameras and spectrometers mounted by the windows. The hope was that they’d be able to gather priceless insights into the physical and chemical processes that occur when satellites burn up as they fall to Earth at the end of their missions.

Researchers were able to monitor the reentry of Cluster Salsa from a rented business jet.

This kind of study is growing more urgent. Some 15 years ago, barely a thousand satellites orbited our planet. Now the number has risen to about 10,000, and with the rise of satellite constellations like Starlink, another tenfold increase is forecast by the end of this decade. Letting these satellites burn up in the atmosphere at the end of their lives helps keep the quantity of space junk to a minimum. But doing so deposits satellite ash in the middle layers of Earth’s atmosphere. This metallic ash can harm the atmosphere and potentially alter the climate. Scientists don’t yet know how serious the problem is likely to be in the coming decades.

The ash from the reentries contains ozone-damaging substances. Modeling studies have shown that some of its components can also cool down Earth’s stratosphere, while others can warm it. Some worry that the metallic particles could even disrupt Earth’s magnetic field, obscure the view of Earth-observing satellites, and increase the frequency of thunderstorms.

“We need to see what kind of physics takes place up there,” says Stijn Lemmens, a senior analyst at ESA who oversaw the campaign. “If there are more [reentering] objects, there will be more consequences.”

A community of atmospheric scientists scattered all over the world is awaiting results from these measurements, hoping to fill major gaps in their understanding. 

The Salsa reentry was only the fifth such observation campaign in the history of spaceflight. The previous campaigns, however, tracked much larger objects, like a 19-ton upper stage from an Ariane 5 rocket.  

Cluster Salsa, at 550 kilograms, was quite tiny in comparison. And that makes it of special interest to scientists, because it’s spacecraft of this general size that will be increasingly crowding Earth orbit in the coming years.

The downside of mega-constellations

Most of the forecasted growth in satellite numbers is expected to come from satellites roughly the same size as Salsa: individual members of mega-constellations, designed to provide internet service with decent speed and latency to anyone, anywhere.

SpaceX’s Starlink is the biggest of these. Currently consisting of about 6,500 satellites, the fleet is expected to mushroom to more than 40,000 at some point in the 2030s. Other mega-constellations, including Amazon Kuiper, France-based E-Space, and the Chinese projects G60 and Guowang, are in the works. Each could encompass several thousand satellites, or even tens of thousands. 

Mega-constellation developers don’t want their spacecraft to fly for two or three decades like their old-school, government-funded counterparts. They want to replace these orbiting internet routers with newer, better tech every five years, sending the old ones back into the atmosphere to burn up. The rockets needed to launch all those satellites emit their own cocktail of contaminants (and their upper stages also end their life burning up in the atmosphere).

The amount of space debris vaporizing in Earth’s atmosphere has more than doubled in the past few years, says Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics who has built a second career as a leading space debris tracker.

“We used to see about 50 to 100 rocket stages reentering every year,” he says. “Now we’re looking at 300 a year.” 

In 2019, some 115 satellites burned up in the atmosphere. As of late November, 2024 had already set a new record with 950 satellite reentries, McDowell says.

The mass of vaporizing space junk will continue to grow in line with the size of the satellite fleets. By 2033, it could reach 4,000 tons per year, according to estimates presented at a workshop called Protecting Earth and Outer Space from the Disposal of Spacecraft and Debris, held in September at the University of Southampton in the UK.

Crucially, most of the ash these reentries produce will remain suspended in the thin midatmospheric air for decades, perhaps centuries. But acquiring precise data about satellite burn-up is nearly impossible, because it takes place in territory that is too high for meteorological balloons to measure and too low for sounding instruments aboard orbiting satellites. The closest scientists can get is remote sensing of a satellite’s final moments.

Changing chemistry

None of the researchers aboard the business jet turned scientific laboratory that took off from Easter Island in September got to see the moment when Cluster Salsa burst into a fireball above the deep, dark waters of the Pacific Ocean. Against the bright daylight, the fleeting explosion appeared about as vivid as a midday full moon. The windows of the plane, however, were covered with dark fabric (to prevent light reflected from inside the cabin from skewing the measurements), allowing only the camera lenses to peek out, says Jiří Šilha, CEO of Slovakia-based Astros Solutions, a space situational awareness company developing new techniques for space debris monitoring, which coordinated the observation campaign.

“We were about 300 kilometers [186 miles] away when it happened, far enough to avoid being hit by any remaining debris,” Šilha says. “It’s all very quick. The object reenters at a very high velocity, some 11 kilometers [seven miles] per second, and disintegrates 80 to 60 kilometers above Earth.”

Infographic describing the reentry of the first of four Cluster satellites

ESA

The instruments collected measurements of the disintegration in the visible and near-infrared part of the light spectrum, including observations with special filters for detecting chemical elements including aluminum, titanium, and sodium. The data will help scientists reconstruct the satellite breakup process, working out the altitudes at which the incineration takes place, the temperatures at which it occurs, and the nature and quantity of the chemical compounds it releases.

The dusty leftovers of Cluster Salsa have by now begun their leisurely drift through the mesosphere and stratosphere—the atmospheric layers that stretch from 31 to 53 miles and from 12 to 31 miles in altitude, respectively. Throughout their decades-long descent, these ash particles will interact with atmospheric gases, causing mischief, says Connor Barker, a researcher in atmospheric chemical modeling at University College London and author of a satellite air pollution inventory published in early October in the journal Scientific Data.

Satellite bodies and rocket stages are mostly made of aluminum, which burns into aluminum oxide, or alumina—a white, powdery substance known to contribute to ozone depletion. Alumina also reflects sunlight, which means it could alter the temperature of those higher atmospheric layers.

“In our simulations, we start to see a warming over time of the upper layers of the atmosphere that has several knock-on effects for atmospheric composition,” Barker says. 

For example, some models suggest the warming could add moisture to the stratosphere. This could deplete the ozone layer and could cause further warming, which in turn would cause additional ozone depletion.

The extreme speed of reentering satellites also produces “a shockwave that compresses nitrogen in the atmosphere and makes it react with oxygen, producing nitrogen oxides,” says McDowell. Nitrogen oxides, too, damage atmospheric ozone. Currently, 50% of the ozone depletion caused by satellite burn-ups and rocket launches comes from the effects of nitrogen oxides. The soot that rockets produce alters the atmosphere’s thermal balance too.

In some ways, high-altitude atmospheric pollution is nothing new. Every year, about 18,000 tons of meteorites vaporize in the mesosphere. Even 10 years from now, if all planned mega-constellations get developed, the quantity of natural space rock burning up during its fall to Earth will exceed the amount of incinerated space junk by a factor of five.
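
That factor of five is consistent with the figures cited earlier in this piece; a quick back-of-the-envelope check makes the comparison explicit:

# Back-of-the-envelope check using numbers quoted above: ~18,000 tons of
# meteorites vaporize in the mesosphere each year, versus a projected
# ~4,000 tons per year of reentering space junk by 2033.
meteorite_tons_per_year = 18_000
space_junk_tons_per_year = 4_000
print(meteorite_tons_per_year / space_junk_tons_per_year)  # 4.5, i.e. roughly a factor of five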

That, however, is no comfort to researchers like McDowell and Barker. Meteorites contain only trace amounts of aluminum, and their atmospheric disintegration is faster, meaning they produce less nitrogen oxide, says Barker. 

“The amount of nitrogen oxides we’re getting [from satellite reentries and rocket launches] is already at the lower end of our yearly estimates of what the natural emissions of nitrogen oxides [from meteorites] are,” said Barker. “It’s certainly a concern, because we might soon be doing more to the atmosphere than naturally occurs.”

The annual amount of alumina from satellite reentries is also already approaching that arising from incinerated meteorites. Under current worst-case scenarios, the human-made contribution of this pollutant will be 10 times the amount from natural sources by 2040.

Impact on Earth?

What exactly does all this mean for life on Earth? At this stage, nobody’s certain. Studies focusing on various components of the air pollution cocktail from satellite and rocket activity are trickling in at a steady rate. 

Barker says computer modeling puts the current contribution of the space industry to overall ozone depletion at a minuscule 0.1%. But how much this share will grow 10, 20, or 50 years from now, nobody knows. There are way too many uncertainties in this equation, including the size of the particles—which will affect how long they will take to sink—and the ratio of particles to gaseous by-products.

“We have to make a decision, as a society, whether we prioritize reducing space traffic or reducing emissions,” Barker says. “A lot of these increased reentry rates are because the global community is doing a really good job of cleaning up low-Earth-orbit space debris. But we really need to understand the environmental impact of those emissions so we can decide what is the best way for humanity to deal with all these objects in space.”

A ground antenna captured radar data of some of the final moments of the ESA satellite Aeolus, as it reentered Earth’s atmosphere in July 2023.
FRAUNHOFER FHR

The disaster of 21st-century climate change was set in motion when humankind began burning fossil fuels in the mid-19th century. Similarly, it took 40 years for chlorofluorocarbons to eat a hole in Earth’s protective ozone layer. The contamination of Earth by so-called forever chemicals—per- and polyfluoroalkyl substances used in manufacturing nonstick coatings and firefighting foams—started in the 1950s. Researchers like McDowell are concerned the story may repeat yet again.

“Humanity’s activities in space have now gotten big enough that they are affecting the space environment in a similar way we have affected the oceans,” McDowell says. “The problem is that we’re making these changes without really understanding at what stage these changes will become concerning.”

Previous observation campaigns mostly analyzed the physical disintegration of reentering satellites. With the Cluster constellation, scientists hope to begin unraveling the chemical side of this elusive process. For researchers like Barker, that means finally getting data that could validate and further improve their models. The Cluster constellation will provide three more opportunities to fill in the blanks in this environmental puzzle when the siblings of Salsa reenter in 2025 and 2026. 

“The great thing with Cluster is that we have four satellites that are identical and that we know every detail about,” says Šilha. “It’s a scientist’s dream, because we can repeat the experiment and learn from every previous campaign.”