What it’s like to be banned from the US for fighting online hate

It was early evening in Berlin, just a day before Christmas Eve, when Josephine Ballon got an unexpected email from US Customs and Border Protection. Her authorization to travel to the United States had changed—she'd no longer be able to enter the country.

At first, she couldn’t find any information online as to why, though she had her suspicions. She was one of the directors of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. As the organization has become a strong advocate of EU tech regulations, it has increasingly found itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship. 

It was only later that she saw what US Secretary of State Marco Rubio had posted on X.

Rubio was promoting a conspiracy theory about what he has called the “censorship-industrial complex,” which alleges widespread collusion between the US government, tech companies, and civil society organizations to silence conservative voices—the very conspiracy theory HateAid has recently been caught up in. 

Then Undersecretary of State Sarah B. Rogers posted on X the names of the people targeted by travel bans. The list included Ballon, as well as her HateAid co-director, Anna Lena von Hodenberg. Also named were three others doing similar or related work: former EU commissioner Thierry Breton, who had helped author Europe’s Digital Services Act (DSA); Imran Ahmed of the Center for Countering Digital Hate, which documents hate speech on social media platforms; and Clare Melford of the Global Disinformation Index, which provides risk ratings warning advertisers about placing ads on websites promoting hate speech and disinformation. 

It was an escalation in the Trump administration’s war on digital rights—fought in the name of free speech. But EU officials, freedom of speech experts, and the five people targeted all flatly reject the accusations of censorship. Ballon, von Hodenberg, and some of their clients tell me that their work is fundamentally about making people feel safer online. And their experiences over the past few weeks show just how politicized and besieged their work in online safety has become. They almost certainly won’t be the last people targeted in this way. 

Ballon was the one to tell von Hodenberg that both their names were on the list. “We kind of felt a chill in our bones,” von Hodenberg told me when I caught up with the pair in early January. 

But she added that they also quickly realized, “Okay, it’s the old playbook to silence us.” So they got to work—starting with challenging the narrative the US government was pushing about them.

Within a few hours, Ballon and von Hodenberg had issued a strongly worded statement refuting the allegations: “We will not be intimidated by a government that uses accusations of censorship to silence those who stand up for human rights and freedom of expression,” they wrote. “We demand a clear signal from the German government and the European Commission that this is unacceptable. Otherwise, no civil society organisation, no politician, no researcher, and certainly no individual will dare to denounce abuses by US tech companies in the future.” 

Those signals came swiftly. On X, Johann Wadephul, the German foreign minister, called the entry bans “not acceptable,” adding that “the DSA was democratically adopted by the EU, for the EU—it does not have extraterritorial effect.” Also on X, French president Emmanuel Macron wrote that “these measures amount to intimidation and coercion aimed at undermining European digital sovereignty.” The European Commission issued a statement that it “strongly condemns” the Trump administration’s actions and reaffirmed its “sovereign right to regulate economic activity in line with our democratic values.” 

Ahmed, Melford, Breton, and their respective organizations also made their own statements denouncing the entry bans. Ahmed, the only one of the five based in the United States, also successfully filed suit to preempt any attempts to detain him, which the State Department had indicated it would consider doing.  

But alongside the statements of solidarity, Ballon and von Hodenberg said, they also received more practical advice: Assume the travel ban was just the start and that more consequences could be coming. Service providers might preemptively revoke access to their online accounts; banks might restrict their access to money or the global payment system; they might see malicious attempts to get hold of their personal data or that of their clients. Perhaps, allies told them, they should even consider moving their money into friends’ accounts or keeping cash on hand so that they could pay their team’s salaries—and buy their families’ groceries. 

These warnings felt particularly urgent given that just days before, the Trump administration had sanctioned two International Criminal Court judges for “illegitimate targeting of Israel.” As a result, they had lost access to many American tech platforms, including Microsoft, Amazon, and Gmail. 

“If Microsoft does that to someone who is a lot more important than we are,” Ballon told me, “they will not even blink to shut down the email accounts from some random human rights organization in Germany.”   

“We have now this dark cloud over us that any minute, something can happen,” von Hodenberg added. “We’re running against time to take the appropriate measures.”

Helping navigate “a lawless place”

Founded in 2018 to support people experiencing digital violence, HateAid has since evolved to defend digital rights more broadly. It provides ways for people to report illegal online content and offers victims advice, digital security, emotional support, and help with evidence preservation. It also educates German police, prosecutors, and politicians about how to handle online hate crimes. 

Once the group is contacted for help, if its lawyers determine that the harassment has likely violated the law, the organization connects victims with legal counsel who can help them file civil and criminal lawsuits against perpetrators and, if necessary, helps finance the cases. (HateAid itself does not file cases against individuals.) Ballon and von Hodenberg estimate that HateAid has worked with around 7,500 victims and helped them file 700 criminal cases and 300 civil cases, mostly against individual offenders.

For 23-year-old German law student and outspoken political activist Theresia Crone, HateAid’s support has meant that she has been able to regain some sense of agency in her life, both on and offline. She had reached out after she discovered entire online forums dedicated to making deepfakes of her. Without HateAid, she told me, “I would have had to either put my faith into the police and the public prosecutor to prosecute this properly, or I would have had to foot the bill of an attorney myself”—a huge financial burden for “a student with basically no fixed income.” 

In addition, working alone would have been retraumatizing: “I would have had to document everything by myself,” she said—meaning “I would have had to see all of these pictures again and again.” 

“The internet is a lawless place,” Ballon told me when we first spoke, back in mid-December, a few weeks before the travel ban was announced. In a conference room at the HateAid office in Berlin, she said there are many cases that “cannot even be prosecuted, because no perpetrator is identified.” That’s why the nonprofit also advocates for better laws and regulations governing technology companies in Germany and across the European Union. 

On occasion, they have also engaged in strategic litigation against the platforms themselves. In 2023, for example, HateAid and the European Union of Jewish Students sued X for failing to enforce its terms of service against posts that were antisemitic or that denied the Holocaust, which is illegal in Germany. 

This almost certainly put the organization in the crosshairs of X owner Elon Musk; it also made HateAid a frequent target of Germany's far-right party, the Alternative für Deutschland, which Musk has called "the only hope for Germany." (X did not respond to a request to comment on this lawsuit.)

HateAid gets caught in Trump World’s dragnet

For better or worse, HateAid's profile grew further when it took on another critical job in online safety. In June 2024, it was named as a trusted flagger organization under the Digital Services Act, a 2022 EU law that requires social media companies to remove certain content (including hate speech and violence) that violates national laws, and to provide more transparency to the public, in part by allowing more appeals on platforms' moderation decisions.

Trusted flaggers are entities designated by individual EU countries to point out illegal content, and they are a key part of DSA enforcement. While anyone can report such content, trusted flaggers’ reports are prioritized and legally require a response from the platforms. 

The Trump administration has loudly argued that the trusted flagger program and the DSA more broadly are examples of censorship that disproportionately affect voices on the right and American technology companies, like X. 

When we first spoke in December, Ballon said these claims of censorship simply don’t hold water: “We don’t delete content, and we also don’t, like, flag content publicly for everyone to see and to shame people. The only thing that we do: We use the same notification channels that everyone can use, and the only thing that is in the Digital Services Act is that platforms should prioritize our reporting.” Then it is on the platforms to decide what to do. 

Nevertheless, the idea that HateAid and like-minded organizations are censoring the right has become a powerful conspiracy theory with real-world consequences. (Last year, MIT Technology Review covered the closure of a small State Department office following allegations that it had conducted "censorship," as well as an unusual attempt by State leadership to access internal records related to supposed censorship—including information about two of the people who have now been banned, Melford and Ahmed, and both of their organizations.)

HateAid saw a fresh wave of harassment starting last February, when 60 Minutes aired a segment on hate speech laws in Germany; it featured a quote from Ballon that "free speech needs boundaries," which, she added, "are part of our constitution." The interview happened to air just days before Vice President JD Vance attended the Munich Security Conference; there he warned that "across Europe, free speech … is in retreat." This, Ballon told me, led to heightened hostility toward her and her organization.

Fast-forward to July, when a report by Republicans in the US House of Representatives claimed that the DSA “compels censorship and infringes on American free speech.” HateAid was explicitly named in the report. 

All of this has made its work “more dangerous,” Ballon told me in December. Before the 60 Minutes interview, “maybe one and a half years ago, as an organization, there were attacks against us, but mostly against our clients, because they were the activists, the journalists, the politicians at the forefront. But now … we see them becoming more personal.” 

As a result, over the last year, HateAid has taken more steps to protect its reputation and get ahead of the damaging narratives. Ballon has reported the hate speech targeted at her—"More [complaints] than in all the years I did this job before," she said—and filed defamation lawsuits on behalf of HateAid.

All these tensions finally came to a head in December. At the start of the month, the European Commission fined X $140 million for DSA violations. This set off yet another round of recriminations about supposed censorship of the right, with Trump calling the fine “a nasty one” and warning: “Europe has to be very careful.”

Just a few weeks later, the day before Christmas Eve, retaliation against individuals finally arrived. 

Who gets to define—and experience—free speech

Digital rights groups are pushing back against the Trump administration’s narrow view of what constitutes free speech and censorship.

“What we see from this administration is a conception of freedom of expression that is not a human-rights-based conception where this is an inalienable, indelible right that’s held by every person,” says David Greene, the civil liberties director of the Electronic Frontier Foundation, a US-based digital rights group. Rather, he sees an “expectation that… [if] anybody else’s speech is challenged, there’s a good reason for it, but it should never happen to them.” 

Since Trump won his second term, social media platforms have walked back their commitments to trust and safety. Meta, for example, ended fact-checking on Facebook and adopted much of the administration’s censorship language, with CEO Mark Zuckerberg telling the podcaster Joe Rogan that it would “work with President Trump to push back on governments around the world” if they are seen as “going after American companies and pushing to censor more.”


And as the recent fines on X show, Musk’s platform has gone even further in flouting European law—and, ultimately, ignoring the user rights that the DSA was written to protect. In perhaps one of the most egregious examples yet, in recent weeks X allowed people to use Grok, its AI generator, to create nonconsensual nude images of women and children, with few limits—and, so far at least, few consequences. (Last week, X released a statement that it would start limiting users’ ability to create explicit images with Grok; in response to a number of questions, X representative Rosemarie Esposito pointed me to that statement.) 

For Ballon, it makes perfect sense: “You can better make money if you don’t have to implement safety measures and don’t have to invest money in making your platform the safest place,” she told me.

“It goes both ways,” von Hodenberg added. “It’s not only the platforms who profit from the US administration undermining European laws … but also, obviously, the US administration also has a huge interest in not regulating the platforms … because who is amplified right now? It’s the extreme right.”

She believes this explains why HateAid—and Ahmed’s Center for Countering Digital Hate and Melford’s Global Disinformation Index, as well as Breton and the DSA—have been targeted: They are working to disrupt this “unholy deal where the platforms profit economically and the US administration is profiting in dividing the European Union,” she said. 

The travel restrictions intentionally send a strong message to all groups that work to hold tech companies accountable. "It's purely vindictive," Greene says. "It's designed to deter people from pursuing further work on disinformation or anti-hate work." (The State Department did not respond to a request for comment.)

And ultimately, this has a broad effect on who feels safe enough to participate online. 

Ballon pointed to research that shows the “silencing effect” of harassment and hate speech, not only for “those who have been attacked,” but also for those who witness such attacks. This is particularly true for women, who tend to face more online hate that is also more sexualized and violent. It’ll only be worse if groups like HateAid get deplatformed or lose funding. 

Von Hodenberg put it more bluntly: “They reclaim freedom of speech for themselves when they want to say whatever they want, but they silence and censor the ones that criticize them.”

Still, the HateAid directors insist they’re not backing down. They say they’re taking “all advice” they have received seriously, especially with regard to “becoming more independent from service providers,” Ballon told me.

“Part of the reason that they don’t like us is because we are strengthening our clients and empowering them,” said von Hodenberg. “We are making sure that they are not succeeding, and not withdrawing from the public debate.” 

“So when they think they can silence us by attacking us? That is just a very wrong perception.”

Martin Sona contributed reporting.

Correction: This article originally misstated the name of Germany’s far right party.

The man who made India digital isn’t done yet

Nandan Nilekani can’t stop trying to push India into the future. He started nearly 30 years ago, masterminding an ongoing experiment in technological state capacity that started with Aadhaar—the world’s largest digital identity system. Aadhaar means “foundation” in Hindi, and on that bedrock Nilekani and people working with him went on to build a sprawling collection of free, interoperating online tools that add up to nothing less than a digital infrastructure for society. They cover government services, digital payments, banking, credit, and health care, offering convenience and access that would be eye-popping in wealthy countries a tenth of India’s size. In India those systems are called, collectively, “digital public infrastructure,” or DPI.

At 70 years old, Nilekani should be retired. But he has a few more ideas. India’s electrical grid is creaky and prone to failure; Nilekani wants to add a layer of digital communication to stabilize it. And then there’s his idea to expand the financial functions in DPI to the rest of the world, creating a global digital backbone for commerce that he calls the “finternet.”

“It sounds like some crazy stuff,” Nilekani says. “But I think these are all big ideas, which over the next five years will have demonstrable, material impact.” As a last act in public life, why not Aadhaarize the world?

India’s digital backbone

Today, a farmer in a village in India, hours from the nearest bank, can collect welfare payments or transfer money by simply pressing a thumb to a fingerprint scanner at the local store. Digitally authenticated copies of driver’s licenses, birth certificates, and educational records can be accessed and shared via a digital wallet that sits on your smartphone.

In big cities, where cash is less and less common (just trying to break a bill can be a major headache), mobile payments are ubiquitous, whether you're buying a TV from a high-street retailer or a coconut from a roadside cart. There are no fees, and any payment app or bank account can send money to any other. The country's chaotic patchwork of public and private hospitals has begun digitizing all its medical records and uploading them to a nationwide platform. On the Open Network for Digital Commerce (ONDC), people can do online shopping searches on whatever app they want, and the results show sellers from an array of other platforms, too. The idea is to liberate small merchants and consumers from the walled gardens of online shopping giants like Amazon and the domestic giant Flipkart.
At the heart of all these tools is Aadhaar. The system gives every Indian a 12-digit number that, in combination with either a fingerprint scan or an SMS code, allows access to government services, SIM cards, basic bank accounts, digital signature services, and social welfare payments. The Indian government says that since its inception in 2009, Aadhaar has saved 3.48 trillion rupees ($39.2 billion) by boosting efficiency, bypassing corrupt officials, and cutting other types of fraud. The system is controversial and imperfect—a database with 1.4 billion people in it comes with inherent security and privacy concerns. Still, in the most populous nation on Earth, a big portion of the bureaucracy anyone might encounter in daily life just happens in the cloud.

Nilekani was behind much of that innovation, marshaling an army of civil servants, tech companies, and volunteers. Now he sees it in action every day. “It reinforces that what you have done is not some abstract stuff, but real stuff for real people,” he says.

By his own admission, Nilekani is entering the twilight of his career. But it’s not over yet. He’s now “chief mentor” for the India Energy Stack (IES), a government initiative to connect the fragmented data held by companies responsible for generating, transmitting, and distributing power. India’s grids are unstable and disparate, but Nilekani hopes an Aadhaar-like move will help. IES aims to give unique digital identities not only to power plants and energy storage facilities but even to rooftop solar panels and electric vehicles. All the data attached to those things—device characteristics, energy rating certifications, usage information—will be in a common, machine-readable format and shared on the same open protocols.

Ideally, that’ll give grid operators a real-time view of energy supply and demand. And if it works, it might also make it simpler and cheaper for anyone to connect to the grid—even everyday folks selling excess power from their rooftop solar rigs, says RS Sharma, the chair of the project and Nilekani’s deputy while building Aadhaar.

Nilekani’s other side hustle is even more ambitious. His idea for a global “finternet” combines Aadhaarization with blockchains—creating digital representations called tokens for not only financial instruments like stocks or bonds but also real-world assets like houses or jewelry. Anyone from a bank to an asset manager or even a company could create and manage these tokens, but Nilekani’s team especially hopes the idea will help poor people trade their assets, or use them as loan collateral—expanding financial services to those who otherwise couldn’t access them. 

It sounds almost wild-eyed. Yet the finternet project has 30 partners across four continents. Nilekani says it’ll launch next year.

A call to service

Nilekani was born in Bengaluru, in 1955. His family was middle class and, Nilekani says, “seized with societal issues and challenges.” His upbringing was also steeped in the kind of socialism espoused by the newish nation’s first prime minister, Jawaharlal Nehru.

After studying electrical engineering at the Indian Institute of Technology, in 1981 Nilekani helped found Infosys, an information technology company that pioneered outsourcing and helped turn India into the world's IT back office. In 1999, he was part of a government-appointed task force trying to upgrade the infrastructure and services in Bengaluru, then emerging as India's tech capital. But Nilekani was at the time leery of being viewed as just another techno-optimist. "I didn't want to be seen as naive enough to believe that tech could solve everything," he says.

Nilekani holds a device to one eye
Nilekani demonstrates the biometric technology at the heart of Aadhaar, the system he spearheaded that provides a unique digital identity number to all Indians.
PALLAVA BAGLA/CORBIS/GETTY IMAGES

Seeing the scope of the problem changed his mind—sclerotic bureaucracy, endemic corruption, and financial exclusion were intractable without technological solutions. In 2008 Nilekani published a book, Imagining India: The Idea of a Renewed Nation. It was a manifesto for an India that could leapfrog into a networked future.

And it got him a job. At the time more than half the births in the country were not recorded, and up to 400 million Indians had no official identity document. Manmohan Singh, the prime minister, asked Nilekani to put into action an ill-defined plan to create a national identity card.

Nilekani's team made a still-controversial decision to rely on biometrics. A system based on people's fingerprints and iris scans meant nobody could sign up twice, and nobody had to carry paperwork. In terms of execution, it was like trying to achieve industrialization but skip a steam era. Deployment required a monumental data collection effort, as well as new infrastructure that could compare each new enrollment against hundreds of millions of existing records in seconds. At its peak, the Unique Identification Authority of India (UIDAI), the agency responsible for administering Aadhaar, was registering more than a million new users a day. That happened with a technical team of just about 50 developers, and in the end cost slightly less than half a billion dollars.

Buoyed by their success, Nilekani and his allies started casting around for other problems they could solve using the same digitize-the-real-world playbook. “We built more and more layers of capability,” Nilekani says, “and then this became a wider-ranging idea. More grandiose.”

While other countries were building digital backbones with full state control (as in China) or in public-private partnerships that favored profit-seeking corporate approaches (as in the US), Nilekani thought India needed something else. He wanted critical technologies in areas like identity, payments, and data sharing to be open and interoperable, not monopolized by either the state or private industry. So the tools that make up DPI use open standards and open APIs, meaning that anyone can plug into the system. No single company or institution controls access—no walled gardens.

A contested legacy

Of course, another way to look at putting financial and government services and records into giant databases is that it’s a massive risk to personal liberty. Aadhaar, in particular, has faced criticism from privacy advocates concerned about the potential for surveillance. Several high-profile data breaches of Aadhaar records held by government entities have shaken confidence in the system, most recently in 2023, when security researchers found hackers selling the records of more than 800 million Indians on the dark web.

Technically, this shouldn’t matter—an Aadhaar number ought to be useless without biometric or SMS-based authentication. It’s “a myth that this random number is a very powerful number,” says Sharma, the onetime co-lead of UIDAI. “I don’t have any example where somebody’s Aadhaar disclosure would have harmed somebody.” 

One problem is that in everyday use, Aadhaar users often bypass the biometric authentication system. To ensure that people use a genuine address at registration, Aadhaar administrators give people their numbers on an official-looking document. Indians co-opted this paperwork as a proof of identity on its own. And since the document—Indians even call it an “Aadhaar card”—doesn’t have an expiration date, it’s possible for people to get multiple valid cards with different details by changing their address or date of birth. That’s quite a loophole. In 2018 an NGO report found that 67% of people using Aadhaar to open a bank account relied on this verification document rather than digital authentication. That report was the last time anyone published data on the problem, so nobody knows how bad it is today. “Everybody’s living on anecdotes,” says Kiran Jonnalagadda, an anti-Aadhaar activist.

In other cases, flaws in Aadhaar’s biometric technology have caused people to be denied essential government services. The government downplays these risks, but again, it’s impossible to tell how serious the problem is because the UIDAI won’t disclose numbers. “There needs to be a much more honest acknowledgment, documentation, and then an examination of how those exclusions can be mitigated,” says Apar Gupta, director of the Internet Freedom Foundation.

Beyond the potential for fraud, it’s also true that the free and interoperable tools haven’t reached all the people who might find them useful, especially among India’s rural and poorer populations. Nilekani’s hopes for openness haven’t fully come to pass. Big e-commerce companies still dominate, and retail sales on ONDC have been dropping steadily since 2024, when financial incentives to participate began to taper off. The digital payments and government documentation services have hundreds of millions of users, numbers most global technology companies would love to see—but in a country as large as India, that leaves a lot of people out.

Going global

The usually calm Nilekani bristles at that criticism; he has heard it before. Detractors overlook the dysfunction that preceded these efforts, he says, and he remains convinced that technology was the only way forward. “How do you move a country of 1.4 billion people?” he asks. “There’s no other way you can fix it.”

The proof is self-evident, he says. Indians have opened more than 500 million basic bank accounts using Aadhaar; before it came into use, millions of those people had been completely unbanked. Earlier this year, India’s Unified Payments Interface overtook Visa as the world’s largest real-time payments system. “There is no way Aadhaar could have worked but for the fact that people needed this thing,” Nilekani says. “There’s no way payments would have worked without people needing it. So the voice of the people—they’re voting with their feet.”

A street vendor in Kolkata displays a QR code that lets him get paid via India’s Unified Payments Interface, part of the digital public infrastructure Nilekani helped build. The Reserve Bank of India says more than 657 million people used the system in the financial year 2024–2025.
DEBAJYOTI CHAKRABORTY/NURPHOTO/GETTY IMAGES

That need might be present in countries beyond India. “Many countries don’t have a proper birth registration system. Many countries don’t have a payment system. Many countries don’t have a way for data to be leveraged,” Nilekani says. “So this is a very powerful idea.” It seems to be spreading. Foreign governments regularly send delegations to Bengaluru to study India’s DPI tools. The World Bank and the United Nations have tried to introduce the concept to other developing countries equally eager to bring their economies into the digital age. The Gates Foundation has established projects to promote digital infrastructure, and Nilekani has set up and funded a network of think tanks, research institutes, and other NGOs aimed at, as he says, “propagating the gospel.”

Still, he admits he might not live to see DPI go global. “There are two races,” Nilekani says. “My personal race against time and India’s race against time.” He worries that the economic potential of its vast young population—the so-called demographic dividend—could turn into a demographic disaster. Despite rapid growth, gains have been uneven. Youth unemployment remains stubbornly high—a particularly volatile problem in a large and economically turbulent country. 

“Maybe I’m a junkie,” he says. “Why the hell am I doing all this? I think I need it. I think I need to keep curious and alive and looking at the future.” But that’s the thing about building the future: It never quite arrives.

Edd Gent is a journalist based in Bengaluru, India.

The State of AI: Is China about to win the race? 

The State of AI is a collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution.

In this conversation, the FT’s tech columnist and Innovation Editor John Thornhill and MIT Technology Review’s Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy.

John Thornhill writes:

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

Here in the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: “Never bet against America.” He is right that for more than two centuries, no other “incubator for unleashing human potential” has matched the US.

Today, however, China has the means, motive, and opportunity to commit the equivalent of technological murder. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be just as rash to bet against China.

The data highlights the trends. In AI publications and patents, China leads. By 2023, China accounted for 22.6% of all citations, compared with 20.9% from Europe and 13% from the US, according to Stanford University’s Artificial Intelligence Index Report 2025. As of 2023, China also accounted for 69.7% of all AI patents. True, the US maintains a strong lead in the top 100 most cited publications (50 versus 34 in 2023), but its share has been steadily declining. 

Similarly, the US outdoes China in top AI research talent, but the gap is narrowing. According to a report from the US Council of Economic Advisers, 59% of the world’s top AI researchers worked in the US in 2019, compared with 11% in China. But by 2022 those figures were 42% and 28%. 

The Trump administration’s tightening of restrictions for foreign H-1B visa holders may well lead more Chinese AI researchers in the US to return home. The talent ratio could move further in China’s favor.

Regarding the technology itself, US-based institutions produced 40 of the world’s most notable AI models in 2024, compared with 15 from China. But Chinese researchers have learned to do more with less, and their strongest large language models—including the open-source DeepSeek-V3 and Alibaba’s Qwen 2.5-Max—surpass the best US models in terms of algorithmic efficiency.

Where China is really likely to excel in the future is in applying these open-source models. The latest report from Air Street Capital shows that China has now overtaken the US in monthly downloads of AI models. In AI-enabled fintech, e-commerce, and logistics, China already outstrips the US.

Perhaps the most intriguing—and potentially the most productive—applications of AI may yet come in hardware, particularly in drones and industrial robotics. With the research field evolving toward embodied AI, China’s advantage in advanced manufacturing will shine through.

Dan Wang, the tech analyst and author of Breakneck, has rightly highlighted the strengths of China’s engineering state in developing manufacturing process knowledge—even if he has also shown the damaging effects of applying that engineering mentality in the social sphere. “China has been growing technologically stronger and economically more dynamic in all sorts of ways,” he told me. “But repression is very real. And it is getting worse in all sorts of ways as well.”

I’d be fascinated to hear from you, Caiwei, about your take on the strengths and weaknesses of China’s AI dream. To what extent will China’s engineered social control hamper its technological ambitions? 

Caiwei Chen responds:

Hi, John!

You’re right that the US still holds a clear lead in frontier research and infrastructure. But “winning” AI can mean many different things. Jeffrey Ding, in his book Technology and the Rise of Great Powers, makes a counterintuitive point: For a general-purpose technology like AI, long-term advantage often comes down to how widely and deeply technologies spread across society. And China is in a good position to win that race (although “murder” might be pushing it a bit!).

Chips will remain China’s biggest bottleneck. Export restrictions have throttled access to top GPUs, pushing buyers into gray markets and forcing labs to recycle or repair banned Nvidia stock. Even as domestic chip programs expand, the performance gap at the very top remains.

Yet those same constraints have pushed Chinese companies toward a different playbook: pooling compute, optimizing efficiency, and releasing open-weight models. DeepSeek-V3’s training run, for example, used just 2.6 million GPU-hours—far below the scale of US counterparts. Meanwhile, Alibaba’s Qwen models now rank among the most downloaded open weights globally, and companies like Zhipu and MiniMax are building competitive multimodal and video models.

China’s industrial policy means new models can move from lab to implementation fast. Local governments and major enterprises are already rolling out reasoning models in administration, logistics, and finance. 

Education is another advantage. Major Chinese universities are implementing AI literacy programs in their curricula, embedding skills before the labor market demands them. The Ministry of Education has also announced plans to integrate AI training for children of all school ages. I’m not sure the phrase “engineering state” fully captures China’s relationship with new technologies, but decades of infrastructure building and top-down coordination have made the system unusually effective at pushing large-scale adoption, often with far less social resistance than you’d see elsewhere. The use at scale, naturally, allows for faster iterative improvements.

Meanwhile, Stanford HAI’s 2025 AI Index found Chinese respondents to be the most optimistic in the world about AI’s future—far more optimistic than populations in the US or the UK. It’s striking, given that since the pandemic China’s economy has slowed for the first time in over two decades. Many in government and industry now see AI as a much-needed spark. Optimism can be powerful fuel, but whether it can persist through slower growth is still an open question.

Social control remains part of the picture, but a different kind of ambition is taking shape. The Chinese AI founders in this new generation are the most globally minded I’ve seen, moving fluidly between Silicon Valley hackathons and pitch meetings in Dubai. Many are fluent in English and in the rhythms of global venture capital. Having watched the last generation wrestle with the burden of a Chinese label, they now build companies that are quietly transnational from the start.

The US may still lead in speed and experimentation, but China could shape how AI becomes part of daily life, both at home and abroad. Speed matters, but speed isn’t the same thing as supremacy.

John Thornhill replies:

You’re right, Caiwei, that speed is not the same as supremacy (and “murder” may be too strong a word). And you’re also right to amplify the point about China’s strength in open-weight models and the US preference for proprietary models. This is not just a struggle between two different countries’ economic models but also between two different ways of deploying technology.  

Even OpenAI’s chief executive, Sam Altman, admitted earlier this year: “We have been on the wrong side of history here and need to figure out a different open-source strategy.” That’s going to be a very interesting subplot to follow. Who’s called that one right?

Further reading on the US-China competition

There’s been a lot of talk about how people may be using generative AI in their daily lives. This story from the FT’s visual story team explores the reality.

From China, FT reporters ask how long Nvidia can maintain its dominance over Chinese rivals

When it comes to real-world uses, toys and companion devices are a novel but fast-growing application of AI that is gaining traction in China—and is also heading to the US. This MIT Technology Review story explored the trend.

The once-frantic data center buildout in China has hit walls, and as sanctions and AI demand shift, this MIT Technology Review story took an on-the-ground look at how stakeholders are adapting.

How to measure the returns on R&D spending

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Given the draconian cuts to US federal funding for science, including the administration’s proposal to reduce the 2026 budgets of the National Institutes of Health by 40% and the National Science Foundation by 57%, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway? To answer that, it’s important to look at both successful returns and investments that went nowhere.

Sure, it’s easy to argue for the importance of spending on science by pointing out that many of today’s most useful technologies had their origins in government-funded R&D. The internet, CRISPR, GPS—the list goes on and on. All true. But this argument ignores all the technologies that received millions in government funding and haven’t gone anywhere—at least not yet. We still don’t have DNA computers or molecular electronics. Never mind the favorite examples cited by contrarian politicians of seemingly silly or frivolous science projects (think shrimp on treadmills).

While cherry-picking success stories helps illustrate the glories of innovation and the role of science in creating technologies that have changed our lives, it provides little guidance for how much we should spend in the future—and where the money should go.

A far more useful approach to quantifying the value of R&D is to look at its return on investment (ROI). A favorite metric for stock pickers and PowerPoint-wielding venture capitalists, ROI weighs benefits versus costs. If applied broadly to the nation’s R&D funding, the same kind of thinking could help account for both the big wins and all the money spent on research that never got out of the lab.

The problem is that it’s notoriously difficult to calculate returns for science funding—the payoffs can take years to appear and often take a circuitous route, so the eventual rewards are distant from the original funding. (Who could have predicted Uber as an outcome of GPS? For that matter, who could have predicted that the invention of ultra-precise atomic clocks in the late 1940s and 1950s would eventually make GPS possible?) And forget trying to track the costs of countless failures or apparent dead ends.

But in several recent papers, economists have approached the problem in clever new ways, and though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments that the government can make.

This story is part of MIT Technology Review’s “America Undone” series, examining how the foundations of US success in science and innovation are currently under threat. You can read the rest here.

That might not seem very surprising. We’ve long thought that innovation and scientific advances are key to our prosperity. But the new studies provide much-needed details, supplying systematic and rigorous evidence for the impact that R&D funding, including public investment in basic science, has on overall economic growth.

And the magnitude of the benefits is surprising.

Bang for your buck

In “A Calculation of the Social Returns to Innovation,” Benjamin Jones, an economist at Northwestern University, and Lawrence Summers, a Harvard economist and former US Treasury secretary, calculate the effects of the nation’s total R&D spending on gross domestic product and our overall standard of living. They’re taking on the big picture, and it’s ambitious because there are so many variables. But they are able to come up with a convincing range of estimates for the returns, all of them impressive.

On the conservative end of their estimates, says Jones, investing $1 in R&D yields about $5 in returns—defined in this case as additional GDP per person (basically, how much richer we become). Change some of the assumptions—for example, by attempting to account for the value of better medicines and improved health care, which aren’t fully captured in GDP—and you get even larger payoffs.

While the $5 return is at the low end of their estimates, it’s still “a remarkably good investment,” Jones says. “There aren’t many where you put in $1 and get $5 back.”
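The arithmetic behind that figure is simple. Here is a minimal sketch of the return-on-investment calculation described above; the function name and the default multiplier of 5 (the conservative end of Jones and Summers’ estimates) are illustrative, not from the paper itself:

```python
def rd_social_return(investment: float, gdp_gain_per_dollar: float = 5.0):
    """Illustrative only: using the conservative estimate that each $1 of
    R&D spending yields about $5 in additional GDP per person.
    ROI is defined the usual way: (benefit - cost) / cost."""
    benefit = investment * gdp_gain_per_dollar
    roi = (benefit - investment) / investment
    return benefit, roi

benefit, roi = rd_social_return(1.0)
print(f"${benefit:.0f} in added GDP per $1 invested, ROI = {roi:.0%}")
# prints: $5 in added GDP per $1 invested, ROI = 400%
```

In stock-picker terms, a $1-in, $5-out investment is a 400% return—the comparison Jones is drawing when he calls it “a remarkably good investment.”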

That’s the return for the nation’s overall R&D funding. But what do we get for government-funded R&D in particular? Andrew Fieldhouse, an economist at Texas A&M, and Karel Mertens at the Federal Reserve Bank of Dallas looked specifically at how changes in public R&D spending affect the total factor productivity (TFP) of businesses. A favorite metric of economists, TFP is driven by new technologies and innovative business know-how—not by adding more workers or machines—and is the main driver of the nation’s prosperity over the long term.

The economists tracked changes in R&D spending at five major US science funding agencies over many decades to see how the shifts eventually affected private-sector productivity. They found that the government was getting a huge bang for its nondefense R&D buck.

The benefits begin kicking in after around five to 10 years and often have a long-lasting impact on the economy. Nondefense public R&D funding has been responsible for 20% to 25% of all private-sector productivity growth in the country since World War II, according to the economists. It’s an astonishing number, given that the government invests relatively little in nondefense R&D. For example, its spending on infrastructure, another contributor to productivity growth, has been far greater over those years.

The large impact of public R&D investments also provides insight into one of America’s most troubling economic mysteries: the slowdown in productivity growth that began in the 1970s, which has roiled the country’s politics as many people face stunted living standards and limited financial prospects. Their research, says Fieldhouse, suggests that as much as a quarter of that slowdown was caused by a decline in public R&D funding that happened roughly over the same time.

After reaching a high of 1.86% of GDP in 1964, federal R&D spending began dropping. Starting in the early 1970s, TFP growth also began to decline, from above 2% a year in the late 1960s to somewhere around 1% since the 1970s (with the exception of a rise during the late 1990s), roughly tracking the spending declines with a lag of a few years.

If in fact the productivity slowdown was at least partially caused by a drop in public R&D spending, it’s evidence that we would be far richer today if we had kept up a higher level of science investment. And it also flags the dangers of today’s proposed cuts. “Based on our research,” says Fieldhouse, “I think it’s unambiguously clear that if you actually slash the budget of the NIH by 40%, if you slash the NSF budget by 50%, there’s going to be a deceleration in US productivity growth over the next seven to 10 years that will be measurable.”

Out of whack

Though the Trump administration’s proposed 2026 budget would slash science budgets to an unusual degree, public funding of R&D has actually been in slow decline for decades. As a share of the economy, federal funding of science is at its lowest level in 70 years, accounting for only around 0.6% of GDP.

Even as public funding has dropped, business R&D investments have steadily risen. Today businesses spend far more than the government; in 2023, companies invested about $700 billion in R&D while the US government spent $172 billion, according to data from the NSF’s statistical agency. You might think, Good—let companies do research. It’s more efficient. It’s more focused. Keep the government out of it.

But there is a big problem with that argument. Publicly funded research, it turns out, tends to lead to relatively more productivity growth over time because it skews more toward fundamental science than the applied work typically done by companies.

In a new working paper called “Public R&D Spillovers and Productivity Growth,” Arnaud Dyèvre, an assistant professor of economics at HEC Paris, documents the broad and often large impacts of so-called knowledge spillovers—the benefits that flow to others from work done by the original research group. Dyèvre found that the spillovers of publicly funded R&D have three times more impact on productivity growth across businesses and industries than those from private R&D funding.

The findings are preliminary, and Dyèvre is still updating the research—much of which he did as a postdoc at MIT—but he says it does suggest that the US “is underinvesting in fundamental R&D,” which is heavily funded by the government. “I wouldn’t be able to tell you exactly which percentage of R&D in the US needs to be funded by the government or what percent needs to be funded by the private sector. We need both,” he says. But, he adds, “the empirical evidence” suggests that “we’re out of balance.”

The big question

Getting the balance of funding for fundamental science and applied research right is just one of the big questions that remain around R&D funding. In mid-July, Open Philanthropy and the Alfred P. Sloan Foundation, both nonprofit organizations, jointly announced that they planned to fund a five-year “pop-up journal” that would attempt to answer many of the questions still swirling around how to define and optimize the ROI of research funding.

“There is a lot of evidence consistent with a really high return to R&D, which suggests we should do more of it,” says Matt Clancy, a senior program officer at Open Philanthropy. “But when you ask me how much more, I don’t have a good answer. And when you ask me what types of R&D should get more funding, we don’t have a good answer.”

Pondering such questions should keep innovation economists busy for the next several years. But there is another mystifying piece of the puzzle, says Northwestern’s Jones. If the returns on R&D investments are so high—the kind that most venture capitalists or investors would gladly take—why isn’t the government spending more?

Jones, who served as a senior economic advisor in the Obama administration, says discussions over R&D budgets in Washington are often “a war of anecdotes.” Science advocates cite the great breakthroughs that resulted from earlier government funding, while budget hawks point to seemingly ludicrous projects or spectacular failures. Both have plenty of ammunition. “People go back and forth,” says Jones, “and it doesn’t really lead to anywhere.”

The policy gridlock is rooted in the very nature of fundamental research. Some of today’s science will lead to great advances. And there will be countless failures; a lot of money will be wasted on fruitless experiments. The problem, of course, is that when you’re deciding to fund new projects, it’s impossible to predict what the outcome will be, even in the case of odd, seemingly silly science. Guessing just what research will or will not lead to the next great breakthrough is a fool’s errand.

Take the cuts in the administration’s proposed fiscal 2026 budget for the NSF, a leading funder of basic science. The administration’s summary begins with the assertion that its NSF budget “is prioritizing investments that complement private-sector R&D and offer strong potential to drive economic growth and strengthen U.S. technological leadership.” So far, so good. It cites the government’s commitment to AI and quantum information science. But dig deeper and you will see the contradictions in the numbers.

Not only is NSF’s overall budget cut by 57%, but funding for physical sciences like chemistry and materials research—fields critical to advancing AI and quantum computers—has also been blown apart. Funding for the NSF’s mathematical and physical sciences program was reduced by 67%. The directorate for computer and information science and engineering fared little better; its research funding was cut by 66%.

There is a great deal of hope among many in the science community that Congress, when it passes the actual 2026 budget, will at least partially reverse these cuts. We’ll see. But even if it does, why attack R&D funding in the first place? It’s impossible to answer that without plunging into the messy depths of today’s chaotic politics. And it is equally hard to know whether the recent evidence gathered by academic economists on the strong returns to R&D investments will matter when it comes to partisan policymaking.

But at least those defending the value of public funding now have a far more productive way to make their argument, rather than simply touting past breakthroughs. Even for fiscal hawks voicing concerns about budget deficits, the recent work provides a compelling and simple conclusion: More public funding for basic science is a sound investment that makes us more prosperous.

Taiwan’s “silicon shield” could be weakening

One winter afternoon in a conference room in Taipei, a pair of twentysomething women dragged their friend across the floor. Lying on the ground in checkered pants and a brown sweatshirt, she was pretending to be either injured or dead. One friend picked her up by her arms, the other grabbed hold of her legs, and they managed to move her, despite momentarily breaking character to laugh at the awkwardness of the exercise. The three women had paid approximately $40 to spend their Sunday here, undergoing basic training to prepare for a possibility every Taiwanese citizen has an opinion about: Will China invade? 

Taiwanese politics increasingly revolves around that question. China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled). As China’s economic and military might has grown, some analysts believe the country now has the capacity to quarantine Taiwan whenever it wants, making the decision a calculation of costs and benefits.

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications. Bloomberg Economics estimates that a blockade would cost the global economy, including China, $5 trillion in the first year alone.

The island, which is approximately the size of Maryland, owes its remarkably disproportionate chip dominance to the inventiveness and prowess of one company: Taiwan Semiconductor Manufacturing Company, or TSMC. The chipmaker, which reached a market capitalization of $1 trillion in July, has contributed more than any other to Taiwan’s irreplaceable role in the global semiconductor supply chain. Its clients include Apple and the leading chip designer Nvidia. Its chips are in your iPhone, your laptop, and the data centers that run ChatGPT. 

For a company that makes what amounts to an invisible product, TSMC holds a remarkably prominent role in Taiwanese society. I’ve heard people talk about it over background noise in loud bars in the southern city of Tainan and listened to Taipei cab drivers connect Taiwan’s security situation to the company, unprompted. “Taiwan will be okay,” one driver told me as we sped by the national legislature, “because TSMC.” 

The idea is that world leaders (particularly the United States)—aware of the island’s critical role in the semiconductor supply chain—would retaliate economically, and perhaps militarily, if China were to attack Taiwan. That, in turn, deters Beijing. “Because TSMC is now the most recognizable company of Taiwan, it has embedded itself in a notion of Taiwan’s sovereignty,” says Rupert Hammond-Chambers, president of the US-Taiwan Business Council. 

Now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Facing pressure from Washington, TSMC is investing heavily in building out manufacturing capacity at its US hub in Arizona. It is also building facilities in Japan and Germany in addition to maintaining a factory in mainland China, where it has been producing less advanced legacy chips since 2016.

In Taiwan, there is a worry that expansion abroad will dilute the company’s power at home, making the US and other countries less inclined to feel Taiwan is worthy of defense. TSMC’s investments in the US have come with no guarantees for Taiwan in return, and high-ranking members of Taiwan’s opposition party have accused the ruling Democratic Progressive Party (DPP) of gambling with the future of the island. It doesn’t help that TSMC’s expansion abroad coincides with what many see as a worrying attitude in the White House. On top of his overarching “America First” philosophy, Donald Trump has declined to comment on the specific question of whether the US would intervene if China attempted to take Taiwan by force. “I don’t want to ever put myself in that position,” he said in February. 

At the same time, Beijing’s interest in Taiwan has continued unabated. While China is making progress toward semiconductor self-sufficiency, it’s currently in a transition period, with companies relying on foreign-made chips manufactured in Taiwan—some in compliance with export controls and some smuggled in. Meanwhile, the CCP persistently suggests that seizing the island would bring about a kind of family reunion. “It is the common aspiration and sacred responsibility of all Chinese sons and daughters to realize the complete reunification of the motherland,” reads a statement released by the foreign ministry after Nancy Pelosi’s controversial 2022 visit to Taiwan. Though it’s impossible to know the full scope of Beijing’s motivations, there is also obvious strategic appeal: Controlling the island would give China deep-water access, which is critical for naval routes and submarines. Plus, it could significantly disrupt American AI firms’ access to advanced chips.

While China ramps up militarily, Taiwan is trying to make itself hard to ignore. The government is increasingly portraying the island as strategically essential to the global community, with semiconductors as its primary offering. “The international community must certainly do everything in its power to avoid a conflict in the Taiwan Strait; there is too great a cost,” Taiwanese president Lai Ching-te said in an interview earlier this year with Japan’s Nippon Television. Parts of the international community are hearing that message—and seizing the opportunity it presents: earlier this month, defense tech company Anduril Industries announced it is opening a new office in Taiwan, where it will be expanding partnerships and selling autonomous munitions. 

For its part, the chip industry is actively showing its commitment to Taiwan. While other tech CEOs attended Trump’s second inauguration, for instance, Nvidia chief executive Jensen Huang met instead with TSMC’s chairman, and the company announced in May that its overseas headquarters would be in Taipei. In recent years, US government officials have also started paying more attention to Taiwan’s security situation and its interconnectedness with the chip industry. “There was a moment when everybody started waking up to the dependence on TSMC,” says Bonnie Glaser, managing director of the German Marshall Fund’s Indo-Pacific Program. The realization emerged, she says, over the last decade but was underscored in March of 2021, when Phil Davidson, then leader of the United States Indo-Pacific Command, testified to the Senate Armed Services Committee that there could be an invasion by 2027. Parallel to the security threat is the potential issue of overdependence, since so much chipmaking capability is concentrated in Taiwan.

For now, Taiwan is facing a tangle of interests and time frames. China presents its claim to Taiwan as a historical inevitability, albeit one with an uncertain timeline, while the United States’ relationship with the island is focused on an AI-driven future. But from Taiwan’s perspective, the fight for its fate is playing out right now, amid unprecedented geopolitical instability. The next few years will likely determine whether TSMC’s chipmaking dominance is enough to convince the world Taiwan is worth protecting.

Innovation built on interconnectivity 

TSMC is an uncontested success story. Its founder, Morris Chang, studied and worked in the United States before he was lured to Taiwan to start a new business on the promise of state support and inexpensive yet qualified labor. Chang founded TSMC in 1987 on the basis of his innovative business model. Rather than design and produce chips in-house, as was the norm, TSMC would act as a foundry: Clients would design the chips, and TSMC would make them. 

This focus on manufacturing allowed TSMC to optimize its operations, building up process knowledge and, eventually, outperforming competitors like Intel. It also freed up other businesses to go “fabless,” meaning they could stop maintaining their own semiconductor factories, or fabs, and throw their resources behind other parts of the chipmaking enterprise. Tapping into Taiwan’s domestic electronics supply chain proved effective and efficient for TSMC. Throughout the 1990s and early 2000s, global demand for semiconductors powering personal computers and other devices continued to grow. TSMC thrived.

Then, in 2022, the US imposed export controls on China that restricted its access to advanced chips. Taiwan was forced to either comply, by cutting off Chinese clients, or risk losing the support of the country that was home to 70% of its client base—and, possibly, 100% of its hopes for external military support in the event of an attack. 

Soon after, Chang announced that he believed globalization and free markets were “almost dead.” The nearly three years since have shown he was onto something. For one thing, in contrast to President Biden’s pursuit of supply chain integration with democratic allies, President Trump’s foreign policy is characterized by respect for big, undemocratic powers and punitive tariffs against both America’s rivals and its friends. Trump has largely abandoned Biden’s economic diplomacy with European and Asian allies but kept his China-targeted protectionism—and added his trademark transactionalism. In an unprecedented move earlier this month, the administration allowed Nvidia and AMD to sell previously banned chips to China on the condition that the companies pay the government 15% of revenues made from China sales. 

Protectionism, it turns out, spurs self-reliance. China’s government has been making a massive effort to build up its domestic chip production capabilities—a goal that was identified at the beginning of Xi’s rise but has been turbocharged in the wake of Washington’s export controls. 

Any hope the US has for significantly expanding domestic chip production comes from its friends—TSMC first among them. The semiconductor industry developed as a global endeavor out of practicality, playing to the strengths of each region: design in the US and manufacturing in Asia, with key inputs from Europe central to the process. Yet the US government, entrenched in its “tech war” with China, is now dead set on deglobalizing the chip supply chain, or at least onshoring as much of it as possible. There’s just one hiccup: The best chip manufacturer isn’t American. It’s TSMC. Even if some manufacturing happens in Arizona, the US still relies on Taiwan’s chipmaking ecosystem. And copying that supply chain outside Taiwan could be harder than the current administration imagines.

Squarely in the middle

Taiwan’s modern security uncertainties stem from the long-contested issue of the island’s sovereignty. After losing the First Sino-Japanese War in the late 1800s, the Qing dynasty forfeited Taiwan to Japanese imperial control. It was Japan’s “model colony” until 1945, when postwar negotiations resulted in its transfer to the Republic of China under Chiang Kai-shek of the Nationalist Party, known as the KMT. The insurgent CCP under Mao Zedong ultimately defeated the Nationalists in a civil war fought on the mainland until 1949. Chiang and many of his party’s defeated generals decamped to Taiwan, controlling it under martial law for nearly 40 years.

Taiwan held its first free democratic elections in 1996, kicking off a two-party rivalry between the KMT, which favors closer relations with Beijing, and the Democratic Progressive Party (DPP), which opposes integration with China. Kitchen-table issues like economic growth are central to Taiwanese elections, but so is the overarching question of how best to handle the threat of invasion, which has persisted for nearly 80 years. The DPP is increasingly calling for raising defense spending and civilian preparedness to make sure Taiwan is ready for the worst, while the KMT supports direct talks with Beijing.  

In March 2025, President Trump and TSMC CEO C.C. Wei jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona.
REBECCA NOBLE/BLOOMBERG VIA GETTY IMAGES

Meanwhile, Chinese military incursions around Taiwan—known as “gray zone” tactics because they fall short of acts of war—are increasingly frequent. In May, Taiwan’s defense ministry reportedly estimated that Chinese warplanes were entering Taiwan’s air defense zone more than 200 times a month, up from fewer than 10 times per month five years ago. China has conducted drills mirroring the actions needed for a full-scale invasion or a blockade, which would cut Taiwan off from the outside world. Chinese military officials are now publicly talking about achieving a blockade, says Lyle Morris, an expert on foreign policy and national security at the Asia Society Policy Institute. “They’re punishing Lai and the DPP,” Morris says. Meanwhile, the CCP has its own people to answer to: When it comes to the Taiwan issue, Morris says, “Beijing is probably quite worried about the people of China being upset if they aren’t hawkish enough or if they come out looking weak.” Indeed, in response to Lai’s recent policy statements, including one declaring that China is a “hostile foreign force,” Gao Zhikai, a prominent scholar in China who opposes Taiwanese independence, recently wrote, “The reunification with the motherland cannot be endlessly delayed. Decisive action must be taken.” 

Intimidation from China has made some ordinary Taiwanese citizens more concerned; according to a recent poll conducted by a defense-focused think tank, 51% think defense spending should be increased (although 65% of respondents said they thought an attack within five years was “unlikely”). No matter how much money Taipei spends, the sheer military imbalance between China and Taiwan means Taiwan would need help. But especially in the wake of Ukraine’s experience, many believe US aid would be contingent on whether Taiwan demonstrates the will to defend itself. “Based on war games, Taiwan would have to hold out for a month before the US could potentially intervene,” says Iris Shaw, director of the DPP mission in the US. And support from Taiwan’s neighbors like Japan might be contingent on US involvement.

But how likely is the US to intervene in such a scenario? The author Craig Addison popularized the argument that Taiwan’s fate is tied to its chip production prowess in his 2001 book Silicon Shield: Taiwan’s Protection Against Chinese Attack. Back then, Addison wrote that although the US had been intentionally vague about whether it would go to war to protect the island, America’s technological reliance on “a safe and productive Taiwan” made it highly probable that Washington would intervene. President Joe Biden deviated from those decades of calculated ambiguity by asserting multiple times that America would defend the island in the event of an attack. Yet now, Trump seems to have taken the opposite position, possibly presenting an opportunity for Beijing. 

TSMC in the Trump era 

In many ways, Taiwan finds itself in a catch-22. It feels the need to cozy up to the US for protection, yet that defensive maneuver is arguably risky in itself. It’s a common belief in Taiwan that forging stronger ties to the US could be dangerous. According to a public opinion poll released in January, 34.7% of Taiwanese believe that a “pro-US” policy provokes China and will cause a war. 

But the Lai administration’s foreign policy is “inexorably intertwined with the notion that a strong relationship with the US is essential,” says Hammond-Chambers.

Bolstering US support may not be the only reason TSMC is building fabs outside Taiwan. As the company readily points out, the majority of its customers are American. TSMC is also responding to its home base’s increasingly apparent land and energy limitations: finding land to build new fabs sometimes causes rifts with Taiwanese people who, for example, don’t want their temples and ancestral burial sites repurposed as science parks. Taiwan also relies on imports to meet more than 95% of its energy needs, and the dominant DPP has pledged to phase out nuclear power, Taiwan’s most viable yet most hotly contested low-carbon energy source. Geopolitical tensions compound these physical constraints: Even if TSMC would never say as much, it’s fairly likely that if China did attack Taiwan, the firm would rather remain operational in other countries than be wiped out completely. 

However, building out TSMC’s manufacturing capabilities outside Taiwan will not be easy. “The ecosystem they created is truly unique. It’s a function of the talent pipeline, the culture, and laws in Taiwan; you can’t easily replicate it anywhere,” says Glaser. TSMC has 2,500 Taiwan-based suppliers. Plenty are within a couple of hours’ drive or an even shorter trip on high-speed rail. Taiwan has built a fully operational chip cluster, the product of four decades of innovation, industrial policy, and labor.

In many ways, Taiwan finds itself in a catch-22. It feels the need to cozy up to the US for protection, yet that defensive maneuver is arguably risky in itself.

As a result, it’s unclear whether TSMC will be able to copy its model and paste it into the suburbs of Phoenix, where it has 3,000 employees working on chip manufacturing. “Putting aside the geopolitical factor, they wouldn’t have expanded abroad,” says Feifei Hung, a researcher at the Asia Society. Rather than standalone facilities, the Arizona fabs are “appendages of TSMC that happen to be in Arizona,” says Paul Triolo, partner and tech policy lead at the international consulting firm DGA-Albright Stonebridge Group. When the full complex is operational, it will represent only a small percentage of TSMC’s overall capacity, most of which will remain in Taiwan. Triolo doubts the US buildout will yield results similar to what TSMC has built in Taiwan: “Arizona ain’t that yet, and never will be.” 

Still, the second Trump administration has placed even more pressure on the company to “friendshore”—without providing any discernible signs of friendship. During this spring’s tariff frenzy, the administration threatened to hit Taiwan with a 32% “reciprocal” tariff, a move that was then paused and revived at 20% in late July (and was still being negotiated as of press time). The administration has also announced a 100% tariff on semiconductor imports, with the caveat that companies with US-based production, like TSMC, are exempt—though it’s unclear whether imports from critical suppliers in Taiwan will be tariffed. And the threat of a chip-specific tariff remains. “This is in line with [Trump’s] rhetoric of restoring manufacturing in the US and using tariffs as a one-size-fits-all tool to force it,” says Nancy Wei, a trade and supply chain analyst at the Eurasia Group. The US is also apparently considering levying a $1 billion fine against TSMC after TSMC-made chips were reportedly found in some Huawei devices.

Despite these kinds of maneuvers, TSMC has been steadfast in its attempts to get on Washington’s good side. In March, Trump and TSMC’s CEO, C.C. Wei, jointly announced that the firm will make an additional $100 billion investment (on top of a previously announced $65 billion) in TSMC’s US hub in Arizona. The pledge represents the largest single source of foreign direct investment into the US, ever. While the deal was negotiated during Biden’s term, Trump was happy to take credit for ensuring that “the most powerful AI chips will be made right here in America.” 

The Arizona buildout will also include an R&D facility—a critical element for tech transfer and intellectual-property development. Then there’s the very juicy cherry on top: TSMC announced in April that once all six new fabs are operational, 30% of its most advanced chips will be produced in Arizona. Up until then, the thinking was that US-based production would remain a generation or two behind. It looks as if the administration’s public and, presumably, private arm-twisting has paid off. 

Meanwhile, as Trump cuts government programs and subsidies while demanding the “return” of manufacturing to the US, it’s TSMC that is running a technician apprenticeship program in Arizona to create good American jobs. TSMC’s leaders, Triolo says, must question how serious the Trump administration is about long-term industrial policy. They’re probably asking themselves, he says, “Do they understand what it takes to support the semiconductor industry, like our government does?” 

Dealing with an administration that is so explicitly “America first” represents “one of the biggest challenges in history for Taiwanese companies,” says Thung-Hong Lin, a sociology researcher at the Taipei-based Academia Sinica. Semiconductor manufacturing depends on stability and predictability. Trump has so far offered TSMC no additional incentives supporting its US expansion—and started a trade war that has directly affected the semiconductor industry, partly by introducing lasting uncertainty. “Trump’s tariffs have set off a new, more intensified bifurcation of semiconductor supply chains,” says Chris Miller, author of Chip War. For now, Miller says, TSMC must navigate a world in which the US and China are both intense competitors and, despite trade restrictions, important clients. 

Warring narratives

China has been taking advantage of these changes to wage a war of disinformation. In response to Nancy Pelosi’s visit to Taiwan in 2022, when she was US Speaker of the House, Beijing sent warships, aircraft, and propaganda across the Taiwan Strait. Hackers using Chinese software infiltrated the display screens in Taiwan’s 7-Eleven stores to show messages telling “warmonger Pelosi” to “get out of Taiwan.” That might not be an act of war, but it’s close; “7,” as the chain is known locally, is an institution of daily life on the island. It is not difficult to imagine how a similar tactic might be used to spread more devastating disinformation, falsely alleging, for example, that Taiwan’s military has surrendered to China during a future crisis. 

Taiwan is “perpetually on the front lines” of cyberattacks from China, says Francesca Chen, a cybersecurity systems analyst at Taiwan’s Ministry of Digital Affairs. According to Taiwan’s National Security Bureau, instances of propaganda traceable to China grew by 60% in 2024 over the previous year, reaching 2.16 million. 

Visitors take selfies outside the TSMC Museum of Innovation in Hsinchu, Taiwan.
ANNABELLE CHIH/GETTY IMAGES

Over the last few years, online discussion of TSMC’s investments in the US “has become a focal point” of China’s state-sponsored disinformation campaigns aimed at Taiwan, Chen says. They claim TSMC is transferring its most advanced technology, talent, and resources to the US, “weakening Taiwan’s economic lifeline and critical position in global supply chains.” Key terms include “hollowing out Taiwan” and “de-Taiwanization.” This framing depicts TSMC’s diversification as a symbol of Taiwan’s vulnerability, Chen says. The idea is to exploit real domestic debates in Taiwan to generate heightened levels of internal division, weakening social cohesion and undermining trust in the government.

Chinese officials haven’t been shy about echoing these messages out in the open: After the most recent US investment announcement in March, a spokesperson from China’s Taiwan Affairs Office accused Taiwan’s DPP of handing over TSMC as a “gift” to the US. (“TSMC turning into USMC?” asked a state media headline.) Former Taiwanese president Ma Ying-jeou posted an eerily similar criticism, alleging that TSMC’s US expansion amounted to “selling” the chipmaker in exchange for protection.

TSMC’s expansion abroad could become a major issue in Taiwan’s 2028 presidential election. It plays directly into party politics: The KMT can accuse the DPP of sacrificing Taiwan’s technology assets to placate the US, and the DPP can accuse the KMT of cozying up with China, even as Beijing’s military incursions become a more evident part of daily life. It remains to be seen whether TSMC’s shift to the US will ultimately protect or weaken Taiwan—or have no effect on the island’s security and sovereignty. For now at least, China’s aspirations loom large. 

To Beijing, unequivocally, Taiwan does not equal TSMC. Instead, it represents the final, unfulfilled stage of the Communist Party’s revolutionary struggle. Framed that way, China’s resolve to take the island could very well be nonnegotiable. That would mean if Taiwan is going to maintain a shield that protects it from the full weight of China’s political orthodoxy, it may need to be made of something much stronger than silicon. 

Johanna M. Costigan is a writer and editor focused on technology and geopolitics in the US, China, and Taiwan. She writes the newsletter The Long Game.

Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies

In 1940, a fresh-faced Ronald Reagan starred as US Secret Service agent Brass Bancroft in Murder in the Air, an action film centered on a fictional “superweapon” that could stop enemy aircraft midflight. A mock newspaper in the movie hails it as the “greatest peace argument ever invented.” The experimental weapon is “the exclusive property of Uncle Sam,” Reagan’s character declares.

More than 40 years later, this cinematic vision—an American superweapon capable of neutralizing assaults and ushering in global peace—became a real-life centerpiece of Reagan’s presidency. Some have suggested that Reagan’s Strategic Defense Initiative (SDI), a quixotic plan for a space-based missile shield, may have been partly inspired by his silver-screen past; indeed, the concept was so fantastical it’s now better known by its Hollywood-referencing nickname, “Star Wars.”

In January 2024, Donald Trump revived the space-shield dream at a primary campaign rally in Laconia, New Hampshire, using the Star Wars nickname that Reagan hated. It didn’t work in the 1980s, Trump said, because the technology wasn’t there. But times have changed. 

Whether in Golden Age Hollywood or Trump’s impromptu dramatizations, the dream of a missile shield is animated by its sheer cinematic allure.

“I’ve seen so many things. I’ve seen shots that you wouldn’t even believe,” Trump said. He acted out a scene of missile defense experts triangulating the path of an incoming weapon. “Ding, ding, ding, ding,” he said, as he mimed typing on a keyboard. “Missile launch? Psshing!!” He raised his hand to indicate the rising missile, then let it fall to signal the successful interception: “Boom.” 

Trump has often expressed admiration for Israel’s Iron Dome, an air defense system that can intercept short-range rockets and artillery over the small nation and that is funded in part by the United States. At the rally, he pledged to “build an Iron Dome over our country, a state-of-the-art missile defense shield made in the USA … a lot of it right here in New Hampshire, actually.” 

Within a week of his inauguration, President Trump began working toward this promise by issuing an executive order to develop “The Iron Dome for America,” which was rebranded the “Golden Dome” a month later. The eruption of a revived conflict between Israel and Iran in June—including Trump’s decision to strike Iran’s nuclear facilities—has only strengthened the case for an American version of the Iron Dome in the eyes of the administration.

CHIP SOMODEVILLA/GETTY IMAGES

The Golden Dome has often been compared to SDI for its futuristic sheen, its aggressive form of protection, and its reflection of the belief that an impenetrable shield is the cheat code to global peace. Both efforts demonstrate the performative power of spectacle in defense policy, especially when wielded by deft showmen like Reagan and Trump. Whether in Golden Age Hollywood or Trump’s impromptu dramatizations, the dream of a missile shield is animated by its sheer cinematic allure, often rendered in deceptively simple concept art depicting a society made immune to catastrophic strikes. 

But in the complicated security landscape confronting the world today, is spectacle the same as safety?

“Missile defense is an area where facts and fiction blend,” says Anette Stimmer, a lecturer in international relations at the University of St Andrews who has researched SDI. “A lot is up to interpretation by all the actors involved.”


Trump’s view is simple: Space is as much a warfighting domain as land, air, and ocean, and therefore the US must assert its dominance there with advanced technologies. This position inspired the creation of the US Space Force in his first term, and Trump has now redoubled his efforts with the ongoing development of the Golden Dome.  

General Michael Guetlein, whom Trump has appointed to lead the Golden Dome project, argued that America’s foes, including China and Russia, have forced the nation’s hand by continually pushing limits in their own weapons programs. “While we have been focused on peace overseas, our adversaries have been quickly modernizing their nuclear forces, building out ballistic missiles capable of hosting multiple warheads; building out hypersonic missiles capable of attacking the United States within an hour and traveling at 6,000 miles an hour; building cruise missiles that can navigate around our radar and our defenses; and building submarines that can sneak up on our shores; and, worse yet, building space weapons,” Guetlein said in May.

“It is time that we change that equation and start doubling down on the protection of the homeland,” he said. “Golden Dome is a bold and aggressive approach to hurry up and protect the homeland from our adversaries. We owe it to our children and our children’s children to protect them and afford them a quality of life that we have all grown up enjoying.”

With that vision in mind, Trump’s executive order outlines a host of goals for missile defense, some of which support bipartisan priorities like protecting supply chains and upgrading sensor arrays. The specific architecture of the Golden Dome is still being hammered out, but the initial executive order envisions a multi-tiered system of new sensors and interceptors—on the ground, in the air, and in space—that would work together to counter the threat of attacks from ballistic, hypersonic, and cruise missiles. The system would be coordinated in part by artificial-intelligence models trained for real-time threat detection and response. 

The technology that links the Golden Dome directly to SDI hinges on one key bullet point in the order that demands the “development and deployment of proliferated space-based interceptors capable of boost-phase intercept.” This language revives Reagan’s dream of deploying hundreds of missile interceptors in orbit to target missiles in the boost phase right after liftoff, a window of just a few minutes when the projectiles are slower and still near the attacker’s territory.

Space weapons are an attractive option for targeting the boost phase because interceptors need to be close enough to the launching missile to hit it. If a nation fired off long-range missiles from deep in its territory, the nearest ground- or air-based interceptors could be thousands of miles from the launch site. Space interceptors, in contrast, would be just a few hundred miles above the ascending missiles, allowing for a much faster reaction time. But though the dream of boost-phase interception dates back decades, these maneuvers have never been operationally demonstrated from ground, air, or space.

“It’s a really hard problem that hasn’t been solved,” says Laura Grego, senior scientist and research director at the Union of Concerned Scientists’ global security program.

The US is currently protected by the Ground-Based Midcourse Defense (GMD), which consists of 44 interceptor missiles split between bases in Alaska and California, along with a network of early-warning sensors on the ground, at sea, and in orbit. Tests suggest that the GMD would have about a 50% success rate at intercepting missiles.

Initiated by President Bill Clinton in the late ’90s and accelerated by President George W. Bush in the 2000s, the GMD is intended mainly to defend against rogue states like North Korea, which has nuclear weapons and intercontinental ballistic missiles (ICBMs) capable of reaching the US. A secondary focus is Iran, which does not currently have a nuclear weapon or ICBMs. Still, the GMD is built to anticipate a possible future in which Iran develops those capabilities. 

The GMD is not designed to protect the US from the sort of large-scale and coordinated missile attacks that Russia and China could lob across the world. The Bush administration instead favored a focus on strategic deterrence with these peer nations, an approach that the Obama and Biden administrations continued. In addition to the GMD, the Pentagon and its international partners maintain regional defense systems to counter threats in conflict hot spots or attacks on critical infrastructure. All these networks are designed to intercept missiles during their midcourse cruise phase, as they hurtle through the sky or space, or during their terminal or reentry phase, as they approach their targets. The GMD has cost upward of $63 billion since it was initiated, and the US spends about an additional $20 billion to $30 billion annually on its array of other missile defense systems. 

In May, Trump was presented with several design options for the Golden Dome and selected a plan with a price tag of $175 billion and a schedule for full deployment by the end of his term. The One Big Beautiful Bill, signed into law on July 4, approved an initial $24.4 billion in funding for it. Space technologies and launch access have become much more affordable since the 1980s, but many analysts still think the projected cost and timeline are not realistic. The Congressional Budget Office, a nonpartisan federal agency, projected that the cost of the space-based interceptors could total from $161 billion to $542 billion over the course of 20 years. The wide range can be explained by the current lack of specifics on those orbital interceptors’ design and number.

Reintroducing the idea of space-based interceptors is “probably the most controversial piece of Golden Dome,” says Leonor Tomero, who served as deputy assistant secretary of defense for nuclear and missile defense policy in the Biden administration. 

“There are a lot of improvements that we can and should make on missile defense,” she continues. “There’s a lot of capability gaps I think we do need to address. My concern is the focus on reviving Star Wars and SDI. It’s got very significant policy implications, strategic stability implications, in addition to cost implications and technology feasibility challenges.” 

Indeed. Regardless of whether the Golden Dome materializes, the program is already raising geopolitical anxieties reminiscent of the Cold War era. Back then, the US had one main adversary: the Soviet Union. Now, it confronts a roiling multipolarity of established and nascent nuclear powers. Many of them have expressed dismay over the about-face on American missile defense strategy, which was previously predicated on arms reduction and deterrence.

“Here we are, despite years of saying we are not going to do this—that it is technically out of reach, economically unsustainable, and strategically unwise,” Grego says. “Overnight, we’re like, ‘No, actually, we’re doing it.’” 

The fact that we “blew up that logic” will “have a big impact on whether or not the program actually succeeds in creating the vision that it lays out,” she adds.

Russian and Chinese officials called the Golden Dome “deeply destabilizing in nature” in a joint statement in May, and North Korea’s foreign ministry warned it could “turn outer space into a potential nuclear war field.”  

Reagan, by all accounts, believed that SDI would be the ultimate tool of peace for all nations, and he even offered to share the technology with the Soviet leader, Mikhail Gorbachev. Trump, in contrast, sees Golden Dome as part of his “America First” brand. He has lamented that past American leaders supported the development of other missile defense projects abroad while neglecting to build similar security measures for their own country. The Golden Dome is both an expression of Trump’s belief that the world is leeching off America and a bargaining chip in negotiations toward a new power balance; Canada could be covered by the shield for free, he has said—in exchange for becoming the 51st state.

Trump has argued that America has been both demographically diluted by unchecked immigration and financially depleted by freeloading allied nations—undermining its security on both internal and external fronts. His first term’s marquee promise to build a wall on the southern US border, paid for by Mexico, aimed to address the former problem. That administration did build more physical barriers along the border (though US taxpayers, not Mexico, footed the bill). But just as important, the wall emerged as a symbolic shorthand for tougher immigration control. 

The Golden Dome is the second-term amplification of that promise, a wall that expands the concept of the “border” to the entire American airspace. Trump has projected an image of his envisioned space missile shield as a literal dome that could ward off coordinated attacks, including boost-phase interception from space and cruise- and terminal-phase interception by ground and air assets. When he announced the selected plan from the Resolute Desk in May, he sat in front of a mockup that depicted a barrage of incoming missiles being thwarted by the nationwide shield, depicted with a golden glow.

The Golden Dome’s orbital interceptors are supposedly there to target the early boost phase of missiles on or near the launch site, not over the United States. But the image of a besieged America, repelling enemy fire from the heavens, provides the visual and cinematic idea of both threat and security that Trump hopes to impress on the public.  

“This administration, and MAGA world, thinks about itself as being victimized by immigrants, government waste, leftist professors, and so on,” says Edward Tabor Linenthal, a historian who examined public narratives about SDI in his 1989 book Symbolic Defense: The Cultural Significance of the Strategic Defense Initiative. “It’s not much of a jump to be victimized by too many nations getting nuclear weapons.” 


Even in our era of entrenched political polarization, there is support across party lines for upgrading and optimizing America’s missile defense systems. No long-range missile has ever struck US soil, but an attack would be disastrous for the nation and the world. 

“We’ve come a long way in terms of missile defense,” says Tomero. “There has been a lot of bipartisan consensus on increasing regional missile defense, working with our allies, and making sure that the missile defense interceptors we have work.”

outline of the United States inside a corked glass bottle with scorpions

SHOUT

Trump has challenged that consensus with his reversion to the dream of a space shield. He is correct that SDI failed to materialize in part because its envisioned technologies were out of reach, from a financial and engineering standpoint, in the 1980s. But the controversy that erupted around SDI—and that tarnished it with the derisive name “Star Wars”—stemmed just as much from its potential geopolitical disruptiveness as from its fantastical techno-optimism. 

“This idea of a missile shield, also back when Reagan proposed it, has a huge popular appeal, because who wouldn’t want to be able to defend your country from nuclear weapons? It is a universal dream,” says Stimmer. “It requires a bit more digging in and understanding to see that actually, this vision depends a lot on technological feasibility and on how others perceive it.” 

Reagan maintained a steadfast conviction that this shield of space-based interceptors would render nuclear weapons “impotent and obsolete,” ushering in “world peace,” as he said in his March 1983 speech announcing SDI. The doctrine of mutually assured destruction could be replaced by mutually assured survival, he argued.

Amid nuclear tensions, J. Robert Oppenheimer compared the US and the Soviet Union to “two scorpions in a bottle.” Now there are many more scorpions.

But Gorbachev saw the space-based shield as an offensive weapon, since it would give the US a first-strike advantage. The imbalance, he warned, could spark a weapons race in space, a domain that had been spared from overt military conflicts. As a result, the initiative would only destabilize the world order and interrupt the progress of arms control and nuclear nonproliferation efforts. 

Reagan’s insistence on SDI as the only route to world peace may have blocked opportunities to advance that goal through more practical and cost-effective avenues, such as diplomacy and arms control. At the 1986 Reykjavik Summit, Reagan and Gorbachev came very close to an arms control agreement that might have eliminated all ballistic missiles and nuclear weapons. The sticking point was Reagan’s refusal to give up SDI. 

“It is not the Strategic Defense Initiative; it’s a strategic defense ideology,” says Linenthal. He mentions the famous metaphor used by J. Robert Oppenheimer, a central figure of the Manhattan Project, who compared the United States and the Soviet Union to “two scorpions in a bottle.” Either scorpion could kill the other, but only at the probable cost of its own life. 

Reagan felt a “tremendously powerful impetus” to escape Oppenheimer’s metaphor, Linenthal notes: “It was a new kind of deliverance that would resolve it all. Of course, now there are many more scorpions, so it has to be a bigger bottle.”

A true believer, Reagan never abandoned SDI in spite of cost overruns and public backlash. President Bill Clinton redirected the program in 1993 by shifting gears from global to regional missile defense, a focus that remained fairly consistent for decades—until Trump took center stage. Now, the Golden Dome has flipped that logic on its head, risking a possible escalation of military tensions in outer space.

Tomero describes a “nightmare scenario” in which adversaries attack the Golden Dome’s space infrastructure, leaving the orbital environment filled with debris that renders the defense system, among countless other space assets, inoperable. 

“Having a one-sided capability that is very threatening to our adversaries is obviously going to create very dangerous stability issues,” she says. It could “lead to inadvertent escalation and miscalculation and, I think, lower the threshold to conflict and nuclear war.” 


As president, Trump has channeled the boardroom antics that once resuscitated his celebrity status on The Apprentice. But armed adversaries, long wary of America’s position on missile defense, don’t have the luxury of wondering whether it’s all real or just more stagecraft. 

“What makes Trump so difficult to read for others is his unpredictability,” Stimmer says. “This, just by itself, destabilizes things, because no one knows what he’ll actually do.”

Trump has described the Golden Dome as nearly impenetrable by missile attacks, evoking a clear symbolic return to an American golden age where we can all feel safe again.

“All of them will be knocked out of the air,” as “the success rate is very close to 100%,” he said at the project’s official launch in May. “We will truly be completing the job that President Reagan started 40 years ago, forever ending the missile threat to the American homeland.”

Becky Ferreira is a science reporter based in upstate New York, and author of First Contact, a book about the search for alien life, which will be published in September. 

Why the AI moratorium’s defeat may signal a new political era

The “Big, Beautiful Bill” that President Donald Trump signed into law on July 4 was chock-full of controversial policies—Medicaid work requirements, increased funding for ICE, and an end to tax credits for clean energy and vehicles, to name just a few. But one highly contested provision was missing. Just days earlier, during a late-night voting session, the Senate had killed the bill’s 10-year moratorium on state-level AI regulation. 

“We really dodged a bullet,” says Scott Wiener, a California state senator and the author of SB 1047, a bill that would have made companies liable for harms caused by large AI models. It was vetoed by Governor Gavin Newsom last year, but Wiener is now working to pass SB 53, which establishes whistleblower protections for employees of AI companies. Had the federal AI regulation moratorium passed, he says, that bill likely would have been dead.

The moratorium could also have killed laws that have already been adopted around the country, including a Colorado law that targets algorithmic discrimination, laws in Utah and California aimed at making AI-generated content more identifiable, and other legislation focused on preserving data privacy and keeping children safe online. Proponents of the moratorium, such as OpenAI and Senator Ted Cruz, have said that a “patchwork” of state-level regulations would place an undue burden on technology companies and stymie innovation. Federal regulation, they argue, is a better approach—but there is currently no federal AI regulation in place.

Wiener and other state lawmakers can now get back to work writing and passing AI policy, at least for the time being—with the tailwind of a major moral victory at their backs. The movement to defeat the moratorium was impressively bipartisan: 40 state attorneys general signed a letter to Congress opposing the measure, as did a group of over 250 Republican and Democratic state lawmakers. And while congressional Democrats were united against the moratorium, the final nail in its coffin was hammered in by Senator Marsha Blackburn of Tennessee, a Tea Party conservative and Trump ally who backed out of a compromise with Cruz at the eleventh hour.

The moratorium fight may have signaled a bigger political shift. “In the last few months, we’ve seen a much broader and more diverse coalition form in support of AI regulation generally,” says Amba Kak, co–executive director of the AI Now Institute. After years of relative inaction, politicians are getting concerned about the risks of unregulated artificial intelligence. 

Granted, there’s an argument to be made that the moratorium’s defeat was highly contingent. Blackburn appears to have been motivated almost entirely by concerns about children’s online safety and the rights of country musicians to control their own likenesses; state lawmakers, meanwhile, were affronted by the federal government’s attempt to defang legislation that they had already passed.

And even though powerful technology firms such as Andreessen Horowitz and OpenAI reportedly lobbied in favor of the moratorium, continuing to push for it might not have been worth it to the Trump administration and its allies—at least not at the expense of tax breaks and entitlement cuts. Baobao Zhang, an associate professor of political science at Syracuse University, says that the administration may have been willing to give up on the moratorium in order to push through the rest of the bill by its self-imposed Independence Day deadline.

Andreessen Horowitz did not respond to a request for comment. OpenAI noted that the company was opposed to a state-by-state approach to AI regulation but did not respond to specific questions regarding the moratorium’s defeat. 

It’s almost certainly the case that the moratorium’s breadth, as well as its decade-long duration, helped opponents marshal a diverse coalition to their side. But that breadth isn’t incidental—it’s related to the very nature of AI. Blackburn, who represents country musicians in Nashville, and Wiener, who represents software developers in San Francisco, have a shared interest in AI regulation precisely because such a powerful and general-purpose tool has the potential to affect so many people’s well-being and livelihood. “There are real anxieties that are touching people of all classes,” Kak says. “It’s creating solidarities that maybe didn’t exist before.”

Faced with outspoken advocates, concerned constituents, and the constant buzz of AI discourse, politicians from both sides of the aisle are starting to argue for taking AI extremely seriously. One of the most prominent anti-moratorium voices was Marjorie Taylor Greene, who voted for the version of the bill containing the moratorium before admitting that she hadn’t read it thoroughly and committing to opposing the moratorium moving forward. “We have no idea what AI will be capable of in the next 10 years,” she posted last month.

And two weeks ago, Pete Buttigieg, President Biden’s transportation secretary, published a Substack post entitled “We Are Still Underreacting on AI.” “The terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly,” he wrote.

Wiener has noticed a shift among his peers. “More and more policymakers understand that we can’t just ignore this,” he says. But awareness is several steps short of effective legislation, and regulation opponents aren’t giving up the fight. The Trump administration is reportedly working on a slate of executive actions aimed at making more energy available for AI training and deployment, and Cruz says he is planning to introduce his own anti-regulation bill.

Meanwhile, proponents of regulation will need to figure out how to channel the broad opposition to the moratorium into support for specific policies. It won’t be a simple task. “It’s easy for all of us to agree on what we don’t want,” Kak says. “The harder question is: What is it that we do want?”

What does it mean for an algorithm to be “fair”?

Back in February, I flew to Amsterdam to report on a high-stakes experiment the city had recently conducted: a pilot program for what it called Smart Check, which was its attempt to create an effective, fair, and unbiased predictive algorithm to try to detect welfare fraud. But the city fell short of its lofty goals—and, with our partners at Lighthouse Reports and the Dutch newspaper Trouw, we tried to get to the bottom of why. You can read about it in our deep dive published last week.

For an American reporter, it’s been an interesting time to write a story on “responsible AI” in a progressive European city—just as ethical considerations in AI deployments appear to be disappearing in the United States, at least at the national level. 

For example, a few weeks before my trip, the Trump administration rescinded Biden’s executive order on AI safety and DOGE began turning to AI to decide which federal programs to cut. Then, more recently, House Republicans passed a 10-year moratorium on US states’ ability to regulate AI (though it has yet to be passed by the Senate). 

What all this points to is a new reality in the United States where responsible AI is no longer a priority (if it ever genuinely was). 

But this has also made me think more deeply about the stakes of deploying AI in situations that directly affect human lives, and about what success would even look like. 

When Amsterdam’s welfare department began developing the algorithm that became Smart Check, the municipality followed virtually every recommendation in the responsible-AI playbook: consulting external experts, running bias tests, implementing technical safeguards, and seeking stakeholder feedback. City officials hoped the resulting algorithm could avoid the worst types of harm that discriminatory AI systems had inflicted elsewhere over nearly a decade. 

After talking to a large number of people involved in the project, others who would potentially be affected by it, and some experts who did not work on it, I find it hard not to wonder whether the city could ever have succeeded in its goals when neither “fairness” nor even “bias” has a universally agreed-upon definition. The city was treating these issues as technical ones that could be answered by reweighting numbers and figures—rather than as political and philosophical questions that society as a whole has to grapple with.

On the afternoon that I arrived in Amsterdam, I sat down with Anke van der Vliet, a longtime advocate for welfare beneficiaries who served on what’s called the Participation Council, a 15-member citizen body that represents benefits recipients and their advocates.

The city had consulted the council during Smart Check’s development, but van der Vliet was blunt in sharing the committee’s criticisms of the plans. Its members simply didn’t want the program. They had well-placed fears of discrimination and disproportionate impact, given that fraud is found in only 3% of applications.

To the city’s credit, it did respond to some of their concerns and make changes in the algorithm’s design—like removing from consideration factors, such as age, whose inclusion could have had a discriminatory impact. But the city ignored the Participation Council’s main feedback: its recommendation to stop development altogether. 

Van der Vliet and other welfare advocates I met on my trip, like representatives from the Amsterdam Welfare Union, described what they see as a number of challenges faced by the city’s some 35,000 benefits recipients: the indignities of having to constantly re-prove the need for benefits, the increases in cost of living that benefits payments do not reflect, and the general feeling of distrust between recipients and the government. 

City welfare officials themselves recognize the flaws of the system, which “is held together by rubber bands and staples,” as Harry Bodaar, a senior policy advisor to the city who focuses on welfare fraud enforcement, told us. “And if you’re at the bottom of that system, you’re the first to fall through the cracks.”

So the Participation Council didn’t want Smart Check at all, even as Bodaar and others working in the department hoped that it could fix the system. It’s a classic example of a “wicked problem,” a social or cultural issue with no one clear answer and many potential consequences. 

After the story was published, I heard from Suresh Venkatasubramanian, a former tech advisor to the White House Office of Science and Technology Policy who co-wrote Biden’s AI Bill of Rights (now rescinded by Trump). “We need participation early on from communities,” he said, but he added that it also matters what officials do with the feedback—and whether there is “a willingness to reframe the intervention based on what people actually want.” 

Had the city started with a different question—what people actually want—perhaps it might have developed a different algorithm entirely. As the Dutch digital rights advocate Hans De Zwart put it to us, “We are being seduced by technological solutions for the wrong problems … why doesn’t the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?” 

These are the kinds of fundamental questions AI developers will need to consider, or they run the risk of repeating (or ignoring) the same mistakes over and over again.

Venkatasubramanian told me he found the story to be “affirming” in highlighting the need for “those in charge of governing these systems” to “ask hard questions … starting with whether they should be used at all.”

But he also called the story “humbling”: “Even with good intentions, and a desire to benefit from all the research on responsible AI, it’s still possible to build systems that are fundamentally flawed, for reasons that go well beyond the details of the system constructions.” 

To better understand this debate, read our full story here. And if you want more detail on how we ran our own bias tests after the city gave us unprecedented access to the Smart Check algorithm, check out the methodology over at Lighthouse. (For any Dutch speakers out there, here’s the companion story in Trouw.) Thanks to the Pulitzer Center for supporting our reporting. 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How AI can help make cities work better for residents

In recent decades, cities have become increasingly adept at amassing all sorts of data. But that data can have limited impact when government officials are unable to communicate, let alone analyze or put to use, all the information they have access to.

This dynamic has always bothered Sarah Williams, a professor of urban planning and technology at MIT. “We do a lot of spatial and data analytics. We sit on academic papers and research that could have a huge impact on the way we plan and design our cities,” she says of her profession. “It wasn’t getting communicated.”

Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have pushed the narrative and expository bounds of urban planning data using the latest technologies available—making numbers vivid and accessible through human stories and striking graphics. One project she was involved in, on rates of incarceration in New York City by neighborhood, is now in the permanent collection of the Museum of Modern Art in New York. Williams’s other projects have tracked the spread and impact of air pollution in Beijing using air quality monitors and mapped the daily commutes of Nairobi residents using geographic information systems.

In recent years, as AI became more accessible, Williams was intrigued by what it could reveal about cities. “I really started thinking, ‘What are the implications for urban planning?’” she says. These tools have the potential to organize and illustrate vast amounts of data instantaneously. But having more information also increases the risks of misinformation and manipulation. “I wanted to help guide cities in thinking about the positives and negatives of these tools,” she says. 

In 2024, that inquiry led to a collaboration with the city of Boston, which was exploring how and whether to apply AI in various government functions through its Office of Emerging Technology. Over the course of the year, Williams and her team followed along as Boston experimented with several new applications for AI in government and gathered feedback at community meetings.

On the basis of these findings, Williams and the Civic Data Design Lab published the Generative AI Playbook for Civic Engagement in the spring. It’s a publicly available document that helps city governments take advantage of AI’s capabilities and navigate its attendant risks. This kind of guidance is especially important as the federal government takes an increasingly laissez-faire approach to AI regulation. 

“That gray zone is where nonprofits and academia can create research to help guide states and private institutions,” Williams says. 

The lab’s playbook and academic papers touch on a wide range of emerging applications, from virtual assistants for Boston’s procurement division to optimization of traffic signals to chatbots for the 311 nonemergency services hotline. But Williams’s primary focus is how to use this technology for civic engagement. AI could help make the membrane between the government and the public more porous, allowing each side to understand the other a little better. 

Right now, civic engagement is mostly limited to “social media, websites, and community meetings,” she says. “If we can create more tools to help close that gap, that’s really important.”

One of Boston’s AI-powered experiments moves in that direction. The city used a large language model to summarize every vote of the Boston City Council for the past 16 years, creating simple and straightforward descriptions of each measure. This easily searchable database “will help you find what you’re looking for a lot more quickly,” says Michael Lawrence Evans, head of the Office of Emerging Technology. A quick search for “housing” shows the city council’s recent actions to create a new housing accelerator fund and to expand the capacity of migrant shelters. Though not every summary has been double-checked by a human, the tool’s accuracy was confirmed through “a really robust evaluation,” Evans says. 

AI tools may also help governments understand the needs and desires of residents. The community is “already inputting a lot of its knowledge” through community meetings, public surveys, 311 tickets, and other channels, Williams says. Boston, for instance, recorded nearly 300,000 311 requests in 2024 (most were complaints related to parking). New York City recorded 35 million 311 contacts in 2023. It can be difficult for government workers to spot trends in all that noise. “Now they have a more structured way to analyze that data that didn’t really exist before,” she says.

AI can help paint a clearer picture of how these sorts of resident complaints are distributed geographically. At a community meeting in Boston last year, city staff used generative AI to instantly produce a map of pothole complaints from the previous month. 

AI also has the potential to illuminate more abstract data on residents’ desires. One mechanism Williams cites in her research is Polis, an open-source polling platform used by several national governments around the world and a handful of cities and media companies in the US. A recent update allows poll hosts to categorize and summarize responses using AI. It’s something of an experiment in how AI can help facilitate direct democracy—an issue that tool creator Colin Megill has worked on with both OpenAI and Anthropic. 

But even as Megill explores these frontiers, he is proceeding cautiously. The goal is to “enhance human agency,” he says, and to avoid “manipulation” at all costs: “You want to give the model very specific and discrete tasks that augment human authors but don’t replace them.”

Misinformation is another concern as local governments figure out how best to work with AI. Though they’re increasingly common, 311 chatbots have a mixed record on this front. New York City’s chatbot made headlines last year for providing inaccurate and, at times, bizarre information. When an Associated Press reporter asked if it was legal for a restaurant to serve cheese that had been nibbled on by a rat, the chatbot responded, “Yes, you can still serve the cheese to customers if it has rat bites.” (The New York chatbot appears to have improved since then. When asked by this reporter, it responded firmly in the negative to the nibbling rat question.)

These AI mishaps can reduce trust in government—precisely the opposite of the outcome that Williams is pursuing in her work. 

“Currently, we don’t have a lot of trust in AI systems,” she says. “That’s why having that human facilitator is really important.” Cities should be transparent in how they’re using AI and what its limitations are, she says. In doing so, they have an opportunity to model more ethical and responsive ways of using this technology. 

Next on Williams’s agenda is exploring how cities can develop their own AI systems rather than relying on tech giants, which often have a different set of priorities. This technology could be open-source; not only would communities be able to better understand the data they produce, but they would own it. 

“One of the biggest criticisms of AI right now is that the people who are doing the labor are not paid for the work that they do [to train the systems],” she says. “I’m super excited about how communities can own their large language models. Then communities can own the data that’s inside them and allow people to have access to it.”  

Benjamin Schneider is a freelance writer covering housing, transportation, and urban policy.

Senior State Department official sought internal communications with journalists, European officials, and Trump critics

A previously unreported document distributed by senior US State Department official Darren Beattie reveals a sweeping effort to uncover all communications between the staff of a small government office focused on online disinformation and a lengthy list of public and private figures—many of whom are longtime targets of the political right. 

The document, originally shared in person with roughly a dozen State Department employees in early March, requested staff emails and other records with or about a host of individuals and organizations that track or write about foreign disinformation—including Atlantic journalist Anne Applebaum, former US cybersecurity official Christopher Krebs, and the Stanford Internet Observatory—or have criticized President Donald Trump and his allies, such as the conservative anti-Trump commentator Bill Kristol. 

The document also seeks all staff communications that merely reference Trump or people in his orbit, like Alex Jones, Glenn Greenwald, and Robert F. Kennedy Jr. In addition, it directs a search of communications for a long list of keywords, including “Pepe the Frog,” “incel,” “q-anon,” “Black Lives Matter,” “great replacement theory,” “far-right,” and “infodemic.”

For several people who received or saw the document, the broad requests for unredacted information felt like a “witch hunt,” one official says—one that could put the privacy and security of numerous individuals and organizations at risk. 

Beattie, whom Trump appointed in February to be the acting undersecretary for public diplomacy, told State Department officials that his goal in seeking these records was a “Twitter files”-like release of internal State Department documents “to rebuild trust with the American public,” according to a State Department employee who heard the remarks. (Beattie was referring to the internal Twitter documents that were released after Elon Musk bought the platform, in an attempt to prove that the company had previously silenced conservatives. While the effort provided more detail on the challenges and mistakes Twitter had already admitted to, it failed to produce a smoking gun.)

The document, dated March 11, 2025, focuses specifically on records and communications from the Counter Foreign Information Manipulation and Interference (R/FIMI) Hub, a small office in the State Department’s Office of Public Diplomacy that tracked and countered foreign disinformation campaigns; it was created after the Global Engagement Center (GEC), which had the same mission, shut down at the end of 2024. MIT Technology Review broke the news earlier this month that R/FIMI would be shuttered. 

Some R/FIMI staff were at the meeting where the document was initially shared, as were State Department lawyers and staff from the department’s Bureau of Administration, who are responsible for conducting searches to fulfill public records requests. 

Also included among the nearly 60 individuals and organizations caught up in Beattie’s information dragnet are Bill Gates; the open-source journalism outlet Bellingcat; former FBI special agent Clint Watts; Nancy Faeser, the German interior minister; Daniel Fried, a career State Department official and former US ambassador to Poland; Renée DiResta, an expert in online disinformation who led research at Stanford Internet Observatory; and Nina Jankowicz, a disinformation researcher who briefly led the Disinformation Governance Board at the US Department of Homeland Security.

Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

When told of their inclusion in the records request, multiple people expressed alarm that such a list exists at all in an American institution. “When I was in government I’d never done anything like that,” Kristol, a former chief of staff to Vice President Dan Quayle, says. “What would be the innocent reason for doing that?”

Fried echoes this sentiment. “I spent 40 years in the State Department, and you didn’t collect names or demand email records,” says Fried. “I’ve never heard of such a thing”—at least not in the American context, he clarifies. It did remind him of Eastern European “Communist Party minder[s] watching over the untrusted bureaucracy.” 

He adds: “It also approaches the compilation of an enemies list.” 

Targeting the “censorship industrial complex”

Both GEC and R/FIMI, its pared-down successor office, focused on tracking and countering foreign disinformation efforts from Russia, China, and Iran, among others, but GEC was frequently accused—and was even sued—by conservative critics who claimed that it enabled censorship of conservative Americans’ views. A judge threw out one of those claims against GEC in 2022 (while finding that other parts of the Biden administration did exert undue pressure on tech platforms). 

Beattie has also personally promoted these views. Before joining the State Department, he started Revolver News, a website that espouses far-right talking points that often gain traction in certain conservative circles. Among the ideas promoted in Revolver News is that GEC was part of a “censorship industrial complex” aimed at suppressing American conservative voices, even though GEC’s mission was foreign disinformation. This idea has taken hold more broadly; on April 1, the House Foreign Affairs Committee held a hearing focused on GEC, titled “The Censorship-Industrial Complex: The Need for First Amendment Safeguards at the State Department.” 

Most people on the list appear to have focused at some point on tracking or challenging disinformation broadly, or on countering specific false claims, including those related to the 2020 election. A few of the individuals appear primarily to be critics of Trump, Beattie, or others in the right-wing media ecosystem. Many have been the subject of Trump’s public grievances for years. (Trump called Krebs, for instance, a “significant bad-faith actor” in an executive order targeting him earlier this month.)   

Beattie specifically asked for “all documents, emails, correspondence, or other records of communications amongst/between employees, contractors, subcontractors or consultants at the GEC or R/FIMI” since 2017 with all the named individuals, as well as communications that merely referenced them. He sought communications that referenced any of the listed organizations.  

Finally, he sought a list of additional unredacted agency records—including all GEC grants and contracts, as well as subgrants, which are particularly sensitive due to the risks of retaliation to subgrantees, who often work in local journalism, fact-checking, or pro-democracy organizations under repressive regimes. The document also asked for “all documents mentioning” the Election Integrity Partnership, a research collaboration between academics and tech companies that has been a target of right-wing criticism.

Several State Department staffers call the records requests “unusual” and “improper” in their scope. MIT Technology Review spoke to three people who had personally seen the document, as well as two others who were aware of it; we agreed to allow them to speak anonymously due to their fears of retaliation. 

While they acknowledge that previous political appointees have, on occasion, made information requests through the records management system, Beattie’s request was something wholly different. 

Never had “an incoming political appointee” sought to “search through seven years’ worth of all staff emails to see whether anything negative had been said about his friends,” says one staffer. 

Another staffer calls it a “pet project” for Beattie. 

Selective transparency

Beattie delivered the request, which he framed as a “transparency” initiative, to the State Department officials in a conference room at its Washington, D.C., headquarters on a Tuesday afternoon in early March, in the form of an 11-page packet titled “SO [Senior Official] Beattie Inquiry for GEC/R/FIMI Records.” The documents were printed out, rather than emailed.

Labeled “sensitive but unclassified,” the document lays out Beattie’s requests in 12 separate, but sometimes repetitive, bullet points. In total, he sought communications about 16 organizations, including Harvard’s Berkman Klein Center and the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), as well as with and about 39 individuals. 

Notably, this includes several journalists: In addition to Bellingcat and Applebaum, the document also asks for communications with NBC News senior reporter Brandy Zadrozny. 

Press-freedom advocates expressed alarm about the inclusion of journalists on the list, as well as the possibility of their communications being released to the public, which goes “considerably well beyond the scope of what … leak investigations in the past have typically focused on,” says Grayson Clary, a staff attorney at the Reporters Committee for Freedom of the Press. Rather, the effort seems like “a tactic designed to … make it much harder for journalists to strike up those source relationships in the first instance.”

Beattie also requested a search for communications that mentioned Trump and more than a dozen other prominent right-leaning figures. In addition to Jones, Greenwald, and “RFK Jr.,” the list includes “Don Jr.,” Elon Musk, Joe Rogan, Charlie Kirk, Marine Le Pen, “Bolsonaro” (which could cover either Jair Bolsonaro, the former Brazilian president, or his son Eduardo, who is seeking political asylum in the US), and Beattie himself. It also asked for a search for 32 right-wing buzzwords related to abortion, immigration, election denial, and January 6, suggesting a determined effort to find State Department staff who even just discussed such matters. 

(Staffers say they doubt that Beattie will find much, unless, one says, it’s “previous [FOIA] queries from people like Beattie” or discussions about “some Russian or PRC [Chinese] narrative that includes some of this stuff.”)

Multiple sources say State Department employees raised alarms internally about the records requests. They worried about the sensitivity and impropriety of the broad scope of the information requested, particularly because records would be unredacted, as well as about how the search would be conducted: through the eRecords file management system, which makes it easy for administrative staff to search through and retrieve State Department employees’ emails, typically in response to FOIA requests. 

This felt, they say, like a powerful misuse of the public records system—or as Jankowicz, the disinformation researcher and former DHS official, put it, “weaponizing the access [Beattie] has to internal communications in order to upend people’s lives.”

“It stank to high heaven,” one staffer says. “This could be used for retaliation. This could be used for any kind of improper purposes, and our oversight committees should be informed of this.”

Another employee expressed concerns about the request for information on the agency’s subgrantees—who were often on the ground in repressive countries and whose information was closely guarded and not shared digitally, unlike the public lists of contractors and grantees typically available on websites like Grants.gov or USAspending.gov. “Making it known that [they] took money from the United States would put a target on them,” this individual explains. “We kept that information very secure. We wouldn’t even email subgrant names back and forth.”

Several people familiar with the matter say that by early April, Beattie had received many of the documents he’d requested, retrieved through eRecords, as well as a list of grantees. One source says the more sensitive list of subgrantees was not shared.

Neither the State Department nor Beattie responded to requests for comment. A CISA spokesperson emailed, “We do not comment on intergovernmental documents and would refer you back to the State Department.” We reached out to all individuals whose communications were requested and are named here; many declined to comment on the record.

A “chilling effect”

Five weeks after Beattie made his requests for information, the State Department shut down R/FIMI. 

An hour after staff members were informed, US Secretary of State Marco Rubio published a blog post announcing the news on the Federalist, one of the outlets that had sued the GEC over allegations of censorship. He then discussed plans for Beattie to lead a “transparency effort” in an interview with the influential right-wing internet personality Mike Benz.  

“What we have to do now—and Darren will be big involved in that as well—is sort of document what happened … because I think people who were harmed deserve to know that, and be able to prove that they were harmed,” Rubio told Benz.

This is what Beattie—and Benz—have long called for. Many of the names and keywords Beattie included in his request reflect conspiracy theories and grievances promoted by Revolver News, the outlet he founded after he was fired from his job as a speechwriter during the first Trump administration, when CNN reported that he had spoken at a conference with white nationalists. 

Ultimately, the State Department staffers say they fear that a selective disclosure of documents, taken out of context, could be distorted to fit any kind of narrative Beattie, Rubio, or others create. 

Weaponizing any speech they consider critical by deeming it disinformation is not only ironic, says Jankowicz. It will also have “chilling effects” on anyone who conducts disinformation research, and it will result in “less oversight and transparency over tech platforms, over adversarial activities, over, frankly, people who are legitimately trying to disenfranchise US voters.” 

That, she warns, “is something we should all be alarmed about.”