How AI can help make cities work better for residents

In recent decades, cities have become increasingly adept at amassing all sorts of data. But that data can have limited impact when government officials are unable to communicate, let alone analyze or put to use, all the information they have access to.

This dynamic has always bothered Sarah Williams, a professor of urban planning and technology at MIT. “We do a lot of spatial and data analytics. We sit on academic papers and research that could have a huge impact on the way we plan and design our cities,” she says of her profession. “It wasn’t getting communicated.”

Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have pushed the narrative and expository bounds of urban planning data using the latest technologies available—making numbers vivid and accessible through human stories and striking graphics. One project she was involved in, on rates of incarceration in New York City by neighborhood, is now in the permanent collection of the Museum of Modern Art in New York. Williams’s other projects have tracked the spread and impact of air pollution in Beijing using air quality monitors and mapped the daily commutes of Nairobi residents using geographic information systems.

In recent years, as AI became more accessible, Williams was intrigued by what it could reveal about cities. “I really started thinking, ‘What are the implications for urban planning?’” she says. These tools have the potential to organize and illustrate vast amounts of data instantaneously. But having more information also increases the risks of misinformation and manipulation. “I wanted to help guide cities in thinking about the positives and negatives of these tools,” she says. 

In 2024, that inquiry led to a collaboration with the city of Boston, which was exploring how and whether to apply AI in various government functions through its Office of Emerging Technology. Over the course of the year, Williams and her team followed along as Boston experimented with several new applications for AI in government and gathered feedback at community meetings.

On the basis of these findings, Williams and the Civic Data Design Lab published the Generative AI Playbook for Civic Engagement in the spring. It’s a publicly available document that helps city governments take advantage of AI’s capabilities and navigate its attendant risks. This kind of guidance is especially important as the federal government takes an increasingly laissez-faire approach to AI regulation. 

“That gray zone is where nonprofits and academia can create research to help guide states and private institutions,” Williams says. 

The lab’s playbook and academic papers touch on a wide range of emerging applications, from virtual assistants for Boston’s procurement division to optimization of traffic signals to chatbots for the 311 nonemergency services hotline. But Williams’s primary focus is how to use this technology for civic engagement. AI could help make the membrane between the government and the public more porous, allowing each side to understand the other a little better. 

Right now, civic engagement is mostly limited to “social media, websites, and community meetings,” she says. “If we can create more tools to help close that gap, that’s really important.”

One of Boston’s AI-powered experiments moves in that direction. The city used a large language model to summarize every vote of the Boston City Council for the past 16 years, creating simple and straightforward descriptions of each measure. This easily searchable database “will help you find what you’re looking for a lot more quickly,” says Michael Lawrence Evans, head of the Office of Emerging Technology. A quick search for “housing” shows the city council’s recent actions to create a new housing accelerator fund and to expand the capacity of migrant shelters. Though not every summary has been double-checked by a human, the tool’s accuracy was confirmed through “a really robust evaluation,” Evans says. 
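
The article doesn’t detail Boston’s implementation, but the two-stage pattern it describes—generate a plain-language summary of each measure, then make the summaries searchable—can be sketched with standard-library tools. Everything below (the sample summaries, the table schema, and the function names) is hypothetical, and the LLM summarization step is assumed to have already produced the text:

```python
import sqlite3

# Hypothetical stand-ins for LLM-generated summaries of council votes;
# Boston's actual data and schema are not published in the article.
SUMMARIES = [
    (2023, "Establishes a housing accelerator fund to finance affordable units."),
    (2023, "Expands shelter capacity for newly arrived migrant families."),
    (2022, "Approves a traffic-calming study on a major corridor."),
]

def build_index(rows):
    """Load (year, summary) rows into an in-memory SQLite table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE votes (year INTEGER, summary TEXT)")
    conn.executemany("INSERT INTO votes VALUES (?, ?)", rows)
    return conn

def search(conn, term):
    """Case-insensitive substring search over the summaries."""
    cur = conn.execute(
        "SELECT year, summary FROM votes WHERE summary LIKE ? ORDER BY year DESC",
        (f"%{term}%",),
    )
    return cur.fetchall()

conn = build_index(SUMMARIES)
for year, text in search(conn, "housing"):
    print(year, text)
```

A production system would likely use full-text indexing or semantic search rather than `LIKE`, but the payoff is the same: once each measure has a plain-language summary, a one-word query surfaces the relevant votes.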

AI tools may also help governments understand the needs and desires of residents. The community is “already inputting a lot of its knowledge” through community meetings, public surveys, 311 tickets, and other channels, Williams says. Boston, for instance, recorded nearly 300,000 311 requests in 2024 (most were complaints related to parking). New York City recorded 35 million 311 contacts in 2023. It can be difficult for government workers to spot trends in all that noise. “Now they have a more structured way to analyze that data that didn’t really exist before,” she says.
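
At its simplest, spotting trends in hundreds of thousands of 311 tickets is a counting-and-grouping problem. A minimal sketch, using made-up tickets (the categories and neighborhoods here are illustrative, not Boston’s real data):

```python
from collections import Counter

# Toy 311 tickets as (category, neighborhood) pairs; real 311 feeds
# carry far richer fields (timestamps, status, geolocation, etc.).
tickets = [
    ("parking", "Dorchester"), ("parking", "Dorchester"),
    ("pothole", "Roxbury"), ("parking", "Allston"),
    ("streetlight", "Roxbury"), ("pothole", "Dorchester"),
]

# Tally complaints along each dimension to surface patterns.
by_category = Counter(cat for cat, _ in tickets)
by_neighborhood = Counter(hood for _, hood in tickets)

# The most frequent category stands out immediately—in Boston's real
# 2024 data, as the article notes, that was parking complaints.
top_category, count = by_category.most_common(1)[0]
print(top_category, count)
```

Where an LLM adds value over this kind of tallying is upstream: classifying free-text complaints into consistent categories in the first place, so that simple aggregation like the above becomes possible.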

AI can help paint a clearer picture of how these sorts of resident complaints are distributed geographically. At a community meeting in Boston last year, city staff used generative AI to instantly produce a map of pothole complaints from the previous month. 
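
One common way to turn point complaints into a map layer is to snap each coordinate to a grid cell and count complaints per cell. A minimal sketch with invented coordinates (the `grid_cell` helper and sample points are hypothetical, not Boston’s data):

```python
from collections import Counter

# Hypothetical pothole complaints as (latitude, longitude) pairs.
complaints = [
    (42.361, -71.057), (42.362, -71.058), (42.355, -71.065),
    (42.361, -71.058), (42.340, -71.100),
]

def grid_cell(lat, lon, size=0.01):
    """Snap a coordinate to an integer grid-cell index (~1 km squares)."""
    return (int(lat // size), int(lon // size))

# Count complaints per cell—the raw material for a choropleth or heatmap.
heatmap = Counter(grid_cell(lat, lon) for lat, lon in complaints)
for cell, n in heatmap.most_common():
    print(cell, n)
```

The densest cell jumps out of the counts, which is the essence of what the instant pothole map at the community meeting conveyed visually.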

AI also has the potential to illuminate more abstract data on residents’ desires. One mechanism Williams cites in her research is Polis, an open-source polling platform used by several national governments around the world and a handful of cities and media companies in the US. A recent update allows poll hosts to categorize and summarize responses using AI. It’s something of an experiment in how AI can help facilitate direct democracy—an issue that tool creator Colin Megill has worked on with both OpenAI and Anthropic. 

But even as Megill explores these frontiers, he is proceeding cautiously. The goal is to “enhance human agency,” he says, and to avoid “manipulation” at all costs: “You want to give the model very specific and discrete tasks that augment human authors but don’t replace them.”

Misinformation is another concern as local governments figure out how best to work with AI. Though they’re increasingly common, 311 chatbots have a mixed record on this front. New York City’s chatbot made headlines last year for providing inaccurate and, at times, bizarre information. When an Associated Press reporter asked if it was legal for a restaurant to serve cheese that had been nibbled on by a rat, the chatbot responded, “Yes, you can still serve the cheese to customers if it has rat bites.” (The New York chatbot appears to have improved since then. When asked by this reporter, it responded firmly in the negative to the nibbling rat question.)

These AI mishaps can reduce trust in government—precisely the opposite of the outcome that Williams is pursuing in her work. 

“Currently, we don’t have a lot of trust in AI systems,” she says. “That’s why having that human facilitator is really important.” Cities should be transparent in how they’re using AI and what its limitations are, she says. In doing so, they have an opportunity to model more ethical and responsive ways of using this technology. 

Next on Williams’s agenda is exploring how cities can develop their own AI systems rather than relying on tech giants, which often have a different set of priorities. This technology could be open-source; not only would communities be able to better understand the data they produce, but they would own it. 

“One of the biggest criticisms of AI right now is that the people who are doing the labor are not paid for the work that they do [to train the systems],” she says. “I’m super excited about how communities can own their large language models. Then communities can own the data that’s inside them and allow people to have access to it.”  

Benjamin Schneider is a freelance writer covering housing, transportation, and urban policy.

Senior State Department official sought internal communications with journalists, European officials, and Trump critics

A previously unreported document distributed by senior US State Department official Darren Beattie reveals a sweeping effort to uncover all communications between the staff of a small government office focused on online disinformation and a lengthy list of public and private figures—many of whom are longtime targets of the political right. 

The document, originally shared in person with roughly a dozen State Department employees in early March, requested staff emails and other records with or about a host of individuals and organizations that track or write about foreign disinformation—including Atlantic journalist Anne Applebaum, former US cybersecurity official Christopher Krebs, and the Stanford Internet Observatory—or have criticized President Donald Trump and his allies, such as the conservative anti-Trump commentator Bill Kristol. 

The document also seeks all staff communications that merely reference Trump or people in his orbit, like Alex Jones, Glenn Greenwald, and Robert F. Kennedy Jr. In addition, it directs a search of communications for a long list of keywords, including “Pepe the Frog,” “incel,” “q-anon,” “Black Lives Matter,” “great replacement theory,” “far-right,” and “infodemic.”

For several people who received or saw the document, the broad requests for unredacted information felt like a “witch hunt,” one official says—one that could put the privacy and security of numerous individuals and organizations at risk. 

Beattie, whom Trump appointed in February to be the acting undersecretary for public diplomacy, told State Department officials that his goal in seeking these records was a “Twitter files”-like release of internal State Department documents “to rebuild trust with the American public,” according to a State Department employee who heard the remarks. (Beattie was referring to the internal Twitter documents that were released after Elon Musk bought the platform, in an attempt to prove that the company had previously silenced conservatives. While the effort provided more detail on the challenges and mistakes Twitter had already admitted to, it failed to produce a smoking gun.)

The document, dated March 11, 2025, focuses specifically on records and communications from the Counter Foreign Information Manipulation and Interference (R/FIMI) Hub, a small office in the State Department’s Office of Public Diplomacy that tracked and countered foreign disinformation campaigns; it was created after the Global Engagement Center (GEC), which had the same mission, shut down at the end of 2024. MIT Technology Review broke the news earlier this month that R/FIMI would be shuttered. 

Some R/FIMI staff were at the meeting where the document was initially shared, as were State Department lawyers and staff from the department’s Bureau of Administration, who are responsible for conducting searches to fulfill public records requests. 

Also included among the nearly 60 individuals and organizations caught up in Beattie’s information dragnet are Bill Gates; the open-source journalism outlet Bellingcat; former FBI special agent Clint Watts; Nancy Faeser, the German interior minister; Daniel Fried, a career State Department official and former US ambassador to Poland; Renée DiResta, an expert in online disinformation who led research at Stanford Internet Observatory; and Nina Jankowicz, a disinformation researcher who briefly led the Disinformation Governance Board at the US Department of Homeland Security.

Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

When told of their inclusion in the records request, multiple people expressed alarm that such a list exists at all in an American institution. “When I was in government I’d never done anything like that,” Kristol, a former chief of staff to Vice President Dan Quayle, says. “What would be the innocent reason for doing that?”

Fried echoes this sentiment. “I spent 40 years in the State Department, and you didn’t collect names or demand email records,” says Fried. “I’ve never heard of such a thing”—at least not in the American context, he clarifies. It did remind him of Eastern European “Communist Party minder[s] watching over the untrusted bureaucracy.” 

He adds: “It also approaches the compilation of an enemies list.” 

Targeting the “censorship industrial complex”

Both GEC and R/FIMI, its pared-down successor office, focused on tracking and countering foreign disinformation efforts from Russia, China, and Iran, among others, but GEC was frequently accused—and was even sued—by conservative critics who claimed that it enabled censorship of conservative Americans’ views. A judge threw out one of those claims against GEC in 2022 (while finding that other parts of the Biden administration did exert undue pressure on tech platforms). 

Beattie has also personally promoted these views. Before joining the State Department, he started Revolver News, a website that espouses far-right talking points that often gain traction in certain conservative circles. Among the ideas promoted in Revolver News is that GEC was part of a “censorship industrial complex” aimed at suppressing American conservative voices, even though GEC’s mission was foreign disinformation. This idea has taken hold more broadly; on April 1, the House Foreign Affairs Committee held a hearing focused on GEC titled “The Censorship-Industrial Complex: The Need for First Amendment Safeguards at the State Department.” 

Most people on the list appear to have focused at some point on tracking or challenging disinformation broadly, or on countering specific false claims, including those related to the 2020 election. A few of the individuals appear primarily to be critics of Trump, Beattie, or others in the right-wing media ecosystem. Many have been the subject of Trump’s public grievances for years. (Trump called Krebs, for instance, a “significant bad-faith actor” in an executive order targeting him earlier this month.)   

Beattie specifically asked for “all documents, emails, correspondence, or other records of communications amongst/between employees, contractors, subcontractors or consultants at the GEC or R/FIMI” since 2017 with all the named individuals, as well as communications that merely referenced them. He sought communications that referenced any of the listed organizations.  

Finally, he sought a list of additional unredacted agency records—including all GEC grants and contracts, as well as subgrants, which are particularly sensitive due to the risks of retaliation to subgrantees, who often work in local journalism, fact-checking, or pro-democracy organizations under repressive regimes. The document also asked for “all documents mentioning” the Election Integrity Partnership, a research collaboration between academics and tech companies that has been a target of right-wing criticism.

Several State Department staffers call the records requests “unusual” and “improper” in their scope. MIT Technology Review spoke to three people who had personally seen the document, as well as two others who were aware of it; we agreed to allow them to speak anonymously due to their fears of retaliation. 

While they acknowledge that previous political appointees have, on occasion, made information requests through the records management system, Beattie’s request was something wholly different. 

Never had “an incoming political appointee” sought to “search through seven years’ worth of all staff emails to see whether anything negative had been said about his friends,” says one staffer. 

Another staffer calls it a “pet project” for Beattie. 

Selective transparency

Beattie delivered the request, which he framed as a “transparency” initiative, to the State Department officials in a conference room at its Washington, D.C., headquarters on a Tuesday afternoon in early March, in the form of an 11-page packet titled “SO [Senior Official] Beattie Inquiry for GEC/R/FIMI Records.” The documents were printed out, rather than emailed.

Labeled “sensitive but unclassified,” the document lays out Beattie’s requests in 12 separate, but sometimes repetitive, bullet points. In total, he sought communications about 16 organizations, including Harvard’s Berkman Klein Center and the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), as well as with and about 39 individuals. 

Notably, this includes several journalists: In addition to Bellingcat and Applebaum, the document also asks for communications with NBC News senior reporter Brandy Zadrozny. 

Press-freedom advocates expressed alarm about the inclusion of journalists on the list, as well as the possibility of their communications being released to the public, which goes “considerably well beyond the scope of what … leak investigations in the past have typically focused on,” says Grayson Clary, a staff attorney at the Reporters Committee for Freedom of the Press. Rather, the effort seems like “a tactic designed to … make it much harder for journalists to strike up those source relationships in the first instance.”

Beattie also requested a search for communications that mentioned Trump and more than a dozen other prominent right-leaning figures. In addition to Jones, Greenwald, and “RFK Jr.,” the list includes “Don Jr.,” Elon Musk, Joe Rogan, Charlie Kirk, Marine Le Pen, “Bolsonaro” (which could cover either Jair Bolsonaro, the former Brazilian president, or his son Eduardo, who is seeking political asylum in the US), and Beattie himself. It also asked for a search for 32 right-wing buzzwords related to abortion, immigration, election denial, and January 6, suggesting a determined effort to find State Department staff who even just discussed such matters. 

(Staffers say they doubt that Beattie will find much, unless, one says, it’s “previous [FOIA] queries from people like Beattie” or discussions about “some Russian or PRC [Chinese] narrative that includes some of this stuff.”)

Multiple sources say State Department employees raised alarms internally about the records requests. They worried about the sensitivity and impropriety of the broad scope of the information requested, particularly because records would be unredacted, as well as about how the search would be conducted: through the eRecords file management system, which makes it easy for administrative staff to search through and retrieve State Department employees’ emails, typically in response to FOIA requests. 

This felt, they say, like a powerful misuse of the public records system—or as Jankowicz, the disinformation researcher and former DHS official, put it, “weaponizing the access [Beattie] has to internal communications in order to upend people’s lives.”

“It stank to high heaven,” one staffer says. “This could be used for retaliation. This could be used for any kind of improper purposes, and our oversight committees should be informed of this.”

Another employee expressed concerns about the request for information on the agency’s subgrantees—who were often on the ground in repressive countries and whose information was closely guarded and not shared digitally, unlike the public lists of contractors and grantees typically available on websites like Grants.gov or USAspending.gov. “Making it known that [they] took money from the United States would put a target on them,” this individual explains. “We kept that information very secure. We wouldn’t even email subgrant names back and forth.”

Several people familiar with the matter say that by early April, Beattie had received many of the documents he’d requested, retrieved through eRecords, as well as a list of grantees. One source says the more sensitive list of subgrantees was not shared.  

Neither the State Department nor Beattie responded to requests for comment. A CISA spokesperson emailed, “We do not comment on intergovernmental documents and would refer you back to the State Department.” We reached out to all individuals whose communications were requested and are named here; many declined to comment on the record.

A “chilling effect”

Five weeks after Beattie made his requests for information, the State Department shut down R/FIMI. 

An hour after staff members were informed, US Secretary of State Marco Rubio published a blog post announcing the news in The Federalist, one of the outlets that sued the GEC over allegations of censorship. In an interview with the influential right-wing internet personality Mike Benz, he then discussed plans for Beattie to lead a “transparency effort.”  

“What we have to do now—and Darren will be big involved in that as well—is sort of document what happened … because I think people who were harmed deserve to know that, and be able to prove that they were harmed,” Rubio told Benz.

This is what Beattie—and Benz—have long called for. Many of the names and keywords he included in his request reflect conspiracy theories and grievances promoted by Revolver News—which Beattie founded after being fired from his job as a speechwriter during the first Trump administration when CNN reported that he had spoken at a conference with white nationalists. 

Ultimately, the State Department staffers say they fear that a selective disclosure of documents, taken out of context, could be distorted to fit any kind of narrative Beattie, Rubio, or others create. 

Weaponizing any speech they consider to be critical by deeming it disinformation is not only ironic, says Jankowicz—it will also have “chilling effects” on anyone who conducts disinformation research, and it will result in “less oversight and transparency over tech platforms, over adversarial activities, over, frankly, people who are legitimately trying to disenfranchise US voters.” 

That, she warns, “is something we should all be alarmed about.”

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” 

But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that make it harder for both legacy players and emerging companies to invest, grow, and compete.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy.


The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. 

By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. 

But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies.

Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China.

China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition.

Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy.

The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investments into energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand up on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations.

Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength. 

But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries.

But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum. 

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.  

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal.

The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.

US office that counters foreign disinformation is being eliminated

The only office within the US State Department that monitors foreign disinformation is to be eliminated, according to US Secretary of State Marco Rubio, confirming reporting by MIT Technology Review.

The Counter Foreign Information Manipulation and Interference (R/FIMI) Hub is a small office in the State Department’s Office of Public Diplomacy that tracks and counters foreign disinformation campaigns. 

In shutting R/FIMI, the department’s controversial acting undersecretary, Darren Beattie, is delivering a major win to conservative critics who have alleged that it censors conservative voices. The office was created at the end of 2024 as a reorganized version of the Global Engagement Center (GEC), a larger office with a similar mission that conservatives had long accused of censoring Americans despite its international focus. In 2023, Elon Musk called the center the “worst offender in US government censorship [and] media manipulation” and a “threat to our democracy.” 

The culling of the office leaves the State Department without a way to actively counter the increasingly sophisticated disinformation campaigns from foreign governments like those of Russia, Iran, and China.

Shortly after publication, employees at R/FIMI received an email inviting them to an 11:15 a.m. meeting with Beattie, where they were told that the office and their jobs had been eliminated. 

Then, Secretary of State Marco Rubio confirmed our reporting in a blog post in The Federalist, which had sued GEC last year alleging that it had infringed on its freedom of speech. “It is my pleasure to announce the State Department is taking a crucial step toward keeping the president’s promise to liberate American speech by abolishing forever the body formerly known as the Global Engagement Center (GEC),” he wrote. And he told Mike Benz, a former first-term Trump official who also reportedly has alt-right views, during a YouTube interview, “We ended government-sponsored censorship in the United States through the State Department.”  

Censorship claims

For years, conservative voices both in and out of government have accused Big Tech of censoring conservative views—and they often charged GEC with enabling such censorship. 

GEC had its roots as the Center for Strategic Counterterrorism Communications (CSCC), created by an Obama-era executive order, but shifted its mission to fight propaganda and disinformation from foreign governments and terrorist organizations in 2016, becoming the Global Engagement Center. It was always explicitly focused on the international information space, but some of the organizations that it funded also did work in the United States. It shut down last December after a measure to reauthorize its $61 million budget was blocked by Republicans in Congress, who accused it of helping Big Tech censor American conservative voices. 

R/FIMI had a similar goal to fight foreign disinformation, but it was smaller: the newly created office had a $51.9 million budget, and a small staff that, by mid-April, was down to just 40 employees, from 125 at GEC. In a Wednesday morning meeting, those employees were told that they would be put on administrative leave and terminated within 30 days. 

With the change in administrations, R/FIMI had never really gotten off the ground. Beattie, a controversial pick for undersecretary—he was fired as a speechwriter during the first Trump administration for attending a white nationalism conference, has suggested that the FBI organized the January 6 attack on Congress, and has said that it’s not worth defending Taiwan from China—had instructed the few remaining staff to be “pencils down,” one State Department official told me, meaning to pause their work. 

The administration’s executive order on “countering censorship and restoring freedom of speech” reads like a summary of conservative accusations against GEC:

“Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.  Government censorship of speech is intolerable in a free society.”

In 2023, The Daily Wire, founded by conservative media personality Ben Shapiro, joined The Federalist in suing GEC for allegedly infringing on the company’s First Amendment rights by funding two nonprofit organizations, the London-based Global Disinformation Index and New York-based NewsGuard, that had labeled The Daily Wire as “unreliable,” “risky,” and/or (per GDI) susceptible to foreign disinformation. (Those specific ratings projects were not funded by GEC.) The lawsuit alleged that this amounted to censorship by “starving them of advertising revenue and reducing the circulation of their reporting and speech.” 

In 2022, the Republican attorneys general of Missouri and Louisiana named GEC among the federal agencies that, they alleged, were pressuring social networks to censor conservative views. Though the case eventually made its way to the Supreme Court, which found no First Amendment violations, a lower court had already removed GEC’s name from the list of defendants, ruling there was “no evidence” that GEC’s communications with the social media platforms had gone beyond “educating the platforms on ‘tools and techniques used by foreign actors.’”

The stakes

The GEC—and now R/FIMI—was targeted as part of a wider campaign to shut down groups accused of being “weaponized” against conservatives. 

Conservative critics railing against what they have alternatively called a disinformation- or censorship-industrial complex have also taken aim at the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Stanford Internet Observatory, a prominent research group that conducted widely cited research on the flows of disinformation during elections. 

CISA’s former director, Chris Krebs, was personally targeted in an April 9 White House memo, while in response to the criticism and millions of dollars in legal fees, Stanford University shuttered the Stanford Internet Observatory ahead of the 2024 presidential elections.  

But this targeting comes at a time when foreign disinformation campaigns—especially by Russia, China, and Iran—have become increasingly sophisticated. 

According to one estimate, Russia spends $1.5 billion per year on foreign influence campaigns. In 2022, the Islamic Republic of Iran Broadcasting, that country’s primary foreign propaganda arm, had a $1.26 billion budget. And a 2015 estimate suggests that China spent up to $10 billion per year on media targeting non-Chinese foreigners—a figure that has almost certainly grown.

In September 2024, the Justice Department indicted two employees of RT, a Russian state-owned propaganda agency, in a $10 million scheme to create propaganda aimed at influencing US audiences through a media company that has since been identified as the conservative Tenet Media. 

The GEC was one effort to counter such campaigns. Some of its recent projects have included developing AI models to detect memes and deepfakes and exposing Russian propaganda efforts to influence Latin American public opinion against the war in Ukraine. 

By law, the Office of Public Diplomacy has to provide Congress with 15-day advance notice of any intent to reassign any funding allocated by Congress over $1 million. Congress then has time to respond, ask questions, and challenge the decisions—though to judge from its record with other unilateral executive-branch decisions to gut government agencies, it is unlikely to do so. 

We have reached out to the State Department for comment. 

This story was updated at 11:55am to note that R/FIMI employees have confirmed that the office closed.
This story was updated at 12:37pm to include confirmation about R/FIMI’s shutdown from Marco Rubio.
This story was updated at 6:10pm to add an identifier for Mike Benz.

How the federal government is tracking changes in the supply of street drugs

In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn’t know why. There was a general sense that it had something to do with changes in the supply of illicit drugs—and specifically of the synthetic opioid fentanyl, which has caused overdose deaths in the US to roughly double over the past decade, to more than 100,000 per year. 

But Maryland officials were flying blind when it came to understanding these fluctuations in anything close to real time. The US Drug Enforcement Administration reported on the purity of drugs recovered in enforcement operations, but the DEA’s data offered limited detail and typically came back six to nine months after the seizures. By then, the actual drugs on the street had morphed many times over. Part of the investigative challenge was that fentanyl can be some 50 times more potent than heroin, and inhaling even a small amount can be deadly. This made conventional methods of analysis, which required handling the contents of drug packages directly, incredibly risky. 

Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.

There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials—techniques that could protect law enforcement officials and others who had to collect these samples. Essentially, Sisco’s lab had fine-tuned a technology called DART (for “direct analysis in real time”) mass spectrometry—which the US Transportation Security Administration uses to test for explosives by swiping your hand—to enable the detection of even tiny traces of chemicals collected from an investigation site. This meant that nobody had to open a bag or handle unidentified powders; a usable residue sample could be obtained by simply swiping the outside of the bag.  

Sisco realized that first responders or volunteers at needle exchange sites could use these same methods to safely collect drug residue from bags, drug paraphernalia, or used test strips—which also meant they would no longer need to wait for law enforcement to seize drugs for testing. They could then safely mail the samples to NIST’s lab in Maryland and get results back in as little as 24 hours, thanks to innovations in Sisco’s lab that shaved the time to generate a complete report from 10 to 30 minutes to just one or two. This was partly enabled by algorithms that allowed them to skip the time-consuming step of separating the compounds in a sample before running an analysis.

The Rapid Drug Analysis and Research (RaDAR) program launched as a pilot in October 2021 and uncovered new, critical information almost immediately. Early analysis found xylazine—a veterinary sedative that’s been associated with gruesome wounds in users—in about 80% of opioid samples collected. 

This was a significant finding, Sisco says: “Forensic labs care about things that are illegal, not things that are not illegal but do potentially cause harm. Xylazine is not a scheduled compound, but it leads to wounds that can lead to amputation, and it makes the other drugs more dangerous.” In addition to the compounds that are known to appear in high concentrations in street drugs—xylazine, fentanyl, and the veterinary sedative medetomidine—NIST’s technology can pick out trace amounts of dozens of adulterants that swirl through the street-drug supply and can make it more dangerous, including acetaminophen, rat poison, and local anesthetics like lidocaine.

What’s more, the exact chemical formulation of fentanyl on the street is always changing, and differences in molecular structure can make the drugs deadlier. So Sisco’s team has developed new methods for spotting these “analogues”—­compounds that resemble known chemical structures of fentanyl and related drugs.

Ed Sisco’s lab at NIST developed a test that gives law enforcement and public health officials vital information about what substances are present in street drugs.
B. HAYES/NIST

The RaDAR program has expanded to work with partners in public health, city and state law enforcement, forensic science, and customs agencies at about 65 sites in 14 states. Sisco’s lab processes 700 to 1,000 samples a month. About 85% come from public health organizations that focus on harm reduction (an approach to minimizing negative impacts of drug use for people who are not ready to quit). Results are shared at these collection points, which also collect survey data about the effects of the drugs.

Jason Bienert, a wound-care nurse at Johns Hopkins who formerly volunteered with a nonprofit harm reduction organization in rural northern Maryland, started participating in the RaDAR program in spring 2024. “Xylazine hit like a storm here,” he says. “Everyone I took care of wanted to know what was in their drugs because they wanted to know if there was xylazine in it.” When the data started coming back, he says, “it almost became a race to see how many samples we could collect.” Bienert sent in about 14 samples weekly and created a chart on a dry-erase board, with drugs identified by the logos on their bags, sorted into columns according to the compounds found in them: ­heroin, fentanyl, xylazine, and everything else.

“It was a super useful tool,” Bienert says. “Everyone accepted the validity of it.” As people came back to check on the results of testing, he was able to build rapport and offer additional support, including providing wound care for about 50 people a week.

The breadth and depth of testing under the RaDAR program allow an eagle’s-eye view of the national street-drug landscape—and insights about drug trafficking. “We’re seeing distinct fingerprints from different states,” says Sisco. NIST’s analysis shows that fentanyl has taken over the opioid market—except for pockets in the Southwest, there is very little heroin on the streets anymore. But the fentanyl supply varies dramatically as you cross the US. “If you drill down in the states,” says Sisco, “you also see different fingerprints in different areas.” Maryland, for example, has two distinct fentanyl supplies—one with xylazine and one without.

In summer 2024, RaDAR analysis detected something really unusual: the sudden appearance of an industrial-grade chemical called BTMPS, which is used to preserve plastic, in drug samples nationwide. In the human body, BTMPS acts as a calcium channel blocker, which lowers blood pressure, and mixed with xylazine or medetomidine, it can make overdoses harder to treat. Exactly why and how BTMPS showed up in the drug supply isn’t clear, but it has continued to appear in fentanyl samples at a sustained level since it was first detected. “This was an example of a compound we would have never thought to look for,” says Sisco. 

To Sisco, Bienert, and others working on the public health front of the drug crisis, the ever-shifting chemical composition of the street-drug supply speaks to the futility of the “war on drugs.” They point out that a crackdown on heroin smuggling is what gave rise to fentanyl. And NIST’s data shows how in June 2024—the month after Pennsylvania governor Josh Shapiro signed a bill to make possession of xylazine illegal in his state—it was almost entirely replaced on the East Coast by the next veterinary drug, medetomidine. 

Over the past year, for reasons that are not fully understood, drug overdose deaths nationally have been falling for the first time in decades. One theory is that xylazine has longer-lasting effects than fentanyl, which means people using drugs are taking them less often. Or it could be that more and better information about the drugs themselves is helping people make safer decisions.

“It’s difficult to say the program prevents overdoses and saves lives,” says Sisco. “But it increases the likelihood of people coming in to needle exchange centers and getting more linkages to wound care, other services, other education.” Working with public health partners “has humanized this entire area for me,” he says. “There’s a lot more gray than you think—it’s not black and white. And it’s a matter of life or death for some of these people.” 

Adam Bluestein writes about innovation in business, science, and technology.

DOGE’s tech takeover threatens the safety and stability of our critical data

Tech buzzwords are clanging through the halls of Washington, DC. The Trump administration has promised to “leverage blockchain technology” to reorganize the US Agency for International Development, and Elon Musk’s DOGE has already unleashed an internal chatbot to automate agency tasks—with bigger plans on the horizon to take over for laid-off employees. The executive order that created DOGE in the first place claims the agency intends to “modernize Federal technology and software.” But jamming hyped-up tech into government workflows isn’t a formula for efficiency. Successful, safe civic tech requires a human-centered approach that understands and respects the needs of citizens. Unfortunately, this administration laid off all the federal workers with the know-how for that—seasoned design and technology professionals, many of whom left careers in the private sector to serve their government and compatriots.

What’s going on now is not unconventional swashbuckling—it’s wild incompetence. Musk may have run plenty of tech companies, but building technology for government is an entirely different beast. If this administration doesn’t change its approach soon, American citizens are going to suffer far more than they probably realize.

Many may wince remembering the rollout of Healthcare.gov under the Obama administration. Following passage of the Affordable Care Act, Healthcare.gov launched in October of 2013 to facilitate the anticipated wave of insurance signups. But enormous demand famously took down the website two hours after launch. On that first day, only six people were able to complete the registration process. In the wake of the mess, the administration formed the US Digital Service (USDS) and 18F, the digital services office of the General Services Administration. These agencies—the ones now dismantled at the hands of DOGE—pulled experienced technologists from industry to improve critical infrastructure across the federal government, including the Social Security Administration and Veterans Affairs. 

Over the last decade, USDS and 18F have worked to build safe, accessible, and secure infrastructure for the people of the United States. DirectFile, the free digital tax filing system that the IRS launched last year, emerged from years of careful research, design, and engineering and a thoughtful, multi-staged release. As a result, 90% of people who used DirectFile and responded to a survey said their experience was excellent or above average, and 86% reported that DirectFile increased their trust in the IRS. Recently, Sam Corcos, a DOGE engineer, told IRS employees he plans to kill the program. When 21 experienced technologists quit their jobs at USDS in January after their colleagues were let go, they weren’t objecting on political grounds. They said they would rather quit than “compromise core government services” under DOGE, whose orders are incompatible with USDS’s original mission.

As DOGE bulldozes through technological systems, firewalls between government agencies are collapsing and the floodgates are open for data-sharing disasters that will affect everyone. For example, the decision to give Immigration and Customs Enforcement access to IRS data and to databases of unaccompanied minors creates immediate dangers for immigrants, regardless of their legal status. And it threatens everyone else, albeit perhaps less imminently, as every American’s Social Security number, tax returns, benefits, and health-care records are agglomerated into one massive, poorly secured data pool. 

That’s not just speculation. We’ve already seen how data breaches at companies like Equifax can expose the sensitive information of hundreds of millions of people. Now imagine those same risks with all your government data, managed by a small crew of DOGE workers without a hint of institutional knowledge between them. 

Making data sets speak to each other is one of the most difficult technological challenges out there. Anyone who has ever had to migrate from one CRM system to another knows how easy it is to lose data in the process. Centralization of data is on the administration’s agenda—and will more than likely involve the help of contracting tech companies. Giants like Palantir have built entire business models around integrating government data for surveillance, and they stand to profit enormously from DOGE’s dismantling of privacy protections. This is the playbook: Gut public infrastructure, pay private companies millions to rebuild it, and then grant those companies unprecedented access to our data. 

DOGE is also coming for COBOL, a programming language that the entire infrastructure of the Social Security Administration is built on. According to reporting by Wired, DOGE plans to rebuild that system from the ground up in mere months—even though the SSA itself estimated that a project like that would take five years. The difference in those timelines isn’t due to efficiency or ingenuity; it’s the audacity of naïveté and negligence. If something goes wrong, more than 65 million people in the US currently receiving Social Security benefits will feel it where it hurts. Any delay in a Social Security payment can mean the difference between paying rent and facing eviction, affording medication or food and going without. 

There are so many alarms to ring about the actions of this administration, but the damage to essential technical infrastructure may be one of the effects with the longest tails. Once these systems are gutted and these firewalls are down, it could take years or even decades to put the pieces back together from a technical standpoint. And since the administration has laid off the in-house experts who did the important and meticulous work of truly modernizing government technology, who will be around to clean up the mess?  

Last month, an 83-year-old pastor in hospice care summoned her strength to sue this administration over its gutting of the Consumer Financial Protection Bureau, and we can follow her example. Former federal tech workers have both the knowledge and the legal standing to challenge these reckless tech initiatives. And everyday Americans who rely on government services, which is all of us, have a stake in this fight. Support the lawyers challenging DOGE’s tech takeover, document and report any failures you encounter in government systems, and demand that your representatives hold hearings on what’s happening to our digital infrastructure. It may soon be too late.

Steven Renderos is the executive director of Media Justice.

Correction: Due to a CMS error, this article was originally published with an incorrect byline. Steven Renderos is the author.

How the Pentagon is adapting to China’s technological rise

It’s been just over two months since Kathleen Hicks stepped down as US deputy secretary of defense. As the highest-ranking woman in Pentagon history, Hicks shaped US military posture through an era defined by renewed competition between powerful countries and a scramble to modernize defense technology.  

She’s currently taking a break before jumping into her (still unannounced) next act. “It’s been refreshing,” she says—but disconnecting isn’t easy. She continues to monitor defense developments closely and expresses concern over potential setbacks: “New administrations have new priorities, and that’s completely expected, but I do worry about just stalling out on progress that we’ve built over a number of administrations.”

Over the past three decades, Hicks has watched the Pentagon transform—politically, strategically, and technologically. She entered government in the 1990s at the tail end of the Cold War, when optimism and a belief in global cooperation still dominated US foreign policy. But that optimism dimmed. After 9/11, the focus shifted to counterterrorism and nonstate actors. Then came Russia’s resurgence and China’s growing assertiveness. Hicks took two previous breaks from government work—the first to complete a PhD at MIT and join the think tank Center for Strategic and International Studies (CSIS), which she later rejoined to lead its International Security Program after her second tour. “By the time I returned in 2021,” she says, “there was one actor—the PRC (People’s Republic of China)—that had the capability and the will to really contest the international system as it’s set up.”

In this conversation with MIT Technology Review, Hicks reflects on how the Pentagon is adapting—or failing to adapt—to a new era of geopolitical competition. She discusses China’s technological rise, the future of AI in warfare, and her signature initiative, Replicator, a Pentagon initiative to rapidly field thousands of low-cost autonomous systems such as drones.

You’ve described China as a “talented fast follower.” Do you still believe that, especially given recent developments in AI and other technologies?

Yes, I do. China is the biggest pacing challenge we face, which means it sets the pace for most capability areas for what we need to be able to defeat to deter them. For example, surface maritime capability, missile capability, stealth fighter capability. They set their minds to achieving a certain capability, they tend to get there, and they tend to get there even faster.

That said, they have a substantial amount of corruption, and they haven’t been engaged in a real conflict or combat operation in the way that Western militaries have trained for or been involved in, and that is a huge X factor in how effective they would be.

China has made major technological strides, and the old narrative of its being a follower is breaking down—not just in commercial tech, but more broadly. Do you think the US still holds a strategic advantage?

I would never want to underestimate their ability—or any nation’s ability—to innovate organically when they put their minds to it. But I still think it’s a helpful comparison to look at the US model. Because we’re a system of free minds, free people, and free markets, we have the potential to generate much more innovation culturally and organically than a statist model does. That’s our advantage—if we can realize it.

China is ahead in manufacturing, especially when it comes to drones and other unmanned systems. How big a problem is that for US defense, and can the US catch up?

I do think it’s a massive problem. When we were conceiving Replicator, one of the big concerns was that DJI had just jumped way out ahead on the manufacturing side, and the US had been left behind. A lot of manufacturers here believe they can catch up if given the right contracts—and I agree with that.

But the harder challenge isn’t just making the drones—it’s integrating them into our broader systems. That’s where the US often struggles. It’s not a complicated manufacturing problem. It’s a systems integration problem: how you take something and make it usable, scalable, and connected across a joint force. Replicator was designed to push through that—to drive not just production, but integration and deployment at speed.

We also spent time identifying broader supply-chain vulnerabilities. Microelectronics was a big one. Critical minerals. Batteries. People sometimes think batteries are just about electrification, but they’re fundamental across our systems—even on ships in the Navy.

When it comes to drones specifically, I actually think it’s a solvable problem. The issue isn’t complexity. It’s just about getting enough mass of contracts to scale up manufacturing. If we do that, I believe the US can absolutely compete.

The Replicator drone program was one of your key initiatives. It promised a very fast timeline—especially compared with the typical defense acquisition cycle. Was that achievable? How is that progressing?

When I left in January, we had still lined up for proving out this summer, and I still believe we should see some completion this year. I hope Congress will stay very engaged in trying to ensure that the capability, in fact, comes to fruition. Even just this week with Secretary [Pete] Hegseth out in the Indo-Pacific, he made some passing reference to the [US Indo-Pacific Command] commander, Admiral [Samuel] Paparo, having the flexibility to create the capability needed, and that gives me a lot of confidence of consistency.

Can you talk about how Replicator fits into broader efforts to speed up defense innovation? What’s actually changing inside the system?

Traditionally, defense acquisition is slow and serial—one step after another, which works for massive, long-term systems like submarines. But for things like drones, that just doesn’t cut it. With Replicator, we aimed to shift to a parallel model: integrating hardware, software, policy, and testing all at once. That’s how you get speed—by breaking down silos and running things simultaneously.

It’s not about “Move fast and break things.” You still have to test and evaluate responsibly. But this approach shows we can move faster without sacrificing accountability—and that’s a big cultural shift.

How important is AI to the future of national defense?

It’s central. The future of warfare will be about speed and precision—decision advantage. AI helps enable that. It’s about integrating capabilities to create faster, more accurate decision-making: for achieving military objectives, for reducing civilian casualties, and for being able to deter effectively. But we’ve also emphasized responsible AI. If it’s not safe, it’s not going to be effective. That’s been a key focus across administrations.

What about generative AI specifically? Does it have real strategic significance yet, or is it still in the experimental phase?

It does have significance, especially for decision-making and efficiency. We had an effort called Project Lima where we looked at use cases for generative AI—where it might be most useful, and what the rules for responsible use should look like. Some of the biggest use may come first in the back office—human resources, auditing, logistics. But the ability to use generative AI to create a network of capability around unmanned systems or information exchange, either in Replicator or JADC2? That’s where it becomes a real advantage. But those back-office areas are where I would anticipate to see big gains first.

[Editor’s note: JADC2 is Joint All-Domain Command and Control, a DOD initiative to connect sensors from all branches of the armed forces into a unified network powered by artificial intelligence.]

In recent years, we’ve seen more tech industry figures stepping into national defense conversations—sometimes pushing strong political views or advocating for deregulation. How do you see Silicon Valley’s growing influence on US defense strategy?

There’s a long history of innovation in this country coming from outside the government—people who look at big national problems and want to help solve them. That kind of engagement is good, especially when their technical expertise lines up with real national security needs.

But that’s not just one stakeholder group. A healthy democracy includes others, too—workers, environmental voices, allies. We need to reconcile all of that through a functioning democratic process. That’s the only way this works.

How do you view the involvement of prominent tech entrepreneurs, such as Elon Musk, in shaping national defense policies?

I believe it’s not healthy for any democracy when a single individual wields more power than their technical expertise or official role justifies. We need strong institutions, not just strong personalities.

The US has long attracted top STEM talent from around the world, including many researchers from China. But in recent years, immigration hurdles and heightened scrutiny have made it harder for foreign-born scientists to stay. Do you see this as a threat to US innovation?

I think you have to be confident that you have a secure research community to do secure work. But much of the STEM-related research that underpins national defense doesn’t need to be tightly secured in that way, and it really is dependent on a diverse ecosystem of talent. Cutting off talent pipelines is like eating our seed corn. Programs like H-1B visas are really important.

And it’s not just about international talent—we need to make sure people from underrepresented communities here in the US see national security as a space where they can contribute. If they don’t feel valued or trusted, they’re less likely to come in and stay.

What do you see as the biggest challenge the Department of Defense faces today?

I do think the trust—or the lack of it—is a big challenge. Whether it’s trust in government broadly or specific concerns like military spending, audits, or politicization of the uniformed military, that issue manifests in everything DOD is trying to get done. It affects our ability to work with Congress, with allies, with industry, and with the American people. If people don’t believe you’re working in their interest, it’s hard to get anything done.

How a bankruptcy judge can stop a genetic privacy disaster

Stop me if you’ve heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind. 

The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people’s genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users’ data. But there’s still a small chance of writing a better ending for users.

After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That’s good advice for the individual user (and you can read how to do so here). But the reality is most people won’t do it. Maybe they won’t see the recommendations to do so. Maybe they don’t know why they should be worried. Maybe they have long since abandoned an account that they don’t even remember exists. Or maybe they’re just occupied with the chaos of everyday life. 

This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe’s meager revenue—fewer than 4% of people who took tests pay for subscriptions—it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data. 

This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry. Because genetic data is forever. Most compromised data can be replaced: you can change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data—whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you—will have it tomorrow and the next day and all the days after that. 

Users with exposed genetic data are not only vulnerable to harm today; they’re vulnerable to exploits that might be developed in the future. 

While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms. 

In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe’s user data, that mistake is likely permanent and irreparable. 

All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment’s notice. And the burden falls on users to keep track of it all, or just give up.

Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted. 

Bankruptcy proceedings involving personal data don’t have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.) 

The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The US Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.

Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe’s failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn’t expect or want. 

An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe’s new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.

Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person’s genetic privacy just to add a few dollars in value to the bankruptcy estate?    
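The per-user figure above checks out with some quick arithmetic, using the reported $74.7 million offer and 15 million users:

```python
# Higher of the two reported buyout offers, divided across the user base.
valuation_usd = 74_700_000
users = 15_000_000

value_per_user = valuation_usd / users
print(f"${value_per_user:.2f} per user")  # prints "$4.98 per user"
```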

Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Because lawmakers allow it. Federal and state inaction allows companies to dissolve promises about protecting Americans’ most sensitive data at a moment’s notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, nearly two decades later, that era may really be almost here. But with privacy laws like ours, who would trust it?

Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.

What is Signal? The messaging app, explained.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

With the recent news that the Atlantic’s editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren’t supposed to use it for military planning, does that mean I shouldn’t use it either?

The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn’t use Signal.

Read on to find out why.

What is Signal?

Signal is an app you can install on your iPhone or Android phone, or on your computer. It lets you send secure texts and images and make phone or video calls, with individual people or groups, just like iMessage, Google Messages, WhatsApp, and other chat apps.

Installing Signal is a two-minute process—again, it’s designed to work just like other popular texting apps.

Why is it a problem for government officials to use Signal?

Signal is very secure—as we’ll see below, it’s the best option out there for having private conversations with your friends on your cell phone.

But you shouldn’t use it if you have a legal obligation to preserve your messages, such as while doing government business, because Signal prioritizes privacy over the ability to preserve data. It’s designed to securely delete data when you’re done with it, not to keep it. This makes it uniquely unsuited to following public records laws.

You also shouldn’t use it if your phone might be a target of sophisticated hackers, because Signal can only do its job if the phone it is running on is secure. If your phone has been hacked, then the hacker can read your messages regardless of what software you are running.

This is why you shouldn’t use Signal to discuss classified material or military plans. For military communication, a civilian phone is always assumed to be compromised by adversaries, so you should instead use communication equipment that is safer—equipment that is physically guarded and designed to do only one job, making it harder to hack.

What about everyone else?

Signal is designed from the ground up as a very private space for conversation. Cryptographers are confident that as long as your phone is otherwise secure, no one else can read your messages.

Why should you want that? Because private spaces for conversation are very important. In the US, the First Amendment recognizes, in the right to freedom of assembly, that we all need private conversations among our own selected groups in order to function.

And you don’t need the First Amendment to tell you that. You know, just like everyone else, that you can have important conversations in your living room, bedroom, church coffee hour, or meeting hall that you could never have on a public stage. Signal gives us the digital equivalent of that—it’s a space where we can talk, among groups of our choice, about the private things that matter to us, free of corporate or government surveillance. Our mental health and social functioning require that.

So if you’re not legally required to record your conversations, and not planning secret military operations, go ahead and use Signal—you deserve the privacy.

How do we know Signal is secure?

People often give up on finding digital privacy and end up censoring themselves out of caution. So are there really private ways to talk on our phones, or should we just assume that everything is being read anyway?

The good news is that most of us who aren’t individually targeted by hackers really can still have private conversations.

Signal is designed to ensure that if you know your phone and the phones of other people in your group haven’t been hacked (more on that later), you don’t have to trust anything else. It uses many techniques from the cryptography community to make that possible.

Most important and well-known is “end-to-end encryption,” which means that messages can be read only on the devices involved in the conversation and not by servers passing the messages back and forth.
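To make the idea concrete, here is a deliberately simplified sketch in Python: a one-time pad built on the standard library’s `secrets` module. This is not the Signal protocol (Signal uses the far more sophisticated Double Ratchet algorithm); it only illustrates the core point that a server relaying the ciphertext, without the key, learns nothing about the message.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR each byte with the key; with a random, single-use key this is
    # a one-time pad. XOR is its own inverse, so the same function both
    # encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = xor_cipher(key, message)    # all a relay server ever sees
recovered = xor_cipher(key, ciphertext)  # only a key holder can do this

assert recovered == message
```

A real messaging protocol must also rotate keys (for forward secrecy) and authenticate messages, which is exactly the extra machinery Signal’s protocol provides.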

But Signal uses other techniques to keep your messages private and safe as well. For example, it goes to great lengths to make it hard for the Signal server itself to know who else you are talking to (a feature known as “sealed sender”), or for an attacker who records traffic between phones to later decrypt the traffic by seizing one of the phones (“perfect forward secrecy”).

These are only a few of the many security properties built into the protocol, which is well enough designed and vetted that other messaging apps, such as WhatsApp and Google Messages, have adopted it too.

Signal is also designed so we don’t have to trust the people who make it. The source code for the app is available online and, because of its popularity as a security tool, is frequently audited by experts.

And even though its security does not rely on our trust in the publisher, it does come from a respected source: the Signal Technology Foundation, a nonprofit whose mission is to “protect free expression and enable secure global communication through open-source privacy technology.” The app itself, and the foundation, grew out of a community of prominent privacy advocates. The foundation was started by Moxie Marlinspike, a cryptographer and longtime advocate of secure private communication, and Brian Acton, a cofounder of WhatsApp.

Why do people use Signal over other text apps? Are other ones secure?

Many apps offer end-to-end encryption, and it’s not a bad idea to use them for a measure of privacy. But Signal is a gold standard for private communication because it is secure by default: Unless you add someone you didn’t mean to, it’s very hard for a chat to accidentally become less secure than you intended.

That’s not necessarily the case for other apps. For example, iMessage conversations are sometimes end-to-end encrypted, but only if your chat has “blue bubbles,” and they aren’t encrypted in iCloud backups by default. Google Messages are sometimes end-to-end encrypted, but only if the chat shows a lock icon. WhatsApp is end-to-end encrypted but logs your activity, including “how you interact with others using our Services.”

Signal is careful not to record who you are talking with, to offer ways to reliably delete messages, and to keep messages secure even in online phone backups. This focus demonstrates the benefits of an app coming from a nonprofit focused on privacy rather than a company that sees security as a “nice to have” feature alongside other goals.

(Conversely, and as a warning, using Signal makes it rather easier to accidentally lose messages! Again, it is not a good choice if you are legally required to record your communication.)

Applications like WhatsApp, iMessage, and Google Messages do offer end-to-end encryption and can offer much better security than nothing. The worst option of all is regular SMS text messages (“green bubbles” on iOS)—those are sent unencrypted and are likely collected by mass government surveillance.

Wait, how do I know that my phone is secure?

Signal is an excellent choice for privacy if you know that the phones of everyone you’re talking with are secure. But how do you know that? It’s easy to give up on a feeling of privacy if you never feel good about trusting your phone anyway.

One good place to start for most of us is simply to keep your phone up to date. Governments often do have ways of hacking phones, but hacking an up-to-date phone is expensive and risky, and it’s reserved for high-value targets. For most people, keeping your software current will remove you from the category that hackers target.

If you’re a potential target of sophisticated hacking, then don’t stop there. You’ll need extra security measures, and guides from the Freedom of the Press Foundation and the Electronic Frontier Foundation are a good place to start.

But you don’t have to be a high-value target to value privacy. The rest of us can do our part to re-create that private living room, bedroom, church, or meeting hall simply by using an up-to-date phone with an app that respects our privacy.

Jack Cushman is a fellow of the Berkman Klein Center for Internet and Society and directs the Library Innovation Lab at Harvard Law School Library. He is an appellate lawyer, computer programmer, and former board member of the ACLU of Massachusetts.

At RightsCon in Taipei, activists reckon with a US retreat from promoting digital rights 

Last week, I joined over 3,200 digital rights activists, tech policymakers, and researchers and a smattering of tech company representatives in Taipei at RightsCon, the world’s largest digital rights conference. 

Human rights conferences can be sobering, to say the least. They highlight the David vs. Goliath situation of small civil society organizations fighting to center human rights in decisions about technology, sometimes challenging the priorities of much more powerful governments and technology companies. 

But this year’s RightsCon, the 13th since the event began as the Silicon Valley Human Rights Conference in 2011, felt especially urgent. This was primarily due to the shocking, rapid gutting of the US federal government by the Elon Musk–led DOGE initiative, and the reverberations this stands to have around the world. 

At RightsCon, the cuts to USAID were top of mind; the development agency has long been one of the world’s biggest funders of digital rights work, from ensuring that the internet stays on during elections and crises around the world to supporting digital security hotlines for human rights defenders and journalists targeted by surveillance and hacking. Now, the agency is facing budget cuts of over 90% under the Trump administration. 

The withdrawal of funding is existential for the international digital rights community—and follows other trends that are concerning for those who support a free and safe internet. “We are unfortunately witnessing the erosion … of multistakeholderism, with restrictions on civil society participation, democratic backsliding worldwide, and companies divesting from policies and practices that uphold human rights,” Nikki Gladstone, RightsCon’s director, said in her opening speech. 

Cindy Cohn, director of the Electronic Frontier Foundation, which advocates for digital civil liberties, was more blunt: “The scale and speed of the attacks on people’s rights is unprecedented. It’s breathtaking,” she told me. 

But it’s not just funding cuts that will curtail digital rights globally. As various speakers highlighted throughout the conference, the United States government has gone from taking the leading role in supporting an open and safe internet to demonstrating how to dismantle it. Here’s what speakers are seeing:  

The Trump administration’s policies are being weaponized in other countries 

On Tuesday, February 25, just before RightsCon began, Serbian law enforcement raided the offices of four local civil society organizations focused on government accountability, citing Musk and Trump’s (unproven) accusations of fraud at USAID. 

“The (Serbian) Special Anti-Corruption Department … contacted the US Justice Department for information concerning USAID over the abuse of funds, possible money laundering, and the improper spending of American taxpayers’ funds in Serbia,” Nenad Stefanovic, a state prosecutor, explained on a TV broadcast announcing the move. 

For RightsCon attendees, it was a clear—and familiar—example of how oppressive regimes find or invent reasons to go after critics. Only now, by using the Trump administration’s justifications for revoking USAID’s funding, they hope to gain an extra veneer of credibility. 

Ashnah Kalemera, a program manager for CIPESA, a Ugandan nonprofit that runs technology for civic participation initiatives across Africa, says Trump and Musk’s attacks on USAID are providing false narratives that “justify arrests, intimidations, and continued clampdowns on civil society organizations—organizations that obviously no longer have the resources to do their work anyway.” 

Yasmin Curzi, a professor at FGV Law School in Rio de Janeiro and an expert on digital law, says that American politics are also being weaponized in Brazil’s domestic affairs. There, she told me, right-wing figures have been “lifting signs at protests like ‘Trump save us!’ and ‘Protect our First Amendment rights,’ which they don’t have.” Instead, Brazil’s Internet Bill of Rights seeks to balance protections on user privacy and speech with criminal liabilities for certain types of harmful content, including disinformation and hate speech. 

Despite the differing legal frameworks, in late February the Trump Media & Technology Group, which operates Truth Social, and the video platform Rumble tried to enforce US-style speech protections in Brazil. They sued Brazilian Supreme Court justice Alexandre de Moraes for banning a Brazilian digital influencer who had fled to the United States to avoid arrest in connection with allegations that he has spread disinformation and hate. Truth Social and Rumble allege that Moraes has violated the United States’ free speech laws. 

(A US judge has since ruled that because the Brazilian court had yet to officially serve Truth Social and Rumble as required under international treaty, the platforms’ lawsuit was premature and the companies do not have to comply with the order; the judge did not comment on the merits of the argument, though the companies have claimed victory.)

Platforms are becoming less willing to engage with local communities 

In addition to how Trump and Musk might inspire other countries to act, speakers also expressed concern that their trolling and use of dehumanizing language and imagery will inspire more online hate (and attacks), just at a time when platforms are rolling back human content moderation. Experts warn that automated content moderation systems trained on English-language data sets are unable to detect much of this hateful language. 

In India, for example, platforms have repeatedly recognized the necessity of local-language moderators and then failed to provide them, leading to real-world violence. Yet now the attitude of some internet users there has become “If the president of the United States can do it, why can’t I?” says Sadaf Wani, a communications manager for IT for Change, an Indian nonprofit research and advocacy organization, who organized a RightsCon panel on hate speech and AI. 

As her panel noted, these online attacks are accompanied by an increase in automated and even fully AI-based content moderation systems, largely trained on North American data sets, which are known to be less effective at identifying problematic speech in languages other than English. Even the latest large language models have difficulties identifying local slang, cultural context, and the use of non-English characters. “AI is not as smart as it looks, so you can use very obvious [and] very basic tricks to evade scrutiny. So I think that’s what’s also amplifying hate speech further,” Wani explains. 

Others, including Curzi from Brazil and Kalemera from Uganda, described similar trends playing out in their countries—and they say changes in platform policy and a lack of local staff make content moderation even harder. Platforms used to have humans in the loop whom users could reach out to for help, Curzi said. She pointed to community-driven moderation efforts on Twitter, which she considered to be a relative success at curbing hate speech until Elon Musk bought the site and fired some 4,400 contract workers—including the entire team that worked with community partners in Brazil. 

Curzi and Kalemera both say that things have gotten worse since. Last year, Trump threatened Meta CEO Mark Zuckerberg with “spend[ing] the rest of his life in prison” if Meta attempted to interfere with—i.e., fact-check claims about—the 2024 election. This January Meta announced that it was replacing its fact-checking program with X-style community notes, a move widely seen as a capitulation to pressure from the new administration. 

Shortly after Trump’s second inauguration, social platforms skipped a hearing on hate speech and disinformation held by the Brazilian attorney general. While this may have been expected of Musk’s X, it represented a big shift for Meta, Curzi told me. “Since Trump’s second administration, we cannot count on them [the platforms] to do even the bare minimum anymore,” she adds. Meta and X did not respond to requests for comment.

The US’s retreat is creating a moral vacuum 

Then there’s simply the fact that the United States can no longer be counted on to support digital rights defenders or journalists under attack. That creates a vacuum, and it’s not clear who else is willing—or able—to step into it, participants said. 

The US used to be the “main support for journalists in repressive regimes,” both financially and morally, one journalism trainer said during a last-minute session added to the schedule to address the funding crisis. The fact that there is now no one to turn to, she added, makes the current situation “not comparable to the past.” 

But that’s not to say that everything was doom and gloom. “You could feel the solidarity and community,” says the EFF’s Cohn. “And having [the conference] in Taiwan, which lives in the shadow of a very powerful, often hostile government, seemed especially fitting.”

Indeed, if there was one theme that was repeated throughout the event, it was a shared desire to rethink and challenge who holds power. 

Multiple sessions, for example, focused on strategies to counter both unresponsive Big Tech platforms and repressive governments. Meanwhile, during the session on AI and hate-speech moderation, participants concluded that one way of creating a safer internet would be for local organizations to build localized language models that are context- and language-specific. At the very least, said Curzi, we could move to other, smaller platforms that match our values, because at this point, “the big platforms can do anything they want.” 

Do you have additional information on how DOGE is affecting digital rights globally? Please use a non-work device and get in touch at tips@technologyreview.com or with the reporter on Signal: eileenguo.15.