How the Pentagon is adapting to China’s technological rise

It’s been just over two months since Kathleen Hicks stepped down as US deputy secretary of defense. As the highest-ranking woman in Pentagon history, Hicks shaped US military posture through an era defined by renewed competition between powerful countries and a scramble to modernize defense technology.  

She’s currently taking a break before jumping into her (still unannounced) next act. “It’s been refreshing,” she says—but disconnecting isn’t easy. She continues to monitor defense developments closely and expresses concern over potential setbacks: “New administrations have new priorities, and that’s completely expected, but I do worry about just stalling out on progress that we’ve built over a number of administrations.”

Over the past three decades, Hicks has watched the Pentagon transform—politically, strategically, and technologically. She entered government in the 1990s at the tail end of the Cold War, when optimism and a belief in global cooperation still dominated US foreign policy. But that optimism dimmed. After 9/11, the focus shifted to counterterrorism and nonstate actors. Then came Russia’s resurgence and China’s growing assertiveness. Hicks took two previous breaks from government work—the first to complete a PhD at MIT and join the think tank Center for Strategic and International Studies (CSIS), which she rejoined to lead its International Security Program after her second tour. “By the time I returned in 2021,” she says, “there was one actor—the PRC (People’s Republic of China)—that had the capability and the will to really contest the international system as it’s set up.”

In this conversation with MIT Technology Review, Hicks reflects on how the Pentagon is adapting—or failing to adapt—to a new era of geopolitical competition. She discusses China’s technological rise, the future of AI in warfare, and Replicator, her signature initiative to rapidly field thousands of low-cost autonomous systems such as drones.

You’ve described China as a “talented fast follower.” Do you still believe that, especially given recent developments in AI and other technologies?

Yes, I do. China is the biggest pacing challenge we face, which means it sets the pace in most capability areas for what we need to be able to defeat in order to deter them. For example: surface maritime capability, missile capability, stealth fighter capability. When they set their minds to achieving a certain capability, they tend to get there, and they tend to get there ever faster.

That said, they have a substantial amount of corruption, and they haven’t been engaged in a real conflict or combat operation in the way that Western militaries have trained for or been involved in, and that is a huge X factor in how effective they would be.

China has made major technological strides, and the old narrative of its being a follower is breaking down—not just in commercial tech, but more broadly. Do you think the US still holds a strategic advantage?

I would never want to underestimate their ability—or any nation’s ability—to innovate organically when they put their minds to it. But I still think it’s a helpful comparison to look at the US model. Because we’re a system of free minds, free people, and free markets, we have the potential to generate much more innovation culturally and organically than a statist model does. That’s our advantage—if we can realize it.

China is ahead in manufacturing, especially when it comes to drones and other unmanned systems. How big a problem is that for US defense, and can the US catch up?

I do think it’s a massive problem. When we were conceiving Replicator, one of the big concerns was that DJI had just jumped way out ahead on the manufacturing side, and the US had been left behind. A lot of manufacturers here believe they can catch up if given the right contracts—and I agree with that.

But the harder challenge isn’t just making the drones—it’s integrating them into our broader systems. That’s where the US often struggles. It’s not a complicated manufacturing problem. It’s a systems integration problem: how you take something and make it usable, scalable, and connected across a joint force. Replicator was designed to push through that—to drive not just production, but integration and deployment at speed.

We also spent time identifying broader supply-chain vulnerabilities. Microelectronics was a big one. Critical minerals. Batteries. People sometimes think batteries are just about electrification, but they’re fundamental across our systems—even on ships in the Navy.

When it comes to drones specifically, I actually think it’s a solvable problem. The issue isn’t complexity. It’s just about getting enough mass of contracts to scale up manufacturing. If we do that, I believe the US can absolutely compete.

The Replicator drone program was one of your key initiatives. It promised a very fast timeline—especially compared with the typical defense acquisition cycle. Was that achievable? How is that progressing?

When I left in January, we still had it lined up for proving out this summer, and I still believe we should see some completion this year. I hope Congress will stay very engaged in trying to ensure that the capability, in fact, comes to fruition. Even just this week with Secretary [Pete] Hegseth out in the Indo-Pacific, he made some passing reference to the [US Indo-Pacific Command] commander, Admiral [Samuel] Paparo, having the flexibility to create the capability needed, and that gives me a lot of confidence in that consistency.

Can you talk about how Replicator fits into broader efforts to speed up defense innovation? What’s actually changing inside the system?

Traditionally, defense acquisition is slow and serial—one step after another, which works for massive, long-term systems like submarines. But for things like drones, that just doesn’t cut it. With Replicator, we aimed to shift to a parallel model: integrating hardware, software, policy, and testing all at once. That’s how you get speed—by breaking down silos and running things simultaneously.

It’s not about “Move fast and break things.” You still have to test and evaluate responsibly. But this approach shows we can move faster without sacrificing accountability—and that’s a big cultural shift.

How important is AI to the future of national defense?

It’s central. The future of warfare will be about speed and precision—decision advantage. AI helps enable that. It’s about integrating capabilities to create faster, more accurate decision-making: for achieving military objectives, for reducing civilian casualties, and for being able to deter effectively. But we’ve also emphasized responsible AI. If it’s not safe, it’s not going to be effective. That’s been a key focus across administrations.

What about generative AI specifically? Does it have real strategic significance yet, or is it still in the experimental phase?

It does have significance, especially for decision-making and efficiency. We had an effort called Project Lima where we looked at use cases for generative AI—where it might be most useful, and what the rules for responsible use should look like. Some of the biggest use may come first in the back office—human resources, auditing, logistics. But the ability to use generative AI to create a network of capability around unmanned systems or information exchange, either in Replicator or JADC2? That’s where it becomes a real advantage. But those back-office areas are where I would anticipate seeing big gains first.

[Editor’s note: JADC2 is Joint All-Domain Command and Control, a DOD initiative to connect sensors from all branches of the armed forces into a unified network powered by artificial intelligence.]

In recent years, we’ve seen more tech industry figures stepping into national defense conversations—sometimes pushing strong political views or advocating for deregulation. How do you see Silicon Valley’s growing influence on US defense strategy?

There’s a long history of innovation in this country coming from outside the government—people who look at big national problems and want to help solve them. That kind of engagement is good, especially when their technical expertise lines up with real national security needs.

But tech isn’t the only stakeholder group. A healthy democracy includes others, too—workers, environmental voices, allies. We need to reconcile all of that through a functioning democratic process. That’s the only way this works.

How do you view the involvement of prominent tech entrepreneurs, such as Elon Musk, in shaping national defense policies?

I believe it’s not healthy for any democracy when a single individual wields more power than their technical expertise or official role justifies. We need strong institutions, not just strong personalities.

The US has long attracted top STEM talent from around the world, including many researchers from China. But in recent years, immigration hurdles and heightened scrutiny have made it harder for foreign-born scientists to stay. Do you see this as a threat to US innovation?

I think you have to be confident that you have a secure research community to do secure work. But much of the STEM research that underpins national defense doesn’t need to be tightly secured in that way, and it really is dependent on a diverse ecosystem of talent. Cutting off talent pipelines is like eating our seed corn. Programs like H-1B visas are really important.

And it’s not just about international talent—we need to make sure people from underrepresented communities here in the US see national security as a space where they can contribute. If they don’t feel valued or trusted, they’re less likely to come in and stay.

What do you see as the biggest challenge the Department of Defense faces today?

I do think trust—or the lack of it—is a big challenge. Whether it’s trust in government broadly or specific concerns like military spending, audits, or politicization of the uniformed military, that issue manifests in everything DOD is trying to get done. It affects our ability to work with Congress, with allies, with industry, and with the American people. If people don’t believe you’re working in their interest, it’s hard to get anything done.

How a bankruptcy judge can stop a genetic privacy disaster

Stop me if you’ve heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind. 

The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people’s genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users’ data. But there’s still a small chance of writing a better ending for users.

After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That’s good advice for the individual user (and you can read how to do so here). But the reality is most people won’t do it. Maybe they won’t see the recommendations to do so. Maybe they don’t know why they should be worried. Maybe they have long since abandoned an account that they don’t even remember exists. Or maybe they’re just occupied with the chaos of everyday life. 

This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe’s meager revenue—fewer than 4% of people who took tests pay for subscriptions—it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data. 

This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry. Because genetic data is forever. Contact information can go stale over time: you can always change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data—whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you—will have it tomorrow and the next day and all the days after that. 

Users with exposed genetic data are not only vulnerable to harm today; they’re vulnerable to exploits that might be developed in the future. 

While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms. 

In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe’s user data, that mistake is likely permanent and irreparable. 

All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment’s notice. And the burden falls on users to keep track of it all, or just give up.

Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted. 

Bankruptcy proceedings involving personal data don’t have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.) 

The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The US Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.

Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe’s failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn’t expect or want. 

An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe’s new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.

Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person’s genetic privacy just to add a few dollars in value to the bankruptcy estate?    
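
The math is easy to check. Here it is as a quick back-of-the-envelope sketch in Python, using only the figures reported above (both of which are approximations):

```python
# Back-of-the-envelope value per user implied by the higher offer,
# using the figures reported above (both are approximate).
higher_offer = 74_700_000  # reported valuation, in dollars
users = 15_000_000         # people with genetic data at 23andMe
print(f"${higher_offer / users:.2f} per user")  # -> $4.98 per user
```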

Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Lawmakers allow them to. Federal and state inaction allows companies to dissolve promises about protecting Americans’ most sensitive data at a moment’s notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, nearly two decades later, that era may really be almost here. But with privacy laws like ours, who would trust it?

Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.

What is Signal? The messaging app, explained.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

With the recent news that the Atlantic’s editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren’t supposed to use it for military planning, does that mean I shouldn’t use it either?

The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn’t use Signal.

Read on to find out why.

What is Signal?

Signal is an app you can install on your iPhone or Android phone, or on your computer. It lets you exchange secure texts and images and make secure phone or video calls, one on one or in groups, just like iMessage, Google Messages, WhatsApp, and other chat apps.

Installing Signal is a two-minute process—again, it’s designed to work just like other popular texting apps.

Why is it a problem for government officials to use Signal?

Signal is very secure—as we’ll see below, it’s the best option out there for having private conversations with your friends on your cell phone.

But you shouldn’t use it if you have a legal obligation to preserve your messages, such as while doing government business, because Signal prioritizes privacy over the ability to preserve data. It’s designed to securely delete data when you’re done with it, not to keep it. This makes it uniquely unsuited to complying with public records laws.

You also shouldn’t use it if your phone might be a target of sophisticated hackers, because Signal can only do its job if the phone it is running on is secure. If your phone has been hacked, then the hacker can read your messages regardless of what software you are running.

This is why you shouldn’t use Signal to discuss classified material or military plans. In military communication, your civilian phone is always assumed to be hacked by adversaries, so you should instead use communication equipment that is safer—equipment that is physically guarded and designed to do only one job, making it harder to hack.

What about everyone else?

Signal is designed from the bottom up as a very private space for conversation. Cryptographers are very confident that as long as your phone is otherwise secure, no one can read your messages.

Why should you want that? Because private spaces for conversation are very important. In the US, the First Amendment recognizes, in the right to freedom of assembly, that we all need private conversations among our own selected groups in order to function.

And you don’t need the First Amendment to tell you that. You know, just like everyone else, that you can have important conversations in your living room, bedroom, church coffee hour, or meeting hall that you could never have on a public stage. Signal gives us the digital equivalent of that—it’s a space where we can talk, among groups of our choice, about the private things that matter to us, free of corporate or government surveillance. Our mental health and social functioning require that.

So if you’re not legally required to record your conversations, and not planning secret military operations, go ahead and use Signal—you deserve the privacy.

How do we know Signal is secure?

People often give up on finding digital privacy and end up censoring themselves out of caution. So are there really private ways to talk on our phones, or should we just assume that everything is being read anyway?

The good news is: most of us aren’t individually targeted by hackers, and we really can still have private conversations.

Signal is designed to ensure that if you know your phone and the phones of other people in your group haven’t been hacked (more on that later), you don’t have to trust anything else. It uses many techniques from the cryptography community to make that possible.

Most important and well-known is “end-to-end encryption,” which means that messages can be read only on the devices involved in the conversation and not by servers passing the messages back and forth.
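
For the curious, here is a toy sketch of that idea in Python, using the PyNaCl encryption library. To be clear, this is not Signal’s actual protocol or code, just an illustration of the core principle: the private keys never leave the two phones, so a relay server only ever handles ciphertext.

```python
# A toy illustration of end-to-end encryption with the PyNaCl library
# (pip install pynacl). This is NOT Signal's actual protocol: it only
# shows the core idea that private keys never leave the phones, so
# the server relaying messages sees only ciphertext.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()  # lives only on Alice's phone
bob = PrivateKey.generate()    # lives only on Bob's phone

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# A relay server sees only this ciphertext and cannot read it.
# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```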

But Signal uses other techniques to keep your messages private and safe as well. For example, it goes to great lengths to make it hard for the Signal server itself to know who else you are talking to (a feature known as “sealed sender”), or for an attacker who records traffic between phones to later decrypt the traffic by seizing one of the phones (“perfect forward secrecy”).
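
Perfect forward secrecy can sound abstract, so here is a deliberately simplified sketch of the underlying trick, often called a ratchet. Signal’s real design (the double ratchet) is far more elaborate; this toy Python version only shows why seizing a phone later doesn’t expose earlier traffic: each message key is derived from a chain that moves in one direction and is deleted after use.

```python
# A deliberately simplified "ratchet", the trick behind forward
# secrecy. Signal's real double ratchet is far more elaborate; this
# sketch only shows why old messages stay safe: keys come from a
# one-way chain and are deleted after use.
import hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    # Derive a one-time message key plus the next chain key. SHA-256
    # only runs forward, so today's chain key cannot be run backward
    # to recover yesterday's message keys.
    message_key = hashlib.sha256(chain_key + b"message").digest()
    next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain_key

chain = b"initial shared secret"  # agreed via key exchange (not shown)
for text in [b"hi", b"lunch?", b"noon works"]:
    key, chain = ratchet_step(chain)
    # ...encrypt `text` with `key`, send it, then delete `key`...

# Someone who seizes the phone now finds only the latest chain value,
# which reveals nothing about the keys that protected past messages.
```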

These are only a few of many security properties built into the protocol, which is well enough designed and vetted for other messaging apps, such as WhatsApp and Google Messages, to use the same one.

Signal is also designed so we don’t have to trust the people who make it. The source code for the app is available online and, because of its popularity as a security tool, is frequently audited by experts.

And even though its security does not rely on our trust in the publisher, it does come from a respected source: the Signal Technology Foundation, a nonprofit whose mission is to “protect free expression and enable secure global communication through open-source privacy technology.” The app itself, and the foundation, grew out of a community of prominent privacy advocates. The foundation was started by Moxie Marlinspike, a cryptographer and longtime advocate of secure private communication, and Brian Acton, a cofounder of WhatsApp.

Why do people use Signal over other text apps? Are other ones secure?

Many apps offer end-to-end encryption, and it’s not a bad idea to use them for a measure of privacy. But Signal is a gold standard for private communication because it is secure by default: Unless you add someone you didn’t mean to, it’s very hard for a chat to accidentally become less secure than you intended.

That’s not necessarily the case for other apps. For example, iMessage conversations are sometimes end-to-end encrypted, but only if your chat has “blue bubbles,” and they aren’t encrypted in iCloud backups by default. Google Messages are sometimes end-to-end encrypted, but only if the chat shows a lock icon. WhatsApp is end-to-end encrypted but logs your activity, including “how you interact with others using our Services.”

Signal is careful not to record who you are talking with, to offer ways to reliably delete messages, and to keep messages secure even in online phone backups. This focus demonstrates the benefits of an app coming from a nonprofit focused on privacy rather than a company that sees security as a “nice to have” feature alongside other goals.

(Conversely, and as a warning, using Signal makes it rather easier to accidentally lose messages! Again, it is not a good choice if you are legally required to record your communication.)

Applications like WhatsApp, iMessage, and Google Messages do offer end-to-end encryption and can offer much better security than nothing. The worst option of all is regular SMS text messages (“green bubbles” on iOS)—those are sent unencrypted and are likely collected by mass government surveillance.

Wait, how do I know that my phone is secure?

Signal is an excellent choice for privacy if you know that the phones of everyone you’re talking with are secure. But how do you know that? It’s easy to give up on a feeling of privacy if you never feel good about trusting your phone anyway.

One good place to start for most of us is simply to make sure your phone is up to date. Governments often do have ways of hacking phones, but hacking up-to-date phones is expensive and risky and reserved for high-value targets. For most people, simply having your software up to date will remove you from a category that hackers target.

If you’re a potential target of sophisticated hacking, then don’t stop there. You’ll need extra security measures, and guides from the Freedom of the Press Foundation and the Electronic Frontier Foundation are a good place to start.

But you don’t have to be a high-value target to value privacy. The rest of us can do our part to re-create that private living room, bedroom, church, or meeting hall simply by using an up-to-date phone with an app that respects our privacy.

Jack Cushman is a fellow of the Berkman Klein Center for Internet and Society and directs the Library Innovation Lab at Harvard Law School Library. He is an appellate lawyer, computer programmer, and former board member of the ACLU of Massachusetts.

At RightsCon in Taipei, activists reckon with a US retreat from promoting digital rights 

Last week, I joined over 3,200 digital rights activists, tech policymakers, and researchers and a smattering of tech company representatives in Taipei at RightsCon, the world’s largest digital rights conference. 

Human rights conferences can be sobering, to say the least. They highlight the David vs. Goliath situation of small civil society organizations fighting to center human rights in decisions about technology, sometimes challenging the priorities of much more powerful governments and technology companies. 

But this year’s RightsCon, the 13th since the event began as the Silicon Valley Human Rights Conference in 2011, felt especially urgent. This was primarily due to the shocking, rapid gutting of the US federal government by the Elon Musk–led DOGE initiative, and the reverberations this stands to have around the world. 

At RightsCon, the cuts to USAID were top of mind; the development agency has long been one of the world’s biggest funders of digital rights work, from ensuring that the internet stays on during elections and crises around the world to supporting digital security hotlines for human rights defenders and journalists targeted by surveillance and hacking. Now, the agency is facing budget cuts of over 90% under the Trump administration. 

The withdrawal of funding is existential for the international digital rights community—and follows other trends that are concerning for those who support a free and safe internet. “We are unfortunately witnessing the erosion … of multistakeholderism, with restrictions on civil society participation, democratic backsliding worldwide, and companies divesting from policies and practices that uphold human rights,” Nikki Gladstone, RightsCon’s director, said in her opening speech.

Cindy Cohn, director of the Electronic Frontier Foundation, which advocates for digital civil liberties, was more blunt: “The scale and speed of the attacks on people’s rights is unprecedented. It’s breathtaking,” she told me. 

But it’s not just funding cuts that will curtail digital rights globally. As various speakers highlighted throughout the conference, the United States government has gone from taking the leading role in supporting an open and safe internet to demonstrating how to dismantle it. Here’s what speakers are seeing:  

The Trump administration’s policies are being weaponized in other countries 

On Tuesday, February 25, just before RightsCon began, Serbian law enforcement raided the offices of four local civil society organizations focused on government accountability, citing Musk and Trump’s (unproven) accusations of fraud at USAID. 

“The (Serbian) Special Anti-Corruption Department … contacted the US Justice Department for information concerning USAID over the abuse of funds, possible money laundering, and the improper spending of American taxpayers’ funds in Serbia,” Nenad Stefanovic, a state prosecutor, explained on a TV broadcast announcing the move. 

“Since Trump’s second administration, we cannot count on them [the platforms] to do even the bare minimum anymore.” —Yasmin Curzi

For RightsCon attendees, it was a clear—and familiar—example of how oppressive regimes find or invent reasons to go after critics. Only now, by using the Trump administration’s justifications for revoking USAID’s funding, they hope to gain an extra veneer of credibility. 

Ashnah Kalemera, a program manager for CIPESA, a Ugandan nonprofit that runs technology for civic participation initiatives across Africa, says Trump and Musk’s attacks on USAID are providing false narratives that “justify arrests, intimidations, and continued clampdowns on civil society organizations—organizations that obviously no longer have the resources to do their work anyway.” 

Yasmin Curzi, a professor at FGV Law School in Rio de Janeiro and an expert on digital law, says that American politics are also being weaponized in Brazil’s domestic affairs. There, she told me, right-wing figures have been “lifting signs at protests like ‘Trump save us!’ and ‘Protect our First Amendment rights,’ which they don’t have.” Instead, Brazil’s Internet Bill of Rights seeks to balance protections on user privacy and speech with criminal liabilities for certain types of harmful content, including disinformation and hate speech. 

Despite the differing legal frameworks, in late February the Trump Media & Technology Group, which operates Truth Social, and the video platform Rumble tried to enforce US-style speech protections in Brazil. They sued Brazilian Supreme Court justice Alexandre de Moraes for banning a Brazilian digital influencer who had fled to the United States to avoid arrest in connection with allegations that he has spread disinformation and hate. Truth Social and Rumble allege that Moraes has violated the United States’ free speech laws. 

(A US judge has since ruled that because the Brazilian court had yet to officially serve Truth Social and Rumble as required under international treaty, the platforms’ lawsuit was premature and the companies do not have to comply with the order; the judge did not comment on the merits of the argument, though the companies have claimed victory.)

Platforms are becoming less willing to engage with local communities 

In addition to how Trump and Musk might inspire other countries to act, speakers also expressed concern that their trolling and use of dehumanizing language and imagery will inspire more online hate (and attacks), just at a time when platforms are rolling back human content moderation. Experts warn that automated content moderation systems trained on English-language data sets are unable to detect much of this hateful language. 

India, for example, has a history of platforms recognizing the necessity of using local-language moderators but failing to employ them, leading to real-world violence. Yet now the attitude of some internet users there has become “If the president of the United States can do it, why can’t I?” says Sadaf Wani, a communications manager for IT for Change, an Indian nonprofit research and advocacy organization, who organized a RightsCon panel on hate speech and AI.

As her panel noted, these online attacks are accompanied by an increase in automated and even fully AI-based content moderation, largely trained on North American data sets, that are known to be less effective at identifying problematic speech in languages other than English. Even the latest large language models have difficulties identifying local slang, cultural context, and the use of non-English characters. “AI is not as smart as it looks, so you can use very obvious [and] very basic tricks to evade scrutiny. So I think that’s what’s also amplifying hate speech further,” Wani explains. 

Others, including Curzi from Brazil and Kalemera from Uganda, described similar trends playing out in their countries—and they say changes in platform policy and a lack of local staff make content moderation even harder. Platforms used to have humans in the loop whom users could reach out to for help, Curzi said. She pointed to community-driven moderation efforts on Twitter, which she considered to be a relative success at curbing hate speech until Elon Musk bought the site and fired some 4,400 contract workers—including the entire team that worked with community partners in Brazil. 

Curzi and Kalemera both say that things have gotten worse since. Last year, Trump threatened Meta CEO Mark Zuckerberg with “spend[ing] the rest of his life in prison” if Meta attempted to interfere with—i.e., fact-check claims about—the 2024 election. This January, Meta announced that it was replacing its fact-checking program with X-style community notes, a move widely seen as capitulation to pressure from the new administration.

Shortly after Trump’s second inauguration, social platforms skipped a hearing on hate speech and disinformation held by the Brazilian attorney general. While this may have been expected of Musk’s X, it represented a big shift for Meta, Curzi told me. “Since Trump’s second administration, we cannot count on them [the platforms] to do even the bare minimum anymore,” she adds. Meta and X did not respond to requests for comment.

The US’s retreat is creating a moral vacuum 

Then there’s simply the fact that the United States can no longer be counted on to support digital rights defenders or journalists under attack. That creates a vacuum, and it’s not clear who else is willing—or able—to step into it, participants said. 

The US used to be the “main support for journalists in repressive regimes,” both financially and morally, one journalism trainer said during a last-minute session added to the schedule to address the funding crisis. The fact that there is now no one to turn to, she added, makes the current situation “not comparable to the past.” 

But that’s not to say that everything was doom and gloom. “You could feel the solidarity and community,” says the EFF’s Cohn. “And having [the conference] in Taiwan, which lives in the shadow of a very powerful, often hostile government, seemed especially fitting.”

Indeed, if there was one theme that was repeated throughout the event, it was a shared desire to rethink and challenge who holds power. 

Multiple sessions, for example, focused on strategies to counter both unresponsive Big Tech platforms and repressive governments. Meanwhile, during the session on AI and hate-speech moderation, participants concluded that one way of creating a safer internet would be for local organizations to build localized language models that are context- and language-specific. At the very least, said Curzi, we could move to other, smaller platforms that match our values, because at this point, “the big platforms can do anything they want.” 

Do you have additional information on how DOGE is affecting digital rights globally? Please use a non-work device and get in touch at tips@technologyreview.com or with the reporter on Signal: eileenguo.15.

Technology shapes relationships. Relationships shape technology.

Greetings from a cold winter day.

As I write this letter, we are in the early stages of President Donald Trump’s second term. The inauguration was exactly one week ago, and already an image from that day has become an indelible symbol of presidential power: a photo of the tech industry’s great data barons seated front and center at the swearing-in ceremony.

Elon Musk, Sundar Pichai, Jeff Bezos, and Mark Zuckerberg all sat shoulder to shoulder, almost as if on display, in front of some of the most important figures of the new administration. They were not the only tech leaders in Washington, DC, that week. Tim Cook, Sam Altman, and TikTok CEO Shou Zi Chew also put in appearances during the president’s first days back in action. 

These are tycoons who lead trillion-dollar companies, set the direction of entire industries, and shape the lives of billions of people all over the world. They are among the richest and most powerful people who have ever lived. And yet, just like you and me, they need relationships to get things done. In this case, with President Trump. 

Those tech barons showed up because they need relationships more than personal status, more than access to capital, and sometimes even more than ideas. Some of those same people—most notably Zuckerberg—had to make profound breaks with their own pasts in order to forge or preserve a relationship with the incoming president. 

Relationships are the stories of people and systems working together. Sometimes by choice. Sometimes for practicality. Sometimes by force. Too often, for purely transactional reasons. 

That’s why we’re exploring relationships in this issue. Relationships connect us to one another, but also to the machines, platforms, technologies, and systems that mediate modern life. They’re behind the partnerships that make breakthroughs possible, the networks that help ideas spread, and the bonds that build trust—or at least access. In this issue, you’ll find stories about the relationships we forge with each other, with our past, with our children (or not-quite-children, as the case may be), and with technology itself. 

Rhiannon Williams explores the relationships people have formed with AI chatbots. Some of these are purely professional, others more complicated. This kind of relationship may be novel now, but it’s something we will all take for granted in just a few years. 

Also in this issue, Antonio Regalado delves into our relationship with the ecological past and the way ancient DNA is being used not only to learn new truths about who we are and where we came from but also, potentially, to address modern challenges of climate and disease.

In an extremely thought-provoking piece, Jessica Hamzelou examines people’s relationships with the millions of IVF embryos in storage. Held in cryopreservation tanks around the world, these embryos wait in limbo, in ever growing numbers, as we attempt to answer complicated ethical and legal questions about their existence and preservation. 

Turning to the workplace, Rebecca Ackermann explores how our relationships with our employers are often mediated through monitoring systems. As she writes, what may be more important than the privacy implications is how the data they collect is “shifting the relationships between workers and managers” as algorithms “determine hiring and firing, promotion and ‘deactivation.’” Good luck with that.

Thank you for reading. As always, I value your feedback. So please, reach out and let me know what you think. I really don’t want this to be a transactional relationship. 

Warmly,

Mat Honan
Editor in Chief
mat.honan@technologyreview.com

The foundations of America’s prosperity are being dismantled

Ever since World War II, the US has been the global leader in science and technology—and benefited immensely from it. Research fuels American innovation and the economy in turn. Scientists around the world want to study in the US and collaborate with American scientists to produce more of that research. These international collaborations play a critical role in American soft power and diplomacy. The products Americans can buy, the drugs they have access to, the diseases they’re at risk of catching—all are directly related to the strength of American research and its connections to the world’s scientists.

That scientific leadership is now being dismantled, according to more than 10 federal workers who spoke to MIT Technology Review, as the Trump administration—spearheaded by Elon Musk’s Department of Government Efficiency (DOGE)—slashes personnel, programs, and agencies. Meanwhile, the president himself has gone after relationships with US allies.   

These workers come from several agencies, including the Departments of State, Defense, and Commerce, the US Agency for International Development, and the National Science Foundation. All of them occupy scientific and technical roles, many of which the average American has never heard of but which are nevertheless critical, coordinating research, distributing funding, supporting policymaking, or advising diplomacy.

They warn that dismantling the behind-the-scenes scientific research programs that backstop American life could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. The US took nearly a century to craft its rich scientific ecosystem; if the unraveling that has taken place over the past month continues, Americans will feel the effects for decades to come. 

Most of the federal workers spoke on condition of anonymity because they were not authorized to talk or for fear of being targeted. Many are completely stunned and terrified by the scope and totality of the actions. While every administration brings its changes, keeping the US a science and technology leader has never been a partisan issue. No one predicted the wholesale assault on these foundations of American prosperity.

“If you believe that innovation is important to economic development, then throwing a wrench in one of the most sophisticated and productive innovation machines in world history is not a good idea,” says Deborah Seligsohn, an assistant professor of political science at Villanova University who worked for two decades in the State Department on science issues. “They’re setting us up for economic decline.”

The biggest funder of innovation

The US currently has the most top-quality research institutes in the world. This includes world-class universities like MIT (which publishes MIT Technology Review) and the University of California, Berkeley; national labs like Oak Ridge and Los Alamos; and federal research facilities run by agencies like the National Oceanic and Atmospheric Administration and the Department of Defense. Much of this network was developed by the federal government after World War II to bolster the US position as a global superpower. 

Before the Trump administration’s wide-ranging actions, which now threaten to slash federal research funding, the government remained by far the largest supporter of scientific progress. Outside of its own labs and facilities, it funded more than 50% of research and development across higher education, according to data from the National Science Foundation. In 2023, that came to nearly $60 billion out of the $109 billion that universities spent on basic science and engineering. 

The return on these investments is difficult to measure. It can often take years or decades for this kind of basic science research to have tangible effects on the lives of Americans and people globally, and on the US’s place in the world. But history is littered with examples of the transformative effect that this funding produces over time. The internet and GPS were first developed through research backed by the Department of Defense, as was the quantum dot technology behind high-resolution QLED television screens. The neural networks that underpin nearly all modern AI systems were substantially supported by the National Science Foundation well before they were useful or commercially relevant. The decades-long drug discovery process that led to Ozempic was incubated by the Department of Veterans Affairs and the National Institutes of Health. Microchips. Self-driving cars. MRIs. The flu shot. The list goes on and on.

In her 2013 book The Entrepreneurial State, Mariana Mazzucato, a leading economist studying innovation at University College London, found that every major technological transformation in the US, from electric cars to Google to the iPhone, can trace its roots back to basic science research once funded by the federal government. If the past offers any lesson, that means every major transformation in the future could be shortchanged with the destruction of that support.

The Trump administration’s distaste for regulation will arguably be a boon in the short term for some parts of the tech industry, including crypto and AI. But the federal workers said the president’s and Musk’s undermining of basic science research will hurt American innovation in the long run. “Rather than investing in the future, you’re burning through scientific capital,” an employee at the State Department said. “You can build off the things you already know, but you’re not learning anything new. Twenty years later, you fall behind because you stopped making new discoveries.”

A global currency

The government doesn’t just give money, either. It supports American science in numerous other ways, and the US reaps the returns. The Department of State helps attract the best students from around the world to American universities. Amid stagnating growth in the number of homegrown STEM PhD graduates, recruiting foreign students remains one of the strongest pathways for the US to expand its pool of technical talent, especially in strategic areas like batteries and semiconductors. Many of those students stay for years, if not the rest of their lives; even if they leave the country, they’ve already spent some of their most productive years in the US and will retain a wealth of professional connections with whom they’ll collaborate, thereby continuing to contribute to US science.

The State Department also establishes agreements between the US and other countries and helps broker partnerships between American and international universities. That helps scientists collaborate across borders on everything from global issues like climate change to research that requires equipment on opposite sides of the world, such as the measurement of gravitational waves.

The international development work of USAID in global health, poverty reduction, and conflict alleviation—now virtually shut down in its entirety—was designed to build up goodwill toward the US globally; it improved regional stability for decades. In addition to its inherent benefits, this allowed American scientists to safely access diverse geographies and populations, as well as plant and animal species not found in the US. Such international interchange played just as critical a role as government funding in many crucial inventions.

Several federal agencies, including the Centers for Disease Control and Prevention, the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration, also help collect and aggregate critical data on disease, health trends, air quality, weather, and more from disparate sources that feed into the work of scientists across the country.

The National Institutes of Health, for example, has since 2015 been running the Precision Medicine Initiative, the only effort of its kind to collect extensive and granular health data from over 1 million Americans who volunteer their medical records, genetic history, and even Fitbit data to help researchers understand health disparities and develop personalized and more effective treatments for disorders from heart and lung disease to cancer. The data set, which is too expensive for any one university to assemble and maintain, has already been used in hundreds of papers that will lay the foundation for the next generation of life-saving pharmaceuticals.

Beyond fueling innovation, a well-supported science and technology ecosystem bolsters US national security and global influence. When people want to study at American universities, attend international conferences hosted on American soil, or move to the US to work or to found their own companies, the US stays the center of global innovation activity. This ensures that the country continues to get access to the best people and ideas, and gives it an outsize role in setting global scientific practices and priorities. US research norms, including academic freedom and a robust peer review system, become global research norms that lift the overall quality of science. International agencies like the World Health Organization take significant cues from American guidance.

US scientific leadership has long been one of the country’s purest tools of soft power and diplomacy as well. Countries keen to learn from the American innovation ecosystem and to have access to American researchers and universities have been more prone to partner with the US and align with its strategic priorities.

Just one example: Science diplomacy has long played an important role in maintaining the US’s strong relationship with the Netherlands, which is home to ASML, the only company in the world that can produce the extreme ultraviolet lithography machines needed to produce the most advanced semiconductors. These are critical for both AI development and national security.

International science cooperation has also served as a stabilizing force in otherwise difficult relationships. During the Cold War, the US and USSR collaborated on the Apollo-Soyuz mission, and the US and Russia have continued to work together on the International Space Station; during the recent heightened economic competition between the US and China, the countries have remained each other’s top scientific partners. “Actively working together to solve problems that we both care about helps maintain the connections and the context but also helps build respect,” Seligsohn says.

The federal government itself is a significant beneficiary of the country’s convening power for technical expertise. Among other things, experts both inside and outside the government support its sound policymaking in science and technology. During the US Senate AI Insight Forums, co-organized by Senator Chuck Schumer through the fall of 2023, for example, the Senate heard from more than 150 experts, many of whom were born abroad and studying at American universities, working at or advising American companies, or living permanently in the US as naturalized American citizens.

Federal scientists and technical experts at government agencies also work on wide-ranging goals critical to the US, including building resilience in the face of an increasingly erratic climate; researching strategic technologies such as next-generation battery technology to reduce the country’s reliance on minerals not found in the US; and monitoring global infectious diseases to prevent the next pandemic.

“Every issue that the US faces, there are people that are trying to do research on it and there are partnerships that have to happen,” the State Department employee said.

A system in jeopardy

Now the breadth and velocity of the Trump administration’s actions have led to an unprecedented assault on every pillar upholding American scientific leadership.

For starters, the purging of tens of thousands—and perhaps soon hundreds of thousands—of federal workers is removing scientists and technologists from the government and paralyzing the ability of critical agencies to function. Across multiple agencies, science and technology fellowship programs, designed to bring in talented early-career staff with advanced STEM degrees, have shuttered. Many other federal scientists were among the thousands who were terminated as probationary employees, a status they held because of the way scientific roles are often contractually structured.

Some agencies that were supporting or conducting their own research, including the National Institutes of Health and the National Science Foundation, are no longer functionally operational. USAID has effectively been shuttered, eliminating a bastion of US expertise, influence, and credibility overnight.

“Diplomacy is built on relationships. If we’ve closed all these clinics and gotten rid of technical experts in our knowledge base inside the government, why would any foreign government have respect for the US in our ability to hold our word and in our ability to actually be knowledgeable?” a terminated USAID worker said. “I really hope America can save itself.”

Now the Trump administration has sought to reverse some terminations after discovering that many were key to national security, including nuclear safety employees responsible for designing, building, and maintaining the country’s nuclear weapons arsenal. But many federal workers I spoke to can no longer imagine staying in the public sector. Some are considering going into industry. Others are wondering whether it will be better to move abroad.

“It’s just such a waste of American talent,” said Fiona Coleman, a terminated federal scientist, her voice cracking with emotion as she described the long years of schooling and training she and her colleagues went through to serve the government.

Many fear the US has also singlehandedly kneecapped its own ability to attract talent from abroad. Over the last 10 years, even as American universities have continued to lead the world, many universities in other countries have rapidly leveled up. That includes those in Canada, where liberal immigration policies and lower tuition fees have driven a 200% increase in international student enrollment over the last decade, according to Anna Esaki-Smith, cofounder of a higher-education research consultancy called Education Rethink and author of Make College Your Superpower.

Germany has also seen an influx, thanks to a growing number of English-taught programs and strong connections between universities and German industry. Chinese students, who once represented the largest share of foreign students in the US, are increasingly staying at home or opting to study in places like Hong Kong, Singapore, and the UK.

During the first Trump administration, many international students were already more reluctant to come to the US because of the president’s hostile rhetoric. With the return and rapid escalation of that rhetoric, Esaki-Smith is hearing from some universities that international students are declining their admissions offers.

Add to that the other recent developments—the potential dramatic cuts in federal research funding, the deletion of scores of rich public data sets on health and the environment, the clampdown on academic freedom for research that appears related to diversity, equity, and inclusion, and the fear that these restrictions could ultimately encompass other politically charged topics like climate change or vaccines—and many more international science and engineering students could decide to head elsewhere.

“I’ve been hearing this increasingly from several postdocs and early-career professors, fearing the cuts in NIH or NSF grants, that they’re starting to look for funding or job opportunities in other countries,” Coleman told me. “And then we’re going to be training up the US’s competitors.”

The attacks could similarly weaken the productivity of those who stay at American universities. While many of the Trump administration’s actions are now being halted and scrutinized by US judges, the chaos has weakened a critical prerequisite for tackling the toughest research problems: a long-term stable environment. With reports that the NSF is combing through research grants for words like “women,” “diverse,” and “institutional” to determine whether they violate President Trump’s executive order on DEIA programs, a chilling effect is also setting in among federally funded academics uncertain whether they’ll get caught in the dragnet.

To scientists abroad, the situation in the US government has marked American institutions and researchers as potentially unreliable partners, several federal workers told me. If international researchers think collaborations with the US can end at any moment when funds are abruptly pulled or certain topics or keywords are suddenly blacklisted, many of them could steer clear and look to other countries. “I’m really concerned about the instability we’re showing,” another employee at the State Department said. “What’s the point in even engaging? Because science is a long-term initiative and process that outlasts administrations and political cycles.”

Meanwhile, international scientists have far more options these days for high-caliber colleagues to collaborate with outside America. In recent years, for example, China has made a remarkable ascent to become a global peer in scientific discoveries. By some metrics, it has even surpassed the US: in 2019 it began accounting for a larger share than the US of the top 1% of most-cited papers globally, often called the Nobel Prize tier, and it has continued to improve the quality of the rest of its research.

Where Chinese universities can also entice international collaborators with substantial resources, the US is more limited in its ability to offer tangible funding, the State employee said. Until now, the US has maintained its advantage in part through the prestige of its institutions and its more open cultural norms, including stronger academic freedom. But several federal scientists warn that this advantage is dissipating. 

“America is made up of so many different people contributing to it. There’s such a powerful global community that makes this country what it is, especially in science and technology and academia and research. We’re going to lose that; there’s not a chance in the world that we’re not going to lose that through stuff like this,” says Brigid Cakouros, a federal scientist who was also terminated from USAID. “I have no doubt that the international science community will ultimately be okay. It’ll just be a shame for the US to isolate themselves from it.”

Doctors and patients are calling for more telehealth. Where is it?

Maggie Barnidge, 18, has been managing cystic fibrosis her whole life. But not long after she moved out of her home state to start college, she came down with pneumonia and went into liver failure. She desperately wanted to get in touch with her doctor back home, whom she’d been seeing since she was diagnosed as an infant and who knew which treatments worked best for her—but he wasn’t allowed to practice telemedicine across state lines. The local hospital, and doctors unfamiliar with her complicated medical history, would have to do. 

“A lot of what Maggie needed wasn’t a physical exam,” says Barnidge’s mother, Elizabeth. “It was a conversation: What tests should I be getting next? What did my labs look like? She just needed her doctor who knew her well.”  

But doctors are generally allowed to practice medicine only where they have a license. This means they cannot treat patients across state lines unless they also have a license in the patient’s state, and most physicians have one or two licenses at most. This has led to what Ateev Mehrotra, a physician and professor of health policy at the Brown University School of Public Health, calls an “inane” norm: A woman with a rare cancer boarding an airplane, at the risk of her chemotherapy-weakened immune system, to see a specialist thousands of miles away, for example, or a baby with a rare disease who’s repeatedly shuttled between Arizona and Massachusetts. 

While eligible physicians can currently apply to practice in states besides their own, this can be a burdensome and impractical process. For instance, let’s say you are an oncologist in Minnesota, and a patient from Kansas arrives at your office seeking treatment. The patient will probably want to do follow-up appointments via telehealth when possible, to avoid having to travel back to Minnesota. 

But if you are not yet licensed to practice in Kansas (and you probably are not), you can’t suddenly start practicing medicine there. You would first need to apply to do so, either through the Interstate Medical Licensure Compact (designed to streamline the process of obtaining a full license in another state, but at a price of $700 per year) or with Kansas’s board of medicine directly. Maybe this poses too great an administrative hurdle for you—you work long hours, and how will you find time to compile the necessary paperwork? Doctors can’t reasonably be expected to apply for licensure in all 50 states. The patient, then, either loses out on care or must shoulder the burden of traveling to Minnesota for a doctor’s visit. The only way to access telehealth, if that’s what the patient prefers, would be to cross into the state and log in—an option that might still be preferable to traveling all the way to the doctor’s office. These obstacles to care have led to a growing belief among health-care providers, policymakers, and patients that under certain circumstances, doctors should be able to treat their patients anywhere. 

Lately, telehealth has proved to be widely popular, too. The coronavirus emergency in 2020 served as proof of concept, demonstrating that new digital platforms for medicine were feasible—and often highly effective. One study showed that telehealth accounted for nearly a quarter of contacts between patients and providers during the first four months of the pandemic (up from 0.3% during the same period in 2019), and among Medicare users, nearly half had used telehealth in 2020—a 63-fold increase. This swift and dramatic shift came about because Congress passed legislation and the Centers for Medicare and Medicaid Services changed its rules to make more telehealth visits temporarily eligible for reimbursement (the payments a health-care provider receives from an insurance company for providing medical services), while state boards of medicine relaxed the licensing restrictions. Now, more providers were able to offer telehealth, and more patients were eager to receive medical care without leaving their homes.

Though in-person care remains standard, telehealth has gained a significant place in US medicine, increasing from 0.1% of total Medicare visits in 2019 to 5.3% in 2020 and 3.5% in 2021. By the end of 2023, more than one in 10 Medicare patients were still using telehealth. And in some specialties the rate is much higher: 37% of all mental-health visits in the third quarter of 2023 were telemedicine, as well as 10% of obstetric appointments, 10% of transplant appointments, and 11% of infectious-disease appointments. 

“Telehealth has broadened our ability to provide care in ways not imaginable prior to the pandemic,” says Tara Sklar, faculty director of the health law and policy program at the University of Arizona James E. Rogers College of Law. 

Traditionally, patients and providers alike have been skeptical that telehealth care can meet the standards of an in-person appointment. However, most people advocating for telehealth aren’t arguing that it should completely replace visiting your doctor, explains Carmel Shachar, director of Harvard Law School’s Health Law and Policy Clinic. Rather, “it’s a really useful way to improve access to care.” Digital medicine could help address a gap in care for seniors by eliminating the need for them to make an arduous journey to the doctor’s office; many older adults find they’re more likely to keep their follow-up appointments when they can do them remotely. Telemedicine could also help address the equity issues facing hourly employees, who might not be able to take a half or full day off work to attend an in-person appointment. For them, the offer of a video call might make the difference between seeking and not seeking help.

“It’s a modality that we’re not using to its fullest potential because we’re not updating our regulations to reflect the digital age,” Shachar says.

Last December, Congress extended most of the provisions increasing Medicare coverage for telehealth through the end of March 2025, including the assurances that patients can be in their homes when they receive care and that they don’t need to be in a rural area to be eligible for telemedicine. 

“We would love to have these flexibilities made permanent,” says Helen Hughes, medical director for the Johns Hopkins Office of Telemedicine. “It’s confusing to explain to our providers and patients the continued regulatory uncertainty and news articles implying that telehealth is at risk, only to have consistent extensions for the last five years. This uncertainty leads providers and patients to worry that this type of care is not permanent and probably stifles innovation and investment by health systems.” 

In the meantime, several strategies are being considered to facilitate telehealth across state lines. Some places—like Maryland, Virginia, and Washington, DC—offer “proximal reciprocity,” meaning that a physician licensed in any of those states can more efficiently be licensed in the others. And several states, like Arkansas and Idaho, say that out-of-state doctors can generally practice telemedicine within their borders as long as they are licensed in good standing in another state and are using the technology to provide follow-up care. Expanding on these ideas, some advocates say that an ideal approach might look similar to how we regulate driving across state lines: A driver’s license from one state generally permits you to drive anywhere in the country as long as you have a good record and obey the rules of the road in the state that you’re in. Another idea is to create a telemedicine-specific version of the Interstate Medical Licensure Compact (which deals only with full medical licenses) in which qualifying physicians can register to practice telehealth among all participating states via a centralized compact.

For the foreseeable future, telehealth policy in the US is locked in what Mehrotra calls “hand-to-hand warfare”—states duking it out within their own legislatures to try to determine rules and regulations for administering telemedicine. Meanwhile, advocates are also pushing for uniformity between states, as with the Uniform Law Commission’s Telehealth Act of 2022, which set out consistent terminology so that states can adopt similar telehealth laws. 

“We’ve always advanced our technologies, like what I can provide as a doctor—meds, tests, surgeries,” Mehrotra says. “But in 2024, the basic structure of how we deliver that care is very similar to 1964.” That is, we still ask people to come to a doctor’s office or emergency department for an in-person visit. 

“That’s what excites me about telehealth,” he says. “I think there’s the potential that we can deliver care in a better way.” 

Isabel Ruehl is a writer based in New York and an assistant editor at Harper’s Magazine.

Congress used to evaluate emerging technologies. Let’s do it again.

At about the time when personal computers charged into cubicle farms, another machine muscled its way into human resources departments and became a staple of routine employment screenings. By the early 1980s, some 2 million Americans annually found themselves strapped to a polygraph—a metal box that, in many people’s minds, detected deception. Most of those tested were not suspected crooks or spooks. 

Then the US Office of Technology Assessment, an independent office that had been created by Congress about a decade earlier to serve as its scientific consulting arm, got involved. The office reached out to Boston University researcher Leonard Saxe with an assignment: Evaluate polygraphs. Tell us the truth about these supposed truth-telling devices.

And so Saxe assembled a team of about a dozen researchers, including Michael Saks of Boston College, to begin a systematic review. The group conducted interviews, pored over existing studies, and embarked on new lines of research. A few months later, the OTA published a technical memo, “Scientific Validity of Polygraph Testing: A Research Review and Evaluation.” Despite the tests’ widespread use, the memo dutifully reported, “there is very little research or scientific evidence to establish polygraph test validity in screening situations, whether they be preemployment, preclearance, periodic or aperiodic, random, or ‘dragnet.’” These machines could not detect lies. 

Four years later, in 1987, critics at a congressional hearing invoked the OTA report as authoritative, comparing polygraphs derisively to “tea leaf reading or crystal ball gazing.” Congress soon passed strict limits on the use of polygraphs in the workplace. 

Over its 23-year history, the OTA would publish some 750 reports—lengthy, interdisciplinary assessments of specific technologies that proposed means of maximizing their benefits and minimizing harms. Their subjects included electronic surveillance, genetic engineering, hazardous-waste disposal, and remote sensing from outer space. Congress set its course: The office initiated studies only at the request of a committee chairperson, a ranking minority leader, or its 12-person bipartisan board. 

The investigations remained independent; staffers and consultants from both inside and outside government collaborated to answer timely and sometimes politicized questions. The reports addressed worries about alarming advances and tamped down scary-sounding hypotheticals. Some of those concerns no longer keep policymakers up at night. For instance, “Do Insects Transmit AIDS?” A 1987 OTA report correctly suggested that they don’t.

The office functioned like a debunking arm. It sussed out the snake oil. Lifted the lid on the Mechanical Turk. The reports saw through the alluring gleam of overhyped technologies. 

In the years since its unceremonious defunding, perennial calls have gone out: Rouse the office from the dead! And with advances in robotics, big data, and AI systems, these calls have taken on a new level of urgency. 

Like polygraphs, chatbots and search engines powered by so-called artificial intelligence come with a shimmer and a sheen of magical thinking. And if we’re not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. 

A resurrected OTA might be the perfect body to rein in dangerous and dangerously overhyped technologies. “That’s what Congress needs right now,” says Ryan Calo at the University of Washington’s Tech Policy Lab and the Center for an Informed Public, “because otherwise Congress is going to, like, take Sam Altman’s word for everything, or Eric Schmidt’s.” (The CEO of OpenAI and the former CEO of Google have both testified before Congress.) Leaving it to tech executives to educate lawmakers is like having the fox tell you how to build your henhouse. Wasted resources and inadequate protections might be only the start. 

A man administers a lie detector test to a job applicant in 1976. A 1983 report from the OTA debunked the efficacy of polygraphs. LIBRARY OF CONGRESS

No doubt independent expertise still exists. Congress can turn to the Congressional Research Service, for example, or the National Academies of Sciences, Engineering, and Medicine. Other federal entities, such as the Office of Management and Budget and the Office of Science and Technology Policy, have advised the executive branch (and still existed as we went to press). “But they’re not even necessarily specialists,” Calo says, “and what they’re producing is very lightweight compared to what the OTA did. And so I really think we need OTA back.”

What exists today, as one researcher puts it, is a “diffuse and inefficient” system. There is no central agency that wholly devotes itself to studying emerging technologies in a serious and dedicated way and advising the country’s 535 elected officials about potential impacts. The digestible summaries Congress receives from the Congressional Research Service provide insight but are no replacement for the exhaustive technical research and analytic capacity of a fully staffed and funded think tank. There’s simply nothing like the OTA, and no single entity replicates its incisive and instructive guidance. But there’s also nothing stopping Congress from reauthorizing its budget and bringing it back, except perhaps the lack of political will. 

“Congress Smiles, Scientists Wince”

The OTA had not exactly been an easy sell to the research community in 1972. At the time, it was only the third independent congressional agency ever established. As the journal Science put it in a headline that year, “The Office of Technology Assessment: Congress Smiles, Scientists Wince.” One researcher from Bell Labs told Science that he feared legislators would embark on “a clumsy, destructive attempt to manage national R&D,” but mostly the cringe seemed to stem from uncertainty about what exactly technology assessment entailed. 

The OTA’s first report, in 1974, examined bioequivalence, an essential part of evaluating generic drugs. Regulators were trying to figure out whether these drugs could be deemed comparable to their name-brand equivalents without lengthy and expensive clinical studies demonstrating their safety and efficacy. Unlike all the OTA’s subsequent assessments, this one listed specific policy recommendations, such as clarifying what data should be required in order to evaluate a generic drug and to ensure uniformity and standardization in the regulatory approval process. The Food and Drug Administration later incorporated these recommendations into its own submission requirements.

From then on, though, the OTA did not take sides. The office had not been set up to advise Congress on how to legislate. Rather, it dutifully followed through on its narrowly focused mandate: Do the research and provide policymakers with a well-reasoned set of options that represented a range of expert opinions.

Perhaps surprisingly, given the rise of commercially available PCs, in the first decade of its existence the OTA produced only a few reports on computing. One 1976 report touched on the automated control of trains. Others examined computerized x-ray imaging, better known as CT scans; computerized crime databases; and the use of computers in medical education. Over time, the office’s output steadily increased, eventually averaging 32 reports a year. Its budget swelled to $22 million; its staff peaked at 143. 

While it’s sometimes said that the future impact of a technology is beyond anyone’s imagination, several findings proved prescient. A 1982 report on electronic funds transfer, or EFT, predicted that financial transactions would increasingly be carried out electronically (an obvious challenge to paper currency and hard-copy checks). Another predicted that email, or what was then termed “electronic message systems,” would disrupt snail mail and the bottom line of the US Postal Service. 

In vetting the digital record-keeping that provides the basis for routine background checks, the office commissioned a study that produced a statistic still cited today, suggesting that only about a quarter of the records sent to the FBI were “complete, accurate, and unambiguous.” It was an indicator of a growing issue: computational systems that, despite seeming automated, are not free of human bias and error. 

Many of the OTA’s reports focus on specific events or technologies. One looked at Love Canal, the upstate New York neighborhood polluted by hazardous waste (a disaster, the report said, that had not yet been remediated by the Environmental Protection Agency’s Superfund cleanup program); another studied the Boston Elbow, a cybernetic limb (the verdict: decidedly mixed). The office examined the feasibility of a water pipeline connecting Alaska to California, the health effects of the Kuwait oil fires, and the news media’s use of satellite imagery. The office also took on issues we grapple with today—evaluating automatic record checks for people buying guns, scrutinizing the compensation for injuries allegedly caused by vaccines, and pondering whether we should explore Mars. 

The OTA made its biggest splash in 1984, when it published a background report criticizing the Strategic Defense Initiative (commonly known as “Star Wars”), a pet project of the Reagan administration that involved several exotic missile defense systems. Its lead author was the MIT physicist Ashton Carter, later secretary of defense in the second Obama administration. And the report concluded that a “perfect or near-perfect” system to defend against nuclear weapons was basically beyond the realm of the plausible; the possibility of deployment was “so remote that it should not serve as the basis of public expectation or national policy.” 

The report generated lots of clicks, so to speak, especially after the administration claimed that the OTA had divulged state secrets. These charges did not hold up and Star Wars never materialized, although there have been recent efforts to beef up the military’s offensive capacity in space. But for the work of an advisory body that did not play politics, the report made a big political hubbub. By some accounts, its subsequent assessments became so neutral that the office risked receding to the point of invisibility.

From a purely pragmatic point of view, the OTA wrote to be understood. A dozen reports from the early ’90s received “Blue Pencil Awards,” given by the National Association of Government Communicators for “superior government communication products and those who produce them.” None are copyrighted. All were freely reproduced and distributed, both in print and electronically. The entire archive is stored on CD-ROM, and digitized copies are still freely available for download on a website maintained by Princeton University, like an earnest oasis of competence in the cloistered world of federal documents. 

Assessments versus accountability

Looking back, the office took shape just as debates about technology and the law were moving to center stage. 

While the gravest of dangers may have changed in form and in scope, the central problem remains: Laws and lawmakers cannot keep up with rapid technological advances. Policymakers often face a choice between regulating with insufficient facts and doing nothing. 

In 2018, Adam Kinzinger, then a Republican congressman from Illinois, confessed to a panel on quantum computing: “I can understand about 50% of the things you say.” To some, his admission underscored a broader tech illiteracy afflicting those in power. But other commentators argued that members of Congress should not be expected to know it all—all the more reason to restaff an office like the OTA.

A motley chorus of voices has clamored for an OTA 2.0 over the years. One doctor wrote that the office could help address the “discordance between the amount of money spent and the actual level of health.” Tech fellows have said bringing it back could help Congress understand machine learning and AI. Hillary Clinton, as a Democratic presidential hopeful, floated the possibility of resurrecting the OTA in 2017.

But Meg Leta Jones, a law scholar at Georgetown University, argues that assessing new technologies is the least of our problems. The kind of work the OTA did is now done by other agencies, such as the FTC, FCC, and National Telecommunications and Information Administration, she says: “The energy I would like to put into the administrative state is not on assessments, but it’s on actual accountability and enforcement.”

She sees the existing framework as built for the industrial age, not a digital one, and is among those calling for a more ambitious overhaul. There seems to be little political appetite for the creation of new agencies anyway. That said, Jones adds, “I wouldn’t be mad if they remade the OTA.” 

No one can know whether or how future administrations will address AI, Mars colonization, the safety of vaccines, or, for that matter, any other emerging technology that the OTA investigated in an earlier era. But if the new administration makes good on plans to deregulate many sectors, it’s worth noting some historic echoes. In 1995, when conservative politicians defunded the OTA, they did so in the name of efficiency. Critics of that move contend that the office probably saved the government money and argue that the purported cost savings associated with its elimination were largely symbolic. 

Jathan Sadowski, a research fellow at Monash University in Melbourne, Australia, who has written about the OTA’s history, says the conditions that led to its demise have only gotten more partisan, more politicized. This makes it difficult to envision a place for the agency today, he says—“There’s no room for the kind of technocratic naïveté that would see authoritative scientific advice cutting through the noise of politics.”

Congress purposely cut off its scientific advisory arm as part of a larger shake-up led by Newt Gingrich, then the House Speaker, whose pugilistic brand of populist conservatism promised “drain the swamp”–type reforms and launched what critics called a “war on science.” As a rationale for why the office was defunded, he said, “We constantly found scientists who thought what they were saying was not correct.” 

Once again, Congress smiled and scientists winced. Only this time it was because politicians had pulled the plug. 

Peter Andrey Smith, a freelance reporter, has contributed to Undark, the New Yorker, the New York Times Magazine, and WNYC’s Radiolab.

Inside the race to archive the US government’s websites

Over the past three weeks, the new US presidential administration has taken down thousands of government web pages related to public health, environmental justice, and scientific research. The mass takedowns stem from the new administration’s push to remove government information related to diversity and “gender ideology,” as well as scrutiny of various government agencies’ practices. 

USAID’s website is down. So are sites related to it, like childreninadversity.gov, as well as thousands of pages from the Census Bureau, the Centers for Disease Control and Prevention, and the Office of Justice Programs.

“We’ve never seen anything like this,” says David Kaye, professor of law at the University of California, Irvine, and the former UN Special Rapporteur for freedom of opinion and expression. “I don’t think any of us know exactly what is happening. What we can see is government websites coming down, databases of essential public interest. The entirety of the USAID website.”

But as government web pages go dark, a collection of organizations is trying to archive as much data and information as possible before it’s gone for good. The hope is to keep a record of what has been lost so that scientists and historians can use it in the future.

Data archiving is generally considered to be nonpartisan, but the recent actions of the administration have spurred some in the preservation community to stand up. 

“I consider the actions of the current administration an assault on the entire scientific enterprise,” says Margaret Hedstrom, professor emerita of information at the University of Michigan.

Various organizations are trying to scrounge up as much data as possible. One of the largest projects is the End of Term Web Archive, a nonpartisan coalition of many organizations that aims to capture a copy of public-facing federal government websites at the end of each presidential term. The EoT Archive allows individuals to nominate specific websites or data sets for preservation.

“All we can do is collect what has been published and archive it and make sure it’s publicly accessible for the future,” says James Jacobs, US government information librarian at Stanford University, who is one of the people running the EoT Archive. 

Other organizations are taking a specific angle on data collection. For example, the Open Environmental Data Project (OEDP) is trying to capture data related to climate science and environmental justice. “We’re trying to track what’s getting taken down,” says Katie Hoeberling, director of policy initiatives at OEDP. “I can’t say with certainty exactly how much of what used to be up is still up, but we’re seeing, especially in the last couple weeks, an accelerating rate of data getting taken down.” 

In addition to tracking what’s happening, OEDP is actively backing up relevant data. It actually began this process in November, to capture the data at the end of former president Biden’s term. But efforts have ramped up in the last couple weeks. “Things were a lot calmer prior to the inauguration,” says Cathy Richards, a technologist at OEDP. “It was the second day of the new administration that the first platform went down. At that moment, everyone realized, ‘Oh, no—we have to keep doing this, and we have to keep working our way down this list of data sets.’”

This kind of work is crucial because the US government holds invaluable international and national data relating to climate. “These are irreplaceable repositories of important climate information,” says Lauren Kurtz, executive director of the Climate Science Legal Defense Fund. “So fiddling with them or deleting them means the irreplaceable loss of critical information. It’s really quite tragic.”

Like the OEDP, the Catalyst Cooperative is trying to make sure data related to climate and energy is stored and accessible for researchers. Both are part of the Public Environmental Data Partners, a collective of organizations dedicated to preserving federal environmental data. “We have tried to identify data sets that we know our communities make use of to make decisions about what electricity we should procure or to make decisions about resiliency in our infrastructure planning,” says Christina Gosnell, cofounder and president of Catalyst.

Archiving can be a difficult task; there is no one easy way to store all the US government’s data. “Various federal agencies and departments handle data preservation and archiving in a myriad of ways,” says Gosnell. There’s also no one who has a complete list of all the government websites in existence. 

This hodgepodge of data means that in addition to using web crawlers, tools that capture snapshots of websites and data, archivists often have to scrape data manually as well. Sometimes a data set sits behind a login page or a captcha that prevents scraper tools from pulling the data. Web scrapers can also miss key features on a site: pages often link out to other pieces of information that a scrape won’t capture, or a scrape may simply fail because of how a website is structured. Having a person in the loop to double-check the scraper’s work or capture data manually is often the only way to ensure that the information is properly collected.
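To make the mechanics concrete, here is a minimal sketch of a snapshot crawler, written with only Python’s standard library. It is an illustration, not any archiving group’s actual tooling: real projects use purpose-built crawlers that produce WARC files, and the example URL below is invented.

```python
# Minimal sketch: save a raw copy of a page, then list its outbound
# links so a person in the loop can check what the crawl might miss.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets found in anchor tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def snapshot(url, out_path):
    # The saved raw bytes of the page are the archival record.
    with urllib.request.urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(html)
    # Surface the links for manual review: anything behind a login or
    # captcha, or generated by scripts, won't be in this snapshot.
    parser = LinkExtractor(url)
    parser.feed(html)
    return parser.links

# Hypothetical usage:
# links = snapshot("https://data.example.gov/index.html", "snapshot.html")
# print("\n".join(links))
```

Even this toy version shows why manual checks matter: it records only the HTML it fetched, so data sets reachable through logins, captchas, or script-generated links never make it into the archive.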

And there are questions about whether scraping the data will really be enough. Restoring websites and complex data sets is often not a simple process. “It becomes extraordinarily difficult and costly to attempt to rescue and salvage the data,” says Hedstrom. “It is like draining a body of blood and expecting the body to continue to function. The repairs and attempts to recover are sometimes insurmountable where we need continuous readings of data.”

“All of this data archiving work is a temporary Band-Aid,” says Gosnell. “If data sets are removed and are no longer updated, our archived data will become increasingly stale and thus ineffective at informing decisions over time.” 

These effects may be long-lasting. “You won’t see the impact of that until 10 years from now, when you notice that there’s a gap of four years of data,” says Jacobs. 

Many digital archivists stress the importance of understanding our past. “We can all think about our own family photos that have been passed down to us and how important those different documents are,” says Trevor Owens, chief research officer at the American Institute of Physics and former director of digital services at the Library of Congress. “That chain of connection to the past is really important.”

“It’s our library; it’s our history,” says Richards. “This data is funded by taxpayers, so we definitely don’t want all that knowledge to be lost when we can keep it, store it, potentially do something with it and continue to learn from it.”

Three reasons Meta will struggle with community fact-checking

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the Internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company’s platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta’s platforms are significantly more popular—Facebook alone is used by 70% of Americans, according to Pew).

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed. For example, researchers have found that participants are more likely to challenge content they disagree with politically and that flagging content as false does not reduce engagement, but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed: 

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner—or at all. Consider how a post with a picture of a death cap mushroom and the caption “Tasty” might be handled under Community Notes–style moderation. If an expert in mycology doesn’t see the post, or sees it only after it’s been widely shared, it may not get flagged as “Poisonous, do not eat”—at least not until it’s too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely). 

Crucially, X’s Community Notes aren’t visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won’t show up. X does not make a note visible until there’s agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it’s not foolproof. It still relies on people’s opinions about a note and not on actual facts. Often what’s needed is expertise.
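To see how that visibility rule works, and where it falls short, here is a deliberately simplified sketch in Python. The fixed “A”/“B” clusters, function names, and thresholds are all hypothetical: X’s production system infers rater viewpoints statistically (via matrix factorization) rather than from pre-assigned labels.

```python
# Simplified "bridging" rule: a note is shown only when raters who
# usually disagree with each other both find it helpful.
from dataclasses import dataclass

@dataclass
class Rating:
    cluster: str   # hypothetical viewpoint cluster: "A" or "B"
    helpful: bool  # did this rater find the note helpful?

def note_is_visible(ratings, min_per_cluster=3, min_helpful_share=0.7):
    """Require enough ratings from each cluster, and a large helpful
    share within *both* clusters, before a note is displayed."""
    for cluster in ("A", "B"):
        votes = [r.helpful for r in ratings if r.cluster == cluster]
        if len(votes) < min_per_cluster:
            return False  # not enough input from this viewpoint yet
        if sum(votes) / len(votes) < min_helpful_share:
            return False  # this viewpoint doesn't find the note helpful
    return True

# Cross-cluster agreement -> the note becomes visible.
agreed = [Rating("A", True)] * 4 + [Rating("B", True)] * 3
print(note_is_visible(agreed))     # True

# One-sided support stays hidden, no matter how many votes pile up.
one_sided = [Rating("A", True)] * 50
print(note_is_visible(one_sided))  # False
```

Even in this toy version, a well-supported but one-sided note never surfaces, which is the bias-reducing intent of the design; yet no step anywhere in the pipeline verifies the note’s claims.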

I moderate a community on Reddit called r/AskHistorians. It’s a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time. Sometimes these are straightforward errors. But sometimes there is hateful content that takes experts to recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people—probably with very different voting patterns and very different opinions on a lot of topics—not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren’t experts in history will upvote outdated, truthy-sounding answers that aren’t actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow. 

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes–style program to work, it should make sure that the people with the knowledge to make assessments see the posts and that expertise is accounted for in voting, especially when there’s a misalignment between common understanding and expert knowledge.

2. It won’t work without well-supported volunteers  

Meta’s paid content moderators review the worst of the worst—including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe. 

Community moderators don’t have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse—something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team. If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators’ morale high(ish). 

It’s unclear how Meta’s new moderation system will be structured. If volunteers choose what content they flag, will that replicate X’s problem, where partisanship affects which posts are flagged and how? It’s also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta—the company that is currently being sued for damaging the mental health of its paid content moderators—provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn’t unduly influence the notes).    

3. It can’t work without protections and guardrails 

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can’t do it all on their own. Moderation isn’t just about making decisions on what’s “true” or “false.” It’s also about identifying and responding to other kinds of harmful content. Zuckerberg’s decision is coupled with other changes to Meta’s community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do it when that ecosystem gets poisoned by toxic content.

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators. While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with. 

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can’t do it all on their own. It’s only one tool in the box. 

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn’t what this decision aims to do. It’s a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better. 

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.