There can be no winners in a US-China AI arms race

The United States and China are entangled in what many have dubbed an “AI arms race.” 

In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of “beating China” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.

But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near-equivalent results while using only a small fraction of the compute resources available to the leading Western labs.

The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire.

Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate. 

When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. 

Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.

It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. 

Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.

Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and redirection of US military resources from Ukraine toward China.

Fortunately, a glimmer of hope for a responsible approach to AI collaboration is appearing now: Donald Trump posted on January 17 that he had restarted direct dialogue with Chairman Xi Jinping regarding various areas of collaboration and that, given their past cooperation, the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which has put Trump at odds with sharp China critics in his own administration and in Congress, will offer a preview of whether his efforts to put US-China relations on a less confrontational trajectory can succeed.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.

It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. 

Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. 

The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.
2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors, whether through misinformation campaigns, attacks on telecom, power, or financial systems, or other cyberattacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring much more value to the world, and a more peaceful outcome, than the Manhattan Project for AI being promoted by many in Washington today.
5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.

    Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together. 

    The opportunity to harness AI for the common good is a chance the world cannot afford to miss.


    Alvin Wang Graylin

    Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.

    Paul Triolo

    Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.

    We need to protect the protocol that runs Bluesky

    Last week, when Mark Zuckerberg announced that Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It’s just the latest example of a billionaire flip-flop affecting our social lives on the internet. 

After January 6, 2021, Zuckerberg bragged to Congress about Facebook’s “industry-leading fact-checking program” and banned Donald Trump from the platform. But just two years later, he welcomed Trump back. And last year Zuckerberg privately reassured the conservative congressman Jim Jordan that Meta would no longer demote questionable content while it was being fact-checked.

    Now, not only is Meta ending fact-checking completely; it is loosening rules around hate speech, allowing horrendous personal attacks on migrants and trans people, for example, on its platforms. 

    And Zuckerberg isn’t the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as “the bedrock of a functioning democracy,” has suspended journalists, restored tens of thousands of banned users (including white nationalists), brought back political advertising, and weakened verification and harassment policies. 

    Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns.

    And this has led to a constantly shifting digital environment in which people can lose their communication pathways and livelihoods in a second, with no recourse, as opaque rules change. 

    The internet doesn’t need to be like this. As luck would have it, a new way is emerging just in time. 

    If you’ve heard of Bluesky, you’ve probably heard of it as a clone of Twitter where liberals can take refuge. But under the hood it’s structured fundamentally differently—in a way that could point us to a healthier internet for everyone, regardless of politics or identity. 

    Just like email, Bluesky sits on top of an open protocol, in this case known as the AT Protocol. In practice, that means that anyone can build on it. Just as you wouldn’t need anyone’s permission to start a newsletter company built on email, people are starting to share remixed versions of their social media feeds, built on Bluesky. This sounds like a small thing, but think about all the harms enabled by social media companies’ algorithms in the last decade: insurrection, radicalization, self-harm, bullying. Bluesky enables users to collaborate on verification and moderation by sharing block lists and labels. Letting people shape their own experience of social media is nothing short of revolutionary. 
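To make the “anyone can build on it” point concrete, here is a minimal sketch, in Python with the requests library, of reading a public feed over the AT Protocol’s XRPC interface. It assumes Bluesky’s public AppView host and the app.bsky.feed.getAuthorFeed method as currently documented; treat the endpoint and field names as assumptions to check against the protocol docs rather than a definitive client.

```python
# Minimal sketch: reading a public Bluesky feed over the open AT Protocol.
# Assumes the public AppView host and the app.bsky.feed.getAuthorFeed XRPC
# method as currently documented by Bluesky; no login or API key is needed
# for public data.
import requests

PUBLIC_APPVIEW = "https://public.api.bsky.app/xrpc"

def get_author_feed(handle: str, limit: int = 5) -> list[dict]:
    """Fetch recent public posts from one account via XRPC."""
    resp = requests.get(
        f"{PUBLIC_APPVIEW}/app.bsky.feed.getAuthorFeed",
        params={"actor": handle, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("feed", [])

if __name__ == "__main__":
    for item in get_author_feed("bsky.app"):
        post = item.get("post", {})
        author = post.get("author", {}).get("handle", "?")
        text = post.get("record", {}).get("text", "")
        print(f"@{author}: {text[:80]}")
```

Because the same open endpoints are available to everyone, a custom feed, block list, or moderation label service can be built on exactly this kind of call without asking Bluesky for permission.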

    And importantly, if you decide that you don’t agree with Bluesky’s design and moderation decisions, you can build something else on the same infrastructure and use that instead. This is fundamentally different from the dominant, centralized social media that has prevailed until now.

    At the core of Bluesky’s philosophy is the idea that instead of being centralized in the hands of one person or institution, social media governance should obey the principle of subsidiarity. The Nobel Prize–winning economist Elinor Ostrom found, through studying grassroots solutions to local environmental problems around the world, that some problems are best solved locally, while others are best solved at a higher level. 

    In terms of content moderation, posts related to child sexual abuse or terrorism are best handled by professionals trained to help keep millions or billions safe. But a lot of decisions about speech can be solved in each community, or even user by user as people assemble Bluesky block lists. 

    So all the right elements are currently in place at Bluesky to usher in this new architecture for social media: independent ownership, newfound popularity, a stark contrast with other dominant platforms, and right-minded leadership. But challenges remain, and we can’t count on Bluesky to do this right without support. 

    Critics have pointed out that Bluesky has yet to turn a profit and is currently running on venture capital, the same corporate structure that brought us Facebook, Twitter, and other social media companies. As of now, there’s no option to exit Bluesky and take your data and network with you, because there are no other servers that run the AT Protocol. Bluesky CEO Jay Graber deserves credit for her stewardship so far, and for attempting to avoid the dangers of advertising incentives. But the process by which capitalism degrades tech products is so predictable that Cory Doctorow coined a now-popular term for it: enshittification.

    That’s why we need to act now to secure the foundation of this digital future and make it enshittification-proof. This week, prominent technologists started a new project, which we at New_ Public are supporting, called Free Our Feeds. There are three parts: First, Free Our Feeds wants to create a nonprofit foundation to govern and protect the AT Protocol, outside of Bluesky the company. We also need to build redundant servers so all users can leave with their data or build anything they want—regardless of policies set by Bluesky. Finally, we need to spur the development of a whole ecosystem built on this tech with seed money and expertise. 

    It’s worth noting that this is not a hostile takeover: Bluesky and Graber recognize the importance of this effort and have signaled their approval. But the point is, it can’t rely on them. To free us from fickle billionaires, some of the power has to reside outside Bluesky, Inc. 

    If we get this right, so much is possible. Not too long ago, the internet was full of builders and people working together: the open web. Email. Podcasts. Wikipedia is one of the best examples—a collaborative project to create one of the web’s best free, public resources. And the reason we still have it today is the infrastructure built up around it: The nonprofit Wikimedia Foundation protects the project and insulates it from the pressures of capitalism. When’s the last time we collectively built anything as good?

    We can shift the balance of power and reclaim our social lives from these companies and their billionaires. This is an opportunity to bring much more independence, innovation, and local control to our online conversations. We can finally build the “Wikipedia of social media,” or whatever we want. But we need to act, because the future of the internet can’t depend on whether one of the richest men on Earth wakes up on the wrong side of the bed. 

Eli Pariser is the author of The Filter Bubble and a Co-Director of New_ Public, a nonprofit R&D lab that’s working to reimagine social media.

    Deepti Doshi is a Co-Director of New_ Public and was a director at Meta.

    A New York legislator wants to pick up the pieces of the dead California AI bill

    The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

    Assemblymember Alex Bores hopes his bill, currently an unpublished draft—subject to change—that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law.

    SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

    However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

    Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

    Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

    The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages, in an act with limited human oversight that, if committed by a human, would constitute a crime requiring intent, recklessness, or gross negligence.

    The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

    The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

A different flavor of bill

    The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

    The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

    The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

    The RAISE Act avoids the fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

    Focusing on the wrong issues?

    Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

    Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

    “But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

    Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 10^26 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 10^25 FLOPs.
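For a rough sense of what that threshold means, here is a back-of-the-envelope sketch using the widely cited approximation that training a dense transformer takes roughly 6 × parameters × training tokens FLOPs. The model sizes and token counts below are hypothetical illustrations, not figures from the bill or from any lab.

```python
# Rough back-of-envelope check against the RAISE Act's reported 1e26 FLOP threshold.
# Uses the common approximation: training FLOPs ~= 6 * parameters * training tokens.
# The model sizes below are hypothetical illustrations, not figures from the bill.

THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

examples = {
    "hypothetical 1.8T-param model, 13T tokens": (1.8e12, 13e12),
    "hypothetical 70B-param model, 15T tokens": (70e9, 15e12),
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> covered: {flops > THRESHOLD_FLOPS}")
```

Under that approximation, only models roughly an order of magnitude beyond today’s largest publicly estimated training runs would cross the line, which is consistent with Bores’s claim that the bill targets future frontier models rather than anything that exists now.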

    This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

    Early days

    The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

When asked about the idea of mandated safety plans for AI companies, assemblymember Edward Ra, a Republican who hasn’t yet seen a draft of the new bill, said: “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.”

Ra and Bores co-chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

    Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

    He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

    And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

    Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

    “There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

    Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”

    What’s next for our privacy?

    MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

    Every day, we are tracked hundreds or even thousands of times across the digital world. Cookies and web trackers capture every website link that we click, while code installed in mobile apps tracks every physical location that our devices—and, by extension, we—have visited. All of this is collected, packaged together with other details (compiled from public records, supermarket member programs, utility companies, and more), and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent. 
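As a purely illustrative toy sketch of how that packaging works, the snippet below merges hypothetical web, app-location, and public-record events that share a device identifier into a single profile. Every field name and value here is made up; real brokers work at vastly larger scale, but the joining logic is conceptually this simple.

```python
# Toy illustration (hypothetical data): how separate tracking streams keyed on a
# shared identifier can be merged into one profile.
from collections import defaultdict

web_clicks = [
    {"device_id": "abc-123", "url": "https://example.com/clinic-near-me"},
    {"device_id": "abc-123", "url": "https://example.com/loan-offers"},
]
app_locations = [
    {"device_id": "abc-123", "lat": 40.7128, "lon": -74.0060},
]
public_records = [
    {"device_id": "abc-123", "home_owner": True, "income_band": "50-75k"},
]

profiles: dict[str, dict] = defaultdict(
    lambda: {"urls": [], "locations": [], "attributes": {}}
)

for event in web_clicks:
    profiles[event["device_id"]]["urls"].append(event["url"])
for ping in app_locations:
    profiles[ping["device_id"]]["locations"].append((ping["lat"], ping["lon"]))
for record in public_records:
    attrs = {k: v for k, v in record.items() if k != "device_id"}
    profiles[record["device_id"]]["attributes"].update(attrs)

# One identifier now links browsing history, movement, and public records.
print(profiles["abc-123"])
```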

    A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. While the latest iteration of such a bill, the American Privacy Rights Act of 2024, gained more momentum than previously proposed laws, it became so watered down that it lost support from both Republicans and Democrats before it even came to a vote. 

    There have been some privacy wins in the form of limits on what data brokers—third-party companies that buy and sell consumers’ personal information for targeted advertisements, messaging, and other purposes—can do with geolocation data. 

    These are still small steps, though—and they are happening as increasingly pervasive and powerful technologies collect more data than ever. And at the same time, Washington is preparing for a new presidential administration that has attacked the press and other critics, promised to target immigrants for mass deportation, threatened to seek retribution against perceived enemies, and supported restrictive state abortion laws. This is not even to mention the increased collection of our biometric data, especially for facial recognition, and the normalization of its use in all kinds of ways. In this light, it’s no stretch to say our personal data has arguably never been more vulnerable, and the imperative for privacy has never felt more urgent. 

    So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about (some of) what’s on their mind regarding how our digital data might be traded or protected moving forward. 

    Reining in a problematic industry

    In early December, the Federal Trade Commission announced separate settlement agreements with the data brokers Mobilewalla and Gravy Analytics (and its subsidiary Venntel). Finding that the companies had tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent, the FTC banned the companies from selling such data except in specific circumstances. This follows something of a busy year in regulation of data brokers, including multiple FTC enforcement actions against other companies for similar use and sale of geolocation data, as well as a proposed rule from the Justice Department that would prohibit the sale of bulk data to foreign entities. 

And on the same day that the FTC announced these settlements in December, the Consumer Financial Protection Bureau proposed a new rule that would designate data brokers as consumer reporting agencies, which would trigger stringent reporting requirements and consumer privacy protections. The rule would prohibit the collection and sharing of people’s sensitive information, such as their salaries and Social Security numbers, without “legitimate purposes.” The rule still needs to undergo a 90-day public comment period, and it’s unclear whether it will move forward under the Trump administration. But if it is finalized, it has the power to fundamentally limit how data brokers do business.

    Right now, there just aren’t many limits on how these companies operate—nor, for that matter, clear information on how many data brokerages even exist. Industry watchers estimate there may be 4,000 to 5,000 data brokers around the world, many of which we’ve never heard of—and whose names constantly shift. In California alone, the state’s 2024 Data Broker Registry lists 527 such businesses that have voluntarily registered there, nearly 90 of which also self-reported that they collect geolocation data. 

    All this data is widely available for purchase by anyone who will pay. Marketers buy data to create highly targeted advertisements, and banks and insurance companies do the same to verify identity, prevent fraud, and conduct risk assessments. Law enforcement buys geolocation data to track people’s whereabouts without getting traditional search warrants. Foreign entities can also currently buy sensitive information on members of the military and other government officials. And on people-finder websites, basically anyone can pay for anyone else’s contact details and personal history.  

    Data brokers and their clients defend these transactions by saying that most of this data is anonymized—though it’s questionable whether that can truly be done in the case of geolocation data. Besides, anonymous data can be easily reidentified, especially when it’s combined with other personal information. 

    Digital-rights advocates have spent years sounding the alarm on this secretive industry, especially the ways in which it can harm already marginalized communities, though various types of data collection have sparked consternation across the political spectrum. Representative Cathy McMorris Rodgers, the Republican chair of the House Energy and Commerce Committee, for example, was concerned about how the Centers for Disease Control and Prevention bought location data to evaluate the effectiveness of pandemic lockdowns. Then a study from last year showed how easy (and cheap) it was to buy sensitive data about members of the US military; Senator Elizabeth Warren, a Democrat, called out the national security risks of data brokers in a statement to MIT Technology Review, and Senator John Cornyn, a Republican, later said he was “shocked” when he read about the practice in our story. 

    But it was the 2022 Supreme Court decision ending the constitutional guarantee of legal abortion that spurred much of the federal action last year. Shortly after the Dobbs ruling, President Biden issued an executive order to protect access to reproductive health care; it included instructions for the FTC to take steps preventing information about visits to doctor’s offices or abortion clinics from being sold to law enforcement agencies or state prosecutors.

    The new enforcers

    With Donald Trump taking office in January, and Republicans taking control of both houses of Congress, the fate of the CFPB’s proposed rule—and the CFPB itself—is uncertain. Republicans, the people behind Project 2025, and Elon Musk (who will lead the newly created advisory group known as the Department of Government Efficiency) have long been interested in seeing the bureau “deleted,” as Musk put it on X. That would take an act of Congress, making it unlikely, but there are other ways that the administration could severely curtail its powers. Trump is likely to fire the current director and install a Republican who could rescind existing CFPB rules and stop any proposed rules from moving forward. 

    Meanwhile, the FTC’s enforcement actions are only as good as the enforcers. FTC decisions do not set legal precedent in quite the same way that court cases do, says Ben Winters, a former Department of Justice official and the director of AI and privacy at the Consumer Federation of America, a network of organizations and agencies focused on consumer protection. Instead, they “require consistent [and] additional enforcement to make the whole industry scared of not having an FTC enforcement action against them.” (It’s also worth noting that these FTC settlements are specifically focused on geolocation data, which is just one of the many types of sensitive data that we regularly give up in order to participate in the digital world.)

    Looking ahead, Tiffany Li, a professor at the University of San Francisco School of Law who focuses on AI and privacy law, is worried about “a defanged FTC” that she says would be “less aggressive in taking action against companies.” 

Lina Khan, the current FTC chair, has been the leader of privacy protection action in the US, notes Li, and she’ll soon be leaving. Andrew Ferguson, Trump’s recently named pick to be the next FTC chair, has come out in strong opposition to data brokers: “This type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs,” he wrote in a statement on the Mobilewalla decision, indicating that he is likely to continue action against them. (Ferguson has been serving as a commissioner on the FTC since April 2024.) On the other hand, he has spoken out against using FTC actions as an alternative to privacy legislation passed by Congress. And, of course, this brings us right back around to that other major roadblock: Congress has so far failed to pass such laws—and it’s unclear if the next Congress will either.

    Movement in the states

    Without federal legislative action, many US states are taking privacy matters into their own hands. 

    In 2025, eight new state privacy laws will take effect, making a total of 25 around the country. A number of other states—like Vermont and Massachusetts—are considering passing their own privacy bills next year, and such laws could, in theory, force national legislation, says Woodrow Hartzog, a technology law scholar at Boston University School of Law. “Right now, the statutes are all similar enough that the compliance cost is perhaps expensive but manageable,” he explains. But if one state passed a law that was different enough from the others, a national law could be the only way to resolve the conflict. Additionally, four states—California, Texas, Vermont, and Oregon—already have specific laws regulating data brokers, including the requirement that they register with the state. 

    Along with new laws, says Justin Brookman, the director of technology policy at Consumer Reports, comes the possibility that “we can put some more teeth on these laws.” 

    Brookman points to Texas, where some of the most aggressive enforcement action at the state level has taken place under its Republican attorney general, Ken Paxton. Even before the state’s new consumer privacy bill went into effect in July, Paxton announced the creation of a special task force focused on enforcing the state’s privacy laws. He has since targeted a number of data brokers—including National Public Data, which exposed millions of sensitive customer records in a data breach in August, as well as companies that sell to them, like Sirius XM. 

At the same time, though, Paxton has moved to enforce the state’s strict abortion laws in ways that threaten individual privacy. In December, he sued a New York doctor for sending abortion pills to a Texas woman through the mail. While the doctor is theoretically protected by New York’s shield laws, which provide a safeguard from out-of-state prosecution, Paxton’s aggressive action makes it even more crucial that states enshrine data privacy protections into their laws, says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, an advocacy group. “There is an urgent need for states,” he says, “to lock down our residents’ data, barring companies from collecting and sharing information in ways that can be weaponized against them by out-of-state prosecutors.”

    Data collection in the name of “security”

    While privacy has become a bipartisan issue, Republicans, in particular, are interested in “addressing data brokers in the context of national security,” such as protecting the data of military members or other government officials, says Winters. But in his view, it’s the effects on reproductive rights and immigrants that are potentially the “most dangerous” threats to privacy. 

    Indeed, data brokers (including Venntel, the Gravy Analytics subsidiary named in the recent FTC settlement) have sold cell-phone data to Immigration and Customs Enforcement, as well as to Customs and Border Protection. That data has then been used to track individuals for deportation proceedings—allowing the agencies to bypass local and state sanctuary laws that ban local law enforcement from sharing information for immigration enforcement. 

    “The more data that corporations collect, the more data that’s available to governments for surveillance,” warns Ashley Gorski, a senior attorney who works on national security and privacy at the American Civil Liberties Union.

    The ACLU is among a number of organizations that have been pushing for the passage of another federal law related to privacy: the Fourth Amendment Is Not For Sale Act. It would close the so-called “data-broker loophole” that allows law enforcement and intelligence agencies to buy personal information from data brokers without a search warrant. The bill would “dramatically limit the ability of the government to buy Americans’ private data,” Gorski says. It was first introduced in 2021 and passed the House in April 2024, with the support of 123 Republicans and 93 Democrats, before stalling in the Senate. 

    While Gorski is hopeful that the bill will move forward in the next Congress, others are less sanguine about these prospects—and alarmed about other ways that the incoming administration might “co-opt private systems for surveillance purposes,” as Hartzog puts it. So much of our personal information that is “collected for one purpose,” he says, could “easily be used by the government … to track us.” 

    This is especially concerning, adds Winters, given that the next administration has been “very explicit” about wanting to use every tool at its disposal to carry out policies like mass deportations and to exact revenge on perceived enemies. And one possible change, he says, is as simple as loosening the government’s procurement processes to make them more open to emerging technologies, which may have fewer privacy protections. “Right now, it’s annoying to procure anything as a federal agency,” he says, but he expects a more “fast and loose use of commercial tools.” 

    “That’s something we’ve [already] seen a lot,” he adds, pointing to “federal, state, and local agencies using the Clearviews of the world”—a reference to the controversial facial recognition company. 

    The AI wild card

    Underlying all of these debates on potential legislation is the fact that technology companies—especially AI companies—continue to require reams and reams of data, including personal data, to train their machine-learning models. And they’re quickly running out of it. 

    This is something of a wild card in any predictions about personal data. Ideally, says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, the shortage would lead to ways for consumers to directly benefit, perhaps financially, from the value of their own data. But it’s more likely that “there will be more industry resistance against some of the proposed comprehensive federal privacy legislation bills,” she says. “Companies benefit from the status quo.” 

    The hunt for more and more data may also push companies to change their own privacy policies, says Whitney Merrill, a former FTC official who works on data privacy at Asana. Speaking in a personal capacity, she says that companies “have felt the squeeze in the tech recession that we’re in, with the high interest rates,” and that under those circumstances, “we’ve seen people turn around, change their policies, and try to monetize their data in an AI world”—even if it’s at the expense of user privacy. She points to the $60-million-per-year deal that Reddit struck last year to license its content to Google to help train the company’s AI. 

    Earlier this year, the FTC warned companies that it would be “unfair and deceptive” to “surreptitiously” change their privacy policies to allow for the use of user data to train AI. But again, whether or not officials follow up on this depends on those in charge. 

    So what will privacy look like in 2025? 

    While the recent FTC settlements and the CFPB’s proposed rule represent important steps forward in privacy protection—at least when it comes to geolocation data—Americans’ personal information still remains widely available and vulnerable. 

    Rebecca Williams, a senior strategist at the ACLU for privacy and data governance, argues that all of us, as individuals and communities, should take it upon ourselves to do more to protect ourselves and “resist … by opting out” of as much data collection as possible. That means checking privacy settings on accounts and apps, and using encrypted messaging services. 

    Cahn, meanwhile, says he’ll “be striving to protect [his] local community, working to enact safeguards to ensure that we live up to our principles and stated commitments.” One example of such safeguards is a proposed New York City ordinance that would ban the sharing of any location data originating from within the city limits. Hartzog says that kind of local activism has already been effective in pushing for city bans on facial recognition. 

    “Privacy rights are at risk, but they’re not gone, and it’s not helpful to take an overly pessimistic look right now,” says Li, the USF law professor. “We definitely still have privacy rights, and the more that we continue to fight for these rights, the more we’re going to be able to protect our rights.”

    Science and technology stories in the age of Trump

    Rather than analyzing the news this week, I thought I’d lift the hood a bit on how we make it. 

I’ve spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion’s share not just of the Electoral College but also of the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.

    Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump’s stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we’ve covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions on Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, these are squarely in our strike zone and we would be remiss not to explore the policies and their impact in detail. 

    And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it’s helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months that will likely be, well, chaotic. 

    This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise. 

We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on regarding climate, energy, vaccines, women’s health, IVF, food safety, chips, China, and I’m sure a lot more, that people are going to have all sorts of questions about. Let’s start by making a list of questions we have ourselves. Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It’s important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.

    Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others.

    This election will be a transformative moment for society and the world. Trump didn’t just win, he won a mandate. And he’s going to change the country and the global order as a result.  The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let’s leverage our credibility, not squander it. 

    We are not the resistance. We just want to tell the truth. So let’s take a breath, and then go out there and do our jobs.

    I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that’s especially true now. 

I’m also very interested to hear from our readers: What questions do you have? What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I’m eager to hear from you.



    Now read the rest of The Debrief

    The News

    Palmer Luckey, who was ousted from Facebook over his support for the last Trump administration and went into defense contracting, is poised to grow in influence under a second administration. He recently talked to MIT Technology Review about how the Pentagon is using mixed reality.

    • What does Donald Trump’s relationship with Elon Musk mean for the global EV industry?

    • The Biden administration was perceived as hostile to crypto. The industry can likely expect friendlier waters under Trump

• Some counter-programming: Life-seeking robots could punch through Europa’s icy surface

    • And for one more big take that’s not related to the election: AI vs quantum. AI could solve some of the most interesting scientific problems before big quantum computers become a reality


    The Chat

    Every week I’ll talk to one of MIT Technology Review’s reporters or editors to find out more about what they’ve been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.

    Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger picture point about it paving the way for agents. What does that mean?

Melissa: Microsoft tried to chip away at Google’s search monopoly with Bing, and that didn’t really work. It’s unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT’s usefulness is limited by the fact that it can’t access the internet and doesn’t have the most up-to-date information. By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.

    Mat: What will agents be able to do?

Melissa: AI agents can complete complex tasks autonomously, and the vision is that they will work as a human assistant would — book your flights, reschedule your meetings, help with research, you name it. But I wouldn’t get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can’t interact with computer interfaces, like clicking at stuff, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we’re still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.

    Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?

    Melissa: It’s really hard to say. We are so used to using online search, and it’s surprisingly hard to change people’s behaviors. Unless agents become super reliable and powerful, I don’t think search is going to go away.

    Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?

    Melissa: LOL


    The Recommendation

    I’m just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level. I’m not really a cries-at-movies (or especially at musicals) kind of person but I was wiping my eyes for much of the second act. So were very many people sitting around me. Anyway. If you’re in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here’s a great example.)

    There are more than 120 AI bills in Congress right now

    More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

    They’re pretty varied. One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in their training.  Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

    The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

    Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

    That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

    It can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at which bills are moving along to potentially become law. 

    A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

    Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act was part of the Defense Authorization Act, which invested resources in AI research and provided support for public education and workforce training on AI.

    And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. Among them were bills authorizing the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models; the others focused on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

    “The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

    The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

    Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

    “It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

    Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

    For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

    “Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

    “It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

    Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

    And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

    But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

    “A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

    “We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

    There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

    The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

    The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

    On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

    The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

    After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

    One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

    “I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

    None of the current slate of bills has become law yet. But the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

    “Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

    Update: We added further comments from the Republican spokesperson.

    Why a ruling against the Internet Archive threatens the future of America’s libraries

    I was raised in the 1980s and ’90s, and for my generation and generations before us, the public library was an equalizing force in every town, helping anyone move toward the American dream. In Chantilly, Virginia, where I grew up, it didn’t matter if you didn’t have a computer or your parents lacked infinite money for tutors—you could get a lifetime’s education for free at the public library. A ruling from the US Second Circuit against the Internet Archive and in favor of publisher Hachette has just thrown that promise of equality into doubt by limiting libraries’ access to digital lending.

    To understand why this is so important to the future of libraries, you first have to understand the dire state of library e-book lending. 

    Libraries have traditionally operated on a basic premise: Once they purchase a book, they can lend it out to patrons as much (or as little) as they like. Library copies often come from publishers, but they can also come from donations, used book sales, or other libraries. However the library obtains the book, once the library legally owns it, it is theirs to lend as they see fit. 

    Not so for digital books. To make licensed e-books available to patrons, libraries have to pay publishers multiple times over. First, they must subscribe (for a fee) to aggregator platforms such as Overdrive. Aggregators, like streaming services such as HBO’s Max, have total control over adding or removing content from their catalogue. Content can be removed at any time, for any reason, without input from your local library. The decision happens not at the community level but at the corporate one, thousands of miles from the patrons affected. 

    Then libraries must purchase each individual copy of each individual title that they want to offer as an e-book. These e-book copies are not only priced at a steep markup—up to 300% over consumer retail—but are also time- and loan-limited, meaning the files self-destruct after a certain number of loans. The library then needs to repurchase the same book, at a new price, in order to keep it in stock. 
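    How quickly those costs compound is easier to see with numbers. The following is a purely hypothetical worked example: the up-to-300% markup comes from the paragraph above (read loosely as roughly triple the consumer price), while the $20 retail price and the 26-loan cap are invented for illustration.

\[
\$20 \times 3 = \$60 \ \text{per licensed copy}, \qquad
\frac{\$60}{26\ \text{loans}} \approx \$2.30 \ \text{per checkout},
\]

    after which the license expires and the library must pay again, while a $20 physical copy could keep circulating for years at no extra cost.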

    This upending of the traditional order puts massive financial strain on libraries and the taxpayers that fund them. It also opens up a world of privacy concerns; while libraries are restricted in the reader data they can collect and share, private companies are under no such obligation.

    Some libraries have turned to another solution: controlled digital lending, or CDL, a process by which a library scans the physical books it already has in its collection, makes secure digital copies, and lends those out on a one-to-one “owned to loaned” ratio.  The Internet Archive was an early pioneer of this technique.

    When the digital copy is loaned, the physical copy is sequestered from borrowing; when the physical copy is checked out, the digital copy becomes unavailable. The benefits to libraries are obvious; delicate books can be circulated without fear of damage, volumes can be moved off-site for facilities work without interrupting patron access, and older and endangered works become searchable and can get a second chance at life. Library patrons, who fund their local library’s purchases with their tax dollars, also benefit from the ability to freely access the books.
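    For readers who want to see the mechanics, here is a minimal sketch of the bookkeeping a CDL system has to enforce. It is illustrative only; the class and method names are ours, not those of the Internet Archive’s actual software, but it captures the one-to-one “owned to loaned” rule described above.

```python
class CDLTitle:
    """One title managed under controlled digital lending (CDL).

    Invariant: the number of copies on loan (physical checkouts plus
    digital loans) never exceeds the number of physical copies the
    library actually owns.
    """

    def __init__(self, owned_physical_copies: int):
        self.owned = owned_physical_copies
        self.physical_out = 0  # physical copies currently checked out
        self.digital_out = 0   # digital scans currently loaned

    def _available(self) -> int:
        return self.owned - self.physical_out - self.digital_out

    def lend_digital(self) -> bool:
        # Lending a scan "sequesters" one owned physical copy.
        if self._available() > 0:
            self.digital_out += 1
            return True
        return False  # every owned copy is already spoken for

    def lend_physical(self) -> bool:
        if self._available() > 0:
            self.physical_out += 1
            return True
        return False

    def return_digital(self) -> None:
        self.digital_out = max(0, self.digital_out - 1)

    def return_physical(self) -> None:
        self.physical_out = max(0, self.physical_out - 1)
```

    A library that owns a single copy can lend it on paper or as a scan, but never both at once; that accounting is the entire substance of the “owned to loaned” ratio.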

    Publishers are, unfortunately, not fans of this model, and in 2020 four of them sued the Internet Archive over its CDL program. The suit ultimately focused on the Internet Archive’s lending of 127 books that were already commercially available through licensed aggregators. The publisher plaintiffs accused the Internet Archive of mass copyright infringement, while the Internet Archive argued that its digitization and lending program was a fair use. The trial court sided with the publishers, and on September 4, the Court of Appeals for the Second Circuit affirmed that decision, with some alterations to the underlying reasoning. 

    This decision harms libraries. It locks them into an e-book ecosystem designed to extract as much money as possible while harvesting (and reselling) reader data en masse. It leaves local communities’ reading habits at the mercy of curatorial decisions made by four dominant publishing companies thousands of miles away. It steers Americans away from one of the few remaining bastions of privacy protection and funnels them into a surveillance ecosystem that, like Big Tech, becomes more dangerous with each passing data breach. And by increasing the price for access to knowledge, it puts up even more barriers between underserved communities and the American dream.

    It doesn’t stop there. This decision also renders the fair use doctrine—legally crucial in everything from parody to education to news reporting—almost unusable. And while there were occasional moments of sanity (such as recognizing that a “Donate here” button does not magically turn a nonprofit into a commercial enterprise), this decision fractured, rather than clarified, the law. 

    If the courts won’t recognize CDL-based library lending as fair use, then the next step falls to Congress. Libraries are in crisis, caught between shrinking budgets and growing demand for services. Congress must act now to ensure that a pillar of equality in our communities isn’t sacrificed on the altar of profit. 

    Chris Lewis is president and CEO of Public Knowledge, a consumer advocacy group that works to shape technology policy in the public interest. Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works.

    What Japan’s “megaquake” warning really tells us

    MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next.

    On August 8, at 16:42 local time, a magnitude-7.1 earthquake shook southern Japan. The temblor, originating off the shores of the mainland island of Kyūshū, was felt by nearly a million people across the region, and initially, the threat of a tsunami emerged. But only a diminutive wave swept ashore, buildings remained upright, and nobody died. The crisis was over as quickly as it began.

    But then, something new happened. The Japan Meteorological Agency, a government organization, issued a ‘megaquake advisory’ for the first time. This pair of words may appear disquieting—and to some extent, they are. There is a ticking bomb below Japanese waters, a giant crevasse where one tectonic plate dives below another. Stress has been accumulating across this boundary for quite some time, and inevitably, it will do what it has repeatedly done in the past: part of it will violently rupture, generating a devastating earthquake and a potentially huge tsunami.

    The advisory was issued in part because it is possible that the magnitude-7.1 quake is a foreshock (a precursory quake) to a far larger one, a tsunami-making monster that could kill a quarter of a million people.

    The good news, for now, is that scientists think it is very unlikely that the magnitude-7.1 quake is a prelude to a cataclysm. Nothing is certain, but “the chances that this actually is a foreshock are really quite low,” says Harold Tobin, the director of the Pacific Northwest Seismic Network.

    The advisory, ultimately, isn’t prophetic. Its primary purpose is to let the public know that scientists are aware of what’s going on, that they are cognizant of the worst-case scenario—and that everyone else should be mindful of that grim possibility too. Evacuation routes should be memorized, and emergency supplies should be obtained, just in case.

    “Even if the probability is low, the consequences are so high,” says Judith Hubbard, an earthquake scientist at Cornell University. “It makes sense to worry about some of these low probabilities.”

    Japan, which sits atop a tectonic jigsaw, is no stranger to large earthquakes. Just this past New Year’s Day, a magnitude-7.6 temblor convulsed the Noto Peninsula, killing 230 people. But special attention is paid to certain quakes even when they cause no direct harm.

    The August 8 event took place on the Nankai subduction zone: here, the Philippine Sea plate creeps below Japan, which is attached to the Eurasian plate. This type of plate boundary is the sort capable of producing ‘megaquakes’, those of magnitude 8.0 and higher. (The numerical difference may seem small, but the scale is logarithmic: a magnitude-8.0 quake unleashes roughly 32 times more energy than a magnitude-7.0 quake.)
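    The 32-times figure follows from the standard relation between magnitude and radiated seismic energy; the article does not show the arithmetic, so it is sketched here for illustration, with E the radiated energy in joules:

\[
\log_{10} E \approx 1.5\,M + 4.8
\quad\Longrightarrow\quad
\frac{E(M+1)}{E(M)} = 10^{1.5} \approx 31.6
\]

    So each full step in magnitude corresponds to roughly 32 times more energy, and a magnitude-8.0 quake releases about 1,000 times the energy of a magnitude-6.0 quake.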

    Consequently, the Nankai subduction zone (or Nankai Trough) has created several historical tragedies. A magnitude-7.9 quake in 1944 was followed by a magnitude-8.0 quake in 1946; both events were caused by part of the submarine trench jolting. The magnitude-8.6 quake of 1707, however, involved the rupture of the entire Nankai Trough. Thousands died on each occasion.

    Predicting disaster

    Predicting when and where the next major quake will happen anywhere on Earth is currently impossible. Nankai is no different: as recently noted by Hubbard on her blog Earthquake Insights, co-authored with geoscientist Kyle Bradley, there is no set interval between Nankai’s major quakes; the gaps have ranged from days to several centuries.

    But as stress is continually accumulating on that plate boundary, it’s certain that, one day, the Nankai Trough will let loose another great quake, one that could push a vast volume of seawater toward a large swath of western and central Japan, creating a tsunami 100 feet tall. The darkest scenario suggests that 230,000 people could perish, two million buildings would be damaged or destroyed, and the country would be left with a $1.4 trillion bill.

    Naturally, a magnitude-7.1 quake on the Nankai Trough worries scientists. Aftershocks (a series of smaller-magnitude quakes) are a guaranteed feature of potent quakes. But there is a small chance that a large quake will be followed by an even larger one, retrospectively making the first a foreshock.

    “The earthquake changes the stress in the surrounding crust a little bit,” says Hubbard. Using the energy released during the August 8 rupture, and decoding the seismic waves created during the quake, scientists can estimate how much stress gets shifted to surrounding faults.
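    The article does not name the method, but the usual tool for this kind of estimate is the Coulomb failure stress change, computed on each nearby fault from a model of the rupture’s slip:

\[
\Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n
\]

    Here \(\Delta\tau\) is the change in shear stress in the fault’s slip direction, \(\Delta\sigma_n\) is the change in normal stress (positive when the fault is unclamped), and \(\mu'\) is an effective friction coefficient, often taken to be around 0.4. Faults where \(\Delta \mathrm{CFS}\) is positive have been nudged closer to failure; where it is negative, they have been relaxed.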

    The worry is that some of the stress released by one quake gets transferred to a big fault that hasn’t ruptured in a very long time but is ready to fold like an explosive house of cards. “You never know which increment of stress is gonna be the one that pushes it over the edge.”

    Scientists cannot tell whether a large quake is a foreshock until a larger quake occurs. But the possibility remains that the August 8 temblor is a foreshock to something considerably worse. Statistically, it’s unlikely. But there is additional context to why that megaquake advisory was issued: the specter of 2011’s magnitude-9.1 Tōhoku earthquake and tsunami, which killed 18,000 people, still haunts the Japanese government and the nation’s geoscientists. 

    Hubbard explains that, two days before that quake struck off Japan’s eastern seaboard, there was a magnitude-7.2 event in the same area—now known to be a foreshock to the catastrophe. Reportedly, authorities in Japan regretted not highlighting that possibility in advance; doing so might have left people on the eastern seaboard better prepared, and more capable of escaping their fate.

    A sign to get prepared

    In response, Japan’s government created new protocols for signaling that foreshock possibility. Most magnitude-7.0-or-so quakes will not trigger a ‘megaquake advisory’. Only those happening in tectonic settings capable of producing truly gigantic quakes will—and that includes the Nankai Trough.

    Crucially, this advisory is not a warning that a megaquake is imminent. It means: “be ready for when the big earthquake comes,” says Hubbard. Nobody is mandated to evacuate, but they are asked to know their escape routes. Meanwhile, local news reports that nursing homes and hospitals in the region are tallying emergency supplies while moving immobile patients to higher floors or other locations. The high-speed Shinkansen railway trains are running at a reduced maximum speed, and certain flights are carrying more fuel than usual in case they need to divert.

    Earthquake advisories aren’t new. “California has something similar, and has issued advisories before,” says Wendy Bohon, an independent earthquake geologist. In September 2016, for example, a swarm of hundreds of modest quakes prompted the US Geological Survey to publicly advise that, for a week, there was a 0.03% to 1% chance of a magnitude-7.0-or-greater quake rocking the southern San Andreas Fault—an outcome that fortunately didn’t come to pass.

    But this megaquake advisory is Japan’s first, and it will have both pros and cons. “There are economic and social consequences to this,” says Bohon. Some confusion about how to respond has been reported, and widespread cancellations of travel to the region will come with a price tag. 

    But calm reactions to the advisory seem to be the norm, and (ideally) this advisory will result in an increased understanding of the threat of the Nankai Trough. “It really is about raising awareness,” says Adam Pascale, chief scientist at the Seismology Research Centre in Melbourne, Australia. “It’s got everyone talking. And that’s the point.”

    Geoscientists are also increasingly optimistic that the August 8 quake isn’t a harbinger of seismic pandemonium. “This thing is way off to the extreme margin of the actual Nankai rupture zone,” says Tobin—meaning it may not even count as being in the zone of tectonic concern. 

    A blog post co-authored by Shinji Toda, a seismologist at Tōhoku University in Sendai, Japan, also estimates that any stress transferal to the dangerous parts of the Trough is negligible. There is no clear evidence that the plate boundary is acting weirdly. And with each day that goes by, the odds of the August 8 quake being a foreshock drop even further.

    Tech defenses

    But if a megaquake did suddenly emerge, Japan has a technological shield that may mitigate a decent portion of the disaster. 

    Buildings are commonly fitted with dampers that allow them to withstand dramatic quake-triggered shaking. And like America’s West Coast, the entire archipelago has a sophisticated earthquake early-warning system: seismometers close to the quake’s origin listen to its seismic screams, and software makes a quick estimate of the magnitude and shaking intensity of the rupture before beaming an alert to people’s devices, giving them invaluable seconds to get to cover. Automatic countermeasures also slow trains down and control machinery in factories, hospitals, and office buildings to minimize damage from the incoming shaking.
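    Those seconds exist because alerts travel essentially instantly while damaging shaking does not. A rough sketch of the arithmetic, using generic textbook wave speeds rather than figures from the Japanese system:

```python
# Back-of-the-envelope earthquake early-warning lead time.
# Assumptions (illustrative only, not parameters of the JMA system):
# P waves travel ~7 km/s, damaging S waves ~3.5 km/s, and detection
# plus alert processing takes a few seconds after the first P-wave pick.

P_WAVE_KM_S = 7.0
S_WAVE_KM_S = 3.5
PROCESSING_S = 5.0          # magnitude estimate + alert dissemination
NEAREST_STATION_KM = 20.0   # distance from epicenter to the first sensor


def warning_seconds(distance_km: float) -> float:
    """Approximate lead time before strong shaking arrives at a site."""
    alert_out = NEAREST_STATION_KM / P_WAVE_KM_S + PROCESSING_S
    shaking_arrives = distance_km / S_WAVE_KM_S
    return max(0.0, shaking_arrives - alert_out)


if __name__ == "__main__":
    for d in (30, 60, 100, 200):
        print(f"{d:>3} km from the epicenter: ~{warning_seconds(d):.0f} s of warning")
```

    The pattern the sketch shows is the important part: people close to the epicenter sit in a blind zone with little or no warning, while those tens or hundreds of kilometers away can get anywhere from a few seconds to nearly a minute.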

    A tsunami early-warning system also kicks into gear if activated, beaming evacuation notices to phones, televisions, radios, sirens, and myriad specialized receivers in buildings in the afflicted region—giving people several minutes to flee. A megaquake advisory may be new, but for a population highly knowledgeable about earthquake and tsunami defense, it’s just another layer of protection.

    The advisory has had other effects too: it’s caused those in another imperiled part of the world to take notice. The Cascadia Subduction Zone offshore from the US Pacific Northwest is also capable of producing both titanic quakes and prodigious tsunamis. Its last grand performance, in 1700, created a tsunami that not only inundated large sections of the North American coast but also swamped parts of Japan, all the way across the ocean.

    Japan’s megaquake advisory has got Tobin thinking: “What would we do if our subduction zone starts acting weird?” he says—which could include a magnitude-7.0 quake in the Cascadian depths. “There is not a protocol in place the way there is in Japan.” Tobin speculates that a panel of experts would quickly assemble, and a statement, perhaps not too dissimilar to Japan’s own advisory, would emerge from the US Geological Survey. Like Japan, “we would have to be very forthright about the uncertainty,” he says.

    Whether it’s Japan or the US or anywhere else, such advisories aren’t meant to engender panic. “You don’t want people to live their lives in fear,” says Hubbard. But it’s no bad thing to draw attention to the fact that Earth can sometimes be an unforgiving place to live.

    Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How To Kill An Asteroid (October 2024).

    How the US and its allies can rebuild economic security

    A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this principle is well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation-state’s security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

    For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward maturing breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment flows either toward the rapid deployment of existing technologies or toward scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning while doing, and maturing emerging technologies into a next-generation industrial base is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

    While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

    Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

    Drones have played a central role in the war in Ukraine. First-person-view (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine’s frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately gives the country significant influence over the outcome of the war.

    When the history of the war in Ukraine is written, it will be taught as the first true “drone war.” But it should also be understood as an industrial wake-up call: a time when the role of a drone’s component parts was laid bare and the supply chains that support this technology—the knowledge, production operations, and manufacturing processes—were found wanting. Heroic stories will be told of Ukrainian ingenuity in building drones with Chinese parts in basements and on kitchen tables, and we will hear of the country’s attempt to rebuild supply chains dominated by China while in the midst of an existential fight for survival. But in the background, we will also need to understand the ways in which other nations, especially China, controlled the war through long-term economic policies focused on capturing industrial capacity that the US and its allies failed to support through to maturity.

    Disassemble one of the FPV drones found across the battlefields of Ukraine and you will find about seven critical subsystems: power, propulsion, flight control, navigation and sensors (which gather location data and other information to support flight), compute (the processing and memory capacity needed to analyze the vast array of information and then support operations), communications (to connect the drone to the ground), and—supporting it all—the airframe.

    We have created a bill of materials listing the components necessary to build an FPV drone and the common suppliers for those parts.
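    To make that concrete, a bill of materials along those lines could be organized as in the sketch below. The seven subsystem names come from the article; the data structure, field names, and example components are ours, added purely for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    common_suppliers: list[str] = field(default_factory=list)


@dataclass
class Subsystem:
    name: str
    components: list[Component] = field(default_factory=list)


# The seven critical subsystems of an FPV drone named in the text,
# populated with illustrative example parts (not the authors' actual BOM).
FPV_DRONE_BOM = [
    Subsystem("power", [Component("LiPo battery pack")]),
    Subsystem("propulsion", [
        Component("brushless DC motor"),
        Component("sintered NdFeB permanent magnets"),
    ]),
    Subsystem("flight control", [Component("flight controller PCB")]),
    Subsystem("navigation and sensors", [
        Component("satellite navigation receiver"),
        Component("inertial measurement unit"),
    ]),
    Subsystem("compute", [Component("processor"), Component("memory")]),
    Subsystem("communications", [
        Component("radio control link"),
        Component("video transmitter"),
    ]),
    Subsystem("airframe", [Component("frame"), Component("propellers")]),
]
```

    A structure like this makes the supply-chain question easy to ask programmatically: for each component, where do the common suppliers sit, and how many of them are outside China?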

    China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West.  And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

    Propulsion is another case in point. The brushless DC motors used to convert electrical energy from batteries into mechanical energy to rotate drone propellers were invented in the US and Germany. The sintered neodymium-iron-boron (NdFeB) permanent magnets used in these motors were invented in Japan and the US. Today, to our knowledge, all brushless DC motors for drones are made in China. Similarly, China dominates all steps in the processing and manufacture of NdFeB magnets, accounting for 92% of global NdFeB magnet and magnet alloy markets.

    The missing middle of technology investment—insufficient funding for commercial production—is evident in each and every one of these failures, but the loss of expertise is an added dimension. For example, lithium polymer (LiPo) batteries are at the heart of every FPV drone. LiPo batteries use a solid or gel polymer electrolyte and achieve higher specific energy (energy per unit of weight)—a feature that is crucial for lightweight drones. Today, you would be hard-pressed to find a LiPo battery that was not manufactured in China. The experienced workforce behind these companies has contributed to learning curves that have led to a 97% drop in the cost of lithium-ion batteries and a simultaneous 300%-plus increase in battery energy density over the past three decades.

    China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

    Again, this dominance was not inevitable. Most of the critical breakthroughs needed to invent and commercialize Li-ion batteries were made by scientists in North America and Japan. But in comparison to the US and Europe (at least until very recently), China has taken a proactive stance to coordinate, support, and co-invest with strategic industries to commercialize emerging technologies. China’s Ministry of Industry and Information Technology has been at pains to support these domestic industries.

    The case of Li-ion batteries is not an isolated one. The shift to Chinese dominance in the underlying electronics for FPV drones coincides with the period beginning in 2000, when Shenzhen started to emerge as a global hub for low-cost electronics. This trend was amplified by US corporations from Apple, for which low-cost production in China has been essential, to General Electric, which also sought low-cost approaches to maintain the competitive edge of its products. The global nature of supply chains was seen as a strength for US companies, whose comparative advantage lay in the design and integration of consumer products (such as smartphones) with little or no relevance for national security. Only a small handful of “exquisite systems” essential for military purposes were carefully developed within the US. And even those have relied upon global supply chains.

    While the absence of the high-tech industrial capacity needed for economic security is easy to label, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted shared partnerships with allies and partners across both the Atlantic and the Pacific.

    The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

    If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.

    Edlyn V. Levine is CEO and co-founder of a stealth-mode technology startup and an affiliate at the MIT Sloan School of Management and the Department of Physics at Harvard University. Levine was co-founder and CSO of America’s Frontier Fund, and formerly Chief Technologist for the MITRE Corporation.

    Fiona Murray is the William Porter (1967) Professor of Entrepreneurship at the MIT Sloan School of Management, where she works at the intersection of critical technologies, entrepreneurship, and geopolitics. She is the Vice Chair of the NATO Innovation Fund, a multi-sovereign venture fund for defense, security, and resilience, and served for a decade on the UK Prime Minister’s Council for Science and Technology.

    Inside the US government’s brilliantly boring websites

    The United States has an official web design system and a custom typeface. This public design system aims to make government websites not only good-looking but accessible and functional for all.

    Before the internet, Americans may have interacted with the federal government by stepping into grand buildings adorned with impressive stone columns and gleaming marble floors. Today, the neoclassical architecture of those physical spaces has been (at least partially) replaced by the digital architecture of website design—HTML code, tables, forms, and buttons. 

    While people visiting a government website to apply for student loans, research veterans’ benefits, or enroll in Medicare might not notice these digital elements, they play a crucial role. If a website is buggy or doesn’t work on a phone, taxpayers may not be able to access the services they have paid for—which can create a negative impression of the government itself.  

    There are about 26,000 federal websites in the US. Early on, each site had its own designs, fonts, and log-in systems, creating frustration for the public and wasting government resources. The troubled launch of Healthcare.gov in 2013 highlighted the need for a better way to build government digital services. In 2014, President Obama created two new teams to help improve government tech.

    Within the General Services Administration (GSA), a new team called 18F (named for its office at 1800 F Street in Washington, DC) was created to “collaborate with other agencies to fix technical problems, build products, and improve public service through technology.” The team was built to move at the speed of tech startups rather than lumbering bureaucratic agencies. 

    The US Digital Service (USDS) was set up “to deliver better government services to the American people through technology and design.” In 2015, the two teams collaborated to build the US Web Design System (USWDS), a style guide and collection of user interface components and design patterns intended to ensure accessibility and a consistent user experience across government websites. “Inconsistency is felt, even if not always precisely articulated in usability research findings,” Dan Williams, the USWDS program lead, said in an email. 

    Today, the system defines 47 user interface components such as buttons, alerts, search boxes, and forms, each with design examples, sample code, and guidelines such as “Be polite” and “Don’t overdo it.” Now in its third iteration, it is used in 160 government websites. “As of September 2023, 94 agencies use USWDS code, and it powers about 1.1 billion page views on federal websites,” says Williams.

    To ensure clear and consistent typography, the free and open-source typeface Public Sans was created for the US government in 2019. “It started as a design experiment,” says Williams, who designed the typeface. “We were interested in trying to establish an open-source solution space for a typeface, just like we had for the other design elements in the design system.”

    The teams behind Public Sans and the USWDS embrace transparency and collaboration with government agencies and the public.

    And to ensure that the hard-learned lessons aren’t forgotten, the projects embrace continuous improvement. One of the design principles behind Public Sans offers key guidance in this area: “Strive to be better, not necessarily perfect.”

    Jon Keegan writes Beautiful Public Data (beautifulpublicdata.com), a newsletter that curates visually interesting data sets collected by local, state, and federal government agencies.