Doctors and patients are calling for more telehealth. Where is it?

Maggie Barnidge, 18, has been managing cystic fibrosis her whole life. But not long after she moved out of her home state to start college, she came down with pneumonia and went into liver failure. She desperately wanted to get in touch with her doctor back home, whom she’d been seeing since she was diagnosed as an infant and who knew which treatments worked best for her—but he wasn’t allowed to practice telemedicine across state lines. The local hospital, and doctors unfamiliar with her complicated medical history, would have to do. 

“A lot of what Maggie needed wasn’t a physical exam,” says Barnidge’s mother, Elizabeth. “It was a conversation: What tests should I be getting next? What did my labs look like? She just needed her doctor who knew her well.”  

But doctors are generally allowed to practice medicine only where they have a license. This means they cannot treat patients across state lines unless they also have a license in the patient’s state, and most physicians have one or two licenses at most. This has led to what Ateev Mehrotra, a physician and professor of health policy at the Brown University School of Public Health, calls an “inane” norm: a woman with a rare cancer boarding an airplane, for example, risking her chemotherapy-weakened immune system to see a specialist thousands of miles away, or a baby with a rare disease being shuttled repeatedly between Arizona and Massachusetts. 

While eligible physicians can currently apply to practice in states besides their own, this can be a burdensome and impractical process. For instance, let’s say you are an oncologist in Minnesota, and a patient from Kansas arrives at your office seeking treatment. The patient will probably want to do follow-up appointments via telehealth when possible, to avoid having to travel back to Minnesota. 

But if you are not yet licensed to practice in Kansas (and you probably are not), you can’t suddenly start practicing medicine there. You would first need to apply to do so, either through the Interstate Medical Licensure Compact (designed to streamline the process of obtaining a full license in another state, but at a price of $700 per year) or with Kansas’s board of medicine directly. Maybe this poses too great an administrative hurdle for you—you work long hours, and how will you find time to compile the necessary paperwork? Doctors can’t reasonably be expected to apply for licensure in all 50 states. The patient, then, either loses out on care or must shoulder the burden of traveling to Minnesota for a doctor’s visit. The only way to access telehealth, if that’s what the patient prefers, would be to cross into the state and log in—an option that might still be preferable to traveling all the way to the doctor’s office. These obstacles to care have led to a growing belief among health-care providers, policymakers, and patients that under certain circumstances, doctors should be able to treat their patients anywhere. 

Lately, telehealth has proved to be widely popular, too. The coronavirus emergency in 2020 served as proof of concept, demonstrating that new digital platforms for medicine were feasible—and often highly effective. One study showed that telehealth accounted for nearly a quarter of contacts between patients and providers during the first four months of the pandemic (up from 0.3% during the same period in 2019), and among Medicare users, nearly half had used telehealth in 2020—a 63-fold increase. This swift and dramatic shift came about because Congress passed legislation and the Centers for Medicare and Medicaid Services issued emergency waivers that made more telehealth visits temporarily eligible for reimbursement (the payments a health-care provider receives from an insurance company for providing medical services), while state boards of medicine relaxed licensing restrictions. Suddenly, more providers were able to offer telehealth, and more patients were eager to receive medical care without leaving their homes.

Though in-person care remains standard, telehealth has gained a significant place in US medicine, increasing from 0.1% of total Medicare visits in 2019 to 5.3% in 2020 and 3.5% in 2021. By the end of 2023, more than one in 10 Medicare patients were still using telehealth. And in some specialties the rate is much higher: 37% of all mental-health visits in the third quarter of 2023 were conducted via telemedicine, as were 10% of obstetric appointments, 10% of transplant appointments, and 11% of infectious-disease appointments. 

“Telehealth has broadened our ability to provide care in ways not imaginable prior to the pandemic,” says Tara Sklar, faculty director of the health law and policy program at the University of Arizona James E. Rogers College of Law. 

Traditionally, patients and providers alike have been skeptical that telehealth care can meet the standards of an in-person appointment. However, most people advocating for telehealth aren’t arguing that it should completely replace visiting your doctor, explains Carmel Shachar, director of Harvard Law School’s Health Law and Policy Clinic. Rather, “it’s a really useful way to improve access to care.” Digital medicine could help address a gap in care for seniors by eliminating the need for them to make an arduous journey to the doctor’s office; many older adults find they’re more likely to keep their follow-up appointments when they can do them remotely. Telemedicine could also help address the equity issues facing hourly employees, who might not be able to take a half or full day off work to attend an in-­person appointment. For them, the offer of a video call might make the difference between seeking and not seeking help. 

“It’s a modality that we’re not using to its fullest potential because we’re not updating our regulations to reflect the digital age,” Shachar says.

Last December, Congress extended most of the provisions increasing Medicare coverage for telehealth through the end of March 2025, including the assurances that patients can be in their homes when they receive care and that they don’t need to be in a rural area to be eligible for telemedicine. 

“We would love to have these flexibilities made permanent,” says Helen Hughes, medical director for the Johns Hopkins Office of Telemedicine. “It’s confusing to explain to our providers and patients the continued regulatory uncertainty and news articles implying that telehealth is at risk, only to have consistent extensions for the last five years. This uncertainty leads providers and patients to worry that this type of care is not permanent and probably stifles innovation and investment by health systems.” 

In the meantime, several strategies are being considered to facilitate telehealth across state lines. Some places—like Maryland, Virginia, and Washington, DC—offer “proximal reciprocity,” meaning that a physician licensed in any of those states can more efficiently be licensed in the others. And several states, like Arkansas and Idaho, say that out-of-state doctors can generally practice telemedicine within their borders as long as they are licensed in good standing in another state and are using the technology to provide follow-up care. Expanding on these ideas, some advocates say that an ideal approach might look similar to how we regulate driving across state lines: A driver’s license from one state generally permits you to drive anywhere in the country as long as you have a good record and obey the rules of the road in the state that you’re in. Another idea is to create a telemedicine-specific version of the Interstate Medical Licensure Compact (which deals only with full medical licenses) in which qualifying physicians can register to practice telehealth among all participating states via a centralized compact.

For the foreseeable future, telehealth policy in the US is locked in what Mehrotra calls “hand-to-hand warfare”—states duking it out within their own legislatures to try to determine rules and regulations for administering telemedicine. Meanwhile, advocates are also pushing for uniformity between states, as with the Uniform Law Commission’s Telehealth Act of 2022, which set out consistent terminology so that states can adopt similar telehealth laws. 

“We’ve always advanced our technologies, like what I can provide as a doctor—meds, tests, surgeries,” Mehrotra says. “But in 2024, the basic structure of how we deliver that care is very similar to 1964.” That is, we still ask people to come to a doctor’s office or emergency department for an in-person visit. 

“That’s what excites me about telehealth,” he says. “I think there’s the potential that we can deliver care in a better way.” 

Isabel Ruehl is a writer based in New York and an assistant editor at Harper’s Magazine.

Congress used to evaluate emerging technologies. Let’s do it again.

At about the time when personal computers charged into cubicle farms, another machine muscled its way into human resources departments and became a staple of routine employment screenings. By the early 1980s, some 2 million Americans annually found themselves strapped to a polygraph—a metal box that, in many people’s minds, detected deception. Most of those tested were not suspected crooks or spooks. 

Then the US Office of Technology Assessment, an independent office that had been created by Congress about a decade earlier to serve as its scientific consulting arm, got involved. The office reached out to Boston University researcher Leonard Saxe with an assignment: Evaluate polygraphs. Tell us the truth about these supposed truth-telling devices.

And so Saxe assembled a team of about a dozen researchers, including Michael Saks of Boston College, to begin a systematic review. The group conducted interviews, pored over existing studies, and embarked on new lines of research. A few months later, the OTA published a technical memo, “Scientific Validity of Polygraph Testing: A Research Review and Evaluation.” Despite the tests’ widespread use, the memo dutifully reported, “there is very little research or scientific evidence to establish polygraph test validity in screening situations, whether they be preemployment, preclearance, periodic or aperiodic, random, or ‘dragnet.’” These machines could not detect lies. 

Four years later, in 1987, critics at a congressional hearing invoked the OTA report as authoritative, comparing polygraphs derisively to “tea leaf reading or crystal ball gazing.” Congress soon passed strict limits on the use of polygraphs in the workplace. 

Over its 23-year history, the OTA would publish some 750 reports—lengthy, interdisciplinary assessments of specific technologies that proposed means of maximizing their benefits and minimizing harms. Their subjects included electronic surveillance, genetic engineering, hazardous-waste disposal, and remote sensing from outer space. Congress set its course: The office initiated studies only at the request of a committee chairperson, a ranking minority leader, or its 12-person bipartisan board. 

The investigations remained independent; staffers and consultants from both inside and outside government collaborated to answer timely and sometimes politicized questions. The reports addressed worries about alarming advances and tamped down scary-sounding hypotheticals. Some of those concerns no longer keep policymakers up at night. For instance, “Do Insects Transmit AIDS?” A 1987 OTA report correctly suggested that they don’t.

The office functioned like a debunking arm. It sussed out the snake oil. Lifted the lid on the Mechanical Turk. The reports saw through the alluring gleam of overhyped technologies. 

In the years since its unceremonious defunding, perennial calls have gone out: Rouse the office from the dead! And with advances in robotics, big data, and AI systems, these calls have taken on a new level of urgency. 

Like polygraphs, chatbots and search engines powered by so-called artificial intelligence come with a shimmer and a sheen of magical thinking. And if we’re not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. 

A resurrected OTA might be the perfect body to rein in dangerous and dangerously overhyped technologies. “That’s what Congress needs right now,” says Ryan Calo at the University of Washington’s Tech Policy Lab and the Center for an Informed Public, “because otherwise Congress is going to, like, take Sam Altman’s word for everything, or Eric Schmidt’s.” (The CEO of OpenAI and the former CEO of Google have both testified before Congress.) Leaving it to tech executives to educate lawmakers is like having the fox tell you how to build your henhouse. Wasted resources and inadequate protections might be only the start. 

A man administers a lie detector test to a job applicant in 1976. A 1983 report from the OTA debunked the efficacy of polygraphs.
LIBRARY OF CONGRESS

No doubt independent expertise still exists. Congress can turn to the Congressional Research Service, for example, or the National Academies of Sciences, Engineering, and Medicine. Other federal entities, such as the Office of Management and Budget and the Office of Science and Technology Policy, have advised the executive branch (and still existed as we went to press). “But they’re not even necessarily specialists,” Calo says, “and what they’re producing is very lightweight compared to what the OTA did. And so I really think we need OTA back.”

What exists today, as one researcher puts it, is a “diffuse and inefficient” system. There is no central agency that wholly devotes itself to studying emerging technologies in a serious and dedicated way and advising the country’s 535 elected officials about potential impacts. The digestible summaries Congress receives from the Congressional Research Service provide insight but are no replacement for the exhaustive technical research and analytic capacity of a fully staffed and funded think tank. There’s simply nothing like the OTA, and no single entity replicates its incisive and instructive guidance. But there’s also nothing stopping Congress from reauthorizing its budget and bringing it back, except perhaps the lack of political will. 

“Congress Smiles, Scientists Wince”

The OTA had not exactly been an easy sell to the research community in 1972. At the time, it was only the third independent congressional agency ever established. As the journal Science put it in a headline that year, “The Office of Technology Assessment: Congress Smiles, Scientists Wince.” One researcher from Bell Labs told Science that he feared legislators would embark on “a clumsy, destructive attempt to manage national R&D,” but mostly the cringe seemed to stem from uncertainty about what exactly technology assessment entailed. 

The OTA’s first report, in 1974, examined bioequivalence, an essential part of evaluating generic drugs. Regulators were trying to figure out whether these drugs could be deemed comparable to their name-brand equivalents without lengthy and expensive clinical studies demonstrating their safety and efficacy. Unlike all the OTA’s subsequent assessments, this one listed specific policy recommendations, such as clarifying what data should be required in order to evaluate a generic drug and ensure uniformity and standardization in the regulatory approval process. The Food and Drug Administration later incorporated these recommendations into its own submission requirements. 

From then on, though, the OTA did not take sides. The office had not been set up to advise Congress on how to legislate. Rather, it dutifully followed through on its narrowly focused mandate: Do the research and provide policymakers with a well-reasoned set of options that represented a range of expert opinions.

Perhaps surprisingly, given the rise of commercially available PCs, in the first decade of its existence the OTA produced only a few reports on computing. One 1976 report touched on the automated control of trains. Others examined computerized x-ray imaging, better known as CT scans; computerized crime databases; and the use of computers in medical education. Over time, the office’s output steadily increased, eventually averaging 32 reports a year. Its budget swelled to $22 million; its staff peaked at 143. 

While it’s sometimes said that the future impact of a technology is beyond anyone’s imagination, several findings proved prescient. A 1982 report on electronic funds transfer, or EFT, predicted that financial transactions would increasingly be carried out electronically (an obvious challenge to paper currency and hard-copy checks). Another predicted that email, or what was then termed “electronic message systems,” would disrupt snail mail and the bottom line of the US Postal Service. 

In vetting the digital record-keeping that provides the basis for routine background checks, the office commissioned a study that produced a statistic still cited today, suggesting that only about a quarter of the records sent to the FBI were “complete, accurate, and unambiguous.” It was an indicator of a growing issue: computational systems that, despite seeming automated, are not free of human bias and error. 

Many of the OTA’s reports focused on specific events or technologies. One looked at Love Canal, the upstate New York neighborhood polluted by hazardous waste (a disaster, the report said, that had not yet been remediated by the Environmental Protection Agency’s Superfund cleanup program); another studied the Boston Elbow, a cybernetic limb (the verdict: decidedly mixed). The office examined the feasibility of a water pipeline connecting Alaska to California, the health effects of the Kuwait oil fires, and the news media’s use of satellite imagery. It also took on issues we grapple with today—evaluating automatic record checks for people buying guns, scrutinizing the compensation for injuries allegedly caused by vaccines, and pondering whether we should explore Mars. 

The OTA made its biggest splash in 1984, when it published a background report criticizing the Strategic Defense Initiative (commonly known as “Star Wars”), a pet project of the Reagan administration that involved several exotic missile defense systems. Its lead author was the MIT physicist Ashton Carter, later secretary of defense in the second Obama administration. And the report concluded that a “perfect or near-perfect” system to defend against nuclear weapons was basically beyond the realm of the plausible; the possibility of deployment was “so remote that it should not serve as the basis of public expectation or national policy.” 

The report generated lots of clicks, so to speak, especially after the administration claimed that the OTA had divulged state secrets. These charges did not hold up and Star Wars never materialized, although there have been recent efforts to beef up the military’s offensive capacity in space. But for the work of an advisory body that did not play politics, the report made a big political hubbub. By some accounts, its subsequent assessments became so neutral that the office risked receding to the point of invisibility.

From a purely pragmatic point of view, the OTA wrote to be understood. A dozen reports from the early ’90s received “Blue Pencil Awards,” given by the National Association of Government Communicators for “superior government communication products and those who produce them.” None are copyrighted. All were freely reproduced and distributed, both in print and electronically. The entire archive is stored on CD-ROM, and digitized copies are still freely available for download on a website maintained by Princeton University, like an earnest oasis of competence in the cloistered world of federal documents. 

Assessments versus accountability

Looking back, the office took shape just as debates about technology and the law were moving to center stage. 

While the gravest of dangers may have changed in form and in scope, the central problem remains: Laws and lawmakers cannot keep up with rapid technological advances. Policymakers often face a choice between regulating with insufficient facts and doing nothing. 

In 2018, Adam Kinzinger, then a Republican congressman from Illinois, confessed to a panel on quantum computing: “I can understand about 50% of the things you say.” To some, his admission underscored a broader tech illiteracy afflicting those in power. But other commentators argued that members of Congress should not be expected to know it all—all the more reason to restaff an office like the OTA.

A motley chorus of voices has clamored for an OTA 2.0 over the years. One doctor wrote that the office could help address the “discordance between the amount of money spent and the actual level of health.” Tech fellows have said bringing it back could help Congress understand machine learning and AI. Hillary Clinton, as a Democratic presidential hopeful, floated the possibility of resurrecting the OTA in 2017. 

But Meg Leta Jones, a law scholar at Georgetown University, argues that assessing new technologies is the least of our problems. The kind of work the OTA did is now done by other agencies, such as the FTC, FCC, and National Telecommunications and Information Administration, she says: “The energy I would like to put into the administrative state is not on assessments, but it’s on actual accountability and enforcement.”

She sees the existing framework as built for the industrial age, not a digital one, and is among those calling for a more ambitious overhaul. There seems to be little political appetite for the creation of new agencies anyway. That said, Jones adds, “I wouldn’t be mad if they remade the OTA.” 

No one can know whether or how future administrations will address AI, Mars colonization, the safety of vaccines, or, for that matter, any other emerging technology that the OTA investigated in an earlier era. But if the new administration makes good on plans to deregulate many sectors, it’s worth noting some historic echoes. In 1995, when conservative politicians defunded the OTA, they did so in the name of efficiency. Critics of that move contend that the office probably saved the government money and argue that the purported cost savings associated with its elimination were largely symbolic. 

Jathan Sadowski, a research fellow at Monash University in Melbourne, Australia, who has written about the OTA’s history, says the conditions that led to its demise have only gotten more partisan, more politicized. This makes it difficult to envision a place for the agency today, he says—“There’s no room for the kind of technocratic naïveté that would see authoritative scientific advice cutting through the noise of politics.”

Congress purposely cut off its scientific advisory arm as part of a larger shake-up led by Newt Gingrich, then the House Speaker, whose pugilistic brand of populist conservatism promised “drain the swamp”–type reforms and launched what critics called a “war on science.” As a rationale for why the office was defunded, he said, “We constantly found scientists who thought what they were saying was not correct.” 

Once again, Congress smiled and scientists winced. Only this time it was because politicians had pulled the plug. 

Peter Andrey Smith, a freelance reporter, has contributed to Undark, the New Yorker, the New York Times Magazine, and WNYC’s Radiolab.

Inside the race to archive the US government’s websites

Over the past three weeks, the new US presidential administration has taken down thousands of government web pages related to public health, environmental justice, and scientific research. The mass takedowns stem from the new administration’s push to remove government information related to diversity and “gender ideology,” as well as scrutiny of various government agencies’ practices. 

USAID’s website is down. So are sites related to it, like childreninadversity.gov, as well as thousands of pages from the Census Bureau, the Centers for Disease Control and Prevention, and the Office of Justice Programs.

“We’ve never seen anything like this,” says David Kaye, professor of law at the University of California, Irvine, and the former UN Special Rapporteur for freedom of opinion and expression. “I don’t think any of us know exactly what is happening. What we can see is government websites coming down, databases of essential public interest. The entirety of the USAID website.”

But as government web pages go dark, a collection of organizations are trying to archive as much data and information as possible before it’s gone for good. The hope is to keep a record of what has been lost so that scientists and historians can use it in the future.

Data archiving is generally considered to be nonpartisan, but the recent actions of the administration have spurred some in the preservation community to stand up. 

“I consider the actions of the current administration an assault on the entire scientific enterprise,” says Margaret Hedstrom, professor emerita of information at the University of Michigan.

Various organizations are trying to scrounge up as much data as possible. One of the largest projects is the End of Term Web Archive, a nonpartisan coalition of many organizations that aims to make a copy of all government data at the end of each presidential term. The EoT Archive allows individuals to nominate specific websites or data sets for preservation.

“All we can do is collect what has been published and archive it and make sure it’s publicly accessible for the future,” says James Jacobs, US government information librarian at Stanford University, who is one of the people running the EoT Archive. 

Other organizations are taking a specific angle on data collection. For example, the Open Environmental Data Project (OEDP) is trying to capture data related to climate science and environmental justice. “We’re trying to track what’s getting taken down,” says Katie Hoeberling, director of policy initiatives at OEDP. “I can’t say with certainty exactly how much of what used to be up is still up, but we’re seeing, especially in the last couple weeks, an accelerating rate of data getting taken down.” 

In addition to tracking what’s happening, OEDP is actively backing up relevant data. It actually began this process in November, to capture the data at the end of former president Biden’s term. But efforts have ramped up in the last couple weeks. “Things were a lot calmer prior to the inauguration,” says Cathy Richards, a technologist at OEDP. “It was the second day of the new administration that the first platform went down. At that moment, everyone realized, ‘Oh, no—we have to keep doing this, and we have to keep working our way down this list of data sets.’”

This kind of work is crucial because the US government holds invaluable international and national data relating to climate. “These are irreplaceable repositories of important climate information,” says Lauren Kurtz, executive director of the Climate Science Legal Defense Fund. “So fiddling with them or deleting them means the irreplaceable loss of critical information. It’s really quite tragic.”

Like the OEDP, the Catalyst Cooperative is trying to make sure data related to climate and energy is stored and accessible for researchers. Both are part of the Public Environmental Data Partners, a collective of organizations dedicated to preserving federal environmental data. “We have tried to identify data sets that we know our communities make use of to make decisions about what electricity we should procure or to make decisions about resiliency in our infrastructure planning,” says Christina Gosnell, cofounder and president of Catalyst. 

Archiving can be a difficult task; there is no one easy way to store all the US government’s data. “Various federal agencies and departments handle data preservation and archiving in a myriad of ways,” says Gosnell. There’s also no one who has a complete list of all the government websites in existence. 

This hodgepodge of data means that in addition to using web crawlers, tools that capture snapshots of websites and data, archivists often have to scrape data manually as well. Sometimes a data set sits behind a login page or a captcha that blocks scraper tools. Web scrapers can also miss key features of a site: pages often link to other pieces of information that aren’t captured in a scrape, or a scrape may simply fail because of how a website is structured. Having a person in the loop to double-check the scraper’s work, or to capture data by hand, is often the only way to ensure that the information is properly collected.
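
To make the crawling step concrete, here is a minimal sketch in Python of a single snapshot-and-inventory pass, using the requests and BeautifulSoup libraries. The URL is a placeholder, and a real archiving pipeline would add much more: politeness delays, robots.txt handling, retries, and rendering for JavaScript-heavy pages.

```python
# Minimal illustration of one crawl step: fetch a page, save a timestamped
# snapshot, and list outbound links so a human can check what the scrape missed.
# The URL below is a placeholder; real archiving pipelines add politeness delays,
# robots.txt handling, retries, and rendering for JavaScript-heavy pages.
import pathlib
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def snapshot(url: str, out_dir: str = "snapshots") -> list[str]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    # Save the raw HTML with a timestamp so later changes can be compared.
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    (out / f"page-{stamp}.html").write_text(resp.text, encoding="utf-8")

    # Collect outbound links for manual review -- the things a crawl can miss.
    soup = BeautifulSoup(resp.text, "html.parser")
    return [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]


if __name__ == "__main__":
    for link in snapshot("https://example.gov/some-dataset-page"):  # placeholder URL
        print(link)
```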

And there are questions about whether scraping the data will really be enough. Restoring websites and complex data sets is often not a simple process. “It becomes extraordinarily difficult and costly to attempt to rescue and salvage the data,” says Hedstrom. “It is like draining a body of blood and expecting the body to continue to function. The repairs and attempts to recover are sometimes insurmountable where we need continuous readings of data.”

“All of this data archiving work is a temporary Band-Aid,” says Gosnell. “If data sets are removed and are no longer updated, our archived data will become increasingly stale and thus ineffective at informing decisions over time.” 

These effects may be long-lasting. “You won’t see the impact of that until 10 years from now, when you notice that there’s a gap of four years of data,” says Jacobs. 

Many digital archivists stress the importance of understanding our past. “We can all think about our own family photos that have been passed down to us and how important those different documents are,” says Trevor Owens, chief research officer at the American Institute of Physics and former director of digital services at the Library of Congress. “That chain of connection to the past is really important.”

“It’s our library; it’s our history,” says Richards. “This data is funded by taxpayers, so we definitely don’t want all that knowledge to be lost when we can keep it, store it, potentially do something with it and continue to learn from it.”

Three reasons Meta will struggle with community fact-checking

Earlier this month, Mark Zuckerberg announced that Meta would cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company’s platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta’s platforms are significantly more popular—Facebook alone is used by 70% of Americans, according to Pew).  

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed. For example, researchers have found that participants are more likely to challenge content they disagree with politically and that flagging content as false does not reduce engagement, but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed: 

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner—or at all. Consider how a post with a picture of a death cap mushroom and the caption “Tasty” might be handled under Community Notes–style moderation. If an expert in mycology doesn’t see the post, or sees it only after it’s been widely shared, it may not get flagged as “Poisonous, do not eat”—at least not until it’s too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely). 

Crucially, X’s Community Notes aren’t visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won’t show up. X does not make a note visible until there’s agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it’s not foolproof. It still relies on people’s opinions about a note and not on actual facts. Often what’s needed is expertise.
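
The gist of that visibility rule can be shown with a toy example. The Python sketch below is not X’s actual scoring system, which relies on a more sophisticated statistical model; it only illustrates the core idea that “helpful” votes count when they come from raters who have disagreed with each other on earlier notes. The names and ratings are invented.

```python
# Toy illustration of "bridging" agreement: a note becomes visible only if it is
# rated helpful by at least one pair of raters who disagreed on earlier notes.
# This is a simplification for illustration, not X's production scoring model.
from itertools import combinations

# Hypothetical rating history: note_id -> {rater: "helpful" | "not_helpful"}
history = {
    "note_1": {"ana": "helpful", "bo": "not_helpful", "cy": "helpful"},
    "note_2": {"ana": "not_helpful", "bo": "helpful", "cy": "not_helpful"},
}


def disagreed_before(a: str, b: str) -> bool:
    """True if raters a and b rated some earlier note differently."""
    return any(
        a in votes and b in votes and votes[a] != votes[b]
        for votes in history.values()
    )


def note_is_visible(new_votes: dict[str, str]) -> bool:
    helpful_raters = [r for r, v in new_votes.items() if v == "helpful"]
    # Agreement must bridge a past disagreement; raw vote counts alone don't decide.
    return any(disagreed_before(a, b) for a, b in combinations(helpful_raters, 2))


# ana and bo disagreed on both earlier notes, so their joint "helpful" votes count.
print(note_is_visible({"ana": "helpful", "bo": "helpful", "dee": "not_helpful"}))  # True
# ana and cy have always agreed, so their votes alone don't make the note visible.
print(note_is_visible({"ana": "helpful", "cy": "helpful"}))  # False
```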

I moderate a community on Reddit called r/AskHistorians. It’s a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time. Sometimes these are straightforward errors. But sometimes there is hateful content that takes experts to recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people—probably with very different voting patterns and very different opinions on a lot of topics—not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren’t experts in history will upvote outdated, truthy-sounding answers that aren’t actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow. 

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes–style program to work, it should make sure that the people with the knowledge to make assessments see the posts and that expertise is accounted for in voting, especially when there’s a misalignment between common understanding and expert knowledge. 

2. It won’t work without well-supported volunteers  

Meta’s paid content moderators review the worst of the worst—including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe. 

Community moderators don’t have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse—something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team. If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators’ morale high(ish). 

It’s unclear how Meta’s new moderation system will be structured. If volunteers choose what content they flag, will that replicate X’s problem, where partisanship affects which posts are flagged and how? It’s also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta—the company that is currently being sued for damaging the mental health of its paid content moderators—provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn’t unduly influence the notes).    

3. It can’t work without protections and guardrails 

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can’t do it all on their own. Moderation isn’t just about making decisions on what’s “true” or “false.” It’s also about identifying and responding to other kinds of harmful content. Zuckerberg’s decision is coupled with other changes to Meta’s community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do it when that ecosystem gets poisoned by toxic content. 

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators. While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with. 

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can’t do it all on their own. It’s only one tool in the box. 

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn’t what this decision aims to do. It’s a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better. 

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.

There can be no winners in a US-China AI arms race

The United States and China are entangled in what many have dubbed an “AI arms race.” 

In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of “beating China” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.

But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs.    

The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire.

Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate. 

When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. 

Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.

It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. 

Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.

Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration, this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and redirection of US military resources from Ukraine toward China.

Fortunately, a glimmer of hope for a more responsible approach to AI collaboration appeared on January 17, when Donald Trump posted that he had restarted direct dialogue with Chairman Xi Jinping on various areas of collaboration and said that, given their past cooperation, the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which has put Trump at odds with sharp China critics in his own administration and in Congress, will offer a preview of whether his efforts to put US-China relations on a less confrontational trajectory can succeed.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.

It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. 

Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. 

The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.
2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors (through misinformation campaigns, attacks on telecom, power, or financial systems, or other cyberattacks with the potential to destabilize society) is the biggest existential threat to the world today. Dramatically increasing funding for, and international cooperation in, detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring much more value to the world, and a more peaceful outcome, than the Manhattan Project for AI being promoted by many in Washington today.
5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI. 

Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together. 

The opportunity to harness AI for the common good is a chance the world cannot afford to miss.


Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.

Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.

We need to protect the protocol that runs Bluesky

Last week, when Mark Zuckerberg announced that Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It’s just the latest example of a billionaire flip-flop affecting our social lives on the internet. 

After January 6, 2021, Zuckerberg bragged to Congress about Facebook’s “industry-leading fact-checking program” and banned Donald Trump from the platform. But just two years later, he welcomed Trump back. And last year Zuckerberg privately reassured the conservative congressman Jim Jordan that Meta would no longer demote questionable content while it was being fact-checked. 

Now, not only is Meta ending fact-checking completely; it is loosening rules around hate speech, allowing horrendous personal attacks on migrants and trans people, for example, on its platforms. 

And Zuckerberg isn’t the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as “the bedrock of a functioning democracy,” has suspended journalists, restored tens of thousands of banned users (including white nationalists), brought back political advertising, and weakened verification and harassment policies. 

Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns.

And this has led to a constantly shifting digital environment in which people can lose their communication pathways and livelihoods in a second, with no recourse, as opaque rules change. 

The internet doesn’t need to be like this. As luck would have it, a new way is emerging just in time. 

If you’ve heard of Bluesky, you’ve probably heard of it as a clone of Twitter where liberals can take refuge. But under the hood it’s structured fundamentally differently—in a way that could point us to a healthier internet for everyone, regardless of politics or identity. 

Just like email, Bluesky sits on top of an open protocol, in this case known as the AT Protocol. In practice, that means that anyone can build on it. Just as you wouldn’t need anyone’s permission to start a newsletter company built on email, people are starting to share remixed versions of their social media feeds, built on Bluesky. This sounds like a small thing, but think about all the harms enabled by social media companies’ algorithms in the last decade: insurrection, radicalization, self-harm, bullying. Bluesky enables users to collaborate on verification and moderation by sharing block lists and labels. Letting people shape their own experience of social media is nothing short of revolutionary. 
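
As a concrete illustration of what building on the protocol looks like, here is a small Python sketch that reads a public Bluesky feed over the AT Protocol’s HTTP (XRPC) interface. The endpoint (public.api.bsky.app), the method name (app.bsky.feed.getAuthorFeed), and the response shape reflect the protocol as of this writing and should be treated as assumptions; the handle is just an example.

```python
# A minimal sketch of building on the AT Protocol directly over HTTP (XRPC).
# Assumptions (not taken from the article): the public AppView endpoint
# public.api.bsky.app and the app.bsky.feed.getAuthorFeed method, both current
# as of this writing; the handle below is only an example.
import requests

APPVIEW = "https://public.api.bsky.app/xrpc"


def recent_posts(handle: str, limit: int = 5) -> list[str]:
    resp = requests.get(
        f"{APPVIEW}/app.bsky.feed.getAuthorFeed",
        params={"actor": handle, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    # Each feed item wraps a post record; the post text lives in record["text"].
    return [item["post"]["record"].get("text", "") for item in resp.json()["feed"]]


if __name__ == "__main__":
    for text in recent_posts("bsky.app"):  # example handle
        print(text)
```

Nothing in the request is specific to Bluesky’s own apps, which is what lets third parties build alternative feeds and moderation tools on the same underlying data.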

And importantly, if you decide that you don’t agree with Bluesky’s design and moderation decisions, you can build something else on the same infrastructure and use that instead. This is fundamentally different from the dominant, centralized social media that has prevailed until now.

At the core of Bluesky’s philosophy is the idea that instead of being centralized in the hands of one person or institution, social media governance should obey the principle of subsidiarity. The Nobel Prize–winning economist Elinor Ostrom found, through studying grassroots solutions to local environmental problems around the world, that some problems are best solved locally, while others are best solved at a higher level. 

In terms of content moderation, posts related to child sexual abuse or terrorism are best handled by professionals trained to help keep millions or billions safe. But a lot of decisions about speech can be made in each community, or even user by user as people assemble Bluesky block lists. 

So all the right elements are currently in place at Bluesky to usher in this new architecture for social media: independent ownership, newfound popularity, a stark contrast with other dominant platforms, and right-minded leadership. But challenges remain, and we can’t count on Bluesky to do this right without support. 

Critics have pointed out that Bluesky has yet to turn a profit and is currently running on venture capital, the same corporate structure that brought us Facebook, Twitter, and other social media companies. As of now, there’s no option to exit Bluesky and take your data and network with you, because there are no other servers that run the AT Protocol. Bluesky CEO Jay Graber deserves credit for her stewardship so far, and for attempting to avoid the dangers of advertising incentives. But the process by which capitalism degrades tech products is so predictable that Cory Doctorow coined a now-popular term for it: enshittification.

That’s why we need to act now to secure the foundation of this digital future and make it enshittification-proof. This week, prominent technologists started a new project, which we at New_ Public are supporting, called Free Our Feeds. There are three parts: First, Free Our Feeds wants to create a nonprofit foundation to govern and protect the AT Protocol, outside of Bluesky the company. We also need to build redundant servers so all users can leave with their data or build anything they want—regardless of policies set by Bluesky. Finally, we need to spur the development of a whole ecosystem built on this tech with seed money and expertise. 

It’s worth noting that this is not a hostile takeover: Bluesky and Graber recognize the importance of this effort and have signaled their approval. But the point is, it can’t rely on them. To free us from fickle billionaires, some of the power has to reside outside Bluesky, Inc. 

If we get this right, so much is possible. Not too long ago, the internet was full of builders and people working together: the open web. Email. Podcasts. Wikipedia is one of the best examples—a collaborative project to create one of the web’s best free, public resources. And the reason we still have it today is the infrastructure built up around it: The nonprofit Wikimedia Foundation protects the project and insulates it from the pressures of capitalism. When’s the last time we collectively built anything as good?

We can shift the balance of power and reclaim our social lives from these companies and their billionaires. This is an opportunity to bring much more independence, innovation, and local control to our online conversations. We can finally build the “Wikipedia of social media,” or whatever we want. But we need to act, because the future of the internet can’t depend on whether one of the richest men on Earth wakes up on the wrong side of the bed. 

Eli Pariser is the author of The Filter Bubble and co-director of New_ Public, a nonprofit R&D lab that’s working to reimagine social media. 

Deepti Doshi is a co-director of New_ Public and was a director at Meta.

    A New York legislator wants to pick up the pieces of the dead California AI bill

    The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

    Assemblymember Alex Bores hopes his bill, currently an unpublished draft—subject to change—that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law.

    SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

    However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

    Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

    Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

    The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

    Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring intent, recklessness, or gross negligence.

    The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

    The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

    A different flavor of bill

    The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

    The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

    The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

    The RAISE Act sidesteps that fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

    Focusing on the wrong issues?

    Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

    Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

    “But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

    Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

    The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 10^26 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 10^25 FLOPs. 
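
    To make that coverage test concrete, here is a rough sketch of the two-part check as described above. The thresholds (10^26 FLOPs and $100 million in training costs) come from the draft; the function and its name are hypothetical, written only for illustration.

        COMPUTE_THRESHOLD_FLOPS = 1e26    # training-compute threshold in the draft
        COST_THRESHOLD_USD = 100_000_000  # training-cost threshold in the draft

        # Hypothetical illustration of the draft's "covered model" test:
        # both the compute and the cost thresholds must be exceeded.
        def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
            return (training_flops > COMPUTE_THRESHOLD_FLOPS
                    and training_cost_usd > COST_THRESHOLD_USD)

        # GPT-4's training is estimated at roughly 1e25 FLOPs, an order of
        # magnitude below the compute threshold, so it would not be covered.
        print(is_covered_model(1e25, 100_000_000))  # False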

    This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

    Early days

    The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

    When asked about the idea of mandated safety plans for AI companies, assemblymember Edward Ra, a Republican who hasn’t yet seen a draft of the new bill, said: “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.” 

    Ra and Bores co-chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

    Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

    He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

    And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

    Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

    “There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

    Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”

    What’s next for our privacy?

    MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

    Every day, we are tracked hundreds or even thousands of times across the digital world. Cookies and web trackers capture every website link that we click, while code installed in mobile apps tracks every physical location that our devices—and, by extension, we—have visited. All of this is collected, packaged together with other details (compiled from public records, supermarket member programs, utility companies, and more), and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent. 

    A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. While the latest iteration of such a bill, the American Privacy Rights Act of 2024, gained more momentum than previously proposed laws, it became so watered down that it lost support from both Republicans and Democrats before it even came to a vote. 

    There have been some privacy wins in the form of limits on what data brokers—third-party companies that buy and sell consumers’ personal information for targeted advertisements, messaging, and other purposes—can do with geolocation data. 

    These are still small steps, though—and they are happening as increasingly pervasive and powerful technologies collect more data than ever. And at the same time, Washington is preparing for a new presidential administration that has attacked the press and other critics, promised to target immigrants for mass deportation, threatened to seek retribution against perceived enemies, and supported restrictive state abortion laws. This is not even to mention the increased collection of our biometric data, especially for facial recognition, and the normalization of its use in all kinds of ways. In this light, it’s no stretch to say our personal data has arguably never been more vulnerable, and the imperative for privacy has never felt more urgent. 

    So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about (some of) what’s on their mind regarding how our digital data might be traded or protected moving forward. 

    Reining in a problematic industry

    In early December, the Federal Trade Commission announced separate settlement agreements with the data brokers Mobilewalla and Gravy Analytics (and its subsidiary Venntel). Finding that the companies had tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent, the FTC banned the companies from selling such data except in specific circumstances. This follows something of a busy year in regulation of data brokers, including multiple FTC enforcement actions against other companies for similar use and sale of geolocation data, as well as a proposed rule from the Justice Department that would prohibit the sale of bulk data to foreign entities. 

    And on the same day that the FTC announced these settlements in December, the Consumer Financial Protection Bureau proposed a new rule that would designate data brokers as consumer reporting agencies, which would trigger stringent reporting requirements and consumer privacy protections. The rule would prohibit the collection and sharing of people’s sensitive information, such as their salaries and Social Security numbers, without “legitimate purposes.” While the rule will still need to undergo a 90-day public comment period, and it’s unclear whether it will move forward under the Trump administration, if it’s finalized it has the power to fundamentally limit how data brokers do business.

    Right now, there just aren’t many limits on how these companies operate—nor, for that matter, clear information on how many data brokerages even exist. Industry watchers estimate there may be 4,000 to 5,000 data brokers around the world, many of which we’ve never heard of—and whose names constantly shift. In California alone, the state’s 2024 Data Broker Registry lists 527 such businesses that have voluntarily registered there, nearly 90 of which also self-reported that they collect geolocation data. 

    All this data is widely available for purchase by anyone who will pay. Marketers buy data to create highly targeted advertisements, and banks and insurance companies do the same to verify identity, prevent fraud, and conduct risk assessments. Law enforcement buys geolocation data to track people’s whereabouts without getting traditional search warrants. Foreign entities can also currently buy sensitive information on members of the military and other government officials. And on people-finder websites, basically anyone can pay for anyone else’s contact details and personal history.  

    Data brokers and their clients defend these transactions by saying that most of this data is anonymized—though it’s questionable whether that can truly be done in the case of geolocation data. Besides, anonymous data can be easily reidentified, especially when it’s combined with other personal information. 

    Digital-rights advocates have spent years sounding the alarm on this secretive industry, especially the ways in which it can harm already marginalized communities, though various types of data collection have sparked consternation across the political spectrum. Representative Cathy McMorris Rodgers, the Republican chair of the House Energy and Commerce Committee, for example, was concerned about how the Centers for Disease Control and Prevention bought location data to evaluate the effectiveness of pandemic lockdowns. Then a study from last year showed how easy (and cheap) it was to buy sensitive data about members of the US military; Senator Elizabeth Warren, a Democrat, called out the national security risks of data brokers in a statement to MIT Technology Review, and Senator John Cornyn, a Republican, later said he was “shocked” when he read about the practice in our story. 

    But it was the 2022 Supreme Court decision ending the constitutional guarantee of legal abortion that spurred much of the federal action last year. Shortly after the Dobbs ruling, President Biden issued an executive order to protect access to reproductive health care; it included instructions for the FTC to take steps preventing information about visits to doctor’s offices or abortion clinics from being sold to law enforcement agencies or state prosecutors.

    The new enforcers

    With Donald Trump taking office in January, and Republicans taking control of both houses of Congress, the fate of the CFPB’s proposed rule—and the CFPB itself—is uncertain. Republicans, the people behind Project 2025, and Elon Musk (who will lead the newly created advisory group known as the Department of Government Efficiency) have long been interested in seeing the bureau “deleted,” as Musk put it on X. That would take an act of Congress, making it unlikely, but there are other ways that the administration could severely curtail its powers. Trump is likely to fire the current director and install a Republican who could rescind existing CFPB rules and stop any proposed rules from moving forward. 

    Meanwhile, the FTC’s enforcement actions are only as good as the enforcers. FTC decisions do not set legal precedent in quite the same way that court cases do, says Ben Winters, a former Department of Justice official and the director of AI and privacy at the Consumer Federation of America, a network of organizations and agencies focused on consumer protection. Instead, they “require consistent [and] additional enforcement to make the whole industry scared of not having an FTC enforcement action against them.” (It’s also worth noting that these FTC settlements are specifically focused on geolocation data, which is just one of the many types of sensitive data that we regularly give up in order to participate in the digital world.)

    Looking ahead, Tiffany Li, a professor at the University of San Francisco School of Law who focuses on AI and privacy law, is worried about “a defanged FTC” that she says would be “less aggressive in taking action against companies.” 

    Lina Khan, the current FTC chair, has been the leader of privacy protection action in the US, notes Li, and she’ll soon be leaving. Andrew Ferguson, Trump’s recently named pick to be the next FTC chair, has come out in strong opposition to data brokers: “This type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs,” he wrote in a statement on the Mobilewalla decision, indicating that he is likely to continue action against them. (Ferguson has been serving as a commissioner on the FTC since April 2024.) On the other hand, he has spoken out against using FTC actions as an alternative to privacy legislation passed by Congress. And, of course, this brings us right back around to that other major roadblock: Congress has so far failed to pass such laws—and it’s unclear if the next Congress will either. 

    Movement in the states

    Without federal legislative action, many US states are taking privacy matters into their own hands. 

    In 2025, eight new state privacy laws will take effect, making a total of 25 around the country. A number of other states—like Vermont and Massachusetts—are considering passing their own privacy bills next year, and such laws could, in theory, force national legislation, says Woodrow Hartzog, a technology law scholar at Boston University School of Law. “Right now, the statutes are all similar enough that the compliance cost is perhaps expensive but manageable,” he explains. But if one state passed a law that was different enough from the others, a national law could be the only way to resolve the conflict. Additionally, four states—California, Texas, Vermont, and Oregon—already have specific laws regulating data brokers, including the requirement that they register with the state. 

    Along with new laws, says Justin Brookman, the director of technology policy at Consumer Reports, comes the possibility that “we can put some more teeth on these laws.” 

    Brookman points to Texas, where some of the most aggressive enforcement action at the state level has taken place under its Republican attorney general, Ken Paxton. Even before the state’s new consumer privacy bill went into effect in July, Paxton announced the creation of a special task force focused on enforcing the state’s privacy laws. He has since targeted a number of data brokers—including National Public Data, which exposed millions of sensitive customer records in a data breach in August, as well as companies that sell to them, like Sirius XM. 

    At the same time, though, Paxton has moved to enforce the state’s strict abortion laws in ways that threaten individual privacy. In December, he sued a New York doctor for sending abortion pills to a Texas woman through the mail. While the doctor is theoretically protected by New York’s shield laws, which provide a safeguard from out-of-state prosecution, Paxton’s aggressive action makes it even more crucial that states enshrine data privacy protections into their laws, says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, an advocacy group. “There is an urgent need for states,” he says, “to lock down our residents’ data, barring companies from collecting and sharing information in ways that can be weaponized against them by out-of-state prosecutors.” 

    Data collection in the name of “security”

    While privacy has become a bipartisan issue, Republicans, in particular, are interested in “addressing data brokers in the context of national security,” such as protecting the data of military members or other government officials, says Winters. But in his view, it’s the effects on reproductive rights and immigrants that are potentially the “most dangerous” threats to privacy. 

    Indeed, data brokers (including Venntel, the Gravy Analytics subsidiary named in the recent FTC settlement) have sold cell-phone data to Immigration and Customs Enforcement, as well as to Customs and Border Protection. That data has then been used to track individuals for deportation proceedings—allowing the agencies to bypass local and state sanctuary laws that ban local law enforcement from sharing information for immigration enforcement. 

    “The more data that corporations collect, the more data that’s available to governments for surveillance,” warns Ashley Gorski, a senior attorney who works on national security and privacy at the American Civil Liberties Union.

    The ACLU is among a number of organizations that have been pushing for the passage of another federal law related to privacy: the Fourth Amendment Is Not For Sale Act. It would close the so-called “data-broker loophole” that allows law enforcement and intelligence agencies to buy personal information from data brokers without a search warrant. The bill would “dramatically limit the ability of the government to buy Americans’ private data,” Gorski says. It was first introduced in 2021 and passed the House in April 2024, with the support of 123 Republicans and 93 Democrats, before stalling in the Senate. 

    While Gorski is hopeful that the bill will move forward in the next Congress, others are less sanguine about these prospects—and alarmed about other ways that the incoming administration might “co-opt private systems for surveillance purposes,” as Hartzog puts it. So much of our personal information that is “collected for one purpose,” he says, could “easily be used by the government … to track us.” 

    This is especially concerning, adds Winters, given that the next administration has been “very explicit” about wanting to use every tool at its disposal to carry out policies like mass deportations and to exact revenge on perceived enemies. And one possible change, he says, is as simple as loosening the government’s procurement processes to make them more open to emerging technologies, which may have fewer privacy protections. “Right now, it’s annoying to procure anything as a federal agency,” he says, but he expects a more “fast and loose use of commercial tools.” 

    “That’s something we’ve [already] seen a lot,” he adds, pointing to “federal, state, and local agencies using the Clearviews of the world”—a reference to the controversial facial recognition company. 

    The AI wild card

    Underlying all of these debates on potential legislation is the fact that technology companies—especially AI companies—continue to require reams and reams of data, including personal data, to train their machine-learning models. And they’re quickly running out of it. 

    This is something of a wild card in any predictions about personal data. Ideally, says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, the shortage would lead to ways for consumers to directly benefit, perhaps financially, from the value of their own data. But it’s more likely that “there will be more industry resistance against some of the proposed comprehensive federal privacy legislation bills,” she says. “Companies benefit from the status quo.” 

    The hunt for more and more data may also push companies to change their own privacy policies, says Whitney Merrill, a former FTC official who works on data privacy at Asana. Speaking in a personal capacity, she says that companies “have felt the squeeze in the tech recession that we’re in, with the high interest rates,” and that under those circumstances, “we’ve seen people turn around, change their policies, and try to monetize their data in an AI world”—even if it’s at the expense of user privacy. She points to the $60-million-per-year deal that Reddit struck last year to license its content to Google to help train the company’s AI. 

    Earlier this year, the FTC warned companies that it would be “unfair and deceptive” to “surreptitiously” change their privacy policies to allow for the use of user data to train AI. But again, whether or not officials follow up on this depends on those in charge. 

    So what will privacy look like in 2025? 

    While the recent FTC settlements and the CFPB’s proposed rule represent important steps forward in privacy protection—at least when it comes to geolocation data—Americans’ personal information still remains widely available and vulnerable. 

    Rebecca Williams, a senior strategist at the ACLU for privacy and data governance, argues that all of us, as individuals and communities, should take it upon ourselves to do more to protect ourselves and “resist … by opting out” of as much data collection as possible. That means checking privacy settings on accounts and apps, and using encrypted messaging services. 

    Cahn, meanwhile, says he’ll “be striving to protect [his] local community, working to enact safeguards to ensure that we live up to our principles and stated commitments.” One example of such safeguards is a proposed New York City ordinance that would ban the sharing of any location data originating from within the city limits. Hartzog says that kind of local activism has already been effective in pushing for city bans on facial recognition. 

    “Privacy rights are at risk, but they’re not gone, and it’s not helpful to take an overly pessimistic look right now,” says Li, the USF law professor. “We definitely still have privacy rights, and the more that we continue to fight for these rights, the more we’re going to be able to protect our rights.”

    Science and technology stories in the age of Trump

    Rather than analyzing the news this week, I thought I’d lift the hood a bit on how we make it. 

    I’ve spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion’s share not just in the Electoral College but also in the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.

    Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump’s stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we’ve covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions on Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, these are squarely in our strike zone and we would be remiss not to explore the policies and their impact in detail. 

    And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it’s helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months that will likely be, well, chaotic. 

    This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise. 

    We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on regarding climate, energy, vaccines, women’s health, IVF, food safety, chips, China, and I’m sure a lot more, all of which people are going to have all sorts of questions about. Let’s start by making a list of questions we have ourselves. Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It’s important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.

    Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others.

    This election will be a transformative moment for society and the world. Trump didn’t just win, he won a mandate. And he’s going to change the country and the global order as a result.  The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let’s leverage our credibility, not squander it. 

    We are not the resistance. We just want to tell the truth. So let’s take a breath, and then go out there and do our jobs.

    I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that’s especially true now. 

    I’m also very interested to hear from our readers: What questions do you have? What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I’m eager to hear from you. 

    If someone forwarded you this edition of The Debrief, you can subscribe here.


    Now read the rest of The Debrief

    The News

    Palmer Luckey, who was ousted from Facebook over his support for the last Trump administration and went into defense contracting, is poised to grow in influence under a second administration. He recently talked to MIT Technology Review about how the Pentagon is using mixed reality.

    • What does Donald Trump’s relationship with Elon Musk mean for the global EV industry?

    • The Biden administration was perceived as hostile to crypto. The industry can likely expect friendlier waters under Trump

    • Some counter-programming: Life-seeking robots could punch through Europa’s icy surface

    • And for one more big take that’s not related to the election: AI vs quantum. AI could solve some of the most interesting scientific problems before big quantum computers become a reality


    The Chat

    Every week I’ll talk to one of MIT Technology Review’s reporters or editors to find out more about what they’ve been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.

    Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger picture point about it paving the way for agents. What does that mean?

    Melissa: Microsoft tried to chip away at Google’s search monopoly with Bing, and that didn’t really work. It’s unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT’s usefulness is limited by the fact that it can’t access the internet and doesn’t have the most up-to-date information. By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.

    Mat: What will agents be able to do?

    Melissa: AI agents can complete complex tasks autonomously and the vision is that they will work as a human assistant would — book your flights, reschedule your meetings, help with research, you name it. But I wouldn’t get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can’t interact with computer interfaces, like clicking on stuff, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we’re still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.

    Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?

    Melissa: It’s really hard to say. We are so used to using online search, and it’s surprisingly hard to change people’s behaviors. Unless agents become super reliable and powerful, I don’t think search is going to go away.

    Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?

    Melissa: LOL


    The Recommendation

    I’m just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level. I’m not really a cries-at-movies (or especially at musicals) kind of person but I was wiping my eyes for much of the second act. So were very many people sitting around me. Anyway. If you’re in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here’s a great example.)

    There are more than 120 AI bills in Congress right now

    More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

    They’re pretty varied. One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in their training.  Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

    The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

    Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

    That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

    Looking at the full list, it can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at what bills are moving along to potentially become law. 

    A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

    Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act, passed as part of the Defense Authorization Act, invested resources in AI research and provided support for public education and workforce training on AI.

    And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. Together, the bills would authorize the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models, expand education on AI, establish public computing resources for AI research, and criminalize the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

    “The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

    The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

    Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

    “It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

    Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

    For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

    “Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

    “It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

    Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

    And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

    But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

    “A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

    “We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

    There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

    The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

    The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

    On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

    The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

    After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

    One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

    “I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

    Although none of the current slate of bills have resulted in laws yet, the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

    “Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

    Update: We added further comments from the Republican spokesperson.