A controversial US surveillance program is up for renewal. Critics are speaking out.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to clue you in on: the renewal of a controversial American surveillance program.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant. 

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad group that transcends parties is calling for reforming the program, out of concern about the vast surveillance it enables. Here is what you need to know.

What do the critics of Section 702 say?

Of particular concern is that while the program is meant to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans including Black Lives Matter activists and a member of Congress.

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Wikimedia, the foundation that runs Wikipedia, also opposes the program in its current form, saying it leaves international open-source projects vulnerable to surveillance. “Wikimedia projects are edited and governed by nearly 300,000 volunteers around the world who share free knowledge and serve billions of readers globally. Under Section 702, every interaction on these projects is currently subject to surveillance by the NSA,” says a spokesperson for the Wikimedia Foundation. “Research shows that online surveillance has a ‘chilling effect’ on Wikipedia users, who will engage in self-censorship to avoid the threat of governmental reprisals for accurately documenting or accessing certain kinds of information.”

And what about the proponents?

The main supporters of the program’s reauthorization are the intelligence agencies themselves, which say it enables them to gather critical information about foreign adversaries and online criminal activities like ransomware and cyberattacks. 

In defense of the provision, FBI director Christopher Wray has also pointed to procedural changes at the bureau in recent years that have reduced the number of Americans being surveilled from 3.4 million in 2021 to 200,000 in 2022. 

The Biden administration has also broadly pushed for the reauthorization of Section 702 without reform.  

“Section 702 is a necessary instrument within the intelligence community, leveraging the United States’ global telecommunication footprint through legal and court-approved means,” says Sabine Neschke, a senior policy analyst at the Bipartisan Policy Center. “Ultimately, Congress must strike a balance between ensuring national security and safeguarding individual rights.”

What would reform look like?

The proposal to reform the program, called the Government Surveillance Reform Act, was announced last week and focuses on narrowing the government’s authority to collect information on US citizens.

It would require warrants to collect Americans’ location data and web browsing or search records under the program, as well as documentation that queries were “reasonably likely to retrieve foreign intelligence information.” In a hearing before the House Committee on Homeland Security on Wednesday, Wray said that a warrant requirement would be a “significant blow” to the program, calling it a “de facto ban.”

Senator Ron Wyden, who cosponsored the reform bill and sits on the Senate Select Committee on Intelligence, has said he won’t vote to renew the program unless some of its powers are curbed. “Congress must have a real debate about reforming warrantless government surveillance of Americans,” Wyden said in a statement to MIT Technology Review. “Therefore, the administration and congressional leaders should listen to the overwhelming bipartisan coalition that supports adopting common-sense protections for Americans’ privacy and extending key national security authorities at the same time.”

The reform bill does not, as some civil society groups had hoped, limit the government’s powers for surveillance of people outside of the US. 

While it’s not yet clear whether these reforms will pass, intelligence agencies have never faced such a broad, bipartisan coalition of opponents. As for what happens next, we’ll have to wait and see. 

What else I’m reading

  • Here’s a great story from the New Yorker about how facial recognition searches can lead police to ignore other pieces of an investigation. 
  • I loved this excerpt of Broken Code, a new book from reporter Jeff Horwitz, who broke the Facebook Files revealed by whistleblower Frances Haugen. It’s a nice insidery look at the company’s AI strategy. 
  • Meta says that age verification requirements, such as those being proposed by child online safety bills, should be up to app stores like Apple’s and Google’s. It’s an interesting stance that the company says would help take the burden off individual websites to comply with the new regulations. 

What I learned this week

Some researchers and technologists have been calling for new and more precise language around artificial intelligence. This week, Google DeepMind released a paper outlining different levels of artificial general intelligence, often referred to as AGI, as my colleague Will Douglas Heaven reports.

“The team outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals),” Will writes. “They note that no level beyond emerging AGI has been achieved.” We’ll certainly be hearing more about what words we should use when referring to AI in the future.

Tackling our biggest problems

For all of history we’ve turned to technology, again and again, to help us solve our hardest problems. Technology gave us warmth and light when it was cold and dark. It helped us pull fish from the sea and crops from the earth so we would not be hungry. It enabled us to cross over the oceans and fly through the skies, shrinking vast distances down to routine travel. It’s given us vaccines and treatments and cures. It has made virtually every fact and all of human knowledge available to us instantly on demand. We can speak to each other in entirely different languages and be understood using nothing more than a slim slab of glass and metal in our pocket.

Sometimes technology can seem like a miracle. Of course, it is nothing more than human achievement. Yet like all things human, our creations can be deeply flawed. As a result, we have also used tech to unleash horrors on ourselves, intentionally and by accident. 

We have employed it to broadcast hateful rhetoric and divisive ideologies. We have fine-tuned our machines to kill each other in ever greater numbers and with ever more efficiency. It is our technology that took carbon out of the ground and put it in the sky. Our technology that poisoned the water and the air, that made deserts out of forests, and that wiped entire species off the planet.

Technology is an engine for problems, for solving them and for creating entirely new ones—and then we perversely turn to even newer technologies to try to solve those. In this issue, we step back from this cycle. We explore big questions and hard problems and ask: What role can—and should—technology play going forward?

Our cover is inspired by Douglas Main’s terrifying story on plastics. There’s an adage that says every piece of plastic ever made still exists. While that isn’t entirely true, as Main vividly describes, it is pretty darn close. We’re not reducing how much is made—precisely the opposite. Reuse is negligible. Recycling isn’t working. Meanwhile, plastic is absolutely everywhere, and in absolutely everything, including our own bodies. What are we going to do about it? 

AI epitomizes the sometimes fraught relationship we have with technology. It has the potential to massively benefit society—and yet it could cause incalculable harm if we get it wrong. As its development races ahead, Grace Huckins has written a powerful, even poetic exploration of AI consciousness. What would it take, and what would it mean, for an AI to become conscious? How would we know? What would we owe it? 

David W. Brown takes on the challenge of spacecraft design and the struggle to make smaller, cheaper missions that can still tell us meaningful new things about the solar system. If we are going to make the most of the resources we devote to space exploration, we’ll have to grapple with the hard limits of physics—and think hard about what we can, and want to, do.

Some of our hardest problems come down to human nature, and our capacity and sometimes outright desire for conflict. Social media and online communications are lousy with trolling, disinformation, harassment, and hate speech. Katie Notopoulos argues that the solution for much of this is to end our fixation with free services and move to smaller, distributed platforms that put more power in users’ hands. 

One hard problem most of us have likely faced is the experience of interacting with government services online. A decade after the famously botched launch of Healthcare.gov, Tate Ryan-Mosley explores why it is still so hard for the government to get tech right. Her reporting takes us to New York City, which has had some manner of success—in part by going with the lowest tech possible. 

And finally, we asked some of the smartest minds out there what they consider the biggest problems that aren’t getting enough attention right now. You’ll find their responses in this issue, and many more online at techreview.com/hardproblems.

Thanks for reading,

Mat Honan


Three things to know about the White House’s executive order on AI

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

The US has set out its most sweeping set of AI rules and guidelines yet in an executive order issued by President Joe Biden today. The order will require more transparency from AI companies about how their models work and will establish a raft of new standards, most notably for labeling AI-generated content. 

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.  

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

Nevertheless, AI experts have hailed the order as an important step forward, especially thanks to its focus on watermarking and standards set by the National Institute of Standards and Technology (NIST). However, others argue that it does not go far enough to protect people against immediate harms inflicted by AI.

Here are the three most important things you need to know about the executive order and the impact it could have. 

What are the new rules around labeling AI-generated content? 

The White House’s executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” according to a fact sheet that the White House shared over the weekend. 

The hope is that labeling the origins of text, audio, and visual content will make it easier for us to know what’s been created using AI online. These sorts of tools are widely proposed as a solution to AI-enabled problems such as deepfakes and disinformation, and in a voluntary pledge with the White House announced in August, leading AI companies such as Google and OpenAI pledged to develop such technologies.

The trouble is that technologies such as watermarks are still very much works in progress. There are currently no fully reliable ways to label text or to determine whether a piece of content was machine generated, and AI detection tools remain easy to fool.

The executive order also falls short of requiring industry players or government agencies to use these technologies.

On a call with reporters on Sunday, a White House spokesperson responded to a question from MIT Technology Review about whether any requirements are anticipated for the future, saying, “I can imagine, honestly, a version of a call like this in some number of years from now and there’ll be a cryptographic signature attached to it that you know you’re actually speaking to [the White House press team] and not an AI version.” This executive order intends to “facilitate technological development that needs to take place before we can get to that point.”

The White House says it plans to push forward the development and use of these technologies with the Coalition for Content Provenance and Authenticity, known as C2PA. As we’ve previously reported, the initiative and its affiliated open-source community have been growing rapidly in recent months as companies rush to label AI-generated content. The collective includes some major companies like Adobe, Intel, and Microsoft and has devised a new internet protocol that uses cryptographic techniques to encode information about the origins of a piece of content.
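The core idea behind that kind of provenance protocol is simple: hash the content, record claims about where it came from in a manifest, and cryptographically sign the manifest so tampering is detectable. The real C2PA spec uses X.509 certificate chains and public-key signatures; the following is only a rough standard-library sketch of the concept, with an HMAC standing in for a real signature and all names (`sign_manifest`, `"example-generator-v1"`) hypothetical.

```python
import hashlib
import hmac
import json

def sign_manifest(content: bytes, key: bytes, tool: str) -> dict:
    """Toy provenance manifest: hash the content, record its claimed
    origin, and attach an HMAC tag computed over the manifest fields."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": tool,  # e.g. the AI model or camera that produced it
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """True only if the manifest is untampered AND matches the content."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

key = b"shared-secret"          # a real scheme would use an asymmetric key pair
img = b"...image bytes..."
m = sign_manifest(img, key, "example-generator-v1")
assert verify_manifest(img, m, key)            # intact content verifies
assert not verify_manifest(b"edited", m, key)  # altered content fails
```

Because the signature covers the content hash, editing either the media or its claimed origin breaks verification — which is the property that makes such labels hard to strip or forge without detection.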

The coalition does not have a formal relationship with the White House, and it’s unclear what that collaboration would look like. In response to questions, Mounir Ibrahim, the cochair of C2PA’s governmental affairs team, said, “C2PA has been in regular contact with various offices at the NSC [National Security Council] and White House for some time.”

The emphasis on developing watermarking is good, says Emily Bender, a professor of linguistics at the University of Washington. She says she also hopes content labeling systems can be developed for text; current watermarking technologies work best on images and audio. “[The executive order] of course wouldn’t be a requirement to watermark, but even an existence proof of reasonable systems for doing so would be an important step,” Bender says.

Will this executive order have teeth? Is it enforceable? 

While Biden’s executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced. 

The order calls on the National Institute of Standards and Technology to set standards for extensive “red team” testing—meaning tests meant to break the models in order to expose vulnerabilities—before models are launched. NIST has already proved somewhat effective at documenting how accurate or biased AI systems such as facial recognition are: in 2019, a NIST study of over 200 facial recognition systems revealed widespread racial bias in the technology.

However, the executive order does not require that AI companies adhere to NIST standards or testing methods. “Many aspects of the EO still rely on voluntary cooperation by tech companies,” says Bradford, the law professor at Columbia.

The executive order requires all companies developing new AI models whose computational size exceeds a certain threshold to notify the federal government when training the system and then share the results of safety tests in accordance with the Defense Production Act. This law has traditionally been used to intervene in commercial production at times of war or national emergencies such as the covid-19 pandemic, so this is an unusual way to push through regulations. A White House spokesperson says this mandate will be enforceable and will apply to all future commercial AI models in the US, but will likely not apply to AI models that have already been launched. The threshold is set at a point where all major AI models that could pose risks “to national security, national economic security, or national public health and safety” are likely to fall under the order, according to the White House’s fact sheet.

The executive order also calls for federal agencies to develop rules and guidelines for different applications, such as supporting workers’ rights, protecting consumers, ensuring fair competition, and administering government services. These more specific guidelines prioritize privacy and bias protections.

“Throughout, at least, there is the empowering of other agencies, who may be able to address these issues seriously,” says Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face. “Albeit with a much harder and more exhausting battle for some of the people most negatively affected by AI, in order to actually have their rights taken seriously.”

What has the reaction to the order been so far? 

Major tech companies have largely welcomed the executive order. 

Brad Smith, the vice chair and president of Microsoft, hailed it as “another critical step forward in the governance of AI technology.” Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

“It’s great to see the White House investing in AI’s growth by creating a framework for responsible AI practices,” said Adobe’s general counsel and chief trust officer, Dana Rao. 

The White House’s approach remains friendly to Silicon Valley, emphasizing innovation and competition rather than limitation and restriction. The strategy is in line with the policy priorities for AI regulation set forth by Senate Majority Leader Chuck Schumer, and it further crystallizes the lighter touch of the American approach to AI regulation. 

However, some AI researchers say that sort of approach is cause for concern. “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms,” says Mitchell.

Instead of preventing AI harms before deployment—for example, by making tech companies’ data practices better—the White House is using a “whack-a-mole” approach, tackling problems that have already emerged, she adds.  

The highly anticipated executive order on artificial intelligence comes two days before the UK’s AI Safety Summit and attempts to position the US as a global leader on AI policy. 

It will likely have implications outside the US, adds Bradford. It will set the tone for the UK summit and will likely embolden the European Union to finalize its AI Act, as the executive order sends a clear message that the US agrees with many of the EU’s policy goals.

“The executive order is probably the best we can expect from the US government at this time,” says Bradford.

Correction: A previous version of this story had Emily Bender’s title wrong. This has now been corrected. We apologize for any inconvenience.

How to make government technology better


Last week I published a story about government and technology that I spent the better part of this past year reporting, and I think all of you will identify with it. 

Who hasn’t tried to fill in a government form online and run into at least one issue? Or just thought, Hmm, why can’t I do this basic civic activity, like renewing a license, online? Can you even imagine a world where you could submit a digital request to fill a pothole in your neighborhood (and it actually got filled)?

All too often, online experiences with government agencies are painfully inefficient. At the same time, examples of dangerous, eye-roll-inducing techno-solutionist thinking run rampant. This is all the more frustrating given that the world desperately needs big policy action right now—and new technological capacity and data create a lot of opportunity to deliver it. But the US government, at least, just isn’t meeting the moment.

So I spent months trying to answer these questions, seeking to understand why the relationship between technology and government is so broken. 

Well, the answer is far more complicated than is often portrayed. I won’t get into all the reasons here—you can read about what I learned from top government tech people across the country in my story.

In the piece, I decided to focus mostly on New York City because it’s approaching these challenges in ways you almost certainly wouldn’t expect. But one other place kept coming up again and again in my reporting as an example of somewhere doing tech pretty well: Massachusetts, and Boston in particular. 

To understand why, we actually need to back up for a second. Most of the experts I spoke with told me some version of the same story: they said that historically, government employees who are responsible for implementing policies at the ground level are not empowered to shape how citizens actually interact with these policies. For example, an agency responsible for getting people affordable housing doesn’t necessarily have the power to shape how the enrollment process works. In an age of user-centric technology, this boundary can be ruinous. 

“When Google or Apple or whatever builds a product, they have very close, tight channels for getting customer feedback and changing the product,” Santiago Garces, the chief information officer of the city of Boston, told me. “In government, a lot of times the product or the service is legislated. And legislators do get some feedback; there’s public comment and whatnot. But actually, it’s very rare to see the regulation getting updated because of feedback.” 

Boston, though, actually has a long history of prioritizing user experience research and human-centric design when it comes to digital services, finding ways to integrate feedback into policymaking. 

It’s been able to do this, at least in part, because the city follows an organizational approach similar to one that Jen Pahlka, the founder of Code for America and author of the fabulous book Recoding America, told me about: government technologies are run by in-house product managers who are able to make policy decisions. 

“Some of the most successful legislations are the ones that empower the programs and services where you really have the biggest ability to have tighter feedback loops with the constituents,” said Garces.

Garces told me that the city recently hired the first chief product officer in the country and is building a team of product managers and UX designers to work hand in hand with policymakers. The bottom line is that when people who actually implement policy are able to shape technology, we can get much better results. 

Harlan Weber, a former user experience designer fellow for Massachusetts’s IT department, told me about working on the Common Housing Application for Massachusetts Program (CHAMP) several years ago. He noted that they “went out and did research with tons of people in housing authorities and with government workers who’d have to use the thing.” They then used that feedback, he said, to shape the portal that finally let residents apply for housing benefits in a single streamlined online system.   

Boston has “a lot of inbuilt advantages,” said Weber, also the founder of Code for Boston. “And we’ve worked hard to press those advantages.” 

Massachusetts, he points out, is a highly educated, well-resourced state “that mostly believes that government can be part of the solution and not just part of the problem.” It also helps that Boston is home to a lot of tech companies and tech researchers working in close proximity to the center of government. This has allowed the city to build up an internal talent pool. 

Finally, Boston also has an established culture of prioritizing digital services. The mayor’s office created one of the first government innovation labs in the US, and the city was one of the first to have a chief digital officer and fellows from Code for America.

All this said, digital services in Massachusetts are far from perfect (and in fact a recent investigation reveals significant problems with CHAMP and affordable housing). As I found in my reporting, there are simply no silver bullets that can fix the government’s broken relationship with technology. It’s just an incredibly thorny problem (which is why this story is part of our new print issue devoted to hard problems!). But it’s critical that governments urgently work to improve digital services—our democracy depends on it. 

I’ve been thinking a lot about something Pahlka said to me about core government services: “If the American public doesn’t see government deliver, I think it’s less that they get driven toward one party or another, and more that they get driven away from government altogether.”

What else I’m reading

  • This story from the New Yorker about the inaccuracy of social media posts about the violence between Israel and Hamas is a thought-provoking reflection on the future of our information system, especially during times of crisis. 
  • Clearview AI, the facial recognition company that scrapes the internet for photos, does not have to pay a $9 million fine to the UK’s data protection authority, the Information Commissioner’s Office. The company escaped the massive fee on the grounds that the agency doesn’t have jurisdiction over how foreign law enforcement agencies use British citizens’ data. Clearview is facing several of these fines, which pose an “existential threat” to the company, according to this report from the New York Times’ Kashmir Hill. But this ruling is a sign that the company may yet prevail.
  • A 21-year-old computer science student at the University of Nebraska–Lincoln used AI to identify a word in a charred, tightly wrapped, 2,000-year-old scroll from Herculaneum, carbonized in the eruption of Mount Vesuvius. The scroll had been unreadable, but using 3D x-ray scans, the student identified ink patterns and trained AI to make out letters spelling the ancient Greek word for “purple.”

What I learned this week

Google released a policy proposal focused on online safety for kids and teens. It offers several suggestions for legislation, including a risk-based approach for systems to estimate a user’s age and better tools for users to control recommendation algorithms. Perhaps most notably, it recommends a ban on personalized advertising that targets those under 18. Child online safety has been a hot topic in tech policy lately, as I’ve written about, and it’s interesting to get a perspective from Big Tech.

Why New York City is embracing low-tech solutions to hard problems

Every Tuesday, Jessica Ramgoolam heads down to the New Amsterdam branch of the New York City Public Library, sets up a small folding table, and takes a seat with her laptop. She lays out piles of paper flyers, and it’s clear she has information to share, like a fortune teller awaiting a passing seeker. 

Just before 11 a.m., when the library opens, people begin lining up for her assistance. With the aid of her team, she can communicate with people in nearly 20 languages, and her iPhone can help her manage many more.

Though she holds no unique powers of foresight, Ramgoolam represents for many the keys to the future. Sitting behind a bright yellow sign reading “GetCoveredNYC,” she’s there to help people—anyone—enroll in health care. 

Determining what programs you might be eligible for, gathering the bewildering amount of information required for different applications, and navigating the submission process is a headache, even for the most administratively savvy. 

That’s true even though most New Yorkers have already submitted information about their income and employment to the city many times over, and more and more residents get regular updates from and about the city government through websites, phone calls, chatbots, text messages, Twitter, email, Facebook and Instagram, livestreams, TV, and radio—all of which are used to communicate everything from emergency notifications to trash collection schedules. Not to mention the overwhelming volume of information online devoted specifically to the several public health-care plans available. 

But even with those programs and a variety of tax credits, there are still hundreds of thousands of people in the city who do not have health insurance.

It’s a reality of politics that is often overlooked: once a law is passed, it needs to evolve from an idea into a plan with a budget and a staff, and from there it needs to actually reach the lives of millions of people. Moving from policy to implementation has always been a hard part of governing, but today it’s easy to assume technology can make it easier. 

Yet even as technology presents unprecedented opportunities to bridge the gap between government programs and the people they serve, it also brings unprecedented challenges. How do we modernize without leaving people behind? How do we increase access without unduly burdening citizens? How do we increase efficiency and make services easier to use while still protecting sensitive data?

Today, technology is both an instrument and a medium of government, and in turn, it’s transforming the way citizens and states interact with each other. And it’s essential, even urgent, that governments understand this relationship—and how easily it can be broken, even by the tools meant to bolster it. After all, civic technology has the power to help, but not everything can be technologically simplified. Not everything can be automated. Bureaucrats can make forms all day long, but they are useless if people don’t know how to use them—or if they don’t even have the resources to access them or fill them out. 

Which is why, every week, Ramgoolam supports uninsured New Yorkers as they navigate the ever-growing, ever-changing, always tangled web of online forms that promise access to affordable care. 

“I’ve come across, in my lifetime, so many folks who have had many detrimental issues with the health insurance system,” she told me. “What motivates me is how great it makes me feel to know that I’ve succeeded in helping someone.”

New York City is something of a test lab for strategies to confront some big problems that plague the modern state. Akin to a country in the budget and bureaucratic complexity of its government, it has long grappled with the key question of how to make government work for people today. And through its experimentation, it is finding that sometimes the solution to doing big things also involves doing a lot of small things, sometimes with the lowest tech possible: a human sitting behind a table.

“Why can’t we just …?”  

When President Barack Obama took office in 2009, his administration was heralded as more technologically savvy than any that had come before. At the dawn of Web 2.0 and with immense faith in the power of technology to do big things, it hired the country’s first chief information officer, started the US Digital Service to modernize the executive branch, and issued a directive to “build a 21st-century digital government.” Technology was envisioned as a key to the administration’s ambitious plan for expanding access to health insurance.

Yet when Healthcare.gov launched in 2013, after three years of work and a cost of more than $300 million, the website crashed. Fewer than 10 people were able to enroll on the first day. 

In the years since, the Healthcare.gov fiasco has turned into a sort of parable for those working in policy implementation. The program’s tech-forward approach was meant to make it easier for people to compare the costs of health-care plans and enroll in one, but at least at first, the tech failed in spectacular fashion. 

The crash was indicative of massive challenges that the US still faces when it comes to government use of technology. Jennifer Pahlka was serving as deputy chief technology officer of the White House Office of Science and Technology Policy at the time. As she explains in her book Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better, the failed site launch was a reflection of just how big the “glaring gap between policy intentions and actual outcomes” really is.

In the book, Pahlka—who also founded Code for America, a nonprofit that pairs engineers, designers, and product managers with government agencies to improve public services—lays out the problem. “Whether for good or for ill, the essence of the digital revolution is that it has become easier to implement ideas of all kinds—business, cultural, and social,” she writes. “Inside government, however, the digital revolution has played out very differently. Even as our expectations about the immediacy and accuracy of services have skyrocketed, the implementation of laws has become anything but easier.”

In several conversations, Pahlka explained to me how well-intended policies morph between the time they pass a legislature and the time they finally trickle through the bureaucracy and down to the lives of everyday Americans. And today, of course, the way Americans interact with those policies is so often through technology—government websites, data management and record keeping, or benefit enrollment.    

“Ultimately, we tell the American public we’re gonna do this thing,” she told me, “and then the actual outcome that was desired may or may not occur.” The reason, she argued, is that policy implementation has grown so complex—and technology often complicates it even further. What’s more, the American system isn’t designed to empower technology designers in this process. Instead, legislators are making the choices without necessarily understanding what technology would help carry them out most effectively.

“We need to rediscover what democracy offers to us and apply that in the context of building services, making decisions, and doing regulation that works for people in a way that’s less like ‘Everybody throws their stuff in the pot of soup and then that’s what the soup is,’” she told me.

Jessica Ramgoolam (photograph by Matchull Summers)

Officials working on digital transformation and public services in New York, San Francisco, and Boston all told me that there is no silver bullet. Technology can be as much a part of the problem as it is part of the solution. As Cyd Harrell, the chief digital services officer of San Francisco, put it, the story of government technology is a story of the question “Why can’t we just …?” In other words, the contrast between the opportunities technology seems to offer and the challenges it often creates can make modern governing maddening. 

Even if the technology is promising, deploying it takes money and talent. There are challenges with procurement and integrating new systems with legacy tech. There are the realities of budgets, bureaucratic red tape, election cycles, and ever-growing legal complexities. And getting the technology itself right is no simple task, especially when citizens are accustomed to easy-to-use interfaces and information management systems from the likes of Apple, Microsoft, and Google.

It’s all these things at once that make the problems with government technology so intractable. But at the same time, it’s never been more critical to improve government effectiveness. 

“The stakes matter more at this moment than they ever have,” says Pahlka. “The [Inflation Reduction Act] is trying to save us from a climate collapse, the CHIPS Act is trying to save us from potential national security disasters, the infrastructure [law] is trying to save us from driving over bridges that might fall. 

“These are all core issues where I think if the American public doesn’t see government deliver, I think it’s less that they get driven toward one party or another, and more that they get driven away from government altogether.”

In fact, according to recent survey data, trust in government is near record lows. Research has shown, too, that people who have had an unpleasant experience with government services are less likely to engage in civic activities like voting—and democracy depends on this kind of involvement from the people it serves. 

People invest trust in their government when it works for them. And right now it isn’t working. 

The disconnect

Bridging the gap between policy and implementation is just what Ramgoolam, the health-care specialist in New York, is doing at her table. 

She is a staffer for the Public Engagement Unit of the New York City Mayor’s Office, which was first created by Mayor Bill de Blasio in 2015 and was specifically designed to boost enrollment in underutilized programs. Before this, New Yorkers who needed help had to call 311 for assistance or physically show up at the offices of the Human Resources Administration.

“Unfortunately,” says Adrienne Lever, the executive director of the Public Engagement Unit, “there are resources that are underused, and that is just a waste. There is a resource, and there is a person in need. We just need to figure out how to make that connection happen.” 

Lever told me that often those most in need of benefits are the least equipped to navigate a complex process required to access them, and the discrepancy becomes particularly acute when someone is in crisis.

“PEU’s target populations are often lower income. We work with a lot of seniors. Many of them don’t have access to computers, let alone the internet. Some are homebound and don’t have the ability to go out,” Lever explains. “So with those populations in mind, even if the technology is not flawed in and of itself, they may not have the resources or the information to be able to just fill out a simple Google form.”

And many applications are much more difficult to navigate than a Google form. Take New York City’s Senior Citizen Rent Increase Exemption (SCRIE) program, which enables people over 62 to have their rent frozen, depending on their income, even if a landlord raises the price. The city then reimburses the landlord through a tax credit. The PEU has reached out to 20,000 New Yorkers so far this year who might be eligible for a rent freeze.

Lever told me about one eligible New Yorker, whom she identified only by his first initial. D called the city asking for help renewing his enrollment in the program, but he was missing some required documentation, including a renewed lease. He also had severe cognitive and physical disabilities after suffering a stroke, which made it impossible for him to navigate the rest of the application online, or even with help over the phone. 

Benefit programs like SCRIE and those related to health care are particularly troublesome. They’re often the product of complex regulation that has been chewed on by many policymakers and regulatory agencies, freighted with legal requirements, stipulations, and definitions, and shaped by layers of compromise. 

The frequent upshot is that these programs are implemented only partially or with so many barriers that they are inaccessible to people most in need. As a result, many policies lose their impact. The SCRIE program, for example, had nearly 76,000 people enrolled as of 2019, though it’s estimated that around 135,000 New Yorkers were eligible, according to an October 2022 status report. Many benefit programs in the city—including Fair Fares, which offers lower public transportation prices for eligible travelers, and NYC Care, which increases access to low-cost and no-cost health care—are also underenrolled.

Making matters worse, the system is always growing as more laws are written and more programs are started—but different public benefit programs are administered by different agencies, each with its own databases and registration processes. When people are eligible for a number of separate programs, which is common, they have to work through each of these agencies individually to enroll. New York doesn’t currently have a centralized database that manages city benefits, in part because of regulatory constraints that limit data sharing and in part because siloed processes and legacy technology make it difficult to stitch all these processes together. 

Virtually every government office across the US faces or has recently faced a similar problem. In 2015, for instance, there were over 450 different websites just for veteran services before the US Digital Service swooped in to overhaul the online registration processes through a redesign of Vets.gov.  

As the world moves online, policy implementation that doesn’t center citizen accessibility will increasingly lead to undersubscribed benefits programs or laws that, in practice, look very different from what their drafters intended. 

Vivek Kundra, who served as the first chief information officer of the United States in the Obama administration, told me that government is working, even if slowly, to adapt to this new reality. “I think we have to reimagine and even rethink what we mean when we talk about policy,” Kundra said. “There’s going to be a massive impact on the regulatory front that we haven’t even conceived yet.” 

Door knocking for benefits 

New York City’s Public Engagement Unit has found that it needs to deploy low-tech interventions to bring people into the high-tech ecosystem. Consistent outreach through multiple channels is the most effective way it’s found to support people eligible for city programs as they cope with the bureaucratic complexity. Above everything else, the unit’s staffers aim to take some of the burden, technological or otherwise, off average city residents. 

Lever told me she believes it’s the government’s responsibility to “help people break through that struggle and find the resources they need to get access to the services that they deserve.” 

So the unit applies what it calls “campaign tactics” to policy implementation, proactively engaging with New Yorkers through door knocking, phone banking, text messages, emails, and public events to share information about city services like rent assistance, public transportation subsidies, and—of course—health care, and to help people sign up for them.

“My goal and my team’s goal is to limit the technical complexity and, as much as possible, also minimize the amount of times that you have to provide the same piece of information.”

Matt Fraser, NYC chief technology officer

The specific outreach approach depends on the population involved. For young people in the city, texts alone might do the trick. If the unit wants to target seniors, it might also start with a mass text campaign, since most people are comfortable with cell phones, and quickly move on to door knocking and in-person support for those who don’t respond to texts. To reach those who are not accessible by phone or at home, staffers work with community-based organizations and in public spaces like libraries to meet people in person. 

I recently tested the PEU’s system, texting the unit to ask for help with my health insurance options. I received an immediate text back and two follow-up calls the same day. When I didn’t reply, I continued to get texts and calls consistently throughout the week until I informed them that I did not need help any longer. It was almost annoying, but it was effective.

The PEU has seen that people are significantly more likely to sign up for government programs when the city comes to them, whether it’s through texts, calls, or some other approach. In one study of a campaign to enroll New Yorkers in the Fair Fares program, the PEU targeted people already registered in the Supplemental Nutrition Assistance Program (SNAP), since the eligibility requirements are similar. It found that people it texted were 46% more likely to sign up for Fair Fares than those it didn’t reach out to. And eligible New Yorkers who texted back were 168% more likely to enroll.

Avoiding techno-solutionist traps 

The PEU is proving that more, or more complicated, tech is not always the answer. Shiny tech-savvy government projects touted by politicians can prove to be radical letdowns. Take blockchain voting, which West Virginia briefly piloted during the 2020 election; after much media attention, the experiment was abandoned once it was clear the technology couldn’t provide any increased security for electronic voting. 

Or consider the rise, and rapid fall, of education technology programs during the pandemic; at first, Zoom and personalized online lessons seemed like a great way to replace in-person teaching, but core learning metrics dipped dramatically across the country.  

In many cases, advances in technology meant to help implement public policy have actually harmed people they were supposed to help. Think of electronic health records, which have led to infringements on patient privacy, and even deaths, caused by data errors. Or the use of facial recognition in policing, which is less accurate for Black and brown people, leading to false arrests and actually decreasing public safety for large swaths of the population. 

But this hasn’t stopped political leaders from pinning their administrations’ fates on new technology, even in New York City. 

In December 2022, toward the end of his first year in office, Mayor Eric Adams told Politico: “It blows my mind how much we have not embraced technology, and part of that is because many of our electeds are afraid. Anything technology, they think, ‘Oh, it’s a boogeyman. It’s Big Brother watching you.’

“No, Big Brother is protecting you,” he added. 

The comments have somewhat defined Adams’s style in office since. He has supported the deployment of police tech, including facial recognition, and he has prioritized incorporating technological solutions into city programs. This includes finally building a centralized database residents can use for city services—a potential one-stop shop for benefits access. 

“The newly launched MyCity online portal will allow New Yorkers to go online [and] easily search, apply for, and track city services and benefits right from their smartphones or computers,” Adams said in March 2023. “We are using the power of technology to reduce the bureaucracy and red tape in our government, to help New Yorkers get the services their taxes pay for, and to get stuff done for the working people of this city.” (The mayor’s press office did not respond to requests for comment.)

NYC chief technology officer Matt Fraser has high hopes for the project, which will focus first on child-care benefits. (It’s a particularly daunting initial target; infant care in New York City costs over $21,000 per year on average, and according to the federal affordability standard, a household would need a combined income of over $300,000 in order to afford that.) 

The city offers several subsidized child-care programs, which are administered by at least three separate agencies; the sign-up process previously started with a separate paper form for each of them. In March, MyCity launched a child-care benefit portal that can screen applicants for two of the programs online and at once.  

“My goal and my team’s goal is to limit the technical complexity and, as much as possible, also minimize the amount of times that you have to provide the same piece of information,” Fraser told me.

The ability to go to one website, be screened, and submit one application for all city programs they may be eligible for would be a major upgrade for New Yorkers who struggle to navigate so many disparate, confusing applications today. 

The Adams administration isn’t the first to try to achieve this, though. In fact, Adams is the third mayor to attempt to centralize and streamline city benefits enrollment online. And while some more limited projects have had considerable success, like the de Blasio administration’s redesign of the central screening tool Access NYC, no administration has managed to create and sustain the technology for a comprehensive centralized registration portal. 

Ariel Kennan, a product designer and government tech researcher who led the redesign of Access NYC in 2016, told me that MyCity’s success depends on both political will and an internal investment in designing human-centered technology. The work of building the portal has been contracted out, as is common with government technology projects, even though Kennan notes that many similar projects have failed after outsourcing. Hiring contractors can lead to slow and expensive procurement cycles, high turnover, and minimal investment in technology and design teams within government, which ultimately makes it hard to turn digital services into sustainable, evolving solutions. 

Noel Hidalgo, cofounder and executive director of BetaNYC, a civic technology organization, echoes these sentiments. “Technology is a manifestation of bureaucracy and its complexities,” he told me. “These systems are built over decades, and we need technologists and designers to go work inside of city government.” (Fraser said that government employees “remain very involved” in MyCity.) 

For his part, Fraser recognizes the bleak history of government’s digital services, but he told me he’s committed to making MyCity a success; he sees the project as part of a greater mission. By expanding access to benefits through an easy online interface, MyCity will help “bring equity to government,” he said, adding that other initiatives to increase connectivity, digital access, and online literacy in largely offline communities are helping the city close the digital divide. 

Still, there are New York residents like D, the senior citizen who was trying to renew his SCRIE benefits. For him, technology simply couldn’t replace in-person assistance. After he had an unsuccessful phone call with the PEU, one of the unit’s specialists, Hakim Hamsi, showed up at his door and walked him through the forms. Hamsi expedited D’s application, and D’s rent dropped from $1,000 a month back down to his original rate of $850. D also introduced Hamsi to a neighbor, who now helps him stay on top of his renewal forms.

“All of this takes time,” says Hidalgo. 

“Government doesn’t work at the speed of the internet, and that’s fine—so long as it’s working to actually address these problems for New Yorkers.”