Learning from catastrophe

The philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the metaphor suggests, clock problems obey a certain logic. They are orderly and can be broken down and analyzed piece by piece. When a clock stops working, you’re able to take it apart, look for what’s wrong, and fix it. The fix may not be easy, but it’s achievable. Crucially, you know when you’ve solved the issue because the clock starts telling the time again. 

Wicked Problems: How to Engineer a Better World
Guru Madhavan
W.W. NORTON, 2024

Cloud problems offer no such assurances. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, trying to “fix” a cloud problem often ends up creating several new problems. For this reason, they don’t have a definitive “solved” state—only good and bad (or better and worse) outcomes. Trying to repair a broken-down car is a clock problem. Trying to solve traffic is a cloud problem.  

Engineers are renowned clock-problem solvers. They’re also notorious for treating every problem like a clock. Increasing specialization and cultural expectations play a role in this tendency. But so do engineers themselves, who are typically the ones who get to frame the problems they’re trying to solve in the first place. 

In his latest book, Wicked Problems, Guru Madhavan argues that the growing number of cloudy problems in our world demands a broader, more civic-minded approach to engineering. “Wickedness” is Madhavan’s way of characterizing what he calls “the cloudiest of problems.” It’s a nod to a now-famous coinage by Horst Rittel and Melvin Webber, professors at the University of California, Berkeley, who used the term “wicked” to describe complex social problems that resisted the rote scientific and engineering-based (i.e., clock-like) approaches that were invading their fields of design and urban planning back in the 1970s. 

Madhavan, who’s the senior director of programs at the National Academy of Engineering, is no stranger to wicked problems himself. He’s tackled such daunting examples as trying to make prescription drugs more affordable in the US and prioritizing development of new vaccines. But the book isn’t about his own work. Instead, Wicked Problems weaves together the story of a largely forgotten aviation engineer and inventor, Edwin A. Link, with case studies of man-made and natural disasters that Madhavan uses to explain how wicked problems take shape in society and how they might be tamed.

Link’s story, for those who don’t know it, is fascinating—he was responsible for building the first mechanical flight trainer, using parts from his family’s organ factory—and Madhavan gives a rich and detailed accounting. The challenges this inventor faced in the 1920s and ’30s—which included figuring out how tens of thousands of pilots could quickly and effectively be trained to fly without putting all of them up in the air (and in danger), as well as how to instill trust in “instrument flying” when pilots’ instincts frequently told them their instruments were wrong—were among the quintessential wicked problems of his time. 

To address a world full of wicked problems, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.

Unfortunately, while Link’s biography and many of the interstitial chapters on disasters, like Boston’s Great Molasses Flood of 1919, are interesting and deeply researched, Wicked Problems suffers from some wicked structural choices. 

The book’s elaborate conceptual framework and hodgepodge of narratives feel both fussy and unnecessary, making a complex and nuanced topic even more difficult to grasp at times. In the prologue alone, readers must bounce from the concept of cloud problems to that of wicked problems, which get broken down into hard, soft, and messy problems, which are then reconstituted in different ways and linked to six attributes—efficiency, vagueness, vulnerability, safety, maintenance, and resilience—that, together, form what Madhavan calls a “concept of operations,” which is the primary organizational tool he uses to examine wicked problems.

It’s a lot—or at least enough to make you wonder whether a “systems engineering” approach was the correct lens through which to examine wickedness. It’s also unfortunate because Madhavan’s ultimate argument is an important one, particularly in an age of rampant solutionism and “one neat trick” approaches to complex problems. To effectively address a world full of wicked problems, he says, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.  

Rational Accidents: Reckoning with Catastrophic Technologies
John Downer
MIT PRESS, 2024

While John Downer would likely agree with that sentiment, his new book, Rational Accidents, makes a strong argument that there are hard limits to even the best and broadest engineering approaches. Similarly set in the world of aviation, Downer’s book explores a fundamental paradox at the heart of today’s civil aviation industry: the fact that flying is safer and more reliable than should technically be possible.

Jetliners are an example of what Downer calls a “catastrophic technology.” These are “complex technological systems that require extraordinary, and historically unprecedented, failure rates—of the order of hundreds of millions, or even billions, of operational hours between catastrophic failures.”

Take the average modern jetliner, with its 7 million components and 170 miles’ worth of wiring—an immensely complex system in and of itself. There were over 25,000 jetliners in regular service in 2014, according to Downer. Together, they averaged 100,000 flights every single day. Now consider that in 2017, no passenger-carrying commercial jetliner was involved in a fatal accident. Zero. That year, passenger totals reached 4 billion on close to 37 million flights. Yes, it was a record-setting year for the airline industry, safety-wise, but flying remains an almost unfathomably safe and reliable mode of transportation—even with Boeing’s deadly 737 Max crashes in 2018 and 2019 and the company’s ongoing troubles.

Downer, a professor of science and technology studies at the University of Bristol, does an excellent job in the first half of the book dismantling the idea that we can objectively recognize, understand, and therefore control all risk involved in such complex technologies. Using examples from well-known jetliner crashes, as well as from the Fukushima nuclear plant meltdown, he shows why there are simply too many scenarios and permutations of failure for us to assess or foresee such risks, even with today’s sophisticated modeling techniques and algorithmic assistance.

So how does the airline industry achieve its seemingly unachievable record of safety and reliability? It’s not regulation, Downer says. Instead, he points to three unique factors. First is the massive service experience the industry has amassed. Over the course of 70 years, manufacturers have built tens of thousands of jetliners, which have failed (and continue to fail) in all sorts of unpredictable ways. 

This deep and constantly growing data set, combined with the industry’s commitment to thoroughly investigating each and every failure, lets it generalize the lessons learned across the entire industry—the second key to understanding jetliner reliability. 

Finally, there’s what might be the most interesting and counterintuitive factor: Downer argues that the lack of innovation in jetliner design is an essential but overlooked part of the reliability record. The fact that the industry has been building what are essentially iterations of the same jetliner for 70 years ensures that lessons learned from failures are perpetually relevant as well as generalizable, he says. 

That extremely cautious relationship to change flies in the face of the innovate-or-die ethos that drives most technology companies today. And yet it allows the airline industry to learn from decades of failures and continue to chip away at the future “failure performance” of jetliners.

The bad news is that the lessons in jetliner reliability aren’t transferable to other catastrophic technologies. “It is an irony of modernity that the only catastrophic technology with which we have real experience, the jetliner, is highly unrepresentative, and yet it reifies a misleading perception of mastery over catastrophic technologies in general,” writes Downer.

For instance, to make nuclear reactors as reliable as jetliners, that industry would need to commit to one common reactor design, build tens of thousands of reactors, operate them for decades, suffer through thousands of catastrophes, slowly accumulate lessons and insights from those catastrophes, and then use them to refine that common reactor design.  

This obviously won’t happen. And yet “because we remain entranced by the promise of implausible reliability, and implausible certainty about that reliability, our appetite for innovation has outpaced our insight and humility,” writes Downer. With the age of catastrophic technologies still in its infancy, our continued survival may very well hinge not on innovating our way out of cloudy or wicked problems, but rather on recognizing, and respecting, what we don’t know and can probably never understand.  

If Wicked Problems and Rational Accidents are about the challenges and limits of trying to understand complex systems using objective science- and engineering-based methods, Georgina Voss’s new book, Systems Ultra, provides a refreshing alternative. Rather than dispassionately trying to map out or make sense of complex systems from the outside, Voss—a writer, artist, and researcher—uses her book to grapple with what they feel like, and ultimately what they mean, from the inside.

Systems Ultra: Making Sense of Technology in a Complex World
Georgina Voss
VERSO, 2024

“There is something rather wonderful about simply feeling our way through these enormous structures,” she writes before taking readers on a whirlwind tour of systems visible and unseen, corrupt and benign, ancient and new. Stops include the halls of hype at Las Vegas’s annual Consumer Electronics Show (“a hot mess of a Friday casual hellscape”), the “memetic gold mine” that was the container ship Ever Given and the global supply chain it broke when it got stuck in the Suez Canal, and the payment systems that undergird the porn industry. 

For Voss, systems are both structure and behavior. They are relational technologies that are “defined by their ability to scale and, perhaps more importantly, their peculiar relationship to scale.” She’s also keenly aware of the pitfalls of using an “experiential” approach to make sense of these large-scale systems. “Verbal attempts to neatly encapsulate what a system is can feel like a stoner monologue with pointed hand gestures (‘Have you ever thought about how electricity is, like, really big?’),” she writes. 

Nevertheless, her written attempts are a delight to read. Voss manages to skillfully unpack the power structures that make up, and reinforce, the large-scale systems we live in. Along the way, she also dispels many of the stories we’re told about their inscrutability and inevitability. That she does all this with humor, intelligence, and a boundless sense of curiosity makes Systems Ultra both a shining example of the “civic engagement as engineering” approach that Madhavan argues for in Wicked Problems, and proof that his argument is spot on. 

Bryan Gardiner is a writer based in Oakland, California.

Why China’s dominance in commercial drones has become a global security matter

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Whether you’ve flown a drone before or not, you’ve probably heard of DJI, or at least seen its logo. Holding more than a 90% share of the global consumer market, this Shenzhen-based company makes drones used by hobbyists and businesses alike for photography and surveillance, as well as for spraying pesticides, moving parcels, and many other purposes around the world.  

But on June 14, the US House of Representatives passed a bill that would completely ban DJI’s drones from being sold in the US. The bill is now being discussed in the Senate as part of the annual defense budget negotiations. 

The reason? While its market dominance has attracted scrutiny for years, it’s increasingly clear that DJI’s commercial products are so good and affordable they are also being used on active battlefields to scout out the enemy or carry bombs. As the US worries about the potential for conflict between China and Taiwan, the military implications of DJI’s commercial drones are becoming a top policy concern.

DJI has managed to set the gold standard for commercial drones because it is built on decades of electronic manufacturing prowess and policy support in Shenzhen. It is an example of how China’s manufacturing advantage can turn into a technological one.

“I’ve been to the DJI factory many times … and mainly, China’s industrial base is so deep that every component ends up being a fraction of the cost,” Sam Schmitz, the mechanical engineering lead at Neuralink, wrote on X. Shenzhen and surrounding towns have had a robust factory scene for decades, providing an indispensable supply chain for a hardware industry like drones. “This factory made almost everything, and it’s surrounded by thousands of factories that make everything else … nowhere else in the world can you run out of some weird screw and just walk down the street until you find someone selling thousands of them,” he wrote.

But Shenzhen’s municipal government has also significantly contributed to the industry. For example, it has granted companies more permission for potentially risky experiments and set up subsidies and policy support. Last year, I visited Shenzhen to experience how it’s already incorporating drones in everyday food delivery, but the city is also working with companies to use drones for bigger and bigger jobs—carrying everything from packages to passengers. All of this is part of a plan to build up the “low-altitude economy” in Shenzhen and keep the city on the leading edge of drone technology.

As a result, the supply chain in Shenzhen has become so competitive that the world can’t really use drones without it. Chinese drones are simply the most accessible and affordable out there. 

Most recently, DJI’s drones have been used by both sides in the Ukraine-Russia conflict for reconnaissance and bombing. Some American companies tried to replace DJI’s role, but their drones were more expensive and their performance unsatisfactory. And even as DJI publicly suspended its businesses in Russia and Ukraine and said it would terminate any reseller relationship if its products were found to be used for military purposes, the Ukrainian army is still assembling its own drones with parts sourced from China.

This reliance on one Chinese company and the supply chain behind it is what worries US politicians, but the danger would be more pronounced in any conflict between China and Taiwan, a prospect that is a huge security concern in the US and globally.

Last week, my colleague James O’Donnell wrote about a report by the think tank Center for a New American Security (CNAS) that analyzed the role of drones in a potential war in the Taiwan Strait. Right now, both Ukraine and Russia are still finding ways to source drones or drone parts from Chinese companies, but it’d be much harder for Taiwan to do so, since it would be in China’s interest to block its opponent’s supply. “So Taiwan is effectively cut off from the world’s foremost commercial drone supplier and must either make its own drones or find alternative manufacturers, likely in the US,” James wrote.

If the ban on DJI sales in the US is eventually passed, it will certainly hit the company hard, as the US drone market is currently worth an estimated $6 billion, the majority of which goes to DJI. But undercutting DJI’s advantage won’t magically grow an alternative drone industry outside China. 

“The actions taken against DJI suggest protectionism and undermine the principles of fair competition and an open market. The Countering CCP Drones Act risks setting a dangerous precedent, where unfounded allegations dictate public policy, potentially jeopardizing the economic well-being of the US,” DJI told MIT Technology Review in an emailed statement.

The Taiwanese government is aware of the risks of relying too much on China’s drone industry, and it’s looking to change that. In March, Taiwan’s newly elected president, Lai Ching-te, said that Taiwan wants to become the “Asian center for the democratic drone supply chain.” 

Already the hub of global semiconductor production, Taiwan seems well positioned to grow another hardware industry like drones, but it will probably still take years or even decades to build the economies of scale seen in Shenzhen. With support from the US, can Taiwanese companies really grow fast enough to meaningfully sway China’s control of the industry? That’s a very open question.

A housekeeping note: I’m currently visiting London, and the newsletter will take a break next week. If you are based in the UK and would like to meet up, let me know by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. ByteDance is working with the US chip design company Broadcom to develop a five-nanometer AI chip. This US-China collaboration, which should be compliant with US export restrictions, is rare these days given the political climate. (Reuters $)

2. After both the European Union and China announced new tariffs against each other, the two sides agreed to chat about how to resolve the dispute. (New York Times $)

  • Canada is preparing to announce its own tariffs on Chinese-made electric vehicles. (Bloomberg $)

3. A NASA leader says the US is “on schedule” to send astronauts to the moon within a few years. There’s currently a heated race between the US and China on moon exploration. (Washington Post $)

4. A new cybersecurity report says RedJuliett, a China-backed hacker group, has intensified attacks on Taiwanese organizations this year. (Al Jazeera $)

5. The Canadian government is blocking a rare earth mine from being sold to a Chinese company. Instead, the government will buy the stockpiled rare earth materials for $2.2 million. (Bloomberg $)

6. Economic hardship at home has pushed some Chinese small investors to enter the US marijuana industry. They have been buying land in the States, setting up marijuana farms, and hiring other recent Chinese immigrants. (NPR)

Lost in translation

In the past week, the most talked-about person in China has been a 17-year-old girl named Jiang Ping, according to the Chinese publication Southern Metropolis Daily. Every year since 2018, the Chinese company Alibaba has hosted a global mathematics contest that attracts students from prestigious universities around the world to compete for a generous prize. But to everyone’s surprise, Jiang, who’s studying fashion design at a vocational high school in a poor town in eastern China, ended up ranking 12th in the qualifying round this year, beating scores of college undergraduates and even master’s students. Other than reading college mathematics textbooks under her math teacher’s guidance, Jiang has received none of the professional training many of her competitors have.

Jiang’s story, highlighted by Alibaba following the announcement of the first-round results, immediately went viral in China. While some saw it as a tale of buried talents and how personal endeavor can overcome unfavorable circumstances, others questioned the legitimacy of her results. She became so famous that people, including social media influencers, kept visiting her home, turning her hometown into an unlikely tourist destination. The town had to hide Jiang from public attention while she prepared for the final round of the competition.

One more thing

After I wrote about the new Chinese generative video model Kling last week, the AI tool added a new feature that can turn a static photo into a short video clip. Well, what better way to test its performance than feeding it the iconic “distracted boyfriend” meme and watching what the model predicts will happen after that moment?

Update: The story has been updated to include a statement from DJI.

Hong Kong is targeting Western Big Tech companies in its new ban of a popular protest song

It wasn’t exactly surprising when, on Wednesday, May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet. The trial, in which no one represented the defense, was the culmination of a years-long battle over the song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city. But it remains an open question how exactly Western Big Tech companies will respond. Even though the injunction is narrowly designed to make it easier for them to comply, the companies may still be seen as aiding authoritarian control and obstructing internet freedom if they do so.  

Google, Apple, Meta, Spotify, and more have spent the last several years largely refusing to cooperate with previous efforts by the Hong Kong government to prevent the spread of the song, which the government has claimed is a threat to national security. But the government has also hesitated to leverage criminal law to force them to comply with requests for removal of content, which could risk international uproar and have a negative effect on the city’s economy. 

Now, the new ruling seemingly finds a third option: By providing the platforms with a civil injunction that doesn’t invoke criminal prosecution—which is similar to how copyright violations are enforced—the platforms can theoretically face less reputational blowback when they comply with the court order.

“If you look closely at the judgment, it’s basically tailor-made for the tech companies at stake,” says Chung Ching Kwong, a senior analyst at the Inter-Parliamentary Alliance on China, an advocacy organization that connects legislators from over 30 countries to try to hold China accountable. She believes the language in the judgment suggests the tech companies will now be ready to comply with the government’s request.

A Google spokesperson says the company is reviewing the court’s judgment and didn’t respond to specific questions sent by MIT Technology Review. A Meta spokesperson pointed to a statement from Jeff Paine, the managing director of the Asia Internet Coalition, a trade group representing many tech companies in the Asia-Pacific region: The AIC “is assessing the implications of the decision made today, including how the injunction will be implemented, to determine its impact on businesses. We believe that a free and open internet is fundamental to the city’s ambitions to become an international technology and innovation hub.” The AIC did not immediately reply to questions sent via email. Apple and Spotify didn’t immediately respond to requests for comment.

But no matter what these companies do next, the ruling is already having an effect: Just over 24 hours after the court order, some of the 32 YouTube videos that are explicitly named in the injunction as requiring removal were inaccessible for users worldwide, not just in Hong Kong. 

While it’s unclear whether the videos were removed by the platform or by their creators, experts say the court decision will almost certainly set a precedent for more content to be censored from Hong Kong’s internet in the future.

“Censorship of the song would be a clear violation of internet freedom and freedom of expression,” says Yaqiu Wang, the research director for China, Hong Kong, and Taiwan at Freedom House, a human rights advocacy group. “Google and other internet companies should use all available channels to challenge the decision.” 

Erasing a song from the internet

Since “Glory to Hong Kong” was first uploaded to YouTube in August 2019 by an anonymous group called Dgx Music, it’s been adored by protesters and applauded as their anthem. Its popularity only grew after China passed the harsh Hong Kong national security law in 2020.

It also unsurprisingly became a major flashpoint. With lyrics like “Liberate Hong Kong, revolution of our times,” the song made both the city government and Beijing wary of its spread. 

Their fears escalated when the song was repeatedly mistaken for China’s national anthem at international events and broadcast at sporting events after Hong Kong athletes won. By mid-2023, the mistake, intentional or not, had happened 887 times, according to the Hong Kong government’s request for the content’s removal; the request to the court cites YouTube videos and Google search results referring to the song as the “Hong Kong National Anthem” as the reason. 

The government has been arresting people for performing the song on the ground in Hong Kong, but it has been harder to prosecute the online activity since most of the videos and music were uploaded anonymously, and Hong Kong, unlike mainland China, has historically had a free internet. This meant officials needed to explore new approaches to content removal. 

To comply or not to comply

Using the controversial 2020 national security law as legal justification to make requests for removal of certain content deemed threatening, the Hong Kong government has been able to exert pressure on local companies, like internet service providers (ISPs). “In Hong Kong, all the major internet service providers are locally owned or Chinese-owned. For business reasons, probably within the last 20 years, most of the foreign investors like Verizon left on their own,” says Charles Mok, a researcher at Stanford University’s Cyber Policy Center and a former legislator in Hong Kong. “So right now, the government is focusing on telling the customer-facing internet service providers to do the blocking.” And it seems to have been somewhat effective, with a few websites for human rights activist organizations becoming inaccessible locally.

But the city government can’t get its way as easily when the content is on foreign-owned platforms like YouTube or Facebook. Back in 2020, most major Western companies declared they would pause processing data requests from the Hong Kong government while they assessed the law. Over time, some of them have started answering government requests again. But they’ve largely remained firm: over the first six months of 2023, for example, Meta received 41 requests from the Hong Kong government to obtain user data and complied with none; during the same period, Google received requests to remove 164 items from Google services and ended up removing 82 of them, according to both companies’ transparency reports. Google specifically mentioned that it chose not to remove two YouTube videos and one Google Drive file related to “Glory to Hong Kong.”

Both sides are in tight spots. Tech companies don’t want to lose the Hong Kong market or endanger their local staff, but they are also worried about being seen as complying with authoritarian government actions. And the Hong Kong government doesn’t want to be seen as openly fighting Western platforms while trust in the region’s financial markets is already in decline. In particular, officials fear international headlines if the government invokes criminal law to force tech companies to remove certain content. 

“I think both sides are navigating this balancing act. So the government finally figured out a way that they thought might be able to solve the impasse: by going to the court and narrowly seeking an injunction,” Mok says.

That happened in June 2023, when Hong Kong’s government requested a court injunction to ban the distribution of the song online with the purpose of “inciting others to commit secession.” It named 32 YouTube videos explicitly, including the original version and live performances, translations in other languages, instrumental and opera versions, and an interview of the original creators. But the order would also cover “any adaptation of the song, the melody and/or lyrics of which are substantially the same as the song,” according to court documents. 

The injunction went through a year of back-and-forth hearings, including a lower court ruling that briefly swatted down the ban. But now, the Court of Appeal has granted the government approval. The case can theoretically be appealed one last time, but with no defendants present, that’s unlikely to happen.

The key difference between this action and previous attempts to remove content is that this is a civil injunction, unlike a criminal prosecution—meaning it is, at least legally speaking, closer to a copyright takedown request. In turn, a platform could arguably be less likely to take a reputational hit as long as it removes the content upon request. 

Kwong believes this will indeed make platforms more likely to cooperate, and there have already been clear signs to that effect. In one hearing in December, the government was asked by the court to consult online platforms on the feasibility of the injunction. The final judgment this week says that while the platforms “have not taken part in these proceedings, they have indicated that they are ready to accede to the Government’s request if there is a court order.”

“The actual targets in this case, mainly the tech giants, may have less hesitation to comply with a civil court order than a national security order because if it’s the latter, they may also face backfire from the US,” says Eric Yan-Ho Lai, a research fellow at Georgetown Center for Asian Law. 

Lai also says now that the injunction is granted, it will be easier to prosecute an individual based on violating a civil injunction rather than prosecuting someone based on criminal offenses, since the government won’t need to prove criminal intent.

The chilling effect

Immediately after the injunction, human rights advocates called on tech companies to remain committed to their values. “Companies like Google and Apple have repeatedly claimed that they stand by the universal right to freedom of expression. They should put their ideals into practice,” says Freedom House’s Wang. “Google and other tech companies should thoroughly document government demands, and publish detailed transparency reports on content takedowns, both for those initiated by the authorities and those done by the companies themselves.”

Since the companies haven’t made their plans clear, it’s too early to know just how they will react. But right after the injunction was granted, the song largely remained available on most platforms, including YouTube, iTunes, and Spotify, for Hong Kong users, according to the South China Morning Post. On iTunes, the song even returned to the top of the download rankings a few hours after the injunction.

One key factor that may still determine corporate cooperation is how far the content removal requests go. There will surely be more videos of the song that are uploaded to YouTube, not to mention independent websites hosting the videos and music for more people to access. Will the government go after each of them too?

The Hong Kong government has previously said in court hearings that it seeks only a local restriction of the online content, meaning the content would be inaccessible only to users physically in the city—something large platforms like YouTube can do without difficulty. 

Theoretically, this allows local residents to still circumvent the ban by using VPN software, but not everyone would be technologically savvy enough to do so. And that wouldn’t do much to minimize the larger chilling effect on free speech, says Kwong from the Inter-Parliamentary Alliance on China. 

“As a Hong Konger living abroad, I do rely on Hong Kong services or international services based in Hong Kong to get a hold of what’s happening in the city. I do use YouTube Hong Kong to see certain things, and I do use Spotify Hong Kong or Apple Music because I want access to Cantopop,” she says. “At the same time, you worry about what you can share with friends in Hong Kong and whatnot. We don’t want to put them into trouble by sharing things that they are not supposed to see, which they should be able to see.”

The court made at least two explicit exemptions to the song’s ban, for “lawful activities conducted in connection with the song, such as those for the purpose of academic activity and news activity.” But even the implementation of these could be incredibly complex and confusing in practice. “In the current political context in Hong Kong, I don’t see anyone willing to take the risk,” Kwong says. 

The government has already arrested prominent journalists in the name of endangering national security, and a new law passed in 2024 has expanded the crimes that can be prosecuted on national security grounds. As with all efforts to suppress free speech, the impact of vague boundaries that encourage self-censorship on potentially sensitive topics is often sprawling and hard to measure. 

“Nobody knows where the actual red line is,” Kwong says.

The depressing truth about TikTok’s impending ban

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Allow me to indulge in a little reflection this week. Last week, the divest-or-ban TikTok bill was passed in Congress and signed into law. Four years ago, when I was just starting to report on the world of Chinese technologies, one of my first stories was about very similar news: President Donald Trump announcing he’d ban TikTok. 

That 2020 executive order came to nothing in the end—it was blocked in the courts, put aside after the presidency changed hands, and eventually withdrawn by the Biden administration. Yet the idea—that the US government should ban TikTok in some way—never went away. It would resurface repeatedly in one form or another. And eventually, on April 24, 2024, things came full circle.

A lot has changed in the four years between these two news cycles. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms, the originator of a generation-defining content medium, and a music-industry juggernaut. 

What has also changed is my outlook on the issue. For a long time, I thought TikTok would find a way out of the political tensions, but I’m increasingly pessimistic about its future. And I have even less hope for other Chinese tech companies trying to go global. If the TikTok saga tells us anything, it’s that their Chinese roots will be scrutinized forever, no matter what they do.

I don’t believe TikTok has become a larger security threat now than it was in 2020. There have always been issues with the app, like potential operational influence by the Chinese government, the black-box algorithms that produce unpredictable results, and the fact that parent company ByteDance never managed to separate the US side and the China side cleanly, despite efforts (one called Project Texas) to store and process American data locally. 

But none of those problems got worse over the last four years. And interestingly, while discussions in 2020 still revolved around potential remedies like setting up data centers in the US to store American data or having an organization like Oracle audit operations, those kinds of fixes are not in the law passed this year. As long as it still has Chinese owners, the app is not permissible in the US. The only thing it can do to survive here is transfer ownership to a US entity. 

That’s the cold, hard truth not only for TikTok but for other Chinese companies too. In today’s political climate, any association with China and the Chinese government is seen as unacceptable. It’s a far cry from the 2010s, when Chinese companies could dream about developing a killer app and finding audiences and investors around the globe—something many did pull off. 

There’s something I wrote four years ago that still rings true today: TikTok is the bellwether for Chinese companies trying to go global. 

The majority of Chinese tech giants, like Alibaba, Tencent, and Baidu, operate primarily within China’s borders. TikTok was the first to gain mass popularity in lots of other countries across the world and become part of daily life for people outside China. To many Chinese startups, it showed that the hard work of trying to learn about foreign countries and users can eventually pay off, and it’s worth the time and investment to try.

On the other hand, if even TikTok can’t get itself out of trouble, with all the resources that ByteDance has, is there any hope for the smaller players?

When TikTok found itself in trouble, the initial reaction of these other Chinese companies was to conceal their roots, hoping they could avoid attention. During my reporting, I’ve encountered multiple companies that fret about being described as Chinese. “We are headquartered in Boston,” one would say, while everyone in China openly talked about its product as the overseas version of a Chinese app.

But with all the political back-and-forth about TikTok, I think these companies are also realizing that concealing their Chinese associations doesn’t work—and it may make them look even worse if it leaves users and regulators feeling deceived.

With the new divest-or-ban bill, I think these companies are getting a clear signal that it’s not the technical details that matter—only their national origin. The same worry is spreading to many other industries, as I wrote in this newsletter last week. Even in the climate and renewable power industries, the presence of Chinese companies is becoming increasingly politicized. They, too, are finding themselves scrutinized more for their Chinese roots than for the actual products they offer.

Obviously, none of this is good news to me. When they feel unwelcome in the US market, Chinese companies don’t feel the need to talk to international media anymore. Without these vital conversations, it’s even harder for people in other countries to figure out what’s going on with tech in China.

Instead of banning TikTok because it’s Chinese, maybe we should refocus on what TikTok did wrong: why certain sensitive political topics seem deprioritized on the platform; why Project Texas has stalled; how to make the algorithmic workings of the platform more transparent. These issues, rather than whether TikTok is still controlled by China, are the things that actually matter. It’s a harder path to take than just banning the app entirely, but I think it’s the right one.

Do you believe the TikTok ban will go through? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Facing the possibility of a total ban on TikTok, influencers and creators are making contingency plans. (Wired $)

2. TSMC has brought hundreds of Taiwanese employees to Arizona to build its new chip factory. But the company is struggling to bridge cultural and professional differences between American and Taiwanese workers. (Rest of World)

3. The US secretary of state, Antony Blinken, met with Chinese president Xi Jinping during a visit to China this week. (New York Times $)

  • Here’s the best way to describe these recent US-China diplomatic meetings: “The US and China talk past each other on most issues, but at least they’re still talking.” (Associated Press)

4. Half of Russian companies’ payments to China are made through middlemen in Hong Kong, Central Asia, or the Middle East to evade sanctions. (Reuters $)

5. A massive auto show is taking place in Beijing this week, with domestic electric vehicles unsurprisingly taking center stage. (Associated Press)

  • Meanwhile, Elon Musk squeezed in a quick trip to China and met with his “old friend” the Chinese premier Li Qiang, who was believed to have facilitated establishing the Gigafactory in Shanghai. (BBC)
  • Tesla may finally get a license to deploy its autopilot system, which it calls Full Self Driving, in China after agreeing to collaborate with Baidu. (Reuters $)

6. Beijing has hosted two rival Palestinian political groups, Hamas and Fatah, to talk about potential reconciliation. (Al Jazeera)

Lost in translation

The Chinese dubbing community is grappling with the impacts of new audio-generating AI tools. According to the Chinese publication ACGx, for a new audio drama, a music company licensed the voice of the famous dubbing actor Zhao Qianjing and used AI to transform it into multiple characters and voice the entire script. 

But online, this wasn’t really celebrated as an advancement for the industry. Beyond criticizing the quality of the audio drama (saying it still doesn’t sound like real humans), dubbers are worried about the replacement of human actors and increasingly limited opportunities for newcomers. Other than this new audio drama, there have been several examples in China where AI audio generation has been used to replace human dubbers in documentaries and games. E-book platforms have also allowed users to choose different audio-generated voices to read out the text. 

One more thing

While in Beijing, Antony Blinken visited a record store and bought two vinyl records—one by Taylor Swift and another by the Chinese rock star Dou Wei. Many Chinese (and American!) people learned for the first time that Blinken had previously been in a rock band.

Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundational models, or LFMs, like ChatGPT, Claude, and Gemini, require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories will lead to discoveries and innovations that serve as the foundation for future innovation. While tools enabled by LFMs are in everybody’s pocket, there are many questions that need to be answered about them, since they remain a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why. 

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should use large-scale GPU clusters to improve and leverage the supercomputer infrastructure the US government already funds. Academic researchers should be empowered to partner with the US National Labs on grand challenges in AI research.

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at a state level. This model should be emulated across the country. 

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet.

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).

A brief, weird history of brainwashing

On an early spring day in 1959, Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.” It was the kind of opportunity he relished. A war correspondent who had spent considerable time in Asia, Hunter had achieved brief media stardom in 1951 after his book Brain-Washing in Red China introduced a new concept to the American public: a supposedly scientific system for changing people’s minds, even making them love things they once hated. 

But Hunter wasn’t just a reporter, objectively chronicling conditions in China. As he told the assembled senators, he was also an anticommunist activist who served as a propagandist for the OSS, or Office of Strategic Services—something that was considered normal and patriotic at the time. His reporting blurred the line between fact and political mythology.

portrait of Liang Qichao
Chinese reformists like Liang Qichao used the term xinao—a play on an older word, xixin, or “washing the heart”—in an attempt to bring ideas from Western science into Chinese philosophy.
WIKIMEDIA COMMONS

When a senator asked about Hunter’s work for the OSS, the operative boasted that he was the first to “discover the technique of mind-attack” in mainland China, the first to use the word “brainwashing” in writing in any language, and “the first, except for the Chinese, to use the word in speech in any language.” 

None of this was true. Other operatives associated with the OSS had used the word in reports before Hunter published articles about it. More important, as the University of Hong Kong legal scholar Ryan Mitchell has pointed out, the Chinese word Hunter used at the hearing—xinao (洗脑), translated as “wash brain”—has a long history going back to scientifically minded Chinese philosophers of the late 19th century, who used it to mean something more akin to enlightenment. 

Yet Hunter’s sensational tales still became an important part of the disinformation and pseudoscience that fueled a “mind-control race” during the Cold War, much like the space race. Inspired by new studies on brain function, the US military and intelligence communities prepared themselves for a psychic war with the Soviet Union and China by spending millions of dollars on research into manipulating the human brain. But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day.

Coercive persuasion and pseudoscience

Ironically, “brainwashing” was not a widely used term among communists in China. The word xinao, Mitchell told me in an email, is actually a play on an older word, xixin, or washing the heart, which alludes to a Confucian and Buddhist ideal of self-awareness. In the late 1800s, Chinese reformists such as Liang Qichao began using xinao—replacing the character for “heart” with “brain”—in part because they were trying to modernize Chinese philosophy. “They were eager to receive and internalize as much as they could of Western science in general, and discourse about the brain as the seat of consciousness was just one aspect of that set of imported ideas,” Mitchell said. 

For Liang and his circle, brainwashing wasn’t some kind of mind-wiping process. “It was a sort of notion of epistemic virtue,” Mitchell said, “or a personal duty to make oneself modern in order to behave properly in the modern world.”

Meanwhile, scientists outside China were investigating “brainwashing” in the sense we usually think of, with experiments into mind clearing and reprogramming. Some of the earliest research into the possibility began in the 1890s, when Ivan Pavlov, the Russian physiologist who had famously conditioned dogs to drool at the sound of a bell, worked on Soviet-funded projects to investigate how trauma could change animal behavior. He found that even the most well-conditioned dogs would forget their training after intensely stressful experiences such as nearly drowning, especially when those were combined with sleep deprivation and isolation. It seemed that Pavlov had hit upon a quick way to wipe animals’ memories. Scientists on both sides of the Iron Curtain subsequently wondered whether it might work on humans. And once memories were wiped, they wondered, could something else be installed in their place? 

During the 1949 show trial of the Hungarian anticommunist József Mindszenty, American officials worried that the Russians might have found the answer. A Catholic cardinal, Mindszenty had protested several government policies of the newly formed, Soviet-backed Hungarian People’s Republic. He was arrested and tortured, and he eventually made a series of outlandish confessions at trial: that he had conspired to steal the Hungarian crown jewels, start World War III, and make himself ruler of the world. In his book Dark Persuasion, Joel Dimsdale, a psychiatry professor at the University of California, San Diego, argues that the US intelligence community saw these implausible claims as confirmation that the Soviets had made some kind of scientific breakthrough that allowed them to control the human mind through coercive persuasion.

This question became more urgent when, in 1953, a handful of American POWs in China and Korea switched sides, and a Marine named Frank Schwable was quoted on Chinese radio validating the communist claim that the US was testing germ warfare in Asia. By this time, Hunter had already published a book about brainwashing in China, so the Western public quickly gravitated toward his explanation that the prisoners had been brainwashed, just like Mindszenty. People were terrified, and this was a reassuring explanation for how nice American GIs could go Red. 

Edward Hunter, who claimed to have coined the term “brainwashing,” wrote a book that fueled paranoia about a “mind-control race” during the Cold War.
A pamphlet published in 1955, purported to be a translation of a work by the Russian secret police, claimed that the Soviets used drugs and psychology to control the masses and that Dianetics, a pseudoscience invented by Scientology founder L. Ron Hubbard, could prevent brainwashing.

Over the following years, in the wake of the Korean War, “brainwashing” grew into a catchall explanation for any kind of radical or nonconformist behavior in the United States. Social scientists and politicians alike latched onto the idea. The Dutch psychologist Joost Meerloo warned that television was a brainwashing machine, for example, and the anticommunist educator J. Merrill Root claimed that high schools brainwashed kids into being weak-willed and vulnerable to communist influence. Meanwhile, popular movies like 1962’s The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins. 

For the military and intelligence communities, mind control hovered between myth and science. Nowhere is this more obvious than in the peculiar case of an anonymously published 1955 pamphlet called Brain-Washing: A Synthesis of the Russian Textbook on Psychopolitics, which purported to be a translation of work by the Soviet secret-police chief Lavrentiy Beria. Full of wild claims about how the Soviets used psychology and drugs to control the masses, the pamphlet has a peculiar section devoted to the ways that Dianetics—a pseudoscience invented by the founder of Scientology, L. Ron Hubbard—could prevent brainwashing. As a result, it is widely believed that Hubbard himself wrote the pamphlet as black propaganda, or propaganda that masquerades as something produced by a foreign adversary. 

The 1962 film The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins.
ALAMY

Still, US officials apparently took it seriously. David Seed, a cultural studies scholar at the University of Liverpool, plumbed the National Security Council papers at the Dwight D. Eisenhower Library, where he discovered that the NSC’s Operations Coordinating Board had analyzed the pamphlet as part of an investigation into enemy capabilities. A member of the board wrote that it might be “fake” but contained so much accurate information that it was clearly written by “experts.” When it came to brainwashing, government operatives made almost no distinction between black propaganda and so-called expertise.

This gobbledygook may also have struck the NSC investigator as legitimate because Hubbard borrowed lingo from the same sources as many scientists of the era. Hubbard chose the name Dianetics, for instance, specifically to evoke the computer scientist Norbert Wiener’s idea of cybernetics, an influential theory about information control systems that heavily informed both psychology and the burgeoning field of artificial intelligence. Cybernetics suggested that the brain functioned like a machine, with inputs and outputs, feedback and control. And if machines could be optimized, then why not brains?

An excuse for government abuse 

The fantasy of brainwashing was always one of optimization. Military experts knew that adversaries could be broken with torture, but it took months and was often a violent, messy process. A fast, scientifically informed interrogation method would save time and could potentially be deployed on a mass scale. In 1953, that dream led the CIA to invest millions of dollars in MK-Ultra, a project that injected cash into university and research programs devoted to memory wiping, mind control, and “truth serum” drugs. Worried that their rivals in the Soviet Union and China were controlling people’s minds to spread communism throughout the world, the intelligence community was willing to try almost anything to fight back. No operation was too weird. 

One of MK-Ultra’s most notorious projects was “Operation Midnight Climax” in San Francisco, where sex workers lured random American men to a safe house and dosed them with LSD while CIA agents covertly observed their behavior. At McGill University in Montreal, the CIA funded the work of the psychologist Donald Cameron, who used a combination of drugs and electroconvulsive therapy on patients with mental illness, attempting to erase and “repattern” their minds. Though many of his victims did wind up suffering from amnesia for years, Cameron never successfully injected new thoughts or memories. Marcia Holmes, a science historian who researched brainwashing for the Hidden Persuaders project at Birkbeck, University of London, told me that the CIA used Cameron’s data to develop new kinds of torture, which the US adopted as  “enhanced interrogation” techniques in the wake of 9/11. “You could put a scientific spin on it and claim that’s why it worked,” she said. “But it always boiled down to medieval tactics that people knew from experience worked.”

Schwable
Believed to be a victim of communist mind control, the American POW Frank Schwable claimed on Chinese radio in 1953 that the US was testing germ warfare in Asia.
József Mindszenty
After being arrested and tortured, the Catholic cardinal and anticommunist József Mindszenty made outlandish confessions at trial, like that he had conspired to steal the Hungarian crown jewels.

MK-Ultra remained secret until the mid-1970s, when the US Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, commonly known as the Church Committee after its chair, Senator Frank Church, opened hearings into the long-­running project. The shocking revelations that the CIA was drugging American citizens and paying for the torment of vulnerable Canadians changed the public’s understanding of mind control. “Brainwashing” came to seem less like a legitimate threat from overseas enemies and more like a ruse or excuse for almost any kind of bad behavior. When Patty Hearst, granddaughter of the newspaper publisher William Randolph Hearst, was put on trial in 1976 for robbing a bank after being kidnapped by the Symbionese Liberation Army, an American militant organization, the judge refused to believe experts who testified that she had been tortured and brainwashed by her captors. She was convicted and spent 22 months in jail. This marked the end of the nation’s infatuation with brainwashing, and experts began to debunk the idea that there was a scientific basis for mind control.

Patty Hearst against a red flag
At publishing heiress Patty Hearst’s 1976 trial for bank robbery, the judge refused to believe that she had been brainwashed as a victim of kidnapping.
GIFT OF TIME MAGAZINE

Still, the revelations about MK-Ultra led to new cultural myths. Communists were no longer the baddies—instead, people feared that the US government was trying to experiment on its citizens. Soon after the Church Committee hearings were over, the media was gripped by a crime story of epic proportions: nearly two dozen Black children had been murdered in Atlanta, and the police had no leads other than a vague idea that maybe it could be a serial killer. Wayne Williams, a Black man who was eventually convicted of two of the murders, claimed at various points that he had been trained by the CIA. This led to popular conspiracy theories that MK-Ultra had been experimenting on Black people in Atlanta.

Colin Dickey, author of Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy, told me these conspiracy theories became “a way of making sense of an otherwise mystifying and terrifying reality, [which is that America is] a country where Black people are so disenfranchised that their murders aren’t noticed.” Dickey added that this MK-Ultra conspiracy theory “gave a shape to systemic racism,” placing blame for the Atlanta child murders on the US government. In the process, it also suggested that Black people had been brainwashed to kill each other. 

No evidence ever surfaced that MK-Ultra was behind the children’s deaths, but the idea of brainwashing continues to be a powerful metaphor for the effects of systemic racism. It haunts contemporary Black horror films like Get Out, where white people take over Black people’s bodies through a fantastical version of hypnosis. And it provides the analytical substrate for the scathing indictment of racist marketing in the book Brainwashed: Challenging the Myth of Black Inferiority, by the Black advertising executive Tom Burrell. He argues that advertising has systematically pushed stereotypes of Black people as second-class citizens, instilling a “slave mindset” in Black audiences.

A social and political phenomenon

Today, even as the idea of brainwashing is often dismissed as pseudoscience, Americans are still spellbound by the idea that people we disagree with have been psychologically captured by our enemies. Right-wing pundits and politicians often attribute discussions of racism to infections by a “woke mind virus”—an idea that is a direct descendant of Cold War panics over communist brainwashing. Meanwhile, contemporary psychology researchers like UCSD’s Dimsdale fear that social media is now a vector for coercive persuasion, just as Meerloo worried about television’s mind-control powers in the 1950s. 

Cutting-edge technology is also altering how we think about mind control. In a 2017 open letter published in Nature, an international group of researchers and ethicists warned that neurotechnologies like brain-computer interfaces “mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions.” It sounds like MK-Ultra’s wish list. Hoping to head off a neuro-dystopia, the group outlined several key ways that companies and universities could guard against coercive uses of this technology in the future. They suggested that we need laws to prevent companies from spying on people’s private thoughts, for example, as well as regulations that bar anyone from using brain implants to change people’s personalities or make them more neurotypical. 

Many neuroscientists feel that these concerns are overblown; one of them, the University of Maryland cognitive scientist R. Douglas Fields, summed up the naysayers’ position with a column in Quanta magazine arguing that the brain is more plastic than we realize, and that neurotech mind control will never be as simple as throwing a switch. Kathleen Taylor, another neuroscientist who studies brainwashing, takes a more measured view; in her book Brainwashing: The Science of Thought Control, she acknowledges that neurotech and drugs could change people’s thought processes but ultimately concludes that “brainwashing is above all a social and political phenomenon.” 

Sidney Gottlieb
Sidney Gottlieb was an American chemist and spymaster who in the 1950s headed the Central Intelligence Agency’s mind-control program known as Project MK-Ultra.
COURTESY OF THE CIA

Perhaps that means the anonymous National Security Council examiner was right to call Hubbard’s black propaganda the work of an “expert.” If brainwashing is politics, then disinformation might be as effective (or ineffective) as a brain implant in changing someone’s mind. Still, scholars have learned that political efforts at mind control do not have predictable results. Online disinformation leads to what Juliette Kayyem, a former assistant secretary of the Department of Homeland Security, identifies as stochastic terrorism, or acts of violence that cannot be predicted precisely but can be analyzed statistically. She writes that stochastic terrorism is inspired by online rhetoric that demonizes groups of people, but it’s hard to know which people consuming that rhetoric will actually become terrorists, and which of them will just rage at their computer screens—the result of coercive persuasion that works on some targets and misses others. 

American operatives may never have found the perfect system for brainwashing foreign adversaries or unsuspecting citizens, but the US managed to win the mind-control wars in one small way. Mitchell, the legal scholar at Hong Kong University, told me that the American definition of brainwashing, or xinao, is now the dominant way the word is used in modern Chinese speech. “People refer to aggressive advertising campaigns or earworm pop songs as having a xinao effect,” he said. The Chinese government, Mitchell added, uses the term exactly the way the US military did back in the 1950s. State media, for example, “described many Hong Kong protesters in 2019 as having undergone xinao by the West.”

Annalee Newitz is the author of Stories Are Weapons: Psychological Warfare and the American Mind, coming in June 2024.

Africa’s push to regulate AI starts now        

In the Zanzibar archipelago of Tanzania, rural farmers are using an AI-assisted app called Nuru that works in their native language of Swahili to detect a devastating cassava disease before it spreads. In South Africa, computer scientists have built machine learning models to analyze the impact of racial segregation in housing. And in Nairobi, Kenya, AI classifies images from thousands of surveillance cameras perched on lampposts in the bustling city’s center. 

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.

Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology. But debates on when AI regulation is warranted and concerns about stifling innovation could pose a roadblock, while a lack of AI infrastructure could hold back the technology’s adoption.  

“We’re seeing a growth of AI in the continent;  it’s really important there be set rules in place to govern these technologies,” says Chinasa T. Okolo, a fellow in the Center for Technology Innovation at Brookings, whose research focuses on AI governance and policy development in Africa.

Some African countries have already begun to formulate their own legal and policy frameworks for AI. Seven have developed national AI policies and strategies, which are currently at different stages of implementation. 

On February 29, the African Union Development Agency published a policy draft that lays out a blueprint of AI regulations for African nations. The draft includes recommendations for industry-specific codes and practices, standards and certification bodies to assess and benchmark AI systems, regulatory sandboxes for safe testing of AI, and the establishment of national AI councils to oversee and monitor responsible deployment of AI. 

The heads of African governments are expected to eventually endorse the continental AI strategy, but not until February 2025, when they meet next at the AU’s annual summit in Addis Ababa, Ethiopia. Countries with no existing AI policies or regulations would then use this framework to develop their own national strategies, while those that already have them would be encouraged to review their policies and align them with the AU’s.

Elsewhere, major AI laws and policies are also taking shape. This week, the European Union passed the AI Act, set to become the world’s first comprehensive AI law. In October, the United States issued an executive order on AI. And the Chinese government is eyeing a sweeping AI law similar to the EU’s, while also setting rules that target specific AI products as they’re developed. 

If African countries don’t develop their own regulatory frameworks that protect citizens from the technology’s misuse, some experts worry that Africans will face social harms, including bias that could exacerbate inequalities. And if these countries don’t also find a way to harness AI’s benefits, others fear these economies could be left behind. 

“We want to be standard makers”

Some African researchers think it’s too early to be thinking about AI regulation. The industry is still nascent there due to the high cost of building data infrastructure, limited internet access, a lack of funding, and a dearth of powerful computers needed to train AI models. A lack of access to quality training data is also a problem. African data is largely concentrated in the hands of companies outside of Africa.

In February, just before the AU’s AI policy draft came out, Shikoh Gitau, a computer scientist who started the Nairobi-based AI research lab Qubit Hub, published a paper arguing that Africa should prioritize the development of an AI industry before trying to regulate the technology. 

“If we start by regulating, we’re not going to figure out the innovations and opportunities that exist for Africa,” says David Lemayian, a software engineer and one of the paper’s co-authors.  

Okolo, who consulted on the AU-AI draft policy, disagrees. Africa should be proactive in developing regulations, Okolo says. She suggests African countries reform existing laws such as policies on data privacy and digital governance to address AI. 

But Gitau is concerned that a hasty approach to regulating AI could hinder adoption of the technology. And she says it’s critical to build homegrown AI with applications tailored for Africans to harness the power of AI to improve economic growth. 

“Before we put regulations [in place], we need to do the hard work of understanding the full spectrum of the technology and invest in building the African AI ecosystem,” she says.

More than 50 countries and the EU have AI strategies in place, and more than 700 AI policy initiatives have been implemented since 2017, according to the Organisation for Economic Co-operation and Development’s AI Policy Observatory. But only five of those initiatives are from Africa and none of the OECD’s 38 member countries are African.

Africa’s voices and perspectives have largely been absent from global discussions on AI governance and regulation, says Melody Musoni, a policy and digital governance expert at ECDPM, an independent policy think tank in Brussels.

“We must contribute our perspectives and own our regulatory frameworks,” says Musoni. “We want to be standard makers, not standard takers.” 

Nyalleng Moorosi, a specialist in ethics and fairness in machine learning who is based in Hlotse, Lesotho, and works at the Distributed AI Research Institute, says that some African countries are already seeing labor exploitation by AI companies. This includes poor wages and lack of psychological support for data labelers, who are largely from low-income countries but working for big tech companies. She argues regulation is needed to prevent that, and to protect communities against misuse by both large corporations and authoritarian governments.

In Libya, autonomous lethal weapons systems have already been used in fighting, and in Zimbabwe, a controversial, military-driven national facial-recognition scheme has raised concerns over the technology’s alleged use as a surveillance tool by the government. The draft AU-AI policy didn’t explicitly address the use of AI by African governments for national security interests, but it acknowledges that there could be perilous AI risks. 

Barbara Glover, program officer for an African Union group that works on policies for emerging technologies, points out that the policy draft recommends that African countries invest in digital and data infrastructure, and collaborate with the private sector to build investment funds to support AI startups and innovation hubs on the continent. 

Unlike the EU, the AU lacks the power to enforce sweeping policies and laws across its member states. Even if the draft AI strategy wins endorsement of parliamentarians at the AU’s assembly next February, African nations must then implement the continental strategy through national AI policies and laws.

Meanwhile, tools powered by machine learning will continue to be deployed, raising ethical questions and regulatory needs and posing a challenge for policymakers across the continent. 

Moorosi says Africa must develop a model for local AI regulation and governance that balances localized risks and rewards. “If it works with people and works for people, then it has to be regulated,” she says.

Chinese EVs have entered center stage in US-China tensions

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

So far, electric vehicles have mostly been discussed in the US through a scientific, economic, or environmental lens. But all of a sudden, they have become highly political. 

Last Thursday, the Biden administration announced it would investigate the security risks posed by Chinese-made smart cars, which could “collect sensitive data about our citizens and our infrastructure and send this data back to the People’s Republic of China,” according to the White House statement.

While many other technologies from China have been scrutinized because of security concerns, EVs have largely avoided that sort of attention until now. After all, they represent a technology that will greatly help the world transition to clean and renewable energy, and people have greeted its rapid growth in China with praise.

But US-China relations have been at a low point since the Trump years and the pandemic, and it seems like only a matter of time before any trade or interaction between the two countries falls under security scrutiny. Now it’s EVs’ turn.

The White House has made clear that there are two motivations behind the investigation: the economy and security.

Even though the statement didn’t explicitly mention EVs, it’s undeniable that they are the only reason Chinese automakers have now become serious challengers to their American peers. Chinese companies like BYD make quality EVs at affordable prices, which makes them increasingly competitive in international markets. A recent report by the Alliance for American Manufacturing, an industry group, even describes EV competition as “China’s existential threat to America’s auto industry.”

“The issue of Chinese EV imports really hits on so many major political factors all at the same time,” says Kyle Chan, a sociology researcher at Princeton University who studies industrial policies and China. “Not just the auto plants in swing states like Michigan and Ohio, but the broader auto manufacturing sector spread over many important states.”

If the US auto industry fails to remain competitive, it will threaten the job security of millions of Americans, and countless other parts of the US economy will be affected. So it’s no surprise Chinese EVs are seen as a major economic threat that needs to be addressed. 

In fact, it’s one of the few issues everyone seems to agree on in this election cycle. Before the Biden investigation, Trump drew people’s attention to Chinese EVs during campaign speeches, vowing to slap a 60% tariff on Chinese imported goods. Josh Hawley, a Republican senator and a longtime China hawk, proposed a bill last Tuesday for a whopping 125% tariff on Chinese cars, including Chinese-branded cars made in other countries like Mexico.

But the new action taken by the Biden administration introduces another factor to the discussion: security threats.

Basically, the argument here is that Chinese cars—especially the newer ones with smart features that collect information from the environment or connect to the telecom and satellite network—could be used to steal information and harm US national interests. 

To many experts, this argument is a lot less supported by reality. When TikTok and Huawei were subject to similar concerns, it was because their products were widely used in the US. But the majority of Chinese-made cars are running inside China. There are barely any Chinese cars being sold in the US today, let alone the latest models. That makes the White House’s position look slightly bizarre. 

Lei Xing, an auto analyst and observer of the EV industry, has very strong opinions about the security accusations in the Biden administration’s announcement. “It is full of subjective and inaccurate statements trying to paint a picture of threat and security risk that is much greater than it actually is, and is obviously aimed at gaining voter favor as the presidential election race heats up,” Xing tells me.

Nonetheless, fears over data security are shared across the political spectrum in the US. “There has been almost an emerging consensus in Washington, across party lines, that is much more concerned about Chinese data collection through potential technology channels,” Chan says. 

This lens has now been used to question almost any technology product with Chinese connections: whether it’s Chinese cars, Chinese e-commerce apps like Shein and Temu, social media platforms like TikTok and WeChat, or smart home gadgets, the sentiment about data security remains the same.

Having watched these other technologies come into the geopolitical crossfire from afar, Chinese EV companies were mostly prepared for what was announced last week. 

“I think the Chinese EV firms have already baked this into their calculations,” Chan says. “As they’ve been ramping up more joint ventures and partnerships and entering other markets of the world, I’ve noticed a very clear reluctance to put that much investment into the US market.”

Recently, BYD Americas’ CEO said in an interview that its new planned factory in Mexico will serve the domestic market rather than exporting to the US; Xing learned recently that NIO, another Chinese car company, removed the US from its initial plan of entering 25 markets by 2025. These are all signs that Chinese EV companies will shy away from the US market for a while, at least until the political animosity goes away. Being unable to sell in the world’s second-largest auto market is obviously not good news, but they have a lot of potential customers in Europe, Latin America, and Southeast Asia.

“[The Chinese auto industry] for now will remain in a ‘watch and study’ mode and strategize accordingly. Mexico will be an important market and a critical production hub for the Americas region whether [the industry] eventually enters America or not,” says Xing.

I had been counting down the days until we’d be able to drive Chinese EVs in the US and see how they compete with American cars on their home turf. I guess I’ll be in for a very long wait.

Do you think this move will help or harm US domestic automakers in the long run? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. China started its annual parliamentary meeting today. It’s the highest-level political meeting in China, and it’s where economic plans and other important policy signals are often released. So watch this space. (NBC News)

  • For the first time in 30 years, the country has scrapped the annual tradition where the premier briefs the press and answers questions. It was one of the only moments of access to China’s political leaders, and now it’s gone. (Reuters $)

2. A deepfake clone of a Ukrainian YouTuber is being used by Chinese people to express pro-Russia sentiments and sell Russian goods. (Voice of America News)

3. Hundreds of North Koreans are forced to work in Chinese seafood factories while enduring frequent beatings and sexual abuse. These factories supply major US retailers like Walmart and ShopRite. (New Yorker $)

4. The US government wants to stop data brokers from selling sensitive data to China and a few other adversaries. (Wall Street Journal $)

5. In tiny New York studios, American TikTok influencers are learning the tricks of livestream e-commerce from their Chinese counterparts. (Rest of World)

6. The US Department of Justice accused a Chinese chipmaker of stealing trade secrets five years ago. The company was just found not guilty in court. (Bloomberg $)

7. The number of patents filed by inventors in China has been growing rapidly—surpassing the US figure for the first time ever. (Axios)

Lost in translation

When a Chinese college graduate named Lu Zhi left her first job, at PDD (the Chinese e-commerce company that owns Temu), after eight months, she didn’t realize the company would ask her to pay back $36,000 as noncompete compensation. As the Chinese publication Caixin reports, Chinese tech companies, particularly PDD, have sparked outrage over how broad their noncompete agreements have become.

The agreements don’t just affect key personnel in critical positions. Almost any employee, no matter how junior or peripheral their role, has to sign one when hired. To enforce the agreements, PDD has even hired private detectives to follow former employees around and film their commutes to their new workplaces. People are questioning whether these companies have gone too far in the name of protecting their trade secrets.

One more thing

The new Dune 2 movie is barely out, and people are already making memes comparing the plot to the real-life geopolitical situation between the US, China, and Taiwan. Is it accurate? I’ll report back after I watch it.

A plan to bring down drug prices could threaten America’s technology boom

Forty years ago, Kendall Square in Cambridge, Massachusetts, was full of deserted warehouses and dying low-tech factories. Today, it is arguably the center of the global biotech industry. 

During my 30 years in MIT’s Technology Licensing Office, I witnessed this transformation firsthand, and I know it was no accident. Much of it was the direct result of the Bayh-Dole Act, a bipartisan law that Congress passed in 1980. 

The reform enabled world-class universities like MIT and Harvard, both within a couple of miles of Kendall Square, to retain the patent and licensing rights on discoveries made by their scientists—even when federal funds paid for the research, as they did in nearly all labs. Those discoveries, in turn, helped a significant number of biotechnology startups throughout the Boston area launch and grow.

Before Bayh-Dole, the government retained those patent and licensing rights. Yet while federal agencies like the National Institutes of Health heavily funded basic scientific research at universities, they were ill equipped to find private-sector companies interested in licensing and developing promising but still nascent discoveries. That’s because, worried about accusations of favoritism, government agencies were willing to grant only nonexclusive licenses to companies to develop patented technologies. 

Few companies were willing to license technology on a nonexclusive basis. Nonexclusive licenses opened up the possibility that a startup might spend many millions of dollars on product development only to have the government relicense the patent to a rival firm.

As a result, many taxpayer-financed discoveries were never turned into real-world products. Before the law, less than 5% of the roughly 28,000 patents held by the federal government had been licensed for development by private firms.

The bipartisan lawmakers behind Bayh-Dole understood that these misaligned incentives were impeding scientific and technological progress—and hampering economic growth and job creation. They changed the rules so that patents no longer automatically went to the federal government. Instead, universities and medical schools could hold on to their patents and manage the licensing themselves.

In response, research institutions invested heavily in offices like the one I ran at MIT, which are devoted to transferring technology from academia to private-sector companies.

Today, universities and nonprofit research institutions transfer thousands of discoveries each year, resulting in innovations in all manner of technical fields. Many thousands of entrepreneurial companies—often founded by the researchers who made the discoveries in question—have licensed patents stemming from federally funded research. This technology transfer system has helped create millions of jobs.

Google’s search algorithm, for instance, was developed by Sergey Brin and Larry Page with the help of federal grants while they were still PhD students at Stanford. They cofounded Google, licensed their patented algorithm from the school’s technology transfer office, and ultimately built one of the world’s most valuable companies.

All told, the law sparked a national innovation renaissance that continues to this day. In 2002, the Economist dubbed it “possibly the most inspired piece of legislation to be enacted in America over the past half-century.” I consider it so vital that after I retired, I joined the advisory council of an organization devoted to celebrating and protecting it. 

But the efficacy of the Bayh-Dole Act is now under serious threat from a draft framework the Biden administration is finalizing after a months-long public comment period that concluded on February 6.

In an attempt to control drug prices in the US, the administration’s proposal relies on an obscure provision of Bayh-Dole that allows the government to “march in” and relicense patents. In other words, it can take the exclusively licensed patent right from one company and grant a license to a competing firm. 

The provision is designed to allow the government to step in if a company fails to commercialize a federally funded discovery and make it available to the public in a reasonable time frame. But the White House is now proposing that the provision be used to control the ever-rising costs of pharmaceuticals by relicensing brand-name drug patents if they are not offered at a “reasonable” price. 

On the surface, this might sound like a good idea—the US has some of the highest drug prices in the world, and many life-saving drugs are unavailable to patients who cannot afford them. But trying to control drug prices through the march-in provision will be largely ineffective. Many drugs are separately protected by other private patents filed by biotech and pharma companies later in the development process, so relicensing just an early-stage patent will do little to help generate generic alternatives. At the same time, this policy could have an enormous chilling effect on the very beginning of the drug development process, when companies license the initial innovative patent from the universities and research institutions.

If the Biden administration finalizes the draft march-in framework as currently written, it will allow the federal government to ignore licensing agreements between universities and private companies whenever it chooses and on the basis of currently unknown and potentially subjective criteria, such as what constitutes a “reasonable” price. This would make developing new technologies far riskier. Large companies would have ample reason to walk away, and investors in startup companies—which are major players in bringing innovative university technology to market—would be equally reluctant to invest in those firms.

Any patent associated with federal dollars would likely become toxic overnight, since even one cent of taxpayer funding would make the resulting consumer product eligible for march-in on the basis of price. 

What’s more, while the draft framework has been billed as a “drug pricing” policy, it makes no distinction between university discoveries in life sciences and those in any other high-tech field. As a result, investment in IP-driven industries from biotech to aerospace to alternative energy would plummet. Technological progress would stall. And the system of technology transfer established by the Bayh-Dole Act would quickly break down.

Unless the administration withdraws its proposal, the United States will return to the days when the most promising federally backed discoveries never left university labs. Far fewer inventions based on advanced research will be patented, and innovation hubs like the one I watched grow will have no chance to take root.

Lita Nelsen joined the Technology Licensing Office of the Massachusetts Institute of Technology in 1986 and was director from 1992 to 2016. She is a member of the advisory council of the Bayh-Dole Coalition, a group of organizations and individuals committed to celebrating and protecting the Bayh-Dole Act, as well as informing policymakers and the public of its benefits.

How open source voting machines could boost trust in US elections

While the vendors pitched their latest voting machines in Concord, New Hampshire, this past August, the election officials in the room gasped. They whispered, “No way.” They nodded their heads and filled out the scorecards in their laps. Interrupting if they had to, they asked every kind of question: How much does the new scanner weigh? Are any of its parts made in China? Does it use the JSON data format?

The answers weren’t trivial. Based in part on these presentations, many would be making a once-in-a-decade decision.

These New Hampshire officials currently use AccuVote machines, which were made by a company that’s now part of Dominion Voting Systems. First introduced in 1989, they run on an operating system no longer supported by Microsoft, and some have suffered extreme malfunctions; in 2022, the same model of AccuVote partially melted during an especially warm summer election in Connecticut.

Many towns in New Hampshire want to replace the AccuVote. But with what? If history is any guide, the new machines would likely have to last for decades — while also being secure enough to satisfy the state’s election skeptics. Outside the event, those skeptics held signs like “Ban Voting Machines.” Though they were relatively small in number that day, they’re part of a nationwide movement to eliminate voting technology and instead hand count every ballot — an option election administrators say is simply not feasible.

Against this backdrop, more than 130 election officials packed into the conference rooms on the second floor of Concord’s Legislative Office Building. Ultimately, they faced a choice between two radically different futures.

The first was to continue with a legacy vendor. Three companies — Dominion, ES&S, and Hart InterCivic — control roughly 90 percent of the U.S. voting technology market. All three are privately held, meaning they’re required to reveal little about their financial workings, and all three are committed to keeping their source code from becoming fully public.

The second future was to gamble on VotingWorks, a nonprofit with only 17 employees and voting machine contracts in just five small counties, all in Mississippi. The nonprofit has taken the opposite approach to the Big Three. Its financial statements are posted on its website, and every line of code powering its machines is published on GitHub, available for anyone to inspect.

“Why in 2023 are we counting votes with any proprietary software at all?”

At the Concord event, a representative for ES&S suggested that this open-source approach could be dangerous. “If the FBI was building a new building, they’re not going to put the blueprints out online,” he said. But VotingWorks co-founder Ben Adida says it’s fundamental to rebuilding trust in voting equipment and combatting the nationwide push to hand count ballots. “An open-source voting system is one where there are no secrets about how this works,” Adida told the audience. “All the source code is public for the world to see, because why in 2023 are we counting votes with any proprietary software at all?”

Others agree. Ten states currently use VotingWorks’ open-source audit software, including Georgia during its hand count audit in 2020. Other groups are exploring open-source voting technology, including Microsoft, which recently piloted voting software in Franklin County, Idaho. Bills requiring or allowing for open-source voting technology have recently been introduced in at least six states; a bill has also been introduced at the federal level to study the issue further. In New Hampshire, the idea has support from election officials, the secretary of state, and even diehard machine skeptics.

VotingWorks is at the forefront of the movement to make elections more transparent. “Although the voting equipment that we’ve been using for the last 20, 30 years is not responsible for this crisis,” Adida said, “it’s also not the equipment that’s going to get us out of this crisis.” But can an idealist nonprofit really unseat industry juggernauts — and restore faith in democracy along the way?


For years, officials have feared that America’s voting machines are vulnerable to attack. During the 2016 election, Russian hackers targeted election systems in all 50 states, according to the Senate Intelligence Committee. The committee found no evidence that any votes were changed, but it did suggest that Russia could be cataloging options “for use at a later date.”

In 2017, the Department of Homeland Security designated election infrastructure as “critical infrastructure,” noting that “bad cyber actors — ranging from nation states, cyber criminals, and hacktivists — are becoming more sophisticated and dangerous.”

Some conservative activists have suggested simply avoiding machines altogether and hand-counting ballots. But doing so is prohibitively slow and expensive, not to mention more error-prone. Last year, for example, one county in Arizona estimated that counting all 105,000 ballots from the 2020 election would require at least 245 people working every day, including holidays, for almost three weeks.
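The scale of that estimate is easier to see as a back-of-envelope calculation. In the sketch below, the ballot, staffing, and duration figures come from the article; the 19-day duration (“almost three weeks”) and the derived per-person rate are our own assumptions and inferences, not county data:

```python
# Back-of-envelope check of the Arizona county's hand-count estimate.
# Ballot and staffing figures come from the article; the 19-day duration
# and the derived per-person rate are assumptions, not county data.
ballots = 105_000
counters = 245
days = 19  # "almost three weeks," working every day including holidays

person_days = counters * days        # total person-days of counting effort
rate = ballots / person_days         # ballots fully counted per person per day

print(f"{person_days} person-days, ~{rate:.0f} ballots per person per day")
```

A rate in the low twenties per person per day may sound implausibly slow, but it likely reflects that each ballot contains many separate contests, each of which must be tallied by hand, typically in teams with cross-checks.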

That leaves election administrators dependent on machines to tally up votes. That August day in Concord, VotingWorks and two of the legacy vendors, Dominion and ES&S, were offering the same kind of product: an optical scanner, which is essentially just a counting machine. After a New Hampshire voter fills in a paper ballot by hand, it’s most likely inserted into an optical scanner, which interprets and tallies the marks. This process is how roughly two-thirds of the country votes. A quarter of voters mark their ballots using machines (aptly named “ballot-marking devices”), which are then fed into an optical scanner as well. About 5 percent use direct recording electronic systems, or DREs, which allow votes to be cast and stored directly on the machine. Only 0.2 percent of voters have their ballots counted by hand.

Workers in Hinsdale, New Hampshire, count each of the 1,799 ballots cast after the polls closed on election day in 2016. Hand counts of ballots are prohibitively slow and expensive, and less accurate than machines.
KRISTOPHER RADDER/THE BRATTLEBORO REFORMER VIA AP

Since the 2020 election, the companies that make these machines have been the subject of intense scrutiny from people who deny the election results. Those companies have also come under fire for what critics on both sides of the political aisle describe as their secrecy, lack of innovation, and obstructionist tendencies.

None of the three companies publicly disclose basic information, including their investors and their financial health. It can also be difficult to even get the prices of their machines. Often, jurisdictions come to depend on these firms. Two-thirds of the industry’s revenue comes from support, maintenance, and services for the machines.

Legacy vendors also fight to maintain their market share. In 2017, Hart InterCivic sued Texas to prevent counties from replacing its machines, which don’t produce a paper trail, with machines that do. “For a vendor to sue to prevent auditable paper records from being used in voting shows that market dynamics can be starkly misaligned with the public interest,” concluded a report by researchers at the University of Pennsylvania in collaboration with Verified Voting, a nonprofit that, according to its mission statement, works to promote “the responsible use of technology in elections.”

The companies tell a different story, pointing out that they do disclose their code to certain entities, including third-party firms and independent labs that work on behalf of the federal government to test for vulnerabilities in the software that could be exploited by hackers. In a statement to Undark, ES&S also said it discloses certain financial information to jurisdictions “when requested” and the company shared approximate prices for its voting machines, although it noted that final pricing depends on “individual customer requirements.”

In Concord, officials from some small towns where ballots are still hand-counted were considering switching to machines. Others were considering whether to stick with Dominion and LHS — the New Hampshire-based company that services the machines — or switch to VotingWorks. It would likely be one of the most expensive, consequential decisions of their careers.


Throughout his pitch, the representative for LHS emphasized the continuity between the old AccuVote machines and the new Dominion scanner. Wearing a blazer and a dress shirt unbuttoned at the collar, Jeff Silvestro knew the crowd well. LHS is the only authorized service provider for the entire state’s AccuVote machines, and it’s responsible for offering training for the towns’ staff, delivering memory cards for each election, and weathering a blizzard to come to their poll site and service a broken scanner.

Don’t worry, Silvestro reassured the crowd: The voter experience is the same. “Similarities,” Silvestro told the crowd. “That’s what we’re looking for.”

Just down the hall from Silvestro, Ben Adida laid out a different vision of what voting technology could be. He opened by addressing the “elephant in the room”: the substantial number of people who distrust elections. VotingWorks could address that distrust, he said, by offering three things: security, simplicity, and transparency.

Adida first started working on election technology in 1997, as a computer science undergraduate at MIT, where he built a voting system for student council elections. After earning a Ph.D. from MIT in 2006, with a specialty in cryptography and information security, he did a few more years of election work as a post-doc at Harvard University and then transitioned to data security and privacy for medical data. Later, he served as director of engineering at Mozilla and Square and vice president of engineering at Clever, a digital learning platform for K-12 schools.

In 2016, Adida considered leaving Clever to do election work again, and he followed the progress of STAR-Vote, an open-source election system proposed by Travis County, Texas, that ultimately didn’t move forward. He decided to stay put, but he couldn’t shake the thought of voting technology. Adida knew it was rare for someone to have his background in both product design and election security. “This is kind of a calling,” he said.

Ben Adida
Ben Adida, who holds a Ph.D. in computer science, with a specialty in cryptography and information security, is the co-founder of VotingWorks, a nonprofit that builds open-source election technology.
a VotingWorks display of at the National Association of Secretaries of State in 2022 showing a voting screen built into a tamper-evident ballot box
The voting machine built by VotingWorks is made from off-the-shelf electronics and open-source software that the company posted on GitHub.

Adida launched VotingWorks in December 2018, with some funding from individuals and Y Combinator, a renowned startup accelerator. The nonprofit is now unique among voting technology vendors: The group has disclosed everything, from its donors to the prices of its machines. VotingWorks machines are made from off-the-shelf electronics and, in the long run, according to Adida, are cheaper than those of its competitors.

The day of the Concord event, Adida wore a T-shirt tucked into his khakis, and sported a thick brown mustache. When he started discussing the specs of his machine, he spoke quickly, bounding around the room and even tripping on an errant wire. At one point, he showed off his machine’s end-of-night election report, printed on an 8 ½ by 11 piece of paper, a far cry from the long strips of paper that are currently used. You don’t have to have “these long CVS receipts,” he said. The room laughed.


Adida and his team are staking out a position in a debate that stretches back to the early days of computing: Is the route to computer security through secrecy, or through total transparency?

Some of the most widely used software today is open-source software, or OSS, meaning anyone can read, modify, and reuse the code. OSS has powered popular products like the operating system Linux and the internet browser Firefox from Mozilla. It’s also used extensively by the Department of Defense.

Proponents of OSS offer three main arguments for why it’s more secure than a locked-box model. First, publicly available source code can be scrutinized by anyone, not just a relatively small group of engineers within a company, increasing the chances of catching flaws. Second, because coders know their work can be scrutinized by anyone, they’re incentivized to produce better code and to explain their approach. “You can go and look at exactly why it’s being done this way, who wrote it, who approved it, and all of that,” said Adida.

Third, OSS proponents say that trying to hide source code will ultimately fail, because attackers can acquire it from the supplier or reverse engineer it themselves. Hackers don’t need perfect source code, just enough to analyze for patterns that may suggest a vulnerability. Breaking is easier than building.

Already, there are indications that bad actors have acquired proprietary voting machine code. In 2021, an election official in Colorado allegedly allowed a conspiracy theorist to access county machines, copy sensitive data, and photograph system passwords — the kind of insider attack that, experts warn, could compromise the security of the coming presidential election.


Not everyone is convinced that open-source code alone is enough to ensure a secure voting machine. “You could have had open-source software, and you might not have found all of the problems or errors or issues,” said Pamela Smith, the president of Verified Voting, citing the numerous lines of code that would need to be examined in a limited amount of time.

Adida doesn’t expect anyone to go through the hundreds of thousands of lines of code on the VotingWorks GitHub. But if they’re curious about a specific aspect, like how the scanner handles paper that’s askew, it’s much more manageable: only a few hundred lines of code. Already, a small number of coders from outside the company have made suggestions on how to improve the software, some of which have been accepted. Then, to fully guard against vulnerabilities, the company relies on its own procedures, third-party reviews, and certification testing at the federal level, said Adida.

two poll workers holding long scrolls of receipt paper which has puddled onto the ground
Miami-Dade election workers check voting machines for accuracy by reviewing scrolls of paper that Adida likened to “long CVS receipts.”
JOE RAEDLE/GETTY IMAGES

In addition to security, any new machine also needs to be easy for poll workers to operate — and able to perform reliably under the high-stakes conditions of an election day. In interviews, election officials who use the technology in Mississippi raved about its ease of use.

Some also love how responsive the company is to feedback. “They come to us and say, ‘Tell us in the field what’s going on,’” said Sara Dionne, chairman of the election commission in Warren County, Mississippi, which started using VotingWorks in 2020. “We certainly never had that kind of conversation with ES&S ever.”


To expand VotingWorks’ reach, though, Adida must pitch it in places like New Hampshire, where election officials are navigating tight budgets, fallout from the 2020 election, and misperceptions about voting technology.

New Hampshire is a swing state, and, since the 2020 election, it has had a small but vocal faction of election deniers. At the same time, Republican Secretary of State David Scanlan has done little to marshal resources for new machines. Last year, Scanlan opposed a bill that would have allowed New Hampshire towns and cities to apply for funding from a $12 million federal grant for new voting machines; Republicans in the legislature killed the bill. (Asked what cash-strapped jurisdictions should do if they can’t afford new scanners, Scanlan told Undark they could cannibalize parts from old AccuVote machines.)

Some critics also say Scanlan has done little to dispel some conservative activists’ beliefs that New Hampshire can dispense with machines altogether. At the Concord event, a woman told Undark that Manchester, a city with 68,000 registered voters, could hand count all of its ballots in just four hours. Speaking with Undark, Scanlan acknowledged that this estimate wasn’t correct, and that hand counting is less accurate than machines. However, his office hasn’t communicated this message to the public in any formal way. “I definitely think that he is complicit in allowing [misinformation] to continue to flourish,” said Liz Wester, co-founder of 603 Forward, which encourages civic participation in the state.

The VotingWorks model won over some machine skeptics at the Concord event, like Tim Cahill, a Republican in the New Hampshire House of Representatives. Cahill said he’d prefer that all ballots in the state be hand counted but would choose VotingWorks over the other vendors. “Why would you trust something you can’t put your eyes on?” he told Undark. “We have a lot of smart people in this country and people want open source, they want transparency.”

people in an office setting surrounded by stacks of ballots
Poll workers use the AccuVote machines to scan absentee ballots in Fairbanks, Alaska.
ERIC ENGMAN/GETTY IMAGES

Open source has found fans in other states, too. Kevin Cavanaugh is a county supervisor in Pinal County, Arizona’s third-most-populous county. He says he started to doubt voting machines after watching a documentary, funded by the election denier Mike Lindell, that claimed the devices have unauthorized software that could change vote totals without detection. In November 2022, Cavanaugh introduced a motion to increase the number of ballots counted by hand in the county, and he told Undark he’d like a full hand count. “But, if we’re using machines,” he added, “then I think it’s important that the source code is available for inspection to experts.”

Back in Concord, Adida appeared to have persuaded the public at large — or at least those invested enough to attend the event. Among the 201 attendees who filled out a scorecard, VotingWorks was the most popular first choice. But among election officials, the clear preference was Dominion. Some officials were skeptical that open-source technology would mean much to people in their towns. “Your average voter doesn’t care about open source,” said one town clerk.

Still, five towns in New Hampshire have already purchased VotingWorks machines, some of which will be used in upcoming March local elections.


Two main factors determine whether someone has faith in an election, said Charles Stewart III, a political scientist at MIT who has written extensively about trust in elections. The first, which affects roughly 5 to 10 percent of voters, is a negative personal experience at the polls, such as long lines, rude poll workers, or malfunctioning machines, any of which can make a voter less willing to trust an election’s outcome.

The second, more influential factor affecting trust is whether a voter’s candidate won. That makes it supremely difficult to restore confidence, said Tammy Patrick, a former election official in Maricopa County and the current CEO for programs at the National Association of Election Officials. “The answer on election administration — it’s complex, it’s wonky, it’s not pithy,” she said in a recent press conference. “It’s hard to come back to those emotional pleas with what the reality is.”

Adida agrees with Stewart that VotingWorks alone isn’t going to eliminate election denialism — nor, he said, is that his goal. Instead, he hopes to reach the people who are susceptible to misinformation but haven’t necessarily made up their minds yet, a group he describes as the “middle 80 percent.” Even if they never visit the company’s GitHub, he says, “the fact that we’re putting it all out in the open builds trust.” And when someone says something patently false about the company, Adida can at least ask them to identify the incriminating lines of source code.

Are those two things — rhetorical power and a commitment to transparency — really a match for the disinformation machinery pushing lies across the country? Adida mentioned the myths about legacy vendors’ machines being mis-programmed or incorrectly counting ballots during the 2020 election. “What was the counterpoint to that?” he asked. “It was, ‘Trust us. These machines have been tested.’ I want the counterpoint to be, ‘Hey folks, all the source code is open.’”


Spenser Mestel is a poll worker and independent journalist. His bylines include The New York Times, The Atlantic, The Guardian, and The Intercept.

This article was originally published on Undark. Read the original article.