Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundation models, or LFMs, like ChatGPT, Claude, and Gemini, require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories will lead to discoveries and innovations that serve as the foundation for future advances. While tools enabled by LFMs are in everybody’s pocket, there are many questions that need to be answered about them, since they remain a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why.

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should add large-scale GPU clusters to improve and leverage the supercomputer infrastructure the US government already funds. Academic researchers should be able to partner with the US national labs on grand challenges in AI research.

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at a state level. This model should be emulated across the country. 

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet.

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).

A brief, weird history of brainwashing

On an early spring day in 1959, Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.” It was the kind of opportunity he relished. A war correspondent who had spent considerable time in Asia, Hunter had achieved brief media stardom in 1951 after his book Brain-Washing in Red China introduced a new concept to the American public: a supposedly scientific system for changing people’s minds, even making them love things they once hated. 

But Hunter wasn’t just a reporter, objectively chronicling conditions in China. As he told the assembled senators, he was also an anticommunist activist who served as a propagandist for the OSS, or Office of Strategic Services—something that was considered normal and patriotic at the time. His reporting blurred the line between fact and political mythology.

Chinese reformists like Liang Qichao used the term xinao—a play on an older word, xixin, or “washing the heart”—in an attempt to bring ideas from Western science into Chinese philosophy.

When a senator asked about Hunter’s work for the OSS, the operative boasted that he was the first to “discover the technique of mind-attack” in mainland China, the first to use the word “brainwashing” in writing in any language, and “the first, except for the Chinese, to use the word in speech in any language.” 

None of this was true. Other operatives associated with the OSS had used the word in reports before Hunter published articles about it. More important, as the University of Hong Kong legal scholar Ryan Mitchell has pointed out, the Chinese word Hunter used at the hearing—xinao (洗脑), translated as “wash brain”—has a long history going back to scientifically minded Chinese philosophers of the late 19th century, who used it to mean something more akin to enlightenment.

Yet Hunter’s sensational tales still became an important part of the disinformation and pseudoscience that fueled a “mind-control race” during the Cold War, much like the space race. Inspired by new studies on brain function, the US military and intelligence communities prepared themselves for a psychic war with the Soviet Union and China by spending millions of dollars on research into manipulating the human brain. But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day.

Coercive persuasion and pseudoscience

Ironically, “brainwashing” was not a widely used term among communists in China. The word xinao, Mitchell told me in an email, is actually a play on an older word, xixin, or washing the heart, which alludes to a Confucian and Buddhist ideal of self-awareness. In the late 1800s, Chinese reformists such as Liang Qichao began using xinao—replacing the character for “heart” with “brain”—in part because they were trying to modernize Chinese philosophy. “They were eager to receive and internalize as much as they could of Western science in general, and discourse about the brain as the seat of consciousness was just one aspect of that set of imported ideas,” Mitchell said. 

For Liang and his circle, brainwashing wasn’t some kind of mind-wiping process. “It was a sort of notion of epistemic virtue,” Mitchell said, “or a personal duty to make oneself modern in order to behave properly in the modern world.”

Meanwhile, scientists outside China were investigating “brainwashing” in the sense we usually think of, with experiments into mind clearing and reprogramming. Some of the earliest research into the possibility grew out of work begun in the 1890s by Ivan Pavlov, the Russian physiologist who had famously conditioned dogs to drool at the sound of a bell and who later worked on Soviet-funded projects to investigate how trauma could change animal behavior. He found that even the most well-conditioned dogs would forget their training after intensely stressful experiences such as nearly drowning, especially when those were combined with sleep deprivation and isolation. It seemed that Pavlov had hit upon a quick way to wipe animals’ memories. Scientists on both sides of the Iron Curtain subsequently wondered whether it might work on humans. And once memories were wiped, they wondered, could something else be installed in their place?

During the 1949 show trial of the Hungarian anticommunist József Mindszenty, American officials worried that the Russians might have found the answer. A Catholic cardinal, Mindszenty had protested several government policies of the newly formed, Soviet-backed Hungarian People’s Republic. He was arrested and tortured, and he eventually made a series of outlandish confessions at trial: that he had conspired to steal the Hungarian crown jewels, start World War III, and make himself ruler of the world. In his book Dark Persuasion, Joel Dimsdale, a psychiatry professor at the University of California, San Diego, argues that the US intelligence community saw these implausible claims as confirmation that the Soviets had made some kind of scientific breakthrough that allowed them to control the human mind through coercive persuasion.

This question became more urgent when, in 1953, a handful of American POWs in China and Korea switched sides, and a Marine named Frank Schwable was quoted on Chinese radio validating the communist claim that the US was testing germ warfare in Asia. By this time, Hunter had already published a book about brainwashing in China, so the Western public quickly gravitated toward his explanation that the prisoners had been brainwashed, just like Mindszenty. People were terrified, and this was a reassuring explanation for how nice American GIs could go Red. 

Edward Hunter, who claimed to have coined the term “brainwashing,” wrote a book that fueled paranoia about a “mind-control race” during the Cold War.
A pamphlet published in 1955, purporting to be a translation of a work by the Russian secret police, claimed that the Soviets used drugs and psychology to control the masses and that Dianetics, a pseudoscience invented by Scientology founder L. Ron Hubbard, could prevent brainwashing.

Over the following years, in the wake of the Korean War, “brainwashing” grew into a catchall explanation for any kind of radical or nonconformist behavior in the United States. Social scientists and politicians alike latched onto the idea. The Dutch psychologist Joost Meerloo warned that television was a brainwashing machine, for example, and the anticommunist educator J. Merrill Root claimed that high schools brainwashed kids into being weak-willed and vulnerable to communist influence. Meanwhile, popular movies like 1962’s The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins. 

For the military and intelligence communities, mind control hovered between myth and science. Nowhere is this more obvious than in the peculiar case of an anonymously published 1955 pamphlet called Brain-Washing: A Synthesis of the Russian Textbook on Psychopolitics, which purported to be a translation of work by the Soviet secret-police chief Lavrentiy Beria. Full of wild claims about how the Soviets used psychology and drugs to control the masses, the pamphlet has a curious section devoted to the ways that Dianetics—a pseudoscience invented by the founder of Scientology, L. Ron Hubbard—could prevent brainwashing. As a result, it is widely believed that Hubbard himself wrote the pamphlet as black propaganda, or propaganda that masquerades as something produced by a foreign adversary.

The 1962 film The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins.

Still, US officials apparently took it seriously. David Seed, a cultural studies scholar at the University of Liverpool, plumbed the National Security Council papers at the Dwight D. Eisenhower Library, where he discovered that the NSC’s Operations Coordinating Board had analyzed the pamphlet as part of an investigation into enemy capabilities. A member of the board wrote that it might be “fake” but contained so much accurate information that it was clearly written by “experts.” When it came to brainwashing, government operatives made almost no distinction between black propaganda and so-called expertise.

This gobbledygook may also have struck the NSC investigator as legitimate because Hubbard borrowed lingo from the same sources as many scientists of the era. He chose the name Dianetics, for instance, specifically to evoke the mathematician Norbert Wiener’s idea of cybernetics, an influential theory about information control systems that heavily informed both psychology and the burgeoning field of artificial intelligence. Cybernetics suggested that the brain functioned like a machine, with inputs and outputs, feedback and control. And if machines could be optimized, then why not brains?

An excuse for government abuse 

The fantasy of brainwashing was always one of optimization. Military experts knew that adversaries could be broken with torture, but it took months and was often a violent, messy process. A fast, scientifically informed interrogation method would save time and could potentially be deployed on a mass scale. In 1953, that dream led the CIA to invest millions of dollars in MK-Ultra, a project that injected cash into university and research programs devoted to memory wiping, mind control, and “truth serum” drugs. Worried that their rivals in the Soviet Union and China were controlling people’s minds to spread communism throughout the world, the intelligence community was willing to try almost anything to fight back. No operation was too weird. 

One of MK-Ultra’s most notorious projects was “Operation Midnight Climax” in San Francisco, where sex workers lured random American men to a safe house and dosed them with LSD while CIA agents covertly observed their behavior. At McGill University in Montreal, the CIA funded the work of the psychiatrist Donald Cameron, who used a combination of drugs and electroconvulsive therapy on patients with mental illness, attempting to erase and “repattern” their minds. Though many of his victims did wind up suffering from amnesia for years, Cameron never successfully injected new thoughts or memories. Marcia Holmes, a science historian who researched brainwashing for the Hidden Persuaders project at Birkbeck, University of London, told me that the CIA used Cameron’s data to develop new kinds of torture, which the US adopted as “enhanced interrogation” techniques in the wake of 9/11. “You could put a scientific spin on it and claim that’s why it worked,” she said. “But it always boiled down to medieval tactics that people knew from experience worked.”

Believed to be a victim of communist mind control, the American POW Frank Schwable claimed on Chinese radio in 1953 that the US was testing germ warfare in Asia.
After being arrested and tortured, the Catholic cardinal and anticommunist József Mindszenty made outlandish confessions at trial, like that he had conspired to steal the Hungarian crown jewels.

MK-Ultra remained secret until the mid-1970s, when the US Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, commonly known as the Church Committee after its chair, Senator Frank Church, opened hearings into the long-running project. The shocking revelations that the CIA was drugging American citizens and paying for the torment of vulnerable Canadians changed the public’s understanding of mind control. “Brainwashing” came to seem less like a legitimate threat from overseas enemies and more like a ruse or excuse for almost any kind of bad behavior. When Patty Hearst, granddaughter of the newspaper publisher William Randolph Hearst, was put on trial in 1976 for robbing a bank after being kidnapped by the Symbionese Liberation Army, an American militant organization, the judge refused to believe experts who testified that she had been tortured and brainwashed by her captors. She was convicted and spent 22 months in jail. This marked the end of the nation’s infatuation with brainwashing, and experts began to debunk the idea that there was a scientific basis for mind control.

In publishing heiress Patty Hearst’s 1976 trial for bank robbery, the judge refused to believe that she had been brainwashed as a victim of kidnapping.

Still, the revelations about MK-Ultra led to new cultural myths. Communists were no longer the baddies—instead, people feared that the US government was trying to experiment on its citizens. Soon after the Church Committee hearings were over, the media was gripped by a crime story of epic proportions: nearly two dozen Black children had been murdered in Atlanta, and the police had no leads other than a vague idea that maybe it could be a serial killer. Wayne Williams, a Black man who was eventually convicted of two of the murders, claimed at various points that he had been trained by the CIA. This led to popular conspiracy theories that MK-Ultra had been experimenting on Black people in Atlanta.

Colin Dickey, author of Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy, told me these conspiracy theories became “a way of making sense of an otherwise mystifying and terrifying reality, [which is that America is] a country where Black people are so disenfranchised that their murders aren’t noticed.” Dickey added that this MK-Ultra conspiracy theory “gave a shape to systemic racism,” placing blame for the Atlanta child murders on the US government. In the process, it also suggested that Black people had been brainwashed to kill each other. 

No evidence ever surfaced that MK-Ultra was behind the children’s deaths, but the idea of brainwashing continues to be a powerful metaphor for the effects of systemic racism. It haunts contemporary Black horror films like Get Out, where white people take over Black people’s bodies through a fantastical version of hypnosis. And it provides the analytical substrate for the scathing indictment of racist marketing in the book Brainwashed: Challenging the Myth of Black Inferiority, by the Black advertising executive Tom Burrell. He argues that advertising has systematically pushed stereotypes of Black people as second-class citizens, instilling a “slave mindset” in Black audiences.

A social and political phenomenon

Today, even as the idea of brainwashing is often dismissed as pseudoscience, Americans are still spellbound by the idea that people we disagree with have been psychologically captured by our enemies. Right-wing pundits and politicians often attribute discussions of racism to infections by a “woke mind virus”—an idea that is a direct descendant of Cold War panics over communist brainwashing. Meanwhile, contemporary psychology researchers like UCSD’s Dimsdale fear that social media is now a vector for coercive persuasion, just as Meerloo worried about television’s mind-control powers in the 1950s. 

Cutting-edge technology is also altering how we think about mind control. In a 2017 open letter published in Nature, an international group of researchers and ethicists warned that neurotechnologies like brain-computer interfaces “mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions.” It sounds like MK-Ultra’s wish list. Hoping to head off a neuro-dystopia, the group outlined several key ways that companies and universities could guard against coercive uses of this technology in the future. They suggested that we need laws to prevent companies from spying on people’s private thoughts, for example, as well as regulations that bar anyone from using brain implants to change people’s personalities or make them more neurotypical. 

Many neuroscientists feel that these concerns are overblown; one of them, the University of Maryland cognitive scientist R. Douglas Fields, summed up the naysayers’ position with a column in Quanta magazine arguing that the brain is more plastic than we realize, and that neurotech mind control will never be as simple as throwing a switch. Kathleen Taylor, another neuroscientist who studies brainwashing, takes a more measured view; in her book Brainwashing: The Science of Thought Control, she acknowledges that neurotech and drugs could change people’s thought processes but ultimately concludes that “brainwashing is above all a social and political phenomenon.” 

Sidney Gottlieb was an American chemist and spymaster who in the 1950s headed the Central Intelligence Agency’s mind-control program known as Project MK-Ultra.

Perhaps that means the anonymous National Security Council examiner was right to call Hubbard’s black propaganda the work of an “expert.” If brainwashing is politics, then disinformation might be as effective (or ineffective) as a brain implant in changing someone’s mind. Still, scholars have learned that political efforts at mind control do not have predictable results. Online disinformation leads to what Juliette Kayyem, a former assistant secretary of the Department of Homeland Security, identifies as stochastic terrorism, or acts of violence that cannot be predicted precisely but can be analyzed statistically. She writes that stochastic terrorism is inspired by online rhetoric that demonizes groups of people, but it’s hard to know which people consuming that rhetoric will actually become terrorists, and which of them will just rage at their computer screens—the result of coercive persuasion that works on some targets and misses others. 

American operatives may never have found the perfect system for brainwashing foreign adversaries or unsuspecting citizens, but the US managed to win the mind-control wars in one small way. Mitchell, the legal scholar at the University of Hong Kong, told me that the American definition of brainwashing, or xinao, is now the dominant way the word is used in modern Chinese speech. “People refer to aggressive advertising campaigns or earworm pop songs as having a xinao effect,” he said. The Chinese government, Mitchell added, uses the term exactly the way the US military did back in the 1950s. State media, for example, “described many Hong Kong protesters in 2019 as having undergone xinao by the West.”

Annalee Newitz is the author of Stories Are Weapons: Psychological Warfare and the American Mind, coming in June 2024.

Africa’s push to regulate AI starts now        

In the Zanzibar archipelago of Tanzania, rural farmers are using an AI-assisted app called Nuru that works in their native language of Swahili to detect a devastating cassava disease before it spreads. In South Africa, computer scientists have built machine learning models to analyze the impact of racial segregation in housing. And in Nairobi, Kenya, AI classifies images from thousands of surveillance cameras perched on lampposts in the bustling city’s center. 

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.

Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology. But debates on when AI regulation is warranted and concerns about stifling innovation could pose a roadblock, while a lack of AI infrastructure could hold back the technology’s adoption.  

“We’re seeing a growth of AI in the continent;  it’s really important there be set rules in place to govern these technologies,” says Chinasa T. Okolo, a fellow in the Center for Technology Innovation at Brookings, whose research focuses on AI governance and policy development in Africa.

Some African countries have already begun to formulate their own legal and policy frameworks for AI. Seven have developed national AI policies and strategies, which are currently at different stages of implementation. 

On February 29, the African Union Development Agency published a policy draft that lays out a blueprint of AI regulations for African nations. The draft includes recommendations for industry-specific codes and practices, standards and certification bodies to assess and benchmark AI systems, regulatory sandboxes for safe testing of AI, and the establishment of national AI councils to oversee and monitor responsible deployment of AI. 

The heads of African governments are expected to eventually endorse the continental AI strategy, but not until February 2025, when they meet next at the AU’s annual summit in Addis Ababa, Ethiopia. Countries with no existing AI policies or regulations would then use this framework to develop their own national strategies, while those that already have them will be encouraged to review and align their policies with the AU’s.

Elsewhere, major AI laws and policies are also taking shape. This week, the European Union passed the AI Act, set to become the world’s first comprehensive AI law. In October, the United States issued an executive order on AI. And the Chinese government is eyeing a sweeping AI law similar to the EU’s, while also setting rules that target specific AI products as they’re developed. 

If African countries don’t develop their own regulatory frameworks that protect citizens from the technology’s misuse, some experts worry that Africans will face social harms, including bias that could exacerbate inequalities. And if these countries don’t also find a way to harness AI’s benefits, others fear these economies could be left behind. 

“We want to be standard makers”

Some African researchers think it’s too early to be thinking about AI regulation. The industry is still nascent there due to the high cost of building data infrastructure, limited internet access, a lack of funding, and a dearth of powerful computers needed to train AI models. A lack of access to quality training data is also a problem. African data is largely concentrated in the hands of companies outside of Africa.

In February, just before the AU’s AI policy draft came out, Shikoh Gitau, a computer scientist who started the Nairobi-based AI research lab Qubit Hub, published a paper arguing that Africa should prioritize the development of an AI industry before trying to regulate the technology. 

“If we start by regulating, we’re not going to figure out the innovations and opportunities that exist for Africa,” says David Lemayian, a software engineer and one of the paper’s co-authors.  

Okolo, who consulted on the AU’s draft AI policy, disagrees. Africa should be proactive in developing regulations, she says. She suggests African countries reform existing laws, such as policies on data privacy and digital governance, to address AI.

But Gitau is concerned that a hasty approach to regulating AI could hinder adoption of the technology. And she says it’s critical to build homegrown AI with applications tailored for Africans to harness the power of AI to improve economic growth. 

“Before we put regulations [in place], we need to do the hard work of understanding the full spectrum of the technology and invest in building the African AI ecosystem,” she says.

More than 50 countries and the EU have AI strategies in place, and more than 700 AI policy initiatives have been implemented since 2017, according to the Organisation for Economic Co-operation and Development’s AI Policy Observatory. But only five of those initiatives are from Africa and none of the OECD’s 38 member countries are African.

Africa’s voices and perspectives have largely been absent from global discussions on AI governance and regulation, says Melody Musoni, a policy and digital governance expert at ECDPM, an independent policy think tank in Brussels.

“We must contribute our perspectives and own our regulatory frameworks,” says Musoni. “We want to be standard makers, not standard takers.” 

Nyalleng Moorosi, a specialist in ethics and fairness in machine learning who is based in Hlotse, Lesotho, and works at the Distributed AI Research Institute, says that some African countries are already seeing labor exploitation by AI companies. This includes poor wages and a lack of psychological support for data labelers, who are largely from low-income countries but working for big tech companies. She argues regulation is needed to prevent that, and to protect communities against misuse by both large corporations and authoritarian governments.

In Libya, lethal autonomous weapons systems have already been used in fighting, and in Zimbabwe, a controversial, military-driven national facial-recognition scheme has raised concerns over the technology’s alleged use as a surveillance tool by the government. The draft AU policy doesn’t explicitly address the use of AI by African governments for national security purposes, but it acknowledges that AI could pose perilous risks.

Barbara Glover, program officer for an African Union group that works on policies for emerging technologies, points out that the policy draft recommends that African countries invest in digital and data infrastructure, and collaborate with the private sector to build investment funds to support AI startups and innovation hubs on the continent. 

Unlike the EU, the AU lacks the power to enforce sweeping policies and laws across its member states. Even if the draft AI strategy wins endorsement at the AU’s assembly next February, African nations must then implement the continental strategy through national AI policies and laws.

Meanwhile, tools powered by machine learning will continue to be deployed, raising ethical questions and regulatory needs and posing a challenge for policymakers across the continent. 

Moorosi says Africa must develop a model for local AI regulation and governance that balances localized risks and rewards. “If it works with people and works for people, then it has to be regulated,” she says.

Chinese EVs have entered center stage in US-China tensions

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

So far, electric vehicles have mostly been discussed in the US through a scientific, economic, or environmental lens. But all of a sudden, they have become highly political. 

Last Thursday, the Biden administration announced it would investigate the security risks posed by Chinese-made smart cars, which could “collect sensitive data about our citizens and our infrastructure and send this data back to the People’s Republic of China,” the White House statement claimed.

While many other technologies from China have been scrutinized because of security concerns, EVs have largely avoided that sort of attention until now. After all, they represent a technology that will greatly help the world transition to clean and renewable energy, and people have greeted its rapid growth in China with praise.

But US-China relations have been at a low point since the Trump years and the pandemic, and it seems like only a matter of time before any trade or interaction between the two countries falls under security scrutiny. Now it’s EVs’ turn.

The White House has made clear that there are two motivations behind the investigation: the economy and security.

Even though the statement didn’t explicitly mention EVs, it’s undeniable that they are the only reason Chinese automakers have now become serious challengers to their American peers. Chinese companies like BYD make quality EVs at affordable prices, making them increasingly competitive in international markets. A recent report by the Alliance for American Manufacturing, an industry group, even describes EV competition as “China’s existential threat to America’s auto industry.”

“The issue of Chinese EV imports really hits on so many major political factors all at the same time,” says Kyle Chan, a sociology researcher at Princeton University who studies industrial policies and China. “Not just the auto plants in swing states like Michigan and Ohio, but the broader auto manufacturing sector spread over many important states.”

If the US auto industry fails to remain competitive, it will threaten the job security of millions of Americans, and countless other parts of the US economy will be affected. So it’s no surprise Chinese EVs are seen as a major economic threat that needs to be addressed. 

In fact, it’s one of the few issues everyone seems to agree on in this election cycle. Before the Biden investigation, Trump drew people’s attention to Chinese EVs during campaign speeches, vowing to slap a 60% tariff on Chinese imported goods. Josh Hawley, a Republican senator and a longtime China hawk, proposed a bill last Tuesday for a whopping 125% tariff on Chinese cars, including Chinese-branded cars made in other countries like Mexico.

But the new action taken by the Biden administration introduces another factor to the discussion: security threats.

Basically, the argument here is that Chinese cars—especially the newer ones with smart features that collect information from the environment or connect to telecom and satellite networks—could be used to steal information and harm US national interests.

To many experts, this argument is much less grounded in reality. When TikTok and Huawei were subject to similar concerns, it was because their products were widely used in the US. But the majority of Chinese-made cars are driven inside China. There are barely any Chinese cars being sold in the US today, let alone the latest models. That makes the White House’s position look slightly bizarre.

Lei Xing, an auto analyst and observer of the EV industry, has very strong opinions about the security accusations in the Biden administration’s announcement. “It is full of subjective and inaccurate statements trying to paint a picture of threat and security risk that is much greater than it actually is, and is obviously aimed at gaining voter favor as the presidential election race heats up,” Xing tells me.

Nonetheless, fears over data security are shared across the political spectrum in the US. “There has been almost an emerging consensus in Washington, across party lines, that is much more concerned about Chinese data collection through potential technology channels,” Chan says. 

This lens has now been used to question almost any technology product with Chinese connections: whether it’s Chinese cars, Chinese e-commerce apps like Shein and Temu, social media platforms like TikTok and WeChat, or smart home gadgets, the sentiment about data security remains the same.

Having watched from afar as these other technologies got caught in the geopolitical crossfire, Chinese EV companies were mostly prepared for what was announced last week.

“I think the Chinese EV firms have already baked this into their calculations,” Chan says. “As they’ve been ramping up more joint ventures and partnerships and entering other markets of the world, I’ve noticed a very clear reluctance to put that much investment into the US market.”

Recently, BYD Americas’ CEO said in an interview that the company’s planned factory in Mexico will serve the domestic market rather than export to the US, and Xing has learned that NIO, another Chinese car company, removed the US from its initial plan of entering 25 markets by 2025. These are all signs that Chinese EV companies will shy away from the US market for a while, at least until the political animosity goes away. Being unable to sell in the world’s second-largest auto market is obviously not good news, but they have a lot of potential customers in Europe, Latin America, and Southeast Asia.

“[The Chinese auto industry] for now will remain in a ‘watch and study’ mode and strategize accordingly. Mexico will be an important market and a critical production hub for the Americas region whether [the industry] eventually enters America or not,” says Xing.

I had been counting down the days until we could drive Chinese EVs in the US and see how they compete with American cars on their home turf. I guess I’ll be in for a very long wait.

Do you think this move will help or harm US domestic automakers in the long run? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. China started its annual parliamentary meeting today. It’s the highest-level political meeting in China, and it’s where economic plans and other important policy signals are often released. So watch this space. (NBC News)

  • For the first time in 30 years, the country has scrapped the annual tradition where the premier briefs the press and answers questions. It was one of the only moments of access to China’s political leaders, and now it’s gone. (Reuters $)

2. A deepfake clone of a Ukrainian YouTuber is being used by Chinese people to express pro-Russia sentiments and sell Russian goods. (Voice of America News)

3. Hundreds of North Koreans are forced to work in Chinese seafood factories while enduring frequent beatings and sexual abuse. These factories supply major US retailers like Walmart and ShopRite. (New Yorker $)

4. The US government wants to stop data brokers from selling sensitive data to China and a few other adversaries. (Wall Street Journal $)

5. In tiny New York studios, American TikTok influencers are learning the tricks of livestream e-commerce from their Chinese counterparts. (Rest of World)

6. The US Department of Justice accused a Chinese chipmaker of stealing trade secrets five years ago. The company was just found not guilty in court. (Bloomberg $)

7. The number of patents filed by inventors in China has been growing rapidly—surpassing the US figure for the first time ever. (Axios)

Lost in translation

When a Chinese college graduate named Lu Zhi left her first job at PDD (the Chinese e-commerce company that owns Temu) after eight months, she didn’t realize the company would ask her to pay back $36,000 as noncompete compensation. As the Chinese publication Caixin reports, Chinese tech companies, particularly PDD, have sparked outrage over how broad their noncompete agreements have become.

These agreements don’t just affect key personnel in critical positions. Almost any employee, no matter how junior or peripheral their role, has to sign one when hired. To enforce the agreements, PDD has even hired private detectives to follow former employees around and film their commutes to their new workplaces. People are questioning whether these companies have gone too far in the name of protecting their trade secrets.

One more thing

The new Dune 2 movie is barely out, and people are already making memes comparing the plot to the real-life geopolitical situation between the US, China, and Taiwan. Is it accurate? I’ll report back after I watch it.

A plan to bring down drug prices could threaten America’s technology boom

Forty years ago, Kendall Square in Cambridge, Massachusetts, was full of deserted warehouses and dying low-tech factories. Today, it is arguably the center of the global biotech industry. 

During my 30 years in MIT’s Technology Licensing Office, I witnessed this transformation firsthand, and I know it was no accident. Much of it was the direct result of the Bayh-Dole Act, a bipartisan law that Congress passed in 1980. 

The reform enabled world-class universities like MIT and Harvard, both within a couple of miles of Kendall Square, to retain the patent and licensing rights on discoveries made by their scientists—even when federal funds paid for the research, as they did in nearly all labs. Those discoveries, in turn, helped a significant number of biotechnology startups throughout the Boston area launch and grow.

Before Bayh-Dole, the government retained those patent and licensing rights. Yet while federal agencies like the National Institutes of Health heavily funded basic scientific research at universities, they were ill equipped to find private-sector companies interested in licensing and developing promising but still nascent discoveries. That’s because, worried about accusations of favoritism, government agencies were willing to grant only nonexclusive licenses to companies to develop patented technologies. 

Few companies were willing to license technology on a nonexclusive basis. Nonexclusive licenses opened up the possibility that a startup might spend many millions of dollars on product development only to have the government relicense the patent to a rival firm.

As a result, many taxpayer-financed discoveries were never turned into real-world products. Before the law, less than 5% of the roughly 28,000 patents held by the federal government had been licensed for development by private firms.

The bipartisan lawmakers behind Bayh-Dole understood that these misaligned incentives were impeding scientific and technological progress—and hampering economic growth and job creation. They changed the rules so that patents no longer automatically went to the federal government. Instead, universities and medical schools could hold on to their patents and manage the licensing themselves.

In response, research institutions invested heavily in offices like the one I ran at MIT, which are devoted to transferring technology from academia to private-sector companies.

Today, universities and nonprofit research institutions transfer thousands of discoveries each year, resulting in innovations in all manner of technical fields. Many thousands of entrepreneurial companies—often founded by the researchers who made the discoveries in question—have licensed patents stemming from federally funded research. This technology transfer system has helped create millions of jobs.

Google’s search algorithm, for instance, was developed by Sergey Brin and Larry Page with the help of federal grants while they were still PhD students at Stanford. They cofounded Google, licensed their patented algorithm from the school’s technology transfer office, and ultimately built one of the world’s most valuable companies.

All told, the law sparked a national innovation renaissance that continues to this day. In 2002, the Economist dubbed it “possibly the most inspired piece of legislation to be enacted in America over the past half-century.” I consider it so vital that after I retired, I joined the advisory council of an organization devoted to celebrating and protecting it. 

But the efficacy of the Bayh-Dole Act is now under serious threat from a draft framework the Biden administration is finalizing after a months-long public comment period that concluded on February 6.

In an attempt to control drug prices in the US, the administration’s proposal relies on an obscure provision of Bayh-Dole that allows the government to “march in” and relicense patents. In other words, it can take the exclusively licensed patent right from one company and grant a license to a competing firm. 

The provision is designed to allow the government to step in if a company fails to commercialize a federally funded discovery and make it available to the public in a reasonable time frame. But the White House is now proposing that the provision be used to control the ever-rising costs of pharmaceuticals by relicensing brand-name drug patents if they are not offered at a “reasonable” price. 

On the surface, this might sound like a good idea—the US has some of the highest drug prices in the world, and many life-saving drugs are unavailable to patients who cannot afford them. But trying to control drug prices through the march-in provision will be largely ineffective. Many drugs are separately protected by other private patents filed by biotech and pharma companies later in the development process, so relicensing just an early-stage patent will do little to help generate generic alternatives. At the same time, this policy could have an enormous chilling effect on the very beginning of the drug development process, when companies license the initial innovative patent from the universities and research institutions.

If the Biden administration finalizes the draft march-in framework as currently written, it will allow the federal government to ignore licensing agreements between universities and private companies whenever it chooses and on the basis of currently unknown and potentially subjective criteria, such as what constitutes a “reasonable” price. This would make developing new technologies far riskier. Large companies would have ample reason to walk away, and investors in startup companies—which are major players in bringing innovative university technology to market—would be equally reluctant to invest in those firms.

Any patent associated with federal dollars would likely become toxic overnight, since even one cent of taxpayer funding would make the resulting consumer product eligible for march-in on the basis of price. 

What’s more, while the draft framework has been billed as a “drug pricing” policy, it makes no distinction between university discoveries in life sciences and those in any other high-tech field. As a result, investment in IP-driven industries from biotech to aerospace to alternative energy would plummet. Technological progress would stall. And the system of technology transfer established by the Bayh-Dole Act would quickly break down.

Unless the administration withdraws its proposal, the United States will return to the days when the most promising federally backed discoveries never left university labs. Far fewer inventions based on advanced research will be patented, and innovation hubs like the one I watched grow will have no chance to take root.

Lita Nelsen joined the Technology Licensing Office of the Massachusetts Institute of Technology in 1986 and was director from 1992 to 2016. She is a member of the advisory council of the Bayh-Dole Coalition, a group of organizations and individuals committed to celebrating and protecting the Bayh-Dole Act, as well as informing policymakers and the public of its benefits.

How open source voting machines could boost trust in US elections

While the vendors pitched their latest voting machines in Concord, New Hampshire, this past August, the election officials in the room gasped. They whispered, “No way.” They nodded their heads and filled out the scorecards in their laps. Interrupting if they had to, they asked every kind of question: How much does the new scanner weigh? Are any of its parts made in China? Does it use the JSON data format?

The answers weren’t trivial. Based in part on these presentations, many would be making a once-in-a-decade decision.

These New Hampshire officials currently use AccuVote machines, which were made by a company that’s now part of Dominion Voting Systems. First introduced in 1989, they run on an operating system no longer supported by Microsoft, and some have suffered extreme malfunctions; in 2022, the same model of AccuVote partially melted during an especially warm summer election in Connecticut.

Many towns in New Hampshire want to replace the AccuVote. But with what? If history is any guide, the new machines would likely have to last decades — while also being secure enough to satisfy the state’s election skeptics. Outside the event, those skeptics held signs like “Ban Voting Machines.” Though they were relatively small in number that day, they’re part of a nationwide movement to eliminate voting technology and instead hand count every ballot — an option election administrators say is simply not feasible.

Against this backdrop, more than 130 election officials packed into the conference rooms on the second floor of Concord’s Legislative Office Building. Ultimately, they faced a choice between two radically different futures.

The first was to continue with a legacy vendor. Three companies — Dominion, ES&S, and Hart InterCivic — control roughly 90 percent of the U.S. voting technology market. All three are privately held, meaning they’re required to reveal little about their financial workings, and all three are committed to keeping their source code from becoming fully public.

The second future was to gamble on VotingWorks, a nonprofit with only 17 employees and voting machine contracts in just five small counties, all in Mississippi. The nonprofit has taken the opposite approach to the Big Three. Its financial statements are posted on its website, and every line of code powering its machines is published on GitHub, available for anyone to inspect.


At the Concord event, a representative for ES&S suggested that this open-source approach could be dangerous. “If the FBI was building a new building, they’re not going to put the blueprints out online,” he said. But VotingWorks co-founder Ben Adida says it’s fundamental to rebuilding trust in voting equipment and combatting the nationwide push to hand count ballots. “An open-source voting system is one where there are no secrets about how this works,” Adida told the audience. “All the source code is public for the world to see, because why in 2023 are we counting votes with any proprietary software at all?”

Others agree. Ten states currently use VotingWorks’ open-source audit software, including Georgia during its hand count audit in 2020. Other groups are exploring open-source voting technology, including Microsoft, which recently piloted voting software in Franklin County, Idaho. Bills requiring or allowing for open-source voting technology have recently been introduced in at least six states; a bill has also been introduced at the federal level to study the issue further. In New Hampshire, the idea has support from election officials, the secretary of state, and even diehard machine skeptics.

VotingWorks is at the forefront of the movement to make elections more transparent. “Although the voting equipment that we’ve been using for the last 20, 30 years is not responsible for this crisis,” Adida said, “it’s also not the equipment that’s going to get us out of this crisis.” But can an idealistic nonprofit really unseat industry juggernauts — and restore faith in democracy along the way?


For years, officials have feared that America’s voting machines are vulnerable to attack. During the 2016 election, Russian hackers targeted election systems in all 50 states, according to the Senate Intelligence Committee. The committee found no evidence that any votes were changed, but it did suggest that Russia could be cataloging options “for use at a later date.”

In 2017, the Department of Homeland Security designated election infrastructure as “critical infrastructure,” noting that “bad cyber actors — ranging from nation states, cyber criminals, and hacktivists — are becoming more sophisticated and dangerous.”

Some conservative activists have suggested simply avoiding machines altogether and hand-counting ballots. But doing so is prohibitively slow and expensive, not to mention more error-prone. Last year, for example, one county in Arizona estimated that counting all 105,000 ballots from the 2020 election would require at least 245 people working every day, including holidays, for almost three weeks.

That leaves election administrators dependent on machines to tally up votes. That August day in Concord, VotingWorks and two of the legacy vendors, Dominion and ES&S, were offering the same kind of product: an optical scanner, which is essentially just a counting machine. After a New Hampshire voter fills in a paper ballot by hand, it’s most likely inserted into an optical scanner, which interprets and tallies the marks. This process is how roughly two-thirds of the country votes. A quarter of voters mark their ballots using machines (aptly named “ballot-marking devices”); those ballots are then fed into an optical scanner as well. About 5 percent use direct recording electronic systems, or DREs, which allow votes to be cast and stored directly on the machine. Only 0.2 percent of voters have their ballots counted by hand.
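To make concrete what “interprets and tallies the marks” amounts to in software, here is a minimal, hypothetical sketch in Python of the counting step: given per-ballot mark detections, it totals votes by race. The data layout and names are illustrative assumptions for this story, not code from VotingWorks or any other vendor.

```python
from collections import Counter

# Each dict stands for one scanned ballot: every race on the ballot mapped to the
# choice whose oval was detected as filled (None means no mark was found).
# These records are made up for illustration.
ballots = [
    {"governor": "Candidate A", "senate": "Candidate C"},
    {"governor": "Candidate B", "senate": "Candidate C"},
    {"governor": "Candidate A", "senate": None},  # an undervote in the senate race
]

# Tally votes per race, skipping undervotes.
tallies: dict[str, Counter] = {}
for ballot in ballots:
    for race, choice in ballot.items():
        tallies.setdefault(race, Counter())
        if choice is not None:
            tallies[race][choice] += 1

for race, counts in tallies.items():
    print(race, dict(counts))
# governor {'Candidate A': 2, 'Candidate B': 1}
# senate {'Candidate C': 2}
```

A production scanner layers much more on top of this counting core, from interpreting the scanned ballot image and flagging overvotes to producing auditable end-of-night reports.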

Workers in Hinsdale, New Hampshire, count each of the 1,799 ballots cast after the polls closed on Election Day in 2016. Hand counts of ballots are prohibitively slow and expensive, and less accurate than machines.

Since the 2020 election, the companies that make these machines have been the subject of intense scrutiny from people who deny the election results. Those companies have also come under fire for what critics on both sides of the political aisle describe as their secrecy, lack of innovation, and obstructionist tendencies.

None of the three companies publicly disclose basic information, including their investors and their financial health. It can also be difficult to even get the prices of their machines. Often, jurisdictions come to depend on these firms. Two-thirds of the industry’s revenue comes from support, maintenance, and services for the machines.

Legacy vendors also fight to maintain their market share. In 2017, Hart InterCivic sued Texas to prevent counties from replacing its machines, which didn’t produce a paper trail, with machines that did. “For a vendor to sue to prevent auditable paper records from being used in voting shows that market dynamics can be starkly misaligned with the public interest,” concluded a report by researchers at the University of Pennsylvania in collaboration with Verified Voting, a nonprofit that, according to its mission statement, works to promote “the responsible use of technology in elections.”

The companies tell a different story, pointing out that they do disclose their code to certain entities, including third-party firms and independent labs that work on behalf of the federal government to test for vulnerabilities in the software that could be exploited by hackers. In a statement to Undark, ES&S also said it discloses certain financial information to jurisdictions “when requested” and the company shared approximate prices for its voting machines, although it noted that final pricing depends on “individual customer requirements.”

In Concord, officials from some small towns where ballots are still hand-counted were considering switching to machines. Others were considering whether to stick with Dominion and LHS — the New Hampshire-based company that services the machines — or switch to VotingWorks. It would likely be one of the most expensive, consequential decisions of their careers.


Throughout his pitch, the representative for LHS emphasized the continuity between the old AccuVote machines and the new Dominion scanner. Wearing a blazer and a dress shirt unbuttoned at the collar, Jeff Silvestro knew the crowd well. LHS is the only authorized service provider for the entire state’s AccuVote machines, and it’s responsible for training town staff, delivering memory cards for each election, and, when needed, weathering a blizzard to reach a town’s poll site and service a broken scanner.

Don’t worry, Silvestro reassured the crowd: The voter experience is the same. “Similarities,” Silvestro told the crowd. “That’s what we’re looking for.”

Just down the hall from Silvestro, Ben Adida laid out a different vision of what voting technology could be. He opened by addressing the “elephant in the room”: the substantial number of people who distrust elections. VotingWorks could help rebuild that trust, he said, by offering three things: security, simplicity, and transparency.

Adida first started working on election technology in 1997, as a computer science undergraduate at MIT, where he built a voting system for student council elections. After earning a Ph.D. from MIT in 2006, with a specialty in cryptography and information security, he did a few more years of election work as a post-doc at Harvard University and then transitioned to data security and privacy for medical data. Later, he served as director of engineering at Mozilla and Square and vice president of engineering at Clever, a digital learning platform for K-12 schools.

In 2016, Adida considered leaving Clever to do election work again, and he followed the progress of STAR-Vote, an open-source election system proposed by Travis County, Texas, that ultimately didn’t move forward. He decided to stay put, but he couldn’t shake the thought of voting technology. Adida knew it was rare for someone to have his background in both product design and election security. “This is kind of a calling,” he said.

Ben Adida
Ben Adida, who holds a Ph.D. in computer science, with a specialty in cryptography and information security, is the co-founder of VotingWorks, a nonprofit that builds open-source election technology.
a VotingWorks display at the National Association of Secretaries of State in 2022 showing a voting screen built into a tamper-evident ballot box
The voting machine built by VotingWorks is made from off-the-shelf electronics and open-source software that the company posted on GitHub.

Adida launched VotingWorks in December 2018, with some funding from individuals and Y Combinator, a renowned startup accelerator. The nonprofit now stands apart from the legacy voting technology vendors: it has disclosed everything, from its donors to the prices of its machines. VotingWorks machines are made from off-the-shelf electronics and, in the long run, according to Adida, are cheaper than competitors’ machines.

The day of the Concord event, Adida wore a T-shirt tucked into his khakis and sported a thick brown mustache. When he started discussing the specs of his machine, he spoke quickly, bounding around the room and even tripping on an errant wire. At one point, he showed off his machine’s end-of-night election report, printed on an 8½-by-11-inch piece of paper, a far cry from the long strips of paper currently in use. You don’t have to have “these long CVS receipts,” he said. The room laughed.


Adida and his team are staking out a position in a debate that stretches back to the early days of computing: Is the route to computer security through secrecy, or through total transparency?

Some of the most widely used software today is open-source software, or OSS, meaning anyone can read, modify, and reuse the code. OSS has powered popular products like the operating system Linux and the internet browser Firefox from Mozilla. It’s also used extensively by the Department of Defense.

Proponents of OSS offer three main arguments for why it’s more secure than a locked-box model. First, publicly available source code can be scrutinized by anyone, not just a relatively small group of engineers within a company, increasing the chances of catching flaws. Second, because coders know their work can be scrutinized by anyone, they’re incentivized to produce better work and to explain their approach. “You can go and look at exactly why it’s being done this way, who wrote it, who approved it, and all of that,” said Adida.

Third, OSS proponents say that trying to hide source code will ultimately fail, because attackers can acquire it from the supplier or reverse engineer it themselves. Hackers don’t need perfect source code, just enough to analyze for patterns that may suggest a vulnerability. Breaking is easier than building.

Already, there are indications that bad actors have acquired proprietary voting machine code. In 2021, an election official in Colorado allegedly allowed a conspiracy theorist to access county machines, copy sensitive data, and photograph system passwords — the kind of insider attack that, experts warn, could compromise the security of the coming presidential election.


Not everyone is convinced that open-source code alone is enough to ensure a secure voting machine. “You could have had open-source software, and you might not have found all of the problems or errors or issues,” said Pamela Smith, the president of Verified Voting, citing the numerous lines of code that would need to be examined in a limited amount of time.

Adida doesn’t expect anyone to go through the hundreds of thousands of lines of code on the VotingWorks GitHub. But if they’re curious about a specific aspect, like how the scanner handles paper that’s askew, it’s much more manageable: only a few hundred lines of code. Already, a small number of coders from outside the company have made suggestions on how to improve the software, some of which have been accepted. Then, to fully guard against vulnerabilities, the company relies on its own procedures, third-party reviews, and certification testing at the federal level, said Adida.

two poll workers holding long scrolls of receipt paper which has puddled onto the ground
Miami-Dade election workers check voting machines for accuracy by reviewing scrolls of paper that Adida likened to “long CVS receipts.”
JOE RAEDLE/GETTY IMAGES

In addition to security, any new machine also needs to be easy for poll workers to operate — and able to perform reliably under the high-stakes conditions of an election day. In interviews, election officials who use the technology in Mississippi raved about its ease of use.

Some also love how responsive the company is to feedback. “They come to us and say, ‘Tell us in the field what’s going on,’” said Sara Dionne, chairman of the election commission in Warren County, Mississippi, which started using VotingWorks in 2020. “We certainly never had that kind of conversation with ES&S ever.”


To expand VotingWorks’ reach, though, Adida must pitch it in places like New Hampshire, where election officials are navigating tight budgets, fallout from the 2020 election, and misperceptions about voting technology.

New Hampshire is a swing state, and since the 2020 election it has had a small but vocal faction of election deniers. At the same time, Republican Secretary of State David Scanlan has done little to marshal resources for new machines. Last year, Scanlan opposed a bill that would have allowed New Hampshire towns and cities to apply for funding from a $12 million federal grant for new voting machines; Republicans in the legislature killed the bill. (Asked what cash-strapped jurisdictions should do if they can’t afford new scanners, Scanlan told Undark they could cannibalize parts from old AccuVote machines.)

Some critics also say Scanlan has done little to dispel some conservative activists’ beliefs that New Hampshire can dispense with machines altogether. At the Concord event, a woman told Undark that Manchester, a city with 68,000 registered voters, could hand count all of its ballots in just four hours. Speaking with Undark, Scanlan acknowledged that this estimate wasn’t correct, and that hand counting is less accurate than machines. However, his office hasn’t communicated this message to the public in any formal way. “I definitely think that he is complicit in allowing [misinformation] to continue to flourish,” said Liz Wester, co-founder of 603 Forward, which encourages civic participation in the state.

The VotingWorks model won over some machine skeptics at the Concord event, like Tim Cahill, a Republican in the New Hampshire House of Representatives. Cahill said he’d prefer that all ballots in the state be hand counted but would choose VotingWorks over the other vendors. “Why would you trust something you can’t put your eyes on?” he told Undark. “We have a lot of smart people in this country and people want open source, they want transparency.”

people in an office setting surrounded by stacks of ballots
Poll workers use AccuVote machines to scan absentee ballots in Fairbanks, Alaska.
ERIC ENGMAN/GETTY IMAGES

Open source has found fans in other states, too. Kevin Cavanaugh is a county supervisor in Pinal County, Arizona’s third-most-populous county. He says he started to doubt voting machines after watching a documentary, funded by the election denier Mike Lindell, claiming that the devices have unauthorized software that could change vote totals without detection. In November 2022, Cavanaugh introduced a motion to increase the number of ballots counted by hand in the county, and he told Undark he’d like a full hand count. “But, if we’re using machines,” he added, “then I think it’s important that the source code is available for inspection to experts.”

Back in Concord, Adida appeared to win over the public at large — or at least those invested enough to attend the event. Among the 201 attendees who filled out a scorecard, VotingWorks was the most popular first choice. But among election officials, the clear preference was Dominion. Some officials were skeptical that open-source technology would mean much to people in their towns. “Your average voter doesn’t care about open source,” said one town clerk.

Still, five towns in New Hampshire have already purchased VotingWorks machines, some of which will be used in upcoming March local elections.


Two main factors determine whether someone has faith in an election, said Charles Stewart III, a political scientist at MIT who has written extensively about trust in elections. The first, which affects roughly 5 to 10 percent of voters, is a negative personal experience at the polls, like long lines, rude poll workers, and problems with machines, which can make the public less willing to trust an election’s outcome.

The second, more influential factor affecting trust is if a voter’s candidate won. That makes it supremely difficult to restore confidence, said Tammy Patrick, a former election official in Maricopa County and the current CEO for programs at the National Association of Election Officials. “The answer on election administration — it’s complex, it’s wonky, it’s not pithy,” she said in a recent press conference. “It’s hard to come back to those emotional pleas with what the reality is.”

Adida agrees with Stewart that VotingWorks alone isn’t going to eliminate election denialism — nor, he said, is that his goal. Instead, he hopes to reach the people who are susceptible to misinformation but haven’t necessarily made up their minds yet, a group he describes as the “middle 80 percent.” Even if they never visit the company’s GitHub, he says, “the fact that we’re putting it all out in the open builds trust.” And when someone says something patently false about the company, Adida can at least ask them to identify the incriminating lines of source code.

Are those two things — rhetorical power and a commitment to transparency — really a match for the disinformation machinery pushing lies across the country? Adida mentioned the myths about legacy vendors’ machines being mis-programmed or incorrectly counting ballots during the 2020 election. “What was the counterpoint to that?” he asked. “It was, ‘Trust us. These machines have been tested.’ I want the counterpoint to be, ‘Hey folks, all the source code is open.’”


Spenser Mestel is a poll worker and independent journalist. His bylines include The New York Times, The Atlantic, The Guardian, and The Intercept.

This article was originally published on Undark. Read the original article.

Yes, remote learning can work for preschoolers

The other day some preschoolers were pretending to be one of their favorite Sesame Street characters, a baby goat named Ma’zooza who likes round things. They played with tomatoes—counting up to five, hiding one, and putting it back. 

A totally ordinary moment exploring shapes, numbers, and imagination. Except this version of Sesame Street—called Ahlan Simsim (Welcome Sesame)—was custom made for children like these: Syrian refugees living in camps in Lebanon who otherwise don’t have access to preschool or, often, enough to eat. 

Educational interruptions due to the pandemic, climate disasters, and war have affected nearly every child on Earth since 2020. A record 43.3 million children have been driven from their homes by conflict and disasters, according to UNICEF—a number that doubled over the past decade.

And yet, points out Sherrie Westin, the head of the nonprofit that produces Sesame Street, “less than 2% of humanitarian aid worldwide goes to the early years”—that is, specifically supporting care and education, not just food and medicine. 

two children sitting close together holding a tablet
Sesame Workshop created the TV show Ahlan Simsim (seen on screen) for children who have been displaced from their homes or experienced conflict.
RYAN HEFFERNAN/SESAME WORKSHOP

That may be about to change. The Ahlan Simsim program is the largest-ever humanitarian intervention specifically intended for small children’s development. Sesame Workshop partnered with the International Rescue Committee, a humanitarian-aid nonprofit, to win a $100 million grant competition administered by the MacArthur Foundation. The results, released in May 2023 but not yet peer reviewed, have been startling: they have provided the first evidence that 100% remote learning can help young children in crisis situations. And the format has already been successfully copied and used in other crises. 

The program combines video content produced by Sesame with services from the IRC, which employs a combination of volunteers from the affected community and professional teachers and parent educators to work locally with families. Over the past few years, 2 million children and their caregivers watched Ahlan Simsim and received coordinated services, some of which were provided entirely over mobile phones. Another 25 million simply watched the show. 

In 2023, Hiro Yoshikawa and his team of researchers at New York University showed in a randomized controlled trial that Syrian refugee children taking part in an 11-week, fully remote learning program, combining Ahlan Simsim videos with live support from local preschool teachers over cell phones, showed progress in learning that was comparable to the results from a year of standard in-person preschool. 

And the learning they measured wasn’t just academic. Children made progress in overall development, emerging literacy, emerging numeracy, motor skills, social-emotional skills, and even the quality of play—like pretending to be Ma’zooza the goat. 

“I’m pretty impressed,” says Kathy Hirsh-Pasek, an expert in early child development at Temple University, who was not involved with the research. Compared with in-person preschool, “this is probably not the full nutritional value,” she cautions. “But nicely done—to even bring them anything in this setting is kind of amazing.” 

Sesame and IRC hope that holistic intervention can help the world’s most vulnerable kids cope with toxic stress—the kind that can, if unchecked, change the architecture of a developing brain. “We see so many children that just because of the circumstances of their birth—born into crisis, into conflict—the odds of them achieving their full potential are reduced,” says Katie Murphy, the director of early-childhood development and strategic initiatives at the IRC, who was closely involved with the project. “Our work tries to reduce that gap.” 

With the right support from caregivers and communities, Murphy and her colleagues believe, more children around the world can grow up resilient amid crisis, displacement, and war. 

Coping with discrimination, conflict, and hunger 

At a refugee camp in the agricultural Beqaa Valley in eastern Lebanon, Amal, Hana, and Mariam, three Syrian refugee mothers who participated in the program, sat on a carpet in a nearly empty tent, wearing headscarves and patterned robes. Hana’s four-year-old son reclined on her lap. There had been a recent security incident at their home camp, so we Zoomed from a neighboring camp. 

As the economic crisis worsens here, the mothers I spoke with reported feeling growing resentment and sometimes discrimination from their Lebanese neighbors. They said they are nervous just using their UNHCR aid cards—which mark them as refugees—to buy food at the supermarket. And their kids are sometimes bullied as well. 

“There are children who are saying ‘Oh, you are Syrian,’ and they want to fight with them,” Sou’ad, a refugee with four children under seven who also volunteers for the IRC, told me through an interpreter in a separate interview. “The sense of belonging is needed. The children don’t know which country they belong to—to Lebanon or to Syria. They say, ‘This is not our country—this is the country of these other children, so that’s why they are fighting us.’” 

Syrians are the largest displaced population in the world. Seven out of 10 have fled or been forced from their homes since the civil war broke out in 2011. The women I’ve spoken with have been here for around a decade; their children were born in Lebanon, but they aren’t permanent residents and have no path to citizenship. 

In August 2020, a giant explosion at the port of Beirut complicated and worsened Lebanon’s brutal economic and political crisis. The country is currently without a president, and tensions have only increased with the Israel-Hamas war. As of October 2023, one-fourth of Lebanon’s population was food insecure, including 36% of Syrian refugees.  

These days, the mothers at the camp tell me, there isn’t enough money even to give the children milk or labne, the local variation of yogurt; they say they have only rice, pita, and a little za’atar spice to sprinkle on top. Children as young as eight are going to work in the fields for food. 

Pivoting under pressure

When the pandemic hit, the remote-schooling intervention in Lebanon had to be redesigned from scratch. The original plan was that IRC would provide in-person preschool, and Sesame would create complementary TV episodes and short videos featuring Ma’zooza the goat alongside Jad, a yellow monster, and Basma, his purple friend. But in early 2020, when schools around the world had to go remote, the project did too.

Remote-learning programs during disaster and war are nothing new. During World War II, BBC School Radio broadcast lessons to up to half the students in Britain. But the advent of mobile 4G and 5G networks around the world has presented whole new opportunities for learning in crises. Russia’s invasion of Ukraine in 2022 created one of the biggest and fastest displacements of families in history; Ukraine declared just a two-week school holiday before resuming public education online, with many students joining in on mobile phones from across the country and around the world.  

two children entering a door next to a mural of Elmo with ABC blocks
a teacher sitting at a table to help a preschool age girl with her coloring
a classroom of children seated on a rug to listen to a story being read by a teacher

After covid lockdowns ended, the International Rescue Committee and Sesame resumed their in-person preschool program, where Lebanese children learned alongside Syrian refugees.

But that’s a tall order in Lebanon, where telecommunications infrastructure and the power grid have deteriorated along with the economic situation. As of January 2021, about two-thirds of the population had a mobile connection and 78% were internet users, according to Datareportal. The Syrian refugees reached by IRC programming typically had one prepaid mobile phone per household; the group gave them recharge cards to cover their data services.

Nevertheless, the approach was “extraordinarily successful,” says Yoshikawa, the lead researcher in the study. He points to the unusual level of commitment from family members, primarily mothers, who took time out of their household duties for a rare and prized opportunity to focus on their children’s learning. “When we interviewed the caregivers,” he says, “they really perceived this as the chance to have full access to preschool, which Syrian refugee families typically don’t have.”

Preschool teachers dropped off basic writing materials and art supplies. They called groups of five or six families at home three times a week. In these calls, they spent about five minutes greeting and engaging the children directly, and the rest of the time instructing caregivers on how to convey lessons and lead activities with the children. Families could message the teachers, as well as each other, for support. And families would send videos back to the teachers via WhatsApp, documenting what the children had practiced. 

Caregivers reported going up to their roof for a better signal. They charged smartphones at neighbors’ homes when the power went out, and when they ran out of data they borrowed phones from extended family members so the children could keep watching assigned videos.

More than a third of the mothers were illiterate, so often the teachers would have to show them how to write basic letters to teach their children, or send them voice memos if they couldn’t read messages. “We were empowering them both,” says Awada, one of the teachers. 

“That interaction with their children, I believe, was one of the reasons the study’s results were so powerful,” says Westin. 

Dima, a refugee and mother of three who also works as an outreach volunteer with the IRC, helps recruit and support families to engage with the Ahlan Simsim program. She says most children had no previous access to the sort of educational materials provided—things like scissors, tape, and crayons. The IRC also distributed necessities like soap, lice medicine, and toothbrushes. 

Mariam, one of the mothers in the camp, has two girls, five and four years old, and her greatest wish is that they get an education. She herself stopped her schooling at the sixth grade. “Reading and writing,” she said through an interpreter, “is the most important thing in life.”

A focus on resilience

Sesame Street premiered in the United States in 1969 with a social mission born out of the civil rights movement and President Lyndon Johnson’s Great Society: to level the playing field for poor kids by bringing early learning into the home for free. 

The show debuted its first foreign-language co-productions in Brazil and Mexico just three years later; there have been a total of 42 international co-productions over the years. A meta-analysis of studies with over 10,000 children in 15 countries found that these programs have had significant positive effects on children’s mastery of reading and basic math concepts, as well as their social-emotional skills and attitudes toward out-groups.  

An Arabic version of the show (Iftah Ya Simsim/Open Sesame, which many of today’s parents in the region grew up with) ran from 1979 to 1989. But Ahlan Simsim is the first production created deliberately for children affected by crisis and conflict, and that necessitated some special sensitivity.

The social-emotional curriculum for the show had to be designed from scratch for the cultural context and needs of these children, says Shanna Kohn, the director of international education at Sesame Workshop. “We went in with the idea of a show that focused on resilience—a beloved Western concept. And we brought that to this team of academics and Arab advisors, and there was a lot of skepticism. There isn’t even a clear Arabic translation,” says Kohn. 

So the team backed up and started with the basics. They had to figure out how to present relatable stories—about Jad leaving home and feeling different from his friends—without introducing situations or concepts that might be triggering for young viewers. 

Elmo with children in a classroom in Saida, Lebanon.
RYAN HEFFERNAN/SESAME WORKSHOP

“Boats are usually a go-to for preschool children,” says Scott Cameron, who has been with the company for 25 years. “We avoided things like that, for obvious reasons.” They also avoided loud noises, like thunderstorms. They skipped nutrition lessons, because kids who are barely getting enough to eat can’t use reminders about fruits and vegetables. 

Kids who are traumatized often respond with an outward numbness; the research team found that the children were using only two or three terms—happy, sad, angry—to describe their feelings. To help them process these feelings and frustrations, the show defines the Arabic words for nine emotions: caring, fear, frustration, nervousness, hope or determination, jealousy, loneliness, and sadness. Jad and Basma model emotional coping strategies: belly breathing, counting to five, “moving it out,” “drawing it out,” asking for help, and making a plan. 

Sesame and the IRC are hoping that the evidence from this study becomes a mandate. The Lego Foundation supported a version of Ahlan Simsim for Rohingya refugees in Bangladesh. Sesame and the IRC have piloted the remote preschool program in Iraq and hope to scale it in that country, where there aren’t resources for in-person preschool. 

And they adapted some of the video content for rapid response to 2023’s floods in Libya and earthquake in Morocco. Westin’s hope is that the world will begin to see both the need and the opportunity. “Those who have the most to lose,” she says, “receive the least and have the most to gain if we can reach them with nurturing care and growth.” 

Anya Kamenetz is a freelance education reporter who writes the Substack newsletter The Golden Hour.

Four lessons from 2023 that tell us where AI regulation is going

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

In the US and elsewhere, 2023 was a blockbuster year for artificial intelligence and AI regulation, and this next year is guaranteed to bring even more action. On January 5, I published a story with my colleagues Melissa Heikkilä and Zeyi Yang that lays out what we should expect in the coming 12 months in AI policy around the world. 

Most broadly, we are likely to see the strategies that emerged last year continue, expand, and begin to be implemented. For example, following President Biden’s executive order, various US government agencies may outline new best practices but empower AI companies to police themselves. And across the pond, companies and regulators will begin to grapple with Europe’s AI Act and its risk-based approach. It certainly won’t be seamless, and there’s bound to be a lot of discussion about how these new laws and policies actually work in practice. 

While writing this piece, I took some time to reflect on how we got here. I think stories about technologies’ rise are worthy of reflective examination—they can help us better understand what might happen next. And as a reporter, I’ve seen patterns emerge in these stories over time—whether it’s with blockchain, social media, self-driving cars, or any other fast-developing, world-changing innovation. The tech usually moves much faster than regulation, with lawmakers increasingly challenged to stay up to speed with the technology itself while devising new ways to craft sustainable, future-proof laws. 

In thinking about the US specifically, I’m not sure what we’re experiencing so far is unprecedented, though certainly the speed with which generative AI has launched into our lives has been surprising. Last year, AI policy was marked by Big Tech power moves, congressional upskilling and bipartisanship (at least in this space!), geopolitical competition, and rapid deployment of nascent technologies on the fly. 

So what did we learn? And what is around the corner? There’s so much to try to stay on top of in terms of policy, but I’ve broken down what you need to know into four takeaways.

1. The US isn’t planning on putting the screws to Big Tech. But lawmakers do plan to engage the AI industry. 

OpenAI’s CEO, Sam Altman, first started his tour de Congress last May, six months after the bombshell launch of ChatGPT. He met with lawmakers at private dinners and testified about the existential threats his own technology could pose to humanity. In a lot of ways, this set the tone for how we’ve been talking about AI in the US, and it was followed by Biden’s speech on AI, congressional AI insight forums to help lawmakers get up to speed, and the release of more large language models. (Notably, the guest list for these AI insight forums skewed heavily toward industry.)

As US lawmakers began to really take on AI, it became a rare (if small) area of bipartisanship on the Hill, with legislators from both parties calling for more guardrails around the tech. At the same time, activity at the state level and in the courts increased, primarily around user protections like age verification and content moderation.

As I wrote in the story, “Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.” The culmination of all this was Biden’s executive order at the end of October, which outlined a distributed approach to AI policy, in which different agencies craft their own rules. It (perhaps unsurprisingly) will rely quite heavily on buy-in from AI companies.

Next year, we can expect some new regulations to build on all this. As we wrote in our story today, Congress is looking to draft new laws and will consider existing bills on recommendation algorithms, data privacy, and transparency that will complement Biden’s executive order. States, too, will be considering their own regulations.

2. It’s not going to be easy to grapple with the harms and risks posed by AI.

While existential risk got the biggest headlines last year, human rights advocates and researchers frequently called out the harm that AI already on the market is causing right now, like perpetuating inaccuracy and bias. They warned that hyping existential risks would pull focus from dangerous realities, like medical AIs that disproportionately misdiagnose health issues in Black and brown patients.

As debates over how concerned we should be about the coming robot wars infiltrated dinner table chats and classrooms alike, agencies and local regulators started making declarations and issuing statements about AI, such as the joint statement in April from four federal agencies, including the FTC and CFPB, which warned that AI has the “potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Just how those outcomes will be monitored or prevented, however, is far from clear at this point. 

As for the tech industry itself, players are likely to continue to squabble with lawmakers over the riskiness of AI systems. The last-hour discussions over the EU AI Act were hung up on a fight over foundation models, and this debate will likely continue in different arenas this year, as will debates over what uses of AI should be considered high risk and who is responsible for managing those risks.

3. AI is the next frontier for techno-nationalism and global competition.

This past year also made clear that the US approach to AI is shaped by the desire to achieve and maintain a technological advantage over China. Meanwhile, the two countries continue to escalate their trade war over semiconductors, which provide the hardware necessary for AI models.  

Beyond keeping an edge in technological prowess, the US wants to be a leader on tech regulation and compete with a regulation-happy Europe. Biden’s executive order strategically dropped just days before the UK’s AI Summit and before the final negotiations over the EU AI Act were set to take place. 

4. Watch closely what happens in the US election and those around the world. 

Of course, the US will have a big election in 2024, but so will many other countries. In my last Technocrat of 2023, we talked about how generative AI and other media technologies have created acute concern about an onslaught of deceitful and inaccurate information. I’m particularly interested in watching how social media platforms and politicians alike address the new threat of political disinformation as a result of generative AI. As I wrote in a story a few months ago, researchers are already seeing a negative impact.

One thing at least is sure: the rapid release of generative AI to users in 2023 will affect 2024 elections, likely in a dramatic and unprecedented way. It’s hard to really predict what may happen given how rapidly the technology is changing and how quickly users are pushing it in different and unexpected directions. So even if governments or social media companies, among others, do try to strengthen safeguards or create new policies, the way generative AI is actually used in 2024 will be critical in shaping future regulations. 

No matter what, it’s definitely going to be an interesting ride!

What I am reading this week

  • The New York Times is suing OpenAI on the grounds that it used its articles to train ChatGPT. It’s one of the biggest stories over the past few weeks that you may have missed, and I was particularly interested in the similarity between some of the ChatGPT outputs and the NYT articles, as documented in the filing.
  • Researchers at the Stanford Internet Observatory found thousands of examples of child sexual abuse material in one of the major data sets used to train generative AI. That data set has now been temporarily taken down.
  • Smart cars are being weaponized by abusive partners as tools for surveillance and tracking, according to a new story by Kashmir Hill in the New York Times. In a world where almost everything has the ability to produce geolocation data, I’m afraid these sorts of stories will be more and more common.

What I learned this week

My colleagues Melissa Heikkilä and Will Douglas Heaven published a forward-thinking piece about what’s to come for AI in 2024, and I figured you all would want a taste! They predict a year of customized chatbots, new advances in generative-AI video, AI-generated misinformation during elections, and multitasking robots. Definitely worth the read!

What’s next for AI regulation in 2024? 

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

In 2023, AI policy and regulation went from a niche, nerdy topic to front-page news. This is partly thanks to OpenAI’s ChatGPT, which helped AI go mainstream, but which also exposed people to how AI systems work—and don’t work. It has been a monumental year for policy: we saw the first sweeping AI law agreed upon in the European Union, Senate hearings and executive orders in the US, and specific rules in China for things like recommender algorithms. 

If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Here’s what to expect. 

The United States

AI really entered the political conversation in the US in 2023. But it wasn’t just debate. There was also action, culminating in President Biden’s executive order on AI at the end of October—a sprawling directive calling for more transparency and new standards. 

Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently. 

Next year will build on the momentum of 2023, and many items detailed in Biden’s executive order will be enacted. We’ll also be hearing a lot about the new US AI Safety Institute, which will be responsible for executing most of the policies called for in the order. 

From a congressional standpoint, it’s not clear what exactly will happen. Senate Majority Leader Chuck Schumer recently signaled that new laws may be coming in addition to the executive order. There are already several legislative proposals in play that touch various aspects of AI, such as transparency, deepfakes, and platform accountability. But it’s not clear which, if any, of these already proposed bills will gain traction next year.

What we can expect, though, is an approach that grades types and uses of AI by how much risk they pose—a framework similar to the EU’s AI Act. The National Institute of Standards and Technology has already proposed such a framework that each sector and agency will now have to put into practice, says Chris Meserole, executive director of the Frontier Model Forum, an industry lobbying body. 

Another thing is clear: the US presidential election in 2024 will color much of the discussion on AI regulation. As we see generative AI’s impact on social media platforms and misinformation, we can expect the debate around how we prevent harms from this technology to be shaped by what happens during election season. 

Europe

The European Union has just agreed on the AI Act, the world’s first sweeping AI law. 

After intense technical tinkering and official approval by European countries and the EU Parliament in the first half of 2024, the AI Act will kick in fairly quickly. In the most optimistic scenario, bans on certain AI uses could apply as soon as the end of the year. 

This all means 2024 will be a busy year for the AI sector as it prepares to comply with the new rules. Although most AI applications will get a free pass from the AI Act, companies developing foundation models and applications that are considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards. In Europe, the police will not be allowed to use the technology in public places, unless they get court approval first for specific purposes such as fighting terrorism, preventing human trafficking, or finding a missing person. 

Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. The AI Act will require companies to be more transparent about how they develop their models, and it will make them, and organizations using high-risk AI systems, more accountable for any harms that result. 

Companies developing foundation models—the models upon which other AI products, such as GPT-4, are based—will have to comply with the law within one year of the time it enters into force. Other tech companies have two years to implement the rules. 

To meet the new requirements, AI companies will have to be more thoughtful about how they build their systems, and document their work more rigorously so it can be audited. The law will require companies to be more transparent about how their models have been trained and will ensure that AI systems deemed high-risk are trained and tested with sufficiently representative data sets in order to minimize biases, for example. 

The EU believes that the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, could pose a “systemic” risk to citizens and thus need additional work to meet EU standards. Companies must take steps to assess and mitigate risks and ensure that the systems are secure, and they will be required to report serious incidents and share details on their energy consumption. It will be up to companies to assess whether their models are powerful enough to fall into this category. 

Open-source AI companies are exempted from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. Not complying with rules could lead to steep fines or cause their products to be blocked from the EU. 

The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year. 

Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc. 

Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI. 

China

So far, AI regulation in China has been deeply fragmented and piecemeal. Rather than regulating AI as a whole, the country has released individual pieces of legislation whenever a new AI product becomes prominent. That’s why China has one set of rules for algorithmic recommendation services (TikTok-like apps and search engines), another for deepfakes, and yet another for generative AI. 

The strength of this approach is that it allows Beijing to quickly react to risks emerging from advances in the technology—both for users and for the government. But the problem is that it prevents a more long-term and panoramic perspective from developing.

That could change next year. In June 2023, China’s state council, the top governing body, announced that “an artificial intelligence law” is on its legislative agenda. This law would cover everything—like the AI Act for Europe. Because of its ambitious scope, it’s hard to say how long the legislative process will take. We might see a first draft in 2024, but it might take longer. In the interim, it won’t be surprising if Chinese internet regulators introduce new rules to deal with popular new AI tools or types of content that emerge next year. 

So far, very little information about it has been released, but one document could help us predict the new law: scholars from the Chinese Academy of Social Sciences, a state-owned research institute, released an “expert suggestion” version of the Chinese AI law in August. This document proposes a “national AI office” to oversee the development of AI in China, demands a yearly independent “social responsibility report” on foundation models, and sets up a “negative list” of AI areas with higher risks, which companies can’t even research without government approval.

Currently, Chinese AI companies are already subject to plenty of regulations. In fact, any foundation model needs to be registered with the government before it can be released to the Chinese public (as of the end of 2023, 22 companies have registered their AI models). 

This means that AI in China is no longer a Wild West environment. But exactly how these regulations will be enforced remains uncertain. In the coming year, generative-AI companies will have to try to figure out the compliance reality, especially around safety reviews and IP infringement. 

At the same time, since foreign AI companies haven’t received any approval to release their products in China (and likely won’t in the future), the resulting domestic commercial environment protects Chinese companies. It may help them gain an edge against Western AI companies, but it may also stifle competition and reinforce China’s control of online speech.

The rest of the world

We’re likely to see more AI regulations introduced in other parts of the world throughout the next year. One region to watch will be Africa. The African Union is likely to release an AI strategy for the continent early in 2024, meant to establish policies that individual countries can replicate to compete in AI and protect African consumers from Western tech companies, says Melody Musoni, a policy officer at the European Centre for Development Policy Management.

Some countries, like Rwanda, Nigeria, and South Africa, have already drafted national AI strategies and are working to develop education programs, computing power, and industry-friendly policies to support AI companies. Global bodies like the UN, OECD, G20, and regional alliances have started to create working groups, advisory boards, principles, standards, and statements about AI. Groups like the OECD may prove useful in creating regulatory consistency across different regions, which could ease the burden of compliance for AI companies. 

Geopolitically, we’re likely to see growing differences between how democratic and authoritarian countries foster—and weaponize—their AI industries. It will be interesting to see to what extent AI companies prioritize global expansion or domestic specialization in 2024. They might have to make some tough decisions.

Meet the economist who wants the field to account for nature

What is the true value of a honeybee? A mountain stream? A mangrove tree? 

Gretchen Daily, cofounder and faculty director of the Stanford Natural Capital Project, has dedicated her career to answering such complex questions. Using emerging scientific data and the project’s innovative open-source software, Daily and her team help governments, international banks, and NGOs to not only quantify the value of nature, but also determine the benefits of conservation and ecosystem restoration.

This marriage of ecological and economic concerns may seem an unusual one to some. But to Daily, it’s a union as natural as the planet’s ecosystems themselves.

Daily completed her doctoral work in ecology at Stanford during the 1990s. It was, she says, a revolutionary time for interdisciplinary approaches to both economic and ecological crises. Spurred by a summit hosted by the Royal Swedish Academy of Sciences, ecologists and economists began coming together for the first time to consider the benefits of a joint approach to developing economic and environmental policy.

“For so much of our history, humanity had operated under the assumption that nature was infinite,” says Daily. “We knew that collapses of civilization were at least in part because of the destruction of the local environment, but nobody thought that could happen at a planetary scale.”


Global climate change and its myriad impacts changed all that. “That crisis forced us all to rethink the assumptions on which economic systems operate,” she says. “It also revealed the frailties in different lines of inquiry that have built up for decades and even centuries.”

In 1997, Daily edited Nature’s Services: Societal Dependence on Natural Ecosystems—one of the first books to introduce the concept of ecosystem services, a field that seeks to quantify the value of resources such as clean water, fertile soil, and species habitats. The release of that book inspired unprecedented interdisciplinary collaboration on issues of ecology and economics.

“I think many of us finally began to see that, fundamentally, environmental problems are economic and social problems,” she says. “We cannot maintain the vitality and security of the biosphere without valuing nature.”

That recognition, Daily says, inspired her to create the Natural Capital Project in 2005. More than anything, she adds, the initiative was born out of the idea that mapping and modeling the value of nature would compel global leaders to see the inherent benefits of conservation as well.

A partnership between Stanford, the Chinese Academy of Sciences, the Nature Conservancy, the University of Minnesota, and the World Wildlife Fund, the Natural Capital Project now works with banks, governments, and nonprofit organizations around the globe.

The organization’s open-source software model, called InVEST, combines data gleaned from thousands of researchers working with techniques such as satellite imaging, soil surveys, climate modeling, and human development mapping to quantify and place a value on natural resources. Recent advances in this data collection, along with machine learning and software modeling, allow the Natural Capital team to evaluate ecosystems at a level of detail and sophistication previously considered impossible.

In a recent project undertaken for the Colombian government, for instance, the Natural Capital Project assisted in establishing a conservation plan for the Caribbean Gulf of Morrosquillo and its hinterlands. The region’s Rio Sinú is an essential source of drinking water for many downstream communities but also originates in an area that depends upon logging, ranching, and agriculture for its financial security. Using InVEST, Daily and her team were able to determine the actual cost of silt deposition in the river, particularly for drinking water and hydropower, and the value of maintaining upstream forests that would prevent that congestion from occurring.
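
To make the avoided-cost logic described above concrete, here is a minimal, hypothetical sketch in Python. It is not the Natural Capital Project’s InVEST software, and every figure and name in it is an invented placeholder; it only shows the shape of the calculation: sediment kept out of a river by upstream forest, valued as the drinking-water treatment and reservoir dredging costs that are never incurred.

    # Illustrative sketch only -- not InVEST. All numbers below are
    # hypothetical placeholders chosen to show the shape of an
    # avoided-cost valuation like the one described for the Rio Sinu.

    def avoided_cost_of_sediment(tons_retained_per_year: float,
                                 treatment_cost_per_ton: float,
                                 dredging_cost_per_ton: float) -> float:
        """Annual value of sediment kept out of the river by upstream forest,
        counted as avoided water-treatment and hydropower dredging costs."""
        return tons_retained_per_year * (treatment_cost_per_ton + dredging_cost_per_ton)

    # Hypothetical inputs: the forested sub-watershed retains 50,000 tons of
    # sediment a year; each ton avoided saves $2 in treatment and $3 in dredging.
    annual_value = avoided_cost_of_sediment(50_000, 2.0, 3.0)
    print(f"Avoided cost per year: ${annual_value:,.0f}")  # $250,000

In practice, tools like InVEST first estimate the physical quantities, such as how much sediment a given pattern of land cover retains, from spatial data; the simple pricing step sketched here comes last.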

“We were able to show that communities in the region were benefiting from this forest in ways they hadn’t necessarily realized,” says Lisa Mandle, lead scientist and director of science-software integration for the Natural Capital Project. “We can never capture the total value of a forest in terms of cultural and spiritual values or even biodiversity, but we can say that it has measurable economic values across dimensions that have not been considered before.”

And that, says Mandle, has created powerful incentives for the Colombian government to think about how to support the communities within that crucial forest.

A similar approach, also crafted by the Natural Capital Project, helps countries determine their gross ecosystem product, or GEP. Modeled after the gross domestic product, the GEP index allows nations to determine the monetary value of their ecological systems. Daily and her team piloted this index in 2014 on both municipal and national scales in China, and it was adopted by the United Nations Statistical Commission in 2021.
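
To see what “modeled after the gross domestic product” means in practice, the toy sketch below, again with entirely invented service categories and dollar figures, treats GEP the way GDP treats goods and services: as an aggregate of valued flows. It is not the methodology Daily’s team or the UN Statistical Commission uses; it only illustrates the aggregation.

    # Illustrative sketch only -- not the official GEP methodology.
    # Service names and dollar values are hypothetical placeholders.

    service_values = {
        "water purification":      1_200_000,
        "flood regulation":          800_000,
        "crop pollination":          350_000,
        "nature-based recreation":   500_000,
    }

    # GEP, like GDP, is an aggregate: the sum of valued flows over a period.
    gep = sum(service_values.values())
    print(f"Gross ecosystem product (toy example): ${gep:,}")  # $2,850,000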

“Just as the Great Depression exposed the urgent need for better macroeconomic performance metrics, our current ‘Great Degradation’ of natural capital is making it imperative that we track ecological performance and use that information to guide investments in revitalization and regeneration,” says Daily, who predicts that the GEP metric will be employed globally within the next decade.

In the meantime, she and her team are dedicated to streamlining their ecological assessments in a way that makes the final analysis and visualization easier for political leaders, investors, and local communities to use. Making that information more accessible, she says, will be crucial for fostering a cultural shift toward recognizing humanity’s dependence upon the biosphere.

In many ways, this idea of codifying the value of nature has been 30 years in the making. And it couldn’t have become reality without Daily’s vision, says Qingfeng Zhang, a senior director at the Asian Development Bank, which now includes a Natural Capital Lab inspired and supported by the Stanford project. This initiative, which was launched in 2020, created a platform for the bank to promote sustainable finance with the help of tools that Daily and her team developed.

“Gretchen’s work in the area of environmental science and its implications for public policy has been monumental,” says Zhang. “Her InVEST model and GEP concept are transforming the way governments, corporations, and civil society look at nature. We now have a tangible economic basis to invest in protecting and growing nature.” 

Kathryn Miles is a journalist and the author of five books including, most recently, Trailed: One Woman’s Quest to Solve the Shenandoah Murders.