Cryptography may offer a solution to the massive AI-labeling problem 

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, the Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including images from its DALL-E-powered AI image generator.

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, has demonstrated how the protocol works with a deepfake video it created with Revel.ai. When a viewer hovers over a little icon in the top right corner of the screen, a box of information about the video appears, including the disclosure that it “contains AI-generated content.”

Adobe has also already integrated C2PA, which it calls content credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project, says. 

C2PA is secured through cryptography, which relies on a series of codes and keys to protect information from being tampered with and to record where information came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA. 
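To make that idea concrete, here is a minimal sketch, in Python, of what cryptographically binding provenance to content looks like in general terms: hash the media bytes, bundle the hash with provenance claims in a manifest, and sign the manifest so that any later change to the pixels or the claims breaks verification. This illustrates the principle only; it is not the actual C2PA manifest format, the field names and claims are hypothetical, and it uses the open-source Python cryptography package for the signature.

```python
# Illustrative sketch of hash-and-sign provenance binding (not the C2PA spec).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media_bytes: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    """Bind provenance claims to the content by hashing the bytes and signing both."""
    payload = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. {"ai_generated": True, "tool": "hypothetical-generator"}
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(serialized).hex()}


def verify_manifest(media_bytes: bytes, manifest: dict, public_key: Ed25519PublicKey) -> bool:
    """Return True only if neither the media nor the signed claims have been altered."""
    payload = manifest["payload"]
    if hashlib.sha256(media_bytes).hexdigest() != payload["content_sha256"]:
        return False  # the pixels changed after signing
    serialized = json.dumps(payload, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), serialized)
        return True
    except InvalidSignature:
        return False  # the claims changed after signing


key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest = make_manifest(image, {"ai_generated": True, "tool": "hypothetical-generator"}, key)
print(verify_manifest(image, manifest, key.public_key()))            # True
print(verify_manifest(image + b"edit", manifest, key.public_key()))  # False
```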

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content, a cat-and-mouse dynamic in which generative models can in turn learn to evade detection. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks. 

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be for the public’s media literacy. Provenance labels do not necessarily mention whether the content is true or accurate. 

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

This piece has been updated to clarify the relationship between C2PA and CAI.

How face recognition rules in the US got stuck in political gridlock

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

This week, I published an in-depth story about efforts to restrict face recognition in the US. The story’s genesis came during a team meeting a few months back, when one of my editors casually asked what on earth had happened to the once-promising campaign to ban the technology. Just a few years ago, the US seemed on the cusp of restricting police use of the technology at the national level. 

I even wrote a story in May 2021 titled “We could see federal regulation on face recognition as early as next week.” News flash: I was wrong. In the years since, the push to regulate the technology seems to have ground to a halt. 

The editor held up his iPhone. “Meanwhile, I’m using it constantly throughout the day,” he said, referring to the face recognition verification system on Apple’s smartphone. 

My story was an attempt to understand what happened by zooming in on one of the hotbeds for debate over police use of face recognition: Massachusetts. Lawmakers in the state are considering a bill that would be a breakthrough on the issue and could set a new tone of compromise for the rest of the country. 

The bill distinguishes between different types of technology, such as live video recognition and retroactive image matching, and sets some strict guardrails when it comes to law enforcement. Under the proposal, only the state police could use face recognition, for example.  

During reporting, I learned that face recognition regulation is being held up in a unique type of political stasis, as Andrew Guthrie Ferguson, a law professor at the American University Washington College of Law who specializes in policing and tech, put it. 

The push to regulate face recognition technology is bipartisan. However, when you get down to details, the picture gets muddier. Face recognition as a tool for law enforcement has become more contentious in recent years, and Republicans tend to align with police groups, at least partly because of growing fears about crime. Those groups often say that new tools like face recognition help increase their capacity during staffing shortages. 

Little surprise, then, that police groups have no interest in regulation. Police lobbies and the companies that supply law enforcement’s tech are content to keep using the technology with few guardrails, especially as staffing shortages put pressure on departments to do more with less. 

But civil liberties activists are generally opposed to regulation too. They think that compromising on measures short of a ban decreases the likelihood that a ban will ever be passed. They argue that police are likely to abuse the technology, so giving them any access to it poses risks to the public, and specifically to Black and brown communities that are already overpoliced and surveilled. 

“The battle between ‘abolition’ and ‘don’t regulate it at all’ has led to an absence of regulation. That’s not the fault of the abolitionists,” says Ferguson. “But it has meant that the normal potential political compromise that you might’ve seen in Congress hasn’t happened because the normal political actors are not willing to concede for any regulation.”

Some abolitionist groups, such as S.T.O.P. in New York, are turning their advocacy work away from police bans toward regulating private uses of face recognition—for example, at Madison Square Garden.

“We see growing momentum to pass bans on private-sector use of facial recognition,” says S.T.O.P.’s executive director, Albert Fox Cahn. However, he thinks eventually we will see a resurgence of calls to ban police use of the technology too. 

In the meantime, it’s deeply unfortunate that as face recognition technology continues to proliferate and become normalized in our lives, regulation is stuck in gridlock, especially when there is bipartisan agreement that we need it.

Compromises that set new guardrails on the technology, but are short of an absolute ban, might be the most promising path forward.

What I am reading this week

  • This morning, the White House announced a new AI initiative in which companies voluntarily agreed to a set of requirements, such as watermarking AI-generated content and submitting to external review. Notably left off the list of requirements were stipulations around transparency and data privacy. The voluntary agreements, while better than nothing, seem pretty fluffy. 
  • I really enjoyed Charlie Warzel’s latest piece in the Atlantic, a love letter to the phone number. I am a sap for user-focused technologies. We often don’t think of the 10-digit identity as a breakthrough, but oh … how it is. 
  • Despite the FTC’s recent losses, President Biden’s team seems to be sticking to its aggressive antitrust strategy. It’ll be interesting to watch how it plays out and whether the Justice Department can eventually do something to break up Big Tech.

What I learned this week

This week, I finally dove into our latest magazine issue on accessibility. A story about the digital border wall really stood out. Since January, US Customs and Border Protection has been using a new app to organize immigration flows and secure initial appointments for entry. One problem, though, is that the app—called CBP One—barely works. It puts a massive strain on people trying to enter the country.  

Lorena Rios writes about Keisy Plaza, a migrant traveling from Colombia. “When she was staying in a shelter in Ciudad Juárez in March, she tried the app practically every day, never losing hope that she and her family would eventually get their chance.” After seven weeks of constant worry, Plaza finally got an appointment.  

Rios’s story is heartbreaking—a bit dystopian, but useful, as she really gets at how technology can completely upend people’s lives. Take a read this weekend! 

Is the digital dollar dead?

It’s summer 2020. The world is under a series of lockdowns as the pandemic continues to run its course. And in academic and foreign policy circles, digital currencies are one of the hottest topics in town. 

China is well on its way to launching its own central bank digital currency, or CBDC, and many other countries have launched CBDC research projects. Even Facebook has proposed a global digital currency, called Libra.

So when the Boston branch of the US Federal Reserve announces Project Hamilton, a collaboration with MIT’s Digital Currency Initiative to research how a CBDC might be technically designed, it doesn’t raise many eyebrows. A hypothetical US central bank digital currency is hardly controversial, after all. And the US cannot afford to be left behind.

How things change. Three years later, the digital dollar—even though it doesn’t exist and the Fed says it has no plans to issue one—has become political red meat. Tapping into voters’ widespread opposition to government surveillance, a group of anti-CBDC politicians has emerged with the message that the digital dollar is something to fear. 

It’s difficult to pinpoint when the dynamic changed, but a distinct brand of CBDC alarmism seemed to pick up after President Joe Biden signed an executive order in March 2022 stating that his administration would “[place] the highest urgency on research and development efforts into the potential design and deployment options of a United States CBDC.”  

Now legislators in both houses of Congress have introduced bills aimed at making sure a CBDC doesn’t see the light of day. Presidential candidates are even campaigning against it. 

“Anyone with their eyes open could see the danger this type of arrangement would mean for Americans who … would like to be able to conduct business without having the government know every single transaction they’re making in real time,” Florida governor Ron DeSantis, who is running for the Republican nomination for president, said in May. In campaign speeches, DeSantis has described a dystopian future in which the government uses its CBDC network to block people from buying guns or fossil fuel. 

Not only does the Fed have no plans to issue a digital currency, but it has repeatedly said it wouldn’t do so without authorization from Congress. How one might work—including how closely it might imitate physical cash—is still a wide-open question that can only be answered through research and testing. 

Project Hamilton’s goal was to build and test a prototype of just one component of a potential system: a way to securely and resiliently handle the same quantity of transactions that the major payment card networks process.

Hamilton’s first phase demonstrated a feasible technical approach, and the researchers promised a “Phase 2” that would explore sophisticated approaches to privacy and offline payments. But late last year, shortly after the project came under scrutiny from anti-CBDC legislators, the Boston Fed ended Hamilton. Now the sort of technical design research that Project Hamilton exemplified may have to come from outside the central bank, which prefers to remain politically neutral.  

And a digital dollar looks less likely than ever before.

The case for cash

Opponents of a hypothetical US CBDC cast it as a solution in search of a problem. Dollars are already digital, after all. If you paid with a debit card recently, did you not pay with digital dollars? China’s move to pilot a consumer central bank digital currency is not reason by itself to pursue one, they argue. Libra failed to launch; a global digital currency run by a tech company is no longer an issue. What purpose would a government-issued digital currency serve other than to give the government a tool for financial surveillance and control?

But there is a problem—probably one that you’ve noticed yourself. Physical cash is going away. Fewer and fewer vendors are accepting bills and coins. On top of that, consumers are simply choosing to use less cash. That’s in part out of convenience, but there’s another big reason: you can’t use cash to buy things on the internet.

In the US, cash payments represented just 18% of all payments in 2022—down from 31% in 2016, according to research by the San Francisco Fed. Outside the US, things are even further along the road to a cashless society. The decline of cash is a primary reason more than 100 countries are researching the idea of creating their own digital currencies. 

The solution is a digital currency with all the features of physical cash, according to Willamette University law professor Rohan Grey.

That we can’t use cash on Amazon is only one argument for government-issued digital cash, says Grey. In the US, plenty of people rely on bills and coins because they don’t have bank accounts and can’t get credit or debit cards. The Federal Deposit Insurance Corporation estimates that in 2021, 5.9 million US households were “unbanked.” Besides that, Grey argues, cash has unique “social features” that we should be careful to preserve, including its privacy and anonymity. No one can trace how you spend your coins and bills. “I think anonymity is a social good,” he says. 

Last year, Grey helped author a US House bill called the Electronic Currency and Secure Hardware Act (ECASH). The legislation, which was introduced by Representative Stephen Lynch of Massachusetts, would have directed the Department of Treasury to create a digital dollar that could be used both online and offline and have cash-like features, “including anonymity, privacy, and minimal generation of data from transaction.” It didn’t make it out of the Financial Services Committee, but Grey says there are plans to reintroduce it this year.

DeSantis and other CBDC opponents most likely agree with Grey that we should replicate the privacy of cash in digital form—after all, they claim to be defending Americans against a financial surveillance state. But whereas Grey is advocating for a government-controlled system, they seem to prefer something more like decentralized cryptocurrency networks, which are not controlled by any central authority. 

DeSantis recently signed a bill explicitly banning a “centralized” digital dollar in Florida, apparently leaving the door open for one that is decentralized. Representative Tom Emmer of Minnesota, who introduced a bill this year that would prohibit the Fed from issuing a digital currency, has said multiple times that a CBDC must be “open, permissionless, and private.” “Permissionless” is a term enthusiasts use for crypto networks like Bitcoin and Ethereum, which are open to anyone with an internet connection. Emmer, a Republican, is one of Congress’s most outspoken crypto enthusiasts.

A spectrum of possible designs

It is not clear how currency issued by a central bank could ever be controlled by a permissionless crypto network. And Bitcoin and similar cryptocurrencies have privacy issues of their own. Though users are pseudonymous, information about the sender, the recipient, and the amount of every transaction is published on the blockchain. Investigators are skilled at using clues, like personal information that users share with crypto exchanges, to discover users’ real identities.

Either way, using a blockchain network won’t suffice, says Grey, because many of the same people who rely on cash also lack internet access. He envisions cards that could be tapped together or to smartphones to transfer value anonymously, online or offline. Like physical dollars, the digital stand-ins would be so-called bearer instruments, meaning that possession gives the holder rights to ownership. There are a number of unanswered technical questions about how to pull all this off securely, however—a fact that Grey acknowledges.

Unanswered technical questions were also the motivation behind Project Hamilton. The researchers set out to investigate possible designs for a “resilient transaction processor” that could handle at minimum tens of thousands of transactions per second, the capacity they determined necessary to handle the volume of retail transactions in the US. But they also sought to develop a transaction processor that was flexible enough in its design to leave open a range of options for other parts of the system, like technologies for privacy and offline payments. 

The software they came up with does not use a blockchain, but it borrows components from Bitcoin. Neha Narula, director of the Digital Currency Initiative at the MIT Media Lab, says it’s possible to break a blockchain system down into its component parts and then apply some but not all of those pieces in a different context. 

For example, one piece is a blockchain’s decentralized nature, which makes it possible to run a cryptocurrency system without relying on any one person to control it. The team decided that a CBDC would not need this property, since it would be run by a central bank. Another property of blockchains is known as Byzantine fault tolerance (BFT), which allows the network to keep functioning even if malicious participants are acting dishonestly. The Hamilton team decided they could assume that since the system would be run by a single central bank, there wouldn’t be malicious participants, and so BFT wouldn’t be required. 

Ditching BFT and decentralized governance has its benefits. In Bitcoin, maintaining them both makes the system expensive and slow to run, in part because data must be replicated on every computer on the network. The result is that Bitcoin can only process around seven transactions per second. In early 2022, the Hamilton team demonstrated a system capable of processing 1.7 million transactions per second—much faster than even the Visa network, which Visa claims is able to process 65,000 transactions per second. 

Like Bitcoin, Hamilton’s transaction processor used cryptographic signatures to authorize payments. It also used Bitcoin’s method for recording transactions, called the unspent transaction outputs (UTXO) model, which stops people from spending the same coin twice. The details of the UTXO model are complicated, but it works because each transaction references the specific coins being spent. 
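To see why referencing specific coins prevents double-spending, here is a toy sketch of the UTXO idea in Python. It is not Project Hamilton’s actual code (the software the team open-sourced is far more sophisticated), and it omits the cryptographic signatures that would authorize each spend; it only shows the bookkeeping rule that a transaction must consume outputs that are still unspent.

```python
# Toy UTXO ledger: a coin exists as an unspent output, and a transaction must
# reference the exact outputs it spends, so spending the same coin twice fails.
from dataclasses import dataclass


@dataclass(frozen=True)
class Output:
    tx_id: str   # transaction that created this coin
    index: int   # which output of that transaction
    owner: str
    amount: int


class UTXOLedger:
    def __init__(self) -> None:
        self.unspent: dict[tuple[str, int], Output] = {}

    def add_output(self, out: Output) -> None:
        self.unspent[(out.tx_id, out.index)] = out

    def apply_transaction(self, tx_id: str, inputs: list[tuple[str, int]],
                          outputs: list[tuple[str, int]]) -> bool:
        """Spend the referenced outputs and create new ones; reject double-spends."""
        if len(set(inputs)) != len(inputs):
            return False  # same coin referenced twice in one transaction
        if any(ref not in self.unspent for ref in inputs):
            return False  # coin is unknown or already spent
        if sum(self.unspent[ref].amount for ref in inputs) < sum(a for _, a in outputs):
            return False  # outputs would create money out of thin air
        for ref in inputs:
            del self.unspent[ref]
        for i, (owner, amount) in enumerate(outputs):
            self.add_output(Output(tx_id, i, owner, amount))
        return True


ledger = UTXOLedger()
ledger.add_output(Output("mint", 0, "alice", 10))
print(ledger.apply_transaction("tx1", [("mint", 0)], [("bob", 10)]))    # True
print(ledger.apply_transaction("tx2", [("mint", 0)], [("carol", 10)]))  # False: already spent
```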

Narula stresses that Project Hamilton was a “first step” toward understanding how a CBDC might be designed. The team made the software open source so that other teams could build on it. But it was not advocating for specific design decisions. There is a spectrum of possible CBDC designs, ranging from traditional bank accounts that the Fed offers directly to consumers (currently it only offers accounts to banks) to something that looks like a “digital bearer instrument,” Narula says.

Besides demonstrating the ability to handle lots of transactions, Hamilton also showed that “if designers want to, it’s possible to build a system that stores very little data about transactions, users, and even outstanding balances,” says Narula. “A big misconception about CBDCs right now is this assumption that they have to be built in a way where whoever is running it can see everything.”

So… what’s next?

Nonetheless, not even a fundamental research project like Hamilton was able to escape the ire of anti-CBDC politicians.

In December of last year, Emmer and eight other members of Congress sent a letter to the president of the Boston Fed, arguing that there had been “insufficient visibility into the interaction between Project Hamilton and the private sector.” The legislators cited an FAQ from the Project Hamilton report stating that the Fed had been working with “government, academia, and the private sector” to learn about “potential use cases, a range of design options, and other considerations” related to CBDCs.

The letter went on to ask several questions, including whether the Boston Fed intended to fund startups interested in designing CBDCs and whether any firms involved in the project might be able to “exploit a regulatory advantage over competitors.”

Emmer’s office did not respond to MIT Technology Review’s questions regarding whether it ever received answers to the questions in the letter. But the Federal Reserve does not invest in startups. And it’s not surprising that Project Hamilton would openly take input from the private sector, because many of the most innovative ideas for digital currency technology lie in the commercial arena.

The letter’s final question asked how Project Hamilton was addressing concerns about “financial privacy and financial freedom” in a CBDC system. In fact, the “Phase 2” promised in the Hamilton research report, which was published in February of 2022, was explicitly meant to entail research into the use of advanced cryptography to “greatly increase user privacy from the central bank.” But when the project shut down in December, the announcement made no mention of Phase 2.

The Fed, which aims to stay out of politics whenever possible, hasn’t stopped doing research on CBDCs, says Darrell Duffie, a professor of finance at Stanford’s Graduate School of Business. But it has slowed considerably, and “nobody is charging ahead openly” the way Hamilton did, he says. Duffie speculates that “maybe Project Hamilton would have had another phase” if it had not been for Emmer’s letter. 

A spokesperson for the Boston Fed declined to answer questions about Phase 2. Project Hamilton “was completed at the end of 2022,” the spokesperson said in an emailed statement, adding that the Boston Fed “continues to contribute to ongoing Federal Reserve System research that aims to deepen the Federal Reserve’s understanding of the technology that could support the issuance of a CBDC.” The spokesperson also reiterated that the Fed “has made no decision on issuing a CBDC and would only proceed with the issuance of a CBDC with an authorizing law.”

According to MIT’s Narula, the collaboration with the Boston Fed “reached a natural end.” But the Digital Currency Initiative has continued working on the research project formerly known as Hamilton and still hopes to publish some of that work. 

“The only way to really truly understand these types of systems is to build and test them,” she says.

It’s still a challenge to spot Chinese state media social accounts

This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

It’s no secret that Chinese state-owned media are active on Western social platforms, but sometimes they take a covert approach and distance themselves from China, perhaps to reach more unsuspecting audiences. 

Such operations have been found to target Chinese- and English-speaking users in the past. Now, a study published last week has discovered another network of Twitter accounts that seems to be obscuring its China ties. This time, it’s made up of Spanish-language news accounts targeting Latin America.

Sandra Quincoses, an intelligence advisor at the cybersecurity research firm Nisos, found three accounts posting news about Paraguay, Chile, and Costa Rica on Twitter. The accounts seem to be associated with three Chinese-language newspapers based in those countries. All three are subsidiaries of a Brazil-based Chinese community newspaper called South America Overseas Chinese Press Network.

Very few of the posts are overtly political. The content, which is often the same in all three accounts, usually consists of Spanish-language news about Chinese culture, Chinese viral videos, and one panda post every few days. 

The problematic part, Quincoses says, is that they obscure the sources of their news posts. The accounts often post articles from China News Service (CNS), one of the most prominent Chinese state-owned publications, but they do so without attribution.

Sometimes the accounts will go halfway toward attribution. They might specify, for example, that the news is from “Twitter •mundo_china” without actually tagging @mundo_China, an account affiliated with the Chinese state broadcaster. 

“When you do not mention Twitter accounts with the proper ‘@’ format, tools that collect from Twitter to do analysis don’t pick up on that,” says Quincoses. As a result, these accounts can fly under the radar of social network analysis tools, making it hard for researchers to associate them with accounts that are clearly related to the Chinese government.
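A quick sketch shows why the “•mundo_china” phrasing slips past this kind of tooling: a conventional @-mention pattern simply never matches it. The regex below is a generic illustration, not any specific vendor’s collection pipeline.

```python
# Generic @-mention extraction: handles written without "@" are invisible to it.
import re

MENTION_PATTERN = re.compile(r"@(\w{1,15})")  # Twitter-style handles: word characters, max 15


def extract_mentions(text: str) -> list[str]:
    return MENTION_PATTERN.findall(text)


print(extract_mentions("Vía @mundo_china, noticias de China"))  # ['mundo_china']
print(extract_mentions("Fuente: Twitter •mundo_china"))         # [] -- nothing to pick up
```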

It’s unclear whether these accounts and the newspapers they belong to are controlled directly by Chinese state media. But as obscure as they are, there are real Chinese diplomats following them, suggesting official approval. And one government outlet—CNS—is working closely with these newspapers.

CNS is directly owned by the Chinese Communist Party’s United Front Work Department. In the 1990s, it started fostering ties with outlets aimed at Chinese immigrant communities around the world. 

Today, CNS and these immigrant community newspapers often co-publish articles, and CNS invites executives from the publications to visit China for a conference called the Forum on the Global Chinese Language Media. Some of these publications have often been accused of being controlled or even owned by CNS, the main example being the China Press, a California-based publication.

As media outlets enter the digital age, there is more evidence that these overseas diaspora publications have close ties with CNS. Sinoing (also known as Beijing Zhongxin Chinese Technology Development or Beijing Zhongxin Chinese Media Service), a wholly owned subsidiary of CNS, is the developer behind 36 such news websites across six continents, the Nisos report says. It has also made mobile apps for nearly a dozen such outlets, including the South America Overseas Chinese Press Network, which owns the three Twitter accounts. These apps are also particularly invasive when it comes to data gathering, the Nisos report says.

At the same time, in a hiring post for an overseas social media manager, CNS explicitly wrote in the job description that the work involves “setting up and managing medium-level accounts and covert accounts on overseas social platforms.” 

It’s unclear whether the three Twitter accounts identified in this report are operated by CNS. If this is indeed a covert operation, the job has been done a little too well. Though they post several times a day, two of the accounts have followers in the single digits, while the other one has around 80 followers—including a few real Chinese diplomats to Spanish-speaking countries. Most of the posts have received minimal engagement.

The lack of success is consistent with China’s social media propaganda campaigns in the past. This April, Google identified over 100,000 accounts in “a spammy influence network linked to China,” but the majority of accounts had 0 subscribers, and over 80% of their videos had fewer than 100 views. Twitter and Facebook identified similar unsuccessful attempts in the past, too. 

Of all the state actors she has studied, Quincoses says, China is the least direct when it comes to the intentions of such networks. They could be playing the long game, she says. 

Or maybe they just haven’t figured out how to run covert Twitter accounts effectively. 

According to Quincoses, these accounts were never among those Twitter labeled as government-funded media (a practice it dropped in April). This could be related to the limited traction the accounts got, or to the efforts they made to obscure their ties to Chinese state media.

As other platforms are emerging to take on Twitter, Chinese state-owned publications have begun to appear on them too. Xinhua News Service, China’s main state-owned news agency, has several accounts on Mastodon, one of which still posts regularly. And CGTN, the country’s state broadcaster, has an account on Threads that already has over 50,000 followers.

Responding to an inquiry from the Australian government, Meta said it plans to add labels for government-affiliated media soon. But can it target accounts like these that are trying (and failing) to promote China’s image? They may be small fish now, but it’s better to catch them early, before they grow as influential as their more successful peers from Russia. 

Do social media users need better tools to sort out what might be government-affiliated media? Tell me at zeyi@technologyreview.com.

Catch up with China

1. John Kerry, the US climate envoy, is visiting China to restart climate negotiations between the two countries. (CNN)

2. Executives of American chip companies, including Intel, Qualcomm, and Nvidia, are flocking to Washington to talk the administration out of more curbs against China. (Bloomberg $)

3. The Taiwanese chip giant TSMC is known for harsh workplace rules imposed to protect its trade secrets, including a ban on Apple Watches at work. Now, facing difficulty attracting talent, the company is relaxing those rules. (The Information $)

4. A Kenyan former content moderator for TikTok is threatening to sue the app and its local content moderation contractor, claiming PTSD and unfair dismissal. (Time)

5. Amazon sellers say their whole stores—including images, descriptions, and even product testing certificates—have been cloned by sellers on Temu, the rising cross-border e-commerce platform from China. (Wired $)

6. Microsoft says Chinese hackers accessed the email accounts of Commerce Secretary Gina Raimondo and other US officials in June, but they didn’t get any classified email. (New York Times $)

7. Badiucao, an exiled Chinese political cartoonist, is carefully navigating security risks as he tours his artworks around the world. (The Spectator)

Lost in translation

As image-making AIs become increasingly popular, some Chinese fashion brands are ditching real human models and opting for AI-generated ones. Chinese publication AI Lanmeihui reports that some Stable Diffusion users are charging Chinese vendors 15 RMB (about $2) for an AI-generated product catalogue photo. A specialized website (still built on the open-source Stable Diffusion algorithm) allows vendors to customize the look of the model for just $2.80. Meanwhile, the cost of a photography session with a human model usually comes down to about $14 per photo, according to professional model Zhao Xuan. AI has already started taking jobs from human models, Zhao said, and it’s promoting unrealistic beauty standards in the industry. “The emergence of AI models is popularizing extreme aesthetics and causing professional models to have body shame,” she said. And the technology is still in its early stages: commercially available services often take more than a week, and the quality of the result is variable.

A collage of three screenshots of models generated by AI. (Social media screenshots collected by AI Lanmeihui.)

One more thing

Some Chinese workers are being asked to use AI tools but find that the process of tinkering with them takes too much time. As a result, they’ve been faking using ChatGPT or Midjourney and instead doing their job the old-fashioned way. One social media copywriter managed to mimic ChatGPT’s writing style so well that his boss was fully convinced it had to be the work of an AI. The boss then showed it around the office, asking other colleagues to generate articles like this too, according to the Chinese publication Jingzhe Qingnian.

Face recognition in the US is about to meet one of its biggest tests

Just four years ago, the movement to ban police departments from using face recognition in the US was riding high. By the end of 2020, around 18 cities had enacted laws forbidding the police from adopting the technology. US lawmakers proposed a pause on the federal government’s use of the tech. 

In the years since, that effort has slowed to a halt. Five municipal bans on police and government use passed in 2021, but none in 2022 or in 2023 so far, according to a database from the digital rights group Fight for the Future. Some local bans have even been partially repealed, and today, few seriously believe that a federal ban on police use of face recognition could pass in the foreseeable future. In the meantime, without legal limits on its use, the technology has only grown more ingrained in people’s day-to-day lives.

However, in Massachusetts there is hope for those who want to restrict police access to face recognition. The state’s lawmakers are currently thrashing out a bipartisan state bill that seeks to limit police use of the technology. Although it’s not a full ban, it would mean that only state police could use it, not all law enforcement agencies.

The bill, which could come to a vote imminently, may represent an unsatisfying compromise, both to police who want more freedom to use the technology and to activists who want it completely banned. But it represents a vital test of the prevailing mood around police use of these controversial tools. 

That’s because when it comes to regulating face recognition, few states are as important as Massachusetts. It has more municipal bans on the technology than any other state, and it’s an epicenter for civil liberty advocates, academics, and tech companies. For a movement in need of a breakthrough, a lot rides on whether this law gets passed. 

Right now in the US, regulations on police use of face recognition are trapped in political gridlock. If a leader like Massachusetts can pass its bill, that could usher in a new age of compromise. It would be one of the strictest pieces of statewide legislation in the country and could set the standard for how face recognition is regulated elsewhere. 

On the other hand, if a vote is delayed or fails, it would be yet another sign that the movement is waning as the country moves on to other policy issues.

A history of advocacy

Privacy advocates and public interest groups have long had concerns about the invasiveness of face recognition, which is pivotal to a growing suite of high-tech police surveillance tools. Many of those fears revolve around privacy: live video-based face recognition is seen as riskier than retroactive photo-based recognition because it can track people in real time. 

Those worries reached a fever pitch in 2018 with the arrival of a bombshell: a privacy-shredding new product from a small company called Clearview AI.

Clearview AI’s powerful technology dramatically changed privacy and policing in the US. The company quietly gave free trials of the product to hundreds of law enforcement agencies across the country. Suddenly, police officers looking to identify someone could quickly comb through vastly more images than they’d ever had access to before—billions of public photos available on the internet.

The very same year, evidence started to mount that the accuracy of face recognition tools varied by race and gender. A groundbreaking study out of MIT by Joy Buolamwini and Timnit Gebru, called Gender Shades, showed that the technology is far less accurate at identifying people of color and women than at identifying white men. 

The US government corroborated the results in a 2019 study by the National Institute of Standards and Technology, which found that many commercial face recognition algorithms were 10 to 100 times more likely to misidentify Asian and Black faces than white ones. 

Politicians started to wake up to the risks. In May 2019, San Francisco became the first city in the US to ban police use of face recognition. One month later, the ACLU of Massachusetts announced a groundbreaking campaign called “Press Pause,” which called for a temporary ban on the technology’s use by police in cities across the state. Somerville, Massachusetts, became the second city in the United States to ban it. 

Over the next year, six more Massachusetts cities, including Boston, Cambridge, and Springfield, approved bans on police and government use of face recognition. Some cities even did so preemptively; in Boston, for example, police say they were not using the technology when it was banned. Major tech companies, including Amazon, Microsoft, and IBM, pulled the technology from their shelves, and civil liberties advocates were pushing for a nationwide ban on its police use.

“Everyone who lives in Massachusetts deserves these protections; it’s time for the Massachusetts legislature to press pause on this technology by passing a statewide moratorium on government use of face surveillance,” Carol Rose, the executive director of the ACLU’s Massachusetts chapter, said in a statement after Boston passed its ban in June 2020. 

That moratorium would never happen. 

Is your face private? 

At first, momentum was on the side of those who supported a statewide ban. The murder of George Floyd in Minneapolis in May 2020 had sent shock waves through the country and reinvigorated public outcry about abuses in the policing system. In the search for something tangible to fix, activists both locally and nationwide alighted on face recognition. 

At the beginning of December 2020, the Massachusetts legislature passed a bill that would have dramatically restricted police agencies in the state from using face recognition, but Governor Charlie Baker refused to sign it, saying it was too limiting for police. He said he would never sign a ban into law. 

In response, the legislature passed another, more toned-down bill several weeks later. It was still a landmark achievement, restricting most government agencies in the state from using the technology. It also created a commission that would be tasked with investigating further laws specific to face recognition. The commission included representatives from the state police, the Boston police, the Massachusetts Chiefs of Police Association, the ACLU of Massachusetts, several academic experts, the Massachusetts Department of Public Safety, and various lawmakers from both political parties, among others. 

Law enforcement agencies in the state were now permitted access only to face recognition systems owned and operated by the Registry of Motor Vehicles (RMV), the state police, or the FBI. As a result, the universe of photos that police could query was much more limited than what was available through a system like Clearview, which gives users access to all public photos on the internet. 

To hunt for someone’s image, police had to submit a written request and obtain a court order. That’s a lower bar than a warrant, but previously, they’d just been able to ask by emailing over a photo to search for suspects in misdemeanor and felony offenses including fraud, burglary, and identity theft. 

At the time, critics felt the bill was lacking. “They passed some initial regulations that don’t go nearly far enough but were an improvement over the status quo, which was nothing,” says Kade Crockford of the ACLU of Massachusetts, a commission member.

Still, the impetus toward a national ban was building. Just as the commission began meeting in June 2021, Senator Ed Markey of Massachusetts and seven other members of Congress introduced a bill to ban federal government agencies, including law enforcement, from using face recognition technology. All these legislators were left-leaning, but at the time, stricter regulation had bipartisan support.

The Massachusetts commission met regularly for a year, according to its website, with a mandate to draft recommendations for the state legislature about further legal limits on face recognition.

As debate ensued, police groups argued that the technology was essential for modern policing. 

“The sort of constant rhetoric of many of the appointees who were from law enforcement was that they did not want to tie the hands of law enforcement if the X, Y, Z worst situation happened—a terrorist or other extremely violent activity,” said Jamie Eldridge, a Massachusetts state senator who cochaired the commission, in an interview with MIT Technology Review. 

Despite that lobbying, in March 2022 the commission voted to issue a strict set of recommendations for the legal use of face recognition. It suggested that only the state police be allowed to use the RMV database for face matching during a felony investigation, and only with a warrant. The state police would also be able to request that the FBI run a face recognition search.

Of the commission’s 21 members, 15 approved the recommendations, including Crockford. Two abstained, and four dissented. Most of the police members of the commission voted no. 

One of them, Norwood Police Chief William Brooks, told MIT Technology Review there were three major things he disagreed with in the recommendations: requiring a warrant, restricting use of the technology to felonies only, and preventing police from accessing face recognition databases outside those of the RMV and the FBI. 

Brooks says the warrant requirement “makes no sense” and “would protect no one,” given that the law already requires a court order to use face recognition technology. 

“A search warrant is obtained when the police want to search in a place where a person has an expectation of privacy. We’re not talking about that here. We’re just talking about what their face looks like,” he says.

Other police groups and officers serving on the commission, including the Massachusetts public safety office, the Boston Police Patrolmen’s Association, and the Gloucester Police Department, have not responded to our multiple requests for comment. 

An unsatisfying compromise 

After years of discussion, debate, and compromise, in July 2022 the Massachusetts commission’s recommendations were codified into an amendment that has already been passed in the state house of representatives and may come to a vote via a bill in the state senate any day. 

The bill allows image matching, which looks to retroactively identify a face by finding it in a database of images, in certain cases. But it bans two other types of face recognition: face surveillance, which seeks to identify a face in videos and moving images, and emotion recognition, which tries to assign emotions to different facial expressions. 

This more subtle approach is reminiscent of the path that EU lawmakers have taken when evaluating the use of AI in public applications. That system uses risk tiers; the higher the risks associated with a particular technology, the stricter the regulation. Under the proposed AI Act in Europe, for example, live face recognition on video surveillance systems in public spaces would be regulated more harshly than more limited, non-real-time applications, such as an image search in an investigation of a missing child. 

Eldridge says he expects resistance from prosecutors and law enforcement groups, though he is “cautiously optimistic” that the bill will pass. He also says that many tech companies lobbied during the commission hearings, claiming that the technology is accurate and unbiased, and warning of an industry slowdown if the restrictions pass. Hoan Ton-That, CEO of Clearview, told the commission in his written testimony that “Clearview AI’s bias-free algorithm can accurately find any face out of over 3 billion images it has collected from the public internet.”

Crockford and Eldridge say they are hopeful the bill will be called to a vote in this session, which lasts until July 2024, but so far, no such vote has been scheduled. In Massachusetts, like everywhere else, other priorities like economic and education bills have been getting more attention. 

Nevertheless, the bill has been influential already. Earlier this month, the Montana state legislature passed a law that echoes many of the Massachusetts requirements. Montana will outlaw police use of face recognition on videos and moving images, and require a warrant for face matching. 

The real costs of compromise 

Not everyone is thrilled with the Massachusetts standard. Police groups remain opposed to the bill. Some activists don’t think such regulations are enough. Meanwhile, the sweeping face recognition laws that some anticipated on a national scale in 2020 have not been passed. 

So what happened between 2020 and 2023? During the three years that Massachusetts spent debating, lobbying, and drafting, the national debate moved from police reform to rising crime, triggering political whiplash. As the pendulum of public opinion swung, face recognition became a bargaining chip between policymakers, police, tech companies, and advocates. Perhaps most important, we also grew accustomed to face recognition technology in our lives and public spaces.  

Law enforcement groups nationally are becoming increasingly vocal about the value of face recognition to their work. For example, in Austin, Texas, which has banned the technology, Police Chief Joseph Chacon wishes he had access to it in order to make up for staffing shortages, he told MIT Technology Review in an interview. 

Some activists, including Caitlin Seeley George, director of campaigns and operations at Fight for the Future, say that police groups across the country have used similar arguments in an effort to limit face recognition bans.  

“This narrative about [an] increase in crime that was used to fight the defund movement has also been used to fight efforts to take away technologies that police argue they can use to address their alleged increasing crime stats,” she says. 

Nationally, face recognition bans in certain contexts, and even federal regulation, might be on the table again as lawmakers grapple with recent advances in AI and the attendant public frenzy about the technology. In March, Senator Markey and colleagues reintroduced the proposal to limit face recognition at a federal level. 

But some advocacy groups still disagree with any amount of political compromise, such as the concessions in the Montana and Massachusetts bills.  

“We think that advocating for and supporting these regulatory bills really drains any opportunity to move forward in the future with actual bans,” says Seeley George. “Again, we’ve seen that regulations don’t stop a lot of use cases and don’t do enough to limit the use cases where police are still using this technology.” 

Crockford wishes a ban had been politically feasible: “Obviously the ACLU’s preference is that this technology is banned entirely, but we get it … We think that this is a very, very, very compromised common-sense set of regulations.”

Meanwhile, some experts think that some activists’ “ban or nothing” approach is at least partly responsible for the current lack of regulations restricting face recognition. Andrew Guthrie Ferguson, a law professor at American University Washington College of Law who specializes in policing and tech, says outright bans face significant opposition, and that’s allowed continued growth of the technology without any guardrails or limits.

Face recognition abolitionists fear that any regulation of the technology will legitimize it, but the inability to find agreement on first principles has meant regulation that might actually do some good has languished. 

Yet throughout all this debate, facial recognition technology has only grown more ubiquitous and more accurate.

In an email to MIT Technology Review, Ferguson said, “In pushing for the gold standard of a ban against the political forces aligned to give police more power, the inability to compromise to some regulation has a real cost.”

Introducing MIT Technology Review Roundtables, real-time conversations about what’s next in tech

On August 10, MIT Technology Review is launching Roundtables, a participatory subscriber-only online event series, to keep you informed about emerging tech.

Subscribers will get exclusive access to 30-minute monthly conversations with our writers and editors about topics they’re thinking deeply about—including artificial intelligence, biotechnology, climate change, tech policy, and more. (If you’re not yet a subscriber, become one today and save up to 17%.)

The first Roundtables event, The AI economy, will feature David Rotman, MIT Technology Review editor at large, in conversation with editor in chief Mat Honan. They will discuss David’s recent coverage of the economic implications of large language models like ChatGPT and US efforts to reshore the chip industry and, more broadly, to create innovation hubs. 

There is little doubt that generative AI will affect the economy—but how, exactly, remains an open question. Despite fears that these AI tools will upend jobs and exacerbate wealth inequality, early evidence suggests the technology could help level the playing field—but only if we deploy it in the right ways. Likewise, the Inflation Reduction Act and the Chips Act both have huge implications for the economy, and for efforts to revive America’s high-tech manufacturing base. Rotman and Honan will look at who stands to benefit from these transformative economic events, and what the risks are. 

Then, on September 12, our next edition of Roundtables will tackle another important question: How should we regulate AI? Charlotte Jee, news editor, and Melissa Heikkilä, senior reporter for AI, will discuss the state of AI regulation today and what to watch for in the months ahead.

Europe’s AI Act focuses on creating guardrails for “high-risk” AI used in health care and education systems. In the US, a patchwork of federal regulations and state laws govern certain aspects of automated systems, while work on a federal framework remains in the early stages. Meanwhile, the OECD has set forth a set of nonbinding principles for AI development, and new industry standards are also taking shape. Heikkilä and Jee will walk subscribers through these and other approaches, mapping out the landscape of proposed policies that aim to redirect AI toward serving societal goals or address potential biases that put people at risk. 

If you’re a subscriber, check your email for details on how to register for both events. (Or subscribe now to save up to 17%.) We hope you’ll join us as we explore what’s happening now and what’s coming next in emerging technologies.

How tech companies got access to our tax data

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

You might think (or at least hope) that sensitive data like your tax returns would be kept under close care. But we learned this week that tax prep companies have been sharing millions of taxpayers’ sensitive personal information with Meta and Google, some for over a decade. 

The tax companies shared the data through tracking pixels, which are used for advertising purposes, an investigative congressional report revealed on Wednesday. Many of them say they have removed the pixels, but it’s not clear whether some sensitive data is still being held by the tech companies. The findings expose the significant privacy risks that advertising and data sharing pose, and it’s possible that regulators might actually do something about it.

What’s the story? In November 2022, the Markup published an investigation into tax prep companies including TaxAct, TaxSlayer, and H&R Block. It found that the sites were sending data to Meta through Meta Pixel, a commonly used piece of computer code often embedded in websites to track users. The story prompted a congressional probe into the data practices of tax companies, and that report, published Wednesday, showed that things were much worse than even the Markup’s bombshell reporting suggested. 

The tech companies had access to very sensitive data—like millions of people’s incomes, the size of their tax refunds, and even their enrollment status in government programs—dating back as early as 2011. Meta said it used the data to target ads to users on its platforms and to train its AI programs. It seems Google did not use the information for its own commercial purposes as directly as Meta, though it’s unclear whether the company used the data elsewhere, an aide to Senator Elizabeth Warren told CNN.

Experts say that both tax prep and tech companies could face significant legal consequences, including private lawsuits, challenges from the Federal Trade Commission, and even criminal charges from the US federal government.

What are tracking pixels? At the center of the controversy are tracking pixels: bits of code that many websites embed to learn more about user behavior. Some of the most commonly used pixels are made by Google, Meta, and Bing. Websites that use these pixels to collect information about their own users often end up sharing that data with big tech companies.

The data collected can include where users click, what they type, and how long they scroll. Highly sensitive information can be gleaned from those sorts of activities, and it can be used to target ads according to what you might be interested in.

Pixels allow websites to communicate with advertising services across websites and devices, so that an ad provider can learn about a user. They are different from cookies, which store information about you, your computer, and your behavior on each website you visit.  
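For readers who want to see the mechanics, here is a minimal sketch in Python of how a tracking pixel can work: a tiny server that returns an invisible 1×1 GIF and logs whatever data the embedding website appends to the image URL. The endpoint, parameters, and event names below are invented for illustration—real pixels from Meta, Google, or Bing are JavaScript-backed and far more sophisticated—but the basic data flow is the same.

```python
# A minimal, hypothetical sketch of a tracking-pixel endpoint (illustration only).
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A 1x1 transparent GIF: the "pixel" the visitor's browser silently downloads.
TRANSPARENT_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever the embedding website put in the image URL arrives here:
        # the page visited, a button clicked, even form values.
        query = parse_qs(urlparse(self.path).query)
        print("Tracking data received:", query)

        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(TRANSPARENT_GIF)

if __name__ == "__main__":
    # A website embeds the pixel with something like (hypothetical URL and fields):
    #   <img src="http://tracker.example:8000/px?page=refund_estimate&refund=1250">
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```

The key point is that the browser fetches the image automatically whenever the page loads, so the data in the URL reaches the third party without the visitor doing anything—which is why users rarely know it is happening.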

So what are the risks? These tracking pixels are everywhere, and many ads served online are placed at their direction. They contribute to the dominant economic model of the internet, which encourages data collection in the interest of targeted advertising and hyper-personalization online. Often, users don’t know that websites they visit have pixels. In the past, privacy advocates have warned about pixels collecting user data about abortion access, for example.

“This ecosystem involves everything from first-party collectors of data, such as apps and websites, to all the embedded tracking tools and pixels, online ad exchanges, data brokers, and other tech elements that capture and transmit data about people, including sensitive data about health or finances, and often to third parties,” Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy, wrote to me in an email.

“The underlying thread is the same: consumers may be more aware of how much data a single website or app or platform gathers directly, but most are unaware about just how many other companies are operating behind the scenes to gather similar or even more data every time they go online.”

(P.S. The Markup has a great explainer on how you can see what your company is sending to Meta through tracking pixels! Take a read here.)

What else I’m reading

  • The FTC is taking on OpenAI, according to a document first published by the Washington Post on Thursday. The agency opened an investigation into the maker of ChatGPT and is demanding records covering its security practices, AI training methods, and use of personal data. The investigation poses the first major regulatory challenge to OpenAI in the US, and I’ll be watching closely. Sam Altman, the CEO, doesn’t seem to be sweating too much, at least publicly. He tweeted that “we are confident we follow the law.”
  • Speaking of the FTC, its chair, Lina Khan, who has enthusiastically taken on Big Tech antitrust cases, was called in front of Congress this week. She faced harsh criticism from some Republican lawmakers for “harassing” businesses and pursuing antitrust suits that the agency has lost. Khan has had a tough go lately. The latest loss came on Tuesday, when a judge ruled against the agency’s attempt to prevent Microsoft’s $69 billion acquisition of gaming company Activision. 
  • I love this take on the rapid rise of Threads, the Twitter clone put out by Meta, from the Atlantic’s Caroline Mimbs Nyce. She writes, “Many users may not be excited to be on Threads, exactly—it’s more that they’re afraid not to be.” I’ve resisted joining for now, but I certainly feel some FOMO. 

What I learned this week

China is fighting back against US export restrictions on its computer chips and semiconductors, my colleague Zeyi Yang explains in a piece published this week. At the beginning of July, China announced a new restriction on the export of gallium and germanium, two elements used in producing chips, solar panels, and fiber optics. 

Although the move itself won’t necessarily have a ton of impact, Zeyi writes that this might just be the start of Chinese countermeasures, which could include export restrictions on rare-earth elements or materials in electric-vehicle batteries, like lithium and cobalt. “Because these materials are used in much greater quantities, it’s more difficult to find a substitute supply in a short time. They are the real trump card China may hold at the future negotiation table.”

The US-China chip war is still escalating

This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

The temperature of the US-China tech conflict just keeps rising.

Last week, the Chinese Ministry of Commerce announced a new export license system for gallium and germanium, two elements that are used to make computer chips, fiber optics, solar cells, and other tech devices.

Most experts see the move as China’s most significant retaliation against the West’s semiconductor tech blockade, which expanded dramatically last October when the US limited the export to China of the most cutting-edge chips and the equipment capable of making them. 

Earlier this year, China responded by putting Raytheon and Lockheed Martin on a list of unreliable entities and banned domestic companies from buying chips from the American company Micron. Yet none of these moves could rival the global impact of the gallium/germanium export control. By putting a chokehold on these two raw materials, China is signaling that it, in turn, can cause pain for the Western tech system and push other countries to rethink the curbs they put on China.

But as I reported yesterday, China’s new export controls may not have much long-term impact. “Export control is not as effective if the technologies are available in other markets,” Sarah Bauerle Danzman, an associate professor of international studies at Indiana University Bloomington, told me. Since the technology to produce gallium and germanium is very mature, it won’t be too hard for mines in other countries to ramp up their production, although it will take time, investment, policy incentives, and maybe technological improvement to make the process more environmentally friendly.

So what happens now? Half of 2023 is now behind us, and even though there have been a few diplomatic events showing the US-China relationship warming up, like trips to China made by US officials Antony Blinken and Janet Yellen, the tensions on the technological front are only getting worse.

When the US instituted its chip-related export restrictions in October, it wasn’t clear how much of an impact they would have, because the US doesn’t control the entirety of the semiconductor supply chain. Analysts said one of the biggest outstanding questions was the extent to which the US could persuade its allies to join the blockade. 

Now the US has managed to get the key players on board. In May, Japan announced that it is limiting the export of 23 types of equipment used in a variety of chipmaking processes. It even went further than the original US rules. The US limited the export of tools for making the most cutting-edge chips—those of the 14-nanometer generation and under. Japan’s restrictions extend to older, less-advanced chip generations (all the way to the 45-nanometer level), which has the Chinese semiconductor industry worried that production of basic chips used in everyday products, like cars, will also be affected.

At the end of June, the Netherlands followed suit and announced that it will limit the export to China of deep ultraviolet (DUV) lithography machines used to pattern chips. That’s also an escalation of the previous rules, which since 2019 had only limited export of the most advanced extreme ultraviolet (EUV) lithography machines.

These expanding restrictions likely prompted China to take a page from its enemies’ playbook by instituting the controls on gallium and germanium. 

Yellen’s visit last week shows that this back-and-forth retaliation between China and the US-led bloc is not ending anytime soon. Both Yellen and the Chinese leaders expressed their concern at the meeting about the other side’s export controls, yet neither said anything about backing down. 

If more aggressive actions are taken soon, we may see the tech war expand out of the semiconductor field to involve things like battery technologies. As I explained in my piece on Monday, that’s where China would have a larger advantage.

Do you believe the technological tensions between the US and China will worsen from here? Let me know your thoughts at zeyi@technologyreview.com.

Catch up with China

1. Tesla is laying off some battery manufacturing workers in China as a result of the cutthroat electric-vehicle price competition in the country. (Bloomberg $)

2. China’s top EV maker, BYD, is building three new factories in Brazil to make batteries, EVs, and hybrid cars. They will be built at the location of an old Ford plant. (Quartz)

3. Shenzhen, the city often seen as the Silicon Valley of China, is facing population decline for the first time in decades. (Nikkei Asia $)

4. Five people were arrested by the Hong Kong police for involvement in creating an online shopping app to map out local businesses that support the pro-democracy movement. (Hong Kong Free Press)

5. There’s now an official app for learning how to do journalism in China—with online courses on the Marxist view of journalism, why the party needs to control the press, and how to be an “influencer-style journalist.” (China Media Project)

6. During her visit, Yellen sat down for dinner with six female Chinese economists. Then they were called traitors online. (Bloomberg $)

7. A new study says a rapidly growing number of scientists of Chinese descent have left the US since 2018, the year the US Department of Justice launched its “China Initiative.” (Inside Higher Ed). An investigation of the initiative by MIT Technology Review published in late 2021 showed it had shifted its focus from economic espionage to “research integrity.” The initiative was officially shut down in 2022.

8. Threads, the new Twitter competitor released by Meta, hit the top five on Apple’s China app store even though Chinese users have to access the platform with a VPN. (TechCrunch)

Lost in translation

On July 5, the famous Hong Kong singer CoCo Lee died by suicide after having battled depression for several years. The tragic incident again highlighted the importance of depression treatment, which is often inaccessible in China. As the Chinese publication Xin Kuai Bao reported, fewer than 10% of patients diagnosed with depression in China have received any kind of medical treatment. 

But in recent years, as several patents for popular Western brand-name depression drugs have expired, Chinese pharmaceutical companies have ramped up their production of local generic alternatives. There’s also a fierce race to invent home-grown treatments. Last November, the first domestically designed depression drug was approved for sale in China, marking a new era for the industry. There are 17 more domestic treatments in trials right now.

One more thing

Every time high-profile US visitors come to China, Chinese social media always fixates on one thing: what they ate. Apparently, Janet Yellen is a fan of the wild mushrooms from China’s southwest border, which her group ordered four times in one dinner. The specific mushroom, called Jian Shou Qing in China, is also known for having psychedelic effects if not cooked properly. Now the restaurant is cashing in by offering Yellen’s dinner choices as a set, branded the “God of Money” menu, according to Quartz.

Why everyone is mad about New York’s AI hiring law

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Last week, a law about AI and hiring went into effect in New York City, and everyone is up in arms about it. It’s one of the first AI laws in the country, and so the way it plays out will offer clues about how AI policy and debate might take shape in other cities. AI hiring regulation is part of the AI Act in Europe, and other US states are considering bills similar to New York’s. 

The use of AI in hiring has been criticized for the way it automates and entrenches existing racial and gender biases. AI systems that evaluate candidates’ facial expressions and language have been shown to prioritize white, male, and able-bodied candidates. The problem is massive: many companies use AI at least once during the hiring process. US Equal Employment Opportunity Commission chair Charlotte Burrows said in a meeting in January that as many as four out of five companies use automation to make employment decisions. 

NYC’s Automated Employment Decision Tool law, which came into force on Wednesday, says that employers who use AI in hiring have to tell candidates they are doing so. They will also have to submit to annual independent audits to prove that their systems are not racist or sexist. Candidates will be able to request information from potential employers about what data is collected and analyzed by the technology. Violations will result in fines of up to $1,500.

Proponents of the law say that it’s a good start toward regulating AI and mitigating some of the harms and risks around its use, even if it’s not perfect. It requires that companies better understand the algorithms they use and whether the technology unfairly discriminates against women or people of color. It’s also a fairly rare regulatory success when it comes to AI policy in the US, and we’re likely to see more of these specific, local regulations. Sounds sort of promising, right?

But the law has been met with significant controversy. Public interest groups and civil rights advocates say it isn’t enforceable or extensive enough, while businesses that will have to comply with it argue that it’s impractical and burdensome. 

Groups like the Center for Democracy & Technology, the Surveillance Technology Oversight Project (S.T.O.P.), the NAACP Legal Defense and Educational Fund, and the New York Civil Liberties Union argue that the law is “underinclusive” and risks leaving out many uses of automated systems in hiring, including systems in which AI is used to screen thousands of candidates.

What’s more, it’s not clear exactly what independent auditing will achieve, as the auditing industry is currently so immature. BSA, an influential tech trade group whose members include Adobe, Microsoft, and IBM, filed comments to the city in January criticizing the law, arguing that third-party audits are “not feasible.” 

“There’s a lot of questions about what type of access an auditor would get to a company’s information, and how much they would really be able to interrogate about the way it operates,” says Albert Fox Cahn, executive director of S.T.O.P. “It would be like if we had financial auditors, but we didn’t have generally accepted accounting principles, let alone a tax code and auditing rules.” 

Cahn argues that the law could produce a false sense of security and safety about AI and hiring. “This is a fig leaf held up as proof of protection from these systems when in practice, I don’t think a single company is going to be held accountable because this was put into law,” he says. 

Importantly, the mandated audits will have to evaluate whether the output of an AI system is biased against a group of people, using a metric called an “impact ratio” that determines whether the tech’s “selection rate” varies across different groups. The audits won’t have to examine how an algorithm actually makes its decisions, and the law skirts around the “explainability” challenges of complex forms of machine learning, like deep learning. As you might expect, that omission is also a hot topic for debate among AI experts.
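To make that metric concrete, here is a rough sketch, using invented numbers, of the selection-rate and impact-ratio arithmetic an auditor might run. The group names and figures are hypothetical, and the law’s actual rules (and the related federal “four-fifths” guideline often used as a benchmark) add more nuance than this.

```python
# A hypothetical impact-ratio calculation with invented numbers (illustration only).
# Selection rate = share of a group's candidates the tool advances;
# impact ratio = a group's selection rate divided by the highest group's rate.
candidates = {
    # group: (candidates screened by the tool, candidates advanced)
    "group_a": (200, 60),
    "group_b": (180, 36),
    "group_c": (120, 18),
}

selection_rates = {
    group: advanced / screened
    for group, (screened, advanced) in candidates.items()
}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / best_rate
    flag = "  <-- flagged under a 0.8 rule of thumb" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")
```

Note that an audit like this tells you whether outcomes differ across groups, not why—which is exactly the explainability gap critics point to.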

In the US we’re likely to see much more AI regulation of this sort—local laws that take on one particular application of the technology—as we wait for federal legislation. And it’s in these local fights that we can understand how AI tools, safety mechanisms, and enforcement are going to be defined in the decades ahead. Already, New Jersey and California are considering similar laws.

(Read more of our coverage on AI and hiring here.)

What I am reading this week

  • Celebrities riding the crypto wave are now dealing with the crash of reality, and some, like Tom Brady, are in hotter water than others. I loved this New York Times feature about the “humiliating reckoning facing the actors, athletes and other celebrities who rushed to embrace the easy money and online hype of cryptocurrencies,” as authors Erin Griffith and David Yaffe-Bellany write. 
  • Russia is vying for more geopolitical control over the internet, writes David Ignatius in his latest Washington Post opinion column. Russia submitted a resolution ahead of the United Nations meeting in Geneva that would address how the internet is governed globally. This wonky space is actually quite interesting to watch, as China and Russia have both tried to rewrite global digital rules. 
  • A US federal judge has blocked Biden administration officials from contacting social media sites, saying that their attempts to remove and report online content (such as misinformation) would violate the First Amendment. The ruling, which will most certainly be challenged, feels more like a political play than a meaningful change to content moderation policy. 

What I learned this week

Syracuse, New York, is poised for an economic turnaround thanks to a new semiconductor manufacturing facility for chipmaker Micron and a $100 billion investment. The funding is part of President Biden’s plan to revitalize domestic industrial policy with the help of tech jobs. 

David Rotman, our editor at large, writes, “Now Syracuse is about to become an economic test of whether, over the next several decades, the aggressive government policies—and the massive corporate investments they spur—can both boost the country’s manufacturing prowess and revitalize regions like upstate New York.” 

It’s a phenomenal story, and I’d highly recommend you take the time to read it this weekend! 

The $100 billion bet that a postindustrial US city can reinvent itself as a high-tech hub 

For now, the thousand acres that may well portend a more prosperous future for Syracuse, New York, and the surrounding towns are just a nondescript expanse of scrub, overgrown grass, and trees. But on a day in late April, a small drilling rig sits at the edge of the fields, taking soil samples. It’s the first sign of construction on what could become the largest semiconductor manufacturing facility in the United States.

Spring has finally come to upstate New York after a long, gray winter. A small tent is set up. A gaggle of local politicians mill around, including the county executive and the supervisor of the town of Clay, some 15 miles north of Syracuse, where the site is located. There are a couple of local news reporters. If you look closely, the large power lines that help make this land so valuable are visible just beyond a line of trees.

Then an oversize black SUV with the suits drives up, and out steps $100 billion.

The CHIPS and Science Act, passed last year with bipartisan congressional support, was widely viewed by industry leaders and politicians as a way to secure supply chains, bolster R&D spending, and make the United States competitive again in semiconductor chip manufacturing. But it also intends, at least according to the Biden administration, to create good jobs and, ultimately, widen economic prosperity.

Now Syracuse is about to become an economic test of whether, over the next several decades, the aggressive government policies—and the massive corporate investments they spur—can both boost the country’s manufacturing prowess and revitalize regions like upstate New York. It all begins with an astonishingly expensive and complex kind of factory called a chip fab. 

Micron, a maker of memory chips based in Boise, Idaho, announced last fall that it plans to build up to four of these fabs, each costing roughly $25 billion, at the Clay site over the next 20 years. And on this April day, standing under the tent, CEO Sanjay Mehrotra conjures a vision for what the $100 billion investment will mean: “Imagine this site, which has nothing on it today, will have four major buildings 20 years from now. And each of these buildings will be the size of 10 football fields, so a total of 40 football fields worth of clean-room space.” The fabs will create 50,000 jobs in the region over time, including 9,000 at Micron, he has pledged—“so this is really going to be a major transformation for the community.” 

For any city, a $100 billion corporate investment is a big deal, but for Syracuse, it promises a reversal of fortune. Sitting at the northeast corner of the Rust Belt, Syracuse has been losing jobs and people for decades as its core manufacturing facilities shut down—first GE and more recently Carrier, which once employed some 7,000 workers at its East Syracuse plant.

According to Census data, Syracuse now has the highest child poverty rate among large US cities; it has the second-highest rate of families living on less than $10,000 a year.   

An abandoned building in Syracuse, which has lost most of its legacy manufacturing.
KATE WARREN

Syracuse, of course, is not alone in its postindustrial malaise. The nation’s economy is increasingly driven by high-tech industries, and those jobs and the resulting wealth are largely concentrated in a few cities; Boston, San Francisco, San Jose, Seattle, and San Diego accounted for more than 90% of US innovation-sector growth from 2005 to 2017, according to a report by the Brookings Institution. Without these high-tech jobs and with conventional manufacturing long gone as an economic driver, Rust Belt cities like Detroit, Cleveland, Syracuse, and nearby Rochester now top the list of the country’s poorest cities. 

The Micron investment will flood billions into the local economy, making it possible to finally upgrade the infrastructure, housing, and schools. It will also, if all goes according to plan, anchor a new semiconductor manufacturing hub in central New York at a time when the demand for chips, especially the type of memory chips that Micron plans to make in Clay, is expected to explode given the essential role they play in artificial intelligence and other data-driven applications.

It is, in short, an attempt to turn around a region that has struggled economically for decades. And the project’s success or failure will be an important indicator of whether the US can leverage investments in high tech to reverse years of soaring geographic inequality and all the social and political unrest that it has brewed.

Billions for fabs

In many ways, the Micron investment is an on-the-ground trial for the recent US embrace of industrial policy—government interventions that favor particular sectors and regions of the country. Over the last two years, the US government has allocated hundreds of billions to supporting everything from new chip fabs to a slew of battery manufacturing plants throughout the country. Micron, for one, says it would not be building in the US without the funding it expects from the CHIPS and Science Act, which designated $39 billion for support of domestic semiconductor manufacturing and another $13.2 billion for semiconductor R&D and workforce development.

While semiconductors were invented in the US, these days it fabricates only about 12% of the global supply; Taiwan and South Korea dominate the market. For DRAM (dynamic random-access memory) chips, the kind that Micron plans to build in Syracuse, the state of domestic manufacturing is particularly bad. Fewer than 2% of DRAM chips are made in the US. Even US-headquartered Micron, which is one of three companies that control the DRAM market, makes most of its chips in Taiwan, Japan, and Singapore.

It costs roughly 40% more to make chips in the US than in Asia, owing to differences in construction and labor costs and government incentives. The money in the CHIPS Act is meant to make it financially attractive to build fabs in the US once again.

Some of that money is going to places where chip manufacturing is well established: Taiwan Semiconductor Manufacturing Company (TSMC) is investing $40 billion in new fabs in Phoenix, Arizona, and Intel is building fabs in nearby Chandler. But other projects, including a $20 billion pair of fabs Intel is building near Columbus, Ohio, and Micron’s project in Syracuse, will break ground on new locations for chip manufacturing, potentially creating centers of economic activity around the large investments.

The intention of the CHIPS Act, says Mark Muro, a senior fellow at Brookings, is not just to support building “a big box” to make semiconductors but to help create regional economic clusters around the investments. After years of growing inequality between different parts of the country, he says, this strategy reflects a renewed emphasis on so-called place-based economic policies to support the local development of high-tech manufacturing. 

Burnet Road in Clay, New York, where the Micron fab will be built, is still rural—but that could soon change.
KATE WARREN

Predictably, states are aggressively competing for the investments; New York attracted Micron with a staggering $5.8 billion in economic development incentives. But the billions of dollars flowing into Syracuse come with uncertainty. Will this lead to sustainable economic transformation? Or will the massive amounts of money simply provide a temporary burst of growth and jobs for some, leaving many in the community behind and causing a severe case of buyer’s remorse for the city and state?

The incentives that were offered to lure Micron represent “a wild, wild amount of money,” says Nathan Jensen, a professor of government at the University of Texas at Austin. 

While the Micron investment will likely bring good jobs and could be a great opportunity for a distressed city, he says, local and state leaders will need to manage multiple risks over the long term. Corporate strategies can change, and 20 years is a long time to bet on growing market demand for a specific technology. What’s more, says Jensen, by offering generous tax breaks to companies, state and local communities can limit their sources of revenues in the coming decades, even as—if all goes well—they deal with booming demand for housing, roads, and schools. He calls it the “winner’s curse.”

The challenge for Syracuse is that there are no “hard-and-fast recipes” for how to get it right, says Maryann Feldman, a professor of public policy at Arizona State University. “We think like we have an economic development sausage machine,” she says. “You line up a bunch of factors and, voilà, you have a productive and growing economy. It’s much more difficult than that.” 

Risky business

When Ryan McMahon became county executive of Onondaga County, in 2018, the long-imagined industrial park in Clay was languishing. Previous county executives had promoted it as the perfect location for a semiconductor fab. But for two decades there had been no takers. McMahon decided to go all in, pouring millions into expanding and upgrading the site.

His timing couldn’t have been better. Even before the CHIPS Act was passed last summer, semiconductor manufacturers had begun scouting sites in the US to expand. TSMC and Intel both sniffed around Clay, says McMahon, before choosing other sites. Preliminary talks began with Micron, but it all depended on whether the act got passed.

Once that happened, the Micron deal was done. In late October, President Biden went to Syracuse to celebrate what he called “one of the most significant investments in American history.”

The business of memory chips, such as the DRAM chips that Micron will make in Clay, is a notoriously competitive one with very low margins. Like their more glamorous cousins, the logic chips made by Intel and TSMC, they are immensely complex and expensive to make: the process involves cramming billions of transistors onto each thumb-size chip with a precision of a few atoms. To survive, companies have to run their fabs continuously, with remarkable efficiency and yields.

The technical and market demands make finding a suitable site difficult. Micron says it chose the site in Clay because of its size, access to clean power, and abundance of water (by some estimates, large chip fabs use up to 10 million gallons a day). The transmission lines running through it draw power from a huge hydroelectric plant at Niagara Falls and nuclear plants on Lake Ontario. And the lake, with its nearly endless supply of water, is less than 30 miles away.

City of Syracuse employees fence in areas alongside an overpass.

The Micron investment, including the $250 million the company has committed to a community fund, could help the city repair its crumbling infrastructure.

“There are very few sites, frankly, in the country that were ready on our timeline,” says Manish Bhatia, Micron’s executive vice president of global operations. Bhatia also points to the area’s manufacturing legacy, which despite being “hollowed out over the last 20 years” has left a “tremendous pool of engineering talent.” Throw in the generous incentives from the state and the company was sold, he says.

Micron’s ambitious expansion plans for the next few decades are fueled in part by anticipated demand from artificial intelligence, as well as increased use of memory in automotive applications and data centers. “AI is all about memory,” says Bhatia. “It needs larger and larger data sets to be able to glean the insights.” And more data means more memory. 

Construction of the first fab is scheduled to begin in 2024, but it isn’t expected to come fully online until the latter half of the decade. Further expansion is planned but will depend on the demand for the memory chips. Another fab could begin operations by the mid-2030s; after that, two more fabs are on the table, if the market allows.

Micron projects that it will eventually hire 9,000 people to work at the fabs, with roughly 3,000 of those jobs needed for its initial build-out. And it says as many as 41,000 additional jobs will be created in other businesses, from companies supplying the fabs with materials and maintenance to restaurants meeting the needs of the growing workforce. 

The fabs will require workers with a wide range of skill sets: electrical engineers, along with a roughly equal number of technicians without college degrees but with specialized training. That means large investments in the area’s vocational schools, community colleges, and universities.

In response to the Micron investment, Syracuse University plans to expand funding for its College of Engineering and Computer Science by 50% over the next five years or so. While some graduates will surely go to work at Micron, the goal is more broadly to train people with a wide range of skills and expertise, from materials science to automation, in hopes that the investment in the fabs will seed a booming local high-tech community.

The Micron investment could mean plenty of jobs for skilled workers, such as these apprentices in training at the local electrical union.
KATE WARREN

“This is a fascinating natural experiment,” says Mike Haynie, vice chancellor for strategic initiatives and innovation at Syracuse University. “Industry left here largely 25 years ago, and the economy, to a large extent, has been sustained by health care and colleges—it’s essentially what’s driven the economy.” Now, says Haynie, “all of a sudden you insert this $100 billion high-tech investment into the regional economy and see what happens.”

Until now, he says, “we have not been able to authentically look an engineering or computer science student in the face and say, ‘There’s a reason for you to stay in central New York.’”

Going bad

If Syracuse and the surrounding towns want a lesson on how not to do economic development, they just need to drive 150 miles down the thruway to Buffalo.

In 2012, Governor Andrew Cuomo announced the Buffalo Billion, an ambitious redevelopment initiative intended to revive the distressed city. The star project in the Buffalo Billion was an effort to create a clean-tech hub by spending $750 million to build and equip a massive manufacturing facility for SolarCity, a Silicon Valley–based company that financed and installed solar panels.

SolarCity promised it would produce a gigawatt of solar panels by 2017, creating 3,000 jobs in the city, including 1,500 manufacturing jobs at the plant. The so-called gigafactory would be the largest solar panel manufacturer in the Western Hemisphere, the company boasted. 

In the late spring of 2015, I visited SolarCity’s plant as it was being built at the so-called Riverbend site, once the location of a sprawling plant operated by Republic Steel. Less than four miles away from the city’s revitalized downtown waterfront, it seemed like the perfect place to center a new manufacturing economy for Buffalo.

The following years turned out to be pretty much a bust for the solar gigafactory. With SolarCity several billion dollars in debt, Tesla Motors bought the company. Amid much fanfare, Elon Musk, its CEO, announced it would make solar roof tiles—a product others had tried but that had never really caught on. They turned out to be more or less a market flop. Panasonic, which Tesla had originally brought into the plant to help make solar cells at the facility, pulled out in 2020.

Today, Tesla does in fact employ some 1,500 people at the facility, but many don’t work in solar manufacturing, according to local media reports. Rather, many of the jobs involve assembling charging stations for Tesla’s cars and annotating traffic scenes to help train the autonomous features in its vehicles. Without the anticipated boom in solar panel production—the promise of being the largest solar manufacturer in the US is long forgotten—there are few new jobs for suppliers and other companies that expected to support a growing center of manufacturing. 

Tesla’s solar gigafactory in Buffalo, New York, in early 2022.
AP PHOTO/FRANK FRANKLIN II

“The Buffalo Billion has been a failure with a capital F,” says Jim Heaney, editor of the Investigative Post in Buffalo, who has followed the state initiative from its outset. The booming tech hub that the Buffalo Billion was explicitly chartered to create never materialized. Heaney points out that the only apparent spinoff from the investments at the Riverbend site is the Tim Hortons doughnut shop across the street.

In many ways, the plans for the Buffalo Billion violated Economic Development 101. For one thing, SolarCity, which was meant to be the clean-tech manufacturing anchor, was a company that installed residential solar panels; it had little experience in large-scale manufacturing. 

There were broader questions about the state investment. Why build in Buffalo, which has no apparent supply chain for the technology and little local demand for it? (It’s one of the cloudiest cities in the country.) Where was the workforce with the skills to produce solar panels going to come from?  

The key lesson of the Buffalo Billion is not that the solar gigafactory was a waste of taxpayer money, though it probably was, but that government-funded economic policy needs to be done in a way that respects a region’s resources and talents.

Richard Deitz, an economist at the Federal Reserve Bank of New York who is based in Buffalo, contrasts the strategy with the investments the state had previously made in Albany. There, the money went into a nanotech research center and to support an existing semiconductor industry; it created partnerships between businesses, higher education, and the state and local governments. The investments strengthened an existing cluster of expertise around those resources.

“These were very different approaches, and I’d say the one in Buffalo did not work very well,” he says. 

Will the Micron investment change the economic trajectory of upstate New York? It’s the right question, says Deitz, “but I don’t think anybody can tell you the answer.” 

However, he says he’s encouraged by what’s happened in Albany over the past 10 years. “You get a picture of what’s possible,” he says. From 2010 to 2020, Albany added some 4,000 jobs, while Buffalo lost some 25,000, according to Deitz: “It’s not like [Albany is] growing like gangbusters, but it’s doing quite well and it’s reinventing itself.” 

Winning the lottery

The initial injection of money from Micron will inevitably create high-tech jobs and will have what economists like to call a “multiplier effect” as those workers spend their generous salaries at local businesses. But the real, sustainable payoff, says Enrico Moretti, an economist at the University of California, Berkeley, will come if the fabs trigger the creation of a cluster of companies that results in a flourishing of new innovation activity and brings long-term high-tech growth beyond Micron.

Ten years ago, Moretti wrote a book called The New Geography of Jobs showing how the rise of such so-called innovation clusters in a few areas of the US, mostly along the coasts, has led to deep economic inequalities. (Those disparities, Moretti now says, have only gotten worse and more troubling since he wrote the book.) “Innovative industries bring ‘good jobs’ and high salaries to communities,” he wrote. They deliver a far stronger multiplier effect than other employers, even those in manufacturing. But communities without innovation clusters, he wrote, “find it hard to create one” and fall further and further behind.

The trick for Syracuse is not to try to be another Silicon Valley (plenty of others have famously failed at that fool’s errand) or even another Austin, but to use its resources and skills to define its own unique brand of innovation. 

Think Albany but on a far grander scale. 

To demonstrate how important these high-tech clusters are to productivity growth, Moretti recently showed what happened to innovation in Rochester after the fortunes of Kodak began to decline in the late 1960s. The company had helped make Rochester one of the country’s wealthiest cities during the 20th century—but then came the invention of digital photography. Kodak’s business, which by then centered on selling film rather than making cameras, collapsed.

As Moretti documented, the damage to the city was not just the loss of Kodak jobs, but a parallel collapse of its ability to invent new technologies. He found that even non-Kodak inventors, who had nothing to do with the photography business, also became far less productive—as measured by number of patents—after Kodak’s decline. The benefits of a flourishing community of innovators interacting with each other, as well as the legal and financial services that facilitate startups and entrepreneurs, had seemingly left town with Kodak.

Now Syracuse wants to run what happened to Rochester in reverse, hoping a large corporate presence will kick-start its own innovation cluster around semiconductors.

“Syracuse has won the economic development lottery,” says Dan Breznitz, a professor of innovation studies at the University of Toronto. Besides the size of the investment, Micron has a long-term track record in chip manufacturing and a commitment to building its own production capacity. But, Breznitz suggests, the community now needs a pragmatic vision for what the region and its economy will look like in 15 to 20 years, aside from the Micron fabs. 

Having won the lottery, he says, the community and local businesses can say either “We don’t need to worry anymore” or “This is our moment to create a local vision of how we can become an important location for the global semiconductor industry or related industries.”

Shared prosperity?

When I spoke to Kevin Younis in late April, he appeared to be fully aware that he and Syracuse had won the lottery. As chief operating officer of Empire State Development, the agency responsible for promoting economic growth, Younis had helped lead the effort to recruit Micron. Now, sitting outside on the patio of a bustling downtown food market that he had chosen for the meeting, he basked in the recent revival of the city and its potential prospects.

Younis grew up a mile away, and he says the city has slowly been rebounding in recent years. “When I was a kid in the ’80s and for sure in the ’90s, the downtown was emptying out. I would come down with friends to go to the comic-book store, and we’d be the only people down here,” he says. Now, on a late Thursday afternoon, the market, which has kiosks serving food from all over the world, is busy with young families, businesspeople, and 20-somethings grabbing a beer after work. 

New homes for sale. Hopes are high that the Micron facility will help the local real estate market take off.

But it’s that lottery ticket that Younis knows could change everything, helping a city that has been crawling its way back to reach or exceed its old success. Beyond the $100 billion to build the fabs, there is another $70 billion in operational costs, meaning $170 billion that will be spent in central New York over the next 20 years. “It is something like a $15-billion-a-year GDP impact in central New York on average over the next 30 years,” says Younis. (The GDP of the Syracuse metro area is roughly $42 billion now, according to the Federal Reserve Bank of New York.) And that, he says, is probably a conservative estimate.

Younis, however, is definitely not the type of person who wins the lottery and sits around without any worries. “A lot of things keep me up at night,” he admits. Housing. Infrastructure. “Nobody has ever done anything like this at this scale,” he says.

The state is trying to be strategic, he says, pointing to the plan announced earlier this year to open its first Office of Semiconductor Expansion, Management, and Integration. And when he talks about the existing expertise in the region around smart sensors, drones, and automation, one can see the clear threads of the type of strategic vision that the University of Toronto’s Breznitz talks about.

But there is another challenge on Younis’s mind these days, one that feels very personal. It goes back to growing up as one of 12 children in a working-class Syracuse family. “Central New York has among the most entrenched poverty in the nation. Having grown up in that poverty and having an opportunity to change that is a generational opportunity,” he says.

Poverty is all around, he says: “It’s where we’re at—it’s right here. It’s where I grew up. These are among the poorest Census tracts in the nation. Imagine living and raising a family on less than $10,000 a year. That’s insane! That’s what keeps me up at night, where I would feel like I failed if we don’t do something about that.”

Sunset in Syracuse’s Eastwood neighborhood.
KATE WARREN

Perhaps the ultimate test of the Syracuse experiment will be whether, in addition to boosting the opportunities in the largely middle-class suburbs around Clay, the Micron investment also lifts up those living in poverty in the downtown Syracuse neighborhoods that Younis talks about. Can the inevitable economic growth benefit a broad swath of the community? Or will it exacerbate inequality? The results in other booming innovation clusters are not particularly encouraging. Can Syracuse be different?

Robert Simpson, president of the CenterState Corporation for Economic Opportunity and a close collaborator with Younis in recruiting Micron, puts the challenge this way: “Economic growth is no guarantee of a greater measure of shared prosperity. You can grow without improving the quality of life for a lot of people in the region. However, economic growth is a necessary precondition for a greater level of shared prosperity. You need growth—otherwise you’re just redistributing income and wealth from one place to the next. And that gets people understandably upset and nervous.” 

The massive Micron investment, says Simpson, “gives us a chance to do something we have wanted to do for a long time, but we didn’t have the tools to do: bridge the socioeconomic divides that have held our region back.”

It’s a lofty goal that will no doubt be challenged over the coming years. There will be inevitable fights over housing and where and how to invest the hundreds of millions earmarked for community development. There will certainly continue to be skeptics, especially given the state’s hugely generous incentives and the number of years it will take to get the fabs fully up and running.

Transforming a city and its economy is not easy work. It comes with enormous risks. But in many ways, Syracuse has no choice. The great experiment unfolding there is one that the city—indeed, the country—badly needs to succeed.