How Citizen is remaking itself by recruiting elderly Asians

Editor’s note: This is a translation of a story about how the crime-tracking app Citizen has been giving away free subscriptions to elderly Asians in the Bay Area. Find the English language version here.

This story was produced in partnership with the Pulitzer Center’s AI Accountability Network.

When it is dark outside, Josephine Zhao sometimes calls in an extra pair of “eyes,” quite literally, even if she is only walking a few blocks back to her home in San Francisco.

Zhao opens the Citizen app on her phone and, through a feature called live monitoring, connects with one of the platform’s agents. The platform can track Zhao’s GPS location over the network, and with one more tap the agent can be authorized to turn on her phone’s camera, so that the service can “see what I see,” Zhao says. Usually she does not even talk to the agent, but knowing that “someone is walking with me” makes her feel a little safer.

It is one of the newest safety measures Zhao has adopted: she also avoids public transit, and when she walks around the city she keeps a long, pointed device on her keychain. Made of pale pink plastic, it can become a weapon if needed.

But in her view Citizen, a hyperlocal app that lets users report and track alerts about nearby crime, is one of her best forms of protection, a data-driven, do-it-yourself safety measure for a community that has long been overlooked.

“Our needs in education, public safety, housing, and transportation have not been met or cared about. It’s as if we don’t matter,” says Zhao, who is also a substitute teacher and a community liaison for several education nonprofits. “Our needs are not respected, our needs are not met, and people slight us everywhere.”

“I really believe Citizen is a tool for social justice and racial justice.”

“We have to do something to protect our community,” she adds. “Citizen is the perfect tool.”

After a string of race-based attacks in the area, as well as a series of mass shootings targeting Asian residents, many members of Asian American and Pacific Islander (AAPI) communities told MIT Technology Review that they welcome the app, seeing it as an answer to the anxiety that anti-Asian hate has caused them.

For people who have been deeply traumatized, Citizen has become a way to feel safe again.

Citizen’s transformation

That warm reception might seem odd for this particular app, which has long been criticized for amplifying people’s fears about crime and helping white residents practice racial gatekeeping. Citizen, originally launched under the name Vigilante, has a checkered history: Apple pulled it from the App Store within a week of its 2016 launch for violating the company’s app review guidelines, which bar apps that encourage physical harm. In 2021, Citizen’s CEO made headlines when he asked employees to put a $30,000 bounty on a man he had wrongly identified as the person who started a fire in Los Angeles. And the app’s users have frequently been criticized for posting racist comments.

It is against this backdrop that the app is now actively courting users like Zhao. Since September 2022, through events organized with community groups such as the Oakland Chinatown Chamber of Commerce and the Chinese American Association of Commerce in San Francisco, Citizen has been recruiting Chinese and other Asian residents in the Bay Area, including many seniors, who receive a free one-year premium subscription worth $240 when they sign up. (The free version of the app sends users alerts about noteworthy incidents, but the real-time monitoring service with a Citizen employee requires the paid tier.) Zhao now works with Citizen directly, helping translate the app’s interface into Chinese and promoting it within her own networks.

The app’s ultimate goal is to recruit 20,000 new users from the region’s AAPI communities, the equivalent of roughly $5 million worth of one-year paid subscriptions. Darrell Stone, Citizen’s head of product, says 700 people have signed up so far.

The Bay Area program is also a test of a broader makeover of the app, which has managed to attract vulnerable groups that often cannot count on police protection, from Atlanta’s Black transgender community to victims of gang violence in the Chicago area. “I really believe Citizen is a tool for social justice and racial justice,” says Trevor Chandler, who led the app’s Bay Area pilot last year as Citizen’s director of government affairs and public policy.

But some advocates who work with Asian communities in the Bay Area, along with experts who study misinformation among vulnerable populations, question whether this kind of rapid danger-alert technology addresses the core problem: whether it actually makes people safer, rather than just making them feel a bit safer. They also wonder whether the app might sometimes make things worse by amplifying biases against the community, particularly at a time when the pandemic has left Asian communities, locally and nationally, with so much trauma.

“Almost every day, on just about any social media platform, you can see information the app has crowdsourced spreading wildly and rapidly across the entire tech ecosystem, and to me that is simply not normal,” says Kendall Kosai, vice president of public affairs at OCA, a nonprofit that advocates for the social, political, and economic well-being of Asian communities.

He says he has installed Citizen on his own phone and has been startled by the biased comments some users post about certain incidents. “What does this do to the psyche of people in our community?” he asks. “Clearly, all of this could spin out of control very quickly.”

Getting “the right information”

“I’m glad to be using it,” says Alice Kim, 49, who runs an ice cream shop called Joe’s Ice Cream with her husband in San Francisco’s Richmond District, a neighborhood where roughly a third of residents are Asian. Kim says she has recently seen a rise in vandalism and car thefts.

Like many other Asian Americans, the Kims felt that their concerns about safety went unheard for a long time and were largely ignored by local politicians. “It feels like they live in another world,” says Alice’s husband, Sean Kim.

Over the course of a few months in 2021, their shop was hit by three attempted break-ins, and when Alice asked people not to use the restroom, they sometimes threw trash at her or picked fights.

“Every morning when I came to work I would feel a little anxious: has my shop been broken into, am I going to see another smashed window,” Alice told me. “Especially during the pandemic, I felt very tense and unsafe.”

In the fall of 2022, Alice asked Sean to install the Citizen app on her phone; he had long been telling her about its benefits. Sean had been using Citizen since before the app began marketing itself to AAPI communities, and when their friend Zhao offered them a free trial of the premium version, he upgraded without hesitation.

Sean considers Citizen more reliable than other local information apps such as NextDoor, because the information it provides seems to be verified. (In addition to drawing emergency information from various public data sources, Citizen employees say they review user-reported crimes before publishing them.)

“We try to ask people to double-check the information being forwarded in WeChat groups,” because “sometimes that information causes other people to panic.”

“I think more and more people are using Citizen because a lot of people verify the information,” Sean continues. “So at least I know, oh, that wasn’t a gunshot. Without this app, if I heard a gunshot I would have no idea what was going on. I think it’s an effective tool. I know the right information, and that makes me feel safe.”

For Alice, being able to reach an agent through Citizen’s premium features is a way to handle things that may not rise to the level of an actual crime but still make her feel unsafe. On the app’s map, red dots mark reports of serious incidents, such as someone being hit by a car or attacked with a weapon; yellow dots mark milder alerts, such as reports of an armed person or the smell of gas; and gray dots mark noteworthy but non-threatening issues, such as a lost pet.

Like the Kims, many Asian residents of the Bay Area have embraced surveillance because they feel they have been ignored for too long. AAPI community members have organized volunteer patrols in the Chinatowns of San Francisco and Oakland (though the Kims have not taken part). The couple supports a controversial measure that allows police, with owners’ permission, to access private security camera footage within 24 hours. Sean and Alice have also talked with other small-business owners about installing private surveillance cameras, a step that business owners in nearby Oakland’s Chinatown have already taken. To them, Citizen is simply one more tool for keeping a close watch on what is happening around them.

Chandler argues that much of the negative commentary around Citizen misses this point, and that some of its core users, like the Kims, rely on the tool because crime is literally at their doorstep.

“Citizen and its paid version are not a silver bullet. It won’t solve all the world’s problems, and it won’t stop crime everywhere. That’s not what it’s for,” Chandler says. “But the app has become a very powerful way for marginalized communities to make their voices heard.”

“Unfortunately, none of their agents speak Chinese”

“The idea behind Citizen is great. But because of the unique nature of our community, I do look at this with a good-faith skepticism,” says OCA’s Kosai. “One thing I keep thinking about is how accessible it really is to our most vulnerable members.”

He points out that Asian communities in the United States span “50 different ethnicities and 100 different languages,” and that “different communities interact very differently with local law enforcement around these public safety issues.”

For now, Citizen’s interface is available only in English. Jessica Chen, executive director of the Oakland Chinatown Chamber of Commerce, says that to be truly effective it would have to offer its services in Chinese or other Asian languages. (Citizen’s Stone said in an email that the company is “actively investing” in natural-language-processing technology that “will allow us to translate the app into different languages in real time,” but he offered no details or timeline for those efforts.)

On a practical level, it is hard to get members of a community to adopt the same technology when they have very different levels of comfort with tech and access to information, and harder still when English is not their first language. For older adults who are not native English speakers, everything from signing up for the platform to understanding the alerts it sends can be difficult.

“Do I have the time to teach them? And am I the right person to teach them?” Chen asks.

Josephine Hui, 75, has lived in Oakland for 40 years; she is a financial educator who regularly commutes to Chinatown for work. She recently learned about the app, along with several other seniors, at an event hosted by Citizen and organized with the Asian Committee on Crime, a nonprofit focused on safety issues in Oakland, and the Oakland Chinatown Chamber of Commerce. In the app, she saw a public safety presentation from the Oakland Police Department.

Josephine Hui, 75, at a local safety event in Oakland.
LAM THUY VO

“I think Citizen is a great app for anyone out walking on the street,” she says. “Unfortunately, none of their agents speak Chinese.”

Still, she says she is eager to learn how to use the app. During the pandemic she felt isolated and stuck at home, she says, and as attacks on Asian residents mounted, she worried about her own safety.

But before she could use the app, she ran into an obstacle: when she tried to install it, she could no longer remember her Apple account password.

Muddled information

As president of the Oakland Chinatown Chamber of Commerce, Carl Chan has been pushing for more safety measures to protect Chinatown’s residents, and he credits community members with helping spread the word about the app.

For many seniors, though, the app does not operate in their native language, so Chan often has to help them learn how to use it. He worries that if the information is not translated into Chinese, Vietnamese, or other languages, some people may misread Citizen’s alerts. He also worries that without proper training, older users may mistake alerts about other locations for news about their own neighborhood and pass them along on other platforms, and that the spread of such misinformation will create unnecessary fear.

“We try to ask people to double-check the information being forwarded in WeChat groups,” Chan says, because “sometimes that information causes other people to panic.”

Diani Citra, who works on misinformation affecting Asian communities at PEN America, also worries that this dense flow of crime-related information could backfire, making an already traumatized population even more anxious.

Citra says apps like Citizen can help fill an information gap for people living in an “information desert,” whether because mainstream media pays them no attention or because they receive no information in their own language.

“For many marginalized communities, knowing about crime is necessary. We don’t get community information related to our safety. Since no one is providing any information right now, we’re in no position to ask people not to look for it elsewhere,” she says. But using the app can still produce an “amplified sense of danger.”

Although Chandler says Citizen continually verifies the information it publishes, Asian residents pass what they receive there along to a fragmented ecosystem of news sites and social platforms such as WhatsApp, WeChat, and Viber, platforms that are often already awash in misleading and divisive content about anti-Asian hate.

“Something that was an isolated incident could come to be seen as a larger trend.”

For example, according to an August 2022 report on disinformation from the National Council of Asian Pacific Americans and the Disinfo Defense League, a growing number of news aggregation sites are collecting reports of crimes in which the perpetrator is Black and the victim is Asian.

These outlets sometimes rewrite news articles under more provocative headlines, or recirculate old incidents as evidence that mainstream media is underreporting anti-Asian crimes committed by Black people, the report says, often with the goal of pushing anti-Black narratives and weaponizing the victimhood of Asians.

“The lack of coverage of Asian Americans by mainstream media and news organizations has left space for online sources and platforms that bill themselves as pro-Asian,” the report says. “These sources feed problematic narratives built around misogyny, anti-Black racism, and xenophobia.”

There is no evidence yet that this kind of propaganda has gained a foothold on Citizen, but Citra says that elderly Asian users, who are already more susceptible to misinformation and divisive narratives, are more likely to panic when they see crime reports stripped of context. (Citizen did not answer a series of follow-up questions, including questions about possible misinformation on the app.) “Something that was an isolated incident could come to be seen as a larger trend,” Citra warns.

Can Citizen change?

Citizen has been courting AAPI communities at a moment when policing in the United States is already under strain. Many of the communities Citizen is trying to win over distrust police departments or are reluctant to work with them. (Indeed, several organizers told me that many Asian community members avoid calling the police to report incidents.)

“Sometimes we get really excited about creating an immediate solution that makes things a little better, but we don’t think enough about structural, long-term solutions.”

In theory, for people who often feel failed by official government institutions but still face real safety concerns, a technology like Citizen could serve as a useful stepping stone.

Not long ago, though, Citizen was being criticized for creating a “culture of fear” and encouraging a kind of private policing. One former employee described the app’s typical users as people who would post “extremely racist” comments.

Chandler argues that such descriptions overlook the huge base of users of apps like Citizen who may genuinely need what the app offers, a way to track crime nearby, because that is simply their reality: crime happens all around them. In his view, for users who do not have the “privilege” of living in a safe neighborhood, the app can be a powerful way to spread information.

As an example, Chandler points to his work in Chicago. Statistically, he says, the South Side is less safe than the North Side, and some people there have to live with crime as a daily reality. Residents told him they rely on the app to keep their families safe, for example to learn whether a shooting or a car crash has occurred, events that can escalate into larger conflicts.

These Chicago users “aren’t being told by Citizen that they should feel afraid,” Chandler says. “They already feel afraid.”

Trevor Chandler at a safety event for the AAPI community in Oakland.
LAM THUY VO

Through the fall and winter of 2022, Chandler worked with Bay Area politicians and community organizers, and he was in talks with another local mayor and nearby organizations about bringing free Citizen accounts to the Hmong and Vietnamese communities in their areas. Before the end of the year, he pushed for Citizen to expand to Sacramento County, where Asian residents make up a large share of the population.

Looking ahead, though, it is unclear how much the company will keep investing in the program. In early January 2023, Chandler was laid off along with 33 other employees.

“I’m proud that, through our partnerships with community groups, we not only raised awareness of hate crimes against the AAPI community but also offered practical solutions,” Chandler said in a recent text message. “I’m sad that, as a former Citizen employee, I can no longer be part of it.”

Chandler says the company will stand by its commitment to provide 20,000 free paid subscriptions to Asian residents of the Bay Area, and Stone confirms that the company “will continue to promote and support the program.” But Chandler also says he is not sure whether anyone else will keep the project going.

For Kenji Jones, president of Soar Over Hate, an organization that regularly offers self-defense classes to Asian residents of New York City, a sustained commitment to the community is what matters. He was encouraged by Citizen’s push in the Bay Area, and he calls the idea of giving the app’s users an on-call agent “really great.” But he also worries that the free subscriptions last only a year, and that many low-income Asian residents may not be able to renew.

“What happens after that year? This is a for-profit company, so this is about making more money. They are profiting off this community, especially at a moment when this community feels very unsafe. So to me, offering only a one-year trial is pretty unethical,” Jones says.

“Sometimes we get really excited about creating an immediate solution that makes things a little better,” he adds, “but we don’t think enough about structural, long-term solutions.”

Jones also points out that some of the most important classes his organization offers are about helping people build confidence, and he worries that using the app could undercut those feelings and leave people “even more anxious and fearful about their safety.”

As Asians, “I think a lot of us are used to feeling small,” he says. “I think what a lot of people need is confidence, and that is not something an app can give you.”

Lam Thuy Vo is a journalist who combines data analysis with on-the-ground reporting to examine how systems and policies affect individuals. She is currently an Information Futures fellow at Brown University, an AI accountability fellow at the Pulitzer Center, and the data journalist in residence at the Craig Newmark Graduate School of Journalism.

Thanks to Zhang Zhi of MIT TR China for translation support on this story.

How Telegram groups can be used by police to find protesters

China Report is MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

First of all, I’m still processing the whole “Chinese spy balloon” saga, which, from start to finish, took over everyone’s brains for just about 72 hours and has been one of the weirdest recent events in US-China relations. There are still so many mysteries around it that I don’t want to jump to any conclusions, but I will link to some helpful analyses in the next section. For now, I just want to say: RIP The Balloon.

On a wholly different note, I’ve been preoccupied by the many Chinese individuals who remain in police custody after going into the streets in Beijing late last year to protest zero-covid policies. While action happened in many Chinese cities, it’s the Beijing police who have been consistently making new arrests, as recently as mid-January. According to a Twitter account that’s been following what’s happened with the protesters, over 20 people have been detained in Beijing since December 18, four of them formally charged with the crime of “picking quarrels.” As the Wall Street Journal has reported, many of those arrested have been young women.

For the younger generation in China, the movement last year was an introduction to participating in civil disobedience. But many of these young people lack the technical knowledge to protect themselves when organizing or participating in public events. As the Chinese government’s surveillance capability grows, activists are forced to become tech experts to avoid being monitored. It’s an evolving lesson that every new activist will have to learn.

To better understand what has happened over the past two months and what lies ahead, I reached out to Lü Pin, a feminist activist and scholar currently based in the US. As one of the most prominent voices in China’s current feminist movement, Lü is still involved in activist efforts inside China and the longtime cat-and-mouse game between protesters and police. Even though their work is peaceful and legal, she and her fellow activists often worry that their communications are being intercepted by the government. When we talked last week about the aftermath of the “White Paper Protests,” she explained how she thinks protesters were potentially identified through their communications, why many Chinese protesters continue to use Telegram, and the different methods China’s traditional police force and state security agents use to infiltrate group chats.

The following interview has been translated, lightly edited, and rearranged for clarity.

How did the Chinese police figure out the identity of protesters and arrest them over a month after it happened?

In the beginning, the police likely got access to a Telegram group. Later on, officers could have used facial recognition [to identify people in video footage]. Many people, when participating in the White Paper Protests, were filmed with their faces visible. It’s possible that the police are now working on identifying more faces in these videos.

Those who were arrested have no way of confirming this, but their friends [suspect that facial recognition was used] and spread the message. 

And, as you said, it was reported that the police did have information on some protesters’ involvement in a Telegram group. What exactly happened there?

When [these protesters in Beijing] decided to use a Telegram group, they didn’t realize they needed to protect the information on the event. Their Telegram group became very public in the end. Some of them even screenshotted it and posted it on their WeChat timelines. 

Even when they were on the streets in Liangma River [where the November 27 protest in Beijing took place], this group chat was still active. What could easily have happened was that when the police arrested them, they didn’t have time to delete the group chat from their phone. If that happened, nothing [about the group] would be secure anymore.

Could there be undercover police in the Telegram group?

It’s inevitable that there were government people in the Telegram group. When we were organizing the feminist movement inside China, there were always state security officials [in the group]. They would use fake identities to talk to organizers and say: I’m a student interested in feminism. I want to attend your event, join your WeChat group, and know when’s the next gathering. They joined countless WeChat groups to monitor the events. It’s not just limited to feminist activists. They are going to join every group chat about civil society groups, no matter if you are [advocating for] LGBTQ rights or environmental protection. 

What do they want to achieve by infiltrating these group chats?

Different Chinese ministries have different jobs. The people collecting information [undercover] are mostly from the Ministry of State Security [Editor’s note: this is the agency responsible for foreign intelligence and counterintelligence work]. It operates on a long-term basis, so it would be doing more information collection; it has no responsibility to call off an event.

But the purpose of the Ministry of Public Security [Editor’s note: this is the rank-and-file police force] is to stop our events immediately. It works on a more short-term basis. According to my experience, the technology know-how of the police is relatively [basic]. They mostly work with WeChat and don’t use any VPN. And they are also only responsible for one locality, so it’s easier to tell who they are. For example, if they work for the city of Guangzhou, they will only care about what’s going to happen in Guangzhou. And people may realize who they are because of that.

I’m also seeing people question whether some Twitter accounts, like the one belonging to “Teacher Li,” were undercover police. Is there any merit to that thinking?

It used to be less complicated. Previously, the government could use censorship mechanisms to control [what people posted] within China, so they didn’t need to [establish phishing accounts on foreign platforms]. But one characteristic of the White Paper Revolution is that it leveraged foreign platforms more than ever before.

But my personal opinion is that the chance of a public [Twitter] account phishing information for the government is relatively small. The government operations don’t necessarily have intricate planning. When we talk about phishing, we are talking about setting up an account, accepting user submissions, monitoring your submissions remotely, and then monitoring your activities. It requires a lot of investment to operate a [public] account. It’s far less efficient than infiltrating a WeChat group or Telegram group to obtain information.

But I don’t think the anxiety is unwarranted. The government’s tools evolve rapidly. Every time the government has learned about our organizing or the information of our members, we try to analyze how it happened. It used to be that we could often find out why, but now we can hardly figure out how the police found us. It means their data investigation skills have modernized. So I think the suspicion [of phishing accounts’ existence] is understandable.

And there is a dilemma here: On one hand, we need to be alert. On the other hand, if we are consumed by fears, the Chinese government will have won. That’s the situation we are in today.

When did people start to use Telegram instead of WeChat?

I started around 2014 or 2015. In 2015, we organized some rescue operations [for five feminist activists detained by the state] through Telegram. Before that, people didn’t realize WeChat was not secure. [Editor’s note: WeChat messages are not end-to-end encrypted and have been used by the police for prosecution.] Afterwards, when people were looking for a secure messaging app, the first option was Telegram. At the time, it was both secure and accessible in China. Later, Telegram was blocked, but the habit [of using it] remained. But I don’t use Telegram now.

It does feel like Telegram has gained this reputation of “the protest app of choice” even though it’s not necessarily the most secure one. Why is that?

If you are just a small underground circle, there are a lot of software options you can use. But if you also want other people to join your group, then it has to be something people already know and use widely. That’s how Telegram became the choice. 

But in my opinion, if you are already getting out of the Great Firewall, you can use Signal, or you can use WhatsApp. But many Chinese people don’t know about WhatsApp, so they choose to stay on Telegram. It has a lot to do with the reputation of Telegram. There’s a user stickiness issue with any software you use. Every time you migrate to new software, you will lose a great number of users. That’s a serious problem.

So what apps are you using now to communicate with protesters in China?

The app we use now? That’s a secret [laughs]. The reason why Telegram was monitored and blocked in the first place was because there was lots of media reporting on Telegram use back in 2015.

What do you think about the security protocols taken by Telegram and other communication apps? Let me know at zeyi@technologyreview.com.

Catch up with China

1. The balloon fiasco caused US Secretary of State Antony Blinken to postpone his meeting with President Xi Jinping of China, which was originally planned for this week. (CNN)

  • While the specific goals of the balloon’s trip are unclear, an expert said the termination mechanism likely failed to function. (Ars Technica)
  • Since the balloon was shot down over the weekend, the US Coast Guard has been searching for debris in the Atlantic, which US officials hope to use to reconstruct Chinese intelligence-gathering methods. (Reuters $)
  • The balloon itself didn’t necessarily pose many risks, but the way the situation escalated makes clear that military officials in the two countries do not currently have good communication. (New York Times $)

2. TikTok finally opened a transparency center in LA, three years after it first announced it’d build new sites where people could examine how the app conducts moderation. A Forbes journalist who was allowed to tour the center wasn’t impressed. (Forbes)

3. Baidu, China’s leading search engine and AI company, is planning to release its own version of ChatGPT in March. (Bloomberg $)

4. The past three months should have been the busiest season for Foxconn’s iPhone assembly factory in China. Instead, it was disrupted by mass covid-19 infections and intense labor protests. (Rest of World)

5. A new decentralized social media platform called Damus had its five minutes (actually, two days) of fame in China before Apple swiftly removed it from China’s App Store for violating domestic cybersecurity laws. (South China Morning Post $)

6. Taiwan decided to shut down all nuclear power plants by 2025. But its renewable-energy industry is not ready to fill in the gap, and now new fossil-fuel plants are being built to secure the energy supply. (HuffPost)

7. The US Department of Justice suspects that executives of the San Diego–based self-driving-truck company TuSimple have improperly transferred technology to China, anonymous sources said. (Wall Street Journal $)

Lost in translation

Renting smartphones is becoming a popular alternative to purchasing them in China, according to the Chinese publication Shenran Caijing. With 19 billion RMB ($2.79 billion) spent on smartphone rentals in 2021, it is a niche but growing market in the country. Many people opt for rentals to be able to brag about having the latest model, or as a temporary solution when, for example, their phone breaks down and the new iPhone doesn’t come out for a few months. 

But this isn’t exactly saving people cash. While renting a phone costs only one or two bucks a day, the fees build up over time, and many platforms require leases to be at least six months long. In the end, it may not be as cost-effective as buying a phone outright. 
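
As a rough worked example (all prices below are hypothetical, chosen only to echo the ballpark figures above, not actual platform rates), here is how daily rental fees stack up against buying a phone outright:

```python
# Rough illustration of rental fees accumulating over time.
# All prices are hypothetical and in US dollars.

daily_rate = 1.5          # assumed rental fee per day ("one or two bucks a day")
minimum_lease_days = 180  # many platforms require leases of at least six months
purchase_price = 800      # assumed retail price of a comparable new phone

def rental_cost(days: int, rate: float = daily_rate) -> float:
    """Total rent paid after a given number of days."""
    return days * rate

for days in (minimum_lease_days, 365, 730):
    cost = rental_cost(days)
    print(f"{days:4d} days of rental: ${cost:,.0f} "
          f"({cost / purchase_price:.0%} of the purchase price)")

# Output with these assumed numbers:
#  180 days of rental: $270 (34% of the purchase price)
#  365 days of rental: $548 (68% of the purchase price)
#  730 days of rental: $1,095 (137% of the purchase price)
```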

The high costs and lack of regulation have led some individuals to exploit the system. Some people use it as a form of cash loan: they rent a high-end phone, immediately sell it for cash, and slowly pay back the rental and buyout fees. There are also cases of scams where people use someone else’s identity to rent a phone, only to disappear once they obtain the device.

One more thing

Born in Wuhan, I grew up eating freshwater fish like Prussian carp. They taste divine, but the popular kinds often have more small bones than saltwater fish, which can make the eating experience laborious and annoying. Last week, a team of Chinese hydrobiologists based in Wuhan (duh) announced that they had used CRISPR-Cas9 gene-editing technology to create a Prussian carp mutant that is free of the small bones. Not gonna lie, this is true innovation to me.

CT scans from the academic paper showing the original fish and the mutant version without small bones.

A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?

In the fall of 2020, gig workers in Venezuela posted a series of images to online forums where they gathered to talk shop. The photos were mundane, if sometimes intimate, household scenes captured from low angles—including some you really wouldn’t want shared on the Internet. 

In one particularly revealing shot, a young woman in a lavender T-shirt sits on the toilet, her shorts pulled down to mid-thigh.

The images were not taken by a person, but by development versions of iRobot’s Roomba J7 series robot vacuum. They were then sent to Scale AI, a startup that contracts workers around the world to label audio, photo, and video data used to train artificial intelligence. 

They were the sorts of scenes that internet-connected devices regularly capture and send back to the cloud—though usually with stricter storage and access controls. Yet earlier this year, MIT Technology Review obtained 15 screenshots of these private photos, which had been posted to closed social media groups. 

The photos vary in type and in sensitivity. The most intimate image we saw was the series of video stills featuring the young woman on the toilet, her face blocked in the lead image but unobscured in the grainy scroll of shots below. In another image, a boy who appears to be eight or nine years old, and whose face is clearly visible, is sprawled on his stomach across a hallway floor. A triangular flop of hair spills across his forehead as he stares, with apparent amusement, at the object recording him from just below eye level.

The other shots show rooms from homes around the world, some occupied by humans, one by a dog. Furniture, décor, and objects located high on the walls and ceilings are outlined by rectangular boxes and accompanied by labels like “tv,” “plant_or_flower,” and “ceiling light.” 
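
For readers unfamiliar with what annotation output looks like, here is a minimal sketch of one labeled record; the schema, field names, and coordinates are invented for illustration and are not Scale AI’s or iRobot’s actual format:

```python
# Hypothetical example of a single bounding-box annotation record,
# the kind of output a data labeler produces for one frame.
# The schema and field names are invented for illustration only.

import json

annotation = {
    "image_id": "frame_000123",          # placeholder identifier
    "image_size": {"width": 640, "height": 480},
    "objects": [
        {
            "label": "tv",
            # box as pixel coordinates: top-left corner x, y, then width, height
            "bbox": [412, 96, 180, 110],
        },
        {
            "label": "plant_or_flower",
            "bbox": [35, 210, 60, 140],
        },
        {
            "label": "ceiling light",
            "bbox": [280, 5, 90, 55],
        },
    ],
}

print(json.dumps(annotation, indent=2))
```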

iRobot—the world’s largest vendor of robotic vacuums, which Amazon recently acquired for $1.7 billion in a pending deal—confirmed that these images were captured by its Roombas in 2020. All of them came from “special development robots with hardware and software modifications that are not and never were present on iRobot consumer products for purchase,” the company said in a statement. They were given to “paid collectors and employees” who signed written agreements acknowledging that they were sending data streams, including video, back to the company for training purposes. According to iRobot, the devices were labeled with a bright green sticker that read “video recording in progress,” and it was up to those paid data collectors to “remove anything they deem sensitive from any space the robot operates in, including children.”

In other words, by iRobot’s estimation, anyone whose photos or video appeared in the streams had agreed to let their Roombas monitor them. iRobot declined to let MIT Technology Review view the consent agreements and did not make any of its paid collectors or employees available to discuss their understanding of the terms.

While the images shared with us did not come from iRobot customers, consumers regularly consent to having our data monitored to varying degrees on devices ranging from iPhones to washing machines. It’s a practice that has only grown more common over the past decade, as data-hungry artificial intelligence has been increasingly integrated into a whole new array of products and services. Much of this technology is based on machine learning, a technique that uses large troves of data—including our voices, faces, homes, and other personal information—to train algorithms to recognize patterns. The most useful data sets are the most realistic, making data sourced from real environments, like homes, especially valuable. Often, we opt in simply by using the product, as noted in privacy policies with vague language that gives companies broad discretion in how they disseminate and analyze consumer information. 

Did you participate in iRobot’s data collection efforts? We’d love to hear from you. Please reach out at tips@technologyreview.com. 

The data collected by robot vacuums can be particularly invasive. They have “powerful hardware, powerful sensors,” says Dennis Giese, a PhD candidate at Northeastern University who studies the security vulnerabilities of Internet of Things devices, including robot vacuums. “And they can drive around in your home—and you have no way to control that.” This is especially true, he adds, of devices with advanced cameras and artificial intelligence—like iRobot’s Roomba J7 series.

This data is then used to build smarter robots whose purpose may one day go far beyond vacuuming. But to make these data sets useful for machine learning, individual humans must first view, categorize, label, and otherwise add context to each bit of data. This process is called data annotation.

“There’s always a group of humans sitting somewhere—usually in a windowless room, just doing a bunch of point-and-click: ‘Yes, that is an object or not an object,’” explains Matt Beane, an assistant professor in the technology management program at the University of California, Santa Barbara, who studies the human work behind robotics.

The 15 images shared with MIT Technology Review are just a tiny slice of a sweeping data ecosystem. iRobot has said that it has shared over 2 million images with Scale AI and an unknown quantity more with other data annotation platforms; the company has confirmed that Scale is just one of the data annotators it has used. 

James Baussmann, iRobot’s spokesperson, said in an email the company had “taken every precaution to ensure that personal data is processed securely and in accordance with applicable law,” and that the images shared with MIT Technology Review were “shared in violation of a written non-disclosure agreement between iRobot and an image annotation service provider.” In an emailed statement a few weeks after we shared the images with the company, iRobot CEO Colin Angle said that “iRobot is terminating its relationship with the service provider who leaked the images, is actively investigating the matter, and [is] taking measures to help prevent a similar leak by any service provider in the future.” The company did not respond to additional questions about what those measures were. 

Ultimately, though, this set of images represents something bigger than any one individual company’s actions. They speak to the widespread, and growing, practice of sharing potentially sensitive data to train algorithms, as well as the surprising, globe-spanning journey that a single image can take—in this case, from homes in North America, Europe, and Asia to the servers of Massachusetts-based iRobot, from there to San Francisco–based Scale AI, and finally to Scale’s contracted data workers around the world (including, in this instance, Venezuelan gig workers who posted the images to private groups on Facebook, Discord, and elsewhere). 

Together, the images reveal a whole data supply chain—and new points where personal information could leak out—that few consumers are even aware of. 

“It’s not expected that human beings are going to be reviewing the raw footage,” emphasizes Justin Brookman, director of tech policy at Consumer Reports and former policy director of the Federal Trade Commission’s Office of Technology Research and Investigation. iRobot would not say whether data collectors were aware that humans, in particular, would be viewing these images, though the company said the consent form made clear that “service providers” would be.

“It’s not expected that human beings are going to be reviewing the raw footage.”

“We literally treat machines differently than we treat humans,” adds Jessica Vitak, an information scientist and professor at the University of Maryland’s communication department and its College of Information Studies. “It’s much easier for me to accept a cute little vacuum, you know, moving around my space [than] somebody walking around my house with a camera.” 

And yet, that’s essentially what is happening. It’s not just a robot vacuum watching you on the toilet—a person may be looking too. 

The robot vacuum revolution 

Robot vacuums weren’t always so smart. 

The earliest model, the Swiss-made Electrolux Trilobite, came to market in 2001. It used ultrasonic sensors to locate walls and plot cleaning patterns; additional bump sensors on its sides and cliff sensors at the bottom helped it avoid running into objects or falling off stairs. But these sensors were glitchy, leading the robot to miss certain areas or repeat others. The result was unfinished and unsatisfactory cleaning jobs. 
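
A toy simulation can make the coverage problem concrete. The sketch below (an empty square room, arbitrary parameters, and a simplified bump-and-turn rule: drive straight until hitting a wall, then pick a random new heading) tracks what fraction of the floor such a robot actually reaches; it is only an illustration, not a model of any real vacuum.

```python
# Toy simulation of a bump-and-turn robot vacuum in an empty square room.
# All parameters are arbitrary; this only illustrates why random navigation
# revisits some areas while taking a long time to reach others.

import random

random.seed(0)
SIZE = 20                      # room is SIZE x SIZE grid cells
STEPS = 2000                   # number of moves to simulate
DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

x, y = SIZE // 2, SIZE // 2    # start in the middle of the room
dx, dy = random.choice(DIRECTIONS)
visited = {(x, y)}

for _ in range(STEPS):
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE:
        x, y = nx, ny          # keep driving in a straight line
        visited.add((x, y))
    else:
        # "bump": the robot hit a wall, so it turns to a random new heading
        dx, dy = random.choice(DIRECTIONS)

coverage = len(visited) / (SIZE * SIZE)
print(f"Visited {len(visited)} of {SIZE * SIZE} cells ({coverage:.0%}) after {STEPS} moves")
```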

The next year, iRobot released the first-generation Roomba, which relied on similar basic bump sensors and turn sensors. Much cheaper than its competitor, it became the first commercially successful robot vacuum.

The most basic models today still operate similarly, while midrange cleaners incorporate better sensors and other navigational techniques like simultaneous localization and mapping to find their place in a room and chart out better cleaning paths. 

Higher-end devices have moved on to computer vision, a subset of artificial intelligence that approximates human sight by training algorithms to extract information from images and videos, and/or lidar, a laser-based sensing technique used by NASA and widely considered the most accurate—but most expensive—navigational technology on the market today. 

Computer vision depends on high-definition cameras, and by our count, around a dozen companies have incorporated front-facing cameras into their robot vacuums for navigation and object recognition—as well as, increasingly, home monitoring. This includes the top three robot vacuum makers by market share: iRobot, which has 30% of the market and has sold over 40 million devices since 2002; Ecovacs, with about 15%; and Roborock, which has about another 15%, according to the market intelligence firm Strategy Analytics. It also includes familiar household appliance makers like Samsung, LG, and Dyson, among others. In all, some 23.4 million robot vacuums were sold in Europe and the Americas in 2021 alone, according to Strategy Analytics. 

From the start, iRobot went all in on computer vision, and its first device with such capabilities, the Roomba 980, debuted in 2015. It was also the first of iRobot’s Wi-Fi-enabled devices, as well as its first that could map a home, adjust its cleaning strategy on the basis of room size, and identify basic obstacles to avoid. 

Computer vision “allows the robot to … see the full richness of the world around it,” says Chris Jones, iRobot’s chief technology officer. It allows iRobot’s devices to “avoid cords on the floor or understand that that’s a couch.” 

But for computer vision in robot vacuums to truly work as intended, manufacturers need to train it on high-quality, diverse data sets that reflect the huge range of what they might see. “The variety of the home environment is a very difficult task,” says Wu Erqi, the senior R&D director of Beijing-based Roborock. Road systems “are quite standard,” he says, so for makers of self-driving cars, “you’ll know how the lane looks … [and] how the traffic sign looks.” But each home interior is vastly different. 

“The furniture is not standardized,” he adds. “You cannot expect what will be on your ground. Sometimes there’s a sock there, maybe some cables”—and the cables may look different in the US and China. 

A family bent over a vacuum; light emitting from the vacuum shines on their obscured faces.

MATTHIEU BOUREL

MIT Technology Review spoke with or sent questions to 12 companies selling robot vacuums and found that they respond to the challenge of gathering training data differently. 

In iRobot’s case, over 95% of its image data set comes from real homes, whose residents are either iRobot employees or volunteers recruited by third-party data vendors (which iRobot declined to identify). People using development devices agree to allow iRobot to collect data, including video streams, as the devices are running, often in exchange for “incentives for participation,” according to a statement from iRobot. The company declined to specify what these incentives were, saying only that they varied “based on the length and complexity of the data collection.” 

The remaining training data comes from what iRobot calls “staged data collection,” in which the company builds models that it then records.

iRobot has also begun offering regular consumers the opportunity to opt in to contributing training data through its app, where people can choose to send specific images of obstacles to company servers to improve its algorithms. iRobot says that if a customer participates in this “user-in-the-loop” training, as it is known, the company receives only these specific images, and no others. Baussmann, the company representative, said in an email that such images have not yet been used to train any algorithms. 

In contrast to iRobot, Roborock said that it either “produce[s] [its] own images in [its] labs” or “work[s] with third-party vendors in China who are specifically asked to capture & provide images of objects on floors for our training purposes.” Meanwhile, Dyson, which sells two high-end robot vacuum models, said that it gathers data from two main sources: “home trialists within Dyson’s research & development department with a security clearance” and, increasingly, synthetic, or AI-generated, training data. 

Most robot vacuum companies MIT Technology Review spoke with explicitly said they don’t use customer data to train their machine-learning algorithms. Samsung did not respond to questions about how it sources its data (though it wrote that it does not use Scale AI for data annotation), while Ecovacs calls the source of its training data “confidential.” LG and Bosch did not respond to requests for comment.

“You have to assume that people … ask each other for help. The policy always says that you’re not supposed to, but it’s very hard to control.” 

Some clues about other methods of data collection come from Giese, the IoT hacker, whose office at Northeastern is piled high with robot vacuums that he has reverse-engineered, giving him access to their machine-learning models. Some are produced by Dreame, a relatively new Chinese company based in Shenzhen that sells affordable, feature-rich devices. 

Giese found that Dreame vacuums have a folder labeled “AI server,” as well as image upload functions. Companies often say that “camera data is never sent to the cloud and whatever,” Giese says, but “when I had access to the device, I was basically able to prove that it’s not true.” Even if they didn’t actually upload any photos, he adds, “[the function] is always there.”  

Dreame manufactures robot vacuums that are also rebranded and sold by other companies—an indication that this practice could be employed by other brands as well, says Giese. 

Dreame did not respond to emailed questions about the data collected from customer devices, but in the days following MIT Technology Review’s initial outreach, the company began changing its privacy policies, including those related to how it collects personal information, and pushing out multiple firmware updates.

But without either an explanation from companies themselves or a way, besides hacking, to test their assertions, it’s hard to know for sure what they’re collecting from customers for training purposes.

How and why our data ends up halfway around the world

With the raw data required for machine-learning algorithms comes the need for labor, and lots of it. That’s where data annotation comes in. A young but growing industry, data annotation is projected to reach $13.3 billion in market value by 2030. 

The field took off largely to meet the huge need for labeled data to train the algorithms used in self-driving vehicles. Today, data labelers, who are often low-paid contract workers in the developing world, help power much of what we take for granted as “automated” online. They keep the worst of the Internet out of our social media feeds by manually categorizing and flagging posts, improve voice recognition software by transcribing low-quality audio, and help robot vacuums recognize objects in their environments by tagging photos and videos. 

Among the myriad companies that have popped up over the past decade, Scale AI has become the market leader. Founded in 2016, it built a business model around contracting with remote workers in less-wealthy nations at cheap project- or task-based rates on Remotasks, its proprietary crowdsourcing platform. 

In 2020, Scale posted a new assignment there: Project IO. It featured images captured from the ground and angled upwards at roughly 45 degrees, and showed the walls, ceilings, and floors of homes around the world, as well as whatever happened to be in or on them—including people, whose faces were clearly visible to the labelers. 

Labelers discussed Project IO in Facebook, Discord, and other groups that they had set up to share advice on handling delayed payments, talk about the best-paying assignments, or request assistance in labeling tricky objects. 

iRobot confirmed that the 15 images posted in these groups and subsequently sent to MIT Technology Review came from its devices, sharing a spreadsheet listing the specific dates they were made (between June and November 2020), the countries they came from (the United States, Japan, France, Germany, and Spain), and the serial numbers of the devices that produced the images, as well as a column indicating that a consent form had been signed by each device’s user. (Scale AI confirmed that 13 of the 15 images came from “an R&D project [it] worked on with iRobot over two years ago,” though it declined to clarify the origins of or offer additional information on the other two images.)

iRobot says that sharing images in social media groups violates Scale’s agreements with it, and Scale says that contract workers sharing these images breached their own agreements. 

“The underlying problem is that your face is like a password you can’t change. Once somebody has recorded the ‘signature’ of your face, they can use it forever to find you in photos or video.” 

But such actions are nearly impossible to police on crowdsourcing platforms. 

When I ask Kevin Guo, the CEO of Hive, a Scale competitor that also depends on contract workers, if he is aware of data labelers sharing content on social media, he is blunt. “These are distributed workers,” he says. “You have to assume that people … ask each other for help. The policy always says that you’re not supposed to, but it’s very hard to control.” 

That means that it’s up to the service provider to decide whether or not to take on certain work. For Hive, Guo says, “we don’t think we have the right controls in place given our workforce” to effectively protect sensitive data. Hive does not work with any robot vacuum companies, he adds. 

“It’s sort of surprising to me that [the images] got shared on a crowdsourcing platform,” says Olga Russakovsky, the principal investigator at Princeton University’s Visual AI Lab and a cofounder of the group AI4All. Keeping the labeling in house, where “folks are under strict NDAs” and “on company computers,” would keep the data far more secure, she points out.

In other words, relying on far-flung data annotators is simply not a secure way to protect data. “When you have data that you’ve gotten from customers, it would normally reside in a database with access protection,” says Pete Warden, a leading computer vision researcher and a PhD student at Stanford University. But with machine-learning training, customer data is all combined “in a big batch,” widening the “circle of people” who get access to it.

Screenshots shared with MIT Technology Review of data annotation in progress

For its part, iRobot says that it shares only a subset of training images with data annotation partners, flags any image with sensitive information, and notifies the company’s chief privacy officer if sensitive information is detected. Baussmann calls this situation “rare,” and adds that when it does happen, “the entire video log, including the image, is deleted from iRobot servers.”

The company specified, “When an image is discovered where a user is in a compromising position, including nudity, partial nudity, or sexual interaction, it is deleted—in addition to ALL other images from that log.” It did not clarify whether this flagging would be done automatically by algorithm or manually by a person, or why that did not happen in the case of the woman on the toilet.

iRobot policy, however, does not deem faces sensitive, even if the people are minors. 

“In order to teach the robots to avoid humans and images of humans”—a feature that it has promoted to privacy-wary customers—the company “first needs to teach the robot what a human is,” Baussmann explained. “In this sense, it is necessary to first collect data of humans to train a model.” The implication is that faces must be part of that data.

But facial images may not actually be necessary for algorithms to detect humans, according to William Beksi, a computer science professor who runs the Robotic Vision Laboratory at the University of Texas at Arlington: human detector models can recognize people based “just [on] the outline (silhouette) of a human.” 

“If you were a big company, and you were concerned about privacy, you could preprocess these images,” Beksi says. For example, you could blur human faces before they even leave the device and “before giving them to someone to annotate.”
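
As a sketch of the kind of preprocessing Beksi describes, the snippet below uses the open-source OpenCV library and its bundled Haar-cascade face detector to blur any detected faces in a frame before it would be uploaded. The file names are placeholders, and this is an illustration rather than any vendor’s actual pipeline.

```python
# Minimal sketch of pre-upload face blurring, along the lines Beksi suggests.
# Uses OpenCV's bundled Haar-cascade face detector; "frame.jpg" is a placeholder.
# This is an illustration, not any company's actual preprocessing pipeline.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.jpg")                      # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = frame[y:y + h, x:x + w]
    # Blur strongly enough that the face is unrecognizable before upload.
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("frame_blurred.jpg", frame)              # only the blurred frame leaves the device
```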

“It does seem to be a bit sloppy,” he concludes, “especially to have minors recorded in the videos.” 

In the case of the woman on the toilet, a data labeler made an effort to preserve her privacy, by placing a black circle over her face. But in no other images featuring people were identities obscured, either by the data labelers themselves, by Scale AI, or by iRobot. That includes the image of the young boy sprawled on the floor.

Baussmann explained that iRobot protected “the identity of these humans” by “decoupling all identifying information from the images … so if an image is acquired by a bad actor, they cannot map backwards to identify the person in the image.”

But capturing faces is inherently privacy-violating, argues Warden. “The underlying problem is that your face is like a password you can’t change,” he says. “Once somebody has recorded the ‘signature’ of your face, they can use it forever to find you in photos or video.” 

AI labels over the illustrated faces of a family

MATTHIEU BOUREL

Additionally, “lawmakers and enforcers in privacy would view biometrics, including faces, as sensitive information,” says Jessica Rich, a privacy lawyer who served as director of the FTC’s Bureau of Consumer Protection between 2013 and 2017. This is especially the case if any minors are captured on camera, she adds: “Getting consent from the employee [or testers] isn’t the same as getting consent from the child. The employee doesn’t have the capacity to consent to data collection about other individuals—let alone the children that appear to be implicated.” Rich says she wasn’t referring to any specific company in these comments. 

In the end, the real problem is arguably not that the data labelers shared the images on social media. Rather, it’s that this type of AI training set—specifically, one depicting faces—is far more common than most people understand, notes Milagros Miceli, a sociologist and computer scientist who has been interviewing distributed workers contracted by data annotation companies for years. Miceli has spoken to multiple labelers who have seen similar images, taken from the same low vantage points and sometimes showing people in various stages of undress. 

The data labelers found this work “really uncomfortable,” she adds. 

Surprise: you may have agreed to this 

Robot vacuum manufacturers themselves recognize the heightened privacy risks presented by on-device cameras. “When you’ve made the decision to invest in computer vision, you do have to be very careful with privacy and security,” says Jones, iRobot’s CTO. “You’re giving this benefit to the product and the consumer, but you also have to be treating privacy and security as a top-order priority.”

In fact, iRobot tells MIT Technology Review it has implemented many privacy- and security-protecting measures in its customer devices, including using encryption, regularly patching security vulnerabilities, limiting and monitoring internal employee access to information, and providing customers with detailed information on the data that it collects. 

But there is a wide gap between the way companies talk about privacy and the way consumers understand it. 

It’s easy, for instance, to conflate privacy with security, says Jen Caltrider, the lead researcher behind Mozilla’s “*Privacy Not Included” project, which reviews consumer devices for both privacy and security. Data security refers to a product’s physical and cyber security, or how vulnerable it is to a hack or intrusion, while data privacy is about transparency—knowing and being able to control the data that companies have, how it is used, why it is shared, whether and for how long it’s retained, and how much a company is collecting to start with. 

Conflating the two is convenient, Caltrider adds, because “security has gotten better, while privacy has gotten way worse” since she began tracking products in 2017. “The devices and apps now collect so much more personal information,” she says. 

Company representatives also sometimes use subtle differences, like the distinction between “sharing” data and selling it, that make how they handle privacy particularly hard for non-experts to parse. When a company says it will never sell your data, that doesn’t mean it won’t use it or share it with others for analysis.

These expansive definitions of data collection are often acceptable under companies’ vaguely worded privacy policies, virtually all of which contain some language permitting the use of data for the purposes of “improving products and services”—language that Rich calls so broad as to “permit basically anything.”

“Developers are not traditionally very good [at] security stuff.” Their attitude becomes “Try to get the functionality, and if the functionality is working, ship the product. And then the scandals come out.” 

Indeed, MIT Technology Review reviewed 12 robot vacuum privacy policies, and all of them, including iRobot’s, contained similar language on “improving products and services.” Most of the companies to which MIT Technology Review reached out for comment did not respond to questions on whether “product improvement” would include machine-learning algorithms. But Roborock and iRobot say it would. 

And because the United States lacks a comprehensive data privacy law—instead relying on a mishmash of state laws, most notably the California Consumer Privacy Act—these privacy policies are what shape companies’ legal responsibilities, says Brookman. “A lot of privacy policies will say, you know, we reserve the right to share your data with select partners or service providers,” he notes. That means consumers are likely agreeing to have their data shared with additional companies, whether they are familiar with them or not.

Brookman explains that the legal barriers companies must clear to collect data directly from consumers are fairly low. The FTC, or state attorneys general, may step in if there are either “unfair” or “deceptive” practices, he notes, but these are narrowly defined: unless a privacy policy specifically says “Hey, we’re not going to let contractors look at your data” and they share it anyway, Brookman says, companies are “probably okay on deception, which is the main way” for the FTC to “enforce privacy historically.” Proving that a practice is unfair, meanwhile, carries additional burdens—including proving harm. “The courts have never really ruled on it,” he adds.

Most companies’ privacy policies do not even mention the audiovisual data being captured, with a few exceptions. iRobot’s privacy policy notes that it collects audiovisual data only if an individual shares images via its mobile app. LG’s privacy policy for the camera- and AI-enabled Hom-Bot Turbo+ explains that its app collects audiovisual data, including “audio, electronic, visual, or similar information, such as profile photos, voice recordings, and video recordings.” And the privacy policy for Samsung’s Jet Bot AI+ Robot Vacuum with lidar and Powerbot R7070, both of which have cameras, says the company will collect “information you store on your device, such as photos, contacts, text logs, touch interactions, settings, and calendar information” and “recordings of your voice when you use voice commands to control a Service or contact our Customer Service team.” Meanwhile, Roborock’s privacy policy makes no mention of audiovisual data, though company representatives tell MIT Technology Review that consumers in China have the option to share it.

iRobot cofounder Helen Greiner, who now runs a startup called Tertill that sells a garden-weeding robot, emphasizes that in collecting all this data, companies are not trying to violate their customers’ privacy. They’re just trying to build better products—or, in iRobot’s case, “make a better clean,” she says. 

Still, even the best efforts of companies like iRobot clearly leave gaps in privacy protection. “It’s less like a maliciousness thing, but just incompetence,” says Giese, the IoT hacker. “Developers are not traditionally very good [at] security stuff.” Their attitude becomes “Try to get the functionality, and if the functionality is working, ship the product.” 

“And then the scandals come out,” he adds.

Robot vacuums are just the beginning

The appetite for data will only increase in the years ahead. Vacuums are just a tiny subset of the connected devices that are proliferating across our lives, and the biggest names in robot vacuums—including iRobot, Samsung, Roborock, and Dyson—are vocal about ambitions much grander than automated floor cleaning. Robotics, including home robotics, has long been the real prize.  

Consider how Mario Munich, then the senior vice president of technology at iRobot, explained the company’s goals back in 2018. In a presentation on the Roomba 980, the company’s first computer-vision vacuum, he showed images from the device’s vantage point—including one of a kitchen with a table, chairs, and stools—next to how they would be labeled and perceived by the robot’s algorithms. “The challenge is not with the vacuuming. The challenge is with the robot,” Munich explained. “We would like to know the environment so we can change the operation of the robot.” 

This bigger mission is evident in what Scale’s data annotators were asked to label—not items on the floor that should be avoided (a feature that iRobot promotes), but items like “cabinet,” “kitchen countertop,” and “shelf,” which together help the Roomba J series device recognize the entire space in which it operates. 

The companies making robot vacuums are already investing in other features and devices that will bring us closer to a robotics-enabled future. The latest Roombas can be voice controlled through Nest and Alexa, and they recognize over 80 different objects around the home. Meanwhile, Ecovacs’s Deebot X1 robot vacuum has integrated the company’s proprietary voice assistance, while Samsung is one of several companies developing “companion robots” to keep humans company. Miele, which sells the RX2 Scout Home Vision, has turned its focus toward other smart appliances, like its camera-enabled smart oven.

And if iRobot’s $1.7 billion acquisition by Amazon moves forward—pending approval by the FTC, which is considering the merger’s effect on competition in the smart-home marketplace—Roombas are likely to become even more integrated into Amazon’s vision for the always-on smart home of the future.

Perhaps unsurprisingly, public policy is starting to reflect the growing public concern with data privacy. From 2018 to 2022, there has been a marked increase in states considering and passing privacy protections, such as the California Consumer Privacy Act and the Illinois Biometric Information Privacy Act. At the federal level, the FTC is considering new rules to crack down on harmful commercial surveillance and lax data security practices—including those used in training data. In two cases, the FTC has taken action against the undisclosed use of customer data to train artificial intelligence, ultimately forcing the companies, Weight Watchers International and the photo app developer Everalbum, to delete both the data collected and the algorithms built from it. 

Still, none of these piecemeal efforts address the growing data annotation market and its proliferation of companies based around the world or contracting with global gig workers, who operate with little oversight, often in countries with even fewer data protection laws. 

When I spoke this summer to Greiner, she said that she personally was not worried about iRobot’s implications for privacy—though she understood why some people might feel differently. Ultimately, she framed privacy in terms of consumer choice: anyone with real concerns could simply not buy that device. 

“Everybody needs to make their own privacy decisions,” she told me. “And I can tell you, overwhelmingly, people make the decision to have the features as long as they are delivered at a cost-effective price point.”

But not everyone agrees with this framework, in part because it is so challenging for consumers to make fully informed choices. Consent should be more than just “a piece of paper” to sign or a privacy policy to glance through, says Vitak, the University of Maryland information scientist. 

True informed consent means “that the person fully understands the procedure, they fully understand the risks … how those risks will be mitigated, and … what their rights are,” she explains. But this rarely happens in a comprehensive way—especially when companies market adorable robot helpers promising clean floors at the click of a button.

Do you have more information about how companies collect data to train AI? Did you participate in data collection efforts by iRobot or other robot vacuum companies? We’d love to hear from you and will respect requests for anonymity. Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 

Additional research by Tammy Xu.

How US police use counterterrorism money to buy spy tech

Grant money meant to help cities prepare for terror attacks is being spent on surveillance technology for US police departments, a new report shows. 

It’s been known that federal funding props up police budgets, but the new report, written by the advocacy organizations Action Center on Race and Economy (ACRE), LittleSis, MediaJustice, and the Immigrant Defense Project, reveals that these federal grants are bigger than previously understood. 

The Homeland Security Grant Program, run by the Federal Emergency Management Agency (FEMA), has doled out at least $28 billion to state and local agencies since 2002, according to the report’s authors. This money is intended for counterterrorism and tied to emergency preparedness funding that many cities depend on. 

But the report finds that this federal program has actually funded “massive purchases of surveillance technology.” For example, public records obtained by the researchers found that the Los Angeles Police Department used funding from the program to buy automated license plate readers worth at least $1.27 million, radio equipment worth upwards of $24 million, Palantir data fusion platforms (often used for predictive policing), social media surveillance software, cell site simulators valued at over $600,000, and SWAT equipment. 

Because these grants are federally funded, purchases made with them can stay out of public view: while most police funding comes from tax dollars and has to be accounted for, federal grants don’t require as much public transparency or oversight. The report’s findings are yet another example of a growing pattern in which citizens are increasingly kept in the dark about police tech procurement.

“The acquisition and use of police surveillance technology deserves greater scrutiny than many other government purchases. These tools can pose serious threats to civil liberties,” Beryl Lipton, an investigative surveillance researcher at the Electronic Frontier Foundation, told MIT Technology Review in an email after reviewing the report. 

“However, we often see a dearth of transparency when it comes to this type of equipment, in some cases because agencies do not want to be held accountable for their use of such invasive tools.”

“A hidden funding stream” 

The report highlights the Urban Area Security Initiative (UASI), which assists cities and their surrounding areas with counterterrorism. The report traces how “counterterrorism narratives” have been used by government agencies since 9/11 to justify the creation of a militarized police force and the explosion of public surveillance. In 2022, UASI provided $615 million to local and state agencies for counterterrorism activities, according to its website.

UASI is the largest program within the Homeland Security Grant Program (itself part of FEMA), which also includes Operation Stonegarden, a border management program, and the State Homeland Security Program, a security technology initiative.

“From our understanding, this is the first broad and most current analysis of the program,” says Aly Panjwani, a senior research analyst at ACRE. He cautions that data was aggregated through records requests filed under the Freedom of Information Act with the cities of Chicago, New York, Los Angeles, and Boston and is therefore not comprehensive.

The report drew on a host of public records, and its financial calculations aggregate previous research with public data from government websites. The organizations provide a list of recommendations, including a call for cities and states to reject funding from UASI and redirect those investments into public services like housing and education. They also advocate that Congress separate emergency aid from security funding and eventually divest from the Homeland Security Grant Program.

FEMA has not yet responded to a request for comment. 

“This is almost like a hidden funding stream that boosts local police budgets and also feeds into this web of data abstraction, data collection and analysis, and reselling consumer data,” says Alli Finn, a senior researcher with the Immigrant Defense Project who worked on the report.

Further, UASI is designed to tie surveillance funding—under the umbrella of counterterrorism—to emergency preparedness programs that are crucial to many cities. For example, 37% of New York City’s proposed emergency management budget for 2023 comes from federal funding, almost all of it through UASI. In order for a local government to obtain UASI grants, it must spend at least 30% of its funds (as of 2022) on law enforcement activities, according to the report.  

There’s no such thing as free tech  

UASI isn’t the only way police forces get their hands on federally subsidized technology. The 1033 Program, named after its establishing section in the 1997 National Defense Authorization Act, allows for excess military equipment to be transferred to law enforcement groups. Police have used it to acquire over $7 billion worth of military-grade supplies like tanks, autonomous ground vehicles, and firearms. 

Some equipment is only tracked for one year after the transfer, and the program is controversial because of the effect militarized police have on communities of color. And another little-known program, called the 1122 Program, allows state and local governments to use federal procurement channels that cut costs by bundling purchase orders and offering access to discounts. The channels are available for “equipment suitable for counter-drug, homeland security, and emergency response activities,” according to US law. 

Once purchased, all equipment procured through 1122 other than weapons is transferred from Department of Defense ownership to law enforcement agencies. An investigative report by Women for Weapons Trade Transparency found that there is no publicly accessible federal database tracking 1122 purchases. Through FOIA requests, the group uncovered $42 million worth of purchases through the program, including surveillance equipment.

And federal programs are not the only way technology is kept off the books. 

Many technology vendors provide “free trials” of their systems to police agencies, sometimes for years, which avoids the need for a purchasing agreement or budget approval. The controversial facial recognition company Clearview AI provided free trials to anyone with an email address associated with a government or law enforcement agency as part of its “flood-the-market” strategy. Our investigation into Minnesota surveillance technology found that many other vendors offered similar incentives.

“Secretive federal funding pipelines often allow police to sidestep elected officials and the public to purchase technologies that would never otherwise be approved,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project. “It gives the police a power no other type of municipal agency has. Teachers can’t use federal dollars to circumvent school boards.” 

The complicated danger of surveillance states

China Report is MIT Technology Review’s newsletter about what’s happening in China. Sign up to receive it in your inbox every Tuesday.

Welcome back to China Report! 

I recently had a very interesting conversation with Wall Street Journal reporters Josh Chin and Liza Lin. They wrote a new book called Surveillance State, which explores how China is leading the global experiment in using surveillance tech. 

We covered a lot of important topics: how covid offered the ideal context to justify expanding government surveillance, how the world should respond to China, and even philosophical questions about how people perceive privacy. You can read the takeaways in full here.

But in this newsletter, I want to share a few extra snippets from our conversation that have really stuck with me. 

Chin and Lin are very clearheaded about the fact that the emergence of the surveillance state is not just a problem in China. Countries with democratic institutions can be and have already been attracted to surveillance tech for its (often artificial) promises. Singapore, where Lin is from, is a great example. 

When Lin was living in Shanghai in 2018, she used to count the number of surveillance cameras she would see every day. As she told me:

I remember one day walking from my apartment to Lao Xi Men station in Shanghai, and there were 17 cameras just from the entrance of that subway station to where you scan your tickets. Seventeen cameras! All owned by various safety departments, and maybe the metro department as well.

She thought this phenomenon would be unique to China—but when she later moved back to Singapore, she found out she was wrong. 

Once I started going back [to Singapore] in 2019 and 2020, it [had] started to embrace the same ideas that China had in terms of a “safe city.” I saw cameras popping up at road intersections that catch cars that are speeding, and then you saw cameras popping up at the subway.

Even her son has picked up her habit, but this time in Singapore.

He “is now counting the number of cameras when we walk through the subway tunnel just to get to the station,” Lin says. “He’s like, ‘Mommy, that’s the police.’” 

We also talked about the impact of the pandemic on surveillance tech. In China, tracing the virus’s spread became another justification for the government to collect data on its citizens, and it further normalized the presence of mass surveillance infrastructure.

Lin told me that the same kind of tracking, if to a lesser extent, happened in Singapore. In March 2020 the country launched an app called TraceTogether, which uses Bluetooth to identify close contacts of people who tested positive for covid. In addition to the mobile app, there were even Apple Watch–size gadgets given to people who don’t use smartphones. 

Over 92% of the population in Singapore eventually used the app. “They didn’t say it was compulsory,” Lin told me. “But just like in China, you couldn’t enter public places if you didn’t have that contact tracing app.” 
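Under the hood, apps of this kind generally work by having each phone broadcast short-lived random Bluetooth identifiers and quietly log the identifiers it hears nearby, along with signal strength and duration; when someone tests positive, their identifiers can be matched against everyone else's encounter logs. The Python sketch below only illustrates that matching idea, with made-up names and thresholds; TraceTogether's actual BlueTrace protocol differs, notably in that encounter logs are uploaded to Singapore's health authority rather than matched on the device.

```python
import secrets
from collections import defaultdict

# Hypothetical thresholds -- real apps tune these from calibration data.
CLOSE_RSSI_DBM = -65          # signal stronger than this counts as "near"
MIN_CONTACT_SECONDS = 15 * 60

class Phone:
    """Simplified model of a phone running a Bluetooth contact-tracing app."""
    def __init__(self, owner):
        self.owner = owner
        self.my_tokens = []               # tokens this phone has broadcast
        self.heard = defaultdict(float)   # token -> cumulative seconds nearby

    def broadcast_token(self):
        # Rotate a short-lived random identifier instead of a stable ID.
        token = secrets.token_hex(8)
        self.my_tokens.append(token)
        return token

    def record_sighting(self, token, rssi_dbm, seconds):
        # Only log encounters that look like close contact.
        if rssi_dbm >= CLOSE_RSSI_DBM:
            self.heard[token] += seconds

    def was_close_contact_of(self, published_tokens):
        # Tokens published for a confirmed case are checked against the log.
        exposure = sum(self.heard[t] for t in published_tokens if t in self.heard)
        return exposure >= MIN_CONTACT_SECONDS


# Toy scenario: Alice and Bob share a train car for 20 minutes.
alice, bob = Phone("alice"), Phone("bob")
token = alice.broadcast_token()
bob.record_sighting(token, rssi_dbm=-60, seconds=20 * 60)

# Alice later tests positive; her tokens are shared with the tracing service.
print(bob.was_close_contact_of(alice.my_tokens))  # True -> Bob is notified
```

The same logging mechanism is what makes the later police use so straightforward: a record of who was near whom, and for how long, is useful for far more than disease control.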

And once the pandemic surveillance infrastructure was in place, the police wasted no time in taking advantage of it.

Chin: I thought this was really telling. Initially, when they rolled it out, they were like, “This will be strictly for health monitoring. No other government agencies are going to have access to the data.” That includes the police. And they made an explicit promise to get people to buy in. And then, I can’t remember how much longer …

Lin: Within that same year.

Chin: Yeah, within the same year, the police were using that technology to track suspects, and they basically openly said: “Well, we changed our minds.”

Lin: And there was a public pushback to that. And now they stopped doing it. It’s just an example of how easily one use can lead to another.

The pushback led the Singaporean parliament to pass a bill in February 2021 to restrict police use of TraceTogether data. State forces are still able to access the data now, but they need to go through a stricter process to get permission. 

It’s easy to imagine that not all countries will respond the same way. Several Asian countries were at the forefront of adopting covid tracing apps, and it’s not yet clear how the relevant authorities will deal with the data they collected along the way. So it was a pleasant surprise when I read that Thailand, which pushed for its own covid app, named MorChana, announced in June that it would close down the app and delete all relevant data. 

Since our conversation, I keep thinking about what the pandemic has meant for surveillance tech. For one thing, I think it helped illustrate that surveillance is not an abstract “evil” that all “good” societies would naturally object to. Rather, there’s a nuanced balance between privacy and social needs like public health. And it’s precisely for this reason that we should expect to see governments around the world, including democracies, keep citing new reasons to justify using surveillance tech. There will always be some sort of crisis to respond to, right?

Instead of relying on governments to be responsible with data and to self-correct when they make mistakes, Chin and Lin argued, it’s important to start recognizing the harm of surveillance tech early, and to craft regulations that safeguard against those dangers.

How do you think countries should approach surveillance tech? Let me know your thoughts at zeyi@technologyreview.com

Catch up with China

1. Using the medical records of Li Wenliang, the Chinese doctor and covid whistleblower who died in Wuhan in February 2020, reporters were able to reconstruct his final days. They confirmed that doctors were pushed to use excessive resuscitation measures in order to show that his care was not compromised. (The New York Times $)

2. The Biden administration will block international companies, not just American ones, from selling advanced chips and relevant tools to certain Chinese companies. (Reuters $)

  • Of course, Chinese companies will look for workarounds: already, a startup run by a former Huawei executive is building a semiconductor manufacturing factory in Shenzhen. It may help Huawei circumvent US chip export controls. (Bloomberg $)
  • On Monday, $240 billion in Asian chip companies’ stock market value was wiped out as traders predicted the new controls will hurt their sales. (Bloomberg $)
  • The chip export control is the latest in a series of administrative actions intended to restrict China’s efforts to advance in critical technologies. I wrote a primer last month to help you understand them. (MIT Technology Review)

3. Chinese electric-vehicle companies are hungry for lithium mines and spending big bucks around the world to secure supply. (TechCrunch)

4. Social media influencers are persuading young parents in China to take drastic measures to ensure that their babies conform to traditional beauty standards. (Sixth Tone)

5. The almighty algorithms of Douyin, China’s domestic version of TikTok, are failing to understand audio in Cantonese and suspending live streams for “unrecognized languages.” (South China Morning Post $)

6. To reduce its dependence on China for manufacturing, Apple wants to make its flagship iPhones in India. (BBC)

Lost in translation

Since 2015, banks and fintech platforms have popularized the use of facial verification to make payments faster and more convenient. But that’s also come with a high risk that facial recognition data could be hacked or leaked. 

So it’s probably to no one’s surprise that “paying with your face” has already gone quite wrong in China. The Chinese publication Caijing recently reported on a mysterious scam case in which criminals were able to bypass a bank’s facial recognition verification process and withdraw money from a victim’s account, even though the victim herself never went through a face scan. Experts concluded that the criminals likely tricked the bank’s security system through a combination of illegally obtained biometric data and other technical tools. According to local court documents, identity documents, bank account information, and facial recognition data are sometimes sold on the black market for just $7 to $14 per individual account. 

One more thing

Nothing can stop Chinese grandpas and grandmas from coming up with innovative ways to stay fit. After square dancing, marching in line formation, and other exercises I don’t even know how to describe, the latest trend is the “crocodile crawl,” in which they crawl on all fours after one another on a jogging track. I mean, it does look like a full-body workout, so you might as well try it sometime? 

Screenshot of a Douyin video of dozens of people doing the crocodile crawl together

See you next week!

Zeyi

The Chinese surveillance state proves that the idea of privacy is more “malleable” than you’d expect

It’s no surprise that last week, when the Biden administration updated its list of Chinese military companies blocked from accessing US technologies, it added Dahua. The second-largest surveillance camera company in the world, just after Hikvision, Dahua sells to over 180 countries. It exemplifies how Chinese companies have leapfrogged to the front of the video surveillance industry and have driven the world, especially China, to adopt more surveillance tech.

Over the past decade, the US—and the world more generally—has watched with a growing sense of alarm as China has emerged as a global leader in this space. Indeed, the Chinese government has been at the forefront of exploring ways to apply cutting-edge research in computer vision, the Internet of Things, and hardware manufacturing in day-to-day governance. This has led to a slew of human rights abuses—notably, and perhaps most brutally, in monitoring Muslim ethnic minorities in the western region of Xinjiang. At the same time, the state has also used surveillance tech for good: to find abducted children, for example, and to improve traffic control and trash management in populous cities.

As Wall Street Journal reporters Josh Chin and Liza Lin argue in their new book Surveillance State, out last month, the Chinese government has managed to build a new social contract with its citizens: they give up their data in exchange for more precise governance that, ideally, makes their lives safer and easier (even if it doesn’t always work out so simply in reality).   

MIT Technology Review recently spoke with Chin and Lin about the five years of reporting that culminated in the book, exploring the misconception that privacy is not valued in China.

“A lot of the foreign media coverage, when they encountered that [question], would just brush it off as ‘Oh, Chinese people just don’t have the concept of privacy … they’re brainwashed into accepting it,’” says Chin. “And we felt it was too easy of a conclusion for us, so we wanted to dig into it.” When they did, they realized that the perception of privacy is actually more pliable than it often appears. 

We also spoke about how the pandemic has accelerated the use of surveillance tech in China, whether the technology itself can stay neutral, and the extent to which other countries are following China’s lead. 

How the world should respond to the rise of surveillance states “might be one of the most important questions facing global politics at the moment,” Chin says, “because these technologies … really do have the potential to completely alter the way governments interact with and control people.” 

Here are the key takeaways from our conversation with Josh Chin and Liza Lin.

China has rewritten the definition of privacy to sell a new social contract

After decades of double-digit GDP growth, China’s economic boom has slowed down over the past three years and is expected to face even stronger headwinds. (The World Bank currently estimates that China’s 2022 annual GDP growth will decrease to 2.8%.) So the old social contract, which promised better returns from an economy steered by an authoritarian government, is strained—and a new one is needed. 

As Chin and Lin observe, the Chinese government is now proposing that by collecting every Chinese citizen’s data extensively, it can find out what the people want (without giving them votes) and build a society that meets their needs. 

But to sell this to its people—who, like others around the world, are increasingly aware of the importance of privacy—China has had to cleverly redefine that concept, moving from an individualistic understanding to a collectivist one.

The idea of privacy itself is “an incredibly confusing and malleable concept,” says Chin. “In US law, there’s a dozen, if not more, definitions of privacy. And I think the Chinese government grasped that and sensed an opportunity to define privacy in ways that not only didn’t undermine the surveillance state but actually reinforced it.” 

What the Chinese government has done is position the state and citizens on the same side of the privacy battle against private companies. Consider recent Chinese legislation like the Personal Information Protection Law (in effect since November 2021) and the Data Security Law (since September 2021), under which private companies face harsh penalties for allowing security breaches or failing to get user consent for data collection. State actors, however, largely get a pass under these laws.

“Cybersecurity hacks and data leaks happen not just to companies. They happen to government agencies, too,” says Lin. “But with something like that, you never hear state media play it up at all.” Enabled by its censorship machine, the Chinese government has often successfully directed people’s fury over privacy violations away from the government and entirely toward private companies. 

The pandemic was the perfect excuse to expand surveillance tech

When Chin and Lin were planning the book, they envisioned ending with a thought experiment about what would happen to surveillance tech if something like 9/11 happened again. Then the pandemic came. 

And just like 9/11, the authors found, the coronavirus fast-tracked the global surveillance industry, particularly in China.

Chin and Lin report on the striking parallels between the way China used societal security to justify the surveillance regime it built in Xinjiang and the way it used physical safety to justify the overreaching pandemic control tools. “In the past, it was always a metaphorical virus: ‘someone was infected with terrorist ideas,’” says Lin. In Xinjiang, before the pandemic, the term “virus” was used in internal government documents to describe what the state deemed “Islamic radicalism.” “But with covid,” she says, “we saw China really turn the whole state surveillance apparatus against its entire population and against a virus that was completely invisible and contagious.”

Going back to the idea that the perception of privacy can change greatly depending on the circumstances, the pandemic has also provided the exact context in which ordinary citizens may agree to give up more of their privacy in the name of safety. “In the field of public health, disease surveillance has never been controversial, because of course you would want to track a disease in the way it spreads. Otherwise how do you control it?” says Chin.

“They probably saved millions of lives by using those technologies,” he says, “and the result is that sold [the necessity of] state surveillance to a lot of Chinese people.”

Does “good” surveillance tech exist?

Once someone (or some entity) starts using surveillance tech, the downward slope is extremely slippery: no matter how noble the motive for developing and deploying it, the tech can always be used for more malicious purposes. For Chin and Lin, China shows how the “good” and “bad” uses of surveillance tech are always intertwined.

They report extensively on how a surveillance system in Hangzhou, the city that’s home to Alibaba, Hikvision, Dahua, and many other tech companies, was built on the benevolent premise of improving city management. Here, with a dense network of cameras on the street and a cloud-based “city brain” processing data and giving out orders, the “smart city” system is being used to monitor disasters and enable quick emergency responses. In one notable example, the authors talk to a man who accompanied his mother to the hospital in an ambulance in 2019 after she nearly drowned. The city was able to control the traffic lights along the ambulance’s route, reducing the time it took to reach the hospital. It’s impossible to argue this isn’t a good use of the technology.
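The ambulance example is, at its core, a signal-preemption problem: given a vehicle's route and its estimated arrival time at each intersection, a central controller switches lights shortly before it gets there. The toy Python sketch below illustrates only that scheduling idea; the intersection names, timings, and interface are invented for illustration and do not describe how Hangzhou's actual "city brain" works.

```python
from dataclasses import dataclass

@dataclass
class Intersection:
    name: str
    eta_seconds: float      # when the emergency vehicle is expected to arrive
    state: str = "red"

def preempt_route(route, lead_time=30.0):
    """Toy 'green wave': switch each signal shortly before the vehicle arrives."""
    schedule = []
    for signal in sorted(route, key=lambda s: s.eta_seconds):
        switch_at = max(0.0, signal.eta_seconds - lead_time)
        schedule.append((switch_at, signal.name))
        signal.state = "green"   # in reality, a command sent to the signal controller
    return schedule

# Hypothetical two-intersection route, arrival in 45s and 120s respectively.
route = [Intersection("Intersection A", 45), Intersection("Intersection B", 120)]
print(preempt_route(route))
# [(15.0, 'Intersection A'), (90.0, 'Intersection B')]
```

The point of the sketch is how little separates this from policing uses: the same camera network and central controller that estimate an ambulance's position can just as easily track any other vehicle or person.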

But at the same time, it has come to a point where the “smart city” technologies are almost indistinguishable from “safe city” technologies, which aim to enhance police forces and track down alleged criminals. The surveillance company Hikvision, which partly powers the lifesaving system in Hangzhou, is the same one that facilitated the massive incarceration of Muslim minorities in Xinjiang. 

China is far from the only country where police are leaning on a growing number of cameras. Chin and Lin highlight how police in New York City have used and abused cameras to build a facial recognition database and identify suspects, sometimes with legally questionable tactics. (MIT Technology Review also reported earlier this year on how the police in Minnesota built a database to surveil protesters and journalists.)

Chin argues that given this track record, the tech itself can no longer be considered neutral. “Certain technologies by their nature lend themselves to harmful uses. Particularly with AI applied to surveillance, they lend themselves to authoritarian outcomes,” he says. And just like nuclear researchers, for instance, scientists and engineers in these areas should be more careful about the technology’s potential harm.

It’s still possible to disrupt the global supply chain of surveillance tech

There is a sense of pessimism when talking about how surveillance tech will advance in China, because the invasive implementation has become so widespread that it’s hard to imagine the country reversing course. 

But that doesn’t mean people should give up. One key way to intervene, Chin and Lin argue, is to cut off the global supply chain of surveillance tech (a network MIT Technology Review wrote about just last month).

The development of surveillance technology has always been a global effort, with many American companies participating. The authors recount how American companies like Intel and Cisco were essential in building the bedrock of China’s surveillance system. And they were able to disclaim their own responsibility by saying they simply didn’t know what the end use of their products would be.

That kind of excuse won’t work as easily in the future, because global tech companies are being held to higher standards. Whether they contributed to human rights violations on the opposite side of the globe “has become a thing that companies are worried about and planning around,” Chin says. “That’s a really interesting shift that we haven’t seen in decades.” 

Some of these companies have stopped working with China or have been replaced by Chinese firms that have developed similar technologies, but that doesn’t mean China has a self-sufficient surveillance system now. The supply chain for surveillance technology is still distributed around the world, and Chinese tech companies require parts from the US or other Western countries to continue building their products. 

The main example here is the GPU, a type of processor originally built to render better-quality video games that has since been used to power mass surveillance systems. For these chips, China still relies on foreign companies like Nvidia, which is headquartered in California. 

“In the last two years, there’s been a huge push to substitute foreign technology with domestic technology, [but] these are the areas [where] they still can’t achieve independence,” Lin says.

This means the West can still try to slow the development of the Chinese surveillance state by putting pressure on industry. But results will depend on how much political will there is to uncover the key links in surveillance supply chains, and to come up with effective responses. 

“The other really important thing is just to strengthen your own democratic institutions … like a free press and a strong and vibrant civil society space,” says Lin. Because China won’t be the only country with the potential to become a surveillance state. It can happen anywhere, they warn, including countries with democratic institutions.