It’s still a challenge to spot Chinese state media social accounts

This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

It’s no secret that Chinese state-owned media are active on Western social platforms, but sometimes they take a covert approach and distance themselves from China, perhaps to reach more unsuspecting audiences. 

Such operations have been found to target Chinese- and English-speaking users in the past. Now, a study published last week has discovered another network of Twitter accounts that seems to be obscuring its China ties. This time, it’s made up of Spanish-language news accounts targeting Latin America.

Sandra Quincoses, an intelligence advisor at the cybersecurity research firm Nisos, found three accounts posting news about Paraguay, Chile, and Costa Rica on Twitter. The accounts seem to be associated with three Chinese-language newspapers based in those countries. All three are subsidiaries of a Brazil-based Chinese community newspaper called South America Overseas Chinese Press Network.

Very few of the posts are overtly political. The content, which is often identical across all three accounts, usually consists of Spanish-language news about Chinese culture, Chinese viral videos, and one panda post every few days. 

The problematic part, Quincoses says, is that they obscure the sources of their news posts. The accounts often post articles from China News Service (CNS), one of the most prominent Chinese state-owned publications, but they do so without attribution.

Sometimes the accounts go halfway toward attribution. They might specify, for example, that the news is from “Twitter •mundo_china” without actually tagging @mundo_China, an account affiliated with the Chinese state broadcaster. 

“When you do not mention Twitter accounts with the proper ‘@’ format, tools that collect from Twitter to do analysis don’t pick up on that,” says Quincoses. As a result, these accounts can fly under the radar of social network analysis tools, making it hard for researchers to associate them with accounts that are clearly related to the Chinese government.
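To see why, consider how a typical collection pipeline extracts mentions. Here is a minimal sketch of the idea (the regex and sample tweets are illustrative assumptions, not any specific tool’s implementation):

```python
import re

# A typical mention-extraction pattern: an "@" followed by a handle.
MENTION_RE = re.compile(r"@(\w{1,15})")

tweets = [
    "Via @mundo_China: noticias de la cultura china",  # proper tag: detected
    "Fuente: Twitter \u2022mundo_china",               # bullet instead of @: missed
]

for tweet in tweets:
    print(MENTION_RE.findall(tweet) or "no mentions found")
# ['mundo_China']
# no mentions found
```

Anything that swaps the “@” for another character breaks the link, so the account never shows up in the mention graph that researchers build.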

It’s unclear whether these accounts and the newspapers they belong to are controlled directly by Chinese state media. But as obscure as they are, there are real Chinese diplomats following them, suggesting official approval. And one government outlet—CNS—is working closely with these newspapers.

CNS is directly owned by the Chinese Communist Party’s United Front Work Department. In the 1990s, it started fostering ties with outlets aimed at Chinese immigrant communities around the world. 

Today, CNS and these immigrant community newspapers often co-publish articles, and CNS invites executives from the publications to visit China for a conference called the Forum on the Global Chinese Language Media. Some of these publications have repeatedly been accused of being controlled or even owned by CNS; the main example is the China Press, a California-based publication.

As media outlets enter the digital age, there is more evidence that these overseas diaspora publications have close ties with CNS. Sinoing (also known as Beijing Zhongxin Chinese Technology Development or Beijing Zhongxin Chinese Media Service), a wholly owned subsidiary of CNS, is the developer behind 36 such news websites across six continents, according to the Nisos report. It has also made mobile apps for nearly a dozen such outlets, including the South America Overseas Chinese Press Network, which owns the three Twitter accounts. The report notes that these apps are also particularly invasive when it comes to data gathering.

At the same time, in a hiring post for an overseas social media manager, CNS explicitly wrote in the job description that the work involves “setting up and managing medium-level accounts and covert accounts on overseas social platforms.” 

It’s unclear whether the three Twitter accounts identified in this report are operated by CNS. If this is indeed a covert operation, the job has been done a little too well. Though they post several times a day, two of the accounts have followers in the single digits, while the other one has around 80 followers—including a few real Chinese diplomats to Spanish-speaking countries. Most of the posts have received minimal engagement.

The lack of success is consistent with China’s social media propaganda campaigns in the past. This April, Google identified over 100,000 accounts in “a spammy influence network linked to China,” but the majority of the accounts had zero subscribers, and over 80% of their videos had fewer than 100 views. Twitter and Facebook have identified similar unsuccessful attempts in the past, too. 

Of all the state actors she has studied, Quincoses says, China is the least direct when it comes to the intentions of such networks. They could be playing the long game, she says. 

Or maybe they just haven’t figured out how to run covert Twitter accounts effectively. 

According to Quincoses, these accounts were never among those Twitter labeled as government-funded media (a practice it dropped in April). This could be related to the limited traction the accounts got, or to the efforts they made to obscure their ties to Chinese state media.

As other platforms emerge to take on Twitter, Chinese state-owned publications have begun to appear on them too. Xinhua News Agency, China’s main state-owned news agency, has several accounts on Mastodon, one of which still posts regularly. And CGTN, the country’s state broadcaster, has an account on Threads that already has over 50,000 followers.

Responding to an inquiry from the Australian government, Meta said it plans to add labels for government-affiliated media soon. But can it catch accounts like these, which are trying (and failing) to promote China’s image? They may be small fish now, but it’s better to catch them early, before they grow as influential as their more successful peers from Russia. 

Do social media users need better tools to sort out what might be government-affiliated media? Tell me at zeyi@technologyreview.com.

Catch up with China

1. John Kerry, the US climate envoy, is visiting China to restart climate negotiations between the two countries. (CNN)

2. Executives of American chip companies, including Intel, Qualcomm, and Nvidia, are flocking to Washington to talk the administration out of more curbs against China. (Bloomberg $)

3. The Taiwanese chip giant TSMC is known for harsh workplace rules imposed to protect its trade secrets, including a ban on Apple Watches at work. Now, facing difficulty attracting talent, the company is relaxing those rules. (The Information $)

4. A Kenyan former content moderator for TikTok is threatening to sue the app and its local content moderation contractor, claiming PTSD and unfair dismissal. (Time)

5. Amazon sellers say their whole stores—including images, descriptions, and even product testing certificates—have been cloned by sellers on Temu, the rising cross-border e-commerce platform from China. (Wired $)  

6. Microsoft says Chinese hackers accessed the email accounts of Commerce Secretary Gina Raimondo and other US officials in June, but they didn’t get any classified email. (New York Times $)

7. Badiucao, an exiled Chinese political cartoonist, is carefully navigating security risks as he tours his artworks around the world. (The Spectator)

Lost in translation

As image-making AIs become increasingly popular, some Chinese fashion brands are ditching real human models and opting for AI-generated ones. The Chinese publication AI Lanmeihui reports that some Stable Diffusion users are charging Chinese vendors 15 RMB (about $2) for an AI-generated product catalogue photo. A specialized website (still built on the open-source Stable Diffusion model) allows vendors to customize the look of the model for just $2.80. Meanwhile, a photography session with a human model usually costs about $14 per photo, according to professional model Zhao Xuan. AI has already started taking jobs from human models, Zhao said, and it’s promoting unrealistic beauty standards in the industry. “The emergence of AI models is popularizing extreme aesthetics and causing professional models to have body shame,” she said. And the technology is still in its early stages: commercially available services often take more than a week, and the quality of the results is variable.

A collage of three screenshots of models generated by AI.

SOCIAL MEDIA SCREENSHOTS COLLECTED BY AI LANMEIHUI.
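For context, the kind of text-to-image pipeline these services build on can be reproduced in a few lines with the open-source diffusers library. This is only a minimal sketch; the checkpoint name and prompt are illustrative assumptions, not details from AI Lanmeihui’s reporting:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (assumed here; any SD weights work).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A hypothetical catalogue-style prompt a vendor might use.
prompt = "full-body photo of a fashion model wearing a red summer dress, studio lighting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("catalogue_photo.png")
```

Commercial services layer customization (face, body type, pose) on top of this basic loop, which helps explain how they can undercut a human photo shoot on price.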

One more thing

Some Chinese workers are being asked to use AI tools but find that the process of tinkering with them takes too much time. As a result, they’ve been faking using ChatGPT or Midjourney and instead doing their job the old-fashioned way. One social media copywriter managed to mimic ChatGPT’s writing style so well that his boss was fully convinced it had to be the work of an AI. The boss then showed it around the office, asking other colleagues to generate articles like this too, according to the Chinese publication Jingzhe Qingnian.

How Indian health-care workers use WhatsApp to save pregnant women

Hirabai Koli’s medical reports were normal—but she wasn’t happy.   

She had been monitoring her weight over the first two months of her pregnancy, and she surprised community health-care worker Suraiyya Terdale when she asked why she wasn’t gaining more. (To protect her safety and private health information, Koli is being identified by a pseudonym.) 

“It was an odd question—something I heard for the first time,” says Terdale. She then remembers Koli saying, “Someone told me that if the pregnant mother’s weight isn’t increasing fast, then it’s a girl child.” 

Over 13 years of helping hundreds of women through childbirth in the village of Ganeshwadi in Maharashtra, India’s second-most-populous state, Terdale had heard a lot of medical misinformation, but never this particular myth. Terdale is an accredited social health activist, or ASHA—part of an all-women cadre of 1 million community health-care workers. Across India’s villages, one ASHA is appointed for every 1,000 people; they are responsible for over 70 health-care tasks and make public health care accessible to people from remote areas and marginalized communities. 

Countering false information has become an increasingly important, if unofficial, part of the job for each ASHA. Medical misinformation is rampant in the country, especially in remote villages like Ganeshwadi, which has a population of just a few thousand. 

Experience told Terdale that countering Koli’s beliefs without context could backfire. “If you tell someone they are wrong, then people don’t listen,” she says.

Indeed, when Terdale told Koli that her understanding was unscientific, Koli wasn’t convinced. Instead, Koli asked if she knew of any doctor who could confirm if it was a male fetus, even though the Indian government banned prenatal sex-determination tests in 1994 in response to the high rate of abortions of female fetuses.

So Terdale began doing the tricky work of probing why Koli believed this. After several rounds of trust-building conversations, Terdale learned that Koli was a victim of domestic violence and sexual abuse because her first child had been female. “My in-laws taunt me every day for giving birth to a girl,” Koli told her. “It has been so traumatic that I won’t be able to survive if it’s another girl child.” 

Afterward, Koli’s requests for a prenatal sex determination became more frequent, and Terdale decided to turn to the most accessible and discreet way to help her: WhatsApp. She sent Koli “scientific videos of what decides the biological sex of a child,” but “none of it made sense to her,” says Terdale. “The videos were in English, but I am sure the animation helped to a certain extent.” After further digital and in-person interventions, Terdale was finally able to convince Koli she wasn’t responsible for the sex of the child.

Terdale is one of many ASHAs across the country who are turning to WhatsApp as a means to combat medical misinformation and navigate sensitive medical situations, particularly regarding pregnancy. Even though ASHAs weren’t trained to do this, are paid very little, and are at the mercy of India’s poor health-care infrastructure, the approach has had surprisingly good results. In 2006, India’s maternal mortality rate was 254 deaths per 100,000 live births, one of the highest in the world. By 2020, ASHAs had helped slash the maternal mortality rate by over 60%, to 96 per 100,000 live births. This is particularly significant because, for a rural population of 833 million, India has only 763 functioning district hospitals, with just under 27,000 doctors.

But the work of ASHAs can be arduous and sometimes dangerous. Even after she changed Koli’s mind, Terdale still had to convince her husband. 

“Even I was scared. He abused whoever questioned him,” Terdale recalls. His repeated pressure to get a prenatal sex-determination test was causing Koli tremendous stress; Terdale worried about what he would do next. “To birth a male child, people reach out to babas [faith healers] and quacks,” she says. 

So she used the same approach, attempting to connect directly with Koli’s husband and debunk sex-related misinformation via WhatsApp messages. He didn’t respond. Finally, a few days later, she mustered the courage to confront him in person. “He verbally abused me and even declared that no matter what happened, he wouldn’t bear any medical expenses if it were a girl,” she says.

Suraiyya Terdale portrait (left) and speaking with a community member (right)
Suraiyya Terdale, an ASHA since 2009, has saved the lives of hundreds of women by busting pregnancy-related misinformation through WhatsApp and her fieldwork.
SANKET JAIN

Over the next month, Terdale persisted—sending the husband videos about the impact of mental health on the overall well-being of an expectant mother and fetus. She also messaged him relevant news reports. After a few weeks, she increased the frequency of her messages.

He eventually changed his mind and stopped bothering Koli with the demand for a male child. But the damage was already done; Koli reported symptoms of depression.

Terdale continued to use WhatsApp to counsel Koli every few days: “When I wasn’t allowed to enter their house, WhatsApp helped me save her.”

When it comes to pregnancy, most people in India rely on the experiences of their friends or relatives for information, though “this experience-sharing becomes another potent way of sharing misinformation, especially when it’s not backed by science,” says Hemraj Patil, who has over a decade of experience in public health and previously worked with India’s National Health Mission. 

When younger women are coerced by family into following superstitions—about what foods they can and can’t eat, or that they can’t buy new clothes, leave the house, or wear bangles in the first two trimesters—ASHAs counsel the pregnant women and ensure they receive proper science-backed health care. If conservative parents stop ASHAs from entering their houses, the ASHAs can use WhatsApp to remotely support pregnant women and then ask senior doctors or other community members to visit their homes. Notably, ASHAs are also using WhatsApp to create safer spaces for women through targeted group channels, where women share their personal experiences and speak candidly in ways they can’t anywhere else. 

Maya Patil taking case notes with a mother and child in her home
ASHA Maya Patil notes the health conditions of a community woman and her newborn.
SANKET JAIN

“Ever since ASHAs started using WhatsApp to bust misinformation, I’ve seen a positive change,” notes Hemraj Patil. Last year, the World Health Organization honored ASHAs with the Global Health Leaders Award for their work on covid and in slashing India’s maternal mortality rate.

Koli is just one success story. After months of patiently counseling her, Terdale took her to the hospital to give birth in early 2022. “It was a male child,” says Terdale. “The case was no doubt challenging and risky, but I am proud I could change someone’s mind and make people think.” 

Crucially, the effects of changing one mind are not restricted to a single family. “Whenever you enter someone’s house in a village, you are not just talking to that particular member, but also the neighbors, sometimes the entire community,” Terdale says with a laugh, implying that privacy and personal space remain scarce in India’s villages. 

Today, Terdale proudly says she is in touch via WhatsApp with over 60% of the women in the villages she oversees. 

“We are health-care workers and hope for so many people. How can we fear and let them down?” Terdale asks. In many Indian languages, ASHA means hope.

“I started noting down the WhatsApp number of everyone in the community”

When she became an ASHA in 2009, Netradipa Patil, from Maharashtra’s Shirol region in western India, was immediately forced to grapple with pregnancy-related misinformation and superstitions. 

During her field visits back then, Patil saw a few young women using WhatsApp. “I started noting down the WhatsApp number of everyone in the community,” she says. “Every day, many people would send ‘good morning’ wishes to me.” By 2014, she had started to think: if they were already connecting on the messaging app, why not tackle misinformation there, too? 

Such work would go above and beyond Patil’s job requirements. In 2005, the Indian government launched the National Rural Health Mission to improve maternal and infant health. Under this program, ASHA workers were appointed in 18 states; by 2009, the program had expanded to all 28 states. ASHAs, though, are technically volunteers: they are not paid a fixed salary but rather receive “performance-based incentives” for completing tasks. In Maharashtra, for instance, they are paid just 1,500 Indian rupees ($18.50) for maintaining detailed records of every community member, and 250 rupees (about $3) for nine months of prenatal care for one patient and for facilitating hospital delivery. Payment is often delayed. 

“We aren’t paid well,” Patil notes, “but that has never stopped us from saving lives.” 

Despite the increased workload and the inadequate (or sometimes nonexistent) compensation for internet charges, Patil decided to try using WhatsApp in her work. “Before directly busting any misinformation, I started posting general bits of advice from doctors regarding pregnancy,” she says. To her surprise, many younger women replied to her personal messages and even thanked her.

Maya Patil standing outdoors in a group of young women and children and holding up an illustrated book
Along with WhatsApp, ASHA workers also rely on books and articles to reach the most vulnerable and marginalized people in the community. Here, Maya Patil talks to a group of migrant sugarcane cutters.
SANKET JAIN

She then experimented by tackling the superstition that if a woman reveals her pregnancy to any health-care worker in the first trimester, she will face complications and be at risk of miscarriage. A few women pushed back on her debunking, though many supported her. 

Patil began spending several hours a day responding to all the doubts and apprehensions of community women. “It did take a lot of my time, but after two weeks, I saw a woman agreeing,” she says. 

Patil, who is also a union leader of over 3,000 ASHAs, invited a few hundred workers from nearby villages to discuss how to use the technology. “I shared my experience of using WhatsApp and asked ASHAs to start experimenting in their communities,” she says. Many reported positive results, and their work picked up momentum in 2017 when WhatsApp introduced a feature to share photos and videos as a status.

The first time Patil posted a WhatsApp status—a motivational quote—she thought it was just another distraction in her long workday. An hour later, over 100 people had seen it. Just before the 24-hour mark, at which point the status gets archived, over 500 people had viewed it. 

For a few days, she shared inspiring messages in Marathi and Hindi and remembers many people replying to say they found them helpful. That encouraged Patil to scale up her work from one-on-one texts, and it also gave her a feeling of recognition from her community. 

She experimented more from there. One day, she shared an infographic of basic health-care precautions for pregnant women. “It got a tremendous response,” she says. “Many pregnant women wrote to me saying the health-care chart was beneficial, and they had even taken a screenshot.”

It has since become something of a best practice for ASHAs to share visually rich articles and posters via WhatsApp. “These drawings or photos stay in people’s minds,” says Patil. “Instead of sending a long message, we condense the information in a single flowchart or use infographics, and it does help.” 

Hands holding a cell phone with a medical brochure on the screen.  Various medicines seen on the table in the background.
Netradipa Patil often takes photos of the informational posters in Shirol’s rural hospital to share with a WhatsApp group or as a status.
SANKET JAIN
close up of Maya using a pen to point at infant mortality statistics on a leaflet
Maya Patil explains information about malnourishment.
Hands holding out pill packets
ASHAs often distribute iron and folic acid supplements and calcium tablets to women.

Another way ASHAs make their responses particularly effective and persuasive has been by sharing case studies of real patients who have followed their advice. “Give an example of someone who is either their friend or someone they trust,” says Terdale, the ASHA who worked with Koli. As a result, she says, the number of cases of people “blindly following superstitions and misinformation came down … Moreover, several people who benefited from our advice support us. So, there’s no fear of any backlash because we have a much stronger support system.” 

Over the past five years, Patil has trained hundreds of ASHAs from different states to use WhatsApp to debunk false information. 

Maya Patil, an ASHA from Maharashtra’s Kutwad village, says she’s noticed similar positive results after using WhatsApp. She’s been working in the field for 13 years, and in 2018 she met a woman in her ninth month of pregnancy whose hemoglobin levels were falling and who had recently been diagnosed with anemia. She tried to connect the woman with a doctor in the public health system, but the family wanted her to use natural methods to increase her hemoglobin levels.

Patil asked the pregnant woman to start drinking pomegranate juice, which has been proven to increase hemoglobin levels, but her mother said pomegranate juice causes kidney stones. Patil tried for several hours to explain the science, but the family wasn’t convinced, nor were they interested in anemia medications.

As a habit, Patil had been taking photos of hundreds of regional newspaper articles addressing common health misinformation that were written by doctors. In one, she found details about the benefits of pomegranate seeds and juice. She sent the pregnant woman the article in a WhatsApp message. Then she found more relevant YouTube videos recorded in Marathi, the woman’s language. After 10 such messages, she finally had an impact; the family allowed the woman to follow her advice, and within 12 days, her hemoglobin levels had increased. 

They worked together for three weeks, and when the woman gave birth, it was a normal delivery with a healthy newborn weighing six-and-a-half pounds.

Creating a safer space for women

Though they had successfully addressed a great deal of misinformation over several years, many ASHAs were still seeing pregnant women who were too scared to talk about their pregnancies for fear of their in-laws and husbands. Even in big, ASHA-led group messages, many men in the community responded with “ill-informed comments,” says Netradipa Patil, the ASHA union leader.

Maya Patil similarly laments the persistence of dangerous medical information passed down by family. “The primary goal of any fake news related to pregnancy is to make women suffer,” she says. “Many older women say that they had suffered these rituals during their pregnancy, so why should the next generation not face this?” 

Maya Patil sits speaking with one-on-one with an expectant mother
Along with ensuring safer childbirth, ASHA workers are also responsible for providing proper postnatal health care to community women. Here, Maya Patil explains how to take care of a newborn.
SANKET JAIN

So, in 2018 and 2019, ASHAs started to form hyperlocal all-women WhatsApp groups. With a smaller group of just 15 to 20 pregnant women and their close female relatives, Netradipa Patil would focus on helping them understand the scientific aspects of care. “It was difficult, but easier than dealing with hundreds of people in one go.” After six months of test runs, women in the groups even reported talking about misinformation in their households. 

Patil and several other ASHAs have created multiple groups: some are limited to a household, some include entire villages, and others are meant only for pregnant women or only for ASHA workers and their supervisors.

The topics of conversation in these groups now go beyond health care; women share their dreams for the future, or ask ASHAs about how they can become financially independent or start small businesses. Many women also discuss workplace exploitation and ask ASHAs how to deal with it, or they ask about how to benefit from government welfare programs. These groups are particularly beneficial “when freedom is so restricted in many rural houses,” says Terdale. 

ASHAs say one of their most important tasks is ensuring women aren’t abused for confronting traditional beliefs. Particularly in cases of family conflicts, many ASHAs use very careful and specific language to communicate with women. “Sometimes during fieldwork, we use a code language [with patients], which often means that there’s some family or medical issue which needs to be discussed personally,” Patil says. “We have been working for over a decade and have built a bond with everyone. None of this could have been possible if the community members [didn’t] trust us.”

Patil recalls one particularly dangerous case. Saniya Makandar, a woman with a high-risk pregnancy, was in an interfaith marriage that wasn’t accepted by their families, and many ASHAs wouldn’t work with her because they “feared attacks from religious fanatics,” Patil says. (To protect her safety, Makandar is being identified by a pseudonym.)

Patil had to build trust with Makandar and ensure her safety during treatment, even as frequent family clashes and religious fights weighed on her. Soon, Makandar opened up about her precarious condition. She didn’t know if she’d received certain vaccines, and she reported swelling in her legs, high blood pressure, extreme weakness, and even suicidal thinking. Patil found that her hemoglobin level had dropped to 7 grams per deciliter, at a time when it should have been 12 to 16. 

A smiling woman playfully lifts a toddler up in the air
Terdale plays with Hirabai Koli’s son during a visit with Koli.
SANKET JAIN

Low hemoglobin during pregnancy remains a problem across India, but in Makandar’s case, misinformation made it more difficult to address. Patil discovered that she was eating only wheat flatbreads, due to a local superstition that the diet was healthy. While Patil prepared a proper diet chart for her, visiting her house daily wasn’t feasible because of the backlash Patil might face from her own Hindu community. So she decided, again, to turn to WhatsApp. “Every day, I started sending photos, videos, and articles on what food to eat.”

But just addressing the health myths wasn’t enough. So every day, Patil followed up with simple messages via WhatsApp, like, Are you feeling better today? Or, Is there something you want to share? 

Such questions from ASHAs have had a tremendous impact on many women like Makandar, who had never opened up about their pregnancies, or their families and futures, before they began sharing their problems with the ASHAs and women in their WhatsApp groups. 

After two months of intense work with Patil, Makandar’s health improved, and she gave birth to a healthy baby via cesarean section at the public district hospital. 

“A message can save someone’s life,” says Terdale, “and we see it happening almost every day.” 

Sanket Jain is an independent journalist and a documentary photographer based in India’s Maharashtra state. His work has appeared in more than 30 publications. He tweets at @snktjain.

Why it’s so hard to tell porn spam from Chinese state bots

China Report is MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

A few weeks ago, at the peak of China’s protests against stringent zero-covid policies, people were shocked to find that searching for major Chinese cities on Twitter led to an endless stream of ads for hookup or escort services in Chinese. At the time, people suspected this was a tactic deployed by the Chinese government to poison the search results and prevent people from accessing protest information. 

But this spam content may not have had anything to do with the Chinese government after all, according to a report published on Monday by the Stanford Internet Observatory. “While the spam did drown out legitimate protest-related content, there is no evidence that it was designed to do so, nor that it was a deliberate effort by the Chinese government,” wrote David Thiel, the report’s author. 

Instead, the accounts were likely just the usual commercial spam bots that have plagued Twitter forever. These particular accounts exist to attract the attention of Chinese users who go on foreign networks to access porn.

So the “significant uptick” in spam was just a coincidence? The short answer is: very likely. There are two major reasons why Thiel does not think the bots are related to the Chinese government.

First of all, these accounts have been posting spam for a long time. And they sent out even more tweets, and more consistently, before the protests broke out, according to a data analysis of the activities of over 600,000 accounts from November 15 to 29. Another analysis shows they’ve also continued to push out spam even as discussions of the protests have died down. 

Check out these two charts (for reference, the protests peaked around November 27):

A line chart showing consistent spam tweets between November 15 and November 29. Above the chart it says this is an analysis of 7,541,382 total tweets.
A line chart showing increasing spam tweets between November 29 and December 4. Above the chart it says this is an analysis of 6,088,596 total tweets.

So did it just feel as if spam activity spiked during the protests? This graph shows that many more bot accounts were in fact created in November: 

A bar chart showing that spam accounts created in November largely outnumber accounts created in previous months.

But Thiel emphasizes that content moderation takes time, and people tend to overlook survivorship bias: older spam content and accounts are constantly being removed from the platform, but researchers don’t have data on suspended accounts. So a graph like this one shows only the accounts that survived Twitter’s spam filters. That’s why November’s spike looks so big: these are the newest accounts, created to replace their suspended peers, and they are still standing. Not all of them will survive, so many would be gone if we revisited this graph in, say, a few months. In other words, if you conducted a data analysis right after the protests, it would certainly seem as if this kind of spam had just started recently. But that’s not necessarily the full truth.  
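A toy simulation makes the effect concrete. Under the deliberately simple assumptions below (a constant creation rate and a fixed monthly suspension probability, both invented for illustration), a snapshot still shows a dramatic spike in the most recent month:

```python
import random

random.seed(0)
CREATED_PER_MONTH = 1000   # same number of spam accounts created every month
MONTHLY_SURVIVAL = 0.5     # each account has a 50% chance of surviving each month

# Snapshot at the end of month 12: how many accounts created in each
# month are still visible to a researcher?
for month in range(1, 13):
    months_exposed = 12 - month
    survival_probability = MONTHLY_SURVIVAL ** months_exposed
    survivors = sum(random.random() < survival_probability
                    for _ in range(CREATED_PER_MONTH))
    print(f"created in month {month:2d}: {survivors:4d} still visible")

# Even though creation is flat, the latest month dominates the snapshot,
# reproducing the "spike" a naive analysis would read as a new campaign.
```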

Secondly, if the spam accounts were meant to bury information about the protests, they did a pretty poor job. While the escort-ad spam featured many Chinese city names as keywords and hashtags, Thiel found that it did not target the hashtags actually used to discuss the protests, like #A4Revolution or #ChinaProtest2022, “which is what you would assume the government would be interested in jumping on if they were trying to silence things,” he tells me. Of the roughly 30,000 tweets he analyzed containing these more influential hashtags, “there’s no spam to speak of in there.”

“People tend to jump to a state explanation for things just because the content is in Chinese,” he says. “Sure, China’s done tons of online inauthentic operations before. But I don’t think the default assumption should be [that] the state is behind this.” 

Given all this, Thiel believes that the porn ads during this time were probably just run-of-the-mill commercial spamming, which can actually be quite lucrative. Because of stricter porn censorship on domestic platforms, Chinese people often seek alternative sources of porn, including unexpected outlets like Steam, or simply use a plain old VPN to access international platforms like Twitter, which is known for being one of the mainstream platforms more tolerant of sexual content. 

That makes Twitter a prime space for sex-work ads—and, of course, scams. Reporters from the New York Times talked to an online advertising company behind such spam, which charged $1,400 for a monthlong campaign. Some of these accounts may lead to real sex services or access to “premium group chats,” where porn content is shared. Others are fraudulent; as Chinese internet users have exposed, they may ask you to pay upfront online for potential services, in the form of things like “transportation fees.” Once they extract as much money as possible from you, the scammers will cut off all communications. In fact, there are even Twitter accounts in Chinese (NSFW!) dedicated to exposing such scammers and the relevant accounts. 

But not everyone knows the context of how Twitter is used by Chinese people to access porn, or that such spam has existed for a long time. So I don’t blame anyone for suspecting that the government was involved. In the end, I think there are two main reasons why people easily bought the assumption that the spam accounts were part of China’s propaganda machine.

As Thiel said, the Chinese government has been behind many Twitter manipulation campaigns in the past, deploying fake personas, automated activities, and targeted harassment. Back in 2019, for instance, it used spam accounts to disseminate pro-China messages and attack Hong Kong pro-democracy protesters. Some of those accounts had posted extensive porn content—sounds familiar, huh?

But Elise Thomas, a senior analyst at the Institute for Strategic Dialogue who analyzed the 2019 campaign, tells me that was a totally different situation. She found bot accounts that had been used for commercial porn spam and were later sold to Chinese government actors to push political messages, without deleting the account history: “They might buy old commercial accounts, and some of the commercial accounts had done porn, spam, cryptocurrency, and all sorts of other stuff.” So it was not the Chinese government that was deliberately posting porn, but the previous owners of the bots.

Obviously, the state’s tactics could evolve, but it’s important not to give the state too much credit for its capacity to meddle with social media.

Last but not least, it’s just generally hard to tie any social media activity to a foreign government when researchers don’t have access to internal company analytics. 

“Only social media companies can definitively link social media accounts to the Chinese government based on technical indicators to which they only have access. It is very difficult to distinguish between random accounts and possibly state-affiliated ones based solely on open-source methods,” says Albert Zhang, who researches Chinese disinformation at the Australian Strategic Policy Institute. “We make probabilistic assessments based on behavioral patterns found in previous Chinese government campaigns that Twitter and Meta have publicly disclosed.” 
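To give a flavor of what those open-source behavioral assessments involve, here is a minimal sketch of one common heuristic: flagging groups of accounts that post identical text within minutes of one another. The data, field names, and thresholds are all illustrative assumptions, not any researcher’s actual pipeline:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy coordination check: flag groups of accounts posting identical text
# within a short window of each other.
tweets = [
    {"account": "user_a", "text": "Visit beautiful Xinjiang!", "time": datetime(2022, 11, 1, 9, 0)},
    {"account": "user_b", "text": "Visit beautiful Xinjiang!", "time": datetime(2022, 11, 1, 9, 2)},
    {"account": "user_c", "text": "Visit beautiful Xinjiang!", "time": datetime(2022, 11, 1, 9, 3)},
    {"account": "user_d", "text": "lunch was great today",    "time": datetime(2022, 11, 1, 9, 5)},
]

WINDOW = timedelta(minutes=10)
by_text = defaultdict(list)
for t in tweets:
    by_text[t["text"]].append(t)

for text, group in by_text.items():
    accounts = {t["account"] for t in group}
    spread = max(t["time"] for t in group) - min(t["time"] for t in group)
    if len(accounts) >= 3 and spread <= WINDOW:
        print(f"possible coordination: {sorted(accounts)} posted {text!r} within {spread}")
```

Real investigations combine many such weak signals (creation dates, posting cadence, shared infrastructure) precisely because, as Zhang says, no single one is conclusive without the platform’s internal data.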

Before Elon Musk acquired Twitter, it was one of the best social networks in terms of being transparent to outside researchers and sharing data with them, according to the researchers I spoke with. But even then, Twitter still withheld the internal data it used to determine whether an account was linked to a foreign government. 

Now, as the platform gets into bigger messes, this kind of academic collaboration is increasingly endangered. “That’s the big unknown right now. Normally with this kind of situation, we would be working with Twitter and seeing if they had seen this campaign, seeing what might be able to be done to tamp it down and prevent this kind of thing,” Thiel tells me. But after the mass exodus of Twitter staffers, none of the employees who used to work with the Stanford Internet Observatory remain on the team. These researchers have no direct contact at the company now.

Identifying and exposing foreign governments’ influence campaigns is already a hard job. Without the collaboration between tech platforms and researchers, it will be even more difficult to correctly hold governments accountable. Will it ever get better under Musk?

Did you think these accounts were linked to the Chinese government? Why or why not? I’d love to hear your thoughts at zeyi@technologyreview.com.

Catch up with China

1. China announced the first two deaths from covid since disbanding much of its zero-covid infrastructure. (Associated Press)

  • But many more deaths have likely gone unreported. One crematorium worker in Beijing said the facility had received over 30 bodies with covid in one day. (Financial Times $)

2. China is planning to pour another 1 trillion yuan ($143 billion) into subsidizing domestic chip industries. (Reuters $)

3. After the Chinese government agreed to let the US audit whether some Chinese companies are making military products, the US Commerce Department added 36 Chinese entities to the trade blacklist—but, in a win for China, removed 25 from the unverified list. (Financial Times $)

4. Using jokes, old photos, and protest news, Chinese Instagram meme accounts are creating a bridge between diaspora communities and Chinese youths at home. (Wired $)

5. Both national and state lawmakers in the US are pushing to ban TikTok from government phones. (South China Morning Post $)

6. A Chinese company tried to launch the world’s first methane-fueled rocket. It failed. (Space News)

7. Ford is working on a complex arrangement to build a battery factory in Michigan along with China’s battery giant Contemporary Amperex Technology—without triggering geopolitical concerns. (Bloomberg $)

8. Acting tough on China is one of the few things both parties can agree on in Washington. But Cornell government professor Jessica Chen Weiss, who spent a year in the Biden administration, is publicly challenging that consensus. (New Yorker $)

  • The Biden administration launched an interdepartmental coordination mechanism named “China House.” (Politico)

9. Writer Sally Rooney is gaining literary fans in China, both because Chinese youths see themselves in her work and because her Irish nationality has shielded her from worsening US-China relations. (The Economist $)

Lost in translation

As cities across China struggle to deal with a covid infection surge, OTC fever medicine has become the hottest commodity. But how did such a common medicine as ibuprofen sell out so widely and so fast? 

Industry insiders told Chinese health-care news publication Saibailan that many domestic pharmaceutical companies were disincentivized from manufacturing ibuprofen this year because until China relaxed its covid control measures in December, Chinese citizens were heavily restricted from purchasing fever medicine. Even though demand is up now, the ibuprofen supply chain needs time to recover and respond. 

To speed things up and ensure medicine supply, local governments are stepping in. Some have asked pharmacies to ration the drug and sell no more than six capsules to each customer. Other governments are even taking over pharmaceutical factories to make sure products are supplied to local patients first before they’re sold to other regions in China.

One more thing

Don’t miss the most viral Chinese internet slang of this year, a list put together by a local publication in Shanghai. The top 10 is a mix of covid-era creations like 团长 (tuan zhang), the volunteers organizing bulk-orders of groceries during Shanghai’s two-month lockdown, and social media phenomena like 嘴替 (zui ti), which means someone who can publicly speak out on things normies don’t dare to say or can’t articulate. And the top one is also the one I find most bewildering: 栓Q (shuan Q), which is really just a dramatic way to pronounce “thank you” when people feel speechless or fed up. Maybe internet trends don’t need to make sense. Just saying.

China is copying Russia’s election interference playbook

China Report is MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

Last week I was in Boston attending EmTech MIT, our signature annual event, and since then I’ve been thinking about all the interesting ideas I heard—from programming vaccines to work against different diseases to increasing access to prostheses in postwar Sierra Leone. You know, even as so many depressing things are happening around us, these conversations gave me a bit more hope for our future.

I also hosted three discussions about the global technology challenges facing the world. Obviously, a big focus was China—which, as you newsletter readers know, is one of the most important tech players today. My guests tackled crucial questions, like: Why are the recent chip export controls particularly significant? And how do we understand them from not just a geopolitical perspective—but a moral one? I also had a conversation focused on social media disinformation, which proved to be extremely timely given reports last week of China-based bot networks that were trying to influence US politics ahead of today’s midterm elections. 

Well, these conversations weren’t exactly the hopeful kind, but they gave me some needed clarity about what’s happening on the other side of the Pacific. The China news cycle has always been busy (that’s why this newsletter exists!), but it’s also good to take a beat, have a chat, and understand where we’re at regarding US-China relations. 

In case you missed the event this year, here are the China-related highlights I think you’ll be interested in:

What’s the strategy—and real rationale—behind US restrictions on China?

It has been several years since US-China relations took a clear dive, and academics and tech workers on both sides are now accepting that tensions will not resolve anytime soon. When I asked Matt Sheehan, a global technology fellow at the Carnegie Endowment for International Peace, how he feels about US-China relations today, he said he’s “on edge” because “there’re a lot of decisions being made in rapid succession with hugely uncertain outcomes.”

One of these big decisions is the Biden administration’s escalation of restrictions on chip exports to China. While people are still trying to understand the policy in real time, it has become clear that the administration’s moves are not just a matter of adding more Chinese companies or more chip technologies to a list of targets, but a change in the US government’s mindset when it comes to containing China.

For a long time, the main question in the debate over export controls on China was whether to “do as much damage as you can today versus to preserve your leverage on a longer time scale,” said Sheehan. 

The latter—continuing to sell chips and relevant technologies to China in hopes that the country won’t develop its own self-sufficient ecosystem—is what the US has been doing. But that’s going to change, according to Sheehan: “I think this latest control kind of firmly settles that debate within [Washington] DC on the side of doing damage today. People decided that leverage is eroding naturally over time anyway, and we have to use this leverage while we can.”

Photo from EmTech MIT showing speakers Yangyang Cheng, Matt Sheehan, and Zeyi Yang

But it’s also important to scrutinize the justifications for these export controls. Are they really based on addressing human rights concerns, as often claimed, or are they merely more political games? Yangyang Cheng, a fellow at Yale Law School’s Paul Tsai China Center, noted in the panel that the policies are “logically inconsistent and morally indefensible” if the reasoning “is not because building weapons is bad or building different types of surveillance systems is bad, but because I want to build better weapons and better surveillance systems.”

She’s seen the latter reasoning appear more often as China has risen as an economic juggernaut. This is a lasting trend from Obama to Trump to Biden, she noted. While there are real concerns about the increasingly frequent human rights abuses and authoritarian crackdowns in today’s China, “these issues have not been addressed by these technological competitions and tensions,” Cheng said. “However, they are being used as a rhetorical shield for the US government to advance domestic interests and geopolitical agendas.”

China has copied Russia’s election interference playbook—but may not be as good at it

The night before I talked to Renée DiResta—the technical research manager at the Stanford Internet Observatory, who has studied foreign influence on social media for years—she co-published a report on the latest foreign misinformation campaigns on Twitter.

She and her colleagues recently analyzed three China-based and three Iran-based networks of accounts that pretended to be ordinary Americans on the right or left of the political spectrum. According to data provided by Twitter, the platform removed the accounts at the end of October. 

The phony accounts’ strategy for stoking the political conflicts in an already polarized America closely resembled the activity of the fake Russian accounts that thrived before the 2016 elections—riling up partisans on both sides of the political aisle.

One of the three China-based account networks, containing just 300+ tweets, supported Democratic candidates in Florida and tweeted positively about gun control and abortion access. Another network pushed right-wing talking points, like the false claim that the 2020 election was stolen, and heavily retweeted Republican provocateurs like Representative Lauren Boebert. Of all these accounts, the most influential one posed as “Ultra MAGA BELLA Hot Babe”; the combination of soft porn and pro-Trump messaging gained it 26,000 followers, 400,000+ likes, and 180,000+ retweets over six months.

To be fair, even with clear models in past Russian influence campaigns, I’m impressed with how Chinese accounts pulled off this stunt. Besides language proficiency, it requires knowledge of Americans’ daily life, pop culture, and political reality to fake a believable persona. It’s a warning sign that they are getting better at more sophisticated manipulation of social platforms.

But at the same time, Chinese efforts were less effective in other ways. When compared with Russian interference, which focuses almost solely on issues already consuming American politics, China- and Iran-based actors are often more obvious in their geopolitical interests, DiResta said.

Another one of the China-based networks is a good example: through 1,872 accounts and 310,043 tweets (mainly in English and Mandarin), this network mostly talked about issues in Hong Kong, Taiwan, and Xinjiang. This kind of content often fails to get high engagement numbers. Sometimes, it just serves as a megaphone for state-controlled accounts. “So the point is not the bots. The bots are a tool to push forth the messaging from the real mouthpiece,” DiResta said.

So what’s the big picture? We still don’t know what Musk’s takeover will mean for Twitter, but we do know that nothing will stop foreign governments, including China, from trying to maintain their narrative on US-based social platforms. And it’s truly fascinating to identify how these governments learn from each other yet also diverge in their tactics.

Catch up with China

1. Eric Schmidt, Google’s former CEO, has become one of the most influential voices drumming up an artificial-intelligence arms race between the US and China. But he may have conflicts of interest. (Protocol)

2. Chinese officials are considering phasing out zero-covid policies, cutting down on mandatory quarantine days and the number of PCR tests required. But don’t expect it to happen overnight. (Wall Street Journal $)

  • A China correspondent for FT documented his firsthand experience over 10 days at a covid quarantine center in Shanghai. (Financial Times $)

3. China will soon approve the Pfizer/BioNTech covid vaccine (though only for expats), says German chancellor Olaf Scholz. It would be the first mRNA vaccine used in the country. (Politico)

  • The new German administration promised to be tougher on China but is divided on how far it should go. (Financial Times $)

4. The Taiwanese company Foxconn, known for making iPhones, has long said it wanted to build electric vehicles. Now it’s getting investment from Saudi Arabia to manufacture them in the kingdom. (Nikkei Asia $)

5. How the vibrant world of Uyghur-language websites and apps went silent as software developers and IT specialists in Xinjiang were taken into detention in recent years. (Wired $)

6. Lured by the promise of legitimate employment, as many as 100,000 foreigners are being held captive in Cambodia by Chinese cybercriminals and forced to run online scams. (Los Angeles Times $)

  • One of the scam products is fake LinkedIn profiles of people who pose as employees of prestigious companies and coax victims into crypto investment frauds. (MIT Technology Review)

7. China’s first message for this week’s climate summit COP27: Rich countries should give more financial aid to their developing peers. (Bloomberg $)

Lost in translation

“Zoom-bombing” is taking off in China again as Chinese classrooms move online amid local covid restrictions this year. It can have dire unintended consequences; a Chinese middle school teacher died of sudden cardiac arrest in late October after her history class was hit. The news revived discussions about a practice that was popular in 2020, in which uninvited people show up in remote meetings (sometimes serious ones!) to blast music, porn, and curses. Reporters from the Chinese publication Legal Daily joined an online community where “bombers” discuss new tactics and share information about meetings they can bomb. The majority of these members are young—born after 2000—and some volunteered access to their own remote school classes in order to disrupt them. Such activities are illegal and can be considered a criminal act, lawyers say. Digital platforms started suspending group chats for bomber communities following news of the death.

One more thing

The 1982 Bollywood song “Jimmy Jimmy Aaja Aaja” is having an unexpected cultural moment in China. Since the catchy lyric “Jimmy, Aaja” sounds similar to the Mandarin phrase “Jiemi, najia,” which means “who can lend me rice,” the song is getting dubbed under videos of people wearing Indian clothing and dancing with empty containers. You can read these videos, which received millions of views, as a satirical protest against the unpredictable local lockdowns that make basic grocery items hard to access. Or you can just appreciate this rare crossover moment between Chinese and Indian pop culture.

Coincidentally, I’m off to the grocery store, as I’m running out of rice myself. So see you next week!

Zeyi

The porcelain challenge didn’t need to be real to get views

Despite what you may have heard, the teens are not stealing their family’s fine dinnerware, tossing it in a blender, and snorting the resulting dust for the “porcelain challenge.” That’s just what Sebastian Durfee, a 23-year-old actor and TikTok creator, hoped you might believe when he spread the word on social media of the latest dangerous teen challenge. 

Never mind that it was all fake from the start.

On Saturday, Durfee posted a call to action to his followers: to work together to get “boomers to freak out about a fake TikTok challenge.” He chose the porcelain challenge—which, once again, is just a thing Durfee made up—because it seemed like something that would be plausibly dangerous, but not something “the average person could go off and do very easily,” he told me this week. Besides, it’s a catchy name. His original video quickly passed half a million views, and TikTok slapped a warning on it for promoting dangerous acts. 

Meanwhile on TikTok, Facebook, and Twitter, #PorcelainChallenge videos created by those in on the joke started asking people to spread warnings and share stories of (fake) injuries and deaths from those who tried it. One particularly clever video takes the bit to the next level, with the creator claiming that videos of people trying the challenge are being removed by TikTok moderators immediately—a way to explain why there’s no available video of any person actually doing this. 

Shortly after we spoke on Monday, Durfee sent me an update: TikTok had permanently banned his account (which had 150,000 followers), apparently as a result of this experiment. The company didn’t give him a reason for the permanent removal, but he said that they did take down two videos for “promoting dangerous behavior”: one in which he told viewers to post warnings about the porcelain challenge in local Facebook groups, and another in which he shared a screenshot of some early media coverage of his work.

TikTok confirmed on Wednesday that it had banned Durfee’s account, and said that it viewed any content, including hoaxes, that promotes dangerous behavior as a violation of its community guidelines. 

“It goes without saying that neither these videos nor any other ones I created glorified, endorsed, or depicted performing the challenge,” he emailed me on Wednesday. “The fact that they reacted to the challenge as if it was real by banning me entirely is the sort of knee-jerk widespread panic the challenge was meant to critique in the first place; the irony here is not lost on me.”

Durfee’s goal was to get views, which he got in spades before his account was banned. It was also to examine how attention and outrage work online. If a content creator performs all the parts of a moral panic, will the fact that the challenge itself is a complete fiction actually change anything about its spread?  

I’ve reported on moral panics about The Children many times over the years. Right now, it’s the season when people annually freak out about the possibility of THC-laced candies in their kid’s trick-or-treat stash. This fear, along with the many other warnings that deadly candies might be handed to kids by the sadist next door, has thrived every fall for decades merely on possibilities and what-ifs: dig into the “proof” cited by those pushing these warnings and you’ll find that it doesn’t stand up. 

It doesn’t need to. Social media often works by reflex. Content that does well practically begs to be shared right away, reality be damned. And in the case of teen challenges and dangers to children, those warnings are often passed along by sources that carry some authority in their communities: the Facebook pages of local law enforcement, local media, or school officials. 

“I’ve dabbled in the past with trying to make fake news that is transparent about being fake but spreads nonetheless,” Durfee said. (He once, with a surprising amount of success, got a false rumor started that longtime YouTuber Hank Green had been arrested as a teenager for trying to steal a lemur from a zoo.)

On Sunday, Durfee and his friends watched as #PorcelainChallenge gained traction, and they celebrated when it generated its first media headline (“TikTok’s porcelain challenge is not real but it’s not something to joke about either”). A steady parade of other headlines, some more credulous than others, followed. 

But reflex-dependent viral content has a short life span. When Durfee and I chatted three days after he posted his first video about the porcelain challenge, he could already tell that it wasn’t going to catch on as widely as he’d hoped. RIP. 

Nevertheless, viral moments can be reanimated with just the slightest touch of attention, becoming an undead trend ambling through Facebook news feeds and panicked parent groups. Stripping away their original context can only make them more powerful. And dubious claims about viral teen challenges are often these sorts of zombies—sometimes giving them a second life that’s much bigger (and arguably more dangerous) than the first.

For every “cinnamon challenge” (a real early-2010s viral challenge that made the YouTube rounds and put participants at risk for some nasty health complications), there are even more dumb ideas on the internet that do not trend until someone with a large audience of parents freaks out about them. 

Just a couple of weeks ago, for instance, the US Food and Drug Administration issued a warning about boiling chicken in NyQuil, prompting a panic over a craze that would endanger Gen Z lives in the name of views. Instead, as BuzzFeed News reported, the warning itself was the most viral thing about NyQuil chicken, spiking interest in a “trend” that was not trending.

And in 2018, there was the “condom challenge,” which gained widespread media coverage as the latest life-threatening thing teens were doing online for attention—“uncovered” because a local news station sat in on a presentation at a Texas school on the dangers teens face. In reality, the condom challenge had a few minor blips of interest online in 2007 and 2013, but videos of people actually trying to snort a condom up their nose were sparse. In each case, the fear of teens flocking en masse to take part in a dangerous challenge did more to amplify it to a much larger audience than the challenge was able to do on its own. 

The porcelain challenge has all the elements of future zombie content. Its catchy name stands out like a bite on the arm. The posts and videos seeded across social media by Durfee’s followers—and the secondary audience coming across the work of those Durfee deputized—are plausible and context-free. 

But in a new twist, it is Durfee’s attempt to now mess with the people who are in on the joke that may return to chase the living. 

When Durfee realized that he’d found a large audience of people who were really enthusiastic about dunking on boomers by helping to spread the porcelain challenge, he thought it was time to shift tactics a bit. “I decided, what if I get all these people who think they’re in on the joke to actually be the punchline of the next part of it?” he said. 

So he posted a video on TikTok that appeared to show a Fox News headline about the porcelain challenge, along with a video clip of someone who appeared to be on the conservative channel ranting about the decline of morality. The headline, of course, was fake, the clip taken from its original context. But Durfee’s video, which featured him pointing at the headline as if he couldn’t believe how well his prank was working, ended up on Reddit’s r/facepalm subreddit. The post has more than 16,000 upvotes. The Reddit post’s title makes no nod to the fact that the Fox News segment is faked: “this guy made up a fake ‘TikTok challenge’ and FOX news made an article about it in three days.”

Just before we spoke, Durfee followed up with another faked video, purportedly showing the porcelain challenge on CNN. This, too, was fictional. And many viewers of his TikToks were fooled by this one as well. 

“I was getting many, many comments from people saying stuff like ‘Oh, does no one check sources anymore? This is so embarrassing that they would post something without fact checking,’” he said. “And the irony is that the thing they were watching and commenting on was entirely fictional.” 

No one in this fake challenge, it turns out, was immune to the impulse to share. Durfee’s followers were enticed by the excitement of dunking on boomers, and that promise became a lure, the faked Fox News clip as shareable as a warning about NyQuil chicken.

I checked in with Durfee on Wednesday morning. He said his account was still banned. But many of the videos, Facebook posts, and tweets about the dangers of the challenge he made up remain online. Unlike Durfee’s content, many of these videos and posts provide absolutely no indication that the challenge itself is fiction. So there they sit, awaiting the end of the porcelain challenge’s online life span. The perfect candidates for reanimation.