Yes, remote learning can work for preschoolers

The other day some preschoolers were pretending to be one of their favorite Sesame Street characters, a baby goat named Ma’zooza who likes round things. They played with tomatoes—counting up to five, hiding one, and putting it back. 

A totally ordinary moment exploring shapes, numbers, and imagination. Except this version of Sesame Street—called Ahlan Simsim (Welcome Sesame)—was custom made for children like these: Syrian refugees living in camps in Lebanon who otherwise don’t have access to preschool or, often, enough to eat. 

Educational interruptions due to the pandemic, climate disasters, and war have affected nearly every child on Earth since 2020. A record 43.3 million children have been driven from their homes by conflict and disasters, according to UNICEF—a number that doubled over the past decade.

And yet, points out Sherrie Westin, the head of the nonprofit that produces Sesame Street, “less than 2% of humanitarian aid worldwide goes to the early years”—that is, specifically supporting care and education, not just food and medicine. 

Sesame Workshop created the TV show Ahlan Simsim (seen on screen) for children who have been displaced from their homes or experienced conflict.
RYAN HEFFERNAN/SESAME WORKSHOP

That may be about to change. The Ahlan Simsim program is the largest-ever humanitarian intervention specifically intended for small children’s development. Sesame Workshop partnered with the International Rescue Committee, a humanitarian-aid nonprofit, to win a $100 million grant competition administered by the MacArthur Foundation. The results, released in May 2023 but not yet peer reviewed, have been startling: they have provided the first evidence that 100% remote learning can help young children in crisis situations. And the format has already been successfully copied and used in other crises. 

The program combines video content produced by Sesame with services from the IRC, which employs a combination of volunteers from the affected community and professional teachers and parent educators to work locally with families. Over the past few years, 2 million children and their caregivers watched Ahlan Simsim and received coordinated services, some of which were provided entirely over mobile phones. Another 25 million simply watched the show. 

In 2023, Hiro Yoshikawa and his team of researchers at New York University showed in a randomized controlled trial that Syrian refugee children taking part in an 11-week, fully remote learning program, combining Ahlan Simsim videos with live support from local preschool teachers over cell phones, made progress in learning comparable to the results of a year of standard in-person preschool. 

And the learning they measured wasn’t just academic. Children made progress in overall development, emerging literacy, emerging numeracy, motor skills, social-emotional skills, and even the quality of play—like pretending to be Ma’zooza the goat. 

“I’m pretty impressed,” says Kathy Hirsh-Pasek, an expert in early child development at Temple University, who was not involved with the research. Compared with in-person preschool, “this is probably not the full nutritional value,” she cautions. “But nicely done—to even bring them anything in this setting is kind of amazing.” 

Sesame and IRC hope that holistic intervention can help the world’s most vulnerable kids cope with toxic stress—the kind that can, if unchecked, change the architecture of a developing brain. “We see so many children that just because of the circumstances of their birth—born into crisis, into conflict—the odds of them achieving their full potential are reduced,” says Katie Murphy, the director of early-childhood development and strategic initiatives at the IRC, who was closely involved with the project. “Our work tries to reduce that gap.” 

With the right support from caregivers and communities, Murphy and her colleagues believe, more children around the world can grow up resilient amid crisis, displacement, and war. 

Coping with discrimination, conflict, and hunger 

At a refugee camp in the agricultural Beqaa Valley in eastern Lebanon, Amal, Hana, and Mariam, three Syrian refugee mothers who participated in the program, sat on a carpet in a nearly empty tent, wearing headscarves and patterned robes. Hana’s four-year-old son reclined on her lap. There had been a recent security incident at their home camp, so we Zoomed from a neighboring camp. 

As the economic crisis worsens here, the mothers I spoke with reported feeling growing resentment and sometimes discrimination from their Lebanese neighbors. They said they are nervous just using their UNHCR aid cards—which mark them as refugees—to buy food at the supermarket. And their kids are sometimes bullied as well. 

“There are children who are saying ‘Oh, you are Syrian,’ and they want to fight with them,” Sou’ad, a refugee with four children under seven who also volunteers for the IRC, told me through an interpreter in a separate interview. “The sense of belonging is needed. The children don’t know which country they belong to—to Lebanon or to Syria. They say, ‘This is not our country—this is the country of these other children, so that’s why they are fighting us.’” 

Syrians are the largest displaced population in the world. Seven out of 10 have fled or been forced from their homes since the outbreak of the civil war in 2011. The women I’ve spoken with have been here for around a decade; their children were born in Lebanon, but they aren’t permanent residents and have no path to citizenship. 

In August 2020, a giant explosion at the port of Beirut complicated and worsened Lebanon’s brutal economic and political crisis. The country is currently without a president, and tensions have only increased with the Israel-Hamas war. As of October 2023, one-fourth of Lebanon’s population was food insecure, including 36% of Syrian refugees.  

These days, the mothers at the camp tell me, there isn’t enough money even to give the children milk or labne, the local variation of yogurt; they say they have only rice, pita, and a little za’atar spice to sprinkle on top. Children as young as eight are going to work in the fields for food. 

Pivoting under pressure

When the pandemic hit, the remote-schooling intervention in Lebanon had to be redesigned from scratch. The original plan was that IRC would provide in-person preschool, and Sesame would create complementary TV episodes and short videos featuring Ma’zooza the goat alongside Jad, a yellow monster, and Basma, his purple friend. But in early 2020, when schools around the world had to go remote, the project did too.

Remote-learning programs during disaster and war are nothing new. During World War II, BBC School Radio broadcast lessons to up to half the students in Britain. But the advent of mobile 4G and 5G networks around the world has presented whole new opportunities for learning in crises. Russia’s invasion of Ukraine in 2022 created one of the biggest and fastest displacements of families in history; Ukraine declared just a two-week school holiday before resuming public education online, with many students joining in on mobile phones from across the country and around the world.  

After covid lockdowns ended, the International Rescue Committee and Sesame resumed their in-person preschool program, where Lebanese children learned alongside Syrian refugees.

But that’s a tall order in Lebanon, where telecommunications infrastructure and the power grid have deteriorated along with the economic situation. As of January 2021, about two-thirds of the population had a mobile connection and 78% were internet users, according to Datareportal. The Syrian refugees reached by IRC programming typically had one prepaid mobile phone per household; the group gave them recharge cards to cover their data services.

Nevertheless, the approach was “extraordinarily successful,” says Yoshikawa, the lead researcher in the study. He points to the unusual level of commitment from family members, primarily mothers, who took time out of their household duties for a rare and prized opportunity to focus on their children’s learning. “When we interviewed the caregivers,” he says, “they really perceived this as the chance to have full access to preschool, which Syrian refugee families typically don’t have.”

Preschool teachers dropped off basic writing materials and art supplies. They called groups of five or six families at home three times a week. In these calls, they spent about five minutes greeting and engaging the children directly, and the rest of the time instructing caregivers on how to convey lessons and lead activities with the children. Families could message the teachers, as well as each other, for support. And families would send videos back to the teachers via WhatsApp, documenting what the children had practiced. 

Caregivers reported going up to their roof for a better signal. They charged smartphones at neighbors’ homes when the power went out, and when they ran out of data they borrowed phones from extended family members so the children could keep watching assigned videos.

More than a third of the mothers were illiterate, so often the teachers would have to show them how to write basic letters to teach their children, or send them voice memos if they couldn’t read messages. “We were empowering them both,” says Awada, one of the teachers. 

“That interaction with their children, I believe, was one of the reasons the study’s results were so powerful,” says Westin. 

Dima, a refugee and mother of three who also works as an outreach volunteer with the IRC, helps recruit and support families to engage with the Ahlan Simsim program. She says most children had no previous access to the sort of educational materials provided—things like scissors, tape, and crayons. The IRC also distributed necessities like soap, lice medicine, and toothbrushes. 

Mariam, one of the mothers in the camp, has two girls, five and four years old, and her greatest wish is that they get an education. She herself stopped her schooling in the sixth grade. “Reading and writing,” she said through an interpreter, “is the most important thing in life.”

A focus on resilience

Sesame Street premiered in the United States in 1969 with a social mission born out of the civil rights movement and President Lyndon Johnson’s Great Society: to level the playing field for poor kids by bringing early learning into the home for free. 

The show debuted its first foreign-language co-productions in Brazil and Mexico just three years later; there have been a total of 42 international co-productions over the years. A meta-analysis of studies with over 10,000 children in 15 countries found that these programs have had significant positive effects on children’s mastery of reading and basic math concepts, as well as their social-emotional skills and attitudes toward out-groups.  

An Arabic version of the show (Iftah Ya Simsim/Open Sesame, which many of today’s parents in the region grew up with) ran from 1979 to 1989. But Ahlan Simsim is the first production created deliberately for children affected by crisis and conflict, and that necessitated some special sensitivity.

The social-emotional curriculum for the show had to be designed from scratch for the cultural context and needs of these children, says Shanna Kohn, the director of international education at Sesame Workshop. “We went in with the idea of a show that focused on resilience—a beloved Western concept. And we brought that to this team of academics and Arab advisors, and there was a lot of skepticism. There isn’t even a clear Arabic translation,” says Kohn. 

So the team backed up and started with the basics. They had to figure out how to present relatable stories—about Jad leaving home and feeling different from his friends—without introducing situations or concepts that might be triggering for young viewers. 

Elmo with children in a classroom in Saida, Lebanon.
RYAN HEFFERNAN/SESAME WORKSHOP

“Boats are usually a go-to for preschool children,” says Scott Cameron, who has been with the company for 25 years. “We avoided things like that, for obvious reasons.” They also avoided loud noises, like thunderstorms. They skipped nutrition lessons, because kids who are barely getting enough to eat can’t use reminders about fruits and vegetables. 

Kids who are traumatized often respond with an outward numbness; the research team found that the children were using only two or three terms—happy, sad, angry—to describe their feelings. To help them process these feelings and frustrations, the show defines the Arabic words for nine emotions: caring, fear, frustration, nervousness, hope or determination, jealousy, loneliness, and sadness. Jad and Basma model emotional coping strategies: belly breathing, counting to five, “moving it out,” “drawing it out,” asking for help, and making a plan. 

Sesame and the IRC are hoping that the evidence from this study becomes a mandate. The Lego Foundation supported a version of Ahlan Simsim for Rohingya refugees in Bangladesh. Sesame and the IRC have piloted the remote preschool program in Iraq and hope to scale it in that country, where there aren’t resources for in-person preschool. 

And they adapted some of the video content for rapid response to 2023’s floods in Libya and earthquake in Morocco. Westin’s hope is that the world will begin to see both the need and the opportunity. “Those who have the most to lose,” she says, “receive the least and have the most to gain if we can reach them with nurturing care and growth.” 

Anya Kamenetz is a freelance education reporter who writes the Substack newsletter The Golden Hour.

Four lessons from 2023 that tell us where AI regulation is going

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley.

In the US and elsewhere, 2023 was a blockbuster year for artificial intelligence and AI regulation, and this next year is guaranteed to bring even more action. On January 5, I published a story with my colleagues Melissa Heikkilä and Zeyi Yang that lays out what we should expect in the coming 12 months in AI policy around the world. 

Most broadly, we are likely to see the strategies that emerged last year continue, expand, and begin to be implemented. For example, following President Biden’s executive order, various US government agencies may outline new best practices but empower AI companies to police themselves. And across the pond, companies and regulators will begin to grapple with Europe’s AI Act and its risk-based approach. It certainly won’t be seamless, and there’s bound to be a lot of discussion about how these new laws and policies actually work in practice. 

While writing this piece, I took some time to reflect on how we got here. I think stories about technologies’ rise are worthy of reflective examination—they can help us better understand what might happen next. And as a reporter, I’ve seen patterns emerge in these stories over time—whether it’s with blockchain, social media, self-driving cars, or any other fast-developing, world-changing innovation. The tech usually moves much faster than regulation, with lawmakers increasingly challenged to stay up to speed with the technology itself while devising new ways to craft sustainable, future-proof laws. 

In thinking about the US specifically, I’m not sure what we’re experiencing so far is unprecedented, though certainly the speed with which generative AI has launched into our lives has been surprising. Last year, AI policy was marked by Big Tech power moves, congressional upskilling and bipartisanship (at least in this space!), geopolitical competition, and rapid deployment of nascent technologies on the fly. 

So what did we learn? And what is around the corner? There’s so much to try to stay on top of in terms of policy, but I’ve broken down what you need to know into four takeaways.

1. The US isn’t planning on putting the screws to Big Tech. But lawmakers do plan to engage the AI industry. 

OpenAI’s CEO, Sam Altman, first started his tour de Congress last May, six months after the bombshell launch of ChatGPT. He met with lawmakers at private dinners and testified about the existential threats his own technology could pose to humanity. In a lot of ways, this set the tone for how we’ve been talking about AI in the US, and it was followed by Biden’s speech on AI, congressional AI insight forums to help lawmakers get up to speed, and the release of more large language models. (Notably, the guest list for these AI insight forums skewed heavily toward industry.)

As US lawmakers began to really take on AI, it became a rare (if small) area of bipartisanship on the Hill, with legislators from both parties calling for more guardrails around the tech. At the same time, activity at the state level and in the courts increased, primarily around user protections like age verification and content moderation.

As I wrote in the story, “Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.” The culmination of all this was Biden’s executive order at the end of October, which outlined a distributed approach to AI policy, in which different agencies craft their own rules. It (perhaps unsurprisingly) will rely quite heavily on buy-in from AI companies.

Next year, we can expect some new regulations to build on all this. As we wrote in our story today, Congress is looking to draft new laws and will consider existing bills on recommendation algorithms, data privacy, and transparency that will complement Biden’s executive order. States, too, will be considering their own regulations.

2. It’s not going to be easy to grapple with the harms and risks posed by AI.

While existential risk got the biggest headlines last year, human rights advocates and researchers frequently called out the harm that AI already on the market is causing right now, like perpetuating inaccuracy and bias. They warned that hyping existential risks would pull focus from dangerous realities, like medical AIs that disproportionately misdiagnose health issues in Black and brown patients.

As debates over how concerned we should be about the coming robot wars infiltrated dinner table chats and classrooms alike, agencies and local regulators started making declarations and issuing statements about AI, such as the joint statement in April from four federal agencies, including the FTC and CFPB, which warned that AI has the “potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Just how those outcomes will be monitored or prevented, however, is far from clear at this point. 

As for the tech industry itself, players are likely to continue to squabble with lawmakers over the riskiness of AI systems. The eleventh-hour discussions over the EU AI Act were hung up on a fight over foundation models, and this debate will likely continue in different arenas this year, as will debates over what uses of AI should be considered high risk and who is responsible for managing those risks.

3. AI is the next frontier for techno-nationalism and global competition.

This past year also made clear that the US approach to AI is shaped by the desire to achieve and maintain a technological advantage over China. Meanwhile, the two countries continue to escalate their trade war over semiconductors, which provide the hardware necessary for AI models.  

Beyond keeping an edge in technological prowess, the US wants to be a leader on tech regulation and compete with a regulation-happy Europe. Biden’s executive order strategically dropped just days before the UK’s AI Summit and before the final negotiations over the EU AI Act were set to take place. 

4. Watch closely what happens in the US election and those around the world. 

Of course, the US will have a big election in 2024, but so will many, many other countries. In my last Technocrat of 2023, we talked about how generative AI and other media technologies have created acute concern about an onslaught of deceitful and inaccurate information. I’m particularly interested in watching how social media platforms and politicians alike address the new threat of political disinformation as a result of generative AI. As I wrote in a story a few months ago, researchers are already seeing a negative impact.

One thing at least is sure: the rapid release of generative AI to users in 2023 will affect 2024 elections, likely in a dramatic and unprecedented way. It’s hard to really predict what may happen given how rapidly the technology is changing and how quickly users are pushing it in different and unexpected directions. So even if governments or social media companies, among others, do try to strengthen safeguards or create new policies, the way generative AI is actually used in 2024 will be critical in shaping future regulations. 

No matter what, it’s definitely going to be an interesting ride!

What I am reading this week

  • The New York Times is suing OpenAI on the grounds that it used its articles to train ChatGPT. It’s one of the biggest stories over the past few weeks that you may have missed, and I was particularly interested in the similarity between some of the ChatGPT outputs and the NYT articles, as documented in the filing.
  • Researchers at the Stanford Internet Observatory found thousands of examples of child sexual abuse material in one of the major data sets used to train generative AI. That data set has now been temporarily taken down.
  • Smart cars are being weaponized by abusive partners as tools for surveillance and tracking, according to a new story by Kashmir Hill in the New York Times. In a world where almost everything has the ability to produce geolocation data, I’m afraid these sorts of stories will be more and more common.

What I learned this week

My colleagues Melissa Heikkilä and Will Douglas Heaven published a forward-thinking piece about what’s to come for AI in 2024, and I figured you all would want a taste! They predict a year of customized chatbots, new advances in generative-AI video, AI-generated misinformation during elections, and multitasking robots. Definitely worth the read!

What’s next for AI regulation in 2024? 

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future.

In 2023, AI policy and regulation went from a niche, nerdy topic to front-page news. This is partly thanks to OpenAI’s ChatGPT, which helped AI go mainstream, but which also exposed people to how AI systems work—and don’t work. It has been a monumental year for policy: we saw the first sweeping AI law agreed upon in the European Union, Senate hearings and executive orders in the US, and specific rules in China for things like recommender algorithms. 

If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Here’s what to expect. 

The United States

AI really entered the political conversation in the US in 2023. But it wasn’t just debate. There was also action, culminating in President Biden’s executive order on AI at the end of October—a sprawling directive calling for more transparency and new standards. 

Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently. 

Next year will build on the momentum of 2023, and many items detailed in Biden’s executive order will be enacted. We’ll also be hearing a lot about the new US AI Safety Institute, which will be responsible for executing most of the policies called for in the order. 

From a congressional standpoint, it’s not clear what exactly will happen. Senate Majority Leader Chuck Schumer recently signaled that new laws may be coming in addition to the executive order. There are already several legislative proposals in play that touch various aspects of AI, such as transparency, deepfakes, and platform accountability. But it’s not clear which, if any, of these already proposed bills will gain traction next year.

What we can expect, though, is an approach that grades types and uses of AI by how much risk they pose—a framework similar to the EU’s AI Act. The National Institute of Standards and Technology has already proposed such a framework that each sector and agency will now have to put into practice, says Chris Meserole, executive director of the Frontier Model Forum, an industry lobbying body. 

Another thing is clear: the US presidential election in 2024 will color much of the discussion on AI regulation. As we’re already seeing with generative AI’s impact on social media platforms and misinformation, we can expect the debate around how to prevent harms from this technology to be shaped by what happens during election season. 

Europe

The European Union has just agreed on the AI Act, the world’s first sweeping AI law. 

After intense technical tinkering and official approval by European countries and the EU Parliament in the first half of 2024, the AI Act will kick in fairly quickly. In the most optimistic scenario, bans on certain AI uses could apply as soon as the end of the year. 

This all means 2024 will be a busy year for the AI sector as it prepares to comply with the new rules. Although most AI applications will get a free pass from the AI Act, companies developing foundation models and applications that are considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards. In Europe, the police will not be allowed to use facial recognition technology in public places unless they get court approval first for specific purposes such as fighting terrorism, preventing human trafficking, or finding a missing person. 

Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. The AI Act will require companies to be more transparent about how they develop their models, and it will make them, and organizations using high-risk AI systems, more accountable for any harms that result. 

Companies developing foundation models—the models, such as GPT-4, upon which other AI products are based—will have to comply with the law within one year of the time it enters into force. Other tech companies have two years to implement the rules. 

To meet the new requirements, AI companies will have to be more thoughtful about how they build their systems, and document their work more rigorously so it can be audited. The law will require companies to be more transparent about how their models have been trained and will ensure that AI systems deemed high-risk are trained and tested with sufficiently representative data sets in order to minimize biases, for example. 

The EU believes that the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, could pose a “systemic” risk to citizens and thus need additional work to meet EU standards. Companies must take steps to assess and mitigate risks and ensure that the systems are secure, and they will be required to report serious incidents and share details on their energy consumption. It will be up to companies to assess whether their models are powerful enough to fall into this category. 

Open-source AI companies are exempted from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. Companies that fail to comply with the rules could face steep fines or have their products blocked from the EU. 

The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year. 

Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc. 

Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI. 

China

So far, AI regulation in China has been deeply fragmented and piecemeal. Rather than regulating AI as a whole, the country has released individual pieces of legislation whenever a new AI product becomes prominent. That’s why China has one set of rules for algorithmic recommendation services (TikTok-like apps and search engines), another for deepfakes, and yet another for generative AI. 

The strength of this approach is that it allows Beijing to react quickly to risks emerging from advances in the technology—both for the users and for the government. But the problem is that it prevents a more long-term and panoramic perspective from developing.

That could change next year. In June 2023, China’s state council, the top governing body, announced that “an artificial intelligence law” is on its legislative agenda. This law would cover everything—like the AI Act for Europe. Because of its ambitious scope, it’s hard to say how long the legislative process will take. We might see a first draft in 2024, but it might take longer. In the interim, it won’t be surprising if Chinese internet regulators introduce new rules to deal with popular new AI tools or types of content that emerge next year. 

So far, very little information about it has been released, but one document could help us predict the new law: scholars from the Chinese Academy of Social Sciences, a state-owned research institute, released an “expert suggestion” version of the Chinese AI law in August. This document proposes a “national AI office” to oversee the development of AI in China, demands a yearly independent “social responsibility report” on foundation models, and sets up a “negative list” of AI areas with higher risks, which companies can’t even research without government approval.

Currently, Chinese AI companies are already subject to plenty of regulations. In fact, any foundation model needs to be registered with the government before it can be released to the Chinese public (as of the end of 2023, 22 companies had registered their AI models). 

This means that AI in China is no longer a Wild West environment. But exactly how these regulations will be enforced remains uncertain. In the coming year, generative-AI companies will have to try to figure out the compliance reality, especially around safety reviews and IP infringement. 

At the same time, since foreign AI companies haven’t received any approval to release their products in China (and likely won’t in the future), the resulting domestic commercial environment protects Chinese companies. It may help them gain an edge against Western AI companies, but it may also stifle competition and reinforce China’s control of online speech.

The rest of the world

We’re likely to see more AI regulations introduced in other parts of the world throughout the next year. One region to watch will be Africa. The African Union is likely to release an AI strategy for the continent early in 2024, meant to establish policies that individual countries can replicate to compete in AI and protect African consumers from Western tech companies, says Melody Musoni, a policy officer at the European Centre for Development Policy Management.

Some countries, like Rwanda, Nigeria, and South Africa, have already drafted national AI strategies and are working to develop education programs, computing power, and industry-friendly policies to support AI companies. Global bodies like the UN, OECD, G20, and regional alliances have started to create working groups, advisory boards, principles, standards, and statements about AI. Groups like the OECD may prove useful in creating regulatory consistency across different regions, which could ease the burden of compliance for AI companies. 

Geopolitically, we’re likely to see growing differences between how democratic and authoritarian countries foster—and weaponize—their AI industries. It will be interesting to see to what extent AI companies prioritize global expansion or domestic specialization in 2024. They might have to make some tough decisions.

Meet the economist who wants the field to account for nature

What is the true value of a honeybee? A mountain stream? A mangrove tree? 

Gretchen Daily, cofounder and faculty director of the Stanford Natural Capital Project, has dedicated her career to answering such complex questions. Using emerging scientific data and the project’s innovative open-source software, Daily and her team help governments, international banks, and NGOs to not only quantify the value of nature, but also determine the benefits of conservation and ecosystem restoration.

This marriage of ecological and economic concerns may seem an unusual one to some. But to Daily, it’s a union as natural as the planet’s ecosystems themselves.

Daily completed her doctoral work in ecology at Stanford during the 1990s. It was, she says, a revolutionary time for interdisciplinary approaches to both economic and ecological crises. Spurred by a summit hosted by the Royal Swedish Academy of Sciences, ecologists and economists began coming together for the first time to consider the benefits of a joint approach to developing economic and environmental policy.

“For so much of our history, humanity had operated under the assumption that nature was infinite,” says Daily. “We knew that collapses of civilization were at least in part because of the destruction of the local environment, but nobody thought that could happen at a planetary scale.”

Global climate change and its myriad impacts changed all that. “That crisis forced us all to rethink the assumptions on which economic systems operate,” she says. “It also revealed the frailties in different lines of inquiry that have built up for decades and even centuries.”

In 1997, Daily edited Nature’s Services: Societal Dependence on Natural Ecosystems—one of the first books to introduce the concept of ecosystem services, a field that seeks to quantify the value of resources such as clean water, fertile soil, and species habitats. The release of that book inspired unprecedented interdisciplinary collaboration on issues of ecology and economics.

“I think many of us finally began to see that, fundamentally, environmental problems are economic and social problems,” she says. “We cannot maintain the vitality and security of the biosphere without valuing nature.”

That recognition, Daily says, inspired her to create the Natural Capital Project in 2005. More than anything, she adds, the initiative was born out of the idea that mapping and modeling the value of nature would compel global leaders to see the inherent benefits of conservation as well.

A partnership between Stanford, the Chinese Academy of Sciences, the Nature Conservancy, the University of Minnesota, and the World Wildlife Fund, the Natural Capital Project now works with banks, governments, and nonprofit organizations around the globe.

The organization’s open-source software model, called InVEST, combines data gleaned from thousands of researchers working with techniques such as satellite imaging, soil surveys, climate modeling, and human development mapping to quantify and place a value on natural resources. Recent advances in this data collection, along with machine learning and software modeling, allow the Natural Capital team to evaluate ecosystems at a level of detail and sophistication previously considered impossible.

In a recent project undertaken for the Colombian government, for instance, the Natural Capital Project assisted in establishing a conservation plan for the Caribbean Gulf of Morrosquillo and its hinterlands. The region’s Rio Sinú is an essential source of drinking water for many downstream communities but also originates in an area that depends upon logging, ranching, and agriculture for its financial security. Using InVEST, Daily and her team were able to determine the actual cost of silt deposition in the river, particularly for drinking water and hydropower, and the value of maintaining upstream forests that would prevent that congestion from occurring.
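
The arithmetic behind such a valuation can be made concrete with a toy sketch. The Python below is purely illustrative: the function, parameter names, and numbers are hypothetical assumptions, not code or figures from InVEST or the Rio Sinú analysis; it only shows the general shape of an avoided-cost calculation like the one described above.

```python
# Purely illustrative sketch of an avoided-cost calculation for an
# ecosystem service (sediment retention by an upstream forest). All
# names and numbers are hypothetical assumptions, not InVEST outputs.

def avoided_sediment_cost(
    sediment_retained_tons: float,   # silt kept out of the river by the forest each year
    treatment_cost_per_ton: float,   # extra drinking-water treatment cost per ton of silt
    dredging_cost_per_ton: float,    # hydropower/reservoir dredging cost per ton of silt
) -> float:
    """Annual avoided cost attributable to the forest's sediment retention."""
    return sediment_retained_tons * (treatment_cost_per_ton + dredging_cost_per_ton)

if __name__ == "__main__":
    # Hypothetical inputs: 50,000 tons retained per year, $2/ton treatment, $3/ton dredging.
    value = avoided_sediment_cost(50_000, 2.0, 3.0)
    print(f"Estimated annual avoided cost: ${value:,.0f}")  # -> $250,000
```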

“We were able to show that communities in the region were benefiting from this forest in ways they hadn’t necessarily realized,” says Lisa Mandle, lead scientist and director of science-software integration for the Natural Capital Project. “We can never capture the total value of a forest in terms of cultural and spiritual values or even biodiversity, but we can say that it has measurable economic values across dimensions that have not been considered before.”

And that, says Mandle, has created powerful incentives for the Colombian government to think about how to support the communities within that crucial forest.

A similar approach, also crafted by the Natural Capital Project, helps countries determine their gross ecosystem product, or GEP. Modeled after the gross domestic product, the GEP index allows nations to determine the monetary value of their ecological systems. Daily and her team piloted this index in 2014 on both municipal and national scales in China, and it was adopted by the United Nations Statistical Commission in 2021.

“Just as the Great Depression exposed the urgent need for better macroeconomic performance metrics, our current ‘Great Degradation’ of natural capital is making it imperative that we track ecological performance and use that information to guide investments in revitalization and regeneration,” says Daily, who predicts that the GEP metric will be employed globally within the next decade.

In the meantime, she and her team are dedicated to streamlining their ecological assessments in a way that makes the final analysis and visualization easier for political leaders, investors, and local communities to use. Making that information more accessible, she says, will be crucial for fostering a cultural shift toward recognizing humanity’s dependence upon the biosphere.

In many ways, this idea of codifying the value of nature has been 30 years in the making. And it couldn’t have become reality without Daily’s vision, says Qingfeng Zhang, a senior director at the Asian Development Bank, which now includes a Natural Capital Lab inspired and supported by the Stanford project. This initiative, which was launched in 2020, created a platform for the bank to promote sustainable finance with the help of tools that Daily and her team developed.

“Gretchen’s work in the area of environmental science and its implications for public policy has been monumental,” says Zhang. “Her InVEST model and GEP concept are transforming the way governments, corporations, and civil society look at nature. We now have a tangible economic basis to invest in protecting and growing nature.” 

Kathryn Miles is a journalist and the author of five books including, most recently, Trailed: One Woman’s Quest to Solve the Shenandoah Murders.

How 2023 marked the death of anonymity online in China

If you think about it, there are so many people we meet on the internet daily whose real names we will never know. The TikTok teen who learned the trendy new dance, the anime artist who uploaded a new painting, the random commenter who posted under a YouTube video you just watched. That’s the internet we are familiar with. At the end of the day, nobody knows whether they are really interacting online with a person or, say, a dog. 

But in China, the dogs are losing their cover, as the government gradually makes it more and more difficult to remain anonymous online.

In reality, it’s already impossible to be fully anonymous online in China. Over the years, to implement a stricter regime of online censorship, the country has built a sophisticated system that requires identity verification to use any online service. In many cases, posting politically sensitive content leads to account removal, calls from the police, or even detention.

But that didn’t necessarily mean everyone else knew who you were. In fact, I’ve always felt there were corners of the Chinese internet in which I could remain obscure, where I could present a different face to the world. I used to discuss the latest pop music and cultural phenomena on the forum Baidu Tieba; I started a burner blog to process a bad breakup and write diaries; I still use Xiaohongshu, the latest trendy platform similar to Instagram, to share and learn cat-care tips. I never tell people my real name, occupation, or location on any of those platforms, and I think that’s fine—good, even. 

But lately, even this last bit of anonymity is slipping away.

In April last year, Chinese social media companies started requiring all users to show their location, tagged via their IP address. Then, this past October, platforms started asking accounts with over 500,000 followers to disclose their real names on their profiles. Many people, including me, worry that the real-name rule will reach everyone soon. Meanwhile, popular platforms like the Q&A forum Zhihu disabled features that let anyone post anonymous replies. 

Each one of these changes seemed incremental when first announced, but now, together, they amount to a vibe shift. It was one thing to be aware of the surveillance from the government, but it’s another thing to realize that every stranger on the internet knows about you too. 

Of course, anonymity online can provide a cover for morally and legally unacceptable behaviors, from the spread of hate and conspiracy theories on forums like 4chan to the ransom attacks and data breaches that deliver profits to hackers. Indeed, the most recent changes regarding real names are being pitched by platforms and the government as a way to reduce online bullying and hold influential people accountable. But in practice, all this may very well have the reverse effect and encourage more harassment.

While some Chinese users are trying new (if ultimately temporary) ways to try to stay anonymous, others are leaving platforms altogether—and taking their sometimes boundary-pushing perspectives with them. The result is not just an obstacle for people who want to come together—maybe around a niche interest, maybe to talk politics, or maybe even to find others who share an identity. It’s also a huge blow to the rare grassroots protests that sometimes still happen on Chinese social media. The internet is about to become a lot quieter—and, ironically, much less useful for anyone who comes here to see and really be seen.

Finding comfort and courage in a screen name 

From its beginning, the internet has been a parallel universe where no one has to use their real identity. From bulletin boards, blogs, and MSN to Reddit, YouTube, and Twitter, people have come up with all kinds of aliases and avatars to present the version of themselves that they want that platform to see.

That’s been as true in China as anywhere else. With the protection of anonymity—since replaced by a state of pseudonymity in which one’s identity is known by the platform and government but not by other users—we have felt more comfortable to express ourselves, whether that meant exposing the wrongdoings of a local government official or writing a queer romance novel that might not be accepted by the mainstream. 

An online identity separate from a real identity gives people the opportunity to be something new. It also allows people to develop new sides of themselves without being always reminded of their limitations elsewhere. 

“Why do I use the internet? It’s because I want to go crazy at a place where no one knows me. If the rule of real name is enforced, you might as well consider me dead,” reads one of the comments on the new rule on Weibo, the Chinese equivalent of Twitter. It’s a sentiment felt by many.

This landscape also encouraged people to share their expertise and knowledge without worrying that it could cause trouble for themselves. Zhihu, the Chinese alternative to Quora, used to allow users to answer questions with a completely anonymous account. Reading those answers makes it clear that people are more willing to share when they know their posts won’t be found by their employer or colleagues. Zhihu disabled the anonymous reply function this year, too.

Xinyu Pan, a researcher at the University of Hong Kong, was partly inspired to study the relationship between social media anonymity and moral courage by what she saw on platforms: when someone posted about an experience with domestic violence, comments offering help were often from anonymous accounts using the default avatar and username on the platform. 

“The idea is intuitive … we are more likely to do what’s risky when we feel there’s more protection,” Pan says. Through surveys and experiments with Chinese social media users, Pan found that when users perceive themselves to be anonymous, they are more likely to act courageously.

“I believe that social media anonymity holds particular significance for individuals in collective cultures,” says Pan.

This is in line with what Kyrie Zhou, a security, ethics, and privacy researcher at the University of Illinois Urbana-Champaign, found when he studied the social media feed of Qiaomai, a Chinese writer and feminist activist in her late 30s, who has posted thousands of anonymous submissions on Weibo since 2020. These submissions came in direct messages from her followers, often women who wanted to share their experiences of domestic violence, extramarital affairs, anxiety, sexual harassment, and more. 

With over 1 million followers, Qiaomai served as the voice of the community. “It’s very easy to be harassed on [Chinese] social media if you express ideas of feminism,” Zhou says. He also notes there are other similar accounts that accept and repost anonymous messages for other groups of people who may not be widely accepted—like queer communities, for instance.

“By sharing their experiences anonymously with influencers, empathy, online interpersonal support, and practical advice could be made accessible to the affected women. The comments also allow like-minded women to connect with each other,” Zhou wrote in his research paper.

Pulling back the curtain 

Companies justified the recent push to remove anonymity by citing the need to hold influential social media personalities accountable. In October, Weibo posted: “To regulate the operation and management of ‘self-media’ accounts and to enable more convenient public oversight for the purpose of the collective interests, the platform will shortly begin instructing leading accounts to display their real name on the front end.” Almost all other social media platforms posted similar notices with wording almost identical to Weibo’s—a not-so-subtle indication that this was a clearly coordinated move in response to the government.

This came at the same time that the government started to take a strong stance against the online harassment that has resulted in several high-profile suicides over the past year. In July, the Chinese Cyberspace Administration drafted a regulation against cyberbullying, banning online forums that “post inappropriate content through anonymous submission and online call-outs.”

But the idea that displaying more personal information will curb online harassment is a long shot, as another recent rule change shows us. 

In May 2022, Weibo started requiring every post to show the IP location of the user. It’s not a precise location, but it’s accurate to the Chinese province or, if overseas, the country where the user is. Other platforms followed suit, and the government made the feature an official requirement last summer.

Over the past year and a half, users in China have gotten used to frequently commenting on one another’s locations. Sometimes it’s harmless bickering, like saying a particularly off-color comment brings shame to an entire province. Other times, stereotypes are casually thrown around, like the idea that people from certain provinces are less well-off or less educated. And political comments from an overseas IP address are often automatically deemed the result of foreign influence, and the poster is attacked.

Including this new tag “ended up providing a new target for the bullies: geographical locations,” Yan Feng, a professor of Chinese literature at Fudan University, wrote in October on his Weibo account. 

He’s used his real name on social media for 13 years, and explained that he understood firsthand how this gave his attackers a clear target and more ammunition. “Displaying real names will reduce some cyberbullying, but it will empower and strengthen new forms of cyberbullying, plus it will significantly reduce the freedom of expression,” Yan wrote. “How to choose? I choose the right for everyone to opt for real name or anonymity.”

The focus on cyberbullying is certainly not the government’s real motivation for the change, says Zhou, the University of Illinois researcher.

In 2022, right after the change about IP location was announced, Zhou surveyed over 500 Weibo users to understand the practice of self-censorship online. Nearly half of them reported that having their real location displayed would encourage them to self-censor more. And that may be just what Beijing really intended.

“It’s ironic … because the government’s persecution is so much more powerful than people attacking each other,” Zhou says. “Now the Chinese government is shifting people’s attention to these infights, as if the antisocial behavior of a small group of people is so concerning that it needs to be regulated with the tool of de-anonymization.”

“Teacher Li” was an anonymous account last year during the White Paper protests against China’s strict zero-covid policies. The account tweeted out hundreds of protest videos that were submitted to it anonymously by people in China who were afraid to speak out. 

Behind the account is Li Ying, a Chinese artist turned dissident who was able to remain relatively anonymous during the period. “We’ve been used to anonymity since we were young. We can come up with a cool alias or pretend to be a powerful figure. Anonymity is a part of the internet culture,” he says. “We can’t change what’s our name or how we look, but we can decide who we are online.”

When I talked to him late last year, he asked me not to use his real name to protect his safety and that of his family. But that all changed a few days later when Fang Shimin, another Chinese dissident, publicly posted Li’s real name on Twitter and questioned whether he had a team behind him or, worse, whether he was secretly working on behalf of the Chinese government. Now, both the powerful Chinese government and random people who don’t agree with his political views can weaponize his identity. 

Because of his influence, Li has experienced even more intense consequences from losing his anonymity. He has moved several times and lost his job because his identity was revealed. 

“Real names displayed on the front end can expose you in front of the public,” Li says. “There are a lot of things the public can do, and the blood is no longer on the government’s hands … It’s an infight among the public.”

What will happen now?

The October announcement of the real-name rule freaked out many social media users. Some people decided to delete their accounts immediately, Rest of World reported—like Kindergarten Killer, an anonymous rapper who refuses to let people know what he looks like. Others removed hundreds of thousands of followers to put them below the threshold for now.

The enforcement of the rule has been slow and murky. While some Weibo influencers already have their names displayed, many others with more than 500,000 followers still don’t. For accounts with both large and small followings, the question is when the rule will really come for them. 

“It only impacts some big-name influencers now, but I think everyone will have to display their real name in the future,” says Zhou. 

The IP location feature is an example of how these seemingly small-scale changes can expand and normalize over time. In the beginning, a user’s location was only shown in posts discussing the pandemic or the Russia-Ukraine war, as platforms and the government alleged that these were the areas where some people were posting under foreign influence. But soon, the function spread to all types of content, all users, and all platforms. Today, people seem to feel as if IP location disclosure has always been the norm.

In the face of these changes, some users are taking a more creative approach to online anonymity. I started to notice it earlier this year, with what I thought was a weird trend on Chinese social platforms like Douban and Xiaohongshu: thousands of people using the same pink cartoon dinosaur as their profile picture and “Momo” as their username. 

Momo, it turns out, used to be the default profile picture when people used their WeChat login to access other platforms. These people decided to keep it as their shared identity and blend into the crowd.

At first I thought this would be one of those quirky trends, going viral for a while before dying out just as quickly. But it has lived much longer than I expected. In August 2022, the online group that pioneered the trend had only 485 members. Today, it has over 12,000. Just this month, when I posted online about looking for a ride for a coming trip, one of the Momos replied and offered to help.

Can being a Momo offer complete anonymity? Obviously not. Even if your real name isn’t displayed, you need to use it to register on the platform. And even if you choose the same name and profile photo, the platform still assigns a unique identifier number to each account, which in turn differentiates each Momo from the others.

Maybe that little bit of extra privacy—the pseudonymous state that Momos want, as noted in their “group rule” from last year—is enough to quell their anxieties about being found online. It’s doubtful, though, that they will be able to keep the Momo identity when, as most people expect, the rule requiring influencers to use their real name spreads. 

Chinese users may not even be able to have other people post for them anymore.  

Qiaomai, the feminist writer who shared thousands of follower submissions anonymously, had her Weibo account suspended in July for unknown reasons. Her new account has mostly shared only her own thoughts. (Zhou, who used his Weibo account to send out surveys about self-censorship, also had his account suspended without explanation.) 

Weibo didn’t immediately respond to MIT Technology Review’s request for comments.

With these accounts gone, lively discussions and the collision of ideas have gone with them. And the internet where everyone uses their real name will inevitably be more rigid and intimidating, not to mention easier for the people with power to control.

Li Ying still posts under the name Teacher Li, even though his real name has been exposed. He still accepts user submissions about news in China and shares them with his more than 1.4 million followers on X. 

But to read his posts, people in China have to find a VPN service to go around the Great Firewall. His accounts on Chinese social media have all been suspended, and he receives harassment attached to his real name almost daily on platforms like X. While some of this seems to be from coordinated bot posts, other attacks come from random people who seem to disagree with him politically. 

“It doesn’t mean much to me anymore,” Li tells me. “I’ve been through the White Paper protest [and the attack after it]. Nothing can be worse than that. But for many ordinary people, they would deactivate their account once they were doxxed.” 

I ask him if he’d rather go back to being anonymous on the internet.

“If I had a choice to go all the way back in time,” Li answers, “of course.”

China’s judicial system is becoming even more secretive

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

The new year will be here soon! Typically, it’s a great time for a fresh start. But not always. And today I want to talk about something that’s unfortunately moving in the wrong direction: transparency in China’s judicial system. 

Last week, a leaked document started circulating online from China’s highest court, the Supreme People’s Court, saying that by the end of 2023, courts of all levels should finish uploading their judgments to a new “National Court Judgment Document Database.” This database is to come online in January and will only be accessible to internal staff. The document’s authenticity has since been confirmed by Chinese media.

Building an internal digitized system isn’t inherently bad. But this development caused alarm because academics and other experts believe it is likely to replace a similar resource that was free and open to the public: China Judgements Online. 

First built in 2013, CJO is one of the largest widely accessible databases that can help someone understand governance in China—a unique window into an increasingly opaque system. By allowing the Chinese public to search through millions of detailed court judgments, it has helped hold the powerful accountable, at least to some degree. If it goes away, that will have a big impact both on Chinese people and on those observing from the outside.

Back when CJO was built, China was still in an era of embracing greater transparency and oversight of its own institutions. There was even a follow-up regulation in 2016 that instructed judges to avoid finding excuses not to upload their cases.

Sure, Beijing’s top goal may not have been transparency for transparency’s sake; the “main motivation for putting judicial decisions online was likely a desire for greater centralized control over a sprawling system, and an effort to strengthen the courts through enhanced professionalization among judges,” wrote Luo Jiajun and Thomas Kellogg, two legal academics who have been tracking CJO. (Previously, different local courts in China had their own tracking systems.)

Nonetheless, the result was effectively the same. CJO became an important resource for a variety of people: lawyers, scholars, law students, and human rights activists, among others. Today, more than 143 million verdicts have been uploaded to CJO, and the website has been visited more than 100 billion times. 

Outside of CJO, it’s incredibly difficult to get the Chinese government to disclose information, but the CJO verdicts, intentionally or not, tell us a lot about the judicial system and what’s happening in the country generally. “If CJO is shut down, it will be difficult to have public scrutiny over individual cases,” Luo tells me.

One particularly powerful example was in early 2022, when an influencer on Douyin documented how a Chinese woman with a mental disability had been abducted and forced to marry, and subsequently gave birth to eight children. The news, as well as initial efforts by the local government to cover it up, quickly angered people across the entire country.

Human rights advocates hoped to show this was not just a one-off incident, but a systemic issue ignored by the local government. In fact, Fengxian, the county where the woman lived, has long had an infamous reputation for allowing women to be abducted and sold to men looking to procreate. 

By searching CJO, advocates found at least two previous cases in which abducted women filed for divorce in Fengxian and were denied; they also found that people who were prosecuted in the county for human trafficking received minimal prison time. 

CJO also showed similar cases from outside Fengxian, revealing a pattern across China. One study that analyzed 1,480 trafficking cases published on CJO found that one-third of the cases involved women with mental disabilities, and that women were often sold for less than $10,000. All this information was obtained from the publicly available materials.

At the time this all came to light, many people believed human trafficking like this was a thing of the past in China. Then the records from CJO collectively contributed to one of the largest online social movements in the country in recent years, with people repeatedly bringing up the Fengxian woman’s name for months and pushing the government for an explanation.

CJO has served many other purposes over the years. Activists used it to uncover the prosecution of Uyghurs in Xinjiang and the criminalization of online protests. It has even become a useful source of information on Chinese corporations, with people reviewing verdicts to evaluate whether a company is trustworthy.

But all this started to take a turn around 2021. 

A data visualization by He Haibo, a law professor at Tsinghua University in Beijing, shows that the annual number of disclosed verdicts on CJO reached its peak in 2020, with 23.3 million cases. In 2022, the total number declined 62%, to 8.9 million. He also noted that in 2022, only 854 administrative cases (where the government is the defendant) were uploaded, which was just a tiny slice of the 670,000 administrative lawsuits that went to trial that year. 

Around the same time, CJO also started to lose case files en masse. In just one three-month period in 2021, CJO administrators removed over 11 million cases, citing the need for a system migration. According to one research project led by Benjamin Liebman, a law professor at Columbia Law School, 9% of criminal case verdicts were removed from the database in a 12-month period in 2021 and 2022. Certain criminal offenses have been totally erased on the platform, including “illegally producing or selling equipment used for espionage” and “picking quarrels and provoking trouble”—the latter being a classic cover for prosecuting Chinese protesters. 

What’s happened to CJO in the past three years means it is no longer an example of genuine government transparency in China, at least not the way it used to be. Yet it still offers immense value, with millions of cases remaining online for people to study. And it is better than what looks to be the likely alternative. 

Since last week’s news, several Chinese legal academics have publicly asked for the courts to keep CJO online and continue releasing verdicts to the public. But it’s hard to see that happening. When it’s gone, it will be even harder for people in and outside China to understand what’s going on there.

Have you used China Judgements Online for your work? Let me know your experience with it by writing to zeyi@technologyreview.com. And a quick programming note: With the holiday season in full swing, China Report will take a two-week break. I’m really thankful to all of you for reading in 2023. See you in the new year!

Catch up with China

1. Public universities in Florida are scrambling to figure out how to implement a new state law that bans institutions from hiring Chinese students to work in labs. Students from a handful of other flagged countries are also affected. (Science)

  • Not until last year did the federal government end its controversial China Initiative, which was found to target academics with Chinese heritage. (MIT Technology Review)

2. OpenAI has suspended accounts owned by ByteDance, which secretly used GPT-generated data to train its own competing model in China. (The Verge)

  • Meanwhile, Chinese users of Google’s new AI model Gemini found that if you ask the bot who it is in Chinese, Gemini will say it’s Baidu’s Ernie Bot. It could be the result of Gemini hallucinating, or a signal that it may have used Ernie Bot’s outputs for its training in Chinese. (Here’s one example on X.)
  • If you want to know more about Ernie Bot, I wrote about it over the fall. And earlier this month, my colleagues Will Douglas Heaven and Melissa Heikkilä wrote about the recently launched Gemini. 

3. A national security trial for Jimmy Lai, the pro-democracy media tycoon, began in Hong Kong on Monday. It will be a test for Hong Kong’s judicial independence. (BBC)

4. ByteDance has canceled its next-generation virtual-reality headset as the company pulls back from its metaverse ambitions. (The Information $)

5. Chinese e-commerce platform Temu filed another lawsuit against Shein, accusing the latter of “mafia-style intimidation” aimed at forcing suppliers to cut ties with Temu. (CNBC)

  • This is the latest development in the fast-fashion wars, which I wrote about in July. 

6. Chinese spies ran a far-right Belgian politician as an agent for more than three years in order to create division in the US-European relationship. (Financial Times $)

7. The Japanese auto company Nissan is expanding its research ties with a leading Chinese university to catch up in electric vehicles. (Associated Press)

8. As Taiwan’s presidential election nears, it’s fighting against a flood of disinformation from China. (CNN)

Lost in translation

Do you remember the super-short-lived Quibi? Well, Quibi-style dramas are making a comeback in China, and the industry is making more money than ever. According to the Chinese tech publication Huxiu, series with dozens of two-minute episodes have become extremely popular on Chinese social media this year. 

They are inexpensive to make: it only takes a week or two to finish shooting, and everything from scripts to costumes can be done cheaply. But many entertainment industry insiders have reportedly been surprised that they draw in millions of viewers. The first eight to 12 episodes of a show can usually be watched for free, but once the viewers are hooked, they need to pay to unlock the rest. Even prominent movie studios are now starting to produce these series.

One more thing

Sometimes AI can be used to generate dangerous misinformation; other times, it can create this bizarre music video of world leaders (and Lady Gaga, for some reason) dancing and telling you to stop working.

Eric Schmidt has a 6-point plan for fighting election misinformation

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

And election campaigns are using artificial intelligence in novel ways. Earlier this year in the US, the Republican presidential primary campaign of Florida governor Ron DeSantis posted doctored images of Donald Trump; the Republican National Committee released an AI-created ad depicting a dystopian future in response to Joe Biden’s announcing his reelection campaign; and just last month, Argentina’s presidential candidates each created an abundance of AI-generated content portraying the other party in an unflattering light. This surge in deepfakes heralds a new political playing field. Over the past year, AI was used in at least 16 countries to sow doubt, smear opponents, or influence public debate, according to a report released by Freedom House in October. We’ll need to brace ourselves for more chaos as key votes unfold across the world in 2024. 

The year ahead will also bring a paradigm shift for social media platforms. The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds. 

But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens. That’s a far cry from the global, democratized one-world conversation the idealists dreamed of 15 years ago. With many users left adrift and losing faith in these platforms, it’s clear that maximizing revenue has ironically hurt business interests.

Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy. Here I propose six technical approaches that platforms should double down on to protect their users. Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year. 

1.     Verify human users. We need to distinguish humans using social media from bots, holding both accountable if laws or policies are violated. This doesn’t mean divulging identities. Think of how we feel safe enough to hop into a stranger’s car because we see user reviews and know that Uber has verified the driver’s identity. Similarly, social media companies need to authenticate the human behind each account and introduce reputation-based functionality to encourage accounts to earn trust from the community.

2.     Know every source. Knowing the provenance of the content and the time it entered the network can improve trust and safety. As a first step, using a time stamp and an encrypted (and not removable) IP address would guarantee an identifiable point of origin. Bad actors and their feeds—discoverable through the chain of custody—could be deprioritized or banned instead of being algorithmically amplified. While VPN traffic may deter detection, platforms can step up efforts to improve identification of VPNs. 

3.     Identify deepfakes. In line with President Biden’s sweeping executive order on AI, which requires the Department of Commerce to develop guidance for watermarking AI-generated content, platforms should further develop detection and labeling tools. One way for platforms to start is to scan an existing database of images and tell the user if an image has no history (Google Images, for example, has begun to do this). AI systems can also be trained to detect the signatures of deepfakes, using large sets of truthful images contrasted with images labeled as fake. Such software can tell you when an image has a high likelihood of being a deepfake, similar to the “spam risk” notice you get on your phone when calls come in from certain numbers. (A minimal sketch of this kind of classifier appears after this list.)

4.     Filter advertisers. Companies can share a “safe list” of advertisers across platforms, approving those who comply with applicable advertising laws and conform professionally to the platforms’ advertising standards. Platforms also need to ramp up their scrutiny of political ads, adding prominent disclaimers if synthetic content is used. Meta, for example, announced this month that it would require political ads to disclose whether they used AI.  

5.     Use real humans to help. There will, of course, be mistakes, and some untrustworthy content will slip through the protections. But the case of Wikipedia shows that misinformation can be policed by humans who follow clear and highly detailed content rules. Social media companies, too, should publish quality rules for content and enforce them by further equipping their trust and safety teams, and potentially augmenting those teams by providing tools to volunteers. How humans fend off an avalanche of AI-generated material from chatbots remains to be seen, but the task will be less daunting if trained AI systems are deployed to detect and filter out such content. 

6.     Invest in research. For all these approaches to work at scale, we’ll require long-term engagement, starting now. My philanthropic group is working to help create free, open-source testing frameworks for many AI trust and safety groups. Researchers, the government, and civil society will also need increased access to critical platform data. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from projects approved by the National Science Foundation.
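Point 3 above hinges on a fairly standard machine-learning recipe: train a classifier on large sets of genuine images contrasted with images labeled as fake, then surface its score as a warning. As a rough illustration only, here is a minimal sketch in Python (PyTorch and torchvision) of such a real-versus-fake image classifier; the folder layout, the ResNet-18 backbone, and the hyperparameters are assumptions chosen for brevity, not a description of any platform’s actual detection system.

```python
# Minimal sketch of the detector idea in point 3: fine-tune a pretrained
# image model to distinguish genuine images from labeled fakes.
# Paths, model choice, and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects a (hypothetical) layout like data/train/real/*.jpg and
# data/train/fake/*.jpg; ImageFolder infers the two labels from the folders.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer with a
# two-way real-vs-fake head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few passes over the data, purely for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

def fake_probability(image_tensor: torch.Tensor) -> float:
    """Return the model's estimated probability that a preprocessed image
    is fake, playing the role of the 'spam risk'-style score described above."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0).to(device))
        probs = torch.softmax(logits, dim=1)[0]
    return probs[train_set.class_to_idx["fake"]].item()
```

In practice a score like this would be only one signal among several, paired with the provenance and watermarking checks described in points 2 and 3.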

With a concerted effort from companies, regulators, and Congress, we can adopt these proposals in the coming year, in time to make a difference. My worry is that everyone benefits from favorable mis- or disinformation to varying degrees: our citizens are amused by such content, our political leaders may campaign with it, and the media garners traffic by covering sensationalist examples. The existing incentive structures will make misinformation hard to eliminate.  

Social media platforms need to fundamentally rethink their design for the age of AI, especially as democracies face a historic test worldwide. It’s clear to me the future will be one of many decentralized online spaces that cater to every interest, reflect the views of real humans (not bots), and focus on concrete community concerns. But until that day comes, setting these guardrails in place will help ensure that platforms maintain a healthy standard of discourse and do not let opaque, engagement-driven algorithms allow AI-enabled election content to run rampant.

Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, applying science and technology, and bringing people together across fields.

Meet the 15-year-old deepfake victim pushing Congress into action

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

I want to share a story about an inspirational young woman and her mother, who have stepped into the fray on AI policy issues after a horrific incident. Too often, new technologies disproportionately harm women and girls without drawing enough attention or bringing about change. In reporting out a recent story, I was so impressed with this family’s drive to fix that. 

In October, Francesca Mani was one of reportedly more than 30 girls at Westfield High School in New Jersey who were victims of deepfake pornography. Boys at the school had taken photos of Francesca and her classmates and manipulated them with artificial intelligence to create sexually explicit images of them without their consent. (Westfield High School said in an email that “matters involving students are confidential” but claims that “far fewer” than 30 students were affected.)

The practice is actually stunningly commonplace, but we rarely hear such stories—at least in part because many victims of sexual harassment very understandably don’t want to talk publicly about incidents that are so private. But within just a day of learning about the violation, which she calls “shocking,” 15-year-old Francesca started speaking out and calling on lawmakers to do something about the broader problem. Her efforts are already starting to pay off with new momentum behind proposals for state and federal legislation, which I wrote about in a story published this morning. That includes a bill cosponsored by New Jersey state senators Jon Bramnick and Kristin Corrado that would establish civil and criminal penalties for the nonconsensual creation and sharing of deepfake porn.

Francesca and her mother, Dorota, say that their activism aims particularly to support women and girls who might not have their own ability to make change. I spoke with the Manis earlier this week to more deeply understand what this experience has been like, and I want to share parts of my conversation with them, which has been edited for length and clarity. 

Could you walk me through what happened to you and how you found out?

Francesca: Every single girl was worried on October 20. Rumors were going around, people were worried, and all the girls thought they were going to be one of the AI victims. And at the end of the day it was confirmed by the administration that I was one of many AI deepfake victims. Of course, I was shocked, because me and the other girls were betrayed by our classmates. We didn’t know our own classmates would do that to us. 

It’s really brave what you’re doing by speaking out and talking to all these people about your experience. What made you want to take action?

Francesca: Until I was one of the victims, I didn’t really know how complex and scary AI technology is. So it’s made me understand the importance of self-education in regards to the technology, because AI is here to stay, and we need to learn how to live with it without hurting ourselves and others. So this is actually why I have created a website called AI Help, which will help educate and protect us from AI. And this tool will provide resources that will help AI victims self-advocate. I also want to make sure that we have state and federal laws to protect us—children and women—from deepfakes, and that’s already being put into action.

What are some of the key things that you think other girls and women should know about the risks when it comes to AI?

Francesca: It’s important to know that it can happen to anyone, by anyone; like, it could be your classmates. That’s what happened to me. People should realize when they start posting stuff on Instagram or any type of social media that it can happen to you. Protect your image, make your account private, and have only certain followers on it, like people you know, instead of having a public account.

Can you tell me about the conversations you’ve been having with lawmakers about this issue? 

Francesca: I spoke to Senator Bramnick—he’s actually from Westfield—to help me advocate for our state and to [make] new AI laws. We actually spoke in person and he promised that he will do all he can to protect our state from deepfakes. He also immediately cosponsored Senator Corrado’s bill. And if everything goes well with the joint effort, we’ll be able to protect New Jersey with an AI bill by January 2024. And this makes me incredibly happy, to know that my own town senator cares enough to fight for the important cause. 

Congressman [Joe] Morelle [of New York] also invited us to Washington, DC, to meet with other congressmen [from both parties]. 

Dorota: We are hoping after our visit in Washington we’re gonna be able to have more and more support and make sure that we start with something. Then we can always better it.

What has this taught you about politics or the way the American government works? 

Francesca: I was so happy to know that—well, when I reached out I was 14, I just turned 15—but I’m, like, a 14-year-old, and [the lawmakers] listened to me and they helped me. They were willing to protect me and other girls. Something I learned from this is to speak up and not to be afraid. 

I know you filed a police report about this, but what is the legal recourse you are hoping for? 

Francesca: I would really like for whoever’s doing this to be suspended or expelled, because I think it’s important for everyone to feel comfortable if it’s someone from your school. And I would also like an apology. I would forgive that person, but, you know, I would never forget. 

Dorota, you mentioned you haven’t been pleased with the school’s response. Is there anything you would like schools to do differently to either respond to situations like this or prevent them from happening in the first place? 

Dorota: I think education is so important on this matter—educating our children, educating ourselves—and then taking responsibility. I think we should use this [opportunity] to educate our girls that they’re worth it; even though they have been victimized, it doesn’t mean they should be ashamed and they should just accept things as they are and hope for them to pass. We have a wonderful school district. Our teachers are fantastic. Francesca wouldn’t be able to go through this whole situation without the support of her teachers. I cannot say good enough things. 

But the administration simply is hoping for things to die down. I have no report. There’s no consequences, there’s no accountability, there’s no apology. I’m an educator myself. I own a private school in Jersey City. And I think as a mother and as a woman, I’m advocating for something different. I’m supporting my daughter, but as an educator, I’m advocating to create a safe place for our children, because it could happen to anybody. It does not necessarily have to be a woman, and I think we should send a clear message that this is not acceptable. 

[In a statement to MIT Technology Review, Westfield superintendent Raymond González said, “The Westfield Public School District has safeguards in place to prevent this from happening on our network and school-issued devices. We continue to strengthen our efforts by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly in our schools and beyond.” The school also said it conducted an immediate investigation and is working with police.]

Do you think there’s a role for education that also teaches kids what’s appropriate and inappropriate, as well as how to protect themselves from the harms? 

Dorota: Oh, absolutely. I think on many occasions [digital education] will be given to a, you know, physical education teacher, and they will be teaching the dangers of AI. Let’s be honest with each other: this is such a complex and sophisticated technology, and it’s ever changing. We should make sure that a specialist is teaching that class, and it shouldn’t be just at the beginning of the school year. It should be at least twice or three times, and it should be delivered in a meaningful way. 

It really affects people’s lives. And Francesca has such a strong personality. She always has been a fighter. And I applaud her for speaking for herself. At the beginning, when she told me, Mom, I wanna fight, I said, Francesca, I want you to know that it can go either way. You’re going to hear people that are happy and you’re going to hear people that are really against you, and you have to be prepared for it. And she said, I am not a child. I can take people’s opinions and I want to speak up. But not everybody’s gonna have the same character as Francesca. Not everybody’s going to have the same support at home as Francesca. And there are going to be girls or boys who will not see the light at the end of the tunnel and will go for suicide or self-harm. And I don’t think we should wait for that. Education in a meaningful way is the most important thing.

Francesca: I also want to urge all school districts to update their policies on cyber-harassment to add a definition of AI and to add defined consequences if deepfakes are being created by a student. Laws can take time to be passed, but school policies can and should be updated immediately.

It feels like you really have created a lot of momentum. Have you had any negative reactions? 

Francesca: No, not really. It’s so cool to know that I have such a great community and support from my friends and teachers, and I just want to thank them. I’m so proud to be an American. I live in a country where the voice of a now-15-year-old girl can make positive change. 

What I am reading this week

  • Instagram influencers in India are being paid by political campaigns to sway local elections. It’s part of a growing trend in which smaller and nonpolitical social media personalities are tapped for campaign messaging. 
  • The EU AI Act is entering its final stage of negotiations, and some people are worried that the discussions are not going great. The EU Parliament, Commission, and Council have a preliminary deadline of December 6 to finalize the deal, and apparently tech lobbyists are making discussions sticky.
  • The US government will no longer notify Meta when foreign disinformation campaigns are operating on its platforms. This reversal of a years-long policy is a result of a conservative legal campaign. And in the run-up to the US elections, it’s not good news for the health of the internet. 

What I learned this week

Using AI to generate an image uses a ton of energy, according to a new research study from Hugging Face and Carnegie Mellon University. As my colleague Melissa Heikkilä reported, “Their work, which is yet to be peer reviewed, shows that while training massive AI models is incredibly energy intensive, it’s only one part of the puzzle. Most of their carbon footprint comes from their actual use.” The research marks the first time the carbon emissions associated with using AI for different purposes, like image or text generation, have been calculated. 

A high school’s deepfake porn scandal is pushing US lawmakers into action

On October 20, Francesca Mani was called to the counselor’s office at her New Jersey high school. A 14-year-old sophomore and a competitive fencer, Francesca wasn’t one for getting in trouble. That day, a rumor had been circulating the halls: over the summer, boys in the school had used artificial intelligence to create sexually explicit and even pornographic photos of some of their classmates. She learned that she was one of more than 30 girls who may have been victimized. (In an email, the school claimed “far fewer” than 30 students were affected.)

Francesca didn’t see the photo of herself that day. And she still doesn’t intend to. Instead, she’s put all her energy into ensuring that no one else is targeted this way. 

Within 24 hours of learning about the photos, Francesca was writing letters to four area lawmakers, sharing her story and asking them to take action. Three of them quickly responded: US Representative Joe Morelle of New York, US Representative Tom Kean Jr. of New Jersey, and New Jersey state senator Jon Bramnick. In the past few weeks, her advocacy has already fueled new legislative momentum to regulate nonconsensual deepfake pornography in the US.

“I just realized that day [that] I need to speak out, because I really think this isn’t okay,” Francesca told me in a phone call this week. “This is such a new technology that people don’t really know about and don’t really know how to protect themselves against.” Over the past few weeks, in addition to celebrating her 15th birthday, Francesca has also launched a new website that offers resources to other victims of deepfake pornography. 

Studies from 2019 and 2021 show that deepfakes—which are images convincingly manipulated by artificial intelligence, often by swapping in faces or voices from different pieces of media—are primarily used for pornography, overwhelmingly without the consent of those who appear in the images. Beyond consent, deepfakes have sparked serious concerns about people’s privacy online.

As AI tools have continued to proliferate and become more popular over the last year, so has deepfake pornography and sexual harassment in the form of AI-generated imagery. In September, for instance, an estimated 20 young girls in Spain were sent naked images of themselves after AI was used to strip their clothes in photos. And in December, one of my colleagues, reporter Melissa Heikkilä, showed how the viral generative-AI app Lensa created sexualized renderings of her without her consent—a stark contrast to the images it produced of our male colleagues.

Efforts from members of Congress to clamp down on deepfake pornography are not entirely new. In 2019 and 2021, Representative Yvette Clarke introduced the DEEPFAKES Accountability Act, which would require creators of deepfakes to watermark their content. And in December 2022, Representative Morelle, who is now working closely with Francesca, introduced the Preventing Deepfakes of Intimate Images Act. His bill focuses on criminalizing the creation and distribution of pornographic deepfakes without the consent of the person whose image is used. Both efforts, which lacked bipartisan support, stalled. 

But recently, the issue has reached a “tipping point,” says Hany Farid, a professor at the University of California, Berkeley, because AI has grown much more sophisticated, making the potential for harm much more serious. “The threat vector has changed dramatically,” says Farid. Creating a convincing deepfake five years ago required hundreds of images, he says, which meant those at greatest risk for being targeted were celebrities and famous people with lots of publicly accessible photos. But now, deepfakes can be created with just one image.

Farid says, “We’ve just given high school boys the mother of all nuclear weapons for them, which is to be able to create porn with [a single image] of whoever they want. And of course, they’re doing it.”

Clarke and Morelle, both Democrats from New York, have reintroduced their bills this year. Morelle’s now has 18 cosponsors from both parties, four of whom joined after the incident involving Francesca came to light—which indicates there could be real legislative momentum to get the bill passed. Then just this week, Representative Kean, one of the cosponsors of Morelle’s bill, released a related proposal intended to push forward AI-labeling efforts—in part in response to Francesca’s appeals. 

AI regulation in the US is tricky business, even though interest in taking action has reached new heights (and some states are moving forward with their own legislative attempts). Proposals to regulate deepfakes often include measures to label and detect AI-generated content and moderate child sexual abuse material on platforms. This raises thorny policy issues and First Amendment concerns.  

Morelle, though, thinks his bill has found an “elegant” solution that skirts some of those issues by focusing specifically on creators and distributors—developing an avenue for civil and criminal charges, and designating the creation and sharing of nonconsensual pornographic deepfakes as a federal crime. The bill “really puts the liability and the exposure on the person who will post something without the consent of the person who’s in the image and/or video,” says Morelle. The bill is under consideration in the House Judiciary Committee, and Morelle’s office plans to push hard for passage in January. If it moves through committee, it will then go to a vote on the House floor. 

Farid says that Morelle’s bill is a good first step toward awareness and accountability, but in the long run, the problem will need to be tackled upstream with the websites, services, credit card companies, and internet service providers that are “profiting” from nonconsensual deepfake porn. 

But in the meantime, the dearth of regulation and legal precedent on deepfake pornography means that victims like Francesca have little to no recourse. Police in New Jersey told Bramnick that the incident would likely amount to nothing more than a “cyber-type harassment claim,” rather than a more serious crime like child pornography. After Bramnick got in touch with Francesca, he joined on as a cosponsor of a bill in New Jersey that would institute civil and criminal penalties for nonconsensual deepfake pornography at the state level.

The sense of powerlessness is precisely what Francesca is hoping to change. She and her mom, Dorota Mani, are planning to head to Washington, DC, in the next few weeks to speak with members of Congress to bring more attention to the issue and urge them to pass Morelle’s bill. 

“We should put laws in place, like, immediately—so when that happens to you, you have a law to protect you,” Francesca told me. “I didn’t really have a law to protect me.”

Update: This story has been updated to clarify how many students the school claims were affected by the incident.

Meta is giving researchers more access to Facebook and Instagram data

Meta is releasing a new transparency product called the Meta Content Library and API, according to an announcement from the company today. The new tools will allow select researchers to access publicly available data on Facebook and Instagram in an effort to give a more overarching view of what’s happening on the platforms. 

The move comes as social media companies are facing public and regulatory pressure to increase transparency about how their products—specifically recommendation algorithms—work and what impact they have. Academic researchers have long been calling for better access to data from social media platforms, including Meta. This new library is a step toward increased visibility about what is happening on its platforms and the effect that Meta’s products have on online conversations, politics, and society at large. 

In an interview, Meta’s president of global affairs, Nick Clegg, said the tools “are really quite important” in that they provide, in a lot of ways, “the most comprehensive access to publicly available content across Facebook and Instagram of anything that we’ve built to date.” The Content Library will also help the company meet new regulatory requirements and obligations on data sharing and transparency, as the company notes in a blog post Tuesday.

The library and associated API were first released as a beta version several months ago and allow researchers to access near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated numbers of reactions, shares, comments, and post view counts. While all this data is publicly available—as in, anyone can see public posts, reactions, and comments on Facebook—the new library makes it easier for researchers to search and analyze this content at scale.  

Meta says that to protect user privacy, this data will be accessible only through a virtual “clean room” and not downloadable. And access will be limited to approved researchers, who will be required to apply via an independent third-party organization. 

In addition to the new library and API, Meta announced new partnerships to expand on research from 2022 on the connections between social networks and economic mobility.

The announcements come just days after The Information reported that the company was disbanding its Responsible AI team and distributing researchers throughout other parts of the organization, sparking skepticism about its commitment to user safety. Clegg had no comment on the restructuring of the AI team.

Hopes for “meaningful” research

Researchers have had a fraught relationship with social media companies in the past, particularly when it comes to accessing data that platforms might not want public. (In 2021, for instance, Facebook sent a cease-and-desist letter to researchers at New York University’s Transparency Project, which was investigating political ad targeting on the platform through web scraping, a practice the company said violated user privacy.)

Clegg said he wants the product to enable research that, first and foremost, is “meaningful,” and he highlighted the current lack of consensus among researchers about the exact impacts of social media—research that has undoubtedly been made more difficult by the lack of public data from social media companies. 

The new library is primarily a database that can be accessed either through a web interface similar to a search engine or through an API where researchers can code their own queries to return large amounts of data. Researchers could, for example, ask to see all public posts in English about generative AI on February 14, 2023, sorted by most viewed to least viewed. 
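For a sense of what such a coded query might look like, here is a hypothetical sketch in Python. The endpoint URL, parameter names, and authentication details below are invented placeholders for illustration; the actual Content Library API has its own documented interface, which approved researchers receive through the application process described below.

```python
# Hypothetical sketch of a programmatic query like the one described above:
# English-language public posts about generative AI on February 14, 2023,
# sorted from most to least viewed. The URL, parameters, and auth scheme
# are placeholders, NOT Meta's documented Content Library API.
import requests

ACCESS_TOKEN = "RESEARCHER_TOKEN"  # placeholder credential

query = {
    "platform": "facebook",
    "content_type": "post",
    "language": "en",
    "search_terms": "generative AI",
    "date_start": "2023-02-14",
    "date_end": "2023-02-14",
    "sort_by": "view_count",
    "sort_order": "desc",
    "limit": 100,
}

response = requests.get(
    "https://example.invalid/content-library/search",  # placeholder endpoint
    params=query,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for post in response.json().get("results", []):
    print(post.get("id"), post.get("view_count"))
```

The point of the sketch is the shape of the request: filter by platform, language, topic, and date range, then sort by view count, returning structured results a researcher can analyze at scale.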

Recent moves by regulators, particularly in the European Union, may have pushed Meta’s hand with mandates for greater transparency. The EU’s Digital Services Act (DSA), which went into effect in August, requires that big platforms the size of Meta provide access to real-time data for researchers investigating “the detection, identification, and understanding of systemic risks in the Union.” Other regulatory efforts in Australia, Brazil, the US, and elsewhere have attempted to mimic these requirements. In what’s known as the Brussels effect, tech companies often comply with the strictest standards, usually set by the EU, in every country where they operate, to avoid fragmentation in their products. 

Policy efforts have struggled to balance demands for greater transparency with concerns about privacy protections. Clegg said that Meta has attempted to strike such a balance, in part through the application process. 

Researchers looking to access the Content Library and API have to submit information about their institution and research questions to the Inter-university Consortium for Political and Social Research, an independent organization at the University of Michigan. Meta says the screening is primarily intended to provide a security check about the groups using the data and their financial interests, not to scrutinize the research questions. 

The application process, though, has already raised some eyebrows. Smitha Milli, a postdoctoral researcher at Cornell Tech who studies the impact of social media, says, “My main question is, Why isn’t this accessible to everyone?”—especially since the library only contains publicly available data. Milli adds that it’s important to consider the amount of time the application process will add to the research cycle, saying it could be “super limiting.” 

(Meta said access to the Content Library was limited to protect user privacy: “There’s a big difference between data being publicly available on the platform versus being able to access it programmatically in a way where you can get access to a large volume of that data,” said Kiran Jagadeesh, a Meta product manager.)

Milli notes that researchers really want access to information about how recommendation algorithms work and what people are seeing on their individual feeds, as well as ways to run experiments on the platforms. It’s not clear how the latest product will make progress on those fronts, though Clegg said researchers can pair the Content Library with other projects, like recommendation system cards, which combined will give “a much, much richer picture than was ever possible.”  

Lena Frischlich, a professor at the Digital Democracy Centre at the University of Southern Denmark, tested the beta version of the Content Library and said her team found the access to multimedia content like reels on Instagram and events on Facebook particularly useful, as well as the new data it provides about view counts. 

Frischlich also says that while the new product is “an important next step toward more transparency,” it is just a step. “Data access is still somehow restricted,” since not every country is included in the database and only researchers at qualifying academic or non-profit research institutions are granted access.

Clegg said he hopes that the new tool ultimately leads to better research about the role of social media in society, for multiple reasons. “I think there’s a sort of societal sense of responsibility here,” he said, “but also a self-interest in seeking to dispel some of the hyperbole that surrounds social media and to have the debate more grounded in fact.”

This story has been updated to clarify that non-profit research institutions and academic institutions may be granted access to the Content Library and API.