A high school’s deepfake porn scandal is pushing US lawmakers into action

On October 20, Francesca Mani was called to the counselor’s office at her New Jersey high school. A 14-year-old sophomore and a competitive fencer, Francesca wasn’t one for getting in trouble. That day, a rumor had been circulating the halls: over the summer, boys in the school had used artificial intelligence to create sexually explicit and even pornographic photos of some of their classmates. She learned that she was one of more than 30 girls who may have been victimized. (In an email, the school claimed “far fewer” than 30 students were affected.)

Francesca didn’t see the photo of herself that day. And she still doesn’t intend to. Instead, she’s put all her energy into ensuring that no one else is targeted this way. 

Within 24 hours of learning about the photos, Francesca was writing letters to four area lawmakers, sharing her story and asking them to take action. Three of them quickly responded: US Representative Joe Morelle of New York, US Representative Tom Kean Jr. of New Jersey, and New Jersey state senator Jon Bramnick. In the past few weeks, her advocacy has already fueled new legislative momentum to regulate nonconsensual deepfake pornography in the US.

“I just realized that day [that] I need to speak out, because I really think this isn’t okay,” Francesca told me in a phone call this week. “This is such a new technology that people don’t really know about and don’t really know how to protect themselves against.” Over the past few weeks, in addition to celebrating her 15th birthday, Francesca has also launched a new website that offers resources to other victims of deepfake pornography. 

Studies from 2019 and 2021 show that deepfakes—images convincingly manipulated by artificial intelligence, often by swapping in faces or voices from different pieces of media—are primarily used for pornography, overwhelmingly without the consent of those who appear in the images. Beyond consent, deepfakes have sparked serious concerns about people’s privacy online.

As AI tools have continued to proliferate and grow more popular over the last year, so have deepfake pornography and sexual harassment in the form of AI-generated imagery. In September, for instance, an estimated 20 young girls in Spain were sent naked images of themselves after AI was used to strip their clothes from photos. And in December, one of my colleagues, reporter Melissa Heikkilä, showed how the viral generative-AI app Lensa created sexualized renderings of her without her consent—a stark contrast to the images it produced of our male colleagues.

Efforts from members of Congress to clamp down on deepfake pornography are not entirely new. In 2019 and 2021, Representative Yvette Clarke introduced the DEEPFAKES Accountability Act, which would require creators of deepfakes to watermark their content. And in December 2022, Representative Morelle, who is now working closely with Francesca, introduced the Preventing Deepfakes of Intimate Images Act. His bill focuses on criminalizing the creation and distribution of pornographic deepfakes without the consent of the person whose image is used. Neither effort attracted bipartisan support, and both stalled.

But recently, the issue has reached a “tipping point,” says Hany Farid, a professor at the University of California, Berkeley, because AI has grown much more sophisticated, making the potential for harm much more serious. “The threat vector has changed dramatically,” says Farid. Creating a convincing deepfake five years ago required hundreds of images, he says, which meant those at greatest risk for being targeted were celebrities and famous people with lots of publicly accessible photos. But now, deepfakes can be created with just one image.

Farid says, “We’ve just given high school boys the mother of all nuclear weapons for them, which is to be able to create porn with [a single image] of whoever they want. And of course, they’re doing it.”

Clarke and Morelle, both Democrats from New York, have reintroduced their bills this year. Morelle’s now has 18 cosponsors from both parties, four of whom joined after the incident involving Francesca came to light—which indicates there could be real legislative momentum to get the bill passed. Then just this week, Representative Kean, one of the cosponsors of Morelle’s bill, released a related proposal intended to push forward AI-labeling efforts—in part in response to Francesca’s appeals. 

AI regulation in the US is tricky business, even though interest in taking action has reached new heights (and some states are moving forward with their own legislative attempts). Proposals to regulate deepfakes often include measures to label and detect AI-generated content and moderate child sexual abuse material on platforms. This raises thorny policy issues and First Amendment concerns.  

Morelle, though, thinks his bill has found an “elegant” solution that skirts some of those issues by focusing specifically on creators and distributors—developing an avenue for civil and criminal charges, and designating the creation and sharing of nonconsensual pornographic deepfakes as a federal crime. The bill “really puts the liability and the exposure on the person who will post something without the consent of the person who’s in the image and or video,” says Morelle. The bill is under consideration in the House Judiciary Committee, and Morelle’s office plans to push hard for passage in January. If it moves through committee, it will then go to a vote on the House floor. 

Farid says that Morelle’s bill is a good first step toward awareness and accountability, but in the long run, the problem will need to be tackled upstream with the websites, services, credit card companies, and internet service providers that are “profiting” from nonconsensual deepfake porn. 

But in the meantime, the dearth of regulation and legal precedent on deepfake pornography means that victims like Francesca have little to no recourse. Police in New Jersey told Bramnick that the incident would likely amount to nothing more than a “cyber-type harassment claim,” rather than a more serious crime like child pornography. After Bramnick got in touch with Francesca, he signed on as a cosponsor of a New Jersey bill that would institute civil and criminal penalties for nonconsensual deepfake pornography at the state level.

The sense of powerlessness is precisely what Francesca is hoping to change. She and her mom, Dorota Mani, are planning to head to Washington, DC, in the next few weeks to speak with members of Congress to bring more attention to the issue and urge them to pass Morelle’s bill. 

“We should put laws in place, like, immediately—so when that happens to you, you have a law to protect you,” Francesca told me. “I didn’t really have a law to protect me.”

Update: This story has been updated to clarify how many students the school claims were affected by the incident.

Meta is giving researchers more access to Facebook and Instagram data

Meta is releasing a new transparency product called the Meta Content Library and API, according to an announcement from the company today. The new tools will allow select researchers to access publicly available data on Facebook and Instagram in an effort to give a more overarching view of what’s happening on the platforms. 

The move comes as social media companies face public and regulatory pressure to increase transparency about how their products—specifically recommendation algorithms—work and what impact they have. Academic researchers have long been calling for better access to data from social media platforms, including Meta. The new library is a step toward greater visibility into what is happening on Meta’s platforms and the effect that its products have on online conversations, politics, and society at large. 

In an interview, Meta’s president of global affairs, Nick Clegg, said the tools “are really quite important” in that they provide, in a lot of ways, “the most comprehensive access to publicly available content across Facebook and Instagram of anything that we’ve built to date.” The Content Library will also help the company meet new regulatory requirements and obligations on data sharing and transparency, as it noted in a blog post on Tuesday.

The library and associated API were first released as a beta version several months ago and allow researchers to access near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated reactions, shares, comments, and post view counts. While all this data is publicly available—as in, anyone can see public posts, reactions, and comments on Facebook—the new library makes it easier for researchers to search and analyze this content at scale.  

Meta says that to protect user privacy, this data will be accessible only through a virtual “clean room” and not downloadable. And access will be limited to approved researchers, who will be required to apply via an independent third-party organization. 

In addition to the new library and API, Meta announced new partnerships to expand on research from 2022 on the connections between social networks and economic mobility.

The announcements come just days after The Information reported that the company was disbanding its Responsible AI team and distributing researchers throughout other parts of the organization, sparking skepticism about its commitment to user safety. Clegg had no comment on the restructuring of the AI team.

Hopes for “meaningful” research

Researchers have had a fraught relationship with social media companies in the past, particularly when it comes to accessing data that platforms might not want public. (In 2021, for instance, Facebook sent a cease-and-desist letter to researchers at New York University’s Transparency Project, which was investigating political ad targeting on the platform through web scraping, a practice the company said violated user privacy.)

Clegg said he wants the product to enable research that, first and foremost, is “meaningful,” and he highlighted the current lack of consensus among researchers about the exact impacts of social media—research that has undoubtedly been made more difficult by the lack of public data from social media companies. 

The new library is primarily a database that can be accessed either through a web interface similar to a search engine or through an API that lets researchers write their own queries to return large amounts of data. Researchers could, for example, ask to see all public posts in English about generative AI on February 14, 2023, sorted from most viewed to least viewed. 

Recent moves by regulators, particularly in the European Union, may have pushed Meta’s hand with mandates for greater transparency. The EU’s Digital Services Act (DSA), which went into effect in August, requires that big platforms the size of Meta provide access to real-time data for researchers investigating “the detection, identification, and understanding of systemic risks in the Union.” Other regulatory efforts in Australia, Brazil, the US, and elsewhere have attempted to mimic these requirements. In what’s known as the Brussels effect, tech companies often comply with the strictest standards, usually set by the EU, in every country where they operate, to avoid fragmenting their products. 

Policy efforts have struggled to balance demands for greater transparency with concerns about privacy protections. Clegg said that Meta has attempted to strike such a balance, in part through the application process. 

Researchers looking to access the Content Library and API have to submit information about their institution and research questions to the Inter-university Consortium for Political and Social Research, an independent organization at the University of Michigan. Meta says the screening is primarily intended to vet the groups using the data and their financial interests, not to scrutinize the research questions. 

The application process, though, has already raised some eyebrows. Smitha Milli, a postdoctoral researcher at Cornell Tech who studies the impact of social media, says, “My main question is, Why isn’t this accessible to everyone?”—especially since the library only contains publicly available data. Milli adds that it’s important to consider the amount of time the application process will add to the research cycle, saying it could be “super limiting.” 

(Meta said access to the Content Library was limited to protect user privacy: “There’s a big difference between data being publicly available on the platform versus being able to access it programmatically in a way where you can get access to a large volume of that data,” said Kiran Jagadeesh, a Meta product manager.)

Milli notes that researchers really want access to information about how recommendation algorithms work and what people are seeing on their individual feeds, as well as ways to run experiments on the platforms. It’s not clear how the latest product will make progress on those fronts, though Clegg said researchers can pair the Content Library with other projects, like recommendation system cards, which combined will give “a much, much richer picture than was ever possible.”  

Lena Frischlich, a professor at the Digital Democracy Centre at the University of Southern Denmark, tested the beta version of the Content Library and said her team found the access to multimedia content like reels on Instagram and events on Facebook particularly useful, as well as the new data it provides about view counts. 

Frischlich also says that while the new product is “an important next step toward more transparency,” it is just a step. “Data access is still somehow restricted,” since not every country is included in the database and only researchers at qualifying academic or non-profit research institutions are granted access.

Clegg said he hopes that the new tool ultimately leads to better research about the role of social media in society, for multiple reasons. “I think there’s a sort of societal sense of responsibility here,” he said, “but also a self-interest in seeking to dispel some of the hyperbole that surrounds social media and to have the debate more grounded in fact.”

This story has been updated to clarify that non-profit research institutions and academic institutions may be granted access to the Content Library and API.

A controversial US surveillance program is up for renewal. Critics are speaking out.

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to clue you in on: the renewal of a controversial American surveillance program.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant. 

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad coalition that crosses party lines is calling for reform of the program, out of concern about the vast surveillance it enables. Here is what you need to know.

What do the critics of Section 702 say?

Of particular concern is that while the program is intended to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans, including Black Lives Matter activists and a member of Congress.  

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Wikimedia, the foundation that runs Wikipedia, also opposes the program in its current form, saying it leaves international open-source projects vulnerable to surveillance. “Wikimedia projects are edited and governed by nearly 300,000 volunteers around the world who share free knowledge and serve billions of readers globally. Under Section 702, every interaction on these projects is currently subject to surveillance by the NSA,” says a spokesperson for the Wikimedia Foundation. “Research shows that online surveillance has a ‘chilling effect’ on Wikipedia users, who will engage in self-censorship to avoid the threat of governmental reprisals for accurately documenting or accessing certain kinds of information.”

And what about the proponents?

The main supporters of the program’s reauthorization are the intelligence agencies themselves, which say it enables them to gather critical information about foreign adversaries and online criminal activities like ransomware and cyberattacks. 

In defense of the provision, FBI director Christopher Wray has also pointed to procedural changes at the bureau in recent years that have reduced the number of searches involving Americans from 3.4 million in 2021 to 200,000 in 2022. 

The Biden administration has also broadly pushed for the reauthorization of Section 702 without reform.  

“Section 702 is a necessary instrument within the intelligence community, leveraging the United States’ global telecommunication footprint through legal and court-approved means,” says Sabine Neschke, a senior policy analyst at the Bipartisan Policy Center. “Ultimately, Congress must strike a balance between ensuring national security and safeguarding individual rights.”

What would reform look like?

The proposal to reform the program, called the Government Surveillance Reform Act, was announced last week and focuses on narrowing the government’s authority to collect information on US citizens.

It would require warrants to collect Americans’ location data and web browsing or search records under the program, as well as documentation that queries were “reasonably likely to retrieve foreign intelligence information.” In a hearing before the House Committee on Homeland Security on Wednesday, Wray said that a warrant requirement would be a “significant blow” to the program, calling it a “de facto ban.”

Senator Ron Wyden, who cosponsored the reform bill and sits on the Senate Select Committee on Intelligence, has said he won’t vote to renew the program unless some of its powers are curbed. “Congress must have a real debate about reforming warrantless government surveillance of Americans,” Wyden said in a statement to MIT Technology Review. “Therefore, the administration and congressional leaders should listen to the overwhelming bipartisan coalition that supports adopting common-sense protections for Americans’ privacy and extending key national security authorities at the same time.”

The reform bill does not, as some civil society groups had hoped, limit the government’s powers for surveillance of people outside of the US. 

While it’s not yet clear whether these reforms will pass, intelligence agencies have never faced such a broad, bipartisan coalition of opponents. As for what happens next, we’ll have to wait and see. 

What else I’m reading

  • Here’s a great story from the New Yorker about how facial recognition searches can lead police to ignore other pieces of an investigation. 
  • I loved this excerpt of Broken Code, a new book from reporter Jeff Horwitz, who broke the Facebook Files revealed by whistleblower Frances Haugen. It’s a nice insidery look at the company’s AI strategy. 
  • Meta says that age verification requirements, such as those being proposed by child online safety bills, should be up to app stores like Apple’s and Google’s. It’s an interesting stance that the company says would shift the burden of complying with new regulations away from individual websites. 

What I learned this week

Some researchers and technologists have been calling for new and more precise language around artificial intelligence. This week, Google DeepMind released a paper outlining different levels of artificial general intelligence, often referred to as AGI, as my colleague Will Douglas Heaven reports.

“The team outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals),” Will writes. “They note that no level beyond emerging AGI has been achieved.” We’ll certainly be hearing more about what words we should use when referring to AI in the future.

Tackling our biggest problems

For all of history we’ve turned to technology, again and again, to help us solve our hardest problems. Technology gave us warmth and light when it was cold and dark. It helped us pull fish from the sea and crops from the earth so we would not be hungry. It enabled us to cross over the oceans and fly through the skies, shrinking vast distances down to routine travel. It’s given us vaccines and treatments and cures. It has made virtually every fact and all of human knowledge available to us instantly on demand. We can speak to each other in entirely different languages and be understood using nothing more than a slim slab of glass and metal in our pocket. 

Sometimes technology can seem like a miracle. Of course, it is nothing more than human achievement. Yet like all things human, our creations can be deeply flawed. As a result, we have also used tech to unleash horrors on ourselves, intentionally and by accident. 

We have employed it to broadcast hateful rhetoric and divisive ideologies. We have fine-tuned our machines to kill each other in ever greater numbers and with ever more efficiency. It is our technology that took the carbon out of the ground and put it in the sky. Our technology that poisoned the water and the air, that made deserts out of forests, and that wiped entire species off the planet. 

Technology is an engine for problems, for solving them and for creating entirely new ones—and then we perversely turn to even newer technologies to try to solve those. In this issue, we step back from this cycle. We explore big questions and hard problems and ask: What role can—and should—technology play going forward?

Our cover is inspired by Douglas Main’s terrifying story on plastics. There’s an adage that says every piece of plastic ever made still exists. While that isn’t entirely true, as Main vividly describes, it is pretty darn close. We’re not reducing how much is made—precisely the opposite. Reuse is negligible. Recycling isn’t working. Meanwhile, plastic is absolutely everywhere, and in absolutely everything, including our own bodies. What are we going to do about it? 

AI epitomizes the sometimes fraught relationship we have with technology. It has the potential to massively benefit society—and yet it could cause incalculable harm if we get it wrong. As its development races ahead, Grace Huckins has written a powerful, even poetic exploration of AI consciousness. What would it take, and what would it mean, for an AI to become conscious? How would we know? What would we owe it? 

David W. Brown takes on the challenge of spacecraft design and the struggle to make smaller, cheaper missions that can still tell us meaningful new things about the solar system. If we are going to make the most of the resources we devote to space exploration, we’ll have to grapple with the hard limits of physics—and think hard about what we can, and want to, do.

Some of our hardest problems come down to human nature, and our capacity and sometimes outright desire for conflict. Social media and online communications are lousy with trolling, disinformation, harassment, and hate speech. Katie Notopoulos argues that the solution for much of this is to end our fixation with free services and move to smaller, distributed platforms that put more power in users’ hands. 

One hard problem most of us have likely faced is the experience of interacting with government services online. A decade after the famously botched launch of Healthcare.gov, Tate Ryan-Mosley explores why it is still so hard for the government to get tech right. Her reporting takes us to New York City, which has had some measure of success—in part by going with the lowest tech possible. 

And finally, we asked some of the smartest minds out there what they consider the biggest problems that aren’t getting enough attention right now. You’ll find their responses, and many more, online at techreview.com/hardproblems.

Thanks for reading,

Mat Honan
