Four lessons from 2023 that tell us where AI regulation is going

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

In the US and elsewhere, 2023 was a blockbuster year for artificial intelligence and AI regulation, and this next year is guaranteed to bring even more action. On January 5, I published a story with my colleagues Melissa Heikkilä and Zeyi Yang that lays out what we should expect in the coming 12 months in AI policy around the world. 

Most broadly, we are likely to see the strategies that emerged last year continue, expand, and begin to be implemented. For example, following President Biden’s executive order, various US government agencies may outline new best practices but empower AI companies to police themselves. And across the pond, companies and regulators will begin to grapple with Europe’s AI Act and its risk-based approach. It certainly won’t be seamless, and there’s bound to be a lot of discussion about how these new laws and policies actually work in practice. 

While writing this piece, I took some time to reflect on how we got here. I think stories about technologies’ rise are worthy of reflective examination—they can help us better understand what might happen next. And as a reporter, I’ve seen patterns emerge in these stories over time—whether it’s with blockchain, social media, self-driving cars, or any other fast-developing, world-changing innovation. The tech usually moves much faster than regulation, with lawmakers increasingly challenged to stay up to speed with the technology itself while devising new ways to craft sustainable, future-proof laws. 

In thinking about the US specifically, I’m not sure what we’re experiencing so far is unprecedented, though certainly the speed with which generative AI has launched into our lives has been surprising. Last year, AI policy was marked by Big Tech power moves, congressional upskilling and bipartisanship (at least in this space!), geopolitical competition, and rapid deployment of nascent technologies on the fly. 

So what did we learn? And what is around the corner? There’s so much to try to stay on top of in terms of policy, but I’ve broken down what you need to know into four takeaways.

1. The US isn’t planning on putting the screws to Big Tech. But lawmakers do plan to engage the AI industry. 

OpenAI’s CEO, Sam Altman, began his tour de Congress last May, six months after the bombshell launch of ChatGPT. He met with lawmakers at private dinners and testified about the existential threats his own technology could pose to humanity. In a lot of ways, this set the tone for how we’ve been talking about AI in the US, and it was followed by Biden’s speech on AI, congressional AI insight forums to help lawmakers get up to speed, and the release of more large language models. (Notably, the guest list for these AI insight forums skewed heavily toward industry.)

As US lawmakers began to really take on AI, it became a rare (if small) area of bipartisanship on the Hill, with legislators from both parties calling for more guardrails around the tech. At the same time, activity at the state level and in the courts increased, primarily around user protections like age verification and content moderation.

As I wrote in the story, “Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.” The culmination of all this was Biden’s executive order at the end of October, which outlined a distributed approach to AI policy, in which different agencies craft their own rules. It (perhaps unsurprisingly) will rely quite heavily on buy-in from AI companies.

Next year, we can expect some new regulations to build on all this. As we wrote in our story today, Congress is looking to draft new laws and will consider existing bills on recommendation algorithms, data privacy, and transparency that will complement Biden’s executive order. States, too, will be considering their own regulations.

2. It’s not going to be easy to grapple with the harms and risks posed by AI.

While existential risk got the biggest headlines last year, human rights advocates and researchers frequently called out the harm that AI already on the market is causing right now, like perpetuating inaccuracy and bias. They warned that hyping existential risks would pull focus from dangerous realities, like medical AIs that disproportionately misdiagnose health issues in Black and brown patients.

As debates over how concerned we should be about the coming robot wars infiltrated dinner table chats and classrooms alike, agencies and local regulators started making declarations and issuing statements about AI, such as the joint statement in April from four federal agencies, including the FTC and CFPB, which warned that AI has the “potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Just how those outcomes will be monitored or prevented, however, is far from clear at this point. 

As for the tech industry itself, players are likely to continue to squabble with lawmakers over the riskiness of AI systems. The eleventh-hour discussions over the EU AI Act were hung up on a fight over foundation models, and this debate will likely continue in different arenas this year, as will debates over what uses of AI should be considered high risk and who is responsible for managing those risks.

3. AI is the next frontier for techno-nationalism and global competition.

This past year also made clear that the US approach to AI is shaped by the desire to achieve and maintain a technological advantage over China. Meanwhile, the two countries continue to escalate their trade war over semiconductors, which provide the hardware necessary for AI models.  

Beyond keeping an edge in technological prowess, the US wants to be a leader on tech regulation and compete with a regulation-happy Europe. Biden’s executive order strategically dropped just days before the UK’s AI Summit and before the final negotiations over the EU AI Act were set to take place. 

4. Watch closely what happens in the US election and those around the world. 

Of course, the US will have a big election in 2024, but so will many, many other countries. In my last Technocrat of 2023, we talked about how generative AI and other media technologies have created acute concern about an onslaught of deceitful and inaccurate information. I’m particularly interested in watching how social media platforms and politicians alike address the new threat of political disinformation as a result of generative AI. As I wrote in a story a few months ago, researchers are already seeing a negative impact.

One thing, at least, is sure: the rapid release of generative AI to users in 2023 will affect the 2024 elections, likely in a dramatic and unprecedented way. It’s hard to really predict what may happen given how rapidly the technology is changing and how quickly users are pushing it in different and unexpected directions. So even if governments or social media companies, among others, do try to strengthen safeguards or create new policies, the way generative AI is actually used in 2024 will be critical in shaping future regulations. 

No matter what, it’s definitely going to be an interesting ride!

What I am reading this week

  • The New York Times is suing OpenAI on the grounds that it used its articles to train ChatGPT. It’s one of the biggest stories over the past few weeks that you may have missed, and I was particularly interested in the similarity between some of the ChatGPT outputs and the NYT articles, as documented in the filing.
  • Researchers at the Stanford Internet Observatory found thousands of examples of child sexual abuse material in one of the major data sets used to train generative AI. That data set has now been temporarily taken down.
  • Smart cars are being weaponized by abusive partners as tools for surveillance and tracking, according to a new story by Kashmir Hill in the New York Times. In a world where almost everything has the ability to produce geolocation data, I’m afraid these sorts of stories will be more and more common.

What I learned this week

My colleagues Melissa Heikkilä and Will Douglas Heaven published a forward-thinking piece about what’s to come for AI in 2024, and I figured you all would want a taste! They predict a year of customized chatbots, new advances in generative-AI video, AI-generated misinformation during elections, and multitasking robots. Definitely worth the read!

Meet the 15-year-old deepfake victim pushing Congress into action

I want to share a story about an inspirational young woman and her mother, who have stepped into the fray on AI policy issues after a horrific incident. Too often, new technologies disproportionately harm women and girls without drawing enough attention or bringing about change. In reporting out a recent story, I was so impressed with this family’s drive to fix that. 

In October, Francesca Mani was one of reportedly more than 30 girls at Westfield High School in New Jersey who were victims of deepfake pornography. Boys at the school had taken photos of Francesca and her classmates and manipulated them with artificial intelligence to create sexually explicit images of them without their consent. (Westfield High School said in an email that “matters involving students are confidential” but claimed that “far fewer” than 30 students were affected.)

The practice is actually stunningly commonplace, but we rarely hear such stories—at least in part because many victims of sexual harassment very understandably don’t want to talk publicly about incidents that are so private. But within just a day of learning about the violation, which she calls “shocking,” 15-year-old Francesca started speaking out and calling on lawmakers to do something about the broader problem. Her efforts are already starting to pay off with new momentum behind proposals for state and federal legislation, which I wrote about in a story published this morning. That includes a bill cosponsored by New Jersey state senators Jon Bramnick and Kristin Corrado that would establish civil and criminal penalties for the nonconsensual creation and sharing of deepfake porn.

Francesca and her mother, Dorota, say that their activism aims particularly to support women and girls who might not have their own ability to make change. I spoke with the Manis earlier this week to more deeply understand what this experience has been like, and I want to share parts of my conversation with them, which has been edited for length and clarity. 

Could you walk me through what happened to you and how you found out?

Francesca: Every single girl was worried on October 20. Rumors were going around, people were worried, and all the girls thought they were going to be one of the AI victims. And at the end of the day it was confirmed by the administration that I was one of many AI deepfake victims. Of course, I was shocked, because me and the other girls were betrayed by our classmates. We didn’t know our own classmates would do that to us. 

It’s really brave what you’re doing by speaking out and talking to all these people about your experience. What made you want to take action?

Francesca: Until I was one of the victims, I didn’t really know how complex and scary AI technology is. So it’s made me understand the importance of self-education in regards to the technology, because AI is here to stay, and we need to learn how to live with it without hurting ourselves and others. So this is actually why I have created a website called AI Help, which will help educate and protect us from AI. And this tool will provide resources that will help AI victims self-advocate. I also want to make sure that we have state and federal laws to protect us—children and women—from deepfakes, and that’s already being put into action.

What are some of the key things that you think other girls and women should know about the risks when it comes to AI?

Francesca: It’s important to know that it can happen to anyone, by anyone; like, it could be your classmates. That’s what happened to me. People should realize when they start posting stuff on Instagram or any type of social media that it can happen to you. Protect your image, make your account private, and have only certain followers on it, like people you know, instead of having a public account.

Can you tell me about the conversations you’ve been having with lawmakers about this issue? 

Francesca: I spoke to Senator Bramnick—he’s actually from Westfield—to help me advocate for our state and to [make] new AI laws. We actually spoke in person and he promised that he will do all he can to protect our state from deepfakes. He also immediately cosponsored Senator Corrado’s bill. And if everything goes well with the joint effort, we’ll be able to protect New Jersey with an AI bill by January 2024. And this makes me incredibly happy, to know that my own town senator cares enough to fight for the important cause. 

Congressman [Joe] Morelle [of New York] also invited us to Washington, DC, to meet with other congressmen [from both parties]. 

Dorota: We are hoping after our visit in Washington we’re gonna be able to have more and more support and make sure that we start with something. Then we can always better it.

What has this taught you about politics or the way the American government works? 

Francesca: I was so happy to know that—well, when I reached out I was 14, I just turned 15—but I’m, like, a 14-year-old, and [the lawmakers] listened to me and they helped me. They were willing to protect me and other girls. Something I learned from this is to speak up and not to be afraid. 

I know you filed a police report about this, but what is the legal recourse you are hoping for? 

Francesca: I would really like for whoever’s doing this to be suspended or expelled, because I think it’s important for everyone to feel comfortable if it’s someone from your school. And I would also like an apology. I would forgive that person, but, you know, I would never forget. 

Dorota, you mentioned you haven’t been pleased with the school’s response. Is there anything you would like schools to do differently to either respond to situations like this or prevent them from happening in the first place? 

Dorota: I think education is so important on this matter—educating our children, educating ourselves—and then taking responsibility. I think we should use this [opportunity] to educate our girls that they’re worth it; even though they have been victimized, it doesn’t mean they should be ashamed and they should just accept things as they are and hope for them to pass. We have a wonderful school district. Our teachers are fantastic. Francesca wouldn’t be able to go through this whole situation without the support of her teachers. I cannot say good enough things. 

But the administration simply is hoping for things to die down. I have no report. There’s no consequences, there’s no accountability, there’s no apology. I’m an educator myself. I own a private school in Jersey City. And I think as a mother and as a woman, I’m advocating for something different. I’m supporting my daughter, but as an educator, I’m advocating to create a safe place for our children, because it could happen to anybody. It does not necessarily have to be a woman, and I think we should send a clear message that this is not acceptable. 

[In a statement to MIT Technology Review, Westfield superintendent Raymond González said, “The Westfield Public School District has safeguards in place to prevent this from happening on our network and school-issued devices. We continue to strengthen our efforts by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly in our schools and beyond.” The school also said it conducted an immediate investigation and is working with police.]

Do you think there’s a role for education that also teaches kids what’s appropriate and inappropriate, as well as how to protect themselves from the harms? 

Dorota: Oh, absolutely. I think on many occasions [digital education] will be given to a, you know, physical education teacher, and they will be teaching the dangers of AI. Let’s be honest with each other: this is such a complex and sophisticated technology, and it’s ever changing. We should make sure that a specialist is teaching that class, and it shouldn’t be just at the beginning of the school year. It should be at least twice or three times, and it should be delivered in a meaningful way. 

It really affects people’s lives. And Francesca has such a strong personality. She always has been a fighter. And I applaud her for speaking for herself. At the beginning, when she told me, Mom, I wanna fight, I said, Francesca, I want you to know that it can go either way. You’re going to hear people that are happy and you’re going to hear people that are really against you, and you have to be prepared for it. And she said, I am not a child. I can take people’s opinions and I want to speak up. But not everybody’s gonna have the same character as Francesca. Not everybody’s going to have the same support at home as Francesca. And there are going to be girls or boys who will not see the light at the end of the tunnel and will go for suicide or self-harm. And I don’t think we should wait for that. Education in a meaningful way is the most important thing.

Francesca: I also want to urge all school districts to update their policies on cyber-harassment to add a definition of AI and to add defined consequences if deepfakes are being created by a student. Laws can take time to be passed, but school policies can and should be updated immediately.

It feels like you really have created a lot of momentum. Have you had any negative reactions? 

Francesca: No, not really. It’s so cool to know that I have such a great community and support from my friends and teachers, and I just want to thank them. I’m so proud to be an American. I live in a country where the voice of a now-15-year-old girl can make positive change. 

What I am reading this week

  • Instagram influencers in India are being paid by political campaigns to sway local elections. It’s part of a growing trend in which smaller and nonpolitical social media personalities are tapped for campaign messaging. 
  • The EU AI Act is entering its final stage of negotiations, and some people are worried that the discussions are not going great. The EU Parliament, Commission, and Council have a preliminary deadline of December 6 to finalize the deal, and apparently tech lobbyists are making discussions sticky.
  • The US government will no longer notify Meta when foreign disinformation campaigns are operating on its platforms. This reversal of a years-long policy is a result of a conservative legal campaign. And in the run-up to the US elections, it’s not good news for the health of the internet. 

What I learned this week

Using AI to generate an image uses a ton of energy, according to a new research study from Hugging Face and Carnegie Mellon University. As my colleague Melissa Heikkilä reported, “Their work, which is yet to be peer reviewed, shows that while training massive AI models is incredibly energy intensive, it’s only one part of the puzzle. Most of their carbon footprint comes from their actual use.” The research marks the first time the carbon emissions associated with using AI for different purposes, like image or text generation, have been calculated. 

A controversial US surveillance program is up for renewal. Critics are speaking out.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to key you in on: the renewal of a controversial program of American surveillance.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant. 

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad group that transcends parties is calling for reforming the program, out of concern about the vast surveillance it enables. Here is what you need to know.

What do the critics of Section 702 say?

Of particular concern is that while the program intends to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans including Black Lives Matter activists and a member of Congress.  

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Wikimedia, the foundation that runs Wikipedia, also opposes the program in its current form, saying it leaves international open-source projects vulnerable to surveillance. “Wikimedia projects are edited and governed by nearly 300,000 volunteers around the world who share free knowledge and serve billions of readers globally. Under Section 702, every interaction on these projects is currently subject to surveillance by the NSA,” says a spokesperson for the Wikimedia Foundation. “Research shows that online surveillance has a ‘chilling effect’ on Wikipedia users, who will engage in self-censorship to avoid the threat of governmental reprisals for accurately documenting or accessing certain kinds of information.”

And what about the proponents?

The main supporters of the program’s reauthorization are the intelligence agencies themselves, which say it enables them to gather critical information about foreign adversaries and online criminal activities like ransomware and cyberattacks. 

In defense of the provision, FBI director Christopher Wray has also pointed to procedural changes at the bureau in recent years that have reduced the number of Americans being surveilled from 3.4 million in 2021 to 200,000 in 2022. 

The Biden administration has also broadly pushed for the reauthorization of Section 702 without reform.  

“Section 702 is a necessary instrument within the intelligence community, leveraging the United States’ global telecommunication footprint through legal and court-approved means,” says Sabine Neschke, a senior policy analyst at the Bipartisan Policy Center. “Ultimately, Congress must strike a balance between ensuring national security and safeguarding individual rights.”

What would reform look like?

The proposal to reform the program, called the Government Surveillance Reform Act, was announced last week and focuses on narrowing the government’s authority to collect information on US citizens.

It would require warrants to collect Americans’ location data and web browsing or search records under the program, as well as documentation that the queries were “reasonably likely to retrieve foreign intelligence information.” In a hearing before the House Committee on Homeland Security on Wednesday, Wray said that a warrant requirement would be a “significant blow” to the program, calling it a “de facto ban.”

Senator Ron Wyden, who cosponsored the reform bill and sits on the Senate Select Committee on Intelligence, has said he won’t vote to renew the program unless some of its powers are curbed. “Congress must have a real debate about reforming warrantless government surveillance of Americans,” Wyden said in a statement to MIT Technology Review. “Therefore, the administration and congressional leaders should listen to the overwhelming bipartisan coalition that supports adopting common-sense protections for Americans’ privacy and extending key national security authorities at the same time.”

The reform bill does not, as some civil society groups had hoped, limit the government’s powers for surveillance of people outside of the US. 

While it’s not yet clear whether these reforms will pass, intelligence agencies have never faced such a broad, bipartisan coalition of opponents. As for what happens next, we’ll have to wait and see. 

What else I’m reading

  • Here’s a great story from the New Yorker about how facial recognition searches can lead police to ignore other pieces of an investigation. 
  • I loved this excerpt of Broken Code, a new book from reporter Jeff Horwitz, who broke the Facebook Files revealed by whistleblower Frances Haugen. It’s a nice insidery look at the company’s AI strategy. 
  • Meta says that age verification requirements, such as those being proposed by child online safety bills, should be up to app stores like Apple’s and Google’s. It’s an interesting stance that the company says would help take the burden off individual websites to comply with the new regulations. 

What I learned this week

Some researchers and technologists have been calling for new and more precise language around artificial intelligence. This week, Google DeepMind released a paper outlining different levels of artificial general intelligence, often referred to as AGI, as my colleague Will Douglas Heaven reports.

“The team outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals),” Will writes. “They note that no level beyond emerging AGI has been achieved.” We’ll certainly be hearing more about what words we should use when referring to AI in the future.