Meta Integrates Google & Bing Search Results Into AI Assistant via @sejournal, @MattGSouthern

Meta’s AI assistant, powered by the new Llama 3 model, will now incorporate real-time search results from Google and Bing.

This integration marks a step forward in AI assistant capabilities and their potential impact on the search industry.

You can access Meta AI within Facebook, Instagram, WhatsApp, Messenger, and the newly launched Meta.ai desktop experience.

Google & Bing Search + Meta

Meta is integrating Google and Bing results directly into its AI assistant.

Instead of relying solely on training data or a single search engine, Meta’s AI assistant intelligently selects and displays results from either Google or Bing, depending on the query.

This provides users with a more comprehensive and diverse range of information.

Based on my limited testing before writing this piece, I could only get Meta AI to search using Bing. I’m uncertain about the criteria or conditions that would cause it to use Bing versus Google for web searches.

Here’s an example showing that it’s capable of providing up-to-date information:

Screenshot of Meta AI responding with details about the latest Google core update, including its gradual rollout and its focus on improving spam protections and content quality.

Seamless Search Across Apps

Meta’s new search experience allows you to access relevant information without switching platforms.

For example, while planning a trip in a Messenger group chat, you can ask the assistant to find flights and determine the least crowded weekends to travel.

Screenshot of a smartphone messaging app. Source: about.fb.com/news/, April 2024.

Meta is taking a multi-pronged approach to make its AI assistant a consistent presence across its family of apps:

  • Integration into the search boxes of Facebook, Instagram, WhatsApp and Messenger
  • Appearing directly in the Facebook main feed to respond to posts
  • A new meta.ai website where users can log in to have continued conversations
  • Real-time text-to-image generation with an “Imagine” feature in WhatsApp
  • Future integration with Meta’s VR headsets and smart glasses

More information about these initiatives is available in Meta’s announcement.

Implications For Search Engine Optimization

Integrating Google and Bing search results into Meta’s AI assistant has potential consequences for SEO.

As AI chatbots become increasingly popular for finding information, visibility in the integrated search results will become more valuable for publishers.

SEO strategies may need to evolve to accommodate both traditional search engines and AI assistants.

This could involve a greater focus on satisfying conversational queries that mirror how users interact with chatbots.

A Shifting Landscape

Meta’s move to integrate search results from Google and Bing into its AI assistant highlights the evolving nature of the search industry.

As the lines between traditional search and conversational AI continue to blur, companies are vying for dominance.

Meta’s CEO, Mark Zuckerberg, has expressed ambitious plans for the AI assistant, aiming to make it the most intelligent and widely accessible AI tool available.

With the release of the powerful Llama 3 model and the incorporation of search results from leading search engines, Meta is positioning itself as a top contender in the AI chatbot market.


FAQ

How is Meta’s AI Assistant changing how we interact with search engines?

Meta’s AI assistant is transforming the search engine experience by integrating Google and Bing search results, simplifying access to information across multiple platforms.

Users can search directly through Meta’s AI assistant within Facebook, Instagram, WhatsApp, and Messenger. This integration enables real-time information retrieval and a conversational interface that aligns with how users increasingly prefer to find and interact with content online.

By catering to conversational queries and reducing the need to switch platforms, Meta’s AI assistant may shift the focus of SEO strategies toward satisfying these user interactions.

What new features does Meta’s AI Assistant offer?

Meta’s AI assistant offers several features designed to create a seamless search experience, alongside live search results from Google and Bing.

These features include:

  • Integration into the search functions within Facebook, Instagram, WhatsApp, and Messenger.
  • The ability to interact with posts directly on the Facebook feed.
  • Access to the meta.ai desktop experience for continued conversations.
  • Real-time text-to-image generation through the “Imagine” feature in WhatsApp.
  • Planned future integration with Meta’s virtual reality headsets and smart glasses.


Featured Image: Screenshot from about.fb.com/news/, April 2024

Meta Plans A Less Punitive AI-Generated Content Policy via @sejournal, @martinibuster

Meta announced an update to its AI labeling policy, expanding its definition of “manipulated media” beyond AI-generated videos to include deceptive audio and images on Facebook, Instagram, and Threads.

An important feature of the new policy is its sensitivity to being perceived as restricting freedom of expression. Rather than removing problematic content, Meta is instead simply labeling it. Meta introduced two labels, “Made with AI” and “Imagined with AI,” to make clear what content was created or altered with AI.

New Warning Labels

Labels for AI-generated content will rely on detecting industry-shared signals of AI authorship and on self-disclosure:

“Our ‘Made with AI’ labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content”

Content that is significantly misleading may receive more prominent labels so that users have better context.

Harmful content that violates the Community Standards, such as content that incites violence, election interference, bullying, or harassment, will qualify for removal, regardless of whether it is human- or AI-generated.

Reason For Meta’s Updated Policy

The original AI labeling policy was created in 2020 and, because of the state of the technology at the time, was narrowly confined to addressing deceptive videos (the kind that depicted public figures saying things they never did). Meta’s Oversight Board recognized that the technology has progressed to the point that a new policy was needed. The new policy accordingly expands to address AI-generated audio and images, in addition to videos.

Based On User Feedback

Meta’s process for updating its rules appears to have anticipated pushback from all sides. The new policy is based on extensive feedback from a wide range of stakeholders and input from the general public. It also has the flexibility to bend if needed.

Meta explains:

“In Spring 2023, we began reevaluating our policies to see if we needed a new approach to keep pace with rapid advances… We completed consultations with over 120 stakeholders in 34 countries in every major region of the world. Overall, we heard broad support for labeling AI-generated content and strong support for a more prominent label in high-risk scenarios. Many stakeholders were receptive to the concept of people self-disclosing content as AI-generated.

…We also conducted public opinion research with more than 23,000 respondents in 13 countries and asked people how social media companies, such as Meta, should approach AI-generated content on their platforms. A large majority (82%) favor warning labels for AI-generated content that depicts people saying things they did not say.

…And the Oversight Board noted their recommendations were informed by consultations with civil-society organizations, academics, inter-governmental organizations and other experts.”

Collaboration And Consensus

Meta’s announcement explains that it plans for the policy to keep pace with technology by revisiting it with organizations like the Partnership on AI, governments, and non-governmental organizations.

Meta’s revised policy emphasizes the need for transparency and context around AI-generated content, states that removal will be based on violations of its Community Standards, and makes labeling potentially problematic content the preferred response.

Read Meta’s announcement

Our Approach to Labeling AI-Generated Content and Manipulated Media

Featured Image by Shutterstock/Boumen Japet

Facebook And Instagram Hit By Massive Outage via @sejournal, @martinibuster

Having trouble logging into Facebook and Instagram? It appears that both Facebook and Instagram have gone down. Users are reporting issues logging into either platform.

Attempts to log into Facebook result in an unrecognized password error.

Over Half A Million Reports Of Facebook Outages

The DownDetector website shows the outage beginning at about 7:01 AM Pacific (10:01 AM Eastern) and escalating rapidly, with over half a million users reporting issues within half an hour.

Facebook Outage Reports By Users

Screenshot: DownDetector outage reports for Facebook

The problem, whose cause is unknown, is affecting the ability to log in on both Facebook and Instagram, on the web and in the apps.

Where The Facebook Outage Is Affecting Users

Screenshot: DownDetector outage map for Facebook

Instagram Outage Reports

Perhaps reflecting Instagram’s user base and the likelihood of its users reporting the outage, a smaller number of users have reported being unable to use the site. By 8:40 AM Pacific, there were just over 92,000 outage reports on the DownDetector website.

Screenshot: DownDetector outage reports for Instagram

Instagram App Outage

The breakdown of Instagram outage reports reflects how people use the platform: the clear majority of users are reporting an outage in the Instagram app.

Instagram App Affected

Screenshot: DownDetector reports for the Instagram app

The reporting for Facebook shows that 75% of reports were about problems logging in, compared with only 12% of reports for Instagram.

Neither platform is currently functioning.

This is a developing story.

Featured image by Shutterstock/FrameStudio

Meta Will Label AI Images Across Facebook, Instagram, & Threads via @sejournal, @MattGSouthern

Meta announced it will start labeling images created by AI across Facebook, Instagram, and Threads in the coming months.

The move comes as AI image generation tools grow in popularity, making distinguishing human-made and AI-created content harder.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” states Nick Clegg, President of Global Affairs at Meta.

Upcoming Features & Expectations

Meta plans to start labeling AI-generated images in multiple languages on its platforms over the next few months.

This move comes during major elections worldwide, when knowing the source of content is especially important.

Meta will employ various techniques to differentiate AI-generated images from other images. These include visible markers, invisible watermarks, and metadata embedded in the image files.

Additionally, Meta is implementing new policies requiring users to disclose when media is generated by artificial intelligence, with consequences for failing to comply.

Meta’s methods follow best practices recommended by the Partnership on AI (PAI), an industry group focused on responsible AI development.
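One industry-shared signal of this kind is provenance metadata, such as the IPTC “Digital Source Type” field that participating AI image generators can embed to mark a file as “trainedAlgorithmicMedia.” The snippet below is a minimal, purely illustrative sketch of checking for that marker, not Meta’s actual detection pipeline; the filename is a placeholder, and real systems also rely on invisible watermarks that a simple metadata check cannot see.

```python
# Illustrative sketch only: scan an image file's embedded metadata (XMP is
# stored as plain text inside the file) for the IPTC digital-source-type value
# that some AI image generators write. This is NOT Meta's detection system;
# production pipelines also use invisible watermarks and other signals.

AI_PROVENANCE_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for AI-generated media


def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI-provenance marker."""
    with open(path, "rb") as image_file:
        return AI_PROVENANCE_MARKER in image_file.read()


if __name__ == "__main__":
    # "example.jpg" is a hypothetical filename used for illustration.
    print(has_ai_provenance_marker("example.jpg"))
```

A byte-level check like this only catches self-declared metadata that has not been stripped, which is one reason the announcement pairs metadata with visible markers and invisible watermarking.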

Looking Ahead

Over the next 12 months, Meta will closely monitor user engagement with labeled AI content. These insights will shape the platform’s long-term strategy.

Currently, Meta manually labels images created through its internal AI image generator with disclosures like “Imagined by AI.” Now, the company will leverage its detection tools to label AI content from other providers like Google, Microsoft, Adobe, and leading AI art platforms.

In the interim, Meta advises users to critically evaluate accounts sharing images and watch for visual inconsistencies that may reveal computer generation.

Key Takeaways

Here are some key takeaways for businesses and social media marketers based on Meta’s announcement:

  • Authenticity and transparency will be crucial as AI image creation goes mainstream. Businesses should consider proactive disclosures if using AI-generated content in marketing.
  • Two camps may emerge – those who embrace AI creations and those who value “human-made” content. Brands should understand what their audience prefers.
  • With proper labeling, synthetic content may not negatively impact trust. However, marketers should closely monitor user sentiment surrounding AI usage.

AI could become a powerful marketing asset for content creation at scale, but ethical AI development is advised. Rushing to use immature technologies could backfire.

Interest in synthetic media detection tools, digital watermarking, and metadata standards will likely surge. Savvy marketers should stay on top of these technologies.

Meta’s approach hints at a measured transition, but swift change is likely. Marketers can stay ahead by preparing flexible creative and compliance strategies for synthetic content.


Featured Image: Screenshot from about.fb.com/news/, February 2024. 

Social Media In 2024: YouTube & Facebook On Top, TikTok Growing via @sejournal, @MattGSouthern

A recent survey by the Pew Research Center found that YouTube and Facebook remain the most widely used social media platforms among adults in the United States. At the same time, the survey showed substantial increases in the number of TikTok users.

According to the survey, YouTube and Facebook are the most widely used online platforms among U.S. adults, with 83% and 68% usage rates, respectively.

Approximately 50% of U.S. adults use Instagram. Other platforms like Pinterest, TikTok, LinkedIn, WhatsApp, and Snapchat have usage rates ranging from 27% to 35% of U.S. adults.

This year’s survey was the first to ask about BeReal, a newer photo-sharing app with a usage rate of just 3% among U.S. adults.

The percentage of U.S. adults who use TikTok has increased from 21% in 2021 to 33% currently. This growth rate for TikTok exceeds the more modest or stagnant growth rates observed for other social media platforms over the same period.

Age Disparities In Social Media Use

The survey results reveal differences in social media platform usage across age groups. Adults under 30 were likelier to use Instagram, Snapchat, and TikTok than older adults.

For example, 78% of 18-29-year-olds reported using Instagram, much higher than the 15% of adults 65 and over. Snapchat and TikTok followed similar usage patterns, with younger adults showing higher rates.

In contrast, YouTube and Facebook had more consistent usage across age groups, though younger adults still exhibited higher engagement on these platforms than older adults.

Demographic Differences In Social Media Use

The Pew Research Center study revealed demographic differences in social media platform usage:

  • Instagram: More popular among Hispanic and Asian adults, women, and those with some college education.
  • TikTok: Higher usage rates among Hispanic adults and women.
  • LinkedIn: Most popular among Americans with higher educational attainment.
  • Twitter (now “X”): Usage correlates with higher household incomes.
  • Pinterest: Significantly more popular among women.
  • WhatsApp: More frequently used by Hispanic and Asian adults.

Takeaways For Social Media Marketers

The Pew Research Center’s latest findings on social media usage in the United States provide several valuable insights that social media marketers should consider when developing marketing strategies:

  • YouTube has broad appeal across all age groups, making it an essential platform for video campaigns targeting a wide audience.
  • Facebook maintains an extensive user base and provides capabilities for targeted advertising and reaching diverse demographics.
  • TikTok is experiencing explosive growth, especially among younger users, presenting opportunities for brands to leverage its creative and viral nature.
  • Instagram is highly popular with youth and minority groups like Hispanics and Asians, making it suitable for campaigns targeting these demographics.
  • LinkedIn caters to educated professionals, making it ideal for B2B marketing, thought leadership, and employer branding.
  • Short-form video content is rising in popularity, as seen with TikTok, so bite-sized engaging videos can capture limited user attention spans.
  • Though smaller in scale, niche platforms like Pinterest, Snapchat, and WhatsApp enable targeted niche marketing opportunities.
  • Emerging platforms such as BeReal could provide first-mover advantages as they expand.
  • Snapchat and TikTok are essential for engaging users under 30 years old.
  • Cross-platform campaigns allow greater reach and unified messaging.
  • Platforms popular with specific audiences like Hispanics, Asians, and higher-income households, such as WhatsApp and Twitter, should be considered when marketing to those groups.

Marketers can apply these insights to craft platform-specific strategies tailored to user demographics and behaviors. A nuanced understanding of the latest trends can inform more effective social media engagement and returns on marketing investment.

Methodology

The Pew Research Center surveyed 5,733 U.S. adults between May 19 and September 5, 2023. Ipsos carried out the survey using both online and mail methodologies to obtain a demographically representative sample of the U.S. adult population. The results were weighted by gender, race and ethnicity, education, and other relevant factors to align with U.S. Census benchmarks.

The survey represented a transition from traditional phone polling to a combination of web and mail. The Pew Research Center has provided details on the survey methodology and the potential impact of this change for those interested in better understanding the data collection process.


Featured Image: Primakov/Shutterstock

Use social media to increase your sales

Social media isn’t just for sharing memes and cat videos. It can be a useful tool to boost your sales or get more subscriptions. Just think about it: billions of users are scrolling through various social media platforms every day. If you know how to reach them, you’ll be increasing your sales while connecting with your audience. Want to know how? Just read on!

Let’s quickly look at the benefits before we dive into the how. Because there are plenty! For example, social media is an affordable way to reach your audience across multiple platforms. You won’t need a big budget in order to have a successful social media strategy. 

Social media is also the channel most people prefer for hearing about new products or services, with 68% of consumers saying so. One reason is that social media feels more personal: the communication lines feel shorter. That matters, because most people say the most memorable brands on social media are the ones that respond to customers.

1. Treat social media like a shopping mall

Because your audience already treats it like one. They’re buying more and more on social media marketplaces. In 2023, sales were expected to reach $1.3 trillion, which is huge, especially compared to 2020, when sales reached ‘just’ $560 billion.

It’s clear that the industry is growing rapidly, so why not take advantage and make your products or services available on social media? A few popular ones are Facebook Shop, TikTok Shop, Instagram Shopping, and Pinterest Product Pins. And there are many more, so we suggest researching which ecommerce platforms work best for you and your audience.

Example of Instagram Shopping in a meme post by Loop Earplugs

2. Try conversational commerce

Have you ever received a promotional code or discount in your direct messages (DMs)? Or perhaps you’ve asked a question about a product via WhatsApp? That’s conversational commerce.

So, why should you try this? Because for your audience, it can feel like they’re talking to a store employee in-person, except virtually. They can ask questions, and receive advice and recommendations. Conversational commerce also simplifies the buyer journey. Your audience can read reviews, explore products, ask questions, and hit the checkout button without ever having to go somewhere else. 

Example of conversational commerce: a chat between a fictional company and a potential customer about yoga leggings, in which the customer asks for help and the company suggests a few products, each with a buy button.
Source: ShipBob

The difference between conversational commerce and social media commerce

So, what’s the difference? Mostly, the place where the sale happens. Conversational commerce is your Instagram DMs, for example, and social media commerce takes place in Instagram Shopping.

3. Don’t post on a Sunday

It sounds funny, but it’s true. There are good and bad moments to post on social media, but Sunday is by far the worst. That’s not to say you can never post on a Sunday – just that you’ll get more engagement if you post on a different day. 

The best time to post differs per social media platform, so it’s good to be aware of what those times are. Otherwise, your audience won’t ever see your content. And that’s a shame!

4. Don’t put your eggs in one basket

In the same vein, it’s generally a good idea to become active on multiple social media platforms. After all, you want to reach as many people as possible. Not everyone in your audience uses Instagram – some might be very loyal Facebook users! 

However, a large part of your audience uses more than one social media platform. This gives you plenty of opportunity to connect with them multiple times a day. Just don’t spread yourself too thin! You don’t want to copy and paste your content on every platform. Instead, it’s better to focus on a maximum of three social media platforms, and create good content for those.

5. Post what your audience likes

So, what is good content? In the world of social media, this means content that your audience wants to share and engage with. Generally, this also means: images and videos. Last year, for example, online videos had an audience reach of 92.3% among internet users worldwide! The most popular videos were music videos, memes, and tutorial or how-to videos. 

Most popular video content types worldwide: music videos lead at 49.4%, followed by comedy, meme, or viral videos at 35.3%, then tutorial or how-to videos, live streams, educational videos, product review videos, sports clips or highlights, gaming videos, and, at the bottom, influencer videos and vlogs.
Source: Statista

Of course, this doesn’t have to be the case for your audience. Perhaps they really like funny product pictures, or customer testimonials. Simply look at how many people like, comment, and share your product pictures versus your other content. Is it a lot more? Then you should make more! 

And don’t forget to make your post accessible, so everyone can read and engage with your content.

Relevant content usually does well too

If you keep up with trends, you’ll never be at a loss for content to create and post. Granted, you should be aware of which trends appeal to your audience. Referencing an exciting new change in the Lego company, for example, might not interest your audience of horse enthusiasts. 

Global news, however, generally speaks to a lot more people. By referencing a recent global event, you could attract your audience’s attention and keep your brand relevant at the same time. 

Don’t overwhelm your audience

Always choose quality over quantity. If you worry that you might be prioritizing quantity, simply ask yourself the following questions:

  • Is this original?
  • Does this content help my followers?
  • Is this inspiring or entertaining or actionable? 

Another tip that might help is creating a social media calendar. If you plan in advance how many times a week you’ll post and what you’ll post, then you don’t risk overwhelming your audience. Just keep some space for spontaneous posts. For example, when a meme has gone viral and you want to jump on the bandwagon. 

Hopefully, you’ll feel inspired about how you can use your business’s social media in new ways. Perhaps you want to try conversational commerce? Maybe you want to create more content that references memes. Whatever it is, just be sure that you pick what fits your brand. Because in a world with millions of posts, you want your content to be uniquely yours.

Read more: Social Media Strategy: where to begin »


Meta Lets Users Separate Facebook & Instagram Accounts In Europe via @sejournal, @MattGSouthern

Meta has announced a change in policy for users in Europe.

The company plans to give users more options to control how they use Instagram and Facebook in countries that are part of the European Union, the European Economic Area, and Switzerland.

This shift comes as Meta prepares for new regulations from the Digital Markets Act (DMA) that will start being enforced in March 2024 across Europe. The DMA aims to create fairer competition in the technology industry.

By voluntarily giving users more control now, Meta is trying to get ahead of the regulatory requirements that the DMA will impose on large tech companies like Meta in Europe.

Adapting To New Regulations

To comply with the new Digital Markets Act, Meta is committing substantial resources both to meet the law’s demands and to improve users’ control over their data and experiences on its platforms.

The DMA is part of the European Union’s larger plan to promote fair competition, innovation, and consumer options in the digital marketplace.

Soon, Meta will begin sending notices to its users in Europe explaining how they can control their data sharing between Instagram and Facebook.

Users can keep linking their Instagram and Facebook accounts through Meta’s account management portal, or they can choose to separate the accounts, which would stop their information from being shared across the platforms.

Meta assures that the fundamental features and protections that users have come to expect will remain intact.

Diverse Options Across Services

Facebook Messenger

Facebook Messenger users will have two options. They can keep their Messenger account connected to their Facebook profile or create a new, separate one not linked to Facebook.

Making an independent Messenger account means users can still use Messenger’s core features like messaging and calls, but without providing any of their Facebook data.

Facebook Marketplace

Marketplace users will have two options for how their data is handled. They can choose to have their Facebook information connected to their Marketplace account or opt out of sharing Facebook data.

Users who don’t share their Facebook info can still buy and sell on Marketplace. However, if they opt out of sharing Facebook data, communications about transactions will be through email rather than Messenger.

Facebook Gaming

Gamers who use Facebook have two options for their gaming experience. One option allows Facebook to use their data to customize the games, including suggesting new games and enabling multiplayer features with friends. The other option doesn’t employ any of the user’s Facebook data, which limits them to only certain single-player games without personalized recommendations or social features.

Meta’s Ongoing Commitment

Meta has voiced approval of the Digital Markets Act’s goal of promoting competition and fairness online. The company said it’s dedicated to ensuring its offerings in Europe follow the DMA’s rules while continuing to provide benefits to users.


Featured Image: rarrarorro/Shutterstock

Meta Sets New Transparency Standards For Political Ads via @sejournal, @kristileilani

Meta is instituting a groundbreaking policy to promote transparency regarding AI-altered or digitally manipulated content in ads for political and social causes.

Starting in the new year, the tech giant will require advertisers to disclose any digital creation or alteration of images, videos, or audio within such ads on Facebook and Instagram.

This move aims to combat the spread of misinformation by ensuring users can identify content that has been manipulated to depict events or statements that did not occur.

The new regulation mandates that advertisements containing photorealistic representations of nonexistent people or events, or altered footage of real events, carry a disclosure.

However, disclosures are not required for minor edits like color correction or image resizing, provided they do not materially affect the ad’s message.

Meta will include a notice in the ad itself and in the Ad Library when content is identified as digitally altered.

Advertisers who fail to comply will face ad rejection and potential penalties.

Meta’s policy continues to prohibit content that breaches its guidelines, relying on independent fact-checkers to vet viral misinformation.

The implementation of this policy reflects a growing concern over the integrity of information disseminated through online platforms.

By compelling advertisers to be upfront about the use of digital techniques in ads, Meta is taking a step towards restoring user trust in the content they encounter online.

The new policy is a significant development for anyone concerned with the authenticity of online content, especially in the politically charged atmosphere of social media.

It addresses the sophisticated capabilities of AI and digital tools that can create convincing yet false narratives.

The requirement for disclosure is not just about informing viewers but is also an attempt to deter the manipulation of information for political gain.

For professionals managing brand reputation and crafting marketing strategies, this policy underscores the importance of transparency and ethical advertising practices.


Featured image: PixieMe/Shutterstock

Live Blog: Meta Reports Q3 Earnings via @sejournal, @MattGSouthern

Meta is set to discuss its Q3 earnings on an investor call today at 5:00 PM Eastern. Highlights from the earnings report published earlier today reveal growth that beat analysts’ expectations, driven by a significant rise in advertising revenue.

Key takeaways from Meta’s Q3 2023 earnings report include:

  • Revenue of $34.1 billion, up 23% year-over-year
  • Advertising revenue grew 21%, excluding currency impact
  • Earnings per share of $4.39, far exceeding estimates
  • Net income increased 164% to $11.6 billion
  • Expenses decreased 7% compared to Q3 2022
  • Reality Labs operating loss was $3.7 billion as metaverse investment continues
  • Revised 2023 expense outlook to $87-89 billion, down from the prior $88-91 billion range. Now expects 2023 capital spending of $27-29 billion
  • Increasing legal and regulatory scrutiny cited as a headwind
  • User growth and revenue growth had slowed in recent quarters

Stay tuned as we live blog key earnings insights for digital marketing professionals.



Featured Image: Dennis Diatel/Shutterstock

How Meta and AI companies recruited striking actors to train AI

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 


“This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

“This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

“Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”


All this leaves actors, struggling after three months of limited to no work, primed to accept the terms from Realeyes and Meta—and, intentionally or not, to affect all actors, whether or not they personally choose to engage with AI. 

“It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

Humans as data 

Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 


Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

Teaching AI to appear more human

At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

This made the gig an attractive proposition for young actors like T, just starting out in Hollywood—a notoriously challenging environment even had he not arrived just before the SAG-AFTRA strike started. (T has not worked enough union jobs to officially join the union, though he hopes to one day.) 

In fact, even more than a standard acting job, T described performing for Realeyes as “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company that he only skimmed through. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

“If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”


When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word for which there are assumed but no universally agreed-upon definitions—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”  

But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. CEO Mark Zuckerberg recently described these avatars on the popular podcast from AI researcher Lex Fridman as core to his vision of the future—where physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

Individual choice, collective impact 

Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

But a future agreement, which is specifically between the studios and SAG, will not be applicable to tech companies conducting “research” projects, like Meta and Realeyes. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

“The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

(Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

“Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

But at least for now, it’s a hypothetical. 

“I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489.