How Meta and AI companies recruited striking actors to train AI

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 

“There aren’t really clear rules right now.”

“This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

“This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

“Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”

Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 

All this leaves actors, struggling after three months of little to no work, primed to accept the terms from Realeyes and Meta. And those terms, intentionally or not, affect all actors, whether or not they personally choose to engage with AI. 

“It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

Humans as data 

Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 
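As a toy sketch of that annotation step (the filenames and labels here are hypothetical, not drawn from any company's actual pipeline), a supervised dataset is simply raw samples paired with human-chosen tags, and the coverage of those tags bounds what a model can learn:

```python
# Hypothetical labeled records: a human annotator has tagged each raw
# image with what it contains. These pairs form the supervised
# training set a model later learns from.
labeled_data = [
    {"image": "frame_001.jpg", "label": "pet_waste"},
    {"image": "frame_002.jpg", "label": "cable"},
    {"image": "frame_003.jpg", "label": "pet_waste"},
]

def label_counts(dataset):
    """Count examples per label; annotation coverage like this
    determines which categories a model can learn to recognize."""
    counts = {}
    for record in dataset:
        counts[record["label"]] = counts.get(record["label"], 0) + 1
    return counts

print(label_counts(labeled_data))  # {'pet_waste': 2, 'cable': 1}
```

Real pipelines add review passes and agreement checks between annotators, but the core artifact is the same: a sample paired with a human judgment.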

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 

Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

Teaching AI to appear more human

At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

This made the gig an attractive proposition for young actors like T, just starting out in Hollywood—a notoriously challenging environment at the best of times, let alone for someone who arrived just before the SAG-AFTRA strike started. (T has not worked enough union jobs to officially join the union, though he hopes to one day.) 

In fact, even more than a standard acting job, T described performing for Realeyes as “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company that he only skimmed through. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

“If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”

When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word for which there are assumed but no universally agreed-upon definitions—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”  

But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. CEO Mark Zuckerberg recently described these avatars on AI researcher Lex Fridman’s popular podcast as core to his vision of the future—where physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

Individual choice, collective impact 

Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

But a future agreement, which is specifically between the studios and SAG, will not be applicable to tech companies conducting “research” projects, like Meta and Realeyes. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

“The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

(Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

“Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

But at least for now, it’s a hypothetical. 

“I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

Meta To Launch AI Chatbots With Distinct Personas via @sejournal, @kristileilani

Facebook’s parent company, Meta, may be preparing to launch a new range of chatbots with distinct personalities, potentially as early as next month.

The bots aim to boost user engagement and provide innovative search functions and recommendations on Meta’s platforms.

The big tech company is exploring a variety of “personas.” One is rumored to be a bot that emulates the speaking style of Abraham Lincoln. Another adopts a surfer’s demeanor to provide travel advice.

The latest AI initiative, reported by the Financial Times, arrives as Meta faces competition from social media features like Snapchat’s My AI.

Chat With 30 AI Personalities

AI agents for Meta’s top social platforms were likely developed by the product group focused on generative AI that Meta CEO Mark Zuckerberg announced earlier this year.

“Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways. We’re exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences.”

During an interview, Zuckerberg hinted at future AI agents who can be assistants and coaches. He emphasized that there won’t be a single AI entity with which people interact.

The hint aligned with a purported screenshot of AI chatbots on Instagram, shared by mobile developer Alessandro Paluzzi in June.

Privacy Concerns

The move to implement AI chatbots across Meta platforms like Facebook, Instagram, Messenger, and WhatsApp raises privacy concerns.

Notably, these chatbots will likely collect new troves of user data, enabling Meta to deliver more personalized content and advertisements. It’s a critical factor, considering that a significant portion of Meta’s $117 billion annual revenue comes from advertising.

Screenshot from App Store, August 2023

Recent Meta AI Developments

For transparency, Meta recently shared 22 system cards explaining how AI ranks content throughout Facebook and Instagram. These cards give insight into how Meta social platform users benefit from AI algorithms.

Meta also released the latest version of its large language model, Llama 2, in partnership with Microsoft. The open-source LLM is free for commercial and research use, opening the door to many more applications of AI in business and marketing tools.

The Future Of Personalized Marketing

With Meta AI chatbots offering a wide range of engagement experiences, marketers may need to rethink their approach to personalized marketing.

We contacted Meta for comment on how the upcoming launch of AI chatbots could affect how businesses engage with customers.

As more people access AI chatbots through popular social platforms, like Snapchat’s My AI, consumers will come to expect access to 24/7 conversational AI assistance.

This will likely drive more marketers to implement AI chatbots to ensure customers are not slipping through the cracks due to simple, unanswered questions.


Featured image: salarko/Shutterstock

Meta Earnings Call: AI Improves UX For Over 3 Billion People via @sejournal, @kristileilani

Meta recently held its Q2 earnings call, revealing critical insights into the company’s performance, social platform engagement, and future plans.

The company’s focus on artificial intelligence (AI) developments and integrating these technologies into their platforms was a significant highlight.

As expected, Mark Zuckerberg, Meta CEO, provided a comprehensive overview of financial performance, user statistics, strategic initiatives, AI goals, and the metaverse.

Now that we’ve gotten through the major layoffs, the rest of 2023 will be about creating stability for employees, removing barriers that slow us down, introducing new AI-powered tools to speed us up, and so on.

Meta AI Advancements Include Open-Source LLM

Meta reported a strong Q2 2023, with significant advancements in AI technology and increased user engagement across its platforms.

The company’s AI advancements include the development of open-source large language models (LLMs) like Llama 2 that improve the relevance and quality of content shown to users.

…you can imagine lots of ways AI could help people connect and express themselves in our apps: creative tools that make it easier and more fun to share content, agents that act as assistants, coaches, or that can help you interact with businesses and creators, and more.

More importantly, Meta hopes Llama 2 will be instrumental in combating harmful content and ensuring the safety and integrity of its platforms.

Improved User Experience Of Social Platforms

Many of Meta AI’s achievements have been integrated into platforms like Facebook, Instagram, Messenger, and WhatsApp to increase user engagement and satisfaction.

Continuous improvements in user experience, driven by AI advancements, were credited for the increased growth.

Reels is a key part of this Discovery Engine, and Reels plays exceed 200 billion per day across Facebook and Instagram. We’re seeing good progress on Reels monetization as well, with the annual revenue run rate across our apps now exceeding $10 billion, up from $3 billion last fall.

Zuckerberg was also confident about the trajectory of Threads. Meta will apparently use the same playbook to grow the new app as it has with its other platforms that have over a billion users.

Integrating AI into Meta’s social platforms could lead to more personalized and relevant advertisements for users, increasing engagement and conversion rates.

We introduced AI Sandbox, a testing playground for generative AI-powered tools like automatic text variation, background generation, and image outcropping.

Furthermore, the increase in DAUs and MAUs indicates a larger audience for marketers to target, providing more opportunities for brand exposure and customer acquisition. One area with a promising future is business messaging via WhatsApp.

Business messaging is another key piece of our monetization strategy, and we recently announced that the 200 million users of our WhatsApp Business app will now be able to create Click-to-WhatsApp ads for Facebook and Instagram without needing a Facebook account.

Screenshot from Meta, July 2023

A breakdown of the average revenue generated per Facebook user globally and in specific regions was included.

Screenshot from Meta, July 2023

The Metaverse

The metaverse remains the company’s ambitious vision for the future of the internet.

Meta’s investment in this new technology is a testament to their commitment to innovation and a potential game-changer in our online interactions.

However, the metaverse development is a long-term project with many uncertainties, and it’s clear that the company is still in the early stages of this journey.

Continuing The Shift In Focus

The latest Meta earnings call report reveals a company in transition, grappling with the challenges of shifting its focus towards developing the metaverse while maintaining its core business operations.

Revenue growth, driven by advertising, continues to be robust, but the costs associated with developing the metaverse impact profitability.

The focus on user privacy and safety and efforts to improve content moderation by Meta are commendable.

However, these initiatives come with their own challenges, as evidenced by the increased expenses and the ongoing scrutiny from regulators and the public.

The company’s future success will largely depend on its ability to successfully navigate these challenges and realize its vision for AI advancements and the metaverse.

Screenshot from Google Finance, July 2023

Featured image: Shutterstock

Meta And Microsoft Release Llama 2 Free For Commercial Use And Research via @sejournal, @kristileilani

Meta and Microsoft announced an expanded artificial intelligence partnership with the release of their new large language model (LLM), Llama 2, free for research and commercial use.

This marks the latest trend toward an open LLM development and training approach.

Meta Releases Llama 2

Meta announced Llama 2, now accessible through Microsoft Azure, Amazon Web Services, Hugging Face, and other providers.

The post highlighted how increased access to foundational AI technologies can benefit businesses globally. Meta aimed to let developers and researchers stress-test LLMs to identify and fix problems faster.

Meta conveyed the belief that opening up access is safer than limiting availability. It hoped the AI community would collaborate on improving tools and addressing vulnerabilities.

Meta also announced an expanded partnership with Microsoft, making Microsoft the preferred cloud provider for Llama 2.

The new post noted Meta’s focus on responsible AI development, including how it red-teamed the models, disclosed shortcomings, and provided a responsible use guide. An open innovation community and upcoming challenges were introduced to get feedback.

The company expressed excitement in seeing what people worldwide build with the new model.

Declaration Of Support For Open AI Development

In addition to announcing the release of Llama 2, Meta also released a statement supporting open AI development:

“We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.”

The statement brought together major figures from academia, venture capital, and leading technology companies – including NVIDIA, Dropbox, Doordash, Shopify, Zoom, and Intel – supporting open AI development.

Azure And Windows Support Llama 2

In a related announcement, Microsoft revealed its partnership with Meta to make the new Llama 2 artificial intelligence model available on the Azure cloud platform.

Llama 2 models should be optimized to run locally on Windows as well. Microsoft explained Windows developers can build AI experiences for their apps using Llama 2.

Azure customers should be able to fine-tune and deploy 7B, 13B, and 70B-parameter Llama 2 models.

Screenshot from Meta, July 2023

The Microsoft+Meta partnership aimed to increase access to foundational AI technologies. Microsoft said they share Meta’s commitment to democratizing AI and its benefits.

It also expanded Microsoft’s ecosystem of open AI models on Azure.

Microsoft also discussed their approach to responsible AI. They said techniques like prompt engineering can optimize Llama 2 for safer, more reliable experiences. Azure AI Content Safety also offers another layer of protection.

The Benefits Of Open LLMs

The release of Llama 2 by Meta and its availability on several platforms, including Microsoft Azure and Windows, marks an important milestone in the trend toward more open and accessible LLMs.

With this expanded partnership, Meta and Microsoft could give the AI community a greater ability to build upon, test, and refine large language models like Llama 2.

By opening up access to Llama 2, Meta hopes to spur innovation and the development of helpful applications powered by the model. Microsoft, in turn, provides key computing infrastructure and support through Azure and Windows so developers worldwide can leverage the new LLM.

Both companies emphasized their commitment to democratizing AI while pursuing responsible development through transparency, safety practices, and gathering feedback. Meta and Microsoft hope to maximize the benefits of AI advances while mitigating risks.

Time will tell how greatly Llama 2 and other open LLMs impact businesses and consumers. But this kind of expanded access and collaboration between tech leaders promises to rapidly advance AI capabilities and their thoughtful application.


Featured image: mundissima/Shutterstock

Meta Enhances Video Capabilities On Facebook via @sejournal, @MattGSouthern

Meta has revealed enhancements to the Facebook video platform intended to streamline making, viewing, and engaging with videos.

The platform update incorporates aspects of Reels, Meta’s short video feature, into the Facebook Feed.

This integration aims to simplify video creation and sharing for users by enabling more dynamic video capabilities within the Facebook experience.

The company states in an announcement:

“Whether posting a video for friends and family to see, or trying to reach people who share similar interests, our video editing tools will make it possible for people to express themselves in new ways via Reels or long-form videos.”

Upgraded Video Editing Tools

The latest updates to Reels include new editing features that allow users to combine audio, music, and text seamlessly to make engaging videos.

You can now manipulate clips by speeding, reversing, or swapping them out. There are also improved audio capabilities, allowing you to mix songs, record voiceovers, and reduce unwanted background noise.

Additionally, Meta is adding support for HDR video, meaning users can now upload and view high dynamic range videos shot on their mobile devices directly within Reels.

A Dedicated Video Tab

Facebook is changing how users find and engage with videos on its platform. The section of the app previously called Facebook Watch has been rebranded as the Video tab.

This tab now combines all Facebook video content into one place, including Reels, longer videos, and live streams.

The goal is to create a centralized destination for users to discover and watch the various video formats available on Facebook more easily.

Meta explains:

“The Video tab will look familiar – you can scroll vertically through a personalized feed that recommends all types of video content – but will also feature new horizontal-scroll reels sections that highlight recommended reels, so you can quickly jump into short-form video.”

Discovering Trending Videos

Meta is launching an updated version of the Explore feature for videos.

The updated version of Explore uses a combination of human editors selecting videos and algorithms to recommend relevant and popular videos based on users’ interests.

The goal is to make it easier for people to discover new videos and topics they may enjoy watching.

Instagram Reels Integration

Meta is deepening the integration between Instagram Reels and Facebook.

You can now watch and engage with Instagram Reels content directly on Facebook without switching between apps.

The goal is to help Instagram creators reach more people by exposing their Reels to the Facebook audience.

This initial integration is just the start, as Meta plans on making more improvements to how Instagram Reels and Facebook work together.

The company views this as an ongoing effort to bring its services closer together.

“We’ll continue developing more tools for creators so they can express themselves, build an audience and earn money, along with the discovery and personalization features that give you more control over your experience.”


Featured Image: Joao Serafim/Shutterstock

Meta Announces AI-Powered Tools To Streamline Ad Processes via @sejournal, @kristileilani

In another Big Tech move to redefine digital advertising with artificial intelligence (AI), Meta announced several AI-powered tools to enhance business performance and streamline ad processes.

From the AI Sandbox to upgraded features in the Meta Advantage suite, these innovations promise to catapult online advertising into a new era.

Meta’s latest offerings provide an intriguing glimpse into the future of marketing with generative AI.

Meta’s AI Sandbox

The first portion of the announcement covered the AI Sandbox, a testing playground for AI-driven advertising tools.

  • Text Variation automatically creates multiple versions of an advertiser’s copy, helping advertisers test messages tailored to different audiences.
  • Background Generation allows advertisers to create background images for creative assets from text inputs.
  • Image Outcropping adjusts creative assets to fit different aspect ratios across multiple platforms like Stories or Reels, saving advertisers time when repurposing ad creatives.

Meta Advantage Suite

Next, Meta shared new features for its Meta Advantage suite.

Meta Advantage is a suite of automation tools designed to enhance advertising campaigns by leveraging AI and machine learning. It streamlines ad personalization and optimizes results, potentially saving advertisers time and ad spending.

The platform, which consolidated various automated products under a single banner last year, has witnessed significant growth in adoption.

The latest update brings several new features to Meta Advantage:

  • Businesses can convert existing manual campaigns to Advantage+ shopping campaigns with a single click. This feature, available in the Ads Manager, is expected to roll out across the platform within a month.
  • Advertisers can enrich their catalog ads with video content, a departure from the previous restriction to static images. This feature is being tested and is expected to launch later this year.
  • Performance Comparison reports allow advertisers to measure and compare the performance of their manual and Advantage+ shopping campaigns. This feature is already in the process of being rolled out.
  • Enhanced audience targeting with Advantage+ Audience allows advertisers to provide audience preferences as guidance instead of rigid constraints, opening the door to a broader ad audience. This tool is expected to be more widely available in the coming months.

Investment In AI Infrastructure

Finally, Meta highlighted its investment in AI infrastructure, with billions of dollars allocated annually to build AI capacity for advertisers.

The investment could help new AI-powered tools reach their full potential, benefiting businesses and users.

AI Continues To Transform Digital Advertising

With a commitment to leveraging AI for improved ad performance and user experience, Meta could keep businesses ahead of the curve in the evolving world of digital advertising.


Featured Image: BigTunaOnline/Shutterstock

Should You Invest In Paid Verification From Twitter Blue Or Meta Verified? via @sejournal, @kristileilani

Twitter plans to end its legacy verified program at the end of this month. To continue having a verified blue checkmark, you must subscribe to Twitter Blue, now available globally.

You can check any blue checkmark on Twitter to see if it is a Twitter Blue or legacy verified checkmark by clicking or tapping it.

Screenshot from Twitter, March 2023

Twitter Blue Benefits And Eligibility

Eligibility requirements for a verified blue checkmark include having a confirmed phone number, an account older than 90 days, and no changes to your name, username, or profile picture within 30 days. Accounts with a verified blue checkmark cannot engage in misleading or deceptive practices, such as impersonating someone else or using fake identities.
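The eligibility rules above can be sketched as a simple check. This is a minimal illustration of the stated requirements only; the field names and function are hypothetical, and Twitter's actual review also includes checks (such as anti-impersonation screening) not captured here.

```python
from dataclasses import dataclass


@dataclass
class Account:
    phone_confirmed: bool
    account_age_days: int
    days_since_profile_change: int  # last change to name, username, or picture


def eligible_for_blue_checkmark(account: Account) -> bool:
    """Apply the three baseline eligibility rules described above."""
    return (
        account.phone_confirmed
        and account.account_age_days >= 90
        and account.days_since_profile_change >= 30
    )


print(eligible_for_blue_checkmark(Account(True, 120, 45)))   # meets all rules
print(eligible_for_blue_checkmark(Account(True, 60, 45)))    # account too new
```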

The premium subscription plan offers Twitter users several exclusive features, including the following.

  • A verified blue checkmark.
  • The ability to post longer Tweets and longer videos.
  • The chance to undo a Tweet before it’s sent.
  • The chance to edit some Tweets within the first 30 minutes.
  • A feed of Top Articles shared by those you follow and the people they follow.
  • Account security with two-factor authentication via SMS or authentication apps.
  • Increased visibility when you reply to other users’ Tweets.

Pricing varies based on your country and device. In the United States, it is $8 – $11 monthly.

Twitter also offers distinct profile labels for organizations (a gold checkmark), government officials (a gray checkmark), and other account types.

Meta Verified Benefits And Eligibility

Meta is also rolling out a paid subscription bundle, Meta Verified, that includes verification of Facebook and Instagram profiles.

Eligibility requirements on Facebook and Instagram include having an active profile with your real name and profile photo matching your government-issued ID.

Two-factor authentication must be used to secure your account, and your account must always adhere to the Terms of Service and Community Guidelines for each network.

The paid subscription offers Facebook and Instagram users several exclusive features, including the following.

  • A verified checkmark that lets your audience know you are who you say you are.
  • Exclusive stickers to use on Facebook and Instagram.
  • 100 stars per month to support your favorite Facebook creators.
  • Help from a real person when you experience issues with your account.

Pricing varies based on the device you sign up on and is limited to select users over 18 years old in the U.S., New Zealand, and Australia. It is $11.99 – $14.99 monthly.

The Downsides To Paid Verification

While it offers people who never had the chance to be verified in the past the option to pay for the blue checkmark, paid verification is controversial for several reasons.

For starters, many Twitter Blue users complain that they haven’t noticed an increase in engagement since paying for the subscription and feel they are now paying to be ignored.

Another major concern is the lack of distinction between notable public figures and people who have paid for the checkmark. Previously, accounts had to belong to prominently recognized individuals or brands based on news coverage, industry references, and audience size. Now, notable accounts will have to pay for verification with everyone else.

This new false “notability” could allow bad actors to spread misinformation and scam people based on the account’s status as a verified profile. Some agencies have released consumer alerts in response to growing reports of scams committed by Twitter blue verified accounts.

While these actions violate social platforms’ terms of service and community guidelines, these verified accounts could continue spreading misinformation and scamming others until someone reports an issue. A lot of damage could be done in the time it takes for someone from the social network to investigate reported users.

Some Twitter users strongly oppose paid verification. Some accounts have launched campaigns encouraging others to block Twitter Blue users to decrease the reach of accounts with the paid blue checkmark.

Screenshot from Twitter, March 2023

Others will dismiss opinions shared by users simply because the account has a Twitter Blue verification.

Screenshot from Twitter, March 2023

Is Paid Verification Right For You?

It’s important to weigh the benefits of being verified through Twitter Blue or Meta Verified and the potential implications of paying for notability on social media.

As a social network user, it’s also important to remember some basic safety rules.

  • Regardless of verification status, never give out personal information or account details to other social media users.
  • If you are asked to send money for a specific cause or reason, research it outside social media to ensure it is a legitimate request, not a scam.
  • Fact-check information before you share it with others to prevent spreading misinformation to larger, susceptible audiences. This especially applies to images and video thanks to AI content generation.
  • Utilize two-factor authentication to secure your accounts and save your backup/recovery code for Twitter, Facebook, and Instagram, just in case.

Featured Image: Fantastic Studio/Shutterstock

Should Congress Investigate Big Tech Platforms? via @sejournal, @kristileilani

This week, the House Energy and Commerce Committee will hold a full committee hearing with TikTok CEO Shou Chew to discuss how the platform handles users’ data, its effect on kids, and its relationship with ByteDance, its Chinese parent company.

This hearing is part of an ongoing investigation to determine whether TikTok should be banned in the United States or forced to split from ByteDance.

A ban on TikTok would affect over 150 million Americans who use TikTok for education, entertainment, and income generation.

It would also affect the five million U.S. businesses using TikTok to reach customers.

Is TikTok The Only Risk To National Security?

According to a memo released by the Tech Oversight Project, TikTok is not the only tech platform that poses risks to national security, mental health, and children.

As Congress scrutinizes TikTok, the Tech Oversight Project also strongly urges an investigation of risks posed by tech companies like Amazon, Apple, Meta, and Google.

These platforms have a documented history of serving content harmful to younger audiences and adversarial to U.S. interests. They have also failed on many occasions to protect users’ private data.

Many Big Tech companies have seen TikTok’s success and tried to emulate some of its features to encourage users to spend as much time within their platforms’ ecosystems as possible. Academics, activists, non-governmental organizations, and others have long raised concerns about these platforms’ risks.

To truly reduce Big Tech’s risks to our society, Congress must look beyond TikTok and hold other companies accountable for the same dangers they pose to national security, mental health, and private data.

Risks Posed By Big Tech Companies

The following are examples of the risks Big Tech companies pose to U.S. users.

Amazon

Amazon has made several controversial moves, including a partnership with a state propaganda agency to launch a China books portal and offering AWS services to Chinese companies, including a banned surveillance firm with ties to the military.

Apple

Independent research found that Apple collects detailed information about its users, even when users choose not to allow tracking by apps from the App Store. Over half of the top 200 suppliers for Apple operate factories in China.

Google

The FTC fined Google and YouTube $170 million for collecting children’s data without parental consent. YouTube also changed its algorithm to make it more addictive, increasing users’ time watching videos and consuming ads.

Meta

Facebook allowed Cambridge Analytica to harvest the private data of over 50 million users. It also failed to notify over 530 million users of a data breach that resulted in users’ private data being stolen.

It also allowed Russian interference in the 2016 elections. The influence operation posed as an independent news organization with 13 accounts and two pages, pushing messages critical of right-wing voices and the center-left.

TikTok 

TikTok employees confirmed that its Chinese parent company, ByteDance, is involved in decision-making and has access to TikTok’s user data. While testifying before the Senate Homeland Security Committee, Vanessa Pappas, TikTok COO, would not confirm whether ByteDance would give TikTok user data to the Chinese government.

Conclusion

While the dangers posed by TikTok are undeniable, it’s clear that Congress should also address the risks posed throughout the tech industry. By holding all major offenders accountable, we can create a safe, secure, and responsible digital landscape for everyone.


Featured Image: Koshiro K/Shutterstock

Social Media Engagement Rates Dropping Across Top Networks via @sejournal, @kristileilani

Do you know what social media success looks like for your business?

Like most areas of marketing, results vary based on industry, target audience, and the ability to create content that attracts customers.

Rival IQ released its annual Social Media Benchmark Report for 2023, where brands in 14 industries compare their social media performance against other brands in the same competitive landscape.

The data set covers social media engagement on Facebook, Instagram, TikTok, and Twitter for 2,100 companies across numerous industries, ranging from food & beverage to tech.

The Facebook following of the companies analyzed ranges from 25,000 to 1,000,000, and all have over 5,000 followers on Instagram, TikTok, and Twitter.

The following are the top insights marketing professionals need to know.

Overall Engagement

Between 2019 and 2022, all industries saw a drop in overall engagement on Facebook, Instagram, and Twitter.

Facebook and Twitter only showed a slight change in engagement.

For Facebook, it dropped to 0.06% in 2021 and held at that rate the following year. For Twitter, it dropped 0.01% between 2019 and 2022.

Weekly posting over time for both networks has fallen from 5.8 to 5 posts per week on Facebook and 5.4 to 3.9 posts per week on Twitter.

On the other hand, Instagram saw a much larger drop, from 1.22% to 0.47%. But unlike Facebook and Twitter, weekly posting on this platform has increased from 4.3 to 4.5 posts per week.

Facebook Engagement

Across all industries, Facebook’s median engagement rate per post by followers is 0.06%.

The median number of weekly posts across all industries is 5.04, with media posting the most at 73.5 times weekly. This is likely because media companies publish more news content than brands in other industries.

Instagram Engagement

Across all industries, Instagram’s median engagement rate per post by followers is 0.47%.

The median number of weekly posts across all industries is 4.6, with sports teams posting the most at 15.6 times weekly.

TikTok Engagement

Across all industries, TikTok’s median engagement rate per post by followers is 5.69%.

The median number of videos per week across all industries is 1.75, with media posting the most at 4.2 times weekly.

Twitter Engagement

Across all industries, Twitter’s median engagement rate per post by followers is 0.035%.

The median number of weekly tweets across all industries is 3.91, with media tweeting the most at 70.2 times weekly.
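A metric like the "median engagement rate per post by followers" cited throughout this section could be computed as sketched below. The article does not give Rival IQ's exact formula, so this assumes engagement is the sum of likes, comments, and shares on a post, divided by the account's follower count; treat it as an illustration, not Rival IQ's methodology.

```python
from statistics import median


def engagement_rate_per_post(posts, followers):
    """Median per-post engagement rate as a percentage of followers.

    Assumes engagement = likes + comments + shares (a simplification).
    """
    rates = [
        (p["likes"] + p["comments"] + p["shares"]) / followers * 100
        for p in posts
    ]
    return median(rates)


posts = [
    {"likes": 120, "comments": 10, "shares": 5},
    {"likes": 80, "comments": 6, "shares": 2},
    {"likes": 200, "comments": 25, "shares": 15},
]
print(round(engagement_rate_per_post(posts, followers=50_000), 3))
```

Taking the median, rather than the mean, keeps a single viral post from skewing a brand's benchmark figure.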

Top Post Types

The best types of posts on each social network vary by industry.

Photo and video posts drive the most engagement on Facebook, while link and status posts have the least.

Screenshot from Rival IQ, March 2023

For Instagram, the data indicates that businesses should focus content creation efforts on Reels, carousels, and photos. Video posts not uploaded as Reels tend to have the least engagement.

On Twitter, posts with photos, videos, and statuses show the most engagement, while Tweets with links tend to have the least.

Top Hashtags

Hashtags vary significantly across industries and platforms. Holiday hashtags tend to generate the most engagement across all industries, while contests and giveaways have dropped in popularity compared to previous years.

Screenshot from Rival IQ, March 2023

Key Takeaways

The key takeaway is that each industry’s audience is slightly different. While food & beverage brands see the best engagement with Instagram Reels, higher education brands see the best engagement with Instagram carousels.

To get the most out of your social media strategy, find ways to transform your content into the format that gets the best engagement on each of the top social networks. This will ensure you reach the most potential customers with the content they enjoy consuming.

For 100+ pages of industry-specific insights, visit Rival IQ and download the 2023 Social Media Benchmark Report.


Featured Image: 13_Phunkod/Shutterstock

Meta Announces New Top-Level Product Group For Generative AI via @sejournal, @kristileilani

Mark Zuckerberg, CEO of Meta Platforms, announced the creation of a top-level product group that will focus on turbocharging Meta products with generative AI.

AI teams from across the company will come together to focus on ways to implement generative AI for a more delightful experience.

Generative AI Top-Level Product Group

Meta’s new AI top-level product group will develop AI personas that can help Meta product users in various ways. Examples given by Zuckerberg include the following.

  • AI chat experiences in WhatsApp and Messenger.
  • AI image filters and ad formats on Instagram.
  • AI video and multi-modal experiences.

In 2022, Meta AI introduced Make-A-Video. This AI system allows users to generate videos from a text prompt. You can read the research paper for more and sign up to receive notifications about future tool releases.

These new features will give users a more personalized experience and new ways to express themselves. This will allow creators to produce better content faster to earn more income through Facebook and Instagram monetization programs.

They will also allow Meta to compete with social media platforms that are already incorporating AI.

Social Media’s AI Features Race

Snapchat released its AI persona, My AI, that allows users to prompt it for recipe suggestions, gift ideas, and content inspiration. It runs off of OpenAI’s ChatGPT.

Although the feature is currently limited to Snapchat+ subscribers, it will be available to all users after an initial testing period.

TikTok already offers AI filters that leave users stunned, as well as concerned, over the dangerously realistic output and the emotional effects of that kind of hyper-realism.

Instead of simply transforming their face into an animal or adding glowing layers, users can see themselves as they were as a teenager or get a preview of what they would look like with a glamorous makeover.

Neal Mohan, YouTube’s new CEO, published a letter about the priorities for 2023. In addition to new monetization opportunities, he notes that with the capabilities of generative AI, creators will be able to level up their storytelling and production value.

LLaMA For Researchers From Meta AI

Zuckerberg noted in his Facebook post that much foundational work is required before Meta can bring futuristic experiences to the Metaverse without unintended consequences.

In related news, Meta announced the release of LLaMA – Large Language Model Meta AI in an effort to democratize access for researchers. It will be available for noncommercial licensing, specifically for research use cases.

Most importantly, LLaMA won’t require researchers to have the large number of resources typically needed to train and run large language models. Researchers can choose from 7B, 13B, 33B, and 65B parameter models, the smallest of which is trained on one trillion tokens.

This allows researchers to test new approaches to limiting or eliminating the risk of bias, hallucinations, and toxic responses that users have been exposed to when interacting with an AI.
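As a rough illustration of why the smaller variants lower the resource bar, the weights alone scale linearly with parameter count. The sketch below is a hypothetical back-of-the-envelope estimate assuming 2 bytes per parameter (fp16 weights), and it ignores activations, optimizer state, and KV-cache memory, which add substantially more in practice.

```python
def weights_gib(params_billions, bytes_per_param=2):
    """Approximate size of a model's weights in GiB (fp16 assumed)."""
    return params_billions * 1e9 * bytes_per_param / 2**30


for size in (7, 13, 33, 65):
    print(f"{size}B params ~ {weights_gib(size):.1f} GiB of fp16 weights")
```

By this estimate, the 7B model's weights fit on a single consumer-class GPU, while the 65B model requires multi-GPU or heavily quantized setups.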

They also invited everyone in the AI community – academic researchers, civil society, policymakers, and industry – to develop guidelines for responsible AI and large language models to ensure a better future.


Featured Image: Sergei Elagin/Shutterstock