Google Ads Tutorial: How to Leverage AI Video Enhancements via @sejournal, @brookeosmundson

Earlier this year, Google introduced new AI features into its Google Ads platform to help streamline work for many advertisers.

One of those new features comes in the form of AI video enhancements.

This is no surprise since video makes up over 65% of all internet traffic.

Read on to learn how AI video enhancements work and how they can help streamline your PPC campaign optimization.

How do AI Video Enhancements Work?

In Google Ads, the AI video enhancement tool uses smart automation features to enhance your videos in a variety of ways.

It’s important to note that video enhancements are turned on by default but can easily be turned off at any time.

The feature can be found within your Google Ads campaign settings.

If you’re creating a new Google Ads campaign, this setting will only be available if you choose “Sales” as the goal and “Video” as the campaign type.

From there, go down to “Additional settings” to find the AI video enhancements settings:

New AI video enhancements tool for video campaigns.

Types of Enhancements Available

Google’s new AI video enhancements give marketers two ways to optimize existing video assets.

#1: Additional video sizes and ratios

The first enhancement type can take existing horizontal videos and create additional versions for vertical and square ratios for optimal viewing.

The new ratio sizes that Google can create include:

  • 1:1
  • 4:5
  • 9:16

This feature can make videos feel more native to users viewing on mobile devices and create a better user experience.
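To illustrate what these ratio conversions involve, here's a minimal sketch of the underlying math — computing the largest centered crop of a horizontal 16:9 frame for each target ratio. This is not Google's actual implementation, just an illustration:

```python
# Hypothetical sketch: center-crop dimensions for converting a horizontal
# (16:9) frame to the vertical and square ratios Google can generate.
# This is NOT Google's implementation -- only an illustration of the math.

def center_crop(width, height, target_w, target_h):
    """Return (crop_width, crop_height) for the largest centered crop
    of a width x height frame that matches the target_w:target_h ratio."""
    target = target_w / target_h
    source = width / height
    if source > target:
        # Source is wider than the target: keep full height, trim the sides.
        return round(height * target), height
    # Source is narrower than the target: keep full width, trim top/bottom.
    return width, round(width / target)

# A 1920x1080 (16:9) horizontal video cropped to each new ratio:
for ratio in [(1, 1), (4, 5), (9, 16)]:
    w, h = center_crop(1920, 1080, *ratio)
    print(f"{ratio[0]}:{ratio[1]} -> {w}x{h}")
```

In practice the tool would then scale each crop to the ad spec's pixel dimensions, but the ratio math above is the core of the conversion.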

#2: Get shorter versions of video ads

Say you’ve got a stellar video asset, but it may be too long to keep a user’s attention.

This new video enhancement uses Google AI to select key moments in the existing video to create shorter versions while keeping the original video message and appearance.

Per Google, these shorter video versions will automatically start running if they pass a quality review.

What are the benefits of using AI Video Enhancements?

As we come into Q4, time is of the essence for every marketer.

Teams are being asked to do more with less, all while producing optimal PPC campaign results.

If you or your team are strapped for time or have a heavy workload, testing the Google AI video enhancements could be a key helper for your creative assets.

Some key benefits of testing out video enhancements include:

  • Reducing workload and budget
  • Taking the heavy lifting out of manually optimizing creatives
  • Helping boost campaign effectiveness
  • Potentially improving conversions

In Summary

With the ability to automatically generate different video sizes and ratios and create shorter versions of longer video ads, these tools are designed to save time and reduce the manual effort of video editing.

As marketers head into Q4 and face increasing demands, utilizing Google’s AI video enhancements can help lighten workloads, improve campaign effectiveness, and potentially increase conversions while keeping within budget constraints.


Featured image: monticello/Shutterstock

Snapchat Marketing: An In-Depth Guide For Businesses via @sejournal, @annabellenyst

Social media marketing is all about being where your audience is.

If your target audience is active on a platform, then you should be, too. And if you’re looking to speak to a younger demographic, Snapchat can be a powerful marketing tool for your business.

Snapchat is the fifth largest social media network in the world – but it’s one of the most misunderstood (and underestimated) by marketers.

So, if you’re concerned about missing the boat on this platform, you’re not alone. You’re also not out of touch – you just need a comprehensive guide to get started. And you’re in the right place.

In this updated guide to Snapchat marketing, we’ll provide you with a deep dive into the multimedia messaging platform, explain who’s using it, and give you the strategies you need to add it effectively to your marketing mix.

Why Use Snapchat For Marketing? (Is It Still Relevant?)

Successful marketers focus on grasping every opportunity to reach potential customers – and Snapchat continues to offer unique opportunities.

As of 2024, Snapchat boasts over 406 million active daily users, with more than 80% of them being 34 or younger. The platform reaches 90% of the 13-24-year-old population worldwide and 75% of 13-34-year-olds in over 25 countries.

This makes it an ideal platform for targeting Millennials, Gen Z, and, increasingly, Gen Alpha.

While it might not have the massive user base of Facebook or YouTube, it drives impressive engagement. According to recent data, the average Snapchat user opens the app over 40 times per day and spends over 30 minutes daily on the platform – which means people interact with their social circles on Snapchat more than on any other social network.

Yet, when it comes to marketing, Snapchat is a relatively untapped territory. While every brand seems to have a Facebook and Instagram presence these days, comparatively few have a foothold on Snapchat. And that just means more opportunity for your brand.

The boon of Snapchat is how it’s optimized for authenticity. Given the ephemeral nature of the app and its more unfiltered style of content, brands can use the platform to connect with their audience in ways that feel genuine and immediate – which is more valuable than ever.

And, as if all of this wasn’t enough, Snapchat has established itself as a competitive force in the social commerce market. With its augmented reality (AR) shopping features, you’re able to build immersive shopping experiences for users through the app – something most other platforms can’t match.

By now, it should be crystal clear why Snapchat could be a strong addition to your marketing strategy. So, how do you get started? Let’s break it down.

How Does Snapchat Work? A Brief Overview

If your social media experience is primarily with platforms like Facebook, Instagram, or X (Twitter), Snapchat’s interface may initially feel like a whole new world.

Snapchat’s design is unique – it’s built for spontaneity, exclusivity, and in-the-moment content creation. When you open the app, it goes directly to the camera, making it super easy for you to capture and share videos and photos (called “Snaps”) in just a few seconds.

You can then edit these Snaps using a variety of filters, stickers, and AR Lenses before sending them directly to your chosen friends or adding to your “Story,” which remains viewable for 24 hours. There’s also a newer Spotlight feature, similar to TikTok, for sharing short-form videos with wider audiences.

The app offers a host of other features, including a Snap Map, an AI-powered chatbot, and disappearing direct messages. Long story short: There are a bunch of innovative and creative ways for brands to engage with audiences on Snapchat.

What Brands Are Best Suited To Snapchat Marketing?

Let me be clear: Snapchat isn’t for every brand. There’s a reason why it’s less frequented than some other social media platforms. That said, if your target audience includes younger consumers, it’s absolutely worth considering.

Brands that are best suited to Snapchat are those that present a youthful image and tone and prioritize authenticity, “realness,” and creativity. If your brand image is highly professional or you have red tape around your marketing efforts, you should likely look elsewhere.

Lifestyle brands, fashion labels, beauty products, entertainment companies, and tech startups – these are the kinds of companies that typically see success on the app.

But really, the opportunity exists for any type of brand that is interested in using Snapchat’s tools to create immersive, engaging content that resonates with the platform’s users.

How To Get The Right “Tone Of Voice” For Snapchat

I mentioned tone above – and on Snapchat, tone of voice is a big deal. The platform itself is playful and casual, so you’re not likely to find success using an overly formal or professional tone.

Instead, your brand should focus on having a genuine conversation with users and infusing your content with humor.

Here are some tips for nailing the Snapchat tone of voice:

  • Keep it light: People go to Snapchat to be entertained. Leave your hard sales pitch at the door.
  • Prioritize authenticity: It’s a place to show the human side of your brand, whether it’s through user-generated content or behind-the-scenes Snaps.
  • Engage, engage, engage: Snapchat has a ton of interactive tools for engaging directly with users, like polls, Q&As, and more. Put them to use!

How To Create A Snapchat Strategy For Business

So, you want to create a successful Snapchat strategy. You can just start posting content sporadically, right? Wrong.

You need to start by understanding your brand’s goals and audience, and then determining the type of content that will best help you reach those people on Snapchat specifically. Here are some steps you can take to start building your Snapchat strategy:

  • Decide what you want to achieve on the platform. Maybe it’s brand awareness, community building, or increasing sales – once you know your goals, you can build your content approach around them.
  • Know your audience. As with any kind of marketing strategy, this is crucial.
  • Experiment and be creative. Try your hand with some of Snapchat’s different tools (like Bitmoji, AR Lenses, filters, etc.) to create content that resonates with your audience. Don’t just choose one type of content and settle – you can (and should) experiment with a variety of Snaps, Stories, and Spotlight videos.
  • Be consistent. All great marketers know that consistency is key – and it’s the same story on Snapchat.
  • Keep a good balance. Of course, you want to sell your brand to users, but don’t go all in on self-promotional content. Make sure you’re balancing organic, engaging storytelling with talking about yourself.
  • Learn (and follow) Snapchat best practices. This is a no-brainer. Spend time on the platform to find what works, and see how you can adopt it in your own strategy.

Types Of Content That Work Best On Snapchat

Snapchat is all about driving engagement. What does that look like in action? Here are some examples of content approaches that work particularly well on the platform.

1. Sneak Peeks & Teasers

Launching a new product or service? Snapchat is a great place to drive excitement by giving your audience teasers or sneak peeks at what’s to come.

You might think about dropping hints about the product, sharing a quick glimpse, or some other behind-the-scenes moments to encourage anticipation among your Snapchat followers.

Warner Bros., for example, has used teaser content to promote its upcoming movie releases.

Screenshot from Snapchat.com, August 2024

2. Behind-The-Scenes Content

Speaking of behind-the-scenes, this type of content is tailor-made for Snapchat.

Showing your audience what goes on behind the curtain at your brand is a clever way to create a sense of exclusivity and make people feel like insiders.

3. User-Generated Content (UGC)

You’ve heard about user-generated content – well, Snapchat is a place where UGC really thrives. Consider prompting your followers to create their own Snaps that feature your products or brand, and then share them along with a custom hashtag. Then, you can amplify the strongest ones in your own Stories.

UGC is proven to be a highly effective way to generate social proof, increase brand loyalty, and build a stronger social community.

Javy Coffee is one example of a brand that leveraged UGC by featuring real customer testimonials and stories about how they enjoy the coffee concentrate. This helped the company create relatable ads that resonated with its audience.

Screenshot from Snapchat.com, August 2024

4. Interactive Content

One thing that separates Snapchat from most other social media platforms is its interactivity. And brands have plenty of opportunities to get in on the action!

Try devising interactive moments for your followers, whether it’s a simple poll, a challenge, or a unique AR Lens. These allow users to really get involved and have a fun experience with your brand – and can lead to new UGC for you, as well.

For example, e.l.f. Cosmetics used AR Lenses combined with Bitmoji to allow users to virtually try on makeup, creating a highly interactive experience for its audience.

Screenshot from Snapchat.com, August 2024

5. Exclusive Offers

Want to really impress your Snapchat followers? Reward them. From promo codes to exclusive discounts or early bird access to new products, there is no shortage of ways to treat your audience.

It’s great for them but beneficial for you, too. It gives people a reason to keep engaging with your content and following your brand.

Advertising On Snapchat

While organic content is at the heart of Snapchat, the platform also offers some unique and powerful paid advertising options. The company also rolled out a new ad platform update in August 2024 that provides enhanced analytics, improved targeting, and more.

1. Snapchat Ads

Served to users in between user-generated content, these are full-screen, vertical video ads. They’re not dissimilar to Instagram Stories and can include interactive elements, such as having a user swipe up to visit a website.

The Salvation Army’s Snapchat Ads featured videos depicting real-life stories of individuals impacted by its services, emphasizing themes of hope and transformation. These ads connected emotionally with viewers and included a swipe-up option to learn more or donate, making the content both impactful and actionable.

Screenshot from Snapchat.com, August 2024

2. Sponsored AR Lenses

One of Snapchat’s unique offerings is its custom AR Lenses, which brands can create for users to experience.

You can create Lenses that allow users to virtually “try on” products, place people in a humorous visual setting, or even have them play games. At times, they can even respond to users’ movements or the environment around them.

These can be hyper-engaging and drive a lot of interactions – like Volkswagen did when it used an AR Lens to enable people to experience its ID.3 electric vehicle virtually. Using the Lens, people could place a virtual model of the car in their environment, explore its features, and even change the color.

Screenshot from Snapchat.com, August 2024

3. Filters

These are static overlays that you can apply to your Snaps once they’re created – and brands are able to design their own to delight Snapchat users. These are typically non-interactive but are fun visual enhancements that enable people to add some creative flair to their content.

Post Consumer Brands created its own custom Snapchat Filter to promote its Honeycomb cereal – and it was very sweet!

Screenshot from Snapchat.com, August 2024

4. Dynamic Ads

Snapchat’s Dynamic Ads enable brands to automatically create and deliver personalized ads to users based on their behavior and interactions with your company. For example, if a user visits your website and looks at a specific hat, Snapchat might serve them an ad for that product.

If you work for an ecommerce company, these might be particularly interesting to you, as you can automate ad creation based on your product catalog – so you don’t need to lift a finger.

Fashion brand Free People has used Dynamic Ads to automate personalized ads for users, ensuring that those who viewed specific items on its website were later served ads featuring those exact products on Snapchat. It resulted in a 396% increase in demand.

Screenshot from Snapchat.com, August 2024

5. Commercials

These are non-skippable Snapchat video ads that usually appear within the platform’s premium content, like Snap Originals.

They’re short – three to six seconds for a Standard Commercial and up to 180 seconds for an Extended Play Commercial (though users can skip after the first six seconds) – and are optimized for high visibility.

They typically focus more on storytelling than some of the other ads on the network.

Let’s Get Snapping!

Yes, Snapchat is still an effective marketing platform for reaching younger audiences – but you must be mindful about your strategy and approach.

With its unique blend of high engagement, unique creative tools, and loyal audience, Snapchat offers brands a variety of ways to connect with their target consumers.

Hopefully, this guide has given you the insights and inspiration you need to build a successful Snapchat marketing strategy in 2024.

Now, it’s time to put these ideas into action and start Snapping your way to success.

Featured Image: Andrey_Popov/Shutterstock

B2B Dev Buys D2C Brand, Part 2: Holiday Prep

Lori McDonald is a pioneering B2B ecommerce developer, having founded Brilliance Business Solutions, an agency, in 1998. In February she acquired Norsland Lefse, a direct-to-consumer food manufacturer.

Now revving up Norsland with new tools and strategies, she has agreed to share the journey with us. In July, she discussed her rationale and goals for purchasing the business. In this conversation, she addressed customer feedback, holiday sales preparation, and more.

The entire audio of our conversation is below. The transcript is edited for length and clarity.

Practical Ecommerce: In February, you acquired a D2C food manufacturer. Give us a rundown of the first seven months.

Lori McDonald: Yes, we acquired Norsland Lefse in Rushford, Minnesota. The company manufactures lefse, a traditional Norwegian flatbread similar to a potato-based tortilla, and sells other Scandinavian foods and gifts. It’s been an exciting year, and I’ve learned so much.

We’ve migrated to BigCommerce from Wix, and that process has gone well. We no longer sell certain products on Amazon or our own site because they weren’t making enough money. I’ve learned it’s crucial to have systems in place to track profitability on every item.
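A minimal sketch of the kind of per-item profitability check McDonald describes can make the point concrete. All prices, costs, and fee rates below are made up for illustration:

```python
# Hypothetical per-item profitability check, illustrating why the same
# product can be profitable on a direct site but not on a marketplace.
# All figures and fee rates here are invented for illustration.

def item_profit(price, cogs, shipping, channel_fee_pct):
    """Net profit per unit after cost of goods, shipping, and channel fees."""
    return price - cogs - shipping - price * channel_fee_pct

# Example: the same item on a direct site (low payment-processing fee)
# versus a marketplace charging a 15% referral fee.
direct = item_profit(price=24.99, cogs=9.00, shipping=12.50, channel_fee_pct=0.03)
marketplace = item_profit(price=24.99, cogs=9.00, shipping=12.50, channel_fee_pct=0.15)
print(round(direct, 2), round(marketplace, 2))  # the marketplace copy is negative
```

Running this kind of check on every SKU and channel is what reveals which listings quietly lose money.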

Sales in August from our own site were more than double last year. A top goal of the acquisition is to grow direct revenue because we have higher margins there.

Our advertising is increasingly efficient. We advertise on Google and Meta (for Facebook and Instagram). We’re seeing great responses to Meta campaigns especially.

Our email campaigns with Klaviyo are going well too. In addition to sales promotions, we have email campaigns that invite customers to return and review products. It has helped us to collect some great reviews on our products and provide feedback for improvements. For example, some reviewers had experienced our lefse flatbread sticking together. So we started packaging the lefse with wax paper sheets between them.

PEC: Do you manage ad campaigns in-house?

McDonald: We’re working with OX Optimal, a creative agency. They’ve designed and tested ads. The best performers include photos of the lefse itself, like when coming off our manufacturing floor, to see how thin it is and what it looks like.

Early on we brought in a photographer who provided some terrific images that we’ve used for ad creative and on our website.

PEC: How do you manage inventory and profitability on Amazon?

McDonald: We haven’t integrated Amazon with our BigCommerce backend. We’re tracking our inventory and profitability in Excel and updating the item quantities on BigCommerce and Amazon. It’s a manual process. We’re looking at automated solutions, such as Feedonomics, owned by BigCommerce.
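The manual Excel process McDonald describes – reconciling one stock count across several sales channels – can be sketched in a few lines. This is a hypothetical illustration with made-up channel names and stock data, not an actual BigCommerce or Amazon integration:

```python
# Hypothetical sketch of multichannel inventory reconciliation -- the kind
# of manual spreadsheet process described above that tools like Feedonomics
# automate. Channel names and stock figures are invented for illustration.

def allocate_stock(on_hand, reserved):
    """Quantity safe to publish on a channel after holding back a reserve."""
    return max(on_hand - reserved, 0)

def channel_updates(inventory, channels, reserve=5):
    """Build per-channel quantity updates for every SKU.

    inventory: {sku: units_on_hand}
    channels:  list of channel names (e.g. own site, marketplace)
    Returns {channel: {sku: quantity_to_publish}}.
    """
    return {
        channel: {sku: allocate_stock(qty, reserve) for sku, qty in inventory.items()}
        for channel in channels
    }

inventory = {"LEFSE-12PK": 40, "GIFT-BOX": 3}
updates = channel_updates(inventory, ["own_site", "marketplace"])
print(updates["own_site"])  # {'LEFSE-12PK': 35, 'GIFT-BOX': 0}
```

The reserve guards against overselling when two channels share one stock pool; an automated integration would push each channel's numbers via that platform's API instead of a manual update.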

There are different ways to manage multichannel selling. We’re looking at the best options for our situation. But our priority now, in September, is ensuring we’re ready for holiday sales.

PEC: It will be your first busy season!

McDonald: Yes. It’s critical we make enough lefse for that period. We will start producing it in early September. We can freeze and store it for up to a year.

Sales in November and December have historically been 10 times higher than the rest of the year. So I anticipate being really busy. We’re just trying to ensure that we have the staff in place to make enough lefse and that we’re efficient in our process.

We’re improving our product descriptions so folks can understand why they should buy our lefse. We’re making improvements on our BigCommerce site, such as including categories in site search, moving out-of-stock items to the bottom of the page, and implementing better analytics.

We launched an exit survey using Hotjar that gives us feedback on the user experience and helps us understand why visitors leave our site. That’s provided us with some good information.

It’s so important to listen to customers. A frequent feedback item is that our shipping costs are too high. We’re now looking at better communication — shipping perishable goods is expensive. We’re also reassessing our shipping charges for accuracy. We offer free shipping for purchases of $200 or more. We ship via FedEx 2Day. We recently implemented ShipperHQ to help manage it all. We use a local carrier for nearby orders and XPS Ship to print all labels. It integrates with BigCommerce and Amazon.

We won’t be able to do everything I dream of by this holiday season. Our goal is to implement what we can, learn from it, and improve next year.

PEC: You’re also the founder of Brilliance Business Solutions, a busy B2B ecommerce agency. How has the lefse acquisition impacted that company?

McDonald: It’s largely positive. We’re all learning a lot and incorporating those insights into how we help our development customers. Based on our lefse experience, we could develop specialty themes and features to help those customers, for example. The analytics capabilities we build for Norsland Lefse could help them too.

Norsland Lefse has helped me become more focused while creating opportunities at Brilliance for our fabulous team members. Many have stepped into doing some of the things I used to do.

PEC: How can folks reach out?

McDonald: Brilliance Business Solutions is at BrillianceWeb.com. Norsland is at NorslandLefse.com.

New Ecommerce Tools: September 19, 2024

This week’s installment of new products and services for ecommerce merchants includes influencer marketing, virtual try-ons, digital payments, ecommerce platforms, financing, and global and last-mile logistics.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants: September 19, 2024

PayPal partners with Shopify Payments in the U.S. PayPal has expanded its partnership with Shopify in the U.S. PayPal will become an additional online credit and debit card processor for Shopify Payments through PayPal Complete Payments, an offering for marketplaces and platforms. PayPal wallet transactions will integrate into Shopify Payments in the U.S., which will streamline managing orders, payouts, reporting, and chargeback flows. The enhancement creates a single, unified experience for PayPal and Shopify merchants.

Web page for Shopify Payments

Shopify Payments

TD to launch an ecommerce platform for Canadian small businesses. TD Bank Group has launched TD eCommerce Solutions, a customizable ecommerce platform for Canadian small businesses. At launch, TD eCommerce Solutions, powered by BigCommerce, will be available to all new and existing TD Business Banking customers, offering accounting features, search engine optimization, fraud detection, multiple payment acceptance offerings, and more. TD customers can use same-day deposits and streamlined billing, allowing businesses to manage payments to multiple parties.

Alibaba logistics arm Cainiao launches next-day delivery in Europe. Cainiao, a global ecommerce logistics provider owned by Alibaba, has expanded its local express delivery services, now operational in 10 countries and regions worldwide, in addition to its presence in China. Cainiao Europe can now offer next-day delivery for €2 (roughly USD 2.25) within the home country, along with three-day delivery options ranging from €3 to €5 for other European countries. Cainiao also announced the development of a second batch of global ecommerce logistics hubs, called e-hubs.

Amazon ads to appear in Rufus shopping assistant. To help shoppers discover more products, Amazon may insert ads in Rufus, its generative AI-powered shopping assistant. According to Amazon, Rufus can generate accompanying text based on the context of the conversation. Amazon’s campaign reports won’t include Rufus metrics. Amazon’s tests with Rufus ads are running in the U.S. only.

Web page for Rufus

Amazon’s Rufus

Humanz launches its influencer marketing platform in the U.S. Humanz, a collaborative AI-powered marketing platform for creators and brands, is entering the U.S. market alongside a partnership with Valeria Lipovetsky, a global influencer in beauty and fashion with more than 6 million followers. By providing a transparent and data-driven platform that gives real-time insight into a campaign’s performance, Humanz aims to empower creators to make informed decisions and optimize their content.

Fero launches embedded personalization ecommerce tool in U.K. Dutch fintech Fero has launched a personalization ecommerce tool, Checkout Companion, for Shopify-based businesses in the U.K. Fero says it can provide online shops with 430 data points to understand and act on why shoppers abandon purchases, including details such as when a customer lands on a site, the number of searches, and affiliate marketing referral sources.

Walmart Marketplace and Parafin collaborate for seller financing. Walmart Marketplace and Parafin have announced an initiative to provide sellers with financing through the Walmart Marketplace Capital program. Eligible sellers can access funding through Parafin, an approved Walmart Marketplace solution provider, to grow their businesses and prepare for the upcoming holiday retail season. Parafin offers financing services and employs machine learning to analyze sales data from millions of businesses to determine eligibility and terms.

Home page for Parafin

Parafin

Bold Commerce announces a collaboration with PayPal. Bold Commerce, a customized checkout provider, has announced a partnership with PayPal to integrate Fastlane by PayPal with Adobe Commerce. This integration aims to streamline the checkout process for U.S.-based retailers using Adobe Commerce, allowing them to deploy Fastlane by PayPal without switching platforms. Merchants can now add Fastlane by PayPal to their Adobe Commerce setups through Bold’s Magento extension.

Amazon launches “LastMileTram” in Germany for zero-tailpipe emissions. Amazon is working with partners to use a package transport tram (a vehicle running on rails) to deliver packages to the center of Frankfurt, from where electric cargo bikes complete the delivery, creating a zero-tailpipe emission journey from the delivery station to the customer. The pilot project is part of research from Frankfurt University of Applied Sciences, together with Verkehrsgesellschaft Frankfurt am Main and Intermodal City Injections, a collaboration between Amazon Transportation Services and Amazon Logistics teams.

DNA Payments launches Apple Pay Express Checkout for ecommerce. DNA Payments, an integrated omnichannel payment provider based in the U.K., has introduced Apple Pay Express Checkout, giving ecommerce merchants a new way to accept customer payments. Merchants can offer Apple Pay at checkout or via the Express option through the Safari browser. DNA Payments serves merchants in the U.K., helping them accept transactions through point-of-sale devices, via websites and payment links, or over the phone.

Google adds dresses to its AI shopping tool. Google is expanding its virtual try-on to dresses. Shoppers can now visualize dresses from hundreds of brands across Google’s Shopping Graph. Shoppers can search for dresses on Google and click any style that includes a “try-on” badge. From there, they can see what that garment looks like on a diverse set of real models, ranging from XXS to XXXL. Then, they pick a dress and click to the retailer’s site to buy it.

Screenshot from Google of virtual dress try-ons.

Google’s virtual try-on for dresses.

Why we need an AI safety hotline

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leaves a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. Evaluations also suffer from limited scope—current tests are unlikely to reliably uncover the full set of risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools.

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediate, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and with investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government. 

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

Why OpenAI’s new model is such a big deal

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

Last weekend, I got married at a summer camp, and during the day our guests competed in a series of games inspired by the show Survivor that my now-wife and I orchestrated. When we were planning the games in August, we wanted one station to be a memory challenge, where our friends and family would have to memorize part of a poem and then relay it to their teammates so they could re-create it with a set of wooden tiles. 

I thought OpenAI’s GPT-4o, its leading model at the time, would be perfectly suited to help. I asked it to create a short wedding-themed poem, with the constraint that each letter could only appear a certain number of times so we could make sure teams would be able to reproduce it with the provided set of tiles. GPT-4o failed miserably. The model repeatedly insisted that its poem worked within the constraints, even though it didn’t. It would correctly count the letters only after the fact, while continuing to deliver poems that didn’t fit the prompt. Without the time to meticulously craft the verses by hand, we ditched the poem idea and instead challenged guests to memorize a series of shapes made from colored tiles. (That ended up being a total hit with our friends and family, who also competed in dodgeball, egg tosses, and capture the flag.)    
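For what it’s worth, the constraint itself is trivial to verify programmatically. Here’s a minimal sketch of such a check (the tile inventory and sample phrases below are hypothetical, not the ones we used):

```python
from collections import Counter

def fits_tiles(poem: str, tiles: Counter) -> bool:
    """Return True if the poem's letters can all be spelled with the tile set."""
    needed = Counter(c for c in poem.upper() if c.isalpha())
    # Every required letter must be available in sufficient quantity.
    return all(tiles[letter] >= count for letter, count in needed.items())

# Hypothetical tile inventory: four tiles of each letter A-Z.
tiles = Counter({chr(c): 4 for c in range(ord("A"), ord("Z") + 1)})

print(fits_tiles("Two hearts, one home", tiles))              # fits the tiles
print(fits_tiles("Letter after letter after letter", tiles))  # needs too many Ts and Es
```

A check like this is exactly what the model kept skipping: it asserted the constraint held instead of counting.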

However, last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water for this type of task.

Unlike previous models that are well suited for language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required for advanced mathematics, coding, or other STEM-based questions. It uses a “chain of thought” technique, according to OpenAI. “It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working,” the company wrote in a blog post on its website.

OpenAI’s tests point to resounding success. The model ranks in the 89th percentile on questions from the competitive coding organization Codeforces and would be among the top 500 high school students in the USA Math Olympiad, which covers geometry, number theory, and other math topics. The model is also trained to answer PhD-level questions in subjects ranging from astrophysics to organic chemistry. 

In math olympiad questions, the new model is 83.3% accurate, versus 13.4% for GPT-4o. In the PhD-level questions, it averaged 78% accuracy, compared with 69.7% from human experts and 56.1% from GPT-4o. (In light of these accomplishments, it’s unsurprising the new model was pretty good at writing a poem for our nuptial games, though still not perfect; it used more Ts and Ss than instructed to.)

So why does this matter? The bulk of LLM progress until now has been language-driven, resulting in chatbots or voice assistants that can interpret, analyze, and generate words. But in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI’s o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. 

It’s a big deal because it brings “chain-of-thought” reasoning in an AI model to a mass audience, says Matt Welsh, an AI researcher and founder of the LLM startup Fixie. 

“The reasoning abilities are directly in the model, rather than one having to use separate tools to achieve similar results. My expectation is that it will raise the bar for what people expect AI models to be able to do,” Welsh says.

That said, it’s best to take OpenAI’s comparisons to “human-level skills” with a grain of salt, says Yves-Alexandre de Montjoye, an associate professor in math and computer science at Imperial College London. It’s very hard to meaningfully compare how LLMs and people go about tasks such as solving math problems from scratch.

Also, AI researchers say that measuring how well a model like o1 can “reason” is harder than it sounds. If it answers a given question correctly, is that because it successfully reasoned its way to the logical answer? Or was it aided by a sufficient starting point of knowledge built into the model? The model “still falls short when it comes to open-ended reasoning,” Google AI researcher François Chollet wrote on X.

Finally, there’s the price. This reasoning-heavy model doesn’t come cheap. Though access to some versions of the model is included in premium OpenAI subscriptions, developers using o1 through the API will pay three times as much as they pay for GPT-4o—$15 per 1 million input tokens in o1, versus $5 for GPT-4o. The new model also won’t be most users’ first pick for more language-heavy tasks, where GPT-4o continues to be the better option, according to OpenAI’s user surveys. 
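The input-token arithmetic is simple enough to sketch, using the per-million rates cited above (output-token pricing, not covered here, differs):

```python
def input_cost_usd(input_tokens: int) -> dict:
    """Input-token cost at the cited rates: $15/1M tokens for o1 vs. $5/1M for GPT-4o."""
    rates = {"o1": 15.0, "gpt-4o": 5.0}  # USD per 1 million input tokens
    return {model: input_tokens / 1_000_000 * rate for model, rate in rates.items()}

costs = input_cost_usd(2_000_000)  # e.g., a workload of 2M input tokens
print(costs)  # {'o1': 30.0, 'gpt-4o': 10.0}
```

At any volume, the ratio stays 3 to 1, which is why heavy users may reserve o1 for reasoning tasks.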

What will it unlock? We won’t know until researchers and labs have the access, time, and budget to tinker with the new model and find its limits. But it’s surely a sign that the race for models that can outreason humans has begun. 

Now read the rest of The Algorithm


Deeper learning

Chatbots can persuade people to stop believing in conspiracy theories

Researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity. 

Why this matters: The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoc fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society. “They show that with the help of large language models, we can—I wouldn’t say solve it, but we can at least mitigate this problem,” he says. “It points out a way to make society better.” Read more from Rhiannon Williams here.

Bits and bytes

Google’s new tool lets large language models fact-check their responses

Called DataGemma, it uses two methods to help LLMs check their responses against reliable data and cite their sources more transparently to users. (MIT Technology Review)

Meet the radio-obsessed civilian shaping Ukraine’s drone defense 

Since Russia’s invasion, Serhii “Flash” Beskrestnov has become an influential, if sometimes controversial, force—sharing expert advice and intel on the ever-evolving technology that’s taken over the skies. His work may determine the future of Ukraine, and wars far beyond it. (MIT Technology Review)

Tech companies have joined a White House commitment to prevent AI-generated sexual abuse imagery

The pledges, signed by firms like OpenAI, Anthropic, and Microsoft, aim to “curb the creation of image-based sexual abuse.” The companies promise to set limits on what models will generate and to remove nude images from training data sets where possible.  (Fortune)

OpenAI is now valued at $150 billion

The valuation stems from talks the company is currently engaged in to raise $6.5 billion. Given that OpenAI is becoming increasingly costly to operate, and could lose as much as $5 billion this year, it’s tricky to see how it all adds up. (The Information)

There are more than 120 AI bills in Congress right now

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

They’re pretty varied. One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in their training.  Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

Taken together, it can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at which bills are moving along to potentially become law. 

A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act was part of the Defense Authorization Act, which invested resources in AI research and provided support for public education and workforce training on AI.

And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. One focused on authorizing the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models. The other bills focused on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

“The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

“It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

“Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

“It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

“A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

“We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

“I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

Although none of the current slate of bills have resulted in laws yet, the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

“Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

Update: We added further comments from the Republican spokesperson.

The Download: Congress’s AI bills, and Snap’s new AR spectacles

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

There are more than 120 AI bills in Congress right now

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress. This flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. 

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at them all can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, we’ve created a tracker with all the AI bills circulating in various committees in Congress right now, to see if there’s anything we can learn from this legislative smorgasbord. Read the full story.

—Scott J Mulligan

Here’s what I made of Snap’s new augmented-reality Spectacles

Snap has announced a new version of its Spectacles: AR glasses that could finally deliver on the promises that devices like Magic Leap, or HoloLens, or even Google Glass, made many years ago.

Our editor-in-chief Mat Honan got to try them out a couple of weeks ago. He found they packed a pretty impressive punch layering visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world—if you don’t mind looking a little goofy, that is. Read Mat’s full thoughts here.

Google is funding an AI-powered satellite constellation that will spot wildfires faster

What’s happening: Early next year, Google and its partners plan to launch the first in a series of satellites that together would provide close-up, frequently refreshed images of wildfires around the world, offering data that could help firefighters battle blazes more rapidly, effectively, and safely.

Why it matters: The images and analysis will be provided free to fire agencies around the world, helping to improve understanding of where fires are, where they’re moving, and how hot they’re burning. The information could help agencies stamp out small fires before they turn into raging infernos, place limited firefighting resources where they’ll do the most good, and evacuate people along the safest paths. Read the full story.

—James Temple

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 California has passed three election deepfake laws
But only one will take effect in time for the presidential election in November. (NYT $)
+ The bills also protect actors from AI impersonation without their consent. (WP $)

2 How did thousands of Hezbollah pagers explode simultaneously?
The devices were probably intercepted by hackers during shipment. (WSJ $)
+ Here’s everything we know about the attack so far. (Vox)
+ Small lithium batteries alone don’t tend to cause this much damage. (404 Media)
+ Exploding comms devices are nothing new. (FT $)

3 Instagram has introduced new accounts specifically for teens
In response to increasing pressure over Meta’s minor protection policies. (BBC)
+ Parents will be given greater control over their activities. (The Guardian)
+ Here’s how to set up the new restricted accounts. (WP $)

4 Google has won its bid to overturn a €1.5 billion fine from the EU
But the court said it stands by the majority of the previous findings. (CNBC)
+ But the ruling can still be appealed in the Court of Justice. (Bloomberg $)
+ Meanwhile, Meta’s antitrust woes are escalating. (FT $)

5 SpaceX has been accused of breaking launch rules 
And the US Federal Aviation Administration wants to slap it with a hefty fine. (WP $)

6 Electric cars now outnumber petrol cars in Norway
It’s particularly impressive given the country’s history as an oil producer. (The Guardian)
+ Why full EVs, not hybrids, are the future. (Economist $)
+ Three frequently asked questions about EVs, answered. (MIT Technology Review)

7 Our understanding of the universe is still up in the air
What looked like a breakthrough in physics actually might not be at all. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)

8 Tech’s middle managers are having a tough time
They’re losing their jobs left, right and center. (Insider $)

9 YouTube astrology is booming in Pakistan
Amid economic and political turmoil, Pakistanis are seeking answers in the stars. (Rest of World)

10 Not everything bad is AI-generated
But what’s AI-generated is often bad. (NY Mag $)

Quote of the day

“I’d rather go back to school than work in an office again.”

—CJ Felli, a system development engineer for Amazon Web Services, is not happy about the company’s back-to-the-office directive, Quartz reports.

The big story

What’s next for the world’s fastest supercomputers

September 2023

When the Frontier supercomputer came online last year, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10¹⁸) floating point operations a second.

Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.

But speed itself isn’t the endgame. Researchers hope to pursue previously unanswerable questions about nature—and to design new technologies in areas from transportation to medicine. Read the full story.

—Sophia Chen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These Ocean Photographer of the Year winning images are simply stunning 🐋($)
+ Here’s where you’ll have the best chance of finding a fossilized shark tooth in the US.
+ Vans are back in style, as if they ever went out of it.
+ Potatoes are great every which way, but here’s how long to boil them for that perfect al dente bite.

AI-generated content doesn’t seem to have swayed recent European elections 

AI-generated falsehoods and deepfakes seem to have had no effect on election results in the UK, France, and the European Parliament this year, according to new research. 

Since the beginning of the generative-AI boom, there has been widespread fear that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Such worries were particularly heightened this year, when billions of people were expected to vote in over 70 countries. 

Those fears seem to have been unwarranted, says Sam Stockwell, the researcher at the Alan Turing Institute who conducted the study. He focused on three elections over a four-month period from May to August 2024, collecting data on public reports and news articles on AI misuse. Stockwell identified 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and only 11 cases in the EU and French elections combined, none of which appeared to definitively sway the results. The fake AI content was created by both domestic actors and groups linked to hostile countries such as Russia. 

These findings are in line with recent warnings from experts that the focus on election interference is distracting us from deeper and longer-lasting threats to democracy.   

AI-generated content seems to have been ineffective as a disinformation tool in most European elections this year so far. This, Stockwell says, is because most of the people who were exposed to the disinformation already believed its underlying message (for example, that levels of immigration to their country are too high). Stockwell’s analysis showed that people who were actively engaging with these deepfake messages by resharing and amplifying them had some affiliation or previously expressed views that aligned with the content. So the material was more likely to strengthen preexisting views than to influence undecided voters. 

Tried-and-tested election interference tactics, such as flooding comment sections with bots and exploiting influencers to spread falsehoods, remained far more effective. Bad actors mostly used generative AI to rewrite news articles with their own spin or to create more online content for disinformation purposes. 

“AI is not really providing much of an advantage for now, as existing, simpler methods of creating false or misleading information continue to be prevalent,” says Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, who was not involved in the research. 

However, it’s hard to draw firm conclusions about AI’s impact on elections at this stage, says Samuel Woolley, a disinformation expert at the University of Pittsburgh. That’s in part because we don’t have enough data.

“There are less obvious, less trackable, downstream impacts related to uses of these tools that alter civic engagement,” he adds.

Stockwell agrees: Early evidence from these elections suggests that AI-generated content could be more effective at harassing politicians and sowing confusion than at changing people’s opinions on a large scale. 

Politicians in the UK, such as former prime minister Rishi Sunak, were targeted by AI deepfakes that, for example, showed them promoting scams or admitting to financial corruption. Female candidates were also targeted with nonconsensual sexual deepfake content, intended to disparage and intimidate them. 

“There is, of course, a risk that in the long run, the more that political candidates are on the receiving end of online harassment, death threats, deepfake pornographic smears—that can have a real chilling effect on their willingness to, say, participate in future elections, but also obviously harm their well-being,” says Stockwell. 

Perhaps more worrying, Stockwell says, his research indicates that people are increasingly unable to discern the difference between authentic and AI-generated content in the election context. Politicians are also taking advantage of that. For example, political candidates in the European Parliament elections in France shared AI-generated content amplifying anti-immigration narratives without disclosing that it had been made with AI. 

“This covert engagement, combined with a lack of transparency, presents in my view a potentially greater risk to the integrity of political processes than the use of AI by the general population or so-called ‘bad actors,’” says Simon. 

Charts: Ecommerce Revenue Forecasts U.S., Global

The International Trade Administration (ITA), an agency of the U.S. Department of Commerce, projects global B2B ecommerce sales to reach $36.2 trillion by 2026, a 50% increase from 2023. The ITA’s mission is to promote trade and investment, strengthen the competitiveness of U.S. industry, and ensure fair trade and compliance with trade laws and agreements.

Gross merchandise value is total sales over a specified period, typically measured quarterly or yearly.

The ITA projects B2C ecommerce revenue to reach $5.5 trillion by 2027, a 14.4% compound annual growth rate from 2024. Although fashion and consumer electronics are the largest sectors, pharmaceuticals is the fastest-growing category.
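The base-year figures implied by these forecasts can be back-calculated from the growth rates. A minimal sketch in Python (the function names are illustrative, not from the ITA; the dollar figures are the ones quoted above, in trillions):

```python
def implied_base_from_growth(future_value, pct_increase):
    """Back out the starting value implied by a total percentage increase."""
    return future_value / (1 + pct_increase)

def implied_base_from_cagr(future_value, cagr, years):
    """Back out the starting value implied by a compound annual growth rate."""
    return future_value / (1 + cagr) ** years

# B2B: $36.2T by 2026 at a 50% increase from 2023
b2b_2023 = implied_base_from_growth(36.2, 0.50)   # ≈ $24.1 trillion in 2023

# B2C: $5.5T by 2027 at a 14.4% CAGR from 2024 (three compounding years)
b2c_2024 = implied_base_from_cagr(5.5, 0.144, 3)  # ≈ $3.67 trillion in 2024
```

This is the standard compound-growth identity, future = base × (1 + rate)^years, rearranged to solve for the base.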


Statista tracks the leading online shopping categories by revenue, both globally and within the United States. Electronics account for a substantial share of global ecommerce sales, with projected spending reaching $922.5 billion. Fashion and apparel follow, ranking second among the top online shopping categories.

Per Statista, the top ecommerce categories in the U.S. reflect global trends, with fashion emerging as the top revenue category, projected to generate $162.9 billion in 2024.