Why materials science is key to unlocking the next frontier of AI development

The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple’s M2 Ultra contains 134 billion transistors.

The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore’s Law, has paved a path from the emergence of personal computing and the internet to today’s AI revolution.

But this pace of innovation is not guaranteed, and the next frontier of technological advances—from the future of AI to new computing paradigms—will only happen if we think differently.

Atomic challenges

The modern microchip stretches the limits of both physics and credulity. Such is its atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of over 50 years of exponential scaling that has created ever faster, smaller transistors.

But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and keeping power consumption in check is becoming increasingly difficult. In parallel, AI is demanding ever more computing power. Data from Epoch AI indicates that the amount of compute needed to develop AI is quickly outstripping Moore’s Law, doubling every six months in the “deep learning era” since 2010.
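The gap between those two growth rates can be made concrete with a little arithmetic. The sketch below compounds each doubling period over a decade; the six-month figure is Epoch AI's, while the two-year doubling period for Moore's Law is the commonly cited assumption, not a figure from this article:

```python
# Rough comparison of growth rates: Moore's Law (transistor counts,
# assumed to double ~every 24 months) vs. AI training compute in the
# deep learning era (doubling ~every 6 months, per Epoch AI).

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative growth after `years` given a doubling period."""
    return 2 ** (years * 12 / doubling_months)

decade_moore = growth_factor(10, 24)   # 2^5  = 32x over a decade
decade_ai = growth_factor(10, 6)       # 2^20 = 1,048,576x over a decade

print(f"Moore's Law, 10 years: ~{decade_moore:,.0f}x")
print(f"AI compute, 10 years:  ~{decade_ai:,.0f}x")
```

At these rates, a decade of Moore's Law delivers a 32-fold increase while AI compute demand grows roughly a million-fold, which is the scale of the mismatch the article describes.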

These interlinked trends present challenges not just for the industry, but society as a whole. Without new semiconductor innovation, today’s AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.

Materials intelligence

At this inflection point, a complex, global ecosystem—from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck—is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer.

In fact, materials intelligence is present at almost every stage of the chip production process—whether in the chemical reactions that carve circuits at molecular scale (etching) or in adding incredibly thin layers to a wafer with atomic precision (deposition): a human hair is 25,000 times thicker than the layers in leading-edge nodes.
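A quick back-of-the-envelope check shows what that ratio implies. The hair diameter below (~70 micrometers) is an assumed typical value, not a figure from the article; only the 25,000x ratio is quoted:

```python
# What the 25,000x hair-to-layer ratio implies about layer thickness.
HAIR_DIAMETER_NM = 70_000   # ~70 micrometers in nanometers (assumed typical value)
RATIO = 25_000              # the article's quoted ratio

layer_thickness_nm = HAIR_DIAMETER_NM / RATIO
print(f"Implied layer thickness: ~{layer_thickness_nm:.1f} nm")  # ~2.8 nm
```

A layer a few nanometers thick is on the order of a dozen atoms, which is why the text speaks of atomic precision.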

Yes, materials provide a chip’s physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that have underpinned the industry’s rapid progress in recent decades.

For this reason, materials science is taking on a heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power. We are seeing novel chip architectures that embrace the third dimension, stacking layers to optimize surface-area usage while lowering energy consumption. The industry is also harnessing advanced packaging techniques, in which separate “chiplets” with varying functions are fused into a single, more efficient, more powerful chip, a process known as heterogeneous integration.

Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.

New frontiers

The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing.

Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. And even though practical, scalable quantum systems remain a long way off, their development depends on the discovery and application of state-of-the-art materials.

Similarly, emerging paradigms like neuromorphic computing, modelled on the human brain with architectures that mimic our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today’s von Neumann computers, which separate memory and processing.
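To make the idea concrete, the toy model below sketches a leaky integrate-and-fire neuron, the textbook building block behind many neuromorphic designs. It is an illustration of the principle only, not any particular chip's architecture:

```python
# A minimal leaky integrate-and-fire neuron: membrane potential leaks
# over time, integrates incoming current, and emits a spike (then
# resets) when it crosses a threshold. Event-driven behavior like this
# is what lets neuromorphic hardware sidestep the constant memory
# shuttling of von Neumann designs.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current    # leak, then integrate the input
        if v >= threshold:        # threshold crossed: fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [2, 5]
```

Note that the neuron only "computes" when input arrives, and memory (the membrane potential) lives in the same unit that processes, which is the property the paragraph above contrasts with von Neumann machines.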

Our biology consists of deeply complex, intertwined systems that evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, yet it also requires sleep and careful upkeep. The most exciting step will be using advanced compute—AI and quantum—to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances in human well-being.

Until then, the insatiable demand for more computing power to drive AI’s development poses difficult questions for an industry grappling with the fading of Moore’s Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI’s transformative potential in every area of our lives.

Materials are playing a hidden, but increasingly crucial role in keeping pace, producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow’s technology.

But materials science’s most important role is yet to come. Its true potential will be to take us—and AI—beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.

This content was produced by EMD Electronics. It was not written by MIT Technology Review’s editorial staff.

Charts: U.S. TikTok Shop BFCM Sales 2024

Adobe estimates U.S. shoppers spent $41.1 billion online in the five days from Thanksgiving 2024 through Cyber Monday. Shopify says global merchants on its platform received $11.5 billion in revenue from Black Friday through Cyber Monday.

The $137.14 million in sales on TikTok Shop for 2024 Black Friday ($85.9 million) and Cyber Monday ($51.24 million) may seem small, but the platform is little more than a year old, with growth rates far exceeding most retailers and marketplaces.

Charm.io is a U.S.-based data provider helping retailers, sales teams, and investors with in-depth industry analytics, including the TikTok Shop performance metrics in this recap.

According to Charm, from Black Friday 2024 through Cyber Monday, Tarte Cosmetics (makeup) emerged as the top-performing brand on U.S. TikTok Shop, generating an impressive $15.33 million in revenue. Goli Nutrition (supplements) followed with $9.17 million, and Halara (female apparel) secured the third spot with $7.28 million.

The top-performing products highlighted diverse consumer interests. The Flybird Vibration Plate Exercise Machine led with $1.17 million in revenue from 15,940 units sold, followed by the Rhino USA Retractable Ratchet Straps ($981,190, 21,000 units) and the ecozy Nugget Ice Maker ($930,650, 8,360 units).
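Those revenue and unit figures imply quite different average selling prices, which is easy to check in a few lines (taking the ratchet-straps figure as 21,000 units):

```python
# Average selling price implied by Charm's revenue and unit figures
# for the three top products quoted above.

products = {
    "Flybird Vibration Plate": (1_170_000, 15_940),
    "Rhino USA Ratchet Straps": (981_190, 21_000),
    "ecozy Nugget Ice Maker": (930_650, 8_360),
}

for name, (revenue, units) in products.items():
    print(f"{name}: ~${revenue / units:,.2f} per unit")
```

The spread, from roughly $47 straps to a roughly $111 ice maker, underlines how the leaderboard mixes low-ticket volume sellers with pricier appliances.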

U.S. TikTok Shop’s top categories from Black Friday through Cyber Monday underscored broad consumer appeal. “Beauty & Personal Care” led with $51.11 million in revenue, followed by “Womenswear” with $26.47 million, and “Sports & Outdoors” at $19.21 million.

Per Charm.io, livestreams were a powerful sales driver, with the 14-hour “Black Friday Knockout!” from Canvas Beauty Brand (hair and skin products) generating $2.10 million in revenue from 2.25 million viewers. The “Cyber Monday Mega Live” 10-hour event from Simply Mandys (cosmetics) stood out with $667,430 in sales and 1.49 million viewers.

New Ecommerce Tools: December 12, 2024

This week’s rundown of new products and services for merchants includes generative AI app builders, virtual commerce, product image tools, customer experience platforms, blockchain payment solutions, and drone deliveries.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Amazon introduces Nova foundation models for genAI apps. Amazon has announced Nova, a new series of foundation models available in Amazon Bedrock, its generative AI application builder. The Nova models lower costs and latency for almost any generative AI task, per Amazon, including analyzing complex documents and videos, understanding charts and diagrams, producing videos, and building sophisticated AI agents.

Web page for Amazon Nova Foundation Models

Seguno introduces Shopify Connect for Canva. Seguno, a provider of built-for-Shopify apps, has released Shopify Connect for Canva, an app to streamline content creation and allow users to move seamlessly between Canva and Shopify. Per Seguno, the app enhances product photos and brings consistent designs across storefronts and other marketing channels. The integration also streamlines file management, allowing media to flow in both directions.

Alibaba International launches Pic Copilot, an AI-powered ecommerce design tool. Alibaba International has announced the U.S. launch of Pic Copilot, an AI-driven design tool to help ecommerce businesses save on photography and design costs. Pic Copilot provides 12 AI design features for swapping image backgrounds, editing images and videos, generating ads, and more. The tool’s virtual try-on facilitates multiple skin tones and body types, featuring more than 160 models. Pic Copilot also allows users to upload personal images for product try-ons.

Adobe and Amazon Web Services partner to bring Adobe Experience Platform to brands. Adobe has announced an expanded partnership with Amazon Web Services to make Adobe Experience Platform available on AWS. The collaboration enables brands to strengthen customer relationships through highly personalized experiences, per Adobe. Applications powered by AEP — e.g., Real-Time CDP, Journey Optimizer, Customer Journey Analytics — will also be available on AWS. According to Adobe, brands can create and engage with high-value audiences, orchestrate personalized customer journeys, and analyze and optimize campaigns.

Web page of Adobe Experience Platform

Roblox begins closed beta of Shopify integration. Roblox, the gaming platform, has given select developers access to Shopify integration tools, including the creators of the Roblox experiences “Catalog Avatar Creator,” “Tower Heroes,” and “Creatures of Sonaria,” with more experiences expected to launch soon. The Shopify storefronts participating in the closed beta primarily sell merchandise tied to the in-game experiences rather than movie tickets or goods from external brands.

Amazon successfully tests delivery drones in Italy. Amazon has completed a test of delivery drones in Italy, the first country in Europe to receive the service. The test occurred in San Salvo, a town on the central Adriatic coast, with the new MK-30 drone, which uses an automated Amazon system that prompts drones to steer away from obstacles and avoid other aircraft.

Nuvei launches blockchain payments in Latin America. Nuvei, a Canada-based fintech company, has launched a blockchain-based payment service for merchants across Latin America. Through Nuvei’s partnerships with Rain, a vertically integrated issuing partner for global platforms; BitGo, a digital asset custodian and wallet solutions provider; and Visa, Nuvei is enabling businesses to use stablecoins, including USD Coin, for faster global settlement and reduced reliance on traditional payment rails. The new service enables payments from a digital wallet anywhere Visa is accepted.

Screenshot of Nuvei's web page for the blockchain payment service

Clearco and Boundless launch integration to streamline funding for ecommerce brands. Clearco, a working capital provider for ecommerce brands, and Boundless, a capital marketplace, have announced an integration that builds on their partnership for working capital access. Clearco offers companies direct access to Boundless’s marketplace, while Boundless connects ecommerce brands to Clearco to secure funding in as little as 24 hours.

Asendia USA launches multi-service direct entry for ecommerce shipping to Mexico. Asendia USA has announced the launch of direct entry shipping to Mexico for ecommerce sellers. The offering provides Asendia logistics — from U.S. origin hubs through customs clearance and final Mexico delivery. The new service includes both delivered duty paid and unpaid options, multiple entry points to Mexico, simplified customs clearance, improved transit times for both expedited and standard deliveries, and enhanced tracking.

ShipStation provides U.K. SMBs with discounted daily collections from DPD. ShipStation, a shipping software provider, is serving small businesses in the U.K. with the launch of discounted daily parcel collections through DPD, the Geopost-owned parcel delivery company. This offering makes daily pickups for ready-to-ship packages more accessible and cost-effective for small businesses using ShipStation. U.K. consumers also have access to one-hour delivery windows with DPD Predict, real-time tracking through the DPD App, 10,000 drop-off shops, and next-day delivery seven days a week.

Maropost unveils AI-powered email marketing for commerce brands. Maropost, a connected commerce platform for mid-market merchants, has announced two AI-driven features for its Marketing Cloud product. The new eRFM harnesses AI to process and analyze four key dimensions of customer behavior: recency of interactions, frequency of interactions, monetary value, and enhanced metrics on other behavioral data points. Marketing Cloud’s new AI-driven product recommendation engine automatically populates email campaigns with personalized offers based on real-time purchase data and customer behavior.
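For context, classic RFM scoring, which eRFM extends with AI-derived behavioral metrics, can be sketched in a few lines. This is a generic illustration with arbitrary thresholds, not Maropost's implementation:

```python
# Generic RFM scoring: each customer gets a 1-3 score on recency,
# frequency, and monetary value. The thresholds here are arbitrary
# examples; real systems derive them from the customer base.

def score(value, thresholds, reverse=False):
    """Map a raw value to a 1-3 score given two cut points."""
    s = sum(value > t for t in thresholds) + 1
    return 4 - s if reverse else s    # reverse: smaller raw value = better

def rfm(days_since_order, order_count, total_spend):
    return (
        score(days_since_order, (30, 90), reverse=True),  # more recent = higher
        score(order_count, (2, 5)),
        score(total_spend, (100, 500)),
    )

print(rfm(days_since_order=12, order_count=6, total_spend=750))   # (3, 3, 3)
print(rfm(days_since_order=120, order_count=1, total_spend=40))   # (1, 1, 1)
```

An eRFM-style system would add further behavioral dimensions (the "enhanced metrics" the announcement mentions) and let a model, rather than fixed cut points, decide the segments.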

Maropost home page

Google Rolls Out December 2024 Core Update via @sejournal, @MattGSouthern

Google announced it’s rolling out the December core algorithm update, which the company expects to complete over the next two weeks.

The news comes just a week after Google finished rolling out the November core update.

Google’s Announcement

In a post on X, Google stated:

“The Dec. 2024 core update is rolling out, and we expect it will complete in two weeks.

If you’re wondering why there’s a core update this month after one last month, we have different core systems we’re always improving. This past blog post explains more.”

Google’s post included a link to a blog post from November 2023 titled “A Q&A on Google Search updates.”

The blog post provides context around the company’s cadence of algorithm updates.

Multiple Ranking Systems

According to the announcement, Google uses “multiple ranking systems that do different things” and is “always looking at ways to improve these systems to show better results.”

The company said it generally shares information about “notable” updates that it thinks might produce noticeable changes in search results.

Regarding the proximity of the November and December updates, Google explained that while it tries to separate notable updates, “it’s not always possible” given the large number of updates the company implements overall. The post stated:

“If we have updates that can improve Search, that have been developed over the course of several months, we release them when they’re ready.”

Advice For Websites

As with previous core updates, the December update’s specific changes are unknown. However, Google has consistently advised that the best way for creators to succeed through these updates is to remain focused on creating helpful, reliable, people-first content.

Site owners who notice changes in traffic following an update are advised to look closely at Google’s update-specific guidance, which can be found via the Google Search Status Dashboard. The dashboard also allows users to check the status of an update rollout and subscribe to an RSS feed for alerts.

Wrapping Up a Year of Algorithm Updates

The December core update caps off a busy year of algorithm changes for Google Search.

We will closely watch traffic patterns and search rankings to assess the impact as the December update rolls out over the coming weeks.

Search Engine Journal will continue to monitor the situation and provide updates as they become available.


Featured Image: Rohit-Tripathi/Shutterstock

Google Launches New ’24 Hours’ View In Search Console via @sejournal, @MattGSouthern

Google launches 24-hour data view in Search Console, offering near real-time insights for website performance monitoring.

  • Google Search Console now shows performance data from the last 24 hours with minimal delay.
  • The new feature includes hourly data breakdowns and works across Search, Discover, and News reports.
  • Google has cut data delay times by nearly half, making performance tracking more immediate.

Google Ads 2024 Recap: With An Eye To 2025 via @sejournal, @adsliaison

This year brought a steady stream of updates in Google Ads spanning campaign types, creative, media activation, and measurement solutions – many informed directly by advertiser feedback.

I won’t cover every big update here, but building on a talk I gave at Hero Conf in San Diego recently, I’ll highlight some of the key themes in this year’s launches and the technological and consumer trends driving product innovation in Google Ads. (It was wonderful to catch up with old marketing friends and meet so many new ones!)

Let’s dig into some of the top trends and launches of what’s possible now to help you engage audiences and drive better results – and get a sense of where we’re headed.

Search Is Evolving & Bringing New Opportunities For Advertisers

Google Search is undergoing significant changes – in the types of questions people ask, in how they ask them, and in the results Google provides.

For many years, people largely searched with short two- to three-word queries. For advertisers, that meant we could simply target a list of keywords matching those short queries to reach the right audience.

This has been changing.

We are seeing people asking longer, more complex questions.

Queries of five or more words are growing 1.5 times faster than shorter queries (Source: Google Internal Data, Global-EN, November 2022 – April 2023 vs. November 2023 – April 2024). You may notice this in your own search behavior.

This shift is why we continue to invest so heavily in broad match to help ensure your Search strategy can keep up with the complexity and diversity of searches.

AI Overviews in Search combines large language models (LLMs) with Google’s core search systems to provide responses and resources for more complex queries.

AI Overviews is now available in more than 100 countries and territories, reaching more than 1 billion users monthly in six languages (Source: Alphabet Q3 2024 Earnings).

Additionally, visual searches on Google are growing, thanks to huge leaps in multi-modal visual search capabilities with Lens.

Overall, we’re now seeing 20 billion visual searches a month on Lens, and 1 in 4 visual searches has commercial intent (Source: Google Internal Data, Global. Lens, August-September 2024).

To help advertisers connect with consumers in these new experiences when relevant, we’ve introduced Shopping ads in Lens results globally and text and Shopping ads in AI Overviews on mobile in the U.S.

More Personalized Shopping Discovery

Another new experience to highlight is the completely reimagined Shopping tab.

Currently live on mobile in the U.S., the new Shopping tab experience features a personalized feed for signed-in users and a dedicated deals page. It also incorporates features like Virtual Try-On.

Powered by Gemini models, Virtual Try-On lets potential customers see how an item of clothing drapes, clings, and stretches by combining images of real, diverse human models of different sizes and shapes with photos of your garments from Merchant Center.

All apparel brands with a shopping feed and high-quality imagery are automatically opted into Apparel Try-On and can show in both free listings and Shopping ads.

And while we’re on the topic of Shopping, Merchant Center Next (now simply called Merchant Center) rolled out globally this year.

The new interface has feature parity with the previous version, plus more features such as generated performance insights, tailored recommendations, and visual reporting that you generate with plain language prompts.

Launch, Iterate, And Scale Engaging Creatives

Creative generation solutions make it easier for businesses to create and launch higher-performing, on-brand ads.

The conversational experience for Search campaigns expanded to more languages and is available in English, Spanish, French, and German. It’s also now powered by Gemini models.

This feature is particularly helpful for new and small business advertisers.

We’ve seen that advertisers that use the conversational experience in Google Ads are 63% more likely to publish Search campaigns with “Good” or “Excellent” Ad Strength (Source: Google Internal Data. US, English campaigns published after using asset generation vs. published without using asset generation. January 1-31, 2024).

In short, that means they’re launching campaigns that are more likely to perform better from the start.

Image from author, December 2024

We also made continued improvements in our generative AI models and capabilities to make it a whole lot easier to create varieties of high-quality, on-brand image and video assets at scale.

The asset enhancements feature for responsive display ads uses AI to automatically modify your ad with smart cropping to highlight focal points, text assets, and logo overlays on relevant image areas, and improve image resolution and sharpness. It can even animate your static images for more engaging ads.

We also expanded generative creative capabilities beyond Performance Max to other campaign types this year.

Image asset generation is available in Performance Max, Demand Gen, Display, and App campaigns. It is now powered by Imagen 3, Google’s latest text-to-image model that generates crisper, more lifelike images for your ads.

To generate on-brand image assets, you can upload image references to help generate multiple image assets that better match your brand’s visual style.

Image editing got more capabilities this year as well and is now available in Performance Max, Demand Gen, Search, Display, and App campaigns.

During campaign construction, you can remove, add, or modify elements and extend backgrounds in your image assets, as well as adjust images to fit any size, aspect ratio, or orientation.

Pro tip: Image editing is great for seasonal campaigns, helping you keep assets on-trend for different holidays and moments throughout the year so they resonate strongly with audiences.

Note that image editing is different from Product Studio, which is where you can edit your product assets in Google Merchant Center and the Google and YouTube app on Shopify.

Product Studio also now supports reference images to create assets that reflect your brand’s visual style. And with image-to-video animation, it can quickly generate videos from your existing product images.

Speaking Of Video . . .

Image from author, December 2024

Creating great video assets for all the inventory options on YouTube can be challenging for businesses of all sizes.

This fall, we introduced video enhancements, which use Google AI to automatically create additional flipped and shortened versions of your existing videos.

These new ads go through extensive quality review before going live. You can remove generated assets you don’t want or opt out at the campaign level.

Voice-over is a new self-service feature available globally in the asset library in Google Ads. Simply add your script, choose the voice option you want, and then click to generate a voice-over for any YouTube video ad in more than 12 languages.

Long-form content is still extremely popular on YouTube, of course, but Shorts now see 70 billion daily views and an audience of 2 billion signed-in users monthly. And Shorts views on connected TVs more than doubled last year.

This year, we launched branded QR codes on YouTube connected TV. Viewers can scan the code on their phone to visit your website, make a purchase, or learn more about your product or service.

In Video View Campaigns, we introduced new format buying controls with the option to run ads on Shorts inventory only.

And if you’re interested in tapping the power of YouTube creators, Partnership ads powered by BrandConnect are now available in Google Ads globally.

You can use videos made by a creator and promote them as ads, then create new audience segments based on viewers of those videos.

A new video-linking API is also available to link creator videos to your Google Ads account at scale.

New Controls. More Transparency.

We all know that when using AI, better inputs lead to better outputs – and outcomes for your business.

Google AI doesn’t automatically know the definition of better results for your business – only you do. That’s why we’ve continued to add more ways to tell Google what’s important to your business.

In Search campaigns, brand inclusions allow you to use broad match, while still constraining your brand campaigns to serving on specific brand or related product queries.

Brand exclusions are now available for all match types and Dynamic Search Ads to prevent your ads from serving on certain brand queries, including misspellings and variants.

We also rolled out these highly requested updates for Search campaigns:

  • Negative keywords now take misspellings into account. Just add one negative keyword to exclude traffic from all misspelling variations.
  • The search term report shows 9% more search terms on average by grouping misspelled queries with the correctly spelled query.
Image from author, December 2024

You can also see this focus on controls and transparency emphasized in many of the Performance Max updates this year, such as:

  • With Brand guidelines, you get to tell Google about your brand colors and font to generate on-brand visuals.
  • Campaign-level negative keywords – a top ask – are in beta and will be rolling out soon.
  • IP exclusions are supported, and account-level placement exclusions now also apply to the Search partner network.
  • A new experiment allows you to test the impact of final URL expansion to let Google AI select the most relevant landing pages and help you match to additional relevant search queries.
  • To give you more flexibility when managing both Performance Max and Standard Shopping together, Ad Rank is now used to determine which campaign serves when you have product overlap between them.

In addition to controls, we’ve also added more insights for Performance Max, including:

  • Asset-level conversion reporting.
  • Impression share reporting.
  • Demographics in audience insights.
  • New target pacing insights.

This is an area we are actively focused on. Stay tuned for more in 2025!

More Bidding Options Tailored To Specific Goals

Another area I want to call out is the continued focus on expanding and improving bidding capabilities tailored to advertisers’ specific goals.

Here’s a look at some of the work happening in this area:

For retailers with both online and physical stores, omnichannel shoppers tend to spend more.

In Demand Gen campaigns, Omnichannel Goals is now in beta to give those retailers the ability to optimize towards both online conversions and Store Visits.

For lead gen advertisers, the customer journey can be complex. And, of course, not every customer has the same value to your business.

I’ve talked a lot about value-based bidding for lead gen advertisers this year, including a series of short videos followed by deeper dives here in Search Engine Journal.

Continuing to make value-based bidding easier to understand and execute will continue to be a focus area because we’ve seen the positive results it can drive for advertisers.

Lifecycle goals offer additional options to optimize toward your most valuable customers:

  • Last month, we added the ability to use custom experiments in PMax (in beta) and Search to test new customer acquisition.
  • The new retention goal is currently in beta for Performance Max. It allows you to optimize your campaign to win back lapsed customers to reduce churn rates.

And lastly, bidding to profit has also been a top ask from customers.

The new gross profit goal is in beta in Performance Max and Standard Shopping campaigns. It pulls in profit data from sources you already have, like Merchant Center, enabling you to bid to profit with Smart Bidding.

You can also easily switch between revenue and profit goals without disrupting performance.
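The difference between bidding to revenue and bidding to profit comes down to simple arithmetic. The hypothetical sketch below (not Google's implementation) shows how the optimization target shifts once cost of goods is available from a source like Merchant Center:

```python
# Why the same ad spend looks different under a revenue goal vs. a
# gross profit goal. Figures are made-up illustrative values.

def gross_profit(revenue: float, cogs: float) -> float:
    """Gross profit = revenue minus cost of goods sold."""
    return revenue - cogs

def return_on_ad_spend(value: float, ad_spend: float) -> float:
    """Value generated per ad dollar spent."""
    return value / ad_spend

revenue, cogs, ad_spend = 10_000.0, 6_500.0, 2_000.0
print(f"Revenue ROAS: {return_on_ad_spend(revenue, ad_spend):.2f}")                      # 5.00
print(f"Profit ROAS:  {return_on_ad_spend(gross_profit(revenue, cogs), ad_spend):.2f}")  # 1.75
```

A campaign that looks strong on revenue ROAS can be marginal on profit ROAS if its products carry high cost of goods, which is why bidding to profit has been such a frequent request.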

Data, Measurement & Privacy

Image from author, December 2024

While advancements like image generation may capture attention, the solutions that provide Google AI with the necessary data are equally vital.

Your first-party data is the foundation for better performance and measurement. It helps drive better results and safeguard your campaigns against the impact of privacy changes and signal loss.

We’ve developed a number of privacy-centric solutions that enable durable measurement and allow you to make the most of your first-party data.

Google Ads Data Manager is a big step forward in simplifying the process of connecting your first-party data sources to your account and keeping your audience lists and conversion data complete and accurate.

This fall, we introduced confidential matching for Customer Match in Google Ads Data Manager. It securely processes first-party data for use in Google Ads.

This happens automatically in the background so you don’t have to think about it other than knowing your data remains encrypted and unseen by anyone – including Google.

We’ve also launched the option to encrypt the data yourself and receive proof that your data is processed as intended. And, we are currently running a closed beta to enable confidential matching for enhanced conversions for web.

Tag diagnostics for the Google Tag is available in Google Tag Manager, Google Ads, and Google Analytics to help you quickly identify and troubleshoot potential issues.

Measurement diagnostics for Enhanced Conversions for Leads is also fully rolled out in Google Ads. Use it to monitor your setup and ensure you can act on the offline data you share with Google.

While we’re on lead generation, new lead funnel reporting for lead gen gives you added visibility into offline conversions when you share qualified and converted leads with Google.

Lastly, advanced consent mode includes two new parameters for sending consent signals needed for ad personalization and remarketing purposes to Google.

The easiest way to enable and maintain advanced consent mode is to work with a Google CMP partner.

The new integrated CMP setup in the Google Tag UI makes this even easier with select partners. Just connect your CMP and configure consent settings right within the Google Tag UI – no code editing needed.

Looking Ahead

AI’s power comes in helping you dynamically adapt to market shifts and create better experiences – and ultimately better outcomes – for your customers and your business.

When you put AI to work with good data and inputs about what you know about your business and goals, you can spend more time focused on, well, the joy of marketing.

In the year ahead, you can expect us to continue building on these capabilities to help you create and measure engaging experiences that drive incremental value for your business.

Keep the feedback coming, and be sure to check out the full recap of top launches across each campaign type in Google Ads this year!

Featured Image: Ginny Marvin/Google

Communicating The Impact Of AI On SEO To C-Level via @sejournal, @TaylorDanRW

AI has emerged as one of the most transformative and disruptive forces the marketing industry has seen in a long time, probably the most impactful since the mass adoption of the internet.

It will continue to evolve and change search as a practice for years to come.

While brands work on AI adoption at an organizational level, the benefits and applications for most departments within the business are clear to the C-level.

When it comes to SEO, the opportunities and threats are less clear.

As an industry, we’re still deliberating how we classify ChatGPT (and other large language model tools). Are they search engines or discovery engines?

If we don’t have clear definitions of what is happening in the industry, we can’t expect our C-level stakeholders to understand – and this can breed uncertainty.

There are enough headlines surrounding AI in mainstream publications that perceptions of AI, and of its capabilities, can vary greatly depending on your field of view.

To best explain and communicate how worried – or excited – your C-suite stakeholders need to be with the impact of AI on your SEO program, you need to be able to make it relatable.

This often comes down to the potential impact on website traffic (all channels) and the measurable impact on conversions (and the ROI/CPA stemming from specific channels), but it also includes how AI affects your audience.

AI Adoption In Your Audience

Before we look at how to assess your SEO opportunities and threats with AI, a key first step is understanding how your target markets perceive AI, how they plan to adopt it in their daily lives, and what opportunities AI has to enter their lives seamlessly.

Depending on your target markets, you’ll find active AI adoption rates differ.

Adopting any new technology relies on its ability to provide value by either enhancing user experience or solving a disutility. To do this, it has to achieve ARC:

  • Accessibility.
  • Reliability.
  • Cost.

Only by achieving these three things while providing value can a new technology gain mass market adoption.

Different demographics are adopting AI at different rates.

Looking at consumer surveys and reports, we see Gen Z and Gen Alpha embracing AI and actively utilizing platforms other than Google as the first port of call to discover information and content.

This is supported by a recent data release by Ofcom (Online Nation 2024 Report), which identified that those aged 18-24 are the highest adopters of AI technologies.

It is reported that around 1 in 4 adults (27%) use ChatGPT at least monthly, rising to around 1 in 3 within this age segment.

Another notable data point from this report is that men are more likely to adopt AI, with 50% reporting using AI tools, compared to only 33% of women.

Adoption rates don’t tell the full story.

Threads reached 100 million users in less than a week, but quality issues have seen demand and daily active users (DAU) drop substantially.

A key factor in this has been Threads’ algorithm’s ability to return satisfying and relevant content to users, and the same challenge faces LLM tools such as ChatGPT.

The Ofcom Online Nation 2024 Report found that only 1 in 5 (18%) of adults found the information on ChatGPT to be reliable, but this rose to 33% among young adults.

Passive AI Adoption

ChatGPT and the other LLM tools fall under the banner of active AI adoption. Using these tools is a conscious adoption of AI, as you don’t accidentally log in to Claude or Perplexity.

In my opinion, the greater “softening” of the mass market and normalization of LLM tools and AI in the mainstream will come from the passive AI touchpoints that our target audiences are subjected to.

These include things like:

  • Google AI Overviews and Bing Generative Search appearing frequently as part of routine internet usage.
  • Additional prompts to use AI tools such as virtual try-ons.
  • Phone manufacturers promoting AI-driven features such as Gemini and Circle Search as product unique selling points (USPs).
  • Apple’s integration of Apple Intelligence.
  • Spotify’s AI DJ.
  • Meta AI’s integration into their suite of products.

These non-invasive touchpoints will, over time, soften attitudes towards AI and build trust, leading to increased adoption elsewhere.

This means that we need to understand where our users spend their time, and the potential exposure to passive AI interactions.

To do this, we need to understand which channels have higher than average term usage, and this helps us identify which platforms our audience over-indexes on.

For example, the topic of “eyeliner” over-indexes on Facebook and Instagram and under-indexes on Reddit and LinkedIn, the same channel-indexing pattern as “Adidas Samba”.
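This over-indexing idea can be sketched numerically: compare a topic's share of mentions on each channel against that channel's share of overall conversation. All figures below are invented for illustration.

```python
# Illustrative only: invented figures showing a simple over-index score
# for a topic across social channels.

baseline_share = {  # each channel's assumed share of overall conversation
    "facebook": 0.30,
    "instagram": 0.25,
    "reddit": 0.25,
    "linkedin": 0.20,
}

topic_mentions = {  # hypothetical mentions of "eyeliner" per channel
    "facebook": 450,
    "instagram": 400,
    "reddit": 100,
    "linkedin": 50,
}

total = sum(topic_mentions.values())

# An index of 100 means "in line with baseline"; above 100 over-indexes,
# below 100 under-indexes.
index = {
    channel: (mentions / total) / baseline_share[channel] * 100
    for channel, mentions in topic_mentions.items()
}

for channel, score in sorted(index.items(), key=lambda kv: -kv[1]):
    label = "over-indexes" if score > 100 else "under-indexes"
    print(f"{channel}: {score:.0f} ({label})")
```

With these made-up numbers, Facebook and Instagram score well above 100 while Reddit and LinkedIn fall well below it, matching the pattern described above.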

Understanding which channels your audience actively engages with also aids buy-in from other internal marketers and agencies handling the non-SEO channels, and moves you closer to a collaborative, integrated communications strategy.

This is a great opportunity to get buy-in from other marketing stakeholders, but if your success metrics are bound to metrics such as directly attributable organic traffic, this is a threat.

Adapting To AI-Origin Users

AI introduces opportunities, but also raises the bar for channel performance.

As mentioned earlier in the article, this means a better understanding of your product and which channels are the right fit, in addition to visibility in organic search.

One way we can do this, in addition to data from third-party tools, is to utilize the Kano model, a framework traditionally used to categorize and prioritize customer needs that can be effectively applied to assess and enhance product-channel fit in marketing.

For marketing product-channel fit, think of the channel (e.g., email, social media, SEO, paid ads) as a “feature” and map how well it satisfies user expectations.
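As a minimal sketch of that mapping, each channel can be bucketed into a Kano category from two survey-style scores: satisfaction when the channel serves users well (functional) and dissatisfaction when it is absent or poor (dysfunctional). The channel names, scores, and thresholds below are all invented for illustration.

```python
# Illustrative sketch of the Kano model applied to product-channel fit.
# Scores and thresholds are made up for the example.

def kano_bucket(functional: float, dysfunctional: float) -> str:
    """Place a channel into a Kano category from two 0-1 survey scores."""
    if functional >= 0.7 and dysfunctional >= 0.7:
        return "performance"   # more is better, and absence hurts
    if dysfunctional >= 0.7:
        return "must-be"       # expected; only noticed when missing
    if functional >= 0.7:
        return "delighter"     # unexpected value; absence not punished
    return "indifferent"       # little effect either way

channels = {  # (functional, dysfunctional) — hypothetical survey results
    "email": (0.4, 0.9),
    "organic_search": (0.8, 0.8),
    "ai_chat_surfaces": (0.9, 0.2),
    "display_ads": (0.3, 0.3),
}

for name, (func, dysf) in channels.items():
    print(f"{name}: {kano_bucket(func, dysf)}")
```

In this invented example, email behaves as a must-be channel (its absence is punished), organic search as a performance channel, and AI chat surfaces as a delighter, exactly the kind of categorization that helps prioritize channel investment.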

To adapt and reach our target audiences as they adopt AI tools, or AI product features in existing tools, we as marketers need to:

  • Shift from Broad Channels to Intent-Driven Channels: Focus on channels that align with customer intent, as AI improves its ability to match consumer needs in real-time.
  • Embrace AI-Native Platforms: Platforms like ChatGPT or AI-powered discovery engines require new strategies for delivering benefit-focused, concise, and conversational content.

Monitoring AI Traffic

Another key part of communicating your exposure to LLMs and AI chatbots is the accurate tracking of AI traffic to your website.

This also informs your marketing strategies and adaptation to cater to shifting user behaviors within your target audience.

Traffic from LLMs can be easily monitored through Google Analytics 4 Explore reports, or through Google Looker Studio.

The method for segmenting data depends on the objective, who needs data access, and to what depth.

Tracking LLM traffic via Looker Studio. Image from author, December 2024

GA4 Explore Reports are effective for routine updates, such as monthly reporting, and provide clients with direct access to data through their Google Analytics accounts.

Looker Studio offers two distinct approaches. The first focuses on detailed, client-specific reports that track granular data, such as landing pages and events triggered by LLM traffic, tailored to individual needs.

The second is a quick overview dashboard, which is less customized but allows easy navigation through GA4 accounts, making it useful for ad hoc analysis and monitoring.
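Whichever reporting surface you use, the underlying segmentation is usually just a pattern match on the session source. A hedged sketch of that filter is below; the referrer list is illustrative and deliberately incomplete, and the source strings mimic GA4's source/medium format rather than any exported schema.

```python
import re

# Hypothetical, non-exhaustive list of LLM / AI chatbot referral domains.
LLM_SOURCE_PATTERN = re.compile(
    r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com|"
    r"copilot\.microsoft\.com|claude\.ai",
    re.IGNORECASE,
)

def is_llm_traffic(source: str) -> bool:
    """Return True when a session source matches a known LLM referrer."""
    return bool(LLM_SOURCE_PATTERN.search(source))

# Toy session rows shaped like GA4's "session source / medium" dimension.
sessions = [
    {"source": "chatgpt.com / referral", "landing_page": "/pricing"},
    {"source": "google / organic", "landing_page": "/"},
    {"source": "perplexity.ai / referral", "landing_page": "/blog/guide"},
]

llm_sessions = [s for s in sessions if is_llm_traffic(s["source"])]
print(f"{len(llm_sessions)} of {len(sessions)} sessions came from LLM referrers")
```

The same regex can be pasted into a GA4 or Looker Studio "matches regex" filter on the session source dimension, though any such list needs periodic review as new AI surfaces launch.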

Marketers Must Adapt And Align

AI is transforming marketing, offering new opportunities and challenges across all marketing channels. To adapt, marketers must align strategies with evolving user behaviors and clearly communicate AI’s impact on traffic, conversions, and audience engagement to the C-suite and wider business stakeholders.

By focusing on clear communication, measurement and simple visualizations, and strategic adaptation of AI technologies into existing processes, brands can successfully navigate and take advantage of opportunities presented by AI while transforming and future-proofing the marketing function.

Featured Image: wenich_mit/Shutterstock

Automattic Removes WP Engine Client List From Tracker Site via @sejournal, @martinibuster

Automattic removed a spreadsheet containing the domain names of WP Engine customers from the WP Engine Tracker website. The removal is in response to a preliminary injunction granted to WP Engine, ordering Automattic and Matt Mullenweg to remove the spreadsheet within 72 hours.

The preliminary injunction was warmly received on X (formerly Twitter), with a tweet by Joe Youngblood representative of the general sentiment:

“The ruling was a gigantic win for small businesses and entrepreneurs that rely on open source keeping it’s promises. That includes allowing webhosts to host and not stealing code repositories.

I am hopeful the full outcome of this looks much the same.”

Someone else tweeted:

“Unbiased parties watching on the sidelines think the court got it right. This was obvious from day one.

Next step for you guys is to try to settle out of court to prevent further embarrassment and reduce potential risk in damages.”

Mullenweg’s Dispute With WP Engine

Matt Mullenweg began an attack against WP Engine on September 20, 2024, after WP Engine declined to pay tens of millions of dollars, demands WP Engine’s attorneys called “extortionate” in a cease and desist letter sent to Automattic’s Chief Legal Officer on September 23rd.

On November 6th, Automattic intensified the pressure on WP Engine by launching a website called WP Engine Tracker that offered a list of WP Engine customers, which other web hosts could use to solicit those clients with offers to leave WP Engine.

Solicitations of WP Engine customers apparently followed, as related by a Redditor in a discussion about the WP Engine Tracker website:

“I was out of the office for some medical procedures, so I missed the WPE Tracker thing. However, this explains why I’ve received unsolicited hosting calls from certain operations. Clearly, someone is mining it to solicit business. Absolutely aggravating and also completely expected.

All this does is further entrench me on WP Engine. Good work, Matt, you dweeb.”

The WP Engine Tracker website became evidence of the harm Mullenweg was causing to WP Engine and was cited in the request for a preliminary injunction.

The judge sided with WP Engine and granted the preliminary injunction, requiring among many other things that Automattic and Mullenweg take down the list of WP Engine customers.

The court order states:

“Within 72 hours, Defendants are ORDERED to:

…(a) remove the purported list of WPEngine customers contained in the “domains.csv” file linked to Defendants’ wordpressenginetracker.com website (which was launched on or about
November 7, 2024) and stored in the associated GitHub repository located at https://github.com/wordpressenginetracker/wordpressenginetracker.github.io.”

The CSV file was subsequently removed, although a link to the now non-existent file remained on the site, with the count showing zero:

Screenshot Of WP Engine Tracker Website

Clicking the link leads to a 404 error response message.

Screenshot Of 404 Error Response For CSV Download

A pull request on GitHub shows that a request was made to remove the CSV file on December 11th.

“Remove CTA to download list of sites #29

wordpressenginetracker commented 9 hours ago
This PR removes the text and download link to download the list of sites that have are still using WPE”

Screenshot Of GitHub Pull Request

Advanced Custom Fields Plugin

Automattic removed WP Engine’s Advanced Custom Fields (ACF) plugin from the official WordPress.org plugin repository and replaced it with Automattic’s cloned version, renamed as Secure Custom Fields (SCF).

The preliminary injunction orders Automattic to also restore access to the Advanced Custom Fields (ACF) plugin repository:

“Within 72 hours, Defendants are ORDERED to:

…(v) returning and restoring WPEngine’s access to and control of its Advanced Custom Fields (“ACF”) plugin directory listing at https://wordpress.org/plugins/advanced-customfields, as it existed as of September 20, 2024.”

The cloned SCF plugin currently still exists at that URL, although Automattic still has time to take it down.

Screenshot Of SCF Plugin In The ACF Directory Listing

Featured Image by Shutterstock/tomertu

AI’s hype and antitrust problem is coming under scrutiny

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington. 

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.

The new bill would “require a competitive award process” for contracts, which would ban the use of “no-bid” awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)

While Big Tech is hit with antitrust investigations—including the ongoing lawsuit against Google about its dominance in search, as well as a new investigation opened into Microsoft—regulators are also accusing AI companies of, well, just straight-up lying. 

On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and being trained on millions of images, two claims the FTC says are false. (The company couldn’t support the bias claim and the system was trained on only 100,000 images, the FTC says.)

A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident when they failed to detect a seven-inch knife that was ultimately used to stab a student. 

Those add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling “AI lawyer” services. 

The actions are somewhat tame. IntelliVision and Evolv have not actually been served fines. The FTC has simply prohibited the companies from making claims that they can’t back up with evidence, and in the case of Evolv, it requires the company to allow certain customers to get out of contracts if they wish to. 

However, they do represent an effort to hold the AI industry’s hype to account in the final months before the FTC’s chair, Lina Khan, is likely to be replaced when Donald Trump takes office. Trump has not named a pick for FTC chair, but he said on Thursday that Gail Slater, a tech policy advisor and a former aide to vice president–elect JD Vance, was picked to head the Department of Justice’s Antitrust Division. Trump has signaled that the agency under Slater will keep tech behemoths like Google, Amazon, and Microsoft in the crosshairs. 

“Big Tech has run wild for years, stifling competition in our most innovative sector and, as we all know, using its market power to crack down on the rights of so many Americans, as well as those of Little Tech!” Trump said in his announcement of the pick. “I was proud to fight these abuses in my First Term, and our Department of Justice’s antitrust team will continue that work under Gail’s leadership.”

That said, at least some of Trump’s frustrations with Big Tech are different—like his concerns that conservatives could be targets of censorship and bias. And that could send antitrust efforts in a distinctly new direction on his watch. 



Deeper Learning

The US Department of Defense is investing in deepfake detection

The Pentagon’s Defense Innovation Unit, a tech accelerator within the military, has awarded its first contract for deepfake detection. Hive AI will receive $2.4 million over two years to help detect AI-generated video, image, and audio content. 

Why it matters: As hyperrealistic deepfakes get cheaper and easier to produce, they hurt our ability to tell what’s real. The military’s investment in deepfake detection shows that the problem has national security implications as well. The open question is how accurate these detection tools are, and whether they can keep up with the unrelenting pace at which deepfake generation techniques are improving. Read more from Melissa Heikkilä.

Bits and Bytes

The owner of the LA Times plans to add an AI-powered “bias meter” to its news stories

Patrick Soon-Shiong is building a tool that will allow readers to “press a button and get both sides” of a story. But trying to create an AI model that can somehow provide an objective view of news events is controversial, given that models are biased both by their training data and by fine-tuning methods. (Yahoo)

Google DeepMind’s new AI model is the best yet at weather forecasting

It’s the second AI weather model that Google has launched in just the past few months. But this one’s different: It leaves out traditional physics models and relies on AI methods alone. (MIT Technology Review)

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe

Startups in Latvia and other nearby countries see the mobilization of Ukraine as a warning and an inspiration. They are now changing consumer products—from scooters to recreational drones—for use on the battlefield. (MIT Technology Review)

How Nvidia’s Jensen Huang is avoiding $8 billion in taxes

Jensen Huang runs Nvidia, the world’s top chipmaker and most valuable company. His wealth has soared during the AI boom, and he has taken advantage of a number of tax dodges “that will enable him to pass on much of his fortune tax free,” according to the New York Times. (The New York Times)

Meta is pursuing nuclear energy for its AI ambitions
Meta wants more of its AI training and development to be powered by nuclear energy, joining the ranks of Amazon and Microsoft. The news comes as many companies in Big Tech struggle to meet their sustainability goals amid the soaring energy demands from AI. (Meta)

Correction: A previous version of this article stated that Gail Slater was picked by Donald Trump to be the head of the FTC. Slater was in fact picked to lead the Department of Justice’s Antitrust Division. We apologize for the error.

We saw a demo of the new AI system powering Anduril’s vision for war

One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield. If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply into the theater of war than ever before. 

Near the site’s command center, which looked out over desert scrub and sage, sat pieces of Anduril’s hardware suite that have helped the company earn its $14 billion valuation. There was Sentry, a security tower of cameras and sensors currently deployed at both US military bases and the US-Mexico border, and advanced radars. Multiple drones, including an eerily quiet model called Ghost, sat ready to be deployed. What I was there to watch, though, was a different kind of weapon, displayed on two large television screens positioned at the test site’s command station.

I was here to examine the pitch being made by Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself: A future “great power” conflict—military jargon for a global war involving competition between multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. And that will have to be done “at the edge” where threats arise, not necessarily at a command post in Washington. 

A desert drone test

“You’re going to need to really empower lower levels to make decisions, to understand what’s going on, and to fight,” Anduril CEO Brian Schimpf says. “That is a different paradigm than today.” Currently, information flows poorly among people on the battlefield and decision-makers higher up the chain. 

To show how the new tech will fix that, Anduril walked me through an exercise demonstrating how its system would take down an incoming drone threatening a base of the US military or its allies (the scenario at the center of Anduril’s new partnership with OpenAI). It began with a truck in the distance, driving toward the base. The AI-powered Sentry tower automatically recognized the object as a possible threat, highlighting it as a dot on one of the screens. Anduril’s software, called Lattice, sent a notification asking the human operator if he would like to send a Ghost drone to monitor. After a click of his mouse, the drone piloted itself autonomously toward the truck, as information on its location gathered by the Sentry was sent to the drone by the software.

The truck disappeared behind some hills, so the Sentry tower camera that was initially trained on it lost contact. But the surveillance drone had already identified it, so its location stayed visible on the screen. We watched as someone in the truck got out and launched a drone, which Lattice again labeled as a threat. It asked the operator if he’d like to send a second attack drone, which then piloted autonomously and locked onto the threatening drone. With one click, it could be instructed to fly into it fast enough to take it down. (We stopped short here, since Anduril isn’t allowed to actually take down drones at this test site.) The entire operation could have been managed by one person with a mouse and computer.

Anduril is building on these capabilities further by expanding Lattice Mesh, a software suite that allows other companies to tap into Anduril’s software and share data, the company announced today. More than 10 companies are now building their hardware into the system—everything from autonomous submarines to self-driving trucks—and Anduril has released a software development kit to help them do so. Military personnel operating hardware can then “publish” their own data to the network and “subscribe” to receive data feeds from other sensors in a secure environment. On December 3, the Pentagon’s Chief Digital and AI Office awarded a three-year contract to Anduril for Mesh. 

Anduril’s offering will also join forces with Maven, a program operated by the defense data giant Palantir that fuses information from different sources, like satellites and geolocation data. It’s the project that led Google employees in 2018 to protest against working in warfare. Anduril and Palantir announced on December 6 that the military will be able to use the Maven and Lattice systems together. 

The military’s AI ambitions

The aim is to make Anduril’s software indispensable to decision-makers. It also represents a massive expansion of how the military is currently using AI. You might think the US Department of Defense, advanced as it is, would already have this level of hardware connectivity. We have some semblance of it in our daily lives, where phones, smart TVs, laptops, and other devices can talk to each other and share information. But for the most part, the Pentagon is behind.

“There’s so much information in this battle space, particularly with the growth of drones, cameras, and other types of remote sensors, where folks are just sopping up tons of information,” says Zak Kallenborn, a warfare analyst who works with the Center for Strategic and International Studies. Sorting through to find the most important information is a challenge. “There might be something in there, but there’s so much of it that we can’t just set a human down and to deal with it,” he says. 

Right now, humans also have to translate between systems made by different manufacturers. One soldier might have to manually rotate a camera to look around a base and see if there’s a drone threat, and then manually send information about that drone to another soldier operating the weapon to take it down. Those instructions might be shared via a low-tech messenger app—one on par with AOL Instant Messenger. That takes time. It’s a problem the Pentagon is attempting to solve through its Joint All-Domain Command and Control plan, among other initiatives.

“For a long time, we’ve known that our military systems don’t interoperate,” says Chris Brose, former staff director of the Senate Armed Services Committee and principal advisor to Senator John McCain, who now works as Anduril’s chief strategy officer. Much of his work has been convincing Congress and the Pentagon that a software problem is just as worthy of a slice of the defense budget as jets and aircraft carriers. (Anduril spent nearly $1.6 million on lobbying last year, according to data from Open Secrets, and has numerous ties with the incoming Trump administration: Anduril founder Palmer Luckey has been a longtime donor and supporter of Trump, and JD Vance spearheaded an investment in Anduril in 2017 when he worked at venture capital firm Revolution.) 

Defense hardware also suffers from a connectivity problem. Tom Keane, a senior vice president in Anduril’s connected warfare division, walked me through a simple example from the civilian world. If you receive a text message while your phone is off, you’ll see the message when you turn the phone back on. It’s preserved. “But this functionality, which we don’t even think about,” Keane says, “doesn’t really exist” in the design of many defense hardware systems. Data and communications can be easily lost in challenging military networks. Anduril says its system instead stores data locally. 

An AI data treasure trove

The push to build more AI-connected hardware systems in the military could spark one of the largest data collection projects the Pentagon has ever undertaken, and companies like Anduril and Palantir have big plans. 

“Exabytes of defense data, indispensable for AI training and inferencing, are currently evaporating,” Anduril said on December 6, when it announced it would be working with Palantir to compile data collected in Lattice, including highly sensitive classified information, to train AI models. Training on a broader collection of data collected by all these sensors will also hugely boost the model-building efforts that Anduril is now doing in a partnership with OpenAI, announced on December 4. Earlier this year, Palantir also offered its AI tools to help the Pentagon reimagine how it categorizes and manages classified data. When Anduril founder Palmer Luckey told me in an interview in October that “it’s not like there’s some wealth of information on classified topics and understanding of weapons systems” to train AI models on, he may have been foreshadowing what Anduril is now building. 

Even if some of this data from the military is already being collected, AI will suddenly make it much more useful. “What is new is that the Defense Department now has the capability to use the data in new ways,” Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, wrote in an email. “More data and ability to process it could support great accuracy and precision as well as faster information processing.”

The sum of these developments might be that AI models are brought more directly into military decision-making. That idea has brought scrutiny, as when Israel was found last year to have been using advanced AI models to process intelligence data and generate lists of targets. Human Rights Watch wrote in a report that the tools “rely on faulty data and inexact approximations.”

“I think we are already on a path to integrating AI, including generative AI, into the realm of decision-making,” says Probasco, who authored a recent analysis of one such case. She examined a system built within the military in 2023 called Maven Smart System, which allows users to “access sensor data from diverse sources [and] apply computer vision algorithms to help soldiers identify and choose military targets.”

Probasco said that building an AI system to control an entire decision pipeline, possibly without human intervention, “isn’t happening” and that “there are explicit US policies that would prevent it.”

A spokesperson for Anduril said that the purpose of Mesh is not to make decisions. “The Mesh itself is not prescribing actions or making recommendations for battlefield decisions,” the spokesperson said. “Instead, the Mesh is surfacing time-sensitive information”—information that operators will consider as they make those decisions.