WooCommerce May Gain Sidekick-Type AI Through Extensions

WooCommerce is approaching a turning point in 2026 thanks to the Model Context Protocol and the convergence of open source technologies that enable it to function as a layer any AI system can plug into, helping store owners and consumers accomplish more with less friction. Automattic’s Director of AI Engineering, James LePage, discussed what’s possible right now, what’s coming in the near future, and why the current limitations are temporary.

WooCommerce

Because WooCommerce is built on WordPress and is highly extensible through plugins, APIs, and now MCP, it is rapidly evolving into a coordination layer through which AI-based systems can plug in and work together. Automattic’s James LePage describes this as an approach in which WooCommerce sits squarely at the center.

Model Context Protocol

Model Context Protocol is an open standard that enables platforms like WooCommerce to connect their capabilities to AI systems, making AI-powered features possible.

While MCP sounds like an API, which enables software systems to communicate, the key difference is that an API handles predefined requests, whereas MCP enables platforms like WooCommerce to support a broader range of AI interactions without building custom integrations for each one.
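The distinction can be illustrated with a toy model: where a traditional API client is hard-coded against fixed endpoints, an MCP server advertises self-describing tools that any AI client can discover at runtime and choose among. This is a simplified sketch of that pattern, not the real MCP SDK or WooCommerce’s implementation; every name below is hypothetical.

```python
# Toy sketch of the MCP idea: tools describe themselves, so an AI client
# can discover and call them without a custom integration per platform.
# All names here are hypothetical, for illustration only.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a discoverable tool."""
        def wrap(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return wrap

    def list_tools(self):
        # An AI client calls this first to learn what the store can do.
        return {name: t["description"] for name, t in self._tools.items()}

    def call(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

store = ToolRegistry()

@store.tool("get_order_status", "Look up the status of an order by ID.")
def get_order_status(order_id):
    orders = {1001: "shipped", 1002: "processing"}  # stand-in data
    return orders.get(order_id, "unknown")

# The client discovers the available tools, then invokes one it chose:
print(store.list_tools())
print(store.call("get_order_status", order_id=1001))  # "shipped"
```

The point of the pattern is the `list_tools` step: because capabilities are described at runtime, the same server can serve ChatGPT, Claude, or any future client without per-platform glue code.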

WooCommerce Sits In The Middle

ACP (Agentic Commerce Protocol), developed by OpenAI and Stripe, enables an AI agent to handle product discovery, checkout, and payments from a chat interface like ChatGPT.

The UCP (Universal Commerce Protocol), an open source standard developed by Shopify and Google, provides a way for checkouts to happen through a buy button across Google’s AI and Search ecosystem as well as Anthropic’s Claude, regardless of whether the transaction happens on a WooCommerce store or any other shopping platform. To support it, a developer only has to implement a UCP-compliant MCP server for WooCommerce.

WooCommerce sits in the middle of those protocols, where their integrations come together.

Enablement Strategy For WooCommerce

LePage described a practical perspective on how AI fits into the WooCommerce platform through MCP. He calls this approach enablement.

He explains this approach:

“What’s interesting about that is it follows a strategy that we’re taking at WooCommerce, which is what I refer to as enablement, where WooCommerce is this core software, this core way that you run a digital business online.

And we want to make sure that core software is available and always in the middle of whatever’s happening in AI.

So we want to build AI features for it. We want to make it really easy for others to build AI features for it. But we absolutely want to make sure it will meet you wherever your AI tools are, wherever the best financial analysis AI tool exists, wherever the best general chatbot exists.

So to us, MCP represents a really strong opportunity there.”

Because MCP works with whatever AI platform a user is on, WooCommerce can remain in the middle regardless of which AI system a user subscribes to.

Practical Use Of AI In WooCommerce

LePage highlighted practical uses of AI available right now: users can connect WooCommerce to ChatGPT Connectors and Claude Code so that multiple apps and AI systems communicate with each other to accomplish various tasks.

He explains:

“What’s also cool is if you use ChatGPT with connectors, if you use Claude Code with their MCP support, there’s a lot of opportunity that you get when you add multiple pieces of software to one session.

So if I take my WooCommerce stuff and I take QuickBooks and I take X, Y, and Z, I can interact with all of them in a conversational manner.

And that’s got me very excited, but it’s also got all the merchants really excited.”

AI Is Developer-Facing Infrastructure

While significant AI implementations are coming together quickly for WooCommerce, LePage indicated that the current work is foundational: it provides the building blocks that developers and agencies use to make everything work, rather than delivering out-of-the-box merchant features today.

The question asked in the podcast was:

“…is that where we are with WooCommerce and AI at the moment is that you do need really a developer to hook it all up and make it work?”

LePage answered:

“So I’d say yes, if you want a really robust AI implementation that’s built and fits like a glove on your store and does everything that you ever want, the pieces are there.”

He later said that there are plugins that can implement some of those functionalities.

Sidekick-Type Functionality

LePage offered an exciting preview of what’s in store in the near future for WooCommerce when asked if WooCommerce will ship with deep native integration of AI similar to Shopify’s Sidekick AI assistant.

Shopify Sidekick is an AI assistant that can be invoked at various points in the store management workflow, enabling store owners to do everything from creative tasks, like transforming product images or creating email marketing campaigns, to common store management tasks.

The question asked was:

“One thing I’d love to know is what is planned for Core, possibly WordPress as a whole, certainly WooCommerce, in terms of like an interface built into Core, like how Shopify has Sidekick where wherever you are, you can just type what you want and it will do it for you.”

LePage answered that this kind of AI integration will likely be in the form of an extension, explaining that integrating this kind of functionality within core would be good, but doing it with a plugin would be great. He explained that all the pieces for doing this will be in place within core in version 7, which will be released on April 9, 2026.

He shared that WooCommerce will act as an orchestration layer, sitting in the middle and directing and coordinating multiple services, tools, and data sources.

He explained:

“…it will work if we made it a very basic implementation in core, or as even like a very basic plugin, but it will be great when we can plug it into things like WooCommerce Analytics, when we can plug it into much more complex orchestration workflows under the hood to go and do things like really bulk product optimization and catalog stuff and analytics and deep number crunching, all of the fun stuff that we’re actually working on as we speak.

So you will see AI support in terms of this Sidekick-type implementation coming out from Automattic in this extension territory. And that extension also housing additional AI features to make it a much more approachable AI experience to merchants.”

Consumer-Facing AI In WooCommerce Stores

Another area discussed in the podcast was consumer-facing AI implementations that introduce more personalization and chat interfaces for retrieving order information or product selection.

At this point, the podcast turns to agentic AI shopping, which is projected to take hold sometime between now and 2030.

But at the end, LePage circles back to affirming WordPress’s role as the orchestration layer intended to support whatever functionality and vision emerge.

LePage shared:

“These building blocks are intended to make WordPress into a platform where a developer can build any AI solution.”

WordPress and WooCommerce are transitioning toward offering the option of acting as an orchestration layer. While other content management systems are a little further along with these kinds of features, WordPress and WooCommerce have a huge developer ecosystem that is already building new features that will become more powerful and useful in the very near future.

Watch the Do the Woo podcast with hosts Katie Keith and James Kemp:

AI Meets Woo: the Future of Ecommerce is Already Here

Featured Image/Screenshot Of Do the Woo Podcast

The scientist using AI to hunt for antibiotics just about everywhere

When he was just a teenager trying to decide what to do with his life, César de la Fuente compiled a list of the world’s biggest problems. He ranked them inversely by how much money governments were spending to solve them. Antimicrobial resistance topped the list. 

Twenty years on, the problem has not gone away. If anything, it’s gotten worse. Infections caused by bacteria, fungi, and viruses that have evolved ways to evade treatments are now associated with more than 4 million deaths per year, and a recent analysis, published in the Lancet, predicts that number could surge past 8 million by 2050. In a July 2025 essay in Physical Review Letters, de la Fuente, now a bioengineer and computational biologist, and synthetic biologist James Collins warned of a looming “post-antibiotic” era in which infections from drug-resistant strains of common bacteria like Escherichia coli or Staphylococcus aureus, which can often still be treated by our current arsenal of medications, become fatal. “The antibiotic discovery pipeline remains perilously thin,” they wrote, “impeded by high development costs, lengthy timelines, and low returns on investment.”

But de la Fuente is using artificial intelligence to bring about a different future. His team at the University of Pennsylvania is training AI tools to search genomes far and deep for peptides with antibiotic properties. His vision is to assemble those peptides—molecules made of up to 50 amino acids linked together—into various configurations, including some never seen in nature. The results, he hopes, could defend the body against microbes that withstand traditional treatments. 

His quest has unearthed promising candidates in unexpected places. In August 2025 his team, which includes 16 scientists in Penn’s Machine Biology Group, described peptides hiding in the genetic code of ancient single-celled organisms called archaea. Before that, they’d excavated a list of candidates from the venom of snakes, wasps, and spiders. And in an ongoing project de la Fuente calls “molecular de-extinction,” he and his collaborators have been scanning published genetic sequences of extinct species for potentially functional molecules. Those species include hominids like Neanderthals and Denisovans and charismatic megafauna like woolly mammoths, as well as ancient zebras and penguins. In the history of life on Earth, de la Fuente reasons, maybe some organism evolved an antimicrobial defense that could be helpful today. Those long-gone codes have given rise to resurrected compounds with names like mammuthusin-2 (from woolly mammoth DNA), mylodonin-2 (from the giant sloth), and hydrodamin-1 (from the ancient sea cow). Over the last few years, this molecular binge has enabled de la Fuente to amass a library of more than a million genetic recipes.

At 40 years old, de la Fuente has also collected a trophy case of awards from the American Society for Microbiology, the American Chemical Society, and other organizations. (In 2019, this magazine named him one of “35 Innovators Under 35” for bringing computational approaches to antibiotic discovery.) He’s widely recognized as a leader in the effort to harness AI for real-world problems. “He’s really helped pioneer that space,” says Collins, who is at MIT. (The two have not collaborated in the laboratory, but Collins has long been at the forefront of using AI for drug discovery, including the search for antibiotics. In 2020, Collins’s team used an AI model to predict a broad-spectrum antibiotic, halicin, that is now in preclinical development.) 

The world of antibiotic development needs as much creativity and innovation as researchers can muster, says Collins. And de la Fuente’s work on peptides has pushed the field forward: “César is marvelously talented, very innovative.” 

A messy, noisy endeavor

De la Fuente describes antimicrobial resistance as an “almost impossible” problem, but he sees plenty of room for exploration in the word almost. “I like challenges,” he says, “and I think this is the ultimate challenge.” 

The use, overuse, and misuse of antibiotics, he says, drives antimicrobial resistance. And the problem is growing unchecked because conventional ways to find, make, and test the drugs are prohibitively expensive and often lead to dead ends. “A lot of the companies that have attempted to do antibiotic development in the past have ended up folding because there’s no good return on investment at the end of the day,” he says.

Antibiotic discovery has always been a messy, noisy endeavor, driven by serendipity and fraught with uncertainty and misdirection. For decades, researchers have largely relied on brute-force mechanical methods. “Scientists dig into soil, they dig into water,” says de la Fuente. “And then from that complex organic matter they try to extract antimicrobial molecules.” 

But molecules can be extraordinarily complex. Researchers have estimated the number of possible organic combinations that could be synthesized at somewhere around 10⁶⁰. For reference, Earth contains an estimated 10¹⁸ grains of sand. “Drug discovery in any domain is a statistics game,” says Jonathan Stokes, a chemical biologist at McMaster University in Canada, who has been using generative AI to design potential new antibiotics that can be synthesized in a lab, and who worked with Collins on halicin. “You need enough shots on goal to happen to get one.” 
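For a sense of the scale Stokes is describing, the figures above can be checked directly; the 50-residue peptide length comes from earlier in the article, and the rest are the estimates quoted here.

```python
# Back-of-envelope scale check using the figures quoted in the text.
peptide_space = 20 ** 50    # distinct 50-residue peptides over 20 amino acids
sand_grains = 10 ** 18      # rough estimate of Earth's sand grains
chemical_space = 10 ** 60   # estimated synthesizable organic molecules

# Even the peptide subset alone is about 10^65 -- far beyond exhaustive search.
print(len(str(peptide_space)) - 1)    # order of magnitude: 65
print(chemical_space // sand_grains)  # ~10^42 candidate molecules per grain
```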

Those have to be good shots, though. And AI seems well suited to improving researchers’ aim. Biology is an information source, de la Fuente explains: “It’s like a bunch of code.” The code of DNA has four letters; proteins and peptides have 20, where each “letter” represents an amino acid. De la Fuente says his work amounts to training AI models to recognize sequences of letters that encode antimicrobial peptides, or AMPs. “If you think about it that way,” he says, “you can devise algorithms to mine the code and identify functional molecules, which can be antimicrobials. Or antimalarials. Or anticancer agents.” 
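The “letters as code” idea maps directly onto standard sequence preprocessing. A minimal sketch, assuming nothing beyond the standard 20-letter amino-acid alphabet (this is generic machine-learning preprocessing, not de la Fuente’s actual pipeline):

```python
# Turning a peptide, written in the standard 20-letter amino-acid
# alphabet, into numeric vectors an ML model can consume.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # standard one-letter codes
INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(peptide):
    """Return one 20-dimensional one-hot vector per residue."""
    vectors = []
    for aa in peptide:
        v = [0] * len(AMINO_ACIDS)
        v[INDEX[aa]] = 1
        vectors.append(v)
    return vectors

encoded = one_hot("ACDE")  # a toy four-residue peptide
print(len(encoded), len(encoded[0]))  # 4 vectors, each of length 20
```

A model trained on vectors like these, labeled by whether the source peptide kills bacteria, is the basic shape of the sequence-mining approach the article describes.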

Practically speaking, we’re still not there: These peptides haven’t yet been transformed into usable drugs that help people, and there are plenty of details—dosage, delivery, specific targets—that need to be sorted out, says de la Fuente. But AMPs are appealing because the body already uses them. They’re a critical part of the immune system and often the first line of defense against pathogenic infections. Unlike conventional antibiotics, which typically have one trick for killing bacteria, AMPs often exhibit a multimodal approach. They may disrupt the cell wall and the genetic material inside as well as a variety of cellular processes. A bacterial pathogen may evolve resistance to a conventional drug’s single mode of action, but maybe not to a multipronged AMP attack.

From discovery to delivery

De la Fuente’s group is one of many pushing the boundaries of using AI for antibiotics. Where he focuses primarily on peptides, Collins works on small-molecule discovery. So does Stokes, at McMaster, whose models identify promising new molecules and predict whether they can be synthesized. “It’s only been a few years since folks have been using AI meaningfully in drug discovery,” says Collins. 

Even in that short time the tools have changed, says James Zou, a computer scientist at Stanford University, who has worked with Stokes and Collins. Researchers have moved from using predictive models to developing generative approaches. With a predictive approach, Zou says, researchers screen large libraries of candidates that are known to be promising. Generative approaches offer something else: the appeal of designing a new molecule from scratch. Last year, for example, de la Fuente’s team used one generative AI model to design a suite of synthetic peptides and another to assess them. The group tested two of the resulting compounds on mice infected with a drug-resistant strain of Acinetobacter baumannii, a germ that the World Health Organization has identified as a “critical priority” in research on antimicrobial resistance. Both successfully and safely treated the infection. 

But the field is still in the discovery phase. In his current work, de la Fuente is trying to get candidates closer to clinical testing. To that end, his team is developing an ambitious multimodal model called ApexOracle that’s designed to analyze a new pathogen, pinpoint its genetic weaknesses, match it to antimicrobial peptides that might work against it, and then predict how an antibiotic, built from those peptides, would fare in lab tests. It “converges understanding in chemistry, genomics, and language,” he says. It’s preliminary, he adds, but even if it doesn’t work perfectly, it will help steer the next generation of AI models toward the ultimate goal of resisting resistance. 

Using AI, he believes, human researchers now have a fighting chance at catching up to the giant threat before them. The technology has already saved decades of human research time. Now he wants it to save lives, too: “This is the world that we live in today, and it’s incredible.” 

Stephen Ornes is a science writer in Nashville, Tennessee.

Hackers made death threats against this security researcher. Big mistake.

The threats started in spring. 

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon. 

“Alison [sic] Nixon is gonna get necklaced with a tire filled with gasoline soon,” wrote Waifu/Judische, both of which are words with offensive connotations. “Decerebration is my fav type of brain death, thats whats gonna happen to alison Nixon.” 

It wasn’t long before others piled on. Someone shared AI-generated nudes of Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested. For years she had lurked quietly in online chat channels or used pseudonyms to engage with perpetrators directly while piecing together clues they’d carelessly drop about themselves and their crimes. This had helped her bring to justice a number of cybercriminals—especially members of a loosely affiliated subculture of anarchic hackers who call themselves the Com.

But members of the Com aren’t just involved in hacking; some of them also engage in offline violence against researchers who track them. This includes bricking (throwing a brick through a victim’s window) and swatting (a dangerous type of hoax that involves reporting a false murder or hostage situation at someone’s home so SWAT teams will swarm it with guns drawn). Members of a Com offshoot known as 764 have been accused of even more violent acts—including animal torture, stabbings, and school shootings—or of inciting others in and outside the Com to commit these crimes.

Nixon started tracking members of the community more than a decade ago, when other researchers and people in law enforcement were largely ignoring them because they were young—many in their teens. Her early attention allowed her to develop strategies for unmasking them.

Ryan Brogan, a special agent with the FBI, says Nixon has helped him and colleagues identify and arrest more than two dozen members of the community since 2011, when he first began working with her, and that her skills in exposing them are unparalleled. “If you get on Allison’s and my radar, you’re going [down]. It’s just a matter of time,” he says. “No matter how much digital anonymity and tradecraft you try to apply, you’re done.”

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the Waifu/Judische accounts was suddenly threatening her. She had given media interviews about the Com—most recently on 60 Minutes—but not about her work unmasking members to get them arrested, so the hostility seemed to come out of the blue. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets. 

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. “Prior to them death-threatening me, I had no reason to pay attention to them,” she says. 

Com beginnings

Most people have never heard of the Com, but its influence and threat are growing.

It’s an online community comprising loosely affiliated groups made up primarily of teens and twentysomethings in North America and English-speaking parts of Europe who have become part of what some call a cybercrime youth movement. 

Over the last decade, its criminal activities have escalated from simple distributed denial-of-service (DDoS) attacks that disrupt websites to SIM-swapping hacks that hijack a victim’s phone service, as well as crypto theft, ransomware attacks, and corporate data theft. These crimes have affected AT&T, Microsoft, Uber, and others. Com members have also been involved in various forms of sextortion aimed at forcing victims to physically harm themselves or record themselves doing sexually explicit activities. The Com’s impact has also spread beyond the digital realm to kidnapping, beatings, and other violence. 

One longtime cybercrime researcher, who asked to remain anonymous because of his work, says the Com is as big a threat in the cyber realm as Russia and China—for one unusual reason.

“There’s only so far that China is willing to go; there’s only so far that Russia or North Korea is willing to go,” he says, referring to international laws and norms, and fears of retaliation, that prevent states from going all out in cyber operations. That doesn’t stop the anarchic Com, he says.

“It is a pretty significant threat, and people tend to … push it under the rug [because] it’s just a bunch of kids,” he says. “But look at the impact [they have].”

Brogan says the amount of damage they do in terms of monetary losses “can become staggering very quickly.”

There is no single site where Com members congregate; they spread across a number of web forums and Telegram and Discord channels. The group follows a long line of hacking and subculture communities that emerged online over the last two decades, gained notoriety, and then faded or vanished after prominent members were arrested or other factors caused their decline. They differed in motivation and activity, but all emerged from “the same primordial soup,” says Nixon. The Com’s roots can be traced to the Scene, which began as a community of various “warez” groups engaged in pirating computer games, music, and movies.

When Nixon began looking at the Scene, in 2011, its members were hijacking gaming accounts, launching DDoS attacks, and running booter services. (DDoS attacks overwhelm a server or computer with traffic from bot-controlled machines, preventing legitimate traffic from getting through; booters are tools that anyone can rent to launch a DDoS attack against a target of choice.) While they made some money, their primary goal was notoriety.

This changed around 2018. Cryptocurrency values were rising, and the Com—or the Community, as it sometimes called itself—emerged as a subgroup that ultimately took over the Scene. Members began to focus on financial gain—cryptocurrency theft, data theft, and extortion.

The pandemic two years later saw a surge in Com membership that Nixon attributes to social isolation and the forced movement of kids online for schooling. But she believes economic conditions and socialization problems have also driven its growth. Many Com members can’t get jobs because they lack skills or have behavioral issues, she says. A number who have been arrested have had troubled home lives and difficulty adapting to school, and some have shown signs of mental illness. The Com provides camaraderie, support, and an outlet for personal frustrations. Since 2018, it has also offered some a solution to their money problems.

Loose-knit cells have sprouted from the community—Star Fraud, ShinyHunters, Scattered Spider, Lapsus$—to collaborate on clusters of crime. They usually target high-profile crypto bros and tech giants and have made millions of dollars from theft and extortion, according to court records. 

But dominance, power, and bragging rights are still motivators, even in profit operations, says the cybercrime researcher, which is partly why members target “big whales.”

“There is financial gain,” he says, “but it’s also [sending a message that] I can reach out and touch the people that think they’re untouchable.” In fact, Nixon says, some members of the Com have overwhelming ego-driven motivations that end up conflicting with their financial motives.

“Often their financial schemes fall apart because of their ego, and that phenomenon is also what I’ve made my career on,” she says.

The hacker hunter emerges

Nixon has straight dark hair, wears wire-rimmed glasses, and has a slight build and bookish demeanor that, on first impression, could allow her to pass for a teen herself. She talks about her work in rapid cadences, like someone whose brain is filled with facts that are under pressure to get out, and she exudes a sense of urgency as she tries to make people understand the threat the Com poses. She doesn’t suppress her happiness when someone she’s been tracking gets arrested.

In 2011, when she first began investigating the communities from which the Com emerged, she was working the night shift in the security operations center of the security firm SecureWorks. The center responded to tickets and security alerts emanating from customer networks, but Nixon coveted a position on the company’s counter-threats team, which investigated and published threat-intelligence reports on mostly state-sponsored hacking groups from China and Russia. Without connections or experience, she had no path to investigative work. But Nixon is an intensely curious person, and this created its own path.

[Photo: Allison Nixon, chief research officer at the cybersecurity investigations firm Unit 221B, where she tracks cybercriminals and helps bring them to justice. Credit: Ylva Erevall]

Where the threat team focused on the impact hackers had on customer networks—how they broke in, what they stole—Nixon was more interested in their motivations and the personality traits that drove their actions. She assumed there must be online forums where criminal hackers congregated, so she googled “hacking forums” and landed on a site called Hack Forums.

“It was really stupid simple,” she says.

She was surprised to see members openly discussing their crimes there. She reached out to someone on the SecureWorks threat team to see if he was aware of the site, and he dismissed it as a place for “script kiddies”—a pejorative term for unskilled hackers.

This was a time when many cybersecurity pros were shifting their focus away from cybercrime to state-sponsored hacking operations, which were more sophisticated and getting a lot of attention. But Nixon likes to zig where others zag, and her colleague’s dismissiveness fueled her interest in the forums. Two other SecureWorks colleagues shared that interest, and the three studied the forums during downtime on their shifts. They focused on trying to identify the people running DDoS booters. 

What Nixon loved about the forums was how accessible they were to a beginner like herself. Threat-intelligence teams require privileged access to a victim’s network to investigate breaches. But Nixon could access everything she needed in the public forums, where the hackers seemed to think no one was watching. Because of this, they often made mistakes in operational security, or OPSEC—letting slip little biographical facts such as the city where they lived, a school they attended, or a place they used to work. These details revealed in their chats, combined with other information, could help expose the real identities behind their anonymous masks. 

“It was a shock to me that it was relatively easy to figure out who [they were],” she says. 

She wasn’t bothered by the immature boasting and petty fights that dominated the forums. “A lot of people don’t like to do this work of reading chat logs. I realize that this is a very uncommon thing. And maybe my brain is built a little weird that I’m willing to do this,” she says. “I have a special talent that I can wade through garbage and it doesn’t bother me.” 

Nixon soon realized that not all the members were script kiddies. Some exhibited real ingenuity and “powerful” skills, she says, but because they were applying these to frivolous purposes—hijacking gamer accounts instead of draining bank accounts—researchers and law enforcement were ignoring them. Nixon began tracking them, suspecting that they would eventually direct their skills at more significant targets—an intuition that proved to be correct. And when they did, she had already amassed a wealth of information about them. 

She continued her DDoS research for two years until a turning point in 2013, when the cybersecurity journalist Brian Krebs, who made a career tracking cybercriminals, got swatted. 

About a dozen people from the security community worked with Krebs to expose the perpetrator, and Nixon was invited to help. Krebs sent her pieces of the puzzle to investigate, and eventually the group identified the culprit (though it would take two years for him to be arrested). When she was invited to dinner with Krebs and the other investigators, she realized she’d found her people.

“It was an amazing moment for me,” she says. “I was like, wow, there’s all these like-minded people that just want to help and are doing it just for the love of the game, basically.”

Staying one step ahead

It was porn stars who provided Nixon with her next big research focus—one that underscored her skill at spotting Com actors and criminal trends in their nascent stages, before they emerged as major threats.

In 2018, someone was hijacking the social media accounts of certain adult-film stars and using those accounts to blast out crypto scams to their large follower bases. Nixon couldn’t figure out how the hackers had hijacked the social media profiles, but she promised to help the actors regain access to their accounts if they agreed to show her the private messages the hackers had sent or received during the time they controlled them. These messages led her to a forum where members were talking about how they stole the accounts. The hackers had tricked some of these actors into disclosing the mobile phone numbers of others. Then they used a technique called SIM swapping to reset passwords for social media accounts belonging to those other stars, locking them out. 

In SIM swapping, fraudsters get a victim’s phone number assigned to a SIM card and phone they control, so that calls and messages intended for the victim go to them instead. This includes one-time security codes that sites text to account holders to verify themselves when accessing their account or changing its password. In some of the cases involving the porn stars, the hackers had manipulated telecom workers into making the SIM swaps for what they thought were legitimate reasons, and in other cases they bribed the workers to make the change. The hackers were then able to alter the password on the actors’ social media accounts, lock out the owners, and use the accounts to advertise their crypto scams. 

SIM swapping is a powerful technique that can be used to hijack and drain entire cryptocurrency and bank accounts, so Nixon was surprised to see the fraudsters using it for relatively unprofitable schemes. But SIM swapping had rarely been used for financial fraud at that point, and like the earlier hackers Nixon had seen on Hack Forums, the ones hijacking porn star accounts didn’t seem to grasp the power of the technique they were using. Nixon suspected that this would change and SIM swapping would soon become a major problem, so she shifted her research focus accordingly. It didn’t take long for the fraudsters to pivot as well.

Nixon’s skill at looking ahead in this way has served her throughout her career. On multiple occasions a hacker or hacking group would catch her attention—for using a novel hacking approach in some minor operation, for example—and she’d begin tracking their online posts and chats in the belief that they’d eventually do something significant with that skill. 

They usually did. When they later grabbed headlines with a showy or impactful operation, these hackers would seem to others to have emerged from nowhere, sending researchers and law enforcement scrambling to understand who they were. But Nixon would already have a dossier compiled on them and, in some cases, had unmasked their real identities as well. Lizard Squad was an example of this. The group burst into the headlines in 2014 and 2015 with a series of high-profile DDoS campaigns, but Nixon and her colleagues at the company where she worked at the time had already been watching its members as individuals for a while. So the FBI sought their assistance in identifying them.

“The thing about these young hackers is that they … keep going until they get arrested, but it takes years for them to get arrested,” she says. “So a huge aspect of my career is just sitting on this information that has not been actioned [yet].”

It was during the Lizard Squad years that Nixon began developing tools to scrape and record hacker communications online, though it would be years before she began using these concepts to scrape the Com chatrooms and forums. These channels held a wealth of data that might not seem useful during the nascent stage of a hacker’s career but could prove critical later, when law enforcement got around to investigating them; yet the contents were always at risk of being deleted by Com members or getting taken down by law enforcement when it seized websites and chat channels.

Over several years, she scraped and preserved whatever chatrooms she was investigating. But it wasn’t until early 2020, when she joined Unit 221B, that she got the chance to scrape the Telegram and Discord channels of the Com. She pulled all of this data together into a searchable platform that other researchers and law enforcement could use. The company hired two former hackers to help build scraping tools and infrastructure for this work; the result is eWitness, a community-driven, invitation-only platform. It was initially seeded only with data Nixon had collected after she arrived at Unit 221B, but has since been augmented with data that other users of the platform have scraped from Com social spaces as well, some of which doesn’t exist in public forums anymore.

Brogan, of the FBI, says it’s an incredibly valuable tool, made more so by Nixon’s own contributions. Other security firms scrape online criminal spaces as well, but they seldom share the content with outsiders, and Brogan says Nixon’s work is unique because she engages with the actors in chat spaces to draw out information from them that “would not be otherwise normally available.” 

The preservation project she started when she got to Unit 221B could not have been better timed, because it coincided with the pandemic, the surge in new Com membership, and the emergence of two disturbing Com offshoots, CVLT and 764. She was able to capture their chats as these groups first emerged; after law enforcement arrested leaders of the groups and took control of the servers where their chats were posted, this material went offline.

CVLT—pronounced “cult”—was reportedly founded around 2019 with a focus on sextortion and child sexual abuse material. 764 emerged from CVLT and was spearheaded by a 15-year-old in Texas named Bradley Cadenhead, who named it after the first digits of his zip code. Its focus was extremism and violence. 

In 2021, because of what she observed in these groups, Nixon turned her attention to sextortion among Com members.

The type of sextortion they engaged in has its roots in activity that began a decade ago as “fan signing.” Hackers would use the threat of doxxing to coerce someone, usually a young female, into writing the hacker’s handle on a piece of paper. The hacker would use a photo of it as an avatar on his online accounts—a kind of trophy. Eventually some began blackmailing victims into writing the hacker’s handle on their face, breasts, or genitals. With CVLT, this escalated even further; targets were blackmailed into carving a Com member’s name into their skin or engaging in sexually explicit acts while recording or livestreaming themselves.

During the pandemic a surprising number of SIM swappers crossed into child sexual abuse material and sadistic sextortion, according to Nixon. She hates tracking this gruesome activity, but she saw an opportunity to exploit it for good. She had long been frustrated at how leniently judges treated financial fraudsters because of their crimes’ seemingly nonviolent nature. But she saw a chance to get harsher sentences for them if she could tie them to their sextortion and began to focus on these crimes. 

At this point, Waifu still wasn’t on her radar. But that was about to change.

Endgame

Nixon landed in Waifu’s crosshairs after he and fellow members of the Com were involved in a large hack of AT&T customer call records in April 2024.

Waifu’s group gained access to dozens of cloud accounts with Snowflake, a company that provides online data storage for customers. One of those customers had more than 50 billion call logs of AT&T wireless subscribers stored in its Snowflake account. 

Among the subscriber records were call logs for FBI agents who were AT&T customers. Nixon and other researchers believe the hackers may have been able to identify the phone numbers of agents through other means. Then they may have used a reverse-lookup program to identify the owners of phone numbers that the agents called or that called them and found Nixon’s number among them. This is when they began harassing her.

But then they got reckless. They allegedly extorted nearly $400,000 from AT&T in exchange for promising to delete the call records they’d stolen. Then they tried to re-extort the telecom, threatening on social media to leak the records they claimed to have deleted if it didn’t pay more. They tagged the FBI in the post.

“It’s like they were begging to be investigated,” says Nixon.

The Snowflake breaches and AT&T records theft were grabbing headlines at the time, but Nixon had no idea her number was in the stolen logs or that Waifu/Judische was a prime suspect in the breaches. So she was perplexed when he started taunting and threatening her online.

Over several weeks in May and June, a pattern developed. Waifu or one of his associates would post a threat against her and then post a message online inviting her to talk. She assumes now that they believed she was helping law enforcement investigate the Snowflake breaches and hoped to draw her into a dialogue to extract information from her about what authorities knew. But Nixon wasn’t helping the FBI investigate them yet. It was only after she began looking at Waifu for the threats that she became aware of his suspected role in the Snowflake hack.

It wasn’t the first time she had studied him, though. Waifu had come to her attention in 2019 when he bragged about framing another Com member for a hoax bomb threat and later talked about his involvement in SIM-swapping operations. He made an impression on her. He clearly had technical skills, but Nixon says he also often appeared immature, impulsive, and emotionally unstable, and he was desperate for attention in his interactions with other members. He bragged about not needing sleep and using Adderall to hack through the night. He was also a bit reckless about protecting personal details. He wrote in private chats to another researcher that he would never get caught because he was good at OPSEC, but he also told the researcher that he lived in Canada—which turned out to be true.

Nixon’s process for unmasking Waifu followed a general recipe she used to unmask Com members: She’d draw a large investigative circle around a target and all the personas that communicated with that person online, and then study their interactions to narrow the circle to the people with the most significant connections to the target. Some of the best leads came from a target’s enemies; she could glean a lot of information about their identity, personality, and activities from what the people they fought with online said about them.

“The enemies and the ex-girlfriends, generally speaking, are the best [for gathering intelligence on a suspect],” she says. “I love them.”

While she was doing this, Waifu and his group were reaching out to other security researchers, trying to glean information about Nixon and what she might be investigating. They also attempted to plant false clues with the researchers by dropping the names of other cybercriminals in Canada who could plausibly be Waifu. Nixon had never seen cybercriminals engage in counterintelligence tactics like this.

Amid this subterfuge and confusion, Nixon and another researcher working with her did a lot of consulting and cross-checking with other researchers about the clues they were gathering to ensure they had the right name before they gave it to the FBI.

By July she and the researcher were convinced they had their guy: Connor Riley Moucka, a 25-year-old high school dropout living with his grandfather in Ontario. On October 30, Royal Canadian Mounted Police converged on Moucka’s home and arrested him.

According to an affidavit filed in Canadian court, a plainclothes Canadian police officer visited Moucka’s house under some pretense on the afternoon of October 21, nine days before the arrest, to secretly capture a photo of him and compare it with an image US authorities had provided. The officer knocked and rang the bell; Moucka opened the door looking disheveled and told the visitor: “You woke me up, sir.” He told the officer his name was Alex; Moucka sometimes used the alias Alexander Antonin Moucka. Satisfied that the person who answered the door was the person the US was seeking, the officer left. Waifu’s online rants against Nixon escalated at this point, as did his attempts at misdirection. She believes the visit to his door spooked him.

Nixon won’t say exactly how they unmasked Moucka—only that he made a mistake.

“I don’t want to train these people in how to not get caught [by revealing his error],” she says.

The Canadian affidavit against Moucka reveals a number of other violent posts he’s alleged to have made online beyond the threats he made against her. Some involve musings about becoming a serial killer or mass-mailing sodium nitrate pills to Black people in Michigan and Ohio; in another, his online persona talks about obtaining firearms to “kill Canadians” and commit “suicide by cop.” 

Prosecutors, who list Moucka’s online aliases as including Waifu, Judische, and two more in the indictment, say he and others extorted at least $2.5 million from at least three victims whose data they stole from Snowflake accounts. Moucka has been charged with nearly two dozen counts, including conspiracy, unauthorized access to computers, extortion, and wire fraud. He has pleaded not guilty and was extradited to the US last July. His trial is scheduled for October this year, though hacking cases usually end in plea agreements rather than going to trial. 

It took months for authorities to arrest Moucka after Nixon and her colleague shared their findings, but an alleged associate of his in the Snowflake conspiracy, a US Army soldier named Cameron John Wagenius (Kiberphant0m online), was arrested more quickly.

On November 10, 2024, Nixon and her team found a mistake Wagenius made that helped identify him, and on December 20 he was arrested. Wagenius has already pleaded guilty to two charges related to the sale or attempted sale of confidential phone records and will be sentenced this March.

These days Nixon continues to investigate sextortion among Com members. But she says that remaining members of Waifu’s group still taunt and threaten her.

“They are continuing to persist in their nonsense, and they are getting taken out one by one,” she says. “And I’m just going to keep doing that until there’s no one left on that side.” 

Kim Zetter is a journalist who covers cybersecurity and national security. She is the author of Countdown to Zero Day.

The Download: unraveling a death threat mystery, and AI voice recreation for musicians

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Hackers made death threats against this security researcher. Big mistake.

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested.

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the accounts was suddenly threatening her. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets.

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. Read the full story.

—Kim Zetter

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 

ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago.

The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease, it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe.

Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again. Read the full story.

—Jessica Hamzelou

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The creator of OpenClaw is joining OpenAI
Sam Altman was sufficiently impressed by Peter Steinberger’s ideas for getting agents to interact with each other. (The Verge)
+ The move demonstrates how seriously OpenAI is taking agents. (FT $)
+ Moltbook was peak AI theater. (MIT Technology Review)

2 How North Korea is illegally funding its nuclear program
A defector explains precisely how he duped remote IT workers into funneling money into its missiles. (WSJ $)
+ Nukes are a hot topic across Europe right now. (The Atlantic $)

3 Radio host David Greene is convinced Google stole his voice
He’s suing the company over similarities between his own distinctive vocalizations and the AI voice used in its NotebookLM app. (WP $)
+ People are using Google study software to make AI podcasts. (MIT Technology Review)

4 US automakers are worried by the prospect of a Chinese invasion
They fear Trump may greenlight Chinese carmakers to build plants in the US. (FT $)
+ China figured out how to sell EVs. Now it has to deal with their aging batteries. (MIT Technology Review)

5 Google downplays safety warnings on its AI-generated medical advice
It only displays extended warnings when a user clicks to ‘Show more.’ (The Guardian)
+ Here’s another reason why you should keep a close eye on AI Overviews. (Wired $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

6 How to make Lidar affordable for all cars
A compact device could prove the key. (IEEE Spectrum)

7 Robot fight nights are all the rage in San Francisco
Step aside, Super Bowl! (Rest of World)
+ Humanoid robots will take to the stage for Chinese New Year celebrations. (Reuters)

8 Influencers and TikTokers are feeding their babies butter
But there’s no scientific evidence to back up some of their claims. (NY Mag $)

9 This couple can’t speak the same language
Microsoft Translator has helped them to sustain a marriage. (NYT $)
+ AI romance scams are on the rise. (Vox)

10 AI promises to make better, more immersive video games
But those are lofty goals that may never be achieved. (The Verge)
+ Google DeepMind is using Gemini to train agents inside Goat Simulator 3. (MIT Technology Review)

Quote of the day

“Right now this is a baby version. But I think it’s incredibly concerning for the future.”

—Scott Shambaugh, a software engineer who recently became the subject of a scathing blog post written by an AI bot accusing him of hypocrisy and prejudice, tells the Wall Street Journal why this could be the tip of the iceberg.

One more thing

Why do so many people think the Fruit of the Loom logo had a cornucopia?

Quick question: Does the Fruit of the Loom logo feature a cornucopia?

Many of us have been wearing the company’s T-shirts for decades, and yet the question of whether there is a woven brown horn of plenty on the logo is surprisingly contentious.

According to a 2022 poll, 55% of Americans believe the logo does include a cornucopia, 25% are unsure, and only 21% are confident that it doesn’t, even though this last group is correct.

There’s a name for what’s happening here: the “Mandela effect,” or collective false memory, so called because a number of people misremember that Nelson Mandela died in prison. Yet while many find it easy to let their unconfirmable beliefs go, some spend years seeking answers—and vindication. Read the full story.

—Amelia Tait

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ When dating apps and book lovers collide, who knows what could happen.
+ It turns out humans have a secret third set of teeth, which is completely wild.
+ We may never know the exact shape of the universe. But why is that?
+ If your salad is missing a certain something, some crispy lentils may be just the ticket.

Tuning into the future of collaboration 

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprises, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications enterprises like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. 

Necessity during the pandemic accelerated years of innovation in months.  

“Audio and video just working is a baseline for collaboration,” says Brendan Ittelson, chief ecosystem officer at Zoom. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.”

Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges. 

“If you think about the fundamental need here,” adds Sam Sabet, chief technology officer at Shure, “it’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.”

For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion. 

“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate is instrumental.”

The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come. 

“It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson. 

This episode of Business Lab is produced in partnership with Shure. 

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.  

This episode is produced in partnership with Shure.  

Now as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, their impact on audio innovation, and the emerging opportunities for further advances in audio capabilities.

Two words for you: elevated audio.  

My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom.  

Welcome Sam, welcome Brendan. 

Sam Sabet: Thank you, Megan. It’s a pleasure to be here and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation. 

Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into on this area. 

Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, I wonder what was the first technological mission for Shure? 

Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological missions when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally what we were aiming to do, or we focused on, was to basically improve the ease of use and configuration of audio tool sets.

Because unlike the office environment where it might be a lot more controlled, people are working from non-traditional areas like home offices or other makeshift solutions, we needed to make sure that people could still get pristine, studio-level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions. We created tool sets that allowed for ease of deployment and remote configuration and management so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology.

Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. I mean, what was the first mission at Zoom when virtual connection became this necessity for everyone? 

Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted to how we do this reliably, securely, and at scale to ensure these millions of new users could connect instantly without friction. We really shifted our thinking from being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate just became so important. We really focused on how can we be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on the simplicity, accessibility, and trust in communication technology so that people could focus on that connection and not the technology that makes it possible.

Megan: That’s so true. It did really just become an absolute lifeline for people, didn’t it? And before we dive into the technologies beyond these emerging capabilities, I wonder if we could first talk about just the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as or even more impactful? 

Sam: Yeah, you’re absolutely correct. I mean, clear audio is absolutely critical for effective communications. Video quality is very important, absolutely, but poor audio can really hinder understanding and engagement. As a matter of fact, there are studies and research from places such as Yale University showing that poor audio can make understanding more challenging and even affect retention of information. Especially in an educational type environment where there’s a lot of background noise and very differing types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker’s voice and minimize that unwanted background noise so that we could ensure that the communication was efficient, was well understood, and that it removed the distraction so people could be able to actually communicate and retain the information that was being shared.

Megan: It is incredible just how impactful audio can be, isn’t it? Brendan, I mean as you said, remote and hybrid collaboration is part of Zoom’s DNA. What observations can you share about how users have grown along with the technological advancements and maybe how their expectations have grown as well?

Brendan: Definitely. I mean, users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we’re really looking at these trends in terms of how people want to be better when they’re at home. For example, AI-powered tools like Smart Summaries, translation, and noise suppression help people stay productive and connected no matter where they’re working. But then this also comes into play at the office. We’re starting to see folks dive into our technology like Intelligent Director and Smart Name Tags that create that meeting equity even when they’re in a conference room.

So, the remote experience and the room experience all are similar and create that same ability to be seen, heard, and contribute. And we’re now diving further into this that it’s beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond the meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center and more. And all of this being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals. 

Megan: I mean, so Brendan, we know it wasn’t only workplaces that were affected by the pandemic, it was also the education sector that had to undergo a huge change. I wondered if you could talk a little bit about how Zoom has operated in that higher education sphere as well. 

Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in, because education and learning are something we value as a company, and so we have invested in that sector. And personally, being the son of academics, it is always an area that I find fascinating. We continue to invest in terms of how do we make the classroom a stronger space? And especially now that the classroom has changed, where it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means.

That’s what truly excites us, is being able to have that technology that allows people to pursue their desires, their interests, and really up-level their pursuits and inspire more. We’re constantly investing in how to allow those messages to get out and to integrate in the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom, into learning management systems, to make that a seamless flow so that students and their educators can just collaborate seamlessly. And also that we can support all the infrastructure and administration that helps make that possible. 

Megan: Absolutely. Such an important thing. And Sam, Shure as well, could you talk to us a bit about how you worked in that kind of education space as well from an audio point of view? 

Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart because I’m actually an adjunct professor in my free time. 

Megan: Oh, wow. Very impressive. 

Sam: And the challenges of trying to do this sort of a hybrid lecture, if you will. And Shure has been particularly well suited for this environment and we’ve been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it’s a little different than just having a meeting around the conference table. And Shure has focused on creating products that allow this combination of a presenter scenario along with a meeting space plus the far end where users or students are remote, they can hear intelligibly what’s happening in the lecture hall, but they can also participate. 

Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose built for presenters and educators like our MXW neXt product line, we’ve created technologies that allow those two previously separate worlds to integrate together. And then add that onto integrating with Zoom and other products that allow for that collaboration has been very instrumental. And again, being a user and providing those lectures, I can see a night and day difference and just how much more effective my lectures are today from where they were five to six years ago. And that’s all just made possible by all the technologies that are purpose built for these scenarios and integrating more with these powerful tools that just make the job so much more seamless. 

Megan: Absolutely fascinating that you got to put the technology to use yourself as well to check that it was all working well. And you mentioned AI there, of course. I mean, Sam, what AI technologies have had the most significant impact on recent audio advancements too? 

Sam: Yeah. Absolutely. If you think about the fundamental need here, it’s the ability to amplify the audio and the information that’s really needed and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate. With our innovations at Shure, we’ve leveraged cutting-edge technologies to both enhance communication effectiveness and to align seamlessly with evolving features in unified communications, like the ones that Brendan just mentioned in the Zoom platform.  

We partner with industry leaders like Zoom to ensure that we’re providing the ability to be able to focus on that needed audio and eliminate all the background distractions. AI has transformed that audio technology with things like machine learning algorithms that enable us to do more real-time audio processing and significantly enhancing things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software that we’ve released as well as part of a complete room solution uses AI to optimize sound in different environments. 

And really that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic, is that the key is really flexibility and being able to adapt to changing work environments. Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.  

And then last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras so that we enable voice tracking technology so that cameras can not only identify who’s speaking, but you have the ability to hear and see people clearly. And that in general just enhances the overall communication experience. 

Megan: Wow. It’s just so much innovation in quite a short space of time really. I mean, Brendan, you mentioned AI a little bit there beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities? 

Brendan: Definitely. And I couldn’t agree more with Sam that, I mean, AI has made such a big shift and it’s really across the spectrum. And when I think about it, there’s almost three tiers when you look at the stack. You start off at the raw audio where AI is doing those things like noise suppression, echo cancellation, voice enhancements. All of that just makes this amazing audio signal that can then go into the next layer, which is the speech AI and natural language processing. Which starts to open up those items such as the real-time transcription, translation, searchable content to make the communication not just what’s heard, but making it more accessible to more individuals and inclusive by providing that content in a format that is best for them. 

And then you take those two layers and put the generative and agentic AI on top of that, that can start surfacing insights, summarize the conversation, and even take actions on someone’s behalf. It really starts to change the way that people work and how they have access and allows them to connect. I think it is a huge shift and I’m very excited by how those three levels start to interact to really enable people to do more and to connect thanks to AI. 

Megan: Yeah. Absolutely. So much rich information that can come out from a single call now because of those sorts of tools. And following on from that, Brendan, I mean, you mentioned before the Zoom AI Companion. I wondered if you could talk a bit about what were your top priorities when building that product to ensure it was truly useful for your customers? 

Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one, trust and security, and then accuracy and relevance. On the trust side, it was a non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure. 

Megan: Of course. 

Brendan: And then with accuracy, we needed to ensure AI outputs weren’t generic but grounded in the actual context of a meeting, a chat or a product. But the real story here when I think about AI Companion is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces that friction in hybrid work, whether you’re in a meeting room, a Zoom room, or collaborating across different collaboration tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone no matter where and how they’re working.  

All this leads to a result where it’s practical, trustworthy, and embedded where work happens. And it’s just not another tool to manage, it’s there in someone’s flow of work to help them along the way. 

Megan: Yeah. That trust piece is just so important, isn’t it, today? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements technologies like large language models, LLMs, are enabling. 

Sam: Absolutely. Audio is really a rich data source that’s added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they’ve had significant advances due to audio data that’s provided for them. And to Brendan’s point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. These AI Companions have been very instrumental in the way we do business every day. I mean, between transcription, speaker attributions, the ability to add action items within a meeting and be able to track what’s happening in our interactions, all of that really has to rely on that accurate and pristine input from audio into the AI. I feel that further improves the trust that our end users have in the results of AI and enables them to leverage it more.  

If you think about it, if you look at how AI audio inputs enhance that interactive AI system, it enables more natural and intuitive interactions with AI. And it really allows for that seamless integration and the ability for users to use it without having to worry about, is the room set up correctly? Is the audio level proper? And when we talk even about agentic AI, we’re working on future developments where systems can self-heal or detect that there are issues in the environment so that they can autocorrect and adapt in all these different environments and further enable the AI to be able to do a much more effective job, if you will. 

Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a future forward look, if we could. Brendan, can you share innovations that Zoom is working on now and what are you most excited to see come to fruition? 

Brendan: Well, your timing for this question is absolutely perfect because we’ve just wrapped up Zoomtopia 2025. 

Megan: Oh, wow. 

Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there’s AI Companion 3.0. And we’ve launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0 when it releases, it isn’t just about transcribing; it’s turned into really a platform that helps you with follow-up tasks, preps you for your next conversation, and even proactively suggests how to free up your time. For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip and still stay informed, and even prepare you with context and insights before you walk into the conversation. It’s about helping people focus on strategy and creativity instead of administrative busy work. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration. 

Acting as an assistant for a group chat and meetings, you can simply ask, “@Zoomie, what’s the latest update on the project?” Or “@Zoomie, what are the team’s action items?” And then get instant answers. Or because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights, temperature, or even sharing your screen. And while all these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.  

Zoom has always believed in an open platform and philosophy and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms to work with the workflows that they have across all the different SaaS vendors that they might have in their environment, whether that’s Google, Microsoft, ServiceNow, Cisco, and so many other tools. 

Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on there are so many exciting things happening in audio too. What are you working on at Shure? And what are you most excited to see come to fruition? 

Sam: At Shure, our engineering teams are really working on a range of exciting projects, but particularly we’re working on developing new collaboration solutions that are integral for IT teams and end users. And these integrate obviously with the leading UC platforms.  

We’re integrating audio and video technologies that are scalable, reliable solutions. And we want to be able to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize every type of workspace essentially. Not just meeting rooms, but lecture halls, work from home scenarios, et cetera.  

The other area that we really focus on in terms of our reliability and quality really comes from our DNA in the pro audio world. And that’s really all-around wireless audio technologies. We’re developing our next-generation wireless systems and these are going to offer even greater reliability and range. And they really become ideal for everything from a large-scale event to personal home use and the gamut across that whole spectrum. And I think all of that in partnership with our partners like Zoom will help just facilitate the modern workspace. 

Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much.  

That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England.  

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.  

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

Bing Adds AI Visibility Reporting

Unlike traditional search engine optimization, AI search optimization lacks native performance reporting to help businesses develop organic visibility strategies.

Google’s Search Console combines AI Overviews and organic listings in its “Performance” section, leaving optimizers to guess which channel drove visibility and traffic. ChatGPT shares metrics only with publishers that have licensed their content to OpenAI.

Bing is the first platform to offer some transparency. A few weeks after publishing its “guide to AEO and GEO,” Bing launched an “AI Performance Report” in Webmaster Tools.

AI Performance

The new report tracks citations in Microsoft Copilot, AI-generated summaries in Bing, and select AI partner integrations. But there’s no option to filter by a single surface, and no way to identify the integration partners or their purpose.

The report shows users’ “Total Citations” for the chosen period and “Avg. Cited Pages.” It then lists:

  • “Grounding Queries,” which are “the key phrases the AI used when retrieving content that was cited in its answer.” In other words, the queries are the “fan-out” terms that Bing’s AI agents use to search for and find answers, though we don’t know which search engines or platforms they access.
  • “Pages,” the URLs mentioned in AI answers.
Screenshot of the new AI Performance section

The new Webmaster Tools section lists citations by “Grounding Queries” and “Pages.”

Each tab includes additional visibility data:

  • For every grounding query, Webmaster Tools reports on the average number of unique pages cited per day in AI answers.
  • For each cited URL, the report includes its frequency — how often it appears in an answer — not its importance, ranking, or role within a response.

The report provides no traffic or click-through data and no clarity into which Grounding Queries triggered which citations.

Using the Data

The report is a good first step, but it offers little actionable data. Perhaps it will force other players to do more.

According to Bing, the new report:

… shows how your site’s content is used in AI‑generated answers across Microsoft Copilot and partner experiences by highlighting which pages are cited, how visibility trends change over time, and the grounding queries associated with your content.

I’m making the report more useful by:

  • Researching organic keywords on Bing and Google that drive traffic to the cited URLs,
  • Prompting ChatGPT or Gemini to turn the keywords into prompts,
  • Evaluating whether the cited pages address those prompts or need better structure or clarity.

Also, I identify common modifiers in the grounding queries to understand how AI agents find the pages.

Identify common modifiers, such as “virus” in this example, to understand how AI agents find your pages.
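To sketch that modifier analysis, here is a minimal, hypothetical Python helper. It assumes you have copied or exported the grounding-query strings from the report (Bing does not document a programmatic export for this data, so the input format here is an assumption); it then counts recurring terms to surface common modifiers:

```python
# Hypothetical helper: given grounding-query strings from the AI
# Performance report (the input format is an assumption), count
# recurring terms to surface common modifiers such as "virus".
from collections import Counter

STOPWORDS = frozenset({"the", "a", "for", "to", "of", "in", "how", "is", "from"})

def common_modifiers(queries):
    """Tokenize queries, drop stopwords, return (term, count) by frequency."""
    counts = Counter()
    for q in queries:
        counts.update(w for w in q.lower().split() if w not in STOPWORDS)
    return counts.most_common()

queries = [
    "remove virus laptop",
    "best antivirus software",
    "computer virus symptoms",
]
print(common_modifiers(queries)[0])  # → ('virus', 2)
```

Terms that recur across many grounding queries are the modifiers AI agents keep pairing with your topic, which can guide which angles your cited pages should cover.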

Webmaster Tools

Setting up Bing Webmaster Tools takes only a couple of minutes if your site is already verified in Google Search Console.

Log in to Webmaster Tools with your Microsoft account, click “Add site,” and choose the “Import your sites from GSC” option. Allow roughly 24 hours for Bing to collect and report the data.

CleanTalk WordPress Plugin Vulnerability Threatens Up To 200K Sites via @sejournal, @martinibuster

An advisory was issued for a critical vulnerability, rated 9.8/10, in the CleanTalk Antispam WordPress plugin, which is installed on more than 200,000 websites. The vulnerability enables unauthenticated attackers to install vulnerable plugins that can then be used to launch remote code execution attacks.

CleanTalk Antispam Plugin

The CleanTalk Antispam plugin is subscription-based software as a service that protects websites from inauthentic user actions such as spam subscriptions, registrations, and form submissions, and includes a firewall for blocking bad bots.

Because it’s a subscription-based plugin, it relies on a valid API key to reach the CleanTalk servers, and this part of the plugin is where the flaw behind the vulnerability was discovered.

CleanTalk Plugin Vulnerability CVE-2026-1490

The plugin contains a WordPress function that checks if a valid API key is being used to contact the CleanTalk servers. A WordPress function is PHP code that performs a specific task.

In this specific case, if the plugin cannot validate a connection to CleanTalk’s servers because of an invalid API key, it falls back on the checkWithoutToken function to verify “trusted” requests.

The problem is that the checkWithoutToken function doesn’t properly verify the identity of the requester. An attacker can spoof a reverse DNS (PTR) record so that their requests appear to come from the cleantalk.org domain, then launch their attacks. This means the vulnerability only affects sites where the plugin lacks a valid API key.

The Wordfence advisory describes the vulnerability:

“The Spam protection, Anti-Spam, FireWall by CleanTalk plugin for WordPress is vulnerable to unauthorized Arbitrary Plugin Installation due to an authorization bypass via reverse DNS (PTR record) spoofing on the ‘checkWithoutToken’ function…”
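To illustrate why PTR-only checks are spoofable, here is a minimal, hypothetical sketch in Python (the plugin itself is PHP, and this is not CleanTalk’s actual code; the hostname and IP addresses are made up). An attacker controls the reverse DNS for their own IP block, so a PTR record can claim any hostname; forward-confirmed reverse DNS (FCrDNS) closes the gap by resolving that hostname back to its real IP addresses:

```python
# Hypothetical sketch, not CleanTalk's actual code. Resolver functions
# are injected so the logic is testable without network access.

def naive_check(ip, reverse_lookup):
    """Trusts the PTR record alone. The attacker controls reverse DNS
    for their own IP block, so the PTR can say anything."""
    hostname = reverse_lookup(ip)
    return hostname.endswith("cleantalk.org")

def fcrdns_check(ip, reverse_lookup, forward_lookup):
    """Forward-confirmed reverse DNS: resolve the PTR hostname back to
    IPs and require the original IP to be among them. The attacker does
    not control cleantalk.org's forward records, so the spoof fails."""
    hostname = reverse_lookup(ip)
    if not hostname.endswith("cleantalk.org"):
        return False
    return ip in forward_lookup(hostname)

# Simulated attacker: owns 203.0.113.9 and sets its PTR record to a
# cleantalk.org name, but the domain's real A records don't include it.
attacker_ip = "203.0.113.9"
reverse = lambda ip: "spoofed.cleantalk.org"          # attacker-controlled PTR
forward = lambda host: ["192.0.2.10", "192.0.2.11"]   # real service IPs (made up)

print(naive_check(attacker_ip, reverse))              # True: bypass succeeds
print(fcrdns_check(attacker_ip, reverse, forward))    # False: spoof rejected
```

The sketch shows the general class of flaw the advisory describes: identity inferred from a record the requester controls, rather than one the service controls.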

Recommended Action

The vulnerability affects CleanTalk plugin versions up to and including 6.71. Wordfence recommends users update their installations to the latest version, 6.72 at the time of writing.

Are Your Google Ads Gen Z Proof? Strategies To Win The 18-24 Segment

When the average customer age increases for a brand, it’s rarely a platform failure. It’s usually a signal that younger audiences are discovering, evaluating, and buying in different places, and older established brands haven’t kept pace.

As of 2026, Gen Z spans ages 14 to 29. They’re the first generation raised entirely in a digital world, moving from smartphones to social video to AI without ever experiencing life without them. Their expectations for advertising reflect that upbringing. Traditional creative formats, linear funnels, and keyword‑centric strategies simply don’t match how they navigate the internet.

Many PPC practitioners built their instincts during the 2010-2016 era, when search behavior was more predictable and creative requirements were narrower. Those instincts don’t translate cleanly to a generation that jumps between platforms, verifies claims through peers, and expects ads to feel like the content they already consume.

This article looks at why standard Google Ads approaches fall short with the 18-24 segment, how Gen Z actually discovers products, and what advertisers can adjust to stay relevant.

The “Skip Ad” Generation

Gen Z grew up with pre‑roll ads, sponsored content, and ad blockers. They learned early how to ignore anything that feels like an interruption. Studies show their active attention for digital ads drops after about 1.3 seconds, a figure that explains a lot about how they respond to ads.

Authenticity As A Baseline Expectation

For Gen Z, authenticity isn’t a marketing trend; it’s the baseline expectation. They gravitate toward brands that feature real people instead of polished models, communicate in plain, natural language rather than corporate phrasing, and embrace imperfect, lo-fi visuals over highly produced studio creative.

84% of Gen Z say they trust brands more when they see real customers in the ads.

Girlfriend Collective is a good example. Its product imagery features real people, not traditional models, and the approach mirrors what Gen Z expects to see in their feeds.

Authenticity isn’t a differentiator anymore. It’s table stakes.

Real people featured in Girlfriend Collective advertising campaign.
Girlfriend Collective uses real people in its advertising, aligning with Gen Z’s preference for authentic, human‑centered creative. (Screenshot from girlfriend.com, February 2026)

Discovery Habits: Beyond Google Search

Google Search still matters, but it’s no longer the first stop for many younger users.

Recent data shows:

  • 64% of Gen Z use TikTok as a primary search engine.
  • 77% identify TikTok as their top platform for discovering products.

Their discovery path often starts with a short‑form video, not a search bar. They move through:

  • TikTok.
  • YouTube Shorts.
  • Instagram Reels.
  • Reddit.
  • Creator content.

Only after that do they turn to Google to verify what they’ve seen. Queries like [best running shoes 2026] often begin on TikTok and end on Google, not the other way around.

The Role Of Performance Max And Demand Gen

Google’s push toward Performance Max and Demand Gen reflects this shift. These formats reach users across YouTube, Discover, Gmail, Display, and Search, which are the same surfaces Gen Z moves through naturally.

But PMax can only perform as well as the creative inside it. Legacy assets built for static search campaigns rarely translate well to visual placements. Gen Z scrolls past anything that looks like an ad, especially if it’s overly polished or logo‑heavy.

The Shift Toward Intent‑Based Matching

Keyword matching is evolving. During a January 2026 PPC Chat session, Google Ads Liaison Ginny Marvin noted that appearing in AI Overviews and “AI Mode” inventory requires broad match or keywordless targeting.

This aligns with how Gen Z searches. Their queries are conversational, fragmented, and context-driven, which mirrors Google’s increasing emphasis on intent, context, and meaning rather than strict keyword matching.

Advertisers who avoid broad match risk losing visibility in the surfaces where younger users spend their time.

The Nonlinear Buyer Journey

Gen Z doesn’t move through a funnel. Their path looks more like a loop:

  1. Short‑form video discovery.
  2. Google Search verification.
  3. Social proof on Reddit or Instagram.
  4. Long‑form YouTube reviews.
  5. More short‑form content.
  6. Conversion.

Social proof carries significant weight. 77% say UGC helps them make decisions, and unboxing‑style clips can lift conversion rates by up to 161%.

The offer doesn’t change, but the format of the proof does.

Privacy And The Value Exchange

Gen Z is cautious about privacy but not unwilling to share data. They simply expect a clear value exchange. When that exchange is obvious and transparent, they are more open to participating. Incentives that work include early access, exclusive drops, loyalty rewards, and insider content.

Transparency matters. They want to know what they’re giving and what they’re getting.

Tactical Adjustments To Future‑Proof Your Google Ads Account

The following adjustments can help advertisers align with Gen Z behavior.

1. Rewrite RSAs for Tone and Context

Many RSAs still rely on keyword‑stuffed templates:

  • “Blue running shoes”
  • “Best blue running shoes”

RSAs can generate over 43,680 headline-and-description combinations. Use that flexibility to test tone, not just keywords: experiment with conversational phrasing, modern language, benefit-driven messaging, social-proof elements, and UGC-inspired copy that better reflects how audiences actually search and engage.

This approach allows Google to assemble combinations that better match user intent.

How RSAs Handle Text Variation

RSAs assemble headlines and descriptions dynamically. The inputs determine the tone Google can test.

The following two examples illustrate how different brands approach RSA‑style messaging and how those choices affect relevance and emotional resonance.

Example 1: Glossier

Headline: Glow With Glossier® Today – Feel Your Glowy, Dewy Best

Description: Shop Accessible Luxury Products Inspired By Our Community To Make You Look And Feel Good. Shop Glossier Skincare Essentials For Glowy, Dewy Skin + Makeup You’ll Actually Use.

Analysis:

  • Conversational, emotional, community‑driven.
  • This style aligns with Gen Z’s expectations.
Sponsored Glossier skincare ad featuring a headline about glowing skin and promotional text highlighting community‑inspired products.
Glossier’s ad uses emotionally driven language and community framing, aligning with Gen Z’s preference for authentic, benefit-led messaging. (Screenshot by author, February 2026)

Example 2: COVERGIRL

Headline: COVERGIRL® Official Site – Available Online & In‑Store

Description: Explore Our New Makeup Products, Best Sellers, & Trending Tutorials to Enhance Your Look.

Analysis:

  • Structured, brand‑led, availability‑focused.
  • Clear and informative, but less emotionally resonant.
Sponsored COVERGIRL makeup ad with a headline promoting online and in‑store availability and text highlighting new products and tutorials.
COVERGIRL’s ad uses structured, brand-led messaging focused on product availability and category breadth. (Screenshot by author, February 2026)

Key Takeaway For RSAs

Both ads are valid inputs for RSAs, but they serve different strategic purposes:

  • Glossier: Conversational tone; emotional + community focus; high Gen Z alignment.
  • COVERGIRL: Informational tone; product + availability focus; moderate Gen Z alignment.

A mix of both styles gives Google more flexibility across AI‑driven surfaces like AI Overviews and AI mode.

2. Refresh Creative Assets

Gen Z doesn’t like advertising that interrupts content, which means asset groups should feel native to the environments where they appear. That includes lifestyle imagery, lo-fi video, real customers, UGC-style clips, and visuals that blend naturally into the feed rather than stand out as overt advertising.

Organic‑looking creative performs better across PMax and Demand Gen.

3. Leverage Smart Bidding

Smart bidding is designed for nonlinear, multi-touch journeys. It adapts to device switching, platform hopping, and privacy-centric signals, allowing campaigns to respond more effectively to the way users move between channels and interactions before converting.

This makes it well‑suited for Gen Z’s browsing behavior.

4. Test Gen Z‑Specific Variants

Use Google Ads Experiments to compare:

  • Control: Standard corporate creative
  • Variant: Conversational, UGC‑style creative

This approach provides clear performance insights without requiring a full account overhaul.

5. Use Data‑Driven Attribution (DDA)

Last‑click attribution hides the impact of upper‑funnel channels. DDA provides a clearer view of how YouTube, Demand Gen, and PMax contribute to conversions, which is essential for understanding Gen Z behavior.

Adapting To The New Standard

Gen Z is not opposed to advertising; they are opposed to interruption. They respond to messaging that feels honest, human, relevant, and aligned with their expectations in the spaces where they spend their time.

Brands that adapt their full funnel and not just their headlines will be better positioned to reach this demographic in 2026.

Advertisers should review their current Google Ads campaigns and assess whether Gen Z can see themselves in the messaging. If not, a strategic refresh is warranted.

Final Thoughts

Gen Z isn’t rejecting advertising outright. They’re rejecting anything that feels out of place in the spaces where they spend their time. When brands adjust their creative, targeting, and proof to match how this generation actually discovers and evaluates products, the results tend to follow.

The shift doesn’t require a full rebuild. It just requires intention, testing, and updating the parts of your Google Ads strategy that still assume a linear funnel or a polished, brand‑first message.

If your current campaigns don’t reflect how Gen Z searches, scrolls, and decides, this is the moment to rethink the approach. Small changes go a long way when they match the way people actually behave.


Featured Image: Stock-Asso/Shutterstock

International PPC: Why Consistency Is So Hard To Maintain via @sejournal, @brookeosmundson

With PPC becoming more automated every day, managing PPC accounts in one country is challenging enough.

Your campaign structure may stay the same, but once you add in different countries, languages, regulatory nuances, and different agency partners, PPC management gets messy quickly.

If you currently manage paid media for international brands, you’ve probably seen that scaling isn’t the issue. More often, it’s a coordination and consistency issue.

Not only are you launching campaigns in each region, but you’re also keeping up on different market expectations, aligning with separate teams per region, and possibly even different agency partners.

For example, you could launch the exact same campaign structure and bidding strategies in the United States and the United Kingdom and get completely different results.

Each of those markets and partner teams probably has its own style, processes, and priorities.

This article breaks down tips on how to keep your campaigns on track across regions without losing brand consistency.

The Realities Of International PPC Management

In a perfect management relationship, every agency partner would follow your brand guidelines to a T, campaign messaging would be accurately localized, and all markets being advertised would operate under the same strategy.

The reality of this scenario? That rarely happens.

Consistency, or the lack of it, is a real problem. Creative assets, bidding strategies, and keyword targeting often vary widely between markets, leading to a disjointed user experience and a potentially diluted brand impact.

Then, there’s the overlap problem. Without clear global oversight, multiple agencies may accidentally compete in the same auctions or target the same audience, driving up costs unnecessarily.

Reporting visibility becomes an issue, too. Reporting formats may differ from agency to agency, or depending on the region. Some agencies might use custom dashboards, while others may send static PDFs. This can make comparing performance across the board a nightmare.

Speaking of agencies, if you’re working with multiple agencies across regions, their level of expertise may vary. Some have deep experience in a particular market, while others simply learn as they go.

Lastly, there are likely regulatory hurdles you haven’t thought of if you’re used to marketing only in the United States. Different countries have different rules around data collection, targeting methods, and ad content. It’s easy to miss a compliance detail if you’re not on top of local policies.

Managing all of that on top of the actual PPC campaigns is a lot for one person to handle.

Aligning Global Strategy With Local Execution

It’s tempting to create a single PPC strategy and roll it out globally, but that rarely works.

For example, what resonates in the U.S. may fall flat in Germany or Australia. Your job as a marketing manager is to set the strategic foundation while giving local teams enough flexibility to adapt.

Here are a few tips on how to find that balance while managing multiple PPC regions:

  • Create a global brand playbook: Define your core objectives, brand voice, performance metrics, and non-negotiables. Make it clear which elements must be consistent across markets (e.g., logo usage, value propositions) and which can be localized (e.g., promotions, tone, CTAs).
  • Set up centralized tracking and reporting: Use tools like Looker Studio, Funnel, or Tableau to consolidate data from different platforms and agencies. A unified reporting view helps you spot inconsistencies and optimize faster.
  • Spell out roles and responsibilities: Who owns budget allocation? Who reviews creative? Who has the final say on copy? Define these explicitly. Confusion around ownership often slows campaigns down.
  • Use regular syncs to stay aligned: Host monthly or bi-weekly meetings with all agency partners. Even if the agendas are light, the face time builds accountability.

For example, say you’re a global hotel chain that operates on multiple continents. A great place to start is a shared creative playbook that allows each region to tailor its offers, like ski packages in Switzerland or beach getaways in Spain.

A shared creative playbook helps keep brand visuals consistent while making local campaigns relevant.

This reinforces that your global strategy is the blueprint, but you still need localization to tailor what actually works in each market.
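The “centralized tracking and reporting” tip above can be sketched in plain Python. This is a minimal illustration, not a real integration: the per-region exports, column names, and exchange rate below are hypothetical stand-ins for whatever your agencies actually deliver. The point is the pattern, which maps each region’s export onto shared column names and one currency so KPIs like cost per acquisition are comparable across markets.

```python
import csv
import io

# Hypothetical per-region exports: same underlying metrics, but different
# column names and currencies -- a common source of reporting drift.
US_EXPORT = """campaign,spend_usd,conversions
Brand-US,1200.50,48
Generic-US,800.00,20
"""

DE_EXPORT = """kampagne,kosten_eur,conversions
Brand-DE,950.00,30
"""

EUR_TO_USD = 1.08  # assumed fixed rate, for illustration only


def normalize(raw, mapping, spend_to_usd=1.0):
    """Map one region's export onto shared column names and one currency."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw)):
        spend = float(row[mapping["spend"]]) * spend_to_usd
        rows.append({
            "campaign": row[mapping["campaign"]],
            "spend_usd": round(spend, 2),
            "conversions": int(row[mapping["conversions"]]),
        })
    return rows


unified = (
    normalize(US_EXPORT,
              {"campaign": "campaign", "spend": "spend_usd",
               "conversions": "conversions"})
    + normalize(DE_EXPORT,
                {"campaign": "kampagne", "spend": "kosten_eur",
                 "conversions": "conversions"},
                spend_to_usd=EUR_TO_USD)
)

for r in unified:
    # CPA computed on one shared definition, comparable across regions
    r["cpa_usd"] = round(r["spend_usd"] / r["conversions"], 2)
```

In practice a BI tool like Looker Studio or Funnel does this mapping for you, but the design choice is the same: normalize at ingestion so every downstream comparison uses identical metric definitions.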

Choosing And Managing Agency Partners

If you’re working with multiple agencies across regions, things can quickly get siloed.

One agency might perform strongly in Canada while another underperforms in France. Your role is to manage these relationships without getting stuck in the weeds.

Below are some recommendations to keep things streamlined:

  • Standardize onboarding: No matter what type of agency or vendor you’re onboarding, start with a structured checklist. This can include items like tech stack access, brand guidelines, reporting templates, and key contacts.
  • Evaluate based on shared key performance indicators (KPIs): Hold every agency accountable to the same high-level metrics (e.g., return on ad spend, cost per acquisition, conversion volume), even if market-specific tactics differ. This makes it easier to identify outliers.
  • Encourage cross-agency collaboration: Set up a shared communication channel or quarterly town halls where agency teams can exchange learnings. One partner’s success story could inspire a breakthrough elsewhere.
  • Avoid micromanagement, but stay involved: Agencies need room to operate, but that doesn’t mean you go completely hands-off. Review ad copy regularly. Ask questions about performance drivers and what sort of experiments or tests they’re running.
  • Consider a lead regional agency model: Some brands appoint one agency as the lead for a particular continent or region. This partner acts as a point of coordination, helping to roll out strategies more efficiently.

Say you’re running a consumer electronics brand’s PPC efforts, and the company wants to expand into Europe, the Middle East, and Africa. It may be tempting to keep all of that work in-house, but doing so can essentially double your workload and drag down your existing campaigns’ performance as your focus shifts.

Instead, consider hiring an agency for the EMEA region, with your responsibility shifting to overseeing its operations across those markets.

This frees up your time to focus on core markets while maintaining enough visibility into the expansion region to understand what’s working and what isn’t.

The result can be less duplicated effort, standardized reporting, and improved speed-to-market.

Tailoring Localization Without Losing Brand Consistency

One of the biggest risks in international PPC is watering down your brand or letting it become inconsistent. When each market is allowed to fully customize messaging, consistency problems inevitably surface.

However, localization doesn’t mean reinventing your brand. It means adapting the core message to fit cultural norms, search behavior, and language nuances.

The first way to accomplish this is to provide flexible brand guidelines. Instead of a rigid, hard-to-follow rulebook, create a toolkit. Include items like brand values, tone-of-voice examples, and explicit dos and don’ts, and make clear that it leaves room for creativity.

When it comes to translation, translating ads word-for-word often leads to awkward or irrelevant messaging. Instead, invest in native-language copywriters who understand local search intent.

Be sure to test and/or vet creative with local experts. Even if your agencies are global, ensure that someone close to the market signs off on copy and visuals. One poorly placed phrase or image can derail an entire campaign or brand image.

Don’t be afraid to test and learn in each market. What works in France might not work in Spain. Build in budget and time to A/B test creative and offers in each country before scaling.

For example, say you’re running back-to-school ads for an apparel brand in the United States and Japan. Everyone has a back-to-school need, right?

You’d be correct, but running both campaigns at the same time would be a mistake: Japan’s school year starts in the spring, while the U.S. school year typically starts in the fall.

Adjusting campaign timing by region can lead to an uplift in engagement.

When it comes to localization, every ad should feel like your brand, even if it says something slightly different.

Managing Regulatory And Platform Differences

The compliance side of international PPC often gets overlooked until it becomes a problem.

Before you even begin expanding your PPC efforts in other regions, start with these guardrails in place:

  • Work with legal early: Involve your legal or compliance teams in the planning process. Get clarity on what’s allowed in each region before campaigns launch.
  • Stay up-to-date with platform policies: Google Ads, Meta, and Microsoft all have country-specific ad restrictions. Review them regularly. This goes beyond demographic targeting or ad copy: how you track users once they reach your landing page matters just as much, so understand what’s allowed and what isn’t.
  • Use regional ad accounts: If you’re running large-scale campaigns, separate ad accounts by region. This makes it easier to manage billing, user access, and compliance settings. Google now has an account setting where admins need to check a box if they are going to run ads in the EU. For this reason alone, it’s good to keep each region in its own separate account.
  • Document your approach: Create a shared doc outlining how your team handles regulatory compliance, consent tracking, and ad policy enforcement. It helps new team members and agencies get up to speed quickly.

When in doubt, err on the side of caution. It’s better to delay a campaign launch and get it right than clean up a PR or legal mess later.

When To Consolidate Vs. Decentralize

One of the biggest international strategic decisions you’ll face: Should you centralize all campaigns under one global agency, or let each region work with its own partner?

There’s no perfect answer, but here’s a framework to help you decide:

  • Consolidate if:
    • You need unified reporting and brand control.
    • You operate in fewer countries with similar languages or cultures.
    • Your internal team is small and needs a streamlined workflow.
  • Decentralize if:
    • You’re in highly diverse markets with unique buying behaviors.
    • Local teams have strong relationships with trusted regional agencies.
    • You want to test different approaches and compare outcomes.

Some brands use a hybrid approach: a central strategy with local execution. The key is to revisit your setup as you grow. What worked in five markets may not work in 15.

Managing International PPC Without Losing Control

The reality of managing international PPC campaigns is that it’s often messy and chaotic, especially if you don’t have the right foundations in place.

If you’re struggling to understand where to start, your first priority should be your brand and messaging framework. Make sure that’s solid before you try to scale, whether the work is done in-house or handed to an agency. Trust me, this step will make everything easier in the long run.

Your second priority should be defining clear ownership. If you’re working in a hybrid model with an agency and in-house teams, set clear expectations with everyone upfront. This reduces duplicate work and makes your teams more efficient.

Once those are in play, then you can tackle centralizing reporting and visibility.

Not everything can be optimized at once; if you try, you won’t know what’s working and what isn’t. Be patient as you scale into new regions, but don’t be afraid to test the waters and look for clear winners along the way.


Antitrust Filing Says Google Cannibalizes Publisher Traffic via @sejournal, @martinibuster

Penske Media Corporation (PMC) filed a federal court memorandum opposing Google’s motion to dismiss its antitrust lawsuit. The company argues that Google has broken the longstanding premise of a web ecosystem in which publishers allowed their content to be crawled in exchange for receiving search traffic in return.

PMC is the publisher of twenty brands, including Deadline, The Hollywood Reporter, and Rolling Stone.

Web Ecosystem

The PMC legal filing makes repeated references to the “fundamental fair exchange” in which Google sends traffic to publishers in return for permission to crawl and index their sites, explicitly quoting Google’s expressions of support for “the health of the web ecosystem.”

And yet some industry outsiders on social media deny that any such understanding between Google and web publishers exists, a concept that even Google itself doesn’t deny.

This concept dates back nearly to Google’s founding and is commonly understood across the industry. It’s embedded in Google’s Philosophy, expressed at least as far back as 2004:

“Google may be the only company in the world whose stated goal is to have users leave its website as quickly as possible.”

In May 2025 Google published a blog post where they affirmed that sending users to websites remained their core goal:

“…our core goal remains the same: to help people find outstanding, original content that adds unique value.”

What’s relevant about that passage is that it’s framed within the context of encouraging publishers to create high quality content and in exchange they will be considered for referral traffic.

The concept of a web ecosystem where both sides benefit was discussed by Google CEO Sundar Pichai in a June 2025 podcast interview with Lex Fridman, in which Pichai said that sending people to the human-created web in AI Mode was “going to be a core design principle for us.”

In response to a follow-up question referring to journalists who are nervous about web referrals, Sundar Pichai explicitly mentioned the ecosystem and Google’s commitment to it.

Pichai responded:

“I think news and journalism will play an important role, you know, in the future we’re pretty committed to it, right? And so I think making sure that ecosystem… In fact, I think we’ll be able to differentiate ourselves as a company over time because of our commitment there. So it’s something I think you know I definitely value a lot and as we are designing we’ll continue prioritizing approaches.”

This “fundamental fair exchange” serves as the baseline competitive condition for PMC’s claims of coercive reciprocal dealing and unlawful monopoly maintenance.

That baseline helps PMC argue:

  • That Google changed the understood terms of participation in search in a way publishers cannot refuse.
  • And that Google used its dominance in search to impose those new terms.

And despite the fact that Google’s own CEO said sending people to websites is a core design principle, and that Google’s own documentation, past and present, repeatedly refers to this reciprocity between publishers and Google, Google’s legal response expressly denies that it exists.

The PMC document states:

“Google …argues that no reciprocity agreement exists because it has not “promised to deliver” any search referral traffic.”

Profound Consequences Of Google AI Search

PMC filed a federal court memorandum in February 2026 opposing Google’s motion to dismiss its antitrust complaint. The complaint details Google’s use of its search monopoly to “coerce” publishers into providing content for AI training and AI Overviews without compensation.

The suit argues that Google has pivoted from being a search engine (that sends traffic to websites) to an answer engine that removes the incentive for users to click to visit a website. The lawsuit claims that this change harms the economic viability of digital publishers.

The filing explains the consequences of this change:

“Google has shattered the longstanding bargain that allows the open internet to exist. The consequences for online publishers—to say nothing of the public at large—are profound.”

Google Is Using Their Market Power

The filing claims that the collapse of the traditional search ecosystem positions Google’s AI search system as coercive rather than innovative, arguing that publishers must either allow AI to reuse their content or risk losing search visibility.

The legal filing alleges that Google’s generative AI competes directly with online publishers for users’ attention, describing Google as cannibalizing publishers’ traffic and specifically alleging that Google is using its “market power” to maintain a situation in which publishers can’t block the AI without also sacrificing what little search traffic is left.

The memorandum portrays a bleak choice offered by Google:

“Google’s search monopoly leaves publishers with no choice: acquiesce—even as Google cannibalizes the traffic publishers rely on—or perish.”

It also describes the role AI grounding plays in cannibalizing publisher traffic for Google’s sole benefit:

“Through RAG, or “grounding,” Google uses, repackages, and republishes publisher content for display on Google’s SERP, cannibalizing the traffic on which PMC depends.”

Expansion Of Zero-Click Search Results And Traffic Loss

The filing claims AI answers divert users away from publisher sites and diminish monetizable audience visits. Multiple parts of the filing directly confront Google with the reduction in search traffic caused by the cannibalization of publishers’ content.

The filing alleges:

“Google reduces click‑throughs to publisher sites, increases zero‑click behavior, and diverts traffic that publishers need to support their advertising, affiliate, and subscription revenue.

…Google’s insinuation . . . that AI Overview is not getting in the way of the ten blue links and the traffic going back to creators and publishers is just 100% false . . . . [Users] are reading the overview and stopping there . . . . We see it.”

…The purpose is not to facilitate click-throughs but to have users consume PMC’s content, repackaged by Google, directly on the SERP.”

Zero-click searches are described as one component of a multi-part process by which Google’s conduct injures publishers. The filing accuses Google of using publisher content for training, grounding its AI answers in that content, and then republishing it within a zero-click AI search environment that reduces or eliminates clicks back to PMC’s websites.

Should Google Send More Referral Traffic?

Everything described in the PMC filing mirrors what virtually all online businesses have been saying about traffic losses caused by Google’s AI search surfaces. It’s the reason Lex Fridman specifically challenged Google’s CEO on the amount of traffic Google sends to websites.