Google Text Ad Click Share Rises Sharply In Some Verticals

An analysis of 16,000 U.S. search queries found that text ads gained 7 to 13 percentage points of click share between January 2025 and January 2026.

SEO consultant Aleyda Solis used Similarweb clickstream estimates to measure click share across classic organic results, SERP features, text ads, PLAs (product listing ads), and zero-click behavior.

She also tracked how often AI Overviews appeared on the page, but the dataset doesn’t attribute clicks to AI Overviews directly.

What The Data Shows

Text ads gained between 7 and 13 percentage points of click share across every vertical Solis analyzed.

In the headphones vertical (top 5,000 U.S. queries), classic organic click share fell from 73% to 50%. Text ads grew from 3% to 16%, and PLAs grew from 13% to 20%. Combined paid results now capture 36% of clicks in that category, up from 16% a year earlier.

Jeans followed a similar pattern. Classic organic dropped from 73% to 56%, while combined paid results rose from 18% to 34%.

The online games vertical saw text ads quadruple, from 3% to 13%, in a category that historically had almost no ad presence.

In greeting cards, the only vertical where total clicks actually grew year over year, organic click share still fell from 88% to 75% as text ads nearly doubled.

The AI Overview presence on SERPs grew across all four verticals. Headphones saw AIO presence jump from 2.28% to 32.76%, and online games went from 0.38% to 29.80%. But the analysis measured how often AIOs appeared on the page, not how many clicks they captured or prevented.

Solis wrote:

“When I started this research, my hypothesis was that text ads and organic SERP features -not just AI Overviews- could be significant culprits behind declining organic clicks. The data confirmed this across all four verticals, and the scale of the text ad impact surprised me: they gained between +7 and +13 percentage points of click share in every vertical, making them the single biggest measurable driver of the organic decline.”

Independent Data Points To The Same Pattern

The SERP-level click data lines up with what advertisers are seeing from the other side.

Tinuiti’s Q4 2025 benchmark report found Google text ad clicks hit a 19-quarter high, growing 9% year over year. Overall Google search ad spend rose 13% in the quarter, up from 10% in Q3.

Google’s earnings tell a similar story. In its Q3 2025 report, Alphabet posted $102.3 billion in revenue, its first $100 billion quarter, with search ad revenue reaching $56.6 billion. CEO Sundar Pichai said AI features were expanding total query volume, including commercial queries.

More queries and more commercial intent create more ad inventory. The Similarweb data is consistent with more clicks shifting to paid placements in these verticals.

Why This Matters

The industry has spent much of the past year focused on AI Overviews as the explanation for declining organic clicks.

AIO presence is growing, and Google reported 1.5 billion monthly AIO users as of Q1 2025. But this data indicates that text ads are an increasingly important factor to consider as well.

When diagnosing drops in organic traffic, it’s helpful to look at the SERP composition for your industry rather than assuming AI Overviews are the sole reason.

Looking Ahead

Data from different sources indicate that text ads are gaining click share.

What the data can’t show is whether Google is actively expanding ad placements or advertisers are simply bidding more aggressively on existing inventory.

What you can do now is track SERP composition changes in your own vertical using tools that measure click distribution rather than rankings alone; a rough sketch of that kind of analysis follows.
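As an illustration only, here is a minimal Python sketch of that kind of tracking. It assumes a hypothetical CSV export ("clicks.csv") with one row per observed click and a column labeling which SERP element captured it; the file layout and category names are placeholders, not the schema of Similarweb or any other tool.

```python
# Hypothetical sketch: estimate click share by SERP element from a clickstream
# export. Assumes a CSV ("clicks.csv") with one row per tracked click:
#   month,query,element
# where element is one of: organic, serp_feature, text_ad, pla, zero_click.
# Column names and categories are illustrative, not any vendor's real schema.
import csv
from collections import Counter, defaultdict

def click_share(path: str) -> dict[str, dict[str, float]]:
    """Return {month: {element: share}}, with shares summing to 1.0 per month."""
    counts: dict[str, Counter] = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["month"]][row["element"]] += 1
    return {
        month: {el: n / sum(c.values()) for el, n in c.items()}
        for month, c in counts.items()
    }

shares = click_share("clicks.csv")
jan25 = shares.get("2025-01", {})
jan26 = shares.get("2026-01", {})
for element in sorted(set(jan25) | set(jan26)):
    delta = jan26.get(element, 0.0) - jan25.get(element, 0.0)
    print(f"{element:>12}: {jan25.get(element, 0):6.1%} -> "
          f"{jan26.get(element, 0):6.1%} ({delta:+.1%})")
```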

Google Lost Two Antitrust Cases, But Stock Rose 65% – Here’s Why

In January, Alphabet passed Apple in market capitalization to become the second most valuable company in the world. Alphabet was worth $3.885 trillion. Apple sat at $3.846 trillion. Only Nvidia, at $4.595 trillion, was ahead.

That alone would be news. But the context makes it something else entirely. Courts had found that Google violated antitrust law in both general search services and general search text advertising. The Department of Justice asked judges to break the company apart, sell off Chrome, divest the Android operating system, and force the sale of its ad exchange. In the search case, the court rejected those proposed divestitures. In the ad-tech case, the government is still asking the judge to order a sale of Google’s ad exchange, and remedies are pending.

In this article, I’ll walk through every active Google antitrust thread, what courts have ordered, what’s still pending, and what the timelines mean. The gap between Google’s legal exposure and its market performance tells a story that matters for everyone working in search.

How We Got Here

When the DOJ’s search monopoly trial opened in 2023, the government argued that Google spent billions on exclusive deals with Apple, Samsung, and browser makers to lock in its position as the default search engine. The case centered on whether those deals maintained a monopoly or reflected a better product.

In 2024, Judge Amit Mehta ruled that Google had maintained an illegal monopoly in general search services. It was the first time a federal court found a tech company had maintained an illegal monopoly since the Microsoft case in 2001.

Then came the remedies phase, where the real fight began. The DOJ wanted dramatic structural changes. Prosecutors laid out four options, including forcing Google to sell Chrome and potentially divesting Android. That was the peak fear moment for investors. It was also the point at which the case stopped being abstract legal theory and started having direct implications for how search distribution works.

What happened next surprised the industry.

The Search Case: Where It Stands

On Sept. 2, 2025, Judge Mehta issued his remedies opinion. He declined to order any divestitures. No Chrome sale. No Android breakup. No forced separation of search from the broader Alphabet structure.

His reasoning centered on AI. Mehta wrote that generative AI had changed the course of the case. He pointed to the competitive threat that AI chatbots posed to Google’s search business and concluded that the market was too dynamic for the kind of structural remedy the DOJ wanted.

Instead, Mehta ordered behavioral remedies. The final judgment, entered on Dec. 5, 2025, limits how Google can structure search distribution deals. Agreements are capped at one year and cannot be used to lock partners into defaults across multiple access points. The judgment also includes provisions giving partners more flexibility to surface rival search options and, in some cases, third-party generative AI products.

The order also sets out data-licensing obligations for qualified rivals, including access to a portion of Google’s web index and certain user-side data. A technical committee will oversee implementation and monitor compliance throughout the remedy period.

Google filed its Notice of Appeal on Jan. 16, 2026. The company is specifically challenging the data-sharing requirements and the technical committee oversight. The DOJ had until Feb. 3, 2026, to decide whether to file a cross-appeal seeking stronger remedies than what Mehta ordered.

The search case landed in a unique place. Google keeps Chrome and Android. The default search deals that delivered Google the majority of mobile search activity get restructured with shorter terms and fewer restrictions on partners.

Data-sharing could enable competitors to build better search products, but the timeline for that playing out is years, not months.

The Ad-Tech Case: What’s Coming

The second federal case against Google involves digital advertising technology. This one operates on a different track with a different judge and a different set of remedies at stake.

In April 2025, Judge Leonie Brinkema ruled that Google had willfully monopolized parts of the digital ad market. Where the search case focused on consumer-facing search defaults, this case targeted Google’s ad server, ad exchange (AdX), and the connections between them.

The DOJ’s post-trial brief requested the divestiture of Google’s Ad Manager suite, including the AdX exchange. That would mean separating the tool publishers use to sell ads from the marketplace where those ads get bought and sold.

During closing arguments in November, Brinkema expressed skepticism. She noted that a potential buyer for the ad exchange hadn’t been identified and called the divestiture proposal “fairly abstract.” The court, she said, needed to be “far more down to earth and concrete.”

Brinkema said she plans to issue a decision early in 2026. That ruling could arrive at any point in Q1.

The practical stakes here are different from the search case. The search remedies affect how people find Google. The ad-tech remedies affect how publishers make money through Google.

Any forced separation of AdX would directly change the monetization stack that millions of websites rely on. Even if Brinkema follows the same pattern as Mehta and declines structural remedies, the behavioral changes she orders could reshape how programmatic advertising flows through Google’s systems.

The Epic/Play Store Settlement Question

In late January 2026, Judge James Donato held a hearing in San Francisco on a proposed settlement between Google and Epic Games. The case, which centered on Google’s Play Store practices, appeared headed for resolution. But Donato threw the terms into question.

Donato described the settlement as overly favorable to the two companies and questioned whether it came at the expense of the broader class of developers affected by Google’s Play Store policies.

The settlement terms include Epic spending $800 million over six years on Google services, plus a marketing and exploratory partnership. Reports described the partnership as involving Epic’s technology, including Unreal Engine, alongside marketing and other commercial terms.

This case matters because it touches a different part of Google’s ecosystem. The search and ad-tech cases are about how Google dominates web search and digital advertising. The Play Store case is about how Google controls app distribution on Android. Together, these cases cover the three main ways Google generates revenue and the three main ways practitioners interact with Google’s platforms.

The EU Front

European regulators are pursuing their own path, and in some areas, they’re moving faster than U.S. courts.

In September 2025, the European Commission fined Google €2.95 billion for abusing its dominance in ad tech. Google said it would appeal the decision.

Reports from December indicate the EU is preparing a non-compliance fine against Google related to Play Store anti-steering rules. That fine is expected as early as Q1 2026, which would put it on roughly the same timeline as Brinkema’s ad-tech ruling in the U.S.

But the most consequential EU action may be the newest one. On Jan. 26, the Commission opened specification proceedings under the Digital Markets Act focused on online search data sharing and interoperability for Android AI features. The process is framed around access for rivals, including AI developers and search competitors, and is expected to conclude within six months.

That goes beyond what the U.S. search case requires. Mehta’s order mandates data-sharing with search competitors. The EU proceedings ask whether Google must open access to a broader set of rivals, including those building AI-powered products that don’t fit neatly into the traditional search category.

For those watching how AI search develops, this EU proceeding could have bigger long-term implications than anything in the U.S. cases. The question of whether Google’s search index data feeds into competing AI products affects the entire ecosystem of AI-generated answers, citations, and traffic referrals.

Why The Stock Rose Anyway

Google’s stock rose 65% in 2025, CNBC reported, which made it the best performer among the big tech stocks. Apple, by comparison, rose 8.6%. The gap between Google’s legal losses and its market gains points to a pattern that has repeated at every stage of these cases.

When we covered the original verdict in October 2024 and looked at what it could mean for SEO, the range of possible outcomes was wide. Chrome divestiture, Android breakup, elimination of default deals, forced data sharing, and structural separation of search from advertising all sat on the table.

What investors watched play out was a narrowing of that range at every step. Google offered to loosen its search engine deals in December 2024, signaling that behavioral concessions were coming. The DOJ pushed for breakups. The court landed closer to Google’s position than the government’s.

A Financial Times analysis from January 2026 placed Google’s outcome in a broader context. Across multiple Big Tech antitrust cases, judges have shown reluctance to order structural remedies. Meta won outright in November when Judge James Boasberg ruled the company doesn’t hold an illegal monopoly. In the Google ad-tech case, Brinkema expressed discomfort with divestiture. Former DOJ antitrust chief Jonathan Kanter, who helped bring these cases, acknowledged to the FT that the rulings showed the U.S. was too slow to act.

The pattern across cases is consistent. Courts are willing to find that tech companies violated antitrust law. They’re reluctant to order the kind of structural changes that would break the companies apart. And they’re citing AI competition as a central reason for that restraint.

For Google specifically, the combination of light remedies, a strong AI narrative (signs that Google had caught up to OpenAI reinforced investor confidence, according to a Fortune report), and continued dominance in search revenue removed the threat that investors feared most. The breakup scenario didn’t happen, and the stock reflected that.

What This Means For Search Professionals

The antitrust cases resolved in a way that preserves Google’s structure while introducing new requirements around data access and distribution agreements. The impact will unfold over years, not weeks. Here’s what to track.

Search distribution could diversify gradually. The one-year cap on distribution agreements and the restrictions on tying defaults across access points give Apple and Samsung more room to offer users alternatives or to negotiate different terms. Whether they will is a separate question.

Apple’s search-default deal with Google has been widely reported to be worth tens of billions annually. Without that kind of long-term lock-in, Apple has financial incentive to build or license an alternative.

Data-sharing mandates could create new competitors. The judgment requires Google to license a portion of its web index and certain user-side data to qualified rivals, with an oversight process governing the details. The scope matters enormously. Providing limited index access is different from sharing the ranking signals and full index depth that would let a competitor build a viable alternative. Google is appealing this requirement, which tells you where the company sees the real threat.

The ad-tech ruling will directly affect publisher revenue. Brinkema’s decision, expected in early 2026, determines whether Google must separate the tools publishers use to sell ads from the exchange where those ads trade. Even if she orders behavioral remedies instead of a full divestiture, changes to how Google’s ad stack operates will ripple through programmatic advertising. Publishers using Google Ad Manager should pay close attention to the timeline.

The EU’s DMA proceedings open a different front. The January proceedings cover online search data sharing and Android AI interoperability, framed around access for rivals, including AI developers. The outcome would affect how AI search products source their information and, by extension, how content gets cited in AI-generated answers.

Looking Ahead

The next 12 months will determine whether the antitrust cases produce real changes to search markets or settle into a compliance exercise that preserves the status quo.

Key dates and events to watch include Brinkema’s ad-tech remedies ruling, expected in Q1 2026. The DOJ’s decision on whether to cross-appeal Mehta’s rejection of stronger search remedies was due by early February.

Google’s search case appeal will move through the D.C. Circuit, likely taking a year or more. The EU’s DMA specification proceedings on search data sharing and Android AI interoperability are expected to conclude within six months. And the Epic/Play Store settlement faces scrutiny after Judge Donato’s criticism.

Meanwhile, the Amazon and Apple antitrust cases are pending, with trials expected in 2027. Those cases will test whether courts continue the pattern of finding violations but declining breakups, or whether the legal environment changes.

In Summary

Google was found to have maintained illegal monopolies in two separate markets. It’s appealing one case and awaiting remedies in another. Regulators on two continents are pressing forward, and yet the company just became the second most valuable in the world.

Whether the courts ultimately deliver continuity or disruption will play out over the years ahead. Either way, what gets decided in these cases shapes the infrastructure that every search professional works within.


WooCommerce May Gain Sidekick-Type AI Through Extensions

WooCommerce is approaching a turning point in 2026, thanks to the Model Context Protocol and a convergence of open source technologies that let it function as a layer any AI system can plug into, helping store owners and consumers accomplish more with less friction. James LePage, Automattic’s Director of Engineering, AI, discussed what’s possible right now, what’s coming in the near future, and why the current limitations are temporary.

WooCommerce

Because WooCommerce is built on WordPress and is highly extensible through plugins, APIs, and now MCP, it is rapidly evolving into a coordination layer that AI-based systems can plug into and work through. Automattic’s James LePage describes this as an approach in which WooCommerce fits perfectly in the center.

Model Context Protocol

Model Context Protocol is an open standard that lets platforms like WooCommerce expose their capabilities to AI systems, making AI-powered features possible.

MCP may sound like an API, which also lets software systems communicate. The key difference is that an API handles a fixed set of predefined requests, whereas MCP lets a platform like WooCommerce support a broad range of AI interactions without a custom integration for each one.
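To make the distinction concrete, here is a minimal, hypothetical sketch of an MCP server exposing one WooCommerce capability as a tool, using the FastMCP helper from the official Python MCP SDK. The store URL, API keys, and returned fields are placeholders; a production integration would need real credentials, error handling, and more tools.

```python
# Minimal sketch: exposing a WooCommerce capability to any MCP-aware AI client.
# Assumes the official Python MCP SDK (pip install mcp) plus the standard
# WooCommerce REST API; the store URL and keys below are placeholders.
import requests
from mcp.server.fastmcp import FastMCP

STORE = "https://example-store.com"  # placeholder store URL
AUTH = ("ck_xxx", "cs_xxx")          # placeholder WooCommerce REST API keys

mcp = FastMCP("woocommerce-demo")

@mcp.tool()
def search_products(keyword: str, limit: int = 5) -> list[dict]:
    """Search the store catalog and return name, price, and stock status."""
    resp = requests.get(
        f"{STORE}/wp-json/wc/v3/products",
        params={"search": keyword, "per_page": limit},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"name": p["name"], "price": p["price"], "stock": p["stock_status"]}
        for p in resp.json()
    ]

if __name__ == "__main__":
    # Any MCP client can now discover and call search_products
    # without a custom WooCommerce integration being built for it.
    mcp.run(transport="stdio")
```

Once a server like this is running, an MCP-aware assistant can discover the search_products tool on its own and call it conversationally, which is the broader range of AI interactions described above.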

WooCommerce Sits In The Middle

ACP (Agentic Commerce Protocol), developed by OpenAI and Stripe, enables an AI agent to handle product discovery, checkout, and payments from a chat interface like ChatGPT.

The UCP (Universal Commerce Protocol), an open source solution developed by Shopify and Google, provides a way for checkouts to happen through a buy button across Google’s AI and Search ecosystem as well as Anthropic’s Claude, regardless of whether the transaction happens on a WooCommerce store or any other shopping platform. A developer only has to implement a UCP-compliant MCP server for WooCommerce.

WooCommerce sits in the middle of those protocols, where their integrations come together.

Enablement Strategy For WooCommerce

LePage described a practical perspective for how AI fits into the WooCommerce platform through MCP. He calls this approach enablement.

He explains this approach:

“What’s interesting about that is it follows a strategy that we’re taking at WooCommerce, which is what I refer to as enablement, where WooCommerce is this core software, this core way that you run a digital business online.

And we want to make sure that core software is available and always in the middle of whatever’s happening in AI.

So we want to build AI features for it. We want to make it really easy for others to build AI features for it. But we absolutely want to make sure it will meet you wherever your AI tools are, wherever the best financial analysis AI tool exists, wherever the best general chatbot exists.

So to us, MCP represents a really strong opportunity there.”

Because MCP adapts to whatever AI platform a user is on, WooCommerce can remain in the middle regardless of which AI system a user subscribes to.

Practical Use Of AI In WooCommerce

LePage pointed to practical uses of AI available right now: users can connect WooCommerce to ChatGPT via Connectors or to Claude Code via its MCP support, letting multiple apps and AI systems communicate with each other to accomplish various tasks.

He explains:

“What’s also cool is if you use ChatGPT with connectors, if you use Claude Code with their MCP support, there’s a lot of opportunity that you get when you add multiple pieces of software to one session.

So if I take my WooCommerce stuff and I take QuickBooks and I take X, Y, and Z, I can interact with all of them in a conversational manner.

And that’s got me very excited, but it’s also got all the merchants really excited.”

AI Is Developer-Facing Infrastructure

While deeper AI implementations are quickly coming together for WooCommerce, LePage indicated that the current work is foundational: it provides the building blocks that developers and agencies use to make everything work, rather than delivering out-of-the-box merchant features today.

The question asked in the podcast was:

“…is that where we are with WooCommerce and AI at the moment is that you do need really a developer to hook it all up and make it work?”

LePage answered:

“So I’d say yes, if you want a really robust AI implementation that’s built and fits like a glove on your store and does everything that you ever want, the pieces are there.”

He later said that there are plugins that can implement some of those functionalities.

Sidekick-Type Functionality

LePage offered an exciting preview of what’s in store in the near future for WooCommerce when asked if WooCommerce will ship with deep native integration of AI similar to Shopify’s Sidekick AI assistant.

Shopify Sidekick is an AI assistant that can be invoked at various points in the store management workflow, enabling store owners to do everything from creative tasks, like transforming product images or creating email marketing campaigns, to common store management chores.

The question asked was:

“One thing I’d love to know is what is planned for Core, possibly WordPress as a whole, certainly WooCommerce, in terms of like an interface built into Core, like how Shopify has Sidekick where wherever you are, you can just type what you want and it will do it for you.”

LePage answered that this kind of AI integration will likely arrive as an extension, explaining that integrating the functionality within core would be good, but doing it with a plugin would be great. He said all the pieces for doing this will be in place within core in version 7, scheduled for release on April 9, 2026.

He shared that WooCommerce will be an orchestration layer, where WooCommerce sits in the middle, directing and coordinating multiple services, tools, and data sources.

He explained:

“…it will work if we made it a very basic implementation in core, or as even like a very basic plugin, but it will be great when we can plug it into things like WooCommerce Analytics, when we can plug it into much more complex orchestration workflows under the hood to go and do things like really bulk product optimization and catalog stuff and analytics and deep number crunching, all of the fun stuff that we’re actually working on as we speak.

So you will see AI support in terms of this Sidekick-type implementation coming out from Automattic in this extension territory. And that extension also housing additional AI features to make it a much more approachable AI experience to merchants.”

Consumer-Facing AI In WooCommerce Stores

Another area discussed in the podcast was consumer-facing AI implementations that introduce more personalization and chat interfaces for retrieving order information or product selection.

At this point, the podcast turns to agentic AI shopping, which is projected to become mainstream sometime between now and 2030.

But at the end, LePage circles back to affirming WordPress’s role as the orchestration layer intended to support whatever functionality and vision emerge.

LePage shared:

“These building blocks are intended to make WordPress into a platform where a developer can build any AI solution.”

WordPress and WooCommerce are very much in transition toward offering the option of becoming an orchestration layer. While other content management systems are a little further down the road with these kinds of features, WordPress and WooCommerce have a huge developer ecosystem that is already innovating features that will become more powerful and useful in the very near future.

Watch the Do the Woo podcast with hosts Katie Keith and James Kemp:

AI Meets Woo: the Future of Ecommerce is Already Here

The scientist using AI to hunt for antibiotics just about everywhere

When he was just a teenager trying to decide what to do with his life, César de la Fuente compiled a list of the world’s biggest problems. He ranked them inversely by how much money governments were spending to solve them. Antimicrobial resistance topped the list. 

Twenty years on, the problem has not gone away. If anything, it’s gotten worse. Infections caused by bacteria, fungi, and viruses that have evolved ways to evade treatments are now associated with more than 4 million deaths per year, and a recent analysis, published in the Lancet, predicts that number could surge past 8 million by 2050. In a July 2025 essay in Physical Review Letters, de la Fuente, now a bioengineer and computational biologist, and synthetic biologist James Collins warned of a looming “post-antibiotic” era in which infections from drug-resistant strains of common bacteria like Escherichia coli or Staphylococcus aureus, which can often still be treated by our current arsenal of medications, become fatal. “The antibiotic discovery pipeline remains perilously thin,” they wrote, “impeded by high development costs, lengthy timelines, and low returns on investment.”

But de la Fuente is using artificial intelligence to bring about a different future. His team at the University of Pennsylvania is training AI tools to search genomes far and deep for peptides with antibiotic properties. His vision is to assemble those peptides—molecules made of up to 50 amino acids linked together—into various configurations, including some never seen in nature. The results, he hopes, could defend the body against microbes that withstand traditional treatments. 

His quest has unearthed promising candidates in unexpected places. In August 2025 his team, which includes 16 scientists in Penn’s Machine Biology Group, described peptides hiding in the genetic code of ancient single-celled organisms called archaea. Before that, they’d excavated a list of candidates from the venom of snakes, wasps, and spiders. And in an ongoing project de la Fuente calls “molecular de-extinction,” he and his collaborators have been scanning published genetic sequences of extinct species for potentially functional molecules. Those species include hominids like Neanderthals and Denisovans and charismatic megafauna like woolly mammoths, as well as ancient zebras and penguins. In the history of life on Earth, de la Fuente reasons, maybe some organism evolved an antimicrobial defense that could be helpful today. Those long-gone codes have given rise to resurrected compounds with names like mammuthusin-2 (from woolly mammoth DNA), mylodonin-2 (from the giant sloth), and hydrodamin-1 (from the ancient sea cow). Over the last few years, this molecular binge has enabled de la Fuente to amass a library of more than a million genetic recipes.

At 40 years old, de la Fuente has also collected a trophy case of awards from the American Society for Microbiology, the American Chemical Society, and other organizations. (In 2019, this magazine named him one of “35 Innovators Under 35” for bringing computational approaches to antibiotic discovery.) He’s widely recognized as a leader in the effort to harness AI for real-world problems. “He’s really helped pioneer that space,” says Collins, who is at MIT. (The two have not collaborated in the laboratory, but Collins has long been at the forefront of using AI for drug discovery, including the search for antibiotics. In 2020, Collins’s team used an AI model to predict a broad-spectrum antibiotic, halicin, that is now in preclinical development.)

The world of antibiotic development needs as much creativity and innovation as researchers can muster, says Collins. And de la Fuente’s work on peptides has pushed the field forward: “César is marvelously talented, very innovative.” 

A messy, noisy endeavor

De la Fuente describes antimicrobial resistance as an “almost impossible” problem, but he sees plenty of room for exploration in the word almost. “I like challenges,” he says, “and I think this is the ultimate challenge.” 

The use, overuse, and misuse of antibiotics, he says, drives antimicrobial resistance. And the problem is growing unchecked because conventional ways to find, make, and test the drugs are prohibitively expensive and often lead to dead ends. “A lot of the companies that have attempted to do antibiotic development in the past have ended up folding because there’s no good return on investment at the end of the day,” he says.

Antibiotic discovery has always been a messy, noisy endeavor, driven by serendipity and fraught with uncertainty and misdirection. For decades, researchers have largely relied on brute-force mechanical methods. “Scientists dig into soil, they dig into water,” says de la Fuente. “And then from that complex organic matter they try to extract antimicrobial molecules.” 

But molecules can be extraordinarily complex. Researchers have estimated the number of possible organic combinations that could be synthesized at somewhere around 10⁶⁰. For reference, Earth contains an estimated 10¹⁸ grains of sand. “Drug discovery in any domain is a statistics game,” says Jonathan Stokes, a chemical biologist at McMaster University in Canada, who has been using generative AI to design potential new antibiotics that can be synthesized in a lab, and who worked with Collins on halicin. “You need enough shots on goal to happen to get one.”

Those have to be good shots, though. And AI seems well suited to improving researchers’ aim. Biology is an information source, de la Fuente explains: “It’s like a bunch of code.” The code of DNA has four letters; proteins and peptides have 20, where each “letter” represents an amino acid. De la Fuente says his work amounts to training AI models to recognize sequences of letters that encode antimicrobial peptides, or AMPs. “If you think about it that way,” he says, “you can devise algorithms to mine the code and identify functional molecules, which can be antimicrobials. Or antimalarials. Or anticancer agents.” 
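As a toy illustration of that “mine the code” framing, and not the actual models de la Fuente’s group uses, the sketch below featurizes each peptide’s 20-letter sequence by its amino-acid composition and fits an off-the-shelf classifier to separate antimicrobial from non-antimicrobial examples. All sequences and labels are fabricated for demonstration.

```python
# Toy illustration of "sequence in, antimicrobial-or-not out" -- NOT the
# actual models described in the article. Featurizes each peptide by its
# amino-acid composition and fits a logistic-regression classifier.
# The training sequences and labels below are fabricated for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20-letter peptide alphabet

def composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 amino acids in the sequence."""
    counts = np.array([seq.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

# Fabricated training data: (sequence, is_antimicrobial)
train = [
    ("KKLLKKLLKK", 1), ("GIGKFLKKAKKFGKAFVKILKK", 1), ("KWKLFKKIGAVLKVL", 1),
    ("AAAAGGGGSS", 0), ("DEDEDEDEDE", 0), ("GGSGGSGGSG", 0),
]
X = np.array([composition(seq) for seq, _ in train])
y = np.array([label for _, label in train])

clf = LogisticRegression().fit(X, y)
candidate = "FLPKKLLKKL"  # a made-up candidate peptide
prob = clf.predict_proba([composition(candidate)])[0, 1]
print(f"P(antimicrobial) for {candidate}: {prob:.2f}")
```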

Practically speaking, we’re still not there: These peptides haven’t yet been transformed into usable drugs that help people, and there are plenty of details—dosage, delivery, specific targets—that need to be sorted out, says de la Fuente. But AMPs are appealing because the body already uses them. They’re a critical part of the immune system and often the first line of defense against pathogenic infections. Unlike conventional antibiotics, which typically have one trick for killing bacteria, AMPs often exhibit a multimodal approach. They may disrupt the cell wall and the genetic material inside as well as a variety of cellular processes. A bacterial pathogen may evolve resistance to a conventional drug’s single mode of action, but maybe not to a multipronged AMP attack.

From discovery to delivery

De la Fuente’s group is one of many pushing the boundaries of using AI for antibiotics. Where he focuses primarily on peptides, Collins works on small-molecule discovery. So does Stokes, at McMaster, whose models identify promising new molecules and predict whether they can be synthesized. “It’s only been a few years since folks have been using AI meaningfully in drug discovery,” says Collins. 

Even in that short time the tools have changed, says James Zou, a computer scientist at Stanford University, who has worked with Stokes and Collins. Researchers have moved from using predictive models to developing generative approaches. With a predictive approach, Zou says, researchers screen large libraries of candidates that are known to be promising. Generative approaches offer something else: the appeal of designing a new molecule from scratch. Last year, for example, de la Fuente’s team used one generative AI model to design a suite of synthetic peptides and another to assess them. The group tested two of the resulting compounds on mice infected with a drug-resistant strain of Acinetobacter baumannii, a germ that the World Health Organization has identified as a “critical priority” in research on antimicrobial resistance. Both successfully and safely treated the infection. 

But the field is still in the discovery phase. In his current work, de la Fuente is trying to get candidates closer to clinical testing. To that end, his team is developing an ambitious multimodal model called ApexOracle that’s designed to analyze a new pathogen, pinpoint its genetic weaknesses, match it to antimicrobial peptides that might work against it, and then predict how an antibiotic, built from those peptides, would fare in lab tests. It “converges understanding in chemistry, genomics, and language,” he says. It’s preliminary, he adds, but even if it doesn’t work perfectly, it will help steer the next generation of AI models toward the ultimate goal of resisting resistance. 

Using AI, he believes, human researchers now have a fighting chance at catching up to the giant threat before them. The technology has already saved decades of human research time. Now he wants it to save lives, too: “This is the world that we live in today, and it’s incredible.” 

Stephen Ornes is a science writer in Nashville, Tennessee.

Hackers made death threats against this security researcher. Big mistake.

The threats started in spring. 

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon. 

“Alison [sic] Nixon is gonna get necklaced with a tire filled with gasoline soon,” wrote Waifu/Judische (both handles are words with offensive connotations). “Decerebration is my fav type of brain death, thats whats gonna happen to alison Nixon.”

It wasn’t long before others piled on. Someone shared AI-generated nudes of Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested. For years she had lurked quietly in online chat channels or used pseudonyms to engage with perpetrators directly while piecing together clues they’d carelessly drop about themselves and their crimes. This had helped her bring to justice a number of cybercriminals—especially members of a loosely affiliated subculture of anarchic hackers who call themselves the Com.

But members of the Com aren’t just involved in hacking; some of them also engage in offline violence against researchers who track them. This includes bricking (throwing a brick through a victim’s window) and swatting (a dangerous type of hoax that involves reporting a false murder or hostage situation at someone’s home so SWAT teams will swarm it with guns drawn). Members of a Com offshoot known as 764 have been accused of even more violent acts—including animal torture, stabbings, and school shootings—or of inciting others in and outside the Com to commit these crimes.

Nixon started tracking members of the community more than a decade ago, when other researchers and people in law enforcement were largely ignoring them because they were young—many in their teens. Her early attention allowed her to develop strategies for unmasking them.

Ryan Brogan, a special agent with the FBI, says Nixon has helped him and colleagues identify and arrest more than two dozen members of the community since 2011, when he first began working with her, and that her skills in exposing them are unparalleled. “If you get on Allison’s and my radar, you’re going [down]. It’s just a matter of time,” he says. “No matter how much digital anonymity and tradecraft you try to apply, you’re done.”

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the Waifu/Judische accounts was suddenly threatening her. She had given media interviews about the Com—most recently on 60 Minutes—but not about her work unmasking members to get them arrested, so the hostility seemed to come out of the blue. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets. 

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. “Prior to them death-threatening me, I had no reason to pay attention to them,” she says. 

Com beginnings

Most people have never heard of the Com, but its influence and threat are growing.

It’s an online community comprising loosely affiliated groups of, primarily, teens and twentysomethings in North America and English-speaking parts of Europe who have become part of what some call a cybercrime youth movement. 

Over the last decade, its criminal activities have escalated from simple distributed denial-of-service (DDoS) attacks that disrupt websites to SIM-swapping hacks that hijack a victim’s phone service, as well as crypto theft, ransomware attacks, and corporate data theft. These crimes have affected AT&T, Microsoft, Uber, and others. Com members have also been involved in various forms of sextortion aimed at forcing victims to physically harm themselves or record themselves doing sexually explicit activities. The Com’s impact has also spread beyond the digital realm to kidnapping, beatings, and other violence. 

One longtime cybercrime researcher, who asked to remain anonymous because of his work, says the Com is as big a threat in the cyber realm as Russia and China—for one unusual reason.

“There’s only so far that China is willing to go; there’s only so far that Russia or North Korea is willing to go,” he says, referring to international laws and norms, and fears of retaliation, that prevent states from going all out in cyber operations. That doesn’t stop the anarchic Com, he says.

“It is a pretty significant threat, and people tend to … push it under the rug [because] it’s just a bunch of kids,” he says. “But look at the impact [they have].”

Brogan says the amount of damage they do in terms of monetary losses “can become staggering very quickly.”

There is no single site where Com members congregate; they spread across a number of web forums and Telegram and Discord channels. The group follows a long line of hacking and subculture communities that emerged online over the last two decades, gained notoriety, and then faded or vanished after prominent members were arrested or other factors caused their decline. They differed in motivation and activity, but all emerged from “the same primordial soup,” says Nixon. The Com’s roots can be traced to the Scene, which began as a community of various “warez” groups engaged in pirating computer games, music, and movies.

When Nixon began looking at the Scene, in 2011, its members were hijacking gaming accounts, launching DDoS attacks, and running booter services. (DDoS attacks overwhelm a server or computer with traffic from bot-controlled machines, preventing legitimate traffic from getting through; booters are tools that anyone can rent to launch a DDoS attack against a target of choice.) While they made some money, their primary goal was notoriety.

This changed around 2018. Cryptocurrency values were rising, and the Com—or the Community, as it sometimes called itself—emerged as a subgroup that ultimately took over the Scene. Members began to focus on financial gain—cryptocurrency theft, data theft, and extortion.

The pandemic two years later saw a surge in Com membership that Nixon attributes to social isolation and the forced movement of kids online for schooling. But she believes economic conditions and socialization problems have also driven its growth. Many Com members can’t get jobs because they lack skills or have behavioral issues, she says. A number who have been arrested have had troubled home lives and difficulty adapting to school, and some have shown signs of mental illness. The Com provides camaraderie, support, and an outlet for personal frustrations. Since 2018, it has also offered some a solution to their money problems.

Loose-knit cells have sprouted from the community—Star Fraud, ShinyHunters, Scattered Spider, Lapsus$—to collaborate on clusters of crime. They usually target high-profile crypto bros and tech giants and have made millions of dollars from theft and extortion, according to court records. 

But dominance, power, and bragging rights are still motivators, even in profit operations, says the cybercrime researcher, which is partly why members target “big whales.”

“There is financial gain,” he says, “but it’s also [sending a message that] I can reach out and touch the people that think they’re untouchable.” In fact, Nixon says, some members of the Com have overwhelming ego-driven motivations that end up conflicting with their financial motives.

“Often their financial schemes fall apart because of their ego, and that phenomenon is also what I’ve made my career on,” she says.

The hacker hunter emerges

Nixon has straight dark hair, wears wire-rimmed glasses, and has a slight build and bookish demeanor that, on first impression, could allow her to pass for a teen herself. She talks about her work in rapid cadences, like someone whose brain is filled with facts that are under pressure to get out, and she exudes a sense of urgency as she tries to make people understand the threat the Com poses. She doesn’t suppress her happiness when someone she’s been tracking gets arrested.

In 2011, when she first began investigating the communities from which the Com emerged, she was working the night shift in the security operations center of the security firm SecureWorks. The center responded to tickets and security alerts emanating from customer networks, but Nixon coveted a position on the company’s counter-threats team, which investigated and published threat-intelligence reports on mostly state-sponsored hacking groups from China and Russia. Without connections or experience, she had no path to investigative work. But Nixon is an intensely curious person, and this created its own path.

Allison Nixon, chief research officer at the cybersecurity investigations firm Unit 221B, tracks cybercriminals and helps bring them to justice. (Photo: Ylva Erevall)

Where the threat team focused on the impact hackers had on customer networks—how they broke in, what they stole—Nixon was more interested in their motivations and the personality traits that drove their actions. She assumed there must be online forums where criminal hackers congregated, so she googled “hacking forums” and landed on a site called Hack Forums.

“It was really stupid simple,” she says.

She was surprised to see members openly discussing their crimes there. She reached out to someone on the SecureWorks threat team to see if he was aware of the site, and he dismissed it as a place for “script kiddies”—a pejorative term for unskilled hackers.

This was a time when many cybersecurity pros were shifting their focus away from cybercrime to state-sponsored hacking operations, which were more sophisticated and getting a lot of attention. But Nixon likes to zig where others zag, and her colleague’s dismissiveness fueled her interest in the forums. Two other SecureWorks colleagues shared that interest, and the three studied the forums during downtime on their shifts. They focused on trying to identify the people running DDoS booters. 

What Nixon loved about the forums was how accessible they were to a beginner like herself. Threat-intelligence teams require privileged access to a victim’s network to investigate breaches. But Nixon could access everything she needed in the public forums, where the hackers seemed to think no one was watching. Because of this, they often made mistakes in operational security, or OPSEC—letting slip little biographical facts such as the city where they lived, a school they attended, or a place they used to work. These details revealed in their chats, combined with other information, could help expose the real identities behind their anonymous masks. 

“It was a shock to me that it was relatively easy to figure out who [they were],” she says. 

She wasn’t bothered by the immature boasting and petty fights that dominated the forums. “A lot of people don’t like to do this work of reading chat logs. I realize that this is a very uncommon thing. And maybe my brain is built a little weird that I’m willing to do this,” she says. “I have a special talent that I can wade through garbage and it doesn’t bother me.” 

Nixon soon realized that not all the members were script kiddies. Some exhibited real ingenuity and “powerful” skills, she says, but because they were applying these to frivolous purposes—hijacking gamer accounts instead of draining bank accounts—researchers and law enforcement were ignoring them. Nixon began tracking them, suspecting that they would eventually direct their skills at more significant targets—an intuition that proved to be correct. And when they did, she had already amassed a wealth of information about them. 

She continued her DDoS research for two years until a turning point in 2013, when the cybersecurity journalist Brian Krebs, who made a career tracking cybercriminals, got swatted. 

About a dozen people from the security community worked with Krebs to expose the perpetrator, and Nixon was invited to help. Krebs sent her pieces of the puzzle to investigate, and eventually the group identified the culprit (though it would take two years for him to be arrested). When she was invited to dinner with Krebs and the other investigators, she realized she’d found her people.

“It was an amazing moment for me,” she says. “I was like, wow, there’s all these like-minded people that just want to help and are doing it just for the love of the game, basically.”

Staying one step ahead

It was porn stars who provided Nixon with her next big research focus—one that underscored her skill at spotting Com actors and criminal trends in their nascent stages, before they emerged as major threats.

In 2018, someone was hijacking the social media accounts of certain adult-film stars and using those accounts to blast out crypto scams to their large follower bases. Nixon couldn’t figure out how the hackers had hijacked the social media profiles, but she promised to help the actors regain access to their accounts if they agreed to show her the private messages the hackers had sent or received during the time they controlled them. These messages led her to a forum where members were talking about how they stole the accounts. The hackers had tricked some of these actors into disclosing the mobile phone numbers of others. Then they used a technique called SIM swapping to reset passwords for social media accounts belonging to those other stars, locking them out. 

In SIM swapping, fraudsters get a victim’s phone number assigned to a SIM card and phone they control, so that calls and messages intended for the victim go to them instead. This includes one-time security codes that sites text to account holders to verify themselves when accessing their account or changing its password. In some of the cases involving the porn stars, the hackers had manipulated telecom workers into making the SIM swaps for what they thought were legitimate reasons, and in other cases they bribed the workers to make the change. The hackers were then able to alter the password on the actors’ social media accounts, lock out the owners, and use the accounts to advertise their crypto scams. 

SIM swapping is a powerful technique that can be used to hijack and drain entire cryptocurrency and bank accounts, so Nixon was surprised to see the fraudsters using it for relatively unprofitable schemes. But SIM swapping had rarely been used for financial fraud at that point, and like the earlier hackers Nixon had seen on Hack Forums, the ones hijacking porn star accounts didn’t seem to grasp the power of the technique they were using. Nixon suspected that this would change and SIM swapping would soon become a major problem, so she shifted her research focus accordingly. It didn’t take long for the fraudsters to pivot as well.

Nixon’s skill at looking ahead in this way has served her throughout her career. On multiple occasions a hacker or hacking group would catch her attention—for using a novel hacking approach in some minor operation, for example—and she’d begin tracking their online posts and chats in the belief that they’d eventually do something significant with that skill. 

They usually did. When they later grabbed headlines with a showy or impactful operation, these hackers would seem to others to have emerged from nowhere, sending researchers and law enforcement scrambling to understand who they were. But Nixon would already have a dossier compiled on them and, in some cases, had unmasked their real identity as well. Lizard Squad was an example of this. The group burst into the headlines in 2014 and 2015 with a series of high-profile DDoS campaigns, but Nixon and colleagues at the job where she worked at the time had already been watching its members as individuals for a while. So the FBI sought their assistance in identifying them.

“The thing about these young hackers is that they … keep going until they get arrested, but it takes years for them to get arrested,” she says. “So a huge aspect of my career is just sitting on this information that has not been actioned [yet].”

It was during the Lizard Squad years that Nixon began developing tools to scrape and record hacker communications online, though it would be years before she began using these concepts to scrape the Com chatrooms and forums. These channels held a wealth of data that might not seem useful during the nascent stage of a hacker’s career but could prove critical later, when law enforcement got around to investigating them; yet the contents were always at risk of being deleted by Com members or getting taken down by law enforcement when it seized websites and chat channels.

Over several years, she scraped and preserved whatever chatrooms she was investigating. But it wasn’t until early 2020, when she joined Unit 221B, that she got the chance to scrape the Telegram and Discord channels of the Com. She pulled all of this data together into a searchable platform that other researchers and law enforcement could use. The company hired two former hackers to help build scraping tools and infrastructure for this work; the result is eWitness, a community-driven, invitation-­only platform. It was initially seeded only with data Nixon had collected after she arrived at Unit 221B, but has since been augmented with data that other users of the platform have scraped from Com social spaces as well, some of which doesn’t exist in public forums anymore.

Brogan, of the FBI, says it’s an incredibly valuable tool, made more so by Nixon’s own contributions. Other security firms scrape online criminal spaces as well, but they seldom share the content with outsiders, and Brogan says Nixon’s work is unique because she engages with the actors in chat spaces to draw out information from them that “would not be otherwise normally available.” 

The preservation project she started when she got to Unit 221B could not have been better timed, because it coincided with the pandemic, the surge in new Com membership, and the emergence of two disturbing Com offshoots, CVLT and 764. She was able to capture their chats as these groups first emerged; after law enforcement arrested leaders of the groups and took control of the servers where their chats were posted, this material went offline.

CVLT—pronounced “cult”—was reportedly founded around 2019 with a focus on sextortion and child sexual abuse material. 764 emerged from CVLT and was spearheaded by a 15-year-old in Texas named Bradley Cadenhead, who named it after the first digits of his zip code. Its focus was extremism and violence. 

In 2021, because of what she observed in these groups, Nixon turned her attention to sextortion among Com members.

The type of sextortion they engaged in has its roots in activity that began a decade ago as “fan signing.” Hackers would use the threat of doxxing to coerce someone, usually a young female, into writing the hacker’s handle on a piece of paper. The hacker would use a photo of it as an avatar on his online accounts—a kind of trophy. Eventually some began blackmailing victims into writing the hacker’s handle on their face, breasts, or genitals. With CVLT, this escalated even further; targets were blackmailed into carving a Com member’s name into their skin or engaging in sexually explicit acts while recording or livestreaming themselves.

During the pandemic a surprising number of SIM swappers crossed into child sexual abuse material and sadistic sextortion, according to Nixon. She hates tracking this gruesome activity, but she saw an opportunity to exploit it for good. She had long been frustrated at how leniently judges treated financial fraudsters because of their crimes’ seemingly nonviolent nature. But she saw a chance to get harsher sentences for them if she could tie them to their sextortion and began to focus on these crimes. 

At this point, Waifu still wasn’t on her radar. But that was about to change.

Endgame

Nixon landed in Waifu’s crosshairs after he and fellow members of the Com were involved in a large hack involving AT&T customer call records in April 2024.

Waifu’s group gained access to dozens of cloud accounts with Snowflake, a company that provides online data storage for customers. One of those customers had more than 50 billion call logs of AT&T wireless subscribers stored in its Snowflake account. 

Among the subscriber records were call logs for FBI agents who were AT&T customers. Nixon and other researchers believe the hackers may have been able to identify the phone numbers of agents through other means. Then they may have used a reverse-lookup program to identify the owners of phone numbers that the agents called or that called them and found Nixon’s number among them. This is when they began harassing her.

But then they got reckless. They allegedly extorted nearly $400,000 from AT&T in exchange for promising to delete the call records they’d stolen. Then they tried to re-extort the telecom, threatening on social media to leak the records they claimed to have deleted if it didn’t pay more. They tagged the FBI in the post.

“It’s like they were begging to be investigated,” says Nixon.

The Snowflake breaches and AT&T records theft were grabbing headlines at the time, but Nixon had no idea her number was in the stolen logs or that Waifu/Judische was a prime suspect in the breaches. So she was perplexed when he started taunting and threatening her online.

Over several weeks in May and June, a pattern developed. Waifu or one of his associates would post a threat against her and then post a message online inviting her to talk. She assumes now that they believed she was helping law enforcement investigate the Snowflake breaches and hoped to draw her into a dialogue to extract information from her about what authorities knew. But Nixon wasn’t helping the FBI investigate them yet. It was only after she began looking at Waifu for the threats that she became aware of his suspected role in the Snowflake hack.

It wasn’t the first time she had studied him, though. Waifu had come to her attention in 2019 when he bragged about framing another Com member for a hoax bomb threat and later talked about his involvement in SIM-swapping operations. He made an impression on her. He clearly had technical skills, but Nixon says he also often appeared immature, impulsive, and emotionally unstable, and he was desperate for attention in his interactions with other members. He bragged about not needing sleep and using Adderall to hack through the night. He was also a bit reckless about protecting personal details. He wrote in private chats to another researcher that he would never get caught because he was good at OPSEC, but he also told the researcher that he lived in Canada—which turned out to be true.

Nixon’s process for unmasking Waifu followed a general recipe she used to unmask Com members: She’d draw a large investigative circle around a target and all the personas that communicated with that person online, and then study their interactions to narrow the circle to the people with the most significant connections to the target. Some of the best leads came from a target’s enemies; she could glean a lot of information about their identity, personality, and activities from what the people they fought with online said about them.

“The enemies and the ex-girlfriends, generally speaking, are the best [for gathering intelligence on a suspect],” she says. “I love them.”

While she was doing this, Waifu and his group were reaching out to other security researchers, trying to glean information about Nixon and what she might be investigating. They also attempted to plant false clues with the researchers by dropping the names of other cybercriminals in Canada who could plausibly be Waifu. Nixon had never seen cybercriminals engage in counterintelligence tactics like this.

Amid this subterfuge and confusion, Nixon and another researcher working with her did a lot of consulting and cross-checking with other researchers about the clues they were gathering to ensure they had the right name before they gave it to the FBI.

By July she and the researcher were convinced they had their guy: Connor Riley Moucka, a 25-year-old high school dropout living with his grandfather in Ontario. On October 30, Royal Canadian Mounted Police converged on Moucka’s home and arrested him.

According to an affidavit filed in Canadian court, a plainclothes Canadian police officer visited Moucka’s house under some pretense on the afternoon of October 21, nine days before the arrest, to secretly capture a photo of him and compare it with an image US authorities had provided. The officer knocked and rang the bell; Moucka opened the door looking disheveled and told the visitor: “You woke me up, sir.” He told the officer his name was Alex; Moucka sometimes used the alias Alexander Antonin Moucka. Satisfied that the person who answered the door was the person the US was seeking, the officer left. Waifu’s online rants against Nixon escalated at this point, as did his attempts at misdirection. She believes the visit to his door spooked him.

Nixon won’t say exactly how they unmasked Moucka—only that he made a mistake.

“I don’t want to train these people in how to not get caught [by revealing his error],” she says.

The Canadian affidavit against Moucka reveals a number of other violent posts he’s alleged to have made online beyond the threats he made against her. Some involve musings about becoming a serial killer or mass-mailing sodium nitrate pills to Black people in Michigan and Ohio; in another, his online persona talks about obtaining firearms to “kill Canadians” and commit “suicide by cop.” 

Prosecutors, who list Moucka’s online aliases as including Waifu, Judische, and two more in the indictment, say he and others extorted at least $2.5 million from at least three victims whose data they stole from Snowflake accounts. Moucka has been charged with nearly two dozen counts, including conspiracy, unauthorized access to computers, extortion, and wire fraud. He has pleaded not guilty and was extradited to the US last July. His trial is scheduled for October this year, though hacking cases usually end in plea agreements rather than going to trial. 

It took months for authorities to arrest Moucka after Nixon and her colleague shared their findings, but an alleged associate of his in the Snowflake conspiracy, a US Army soldier named Cameron John Wagenius (Kiberphant0m online), was arrested more quickly. 

On November 10, 2024, Nixon and her team found a mistake Wagenius made that helped identify him, and on December 20 he was arrested. Wagenius has already pleaded guilty to two charges around the sale or attempted sale of confidential phone records and will be sentenced this March.

These days Nixon continues to investigate sextortion among Com members. But she says that remaining members of Waifu’s group still taunt and threaten her.

“They are continuing to persist in their nonsense, and they are getting taken out one by one,” she says. “And I’m just going to keep doing that until there’s no one left on that side.” 

Kim Zetter is a journalist who covers cybersecurity and national security. She is the author of Countdown to Zero Day.

The Download: unraveling a death threat mystery, and AI voice recreation for musicians

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Hackers made death threats against this security researcher. Big mistake.

In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon.

These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes’s apartment, she had built a career tracking cybercriminals and helping get them arrested.

Though she’d done this work for more than a decade, Nixon couldn’t understand why the person behind the accounts was suddenly threatening her. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets.

Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing. Read the full story.

—Kim Zetter

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land. 

ALS stole this musician’s voice. AI let him sing again.

There are tears in the audience as Patrick Darling’s song begins to play. It’s a heartfelt song written for his great-grandfather, whom he never got the chance to meet. But this performance is emotional for another reason: It’s Darling’s first time on stage with his bandmates since he lost the ability to sing two years ago.

The 32-year-old musician was diagnosed with amyotrophic lateral sclerosis (ALS) when he was 29 years old. Like other types of motor neuron disease, it affects nerves that supply the body’s muscles. People with ALS eventually lose the ability to control their muscles, including those that allow them to move, speak, and breathe.

Darling’s last stage performance was over two years ago. By that point, he had already lost the ability to stand and play his instruments and was struggling to sing or speak. But recently, he was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this “voice clone” to compose new songs. Darling is able to make music again. Read the full story.

—Jessica Hamzelou

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The creator of OpenClaw is joining OpenAI
Sam Altman was sufficiently impressed by Peter Steinberger’s ideas for getting agents to interact with each other. (The Verge)
+ The move demonstrates how seriously OpenAI is taking agents. (FT $)
+ Moltbook was peak AI theater. (MIT Technology Review)

2 How North Korea is illegally funding its nuclear program
A defector explains precisely how he duped remote IT workers into funneling money into its missiles. (WSJ $)
+ Nukes are a hot topic across Europe right now. (The Atlantic $)

3 Radio host David Greene is convinced Google stole his voice
He’s suing the company over similarities between his own distinctive vocalizations and the AI voice used in its NotebookLM app. (WP $)
+ People are using Google study software to make AI podcasts. (MIT Technology Review)

4 US automakers are worried by the prospect of a Chinese invasion
They fear Trump may greenlight Chinese carmakers to build plants in the US. (FT $)
+ China figured out how to sell EVs. Now it has to deal with their aging batteries. (MIT Technology Review)

5 Google downplays safety warnings on its AI-generated medical advice
It only displays extended warnings when a user clicks ‘Show more.’ (The Guardian)
+ Here’s another reason why you should keep a close eye on AI Overviews. (Wired $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

6 How to make Lidar affordable for all cars
A compact device could prove the key. (IEEE Spectrum)

7 Robot fight nights are all the rage in San Francisco
Step aside, Super Bowl! (Rest of World)
+ Humanoid robots will take to the stage for Chinese New Year celebrations. (Reuters)

8 Influencers and TikTokers are feeding their babies butter
But there’s no scientific evidence to back up some of their claims. (NY Mag $)

9 This couple can’t speak the same language
Microsoft Translator has helped them to sustain a marriage. (NYT $)
+ AI romance scams are on the rise. (Vox)

10 AI promises to make better, more immersive video games
But those are lofty goals that may never be achieved. (The Verge)
+ Google DeepMind is using Gemini to train agents inside Goat Simulator 3. (MIT Technology Review)

Quote of the day

“Right now this is a baby version. But I think it’s incredibly concerning for the future.”

—Scott Shambaugh, a software engineer who recently became the subject of a scathing blog post written by an AI bot accusing him of hypocrisy and prejudice, tells the Wall Street Journal why this could be the tip of the iceberg.

One more thing

Why do so many people think the Fruit of the Loom logo had a cornucopia?

Quick question: Does the Fruit of the Loom logo feature a cornucopia?

Many of us have been wearing the company’s T-shirts for decades, and yet the question of whether there is a woven brown horn of plenty on the logo is surprisingly contentious.

According to a 2022 poll, 55% of Americans believe the logo does include a cornucopia, 25% are unsure, and only 21% are confident that it doesn’t, even though this last group is correct.

There’s a name for what’s happening here: the “Mandela effect,” or collective false memory, so called because a number of people misremember that Nelson Mandela died in prison. Yet while many find it easy to let their unconfirmable beliefs go, some spend years seeking answers—and vindication. Read the full story.

—Amelia Tait

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ When dating apps and book lovers collide, who knows what could happen.
+ It turns out humans have a secret third set of teeth, which is completely wild.
+ We may never know the exact shape of the universe. But why is that?
+ If your salad is missing a certain something, some crispy lentils may be just the ticket.

Tuning into the future of collaboration 

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprises, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications enterprises like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. 

Necessity during the pandemic accelerated years of innovation in months.  

“Audio and video just working is a baseline for collaboration,” says Brendan Ittelson, chief ecosystem officer at Zoom. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.”  

Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges. 

“If you think about the fundamental need here,” adds Sam Sabet, chief technology officer at Shure, “it’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.”  

For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion. 

“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.” 

The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come. 

“It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson. 

This episode of Business Lab is produced in partnership with Shure. 

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.  

This episode is produced in partnership with Shure.  

Now as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, the impact on audio innovation, and the continuing emerging opportunities for further advances in audio capabilities.  

Two words for you: elevated audio.  

My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom.  

Welcome Sam, welcome Brendan. 

Sam Sabet: Thank you, Megan. It’s a pleasure to be here and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation. 

Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into on this area. 

Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, I wonder what was the first technological mission for Shure? 

Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological missions when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally what we were aiming to do, or we focused on, was to basically improve the ease of use and configuration of audio tool sets. 

Because unlike the office environment where it might be a lot more controlled, people are working from non-traditional areas like home offices or other makeshift solutions, we needed to make sure that people could still get pristine, studio-level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions. We created tool sets that allowed for ease of deployment and remote configuration and management so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology. 

Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. I mean, what was the first mission at Zoom when virtual connection became this necessity for everyone? 

Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted to how we do this reliably, securely, and at scale to ensure these millions of new users could connect instantly without friction. We really shifted our thinking from being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate just became so important. We really focused on how can we be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on the simplicity, accessibility, and trust in communication technology so that people could focus on that connection and not the technology that makes it possible. 

Megan: That’s so true. It did really just become an absolute lifeline for people, didn’t it? And before we dive into the technologies beyond these emerging capabilities, I wonder if we could first talk about just the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as or even more impactful? 

Sam: Yeah, you’re absolutely correct. I mean, clear audio is absolutely critical for effective communications. Video quality is very important absolutely, but poor audio can really hinder understanding and engagement. As a matter of fact, there’s studies and research from areas such as Yale University that say that poor audio can make understanding somewhat more challenged and even affect retention of information. Especially in an educational type environment where there’s a lot of background noise and very differing types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker’s voice and minimize that unwanted background noise so that we could ensure that the communication was efficient, was well understood, and that it removed the distraction so people could be able to actually communicate and retain the information that was being shared. 

Megan: It is incredible just how impactful audio can be, can’t it? Brendan, I mean as you said, remote and hybrid collaboration is part of Zoom’s DNA. What observations can you share about how users have grown along with the technological advancements and maybe how their expectations have grown as well? 

Brendan: Definitely. I mean, users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we’re really looking at these trends in terms of how people want to be better when they’re at home. For example, AI-powered tools like Smart Summaries, translation and noise suppression to help people stay productive and connected no matter where they’re working. But then this also comes into play at the office. We’re starting to see folks that dive into our technology like Intelligent Director and Smart Name Tags that create that meeting equity even when they’re in a conference room. 

So, the remote experience and the room experience all are similar and create that same ability to be seen, heard, and contribute. And we’re now diving further into this that it’s beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond the meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center and more. And all of this being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals. 

Megan: I mean, so Brendan, we know it wasn’t only workplaces that were affected by the pandemic, it was also the education sector that had to undergo a huge change. I wondered if you could talk a little bit about how Zoom has operated in that higher education sphere as well. 

Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in. Because education and learning is something as a company we value and so we have invested in that sector. And personally being the son of academics, it is always an area that I find fascinating. We continue to invest in terms of how do we make the classroom a stronger space? And especially now that the classroom has changed, where it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means. 

That’s what truly excites us, is being able to have that technology that allows people to pursue their desires, their interests, and really up-level their pursuits and inspire more. We’re constantly investing in how to allow those messages to get out and to integrate in the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom, into learning management systems, to make that a seamless flow so that students and their educators can just collaborate seamlessly. And also that we can support all the infrastructure and administration that helps make that possible. 

Megan: Absolutely. Such an important thing. And Sam, Shure as well, could you talk to us a bit about how you worked in that kind of education space as well from an audio point of view? 

Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart because I’m actually an adjunct professor in my free time. 

Megan: Oh, wow. Very impressive. 

Sam: And the challenges of trying to do this sort of a hybrid lecture, if you will. And Shure has been particularly well suited for this environment and we’ve been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it’s a little different than just having a meeting around the conference table. And Shure has focused on creating products that allow this combination of a presenter scenario along with a meeting space plus the far end where users or students are remote, they can hear intelligibly what’s happening in the lecture hall, but they can also participate. 

Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose built for presenters and educators like our MXW neXt product line, we’ve created technologies that allow those two previously separate worlds to integrate together. And then add that onto integrating with Zoom and other products that allow for that collaboration has been very instrumental. And again, being a user and providing those lectures, I can see a night and day difference and just how much more effective my lectures are today from where they were five to six years ago. And that’s all just made possible by all the technologies that are purpose built for these scenarios and integrating more with these powerful tools that just make the job so much more seamless. 

Megan: Absolutely fascinating that you got to put the technology to use yourself as well to check that it was all working well. And you mentioned AI there, of course. I mean, Sam, what AI technologies have had the most significant impact on recent audio advancements too? 

Sam: Yeah. Absolutely. If you think about the fundamental need here, it’s the ability to amplify the audio and the information that’s really needed and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate. With our innovations at Shure, we’ve leveraged cutting-edge technologies to both enhance communication effectiveness and to align seamlessly with evolving features in unified communications like the ones that Brendan just mentioned in the Zoom platforms.  

We partner with industry leaders like Zoom to ensure that we’re providing the ability to be able to focus on that needed audio and eliminate all the background distractions. AI has transformed that audio technology with things like machine learning algorithms that enable us to do more real-time audio processing and significantly enhancing things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software that we’ve released as well as part of a complete room solution uses AI to optimize sound in different environments. 

And really that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic, is that the key is really flexibility and being able to adapt to changing work environments. Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.  

And then last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras so that we enable voice tracking technology so that cameras can not only identify who’s speaking, but you have the ability to hear and see people clearly. And that in general just enhances the overall communication experience. 

Megan: Wow. It’s just so much innovation in quite a short space of time really. I mean, Brendan, you mentioned AI a little bit there beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities? 

Brendan: Definitely. And I couldn’t agree more with Sam that, I mean, AI has made such a big shift and it’s really across the spectrum. And when I think about it, there’s almost three tiers when you look at the stack. You start off at the raw audio where AI is doing those things like noise suppression, echo cancellation, voice enhancements. All of that just makes this amazing audio signal that can then go into the next layer, which is the speech AI and natural language processing. Which starts to open up those items such as the real-time transcription, translation, searchable content to make the communication not just what’s heard, but making it more accessible to more individuals and inclusive by providing that content in a format that is best for them. 

And then you take those two layers and put the generative and agentic AI on top of that, that can start surfacing insights, summarize the conversation, and even take actions on someone’s behalf. It really starts to change the way that people work and how they have access and allows them to connect. I think it is a huge shift and I’m very excited by how those three levels start to interact to really enable people to do more and to connect thanks to AI. 

Megan: Yeah. Absolutely. So much rich information that can come out from a single call now because of those sorts of tools. And following on from that, Brendan, I mean, you mentioned before the Zoom AI Companion. I wondered if you could talk a bit about what were your top priorities when building that product to ensure it was truly useful for your customers? 

Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one, trust and security, and then accuracy and relevance. On the trust side, it was a non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure. 

Megan: Of course. 

Brendan: And then with accuracy, we needed to ensure AI outputs weren’t generic but grounded in the actual context of a meeting, a chat or a product. But the real story here when I think about AI Companion is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces that friction in hybrid work, whether you’re in a meeting room, a Zoom room, or collaborating across different collaboration tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone no matter where and how they’re working.  

All this leads to a result where it’s practical, trustworthy, and embedded where work happens. And it’s just not another tool to manage, it’s there in someone’s flow of work to help them along the way. 

Megan: Yeah. That trust piece is just so important, isn’t it, today? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements technologies like large language models, LLMs, are enabling. 

Sam: Absolutely. Audio is really a rich data source that’s added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they’ve had significant advances due to audio data that’s provided for them. And to Brendan’s point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. These AI Companions have been very instrumental in the way we do business every day. I mean, between transcription, speaker attributions, the ability to add action items within a meeting and be able to track what’s happening in our interactions, all of that really has to rely on that accurate and pristine input from audio into the AI. I feel that further improves the trust that our end users have to the results of AI and be able to leverage it more.  

If you think about it, if you look at how AI audio inputs enhance that interactive AI system, it enables more natural and intuitive interactions with AI. And it really allows for that seamless integration and the ability for users to use it without having to worry about, is the room set up correctly? Is the audio level proper? And when we talk even about agentic AI, we’re working on future developments where systems can self-heal or detect that there are issues in the environment so that they can autocorrect and adapt in all these different environments and further enable the AI to be able to do a much more effective job, if you will. 

Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a future forward look, if we could. Brendan, can you share innovations that Zoom is working on now and what are you most excited to see come to fruition? 

Brendan: Well, your timing for this question is absolutely perfect because we’ve just wrapped up Zoomtopia 2025. 

Megan: Oh, wow. 

Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there’s AI Companion 3.0. And we’ve launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0 when it releases, it isn’t just about transcribing, it’s turned into really a platform that helps you with follow-up tasks, preps you for your next conversation, and even proactively suggests how to free up your time. For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip and still stay informed, and even prepare you with context and insights before you walk into the conversation. It’s about helping people focus on strategy and creativity instead of administrative busy work. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration. 

It acts as an assistant for group chats and meetings: you can simply ask, “@Zoomie, what’s the latest update on the project?” Or “@Zoomie, what are the team’s action items?” And then get instant answers. Or because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights, temperature, or even sharing your screen. And while all these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.  

Zoom has always believed in an open platform and philosophy and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms to work with the workflows that they have across all the different SaaS vendors that they might have in their environment, whether that’s Google, Microsoft, ServiceNow, Cisco, and so many other tools. 

Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on there are so many exciting things happening in audio too. What are you working on at Shure? And what are you most excited to see come to fruition? 

Sam: At Shure, our engineering teams are really working on a range of exciting projects, but particularly we’re working on developing new collaboration solutions that are integral for IT end users. And these integrate obviously with the leading UC platforms.  

We’re integrating audio and video technologies that are scalable, reliable solutions. And we want to be able to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize every type of workspace essentially. Not just meeting rooms, but lecture halls, work from home scenarios, et cetera.  

The other area that we really focus on in terms of our reliability and quality really comes from our DNA in the pro audio world. And that’s really all-around wireless audio technologies. We’re developing our next-generation wireless systems and these are going to offer even greater reliability and range. And they really become ideal for everything from a large-scale event to personal home use and the gamut across that whole spectrum. And I think all of that in partnership with our partners like Zoom will help just facilitate the modern workspace. 

Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much.  

That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England.  

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.  

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

Bing Adds AI Visibility Reporting

Unlike traditional organic search, AI search lacks native performance reporting to help businesses develop visibility strategies.

Google’s Search Console combines AI Overviews and organic listings in its “Performance” section, leaving optimizers to guess which channel drove visibility and traffic. ChatGPT shares metrics only with publishers that have licensed their content to OpenAI.

Bing is the first platform to offer some transparency. A few weeks after publishing its “guide to AEO and GEO,” Bing launched an “AI Performance Report” in Webmaster Tools.

AI Performance

The new report tracks citations in Microsoft Copilot, AI-generated summaries in Bing, and select AI partner integrations. But there’s no option to filter by a single surface, and no way to identify the integration partners or their purpose.

The report shows users’ “Total Citations” for the chosen period and “Avg. Cited Pages.” It then lists:

  • “Grounding Queries,” which are “the key phrases the AI used when retrieving content that was cited in its answer.” In other words, the queries are the “fan-out” terms that Bing’s AI agents use to search for and find answers, though we don’t know which search engines or platforms they access.
  • “Pages,” the URLs mentioned in AI answers.
Screenshot of the new AI Performance section

The new Webmaster Tools section lists citations by “Grounding Queries” and “Pages.”

Each tab includes additional visibility data:

  • For every grounding query, Webmaster Tools reports on the average number of unique pages cited per day in AI answers.
  • For each cited URL, the report includes its frequency — how often it appears in an answer — not its importance, ranking, or role within a response.

The report provides no traffic or click-through data and no clarity into which Grounding Queries triggered which citations.

Using the Data

The report is a good first step, but it offers little actionable data. Perhaps it will force other players to do more.

According to Bing, the new report:

… shows how your site’s content is used in AI‑generated answers across Microsoft Copilot and partner experiences by highlighting which pages are cited, how visibility trends change over time, and the grounding queries associated with your content.

I’m making the report more useful by:

  • Researching organic keywords on Bing and Google that drive traffic to the cited URLs,
  • Prompting ChatGPT or Gemini to turn the keywords into prompts,
  • Evaluating whether the cited pages address those prompts or need better structure or clarity.

Also, I identify common modifiers in the grounding queries to understand how AI agents find the pages.

Identify common modifiers, such as “virus” in this example, to understand how AI agents find your pages.
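
To speed up that modifier analysis, a short script can tally the most frequent terms across an export of the report. The sketch below is a rough illustration; the file name and the “Grounding Queries” column header are assumptions, so adjust both to match whatever your Webmaster Tools download actually contains.

```python
import csv
from collections import Counter

# Common words to ignore when tallying query terms.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "for", "to", "in", "on",
             "is", "are", "how", "what"}

def top_modifiers(path: str, n: int = 20) -> list[tuple[str, int]]:
    """Count the most frequent terms across all grounding queries."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed column header; check the first row of your export.
            for word in row["Grounding Queries"].lower().split():
                if word not in STOPWORDS:
                    counts[word] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for term, count in top_modifiers("ai_performance_export.csv"):
        print(f"{term}: {count}")
```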

Webmaster Tools

Setting up Bing Webmaster Tools takes only a couple of minutes if your site is already verified in Google Search Console.

Log in to Webmaster Tools with your Microsoft account, click “Add site,” and choose the “Import your sites from GSC” option. Allow roughly 24 hours for Bing to collect and report the data.

CleanTalk WordPress Plugin Vulnerability Threatens Up To 200K Sites via @sejournal, @martinibuster

An advisory was issued for a critical vulnerability, rated 9.8/10, in the CleanTalk Antispam WordPress plugin, which is installed on more than 200,000 websites. The vulnerability enables unauthenticated attackers to install vulnerable plugins that can then be used to launch remote code execution attacks.

CleanTalk Antispam Plugin

The CleanTalk Antispam plugin is subscription-based software as a service that protects websites from inauthentic user actions like spam subscriptions, registrations, and form emails, and includes a firewall for blocking bad bots.

Because it’s a subscription-based plugin, it relies on a valid API key to reach out to the CleanTalk servers, and this is the part of the plugin where the flaw that enabled the vulnerability was discovered.

CleanTalk Plugin Vulnerability CVE-2026-1490

The plugin contains a WordPress function that checks if a valid API key is being used to contact the CleanTalk servers. A WordPress function is PHP code that performs a specific task.

In this specific case, if the plugin cannot validate a connection to CleanTalk’s servers because of an invalid API key, it relies on the checkWithoutToken function to verify “trusted” requests.

The problem is that the checkWithoutToken function doesn’t properly verify the identity of the requester. By spoofing the reverse DNS (PTR) record for their own IP address, an attacker can make requests appear to come from the cleantalk.org domain and then launch their attacks. Thus, the vulnerability only affects installations that do not have a valid API key.

The Wordfence advisory describes the vulnerability:

“The Spam protection, Anti-Spam, FireWall by CleanTalk plugin for WordPress is vulnerable to unauthorized Arbitrary Plugin Installation due to an authorization bypass via reverse DNS (PTR record) spoofing on the ‘checkWithoutToken’ function…”
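
To see why that check fails, it helps to look at the general pattern rather than the plugin itself. The Python sketch below is an illustration only; the CleanTalk plugin is written in PHP, and its actual logic is not reproduced here. Whoever controls an IP address also controls its PTR record, so a check that trusts the hostname returned by a reverse lookup can be spoofed. Forward-confirmed reverse DNS, which resolves the claimed hostname forward again and compares it with the connecting IP, defeats the spoof because the attacker doesn’t control cleantalk.org’s forward DNS.

```python
import socket

def naive_check(remote_ip: str) -> bool:
    # Flawed pattern: trust whatever hostname the PTR record reports.
    # An attacker can point reverse DNS for their own server at a
    # hostname ending in .cleantalk.org and pass this check.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
    except socket.herror:
        return False
    return hostname == "cleantalk.org" or hostname.endswith(".cleantalk.org")

def forward_confirmed_check(remote_ip: str) -> bool:
    # Safer pattern (forward-confirmed reverse DNS): resolve the claimed
    # hostname forward again and require that it maps back to the
    # connecting IP before trusting the request.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
    except socket.herror:
        return False
    if not (hostname == "cleantalk.org" or hostname.endswith(".cleantalk.org")):
        return False
    try:
        forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return False
    return remote_ip in forward_ips
```

An attacker who points the PTR record for their server at a hypothetical name like fake.cleantalk.org passes the naive check but fails the forward-confirmed one, since fake.cleantalk.org would have to resolve back to the attacker’s IP, which only CleanTalk’s own DNS could make happen.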

Recommended Action

The vulnerability affects CleanTalk plugin versions up to and including 6.71. Wordfence recommends users update to version 6.72, the latest release at the time of writing.

Are Your Google Ads Gen Z Proof? Strategies To Win The 18-24 Segment

When the average customer age increases for a brand, it’s rarely a platform failure. It’s usually a signal that younger audiences are discovering, evaluating, and buying in different places, and older established brands haven’t kept pace.

As of 2026, Gen Z spans ages 14 to 29. They’re the first generation raised in a fully digital world, moving from smartphones to social video to AI without ever experiencing life without them. Their expectations for advertising reflect that upbringing. Traditional creative formats, linear funnels, and keyword‑centric strategies simply don’t match how they navigate the internet.

Many PPC practitioners built their instincts during the 2010-2016 era, when search behavior was more predictable and creative requirements were narrower. Those instincts don’t translate cleanly to a generation that jumps between platforms, verifies claims through peers, and expects ads to feel like the content they already consume.

This article looks at why standard Google Ads approaches fall short with the 18-24 segment, how Gen Z actually discovers products, and what advertisers can adjust to stay relevant.

The “Skip Ad” Generation

Gen Z grew up with pre‑roll ads, sponsored content, and ad blockers. They learned early how to ignore anything that feels like an interruption. Studies show their active attention for digital ads drops off after about 1.3 seconds, a number that explains a lot about how they respond to ads.

Authenticity As A Baseline Expectation

For Gen Z, authenticity isn’t a marketing trend; it’s the baseline expectation. They gravitate toward brands that feature real people instead of polished models, communicate in plain, natural language rather than corporate phrasing, and embrace imperfect, lo-fi visuals over highly produced studio creative.

84% of Gen Z say they trust brands more when they see real customers in the ads.

Girlfriend Collective is a good example. Its product imagery features real people, not traditional models, and the approach mirrors what Gen Z expects to see in their feeds.

Authenticity isn’t a differentiator anymore. It’s table stakes.

Real people featured in Girlfriend Collective advertising campaign.
Girlfriend Collective uses real people in its advertising, aligning with Gen Z’s preference for authentic, human‑centered creative. (Screenshot from girlfriend.com, February 2026)

Discovery Habits: Beyond Google Search

Google Search still matters, but it’s no longer the first stop for many younger users.

Recent data shows:

  • 64% of Gen Z use TikTok as a primary search engine.
  • 77% identify TikTok as the top platform for product discovery.

Their discovery path often starts with a short‑form video, not a search bar. They move through:

  • TikTok.
  • YouTube Shorts.
  • Instagram Reels.
  • Reddit.
  • Creator content.

Only after that do they turn to Google to verify what they’ve seen. Queries like [best running shoes 2026] often begin on TikTok and end on Google, not the other way around.

The Role Of Performance Max And Demand Gen

Google’s push toward Performance Max and Demand Gen reflects this shift. These formats reach users across YouTube, Discover, Gmail, Display, and Search, which are the same surfaces Gen Z moves through naturally.

But PMax can only perform as well as the creative inside it. Legacy assets built for static search campaigns rarely translate well to visual placements. Gen Z scrolls past anything that looks like an ad, especially if it’s overly polished or logo‑heavy.

The Shift Toward Intent‑Based Matching

Keyword matching is evolving. During a January 2026 PPC Chat session, Google Ads Liaison Ginny Marvin noted that appearing in AI Overviews and “AI Mode” inventory requires broad match or keywordless targeting.

This aligns with how Gen Z searches. Their queries are conversational, fragmented, and context-driven, which mirrors Google’s increasing emphasis on intent, context, and meaning rather than strict keyword matching.

Advertisers who avoid broad match risk losing visibility in the surfaces where younger users spend their time.

The Nonlinear Buyer Journey

Gen Z doesn’t move through a funnel. Their path looks more like a loop:

  1. Short‑form video discovery.
  2. Google Search verification.
  3. Social proof on Reddit or Instagram.
  4. Long‑form YouTube reviews.
  5. More short‑form content.
  6. Conversion.

Social proof carries significant weight. 77% say UGC helps them make decisions, and unboxing‑style clips can lift conversion rates by up to 161%.

The offer doesn’t change, but the format of the proof does.

Privacy And The Value Exchange

Gen Z is cautious about privacy but not unwilling to share data. They simply expect a clear value exchange. When that exchange is obvious and transparent, they are more open to participating. Incentives that work include early access, exclusive drops, loyalty rewards, and insider content.

Transparency matters. They want to know what they’re giving and what they’re getting.

Tactical Adjustments To Future‑Proof Your Google Ads Account

The following adjustments can help advertisers align with Gen Z behavior.

1. Rewrite RSAs for Tone and Context

Many RSAs still rely on keyword‑stuffed templates:

  • “Blue running shoes”
  • “Best blue running shoes”

RSAs can generate over 43,680 combinations. Use that flexibility to test tone, not just keywords: experiment with conversational phrasing, modern language, benefit-driven messaging, social-proof elements, and UGC-inspired copy that better reflects how audiences actually search and engage.

This approach allows Google to assemble combinations that better match user intent.
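
For context on where a figure like 43,680 comes from: a served RSA shows up to three of 15 headlines and up to two of 4 descriptions, and order matters. Counting three ordered headlines paired with either one or two ordered descriptions gives 15 × 14 × 13 × (4 + 4 × 3) = 43,680, as the quick check below confirms. Published sources count RSA permutations in slightly different ways, so treat the figure as an order-of-magnitude point rather than a spec.

```python
from math import perm

# Up to 15 headlines (3 shown, order matters) and 4 descriptions
# (1 or 2 shown, order matters).
combinations = perm(15, 3) * (perm(4, 1) + perm(4, 2))
print(combinations)  # 43680
```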

How RSAs Handle Text Variation

RSAs assemble headlines and descriptions dynamically. The inputs determine the tone Google can test.

The following two examples illustrate how different brands approach RSA‑style messaging and how those choices affect relevance and emotional resonance.

Example 1: Glossier

Headline: Glow With Glossier® Today – Feel Your Glowy, Dewy Best

Description: Shop Accessible Luxury Products Inspired By Our Community To Make You Look And Feel Good. Shop Glossier Skincare Essentials For Glowy, Dewy Skin + Makeup You’ll Actually Use.

Analysis:

  • Conversational, emotional, community‑driven.
  • This style aligns with Gen Z’s expectations.
Sponsored Glossier skincare ad featuring a headline about glowing skin and promotional text highlighting community‑inspired products.
Glossier’s ad uses emotionally driven language and community framing, aligning with Gen Z’s preference for authentic, benefit-led messaging. (Screenshot by author, February 2026)

Example 2: COVERGIRL

Headline: COVERGIRL® Official Site – Available Online & In‑Store

Description: Explore Our New Makeup Products, Best Sellers, & Trending Tutorials to Enhance Your Look.

Analysis:

  • Structured, brand‑led, availability‑focused.
  • Clear and informative, but less emotionally resonant.
Sponsored COVERGIRL makeup ad with a headline promoting online and in‑store availability and text highlighting new products and tutorials.
COVERGIRL’s ad uses structured, brand-led messaging focused on product availability and category breadth. (Screenshot by author, February 2026)

Key Takeaway For RSAs

Both ads are valid inputs for RSAs, but they serve different strategic purposes:

Brand       Tone             Focus                    Gen Z Alignment
Glossier    Conversational   Emotional + Community    High
COVERGIRL   Informational    Product + Availability   Moderate

A mix of both styles gives Google more flexibility across AI‑driven surfaces like AI Overviews and AI Mode.

2. Refresh Creative Assets

Gen Z doesn’t like advertising that interrupts content, which means asset groups should feel native to the environments where they appear. That includes lifestyle imagery, lo-fi video, real customers, UGC-style clips, and visuals that blend naturally into the feed rather than stand out as overt advertising.

Organic‑looking creative performs better across PMax and Demand Gen.

3. Leverage Smart Bidding

Smart bidding is designed for nonlinear, multi-touch journeys. It adapts to device switching, platform hopping, and privacy-centric signals, allowing campaigns to respond more effectively to the way users move between channels and interactions before converting.

This makes it well‑suited for Gen Z’s browsing behavior.

4. Test Gen Z‑Specific Variants

Use Google Ads Experiments to compare:

  • Control: Standard corporate creative
  • Variant: Conversational, UGC‑style creative

This approach provides clear performance insights without requiring a full account overhaul.

5. Use Data‑Driven Attribution (DDA)

Last‑click attribution hides the impact of upper‑funnel channels. DDA provides a clearer view of how YouTube, Demand Gen, and PMax contribute to conversions, which is essential for understanding Gen Z behavior.

Adapting To The New Standard

Gen Z is not opposed to advertising; they are opposed to interruption. They respond to messaging that feels honest, human, relevant, and aligned with their expectations in the spaces where they spend their time.

Brands that adapt their full funnel and not just their headlines will be better positioned to reach this demographic in 2026.

Advertisers should review their current Google Ads campaigns and assess whether Gen Z can see themselves in the messaging. If not, a strategic refresh is warranted.

Final Thoughts

Gen Z isn’t rejecting advertising outright. They’re rejecting anything that feels out of place in the spaces where they spend their time. When brands adjust their creative, targeting, and proof to match how this generation actually discovers and evaluates products, the results tend to follow.

The shift doesn’t require a full rebuild. It just requires intention, testing, and updating the parts of your Google Ads strategy that still assume a linear funnel or a polished, brand‑first message.

If your current campaigns don’t reflect how Gen Z searches, scrolls, and decides, this is the moment to rethink the approach. Small changes go a long way when they match the way people actually behave.


Featured Image: Stock-Asso/Shutterstock