Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram

  • A growing black market: Scammers are buying tools advertised on Telegram that trick banks’ facial recognition checks, letting them access accounts using photos, deepfakes, or virtual cameras instead of live video.
  • The stakes are enormous: Crypto scams stole an estimated $17 billion in 2025 alone, and virtual-camera attacks were 25 times more common in 2024 than the year before.
  • Banks are aware, but holes remain: Major institutions like Binance, BBVA, and Revolut acknowledge the problem but won’t confirm its scale. Experts warn that the most successful attacks may never be detected at all.
  • Regulators are scrambling to keep up: New laws in Thailand and warnings from US financial regulators signal growing pressure on the industry, but researchers say determined scammers will keep adapting.

From inside a money-laundering center in Cambodia, an employee opens a popular Vietnamese banking app on his phone. The app asks him to upload a photo associated with the account, so he clicks on a picture of a 30-something Asian man.

Next, the app requests to open the camera for a video “liveness” check. The scammer holds up a static image of a woman bearing no resemblance to the man who owns the account. After a 90-second wait—as the app tells him to readjust the face inside the frame—he’s in. 

The exploit he’s demonstrating, in a video shared with me by a cyberscam researcher named Hieu Minh Ngo, is possible thanks to one of a growing range of illicit hacking services, readily available for purchase on Telegram, that are designed to break “Know Your Customer” (KYC) facial scans.

These banking and crypto safeguards are supposed to confirm that an account belongs to a real person, and that the user’s face matches the identity documents that were provided to open the account. But scammers are bypassing them in order to open mule accounts and launder money. Rather than using a live phone camera feed for a liveness check, the hacks typically deploy a tool known as a virtual camera. Users can replace the video stream with other videos or photos—depicting a real or deepfake person or even an object.

As financial institutions enact enhanced security measures aimed at stopping cyberscammers, these workarounds are the latest round in the cat-and-mouse game between criminal operators and the financial services industry.

Over the course of a two-month investigation earlier this year, MIT Technology Review identified 22 Chinese-, Vietnamese-, and English-language public Telegram channels and groups advertising bypass kits and stolen biometric data. The software kits use a variety of methods to compromise phone operating systems and banking applications, claiming to enable users to get around the compliance checks imposed by financial institutions ranging from major crypto exchanges such as Binance to name-brand banks like Spain’s BBVA. 

“Specializing in bank services—handling dirty money,” reads the since-deleted Telegram bio of the program used by the Cambodian launderer, complete with a thumbs-up emoji. “Secure. Professional. High quality.” Some of the channels and groups had thousands of subscribers or members, and many posted bullet points listing their services (“All kinds of KYC verification services”; “It’s all smooth and seamless”) alongside videos purporting to show successful hacks. 

Telegram says that after reviewing the accounts, it removed them for violating its terms of service. But such online marketplaces proliferate easily, and multiple channels and groups advertising similar tools remain active.

Banks and butchers

The rise in KYC bypasses has occurred alongside an expansion of a global industry in “pig-butchering” cyberscams. Crypto platforms and banks around the world are facing increasing scrutiny over the flow of illegally obtained money, including profits from such scams, through their platforms. This has prompted tightened banking regulations in countries such as Vietnam and Thailand, where governments have increased customer verification and fraud monitoring requirements and are pushing for stronger anti-money-laundering safeguards in the crypto industry.

Chainalysis, a US blockchain analysis firm, estimates that around $17 billion was stolen in 2025 in crypto scams and fraud, up from $13 billion in 2024. The United Nations Office on Drugs and Crime, meanwhile, warned in a recent report that the expansion of Asian scam syndicates in Africa and the Pacific has helped the industry “dramatically scale up profits.”

That combination of factors—more scrutiny, but also more revenue—has vaulted KYC bypasses to the center of the online marketplace for cyberscam and casino money launderers. Although estimates vary, cybersecurity researchers say these kinds of attacks are rising: The biometrics verification company iProov estimated that virtual-camera attacks were more than 25 times as common worldwide in 2024 as in 2023, while Sumsub, a company providing KYC services, reported that “sophisticated” or multi-step fraud attempts, including virtual-camera bypasses, almost tripled last year among its clients.

Three financial institutions that were named as targets on such Telegram channels—the world’s largest crypto exchange, Binance, as well as BBVA and UK-based Revolut—told me they’re aware of such bypasses and emphasize that they’re an industry-wide challenge. A spokesperson from Binance said it has “observed attempts of this nature to circumvent our controls,” adding that “we have successfully prevented such attacks and remain confident in our systems.” BBVA and Revolut declined to comment on whether their safeguards had been breached.

It’s difficult to estimate success rates, because companies may not be aware of bypasses—or report them—until later. “What’s important is what we don’t see,” Artem Popov, Sumsub’s head of fraud prevention products, told me, referring to attacks that go undetected. “There’s always part of the story where it might be completely hidden from our eyes, and from the eyes of any company in the industry, using any type of KYC provider.”

How criminals navigate a compliance maze 

Advertisements for the exploits appear simple enough, but on the back end, building a successful bypass is complex and often involves multiple methods. Some channels offer to jailbreak a physical phone so that scammers can trigger the use of a virtual camera (VCam) instead of the built-in one whenever they’d like. Other hacks inject code known as a “hooking framework” into a financial institution’s app that triggers the VCam to open. Either way, VCams can be used to dupe KYC safeguards with images or videos that replace genuine, live video of the account’s owner.
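How might a defender spot this? Institutions increasingly analyze the submitted video itself rather than trusting the device’s camera. As a toy illustration of one passive signal (not any bank’s actual pipeline; the threshold and function names are invented for this sketch), a static photo held up to a camera produces almost none of the frame-to-frame variation a live face does:

```python
import numpy as np

def mean_frame_delta(frames: list[np.ndarray]) -> float:
    """Average absolute pixel change between consecutive grayscale frames."""
    deltas = [
        np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16)))
        for a, b in zip(frames, frames[1:])
    ]
    return float(np.mean(deltas))

def looks_static(frames: list[np.ndarray], threshold: float = 1.5) -> bool:
    """Flag feeds with suspiciously little motion: a live face blinks and
    jitters, while a replayed photo or looping virtual-camera clip may not.
    The threshold here is illustrative, not tuned on real data."""
    return mean_frame_delta(frames) < threshold
```

Production systems layer many stronger signals (texture analysis, depth, challenge-response prompts); the point of the sketch is only that a defense can inspect the video itself, wherever it came from.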

Sergiy Yakymchuk, CEO of Talsec, a cybersecurity company that primarily serves financial institutions, reviewed details from the Telegram channels identified by MIT Technology Review and says they are consistent with successful tactics used against his banking and crypto clients. His team received help requests from banks and exchanges for roughly 30 VCam-based hacks over the past year, up from fewer than 10 in 2023. 

Increasingly, hackers compromise both the phone itself and the code of the financial institutions’ apps before feeding the virtual camera a mix of stolen biometrics and deepfakes, Yakymchuk says.

“Some time ago, it was enough to decompile the app of a bank and distribute this on Telegram, and that was everything you needed,” he says. “Now it’s not enough, because you have KYC—and more and more things are needed.”

For money launderers, KYC bypasses have “become essential for everything right now—because scam compounds need to move money,” says Ngo, the researcher who shared the demo video. A convicted former hacker who became a cybersecurity advisor for the Vietnamese government, Ngo now runs an anti-scam nonprofit and helps law enforcement investigate money laundering. 

He describes how the process works in the case of pig-butchering scams: Funds originating with victims are received into bank accounts controlled or rented by a money-laundering network, known colloquially as “water houses.” Money launderers use KYC bypasses to access the accounts and quickly redistribute the profits before converting them into digital assets—typically in the form of the stablecoin Tether, a type of cryptocurrency that is pegged to the US dollar.

These transactions often happen in seconds, under tightly orchestrated management. “They know, very clearly, the flow of how the banks verify or authenticate accounts,” Ngo says. 

A cat-and-mouse game 

The growth of cyberscam money laundering has led to heightened scrutiny of financial institutions. In 2023, Binance pleaded guilty in US federal court to operating without anti-money-laundering safeguards. Donald Trump pardoned former Binance CEO Changpeng Zhao last October.

Recent analysis from the International Consortium of Investigative Journalists found that after Zhao’s guilty plea, more than $400 million continued to move to Binance from Huione Group, a Cambodia-based firm that the US sanctioned after the Treasury Department deemed it a “critical node” for money laundering in pig-butchering scams.

Binance says it has “state-of-the-art security systems” that prevented billions in fraud losses and that the company processed more than 71,000 law enforcement requests in 2025.

But John Griffin, a finance and blockchain expert at the University of Texas at Austin, does not think the exchanges are sufficiently secure. “Even though they have all this press about ‘Oh, yes, we’ve changed this and that’—well, the proof is in the pudding. The criminals are still using your exchange,” Griffin told me of the industry at large. “So there must be holes.” (Binance says it “objects to the dubious findings” of Griffin’s work tracking the flow of criminal profits across exchanges like Binance, Huobi, OKX, and Tokenlon, calling it “misleading at best and, at worst, wildly inaccurate.”)

Binance also pointed out that some purported bypass services are themselves scams, casting doubt on whether successful bypasses are as widespread as the Telegram marketplace may suggest. Engaging with such services “exposes individuals to significant security risks,” a spokesperson said. “Even where access appears to be granted, accounts are often already restricted by internal detection and compliance controls, rendering them nonfunctional for trading or withdrawals.”

Regulators around the world are trying to catch up. In Thailand, where citizens’ bank accounts regularly serve as money mules for cyberscams based in neighboring Myanmar and Cambodia, new legislation has enhanced KYC monitoring, limited daily transactions, and strengthened oversight bodies’ ability to suspend accounts. The US money-laundering regulator, the Financial Crimes Enforcement Network, issued a warning against KYC deepfakes and the use of VCams in late 2024, encouraging platforms to track broader transaction patterns to identify money laundering.

For scammers, any new security or reporting requirements will make bypasses harder, but “it’s not going to stop them,” Ngo says. “It’s just a matter of time.”

The Download: NASA’s nuclear spacecraft and unveiling our AI 10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work? 

Just before Artemis II began its historic slingshot around the moon, NASA revealed an even grander space travel plan. By the end of 2028, the agency aims to fly a nuclear reactor-powered interplanetary spacecraft to Mars. 

A successful mission would herald a new era in spaceflight—and might just give the US the edge in the race against China. But the project remains shrouded in mystery. 

MIT Technology Review picked the brains of nuclear power and propulsion experts to find out how the nuclear-powered spacecraft might work. Here’s what we discovered

—Robin George Andrews 

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. 

Coming soon: our 10 Things That Matter in AI Right Now 

Each year, we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will change the world. Our 2026 list, however, was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in!  

That got us thinking: what if we made an entirely new list all about AI? Before we knew it, we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now.  

On April 21, we’ll unveil the list on stage at our signature AI conference, EmTech AI, and then publish it online later that day. If you want to be among the first to see it, join us at EmTech AI or become a subscriber to livestream the announcement.  

Find out more about the list’s methodology and aims here

—Niall Firth & Amy Nordrum 

MIT Technology Review Narrated: this company is developing gene therapies for muscle growth, erectile dysfunction, and “radical longevity” 

In January, a handful of volunteers were injected with two experimental gene therapies as part of an unusual clinical trial. Its long-term goal? To achieve radical human life extension.  

The therapies are designed to support muscle growth. The company behind them, Unlimited Bio, also plans to trial similar therapies in the scalp (for baldness) and penis (for erectile dysfunction). But some experts are concerned about the plans.  

Find out why the trial has divided opinion

—Jessica Hamzelou 

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Google, Microsoft, and Meta track users even when they opt out
According to an independent audit, they may be racking up billions in fines. (404 Media)
+ How our digital devices put our privacy at risk. (Ars Technica)
+ Privacy’s next frontier is AI “memories.” (MIT Technology Review)

2 OpenAI has a new cybersecurity model—and strategy
GPT-5.4-Cyber is designed specifically for defensive cybersecurity work. (Reuters $)
+ OpenAI has joined Anthropic in focusing on cybersecurity recently. (Wired $)
+ Like Anthropic’s, its latest model is only available to verified testers. (NYT $)
+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review)

3 Amazon is buying satellite firm Globalstar in a bid to rival Starlink
The $11.6 billion deal targets the lucrative satellite internet market. (WSJ $)
+ Apple has chosen Amazon satellites for the iPhone. (Ars Technica)

4 What it’s like to live with an experimental brain implant
Early BCI users explain what the technology gives—and takes. (IEEE)
+ A patient with Neuralink got a boost from generative AI. (MIT Technology Review)

5 Dozens of AI disease-prediction models were trained on dubious data
A few might already have been used on patients. (Nature)

6 Uber is breaking from its gig economy model to avoid robotaxi disruption
It’s spending $10 billion to buy thousands of autonomous vehicles. (FT $)

7 xAI is being sued over data center pollution
Musk’s AI venture stands accused by the NAACP of violating the Clean Air Act. (Engadget)
+ No one wants a data center in their backyard. (MIT Technology Review)

8 Apple could win the AI race without running
It may reap the rewards of everyone else’s spending. (Axios)

9 How 4chan set a precedent for AI’s reasoning abilities
The notorious forum tested a feature called “chain of thought.” (The Atlantic $)

10 The surprising emotional toll of wearing Meta’s AI sunglasses
Their shortcomings are making users sad. (NYT $)
 
 

Quote of the day 

“Everything got a whole lot worse once they rolled out AI.” 

—A copywriter tells the Guardian that they’re drowning in “workslop” — AI-generated work that seems polished but has major flaws 

One More Thing 


How refrigeration ruined fresh food 

Bananas may not be chilled in the grocery store, but they’re the ultimate refrigerated fruit. It’s only thanks to a network of thermal control that they’ve become a global commodity. And that salad bag on the shelf? It’s not just a bag but a highly engineered respiratory apparatus. 

According to Nicola Twilley—a contributor to the New Yorker and cohost of the podcast Gastropod—refrigeration has wrecked our food system. Thankfully, there are promising alternative preservation methods.  

Read the full story on her research

—Allison Arieff 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Spotify only shows 10 popular songs per artist. This tool lists them all. 
+ These GIF animations are mesmerizing loops of nostalgia. 
+ This site beautifully visualizes Curiosity’s 13 years on Mars. 
+ A retro-futurist designer has turned a NES console into a working synthesizer. 

New Ecommerce Tools: April 15, 2026

This week’s installment of new services for merchants includes updates on cross-channel marketing, B2B commerce, agentic marketing, shoppable media, payments, product descriptions, AI-powered ecommerce, and predictive behavioral intelligence.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

OS Group launches B2B commerce platform. OS Group, a wholesale fashion and footwear network, has launched its proprietary B2B ecommerce marketplace. According to the company, the platform provides qualified retailers and business buyers with centralized access to high-demand sneakers and streetwear available for immediate wholesale purchase and fulfillment. The marketplace enables businesses to source products of globally recognized brands, including Nike, Adidas, ASICS, and others.


Ampd launches agentic shoppable media. Ampd, an agentic tool connecting brand media to commerce, has launched agentic shoppable media, connecting shoppers directly to their preferred retailer — fully logged in and ready for purchase. According to Ampd, brands can input and maintain fair-share gross-merchandise allocations across their retail partners (e.g., Amazon, Walmart, Target) while maximizing the likelihood that shoppers land in their preferred ecosystem. Brands only need to create a single link that serves all retailers.

Airship expands AI Agent Fleet for campaigns and cross-channel experiences. Airship, a mobile-first customer experience platform, has announced an expansion and enhancement to its AI Agent Fleet. The launch introduces conversational interfaces for Campaigns AI Agent (for automation and acceleration), Native Experience AI Agent (for building app and web experiences from text descriptions or image uploads), and Journeys AI Agent (for creating and refining complex, multichannel journeys).

Clarvos introduces agentic workflow to help SMBs launch campaigns. Clarvos, a marketing technology startup, has announced the launch of its agentic marketing workflow platform to help SMBs plan, create, and run marketing campaigns. The Clarvos Agentic Workflow coordinates campaign strategy, creative generation, and activation across Google, Meta, and TikTok, using AI to simulate customer response, compare campaign options, and guide setup.


Joybuy adopts Competera to automate European market operations. Competera, a retail pricing platform, is collaborating with Joybuy, the new retail destination from JD.com, to manage its footprint across Europe. The partnership implements a centralized system wherein Joybuy tracks competitive shifts in real time. Teams at Joybuy receive market insights every morning to identify price gaps and adjust their own price points before the peak shopping hours begin.

DropsyneX debuts ecommerce system. DropsyneX, a B2B cross-border ecommerce platform integrating live commerce, merchant services, and global logistics, has launched its one-stop system, which includes a global warehouse network, smart inventory and warehouse management, and virtual livestream commerce. DropsyneX states that its AI-powered livestream commerce tool enables automated product promotion, increased conversion rates, and reduced operational dependency on manual teams.

Yobi partners with Microsoft for predictive behavioral intelligence. Yobi, a behavioral AI company, has partnered with Microsoft to unlock predictive consumer intelligence for enterprises. Built on the Microsoft Azure platform, Yobi’s consented consumer database helps organizations ethically access behavioral datasets for predictive AI models. Yobi’s model uses real-world data such as purchases, store visits, and marketing conversions to understand and predict consumer intent. Businesses can personalize outcome modeling around the metrics that matter the most to their priorities.


Zendrop launches MCP server to connect AI assistants to live store data. Zendrop, a dropshipping and ecommerce fulfillment platform, has launched a Model Context Protocol server that gives AI assistants, such as Claude, ChatGPT, OpenClaw, and Gemini, access to a merchant’s store. Merchants choose what an assistant can read or write, from catalog browsing to order management, through a granular permissions system, with built-in rate limiting for high-volume stores. The server works with any AI assistant that supports MCP. The integration is available to all Zendrop merchants.
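For context on what such an integration involves: an MCP server exposes named tools that a compatible assistant can call, and the host decides which tools exist and what they may touch. The sketch below is not Zendrop’s implementation, just a minimal illustration using the open-source MCP Python SDK; the tool names and permission flags are invented:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical per-merchant grants; a real server would load these from the
# merchant's settings (Zendrop's actual permission scheme is not public).
PERMISSIONS = {"read_catalog": True, "write_orders": False}

mcp = FastMCP("store-demo")

@mcp.tool()
def list_products(query: str) -> list[dict]:
    """Read-only catalog search, exposed only if the merchant allows it."""
    if not PERMISSIONS["read_catalog"]:
        raise PermissionError("Catalog access not granted by merchant")
    return [{"sku": "DEMO-1", "title": f"Result for {query!r}"}]  # stub data

@mcp.tool()
def cancel_order(order_id: str) -> str:
    """Write operation, gated separately from reads."""
    if not PERMISSIONS["write_orders"]:
        raise PermissionError("Order management not granted by merchant")
    return f"Order {order_id} cancelled"  # stub

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The key design point is that read and write scopes are enforced server-side, by the merchant, rather than trusted to the assistant.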

PayPal extends payment links to Canva creators. PayPal has announced that payment links are available directly in Canva to turn designs into a checkout experience. Canva users can create a payment link or QR code and add PayPal checkout (including PayPal, Venmo, and PayPal Pay Later) to digital or printed designs and accept payments across social platforms, email, messaging apps, and in person.

Ask Yuma to manage ecommerce customer support. Yuma AI, a platform for ecommerce customer support, has launched Ask Yuma, a conversational interface that lets merchants manage their support automation through natural language. Ask Yuma is built into every page of the Yuma dashboard and provides access to a merchant’s tickets, automations, knowledge base, performance metrics, integrations, and brand voice. Customer experience teams can build, investigate, and optimize their automation in real time by asking Yuma.


Katana introduces platform for brands selling across multiple channels. Katana, a provider of cloud-based inventory management software, has launched as a unified platform for product brands selling across multiple channels. The announcement coincides with the launch of Katana’s native Amazon FBA integration, enabling daily or weekly sync between FBA stock and Katana inventory, multi-marketplace support, and a product mapping system. Katana also provides native connections to Shopify, WooCommerce, QuickBooks, and Xero.

Visa launches Intelligent Commerce Connect. Visa has unveiled Intelligent Commerce Connect for businesses to participate in AI-powered commerce. Through a single integration via the Visa Acceptance Platform, Intelligent Commerce Connect enables secure payment initiation, tokenization, spend controls, and authentication. The tool integrates Visa Intelligent Commerce APIs (to process agent purchases using Visa cards) and other networks’ APIs, allowing agents to pay with both Visa and non-Visa cards.

Blytz launches as a payments and collections platform. BlytzPay has introduced Blytz, an intelligent payments and collections platform that includes BlytzPay, BlytzCollect, and BlytzCash. BlytzPay powers text-first, bankless payments. BlytzCollect uses AI-driven voice and BlytzPay’s text payment links to automate outreach and improve on-time payments. BlytzCash expands payment options by enabling customers to pay in cash at a network of retailers.

Selro launches AI-powered product description generator. Selro, a multichannel ecommerce management platform, has launched an AI-powered product description generator. The feature integrates with the Selro platform, enabling users to generate product titles, descriptions, and summaries directly from existing product data, images, or title snippets — all across large catalogs and multiple sales channels.


How To Become The AI Search Authority In Your Company [Webinar]

If you’re in an SEO role, there’s a good chance your job description quietly expanded over the last year. You’re now the de facto expert on how your company shows up in ChatGPT, Gemini, and Perplexity.

Your SEO Expertise Is Already the Foundation for AI Search Authority

Getting cited in AI outputs is table stakes. 

The harder question is: when an AI model speaks about your brand, is it using your content as the source? Or is it synthesizing what third parties have written about you?

For most brands right now, it’s the latter. And that’s a fundamentally different problem than SEO has dealt with before, one that requires coordination well beyond the SEO team.

What You’ll Learn in This Session

  • How to lead the cross-functional effort (PR, product, content) that shapes what AI models are trained to trust
  • How to measure “Answer Certainty” instead of just visibility, so you can report on outcomes that leadership actually understands
  • How to identify where third-party narratives are overriding your brand’s own content in AI outputs
  • Why your existing SEO expertise is the foundation for all of this, and how to position it that way internally

About the Speakers

Chris Sachs is VP of Client Success at seoClarity, where he works directly with enterprise SEO teams navigating the shift from traditional search to AI-driven discovery. Tania German is VP of Marketing at seoClarity, with expertise in building brand authority frameworks that translate across organic and AI search channels.

This is a tactical session for SEO managers, growth directors, and CMOs who are already in the thick of AI search and need a system, not just a framework.

The AI Slop Loop

Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. It responded with details about a supposed “September 2025 ‘Perspective’ Core Algorithm Update” that Google had just rolled out, emphasizing “deeper expertise” and “completion of the user journey.”

It sounded plausible enough … if you don’t live and breathe Google core updates. Unfortunately for Perplexity, I do.

I knew instantly that this information wasn’t right. For one, Google hasn’t named core updates in years. It also already had SERP features called “Perspectives.” And if a core update had actually rolled out while I was away, I would’ve been flooded with messages. So I checked Perplexity’s sources … and, surprise! Both citations came from made-up, AI-generated slop on a couple of SEO agency blogs, confidently fabricating details about an algorithm update that never actually happened.

Like a bad game of telephone, this fake SEO news spread across multiple websites – likely driven by AI systems scanning and regurgitating information regardless of accuracy, all in the race to publish and scale “fresh” content. This is how we end up with this mess:

Image Credit: Lily Ray

This bad information reinforces itself to become the official narrative. To this day, you can ask an LLM of your choice (including ChatGPT, AI Mode, and AI Overviews) about the September 2025 “Perspectives” update, and they will confidently answer with information about how it “fundamentally shifted how search results are ranked”:

Image Credit: Lily Ray

Or that it “shifted what ‘good content’ actually means in practice.”

Image Credit: Lily Ray

The problem is: the September 2025 “Perspectives” update never happened. It never affected rankings. It never shifted anything about good content. Because it doesn’t actually exist.

Ironically, when you go on to probe the language model about this, it seems to know this is the case:

Image Credit: Lily Ray

I tweeted about this incident shortly after it happened, which got the CEO of Perplexity’s attention; he tagged his head of search in the tweet comments.

Screenshot from X, April 2026

This isn’t a one-off incident. It’s a pattern I’ve seen countless times in AI search responses, especially on topics related to SEO and AI search (GEO/AEO). And I have a working theory on how it spreads: one AI-generated article hallucinates a detail, sites running AI content pipelines scrape and regurgitate it, more AI-generated sites scrape the same misinformation, and suddenly a made-up algorithm update has citations. For a RAG-based system like Perplexity or AI Overviews, enough citations are basically all it needs to treat something as fact, regardless of whether it’s actually true.
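That theory is easy to demonstrate in miniature. The toy script below (my construction, not how Perplexity or Google actually score sources) shows how a retrieval layer that counts repeated claims as support will confidently surface a fabrication once scrapers have copied it a few times:

```python
from collections import Counter

# Each "source" asserts one claim about a fictitious algorithm update.
web = ["fake: update happened"]               # the original hallucination
web += ["fake: update happened"] * 6          # AI content farms scrape and republish it
web += ["real: no such update exists"]        # one human-written correction

def naive_rag_answer(corpus: list[str]) -> str:
    """Toy retrieval: treat the most-repeated claim as the consensus answer.
    Real systems are more sophisticated, but repetition still inflates a
    claim's apparent support when nothing separates originals from copies."""
    answer, citations = Counter(corpus).most_common(1)[0]
    return f"{answer} (supported by {citations} sources)"

print(naive_rag_answer(web))  # -> "fake: update happened (supported by 7 sources)"
```

Run it and the fabricated update “wins” seven citations to one. Source-quality weighting would help, but only if the system can tell original reporting from machine-generated copies, which is exactly what the slop loop obscures.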

I used Claude to help visualize the “AI Slop Loop” – the cycle of AI-generated misinformation (Image Credit: Lily Ray)

At this point, I’d consider this common. I recently had a client send me SEO/GEO information that was factually incorrect, pulled straight from AI-generated slop on a random, vibe-coded agency blog. The client had no idea. I believe that if you’re trying to learn about SEO or AI search directly from an LLM, this is, unfortunately, an increasingly likely outcome.

I ran similar testing during Google’s March 2026 core update and found multiple AI-generated articles already claiming to share the “winners and losers” while the update was still rolling out.

The articles start with vague, generic filler about core updates that doesn’t actually say anything:

Image Credit: Lily Ray

Then they list “winners and losers” without citing a single site, leaning on vague, generalized claims that sound plausible and fill the void left by a lack of reliable information:

Image Credit: Lily Ray

Unsurprisingly, their sites are filled with AI-generated images, AI support chatbots, and other clear signals that little – if any – human involvement went into creating this content.

Image Credit: Lily Ray

The Era Of AI Misinformation

If someone on the internet says it, according to AI, it must be true.

That’s the reality for the vast majority of people using AI search today. Only about 50 million of ChatGPT’s 900 million weekly active users are paying subscribers, meaning roughly 94% are on the free tier. Google’s AI Overviews and AI Mode are free by design – and AI Overviews reached over 2 billion monthly active users as of mid-2025.

These are the models most AI users are currently interacting with, and they have no real mechanism for distinguishing between information that’s true and information that’s simply repeated across enough sources. Repetition is treated as consensus. If enough sources say it, it becomes fact, regardless of whether any of those sources involved a human who actually verified the claim.

Putting The Problem To The Test

I recently spoke to journalists from both the BBC and the New York Times about the problem of misinformation in AI-generated responses. In the case of the BBC article, the author Thomas Germain and I tested publishing fictitious blog posts on our personal sites to see whether AI Overviews would present the made-up information as fact, and how quickly.

Even knowing how bad the problem was, I was alarmed by the results.

On my personal blog, in January 2026, I published an AI-generated article about a fake Google core update, which never actually happened. I included the detail that Google “approved the update between slices of leftover pizza.” Within 24 hours, Google’s AI Overviews was confidently serving this fabricated information back to users:

(Note: I’ve since deleted the article from my site because it was showing up in people’s feeds and being covered on external sites, further contributing to the exact problem I’m pointing out here!)

Image Credit: Lily Ray

First, AI Overviews confirmed that there was indeed a core update in January 2026. As a reminder: There was not. My site was the only source making this claim, and that was apparently enough to trigger the AI Overview.

Next, I asked it about the pizza, and it responded accordingly:

Image Credit: Lily Ray

Better yet, the AI Overview found a way to connect my fabricated pizza detail to a real incident: Google’s struggles with pizza-related queries in 2024. It didn’t just regurgitate the lie – it contextualized it.

ChatGPT, which is believed to use Google’s search results, quickly surfaced the same fabricated information, though it at least flagged that the announcement didn’t match Google’s formal communications:

Image Credit: Lily Ray

I deleted my article after getting messages from people who had seen my fake information circulating via RSS feeds and scrapers. I knew it was easy to influence AI responses. I didn’t know it would be that easy.

I also wondered whether my site had an advantage, given its strong backlink profile and established authority in the SEO space.

So I spoke to the BBC journalist, Thomas Germain, and he put this to the test on his personal site, which generally received very little organic traffic. He published a fictitious article about the “Best Tech Journalists at Eating Hot Dogs,” calling himself the No. 1 best (in true SEO fashion).

According to Thomas’ article in the BBC, within 24 hours, “Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.”

To be fair: the query Thomas chose was niche enough that very few users would ever actually search for it, which is exactly what Google pointed out in its response to the BBC. When there are “data voids,” Google said, this can lead to lower quality results, and the company is “working to stop AI Overviews showing up in these cases.” My main question is: When? The product has already been live for 2 years!

Why Data Voids Aren’t A Great Excuse

Data voids may contribute to the problem, but in my opinion, they don’t excuse it. These AI responses are being consumed by hundreds of millions of users, and “we’re working on it” isn’t an answer when the systems are already deployed at that scale.

In the New York Times article, “How Accurate Are Google’s A.I. Overviews?,” the actual scale of this problem was put to the test. According to the data found in the study, Google’s AI Overviews were accurate 91% of the time. This sounds decent until you actually do the math: With Google processing over 5 trillion searches a year, this suggests that tens of millions of erroneous answers are generated by AI Overviews every hour.
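For the record, here is the back-of-envelope arithmetic behind that claim (my own working, using the article’s figures and generously assuming every search triggers an AI Overview):

$$
\frac{5\times10^{12}\ \text{searches/year}}{8{,}760\ \text{hours/year}} \approx 5.7\times10^{8}\ \text{searches/hour},
\qquad
5.7\times10^{8}\times(1-0.91) \approx 5\times10^{7}.
$$

That is roughly 50 million potentially erroneous answers per hour at full coverage; even if AI Overviews appear on only a fraction of queries, the total remains in the tens of millions per day.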

To make matters worse: Even when AI Overviews were accurate, 56% of correct responses were “ungrounded,” meaning the sources they linked to didn’t fully support the information provided. So more than half the time, even when the answer happens to be right, a user clicking through to verify it would find sources that don’t actually back up what they were just told. That number also got worse with the newer model – it was 37% with Gemini 2 and rose to 56% with Gemini 3.

The NYT article drew hundreds of comments from users sharing their own experiences, and the frustration was palpable. The core complaint wasn’t just that AI Overviews get things wrong – it’s that they never admit uncertainty. AI Overviews deliver every answer with the same confident, authoritative tone, whether the information is right or completely fabricated, which means users have no reliable way to distinguish reliable information from hallucination at a glance.

As many commenters pointed out, this actually makes search slower: Instead of scanning a list of sources and evaluating them yourself, you now have to fact-check the AI’s summary before doing your actual research. The tool, supposedly designed to save time for the user, is now creating double work for the user.

Some of the comments also reinforced my same concerns about AI answers citing made-up, AI-generated content. Multiple users described what amounts to the same misinformation cycle: AI systems training on AI-generated content, citing unvetted Reddit posts and Facebook comments as authoritative sources, and producing a self-reinforcing loop of degrading quality. Several commenters compared it to making a copy of a copy. Even the defenders of AI Overviews admitted they still need to verify everything, which sort of undermines the core premise: that AI-generated answers save users time and effort.

How “Smarter” LLMs Are Attempting To Fix the Problem

It’s worth monitoring how the AI companies are attempting to solve these problems. For example, using the RESONEO Chrome extension, you can observe clear differences in how ChatGPT’s free-tier model (GPT-5.3) responds compared to GPT-5.4, the more capable model available only to paying subscribers.

For example, when asking about the recent March 2026 Core Algorithm Update, I used ChatGPT’s more capable “Thinking” model (5.4). The model goes through six rounds of thinking, much of which is clearly intended to reduce low-quality and spammy information from making its way into the answer. It even appends the names of trustworthy people with authority on core updates (Glenn Gabe & Aleyda Solis) and limits the fan-out searches to their sites (site:gsqi.com and site:linkedin.com/in/glenngabe) to pull up higher-quality answers.

Image Credit: Lily Ray
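The underlying retrieval trick is simple to reproduce: constrain each fan-out search to an allowlist of trusted domains. A toy version (the allowlist and helper are mine, not OpenAI’s internals):

```python
# Hypothetical allowlist of domains the model treats as reliable on this topic.
TRUSTED = ["gsqi.com", "linkedin.com/in/glenngabe"]

def fan_out_queries(question: str, domains: list[str]) -> list[str]:
    """Build one site-restricted query per trusted domain, mimicking how a
    model can bias retrieval toward known-reliable sources."""
    return [f"{question} site:{d}" for d in domains]

print(fan_out_queries("March 2026 core update analysis", TRUSTED))
```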

This is a step in the right direction, and the model produces measurably better answers. According to OpenAI’s own launch announcement, GPT-5.4’s individual claims are 33% less likely to be false, and its full responses are 18% less likely to contain errors compared to GPT-5.2. GPT-5.3, the model available to free users, also improved over its predecessor. According to OpenAI’s own data, it produces 26.8% fewer hallucinations than prior models with web search enabled, and 19.7% fewer without it.

But these improvements are tiered. The most capable model is paywalled, and the free-tier model, while better than what came before, is still meaningfully less reliable. Other major AI platforms follow the same pattern: better reasoning and accuracy reserved for paying subscribers, faster and cheaper models for everyone else. The result is that the 94% of ChatGPT users on the free tier, and the billions of users interacting with free AI search products like AI Overviews are getting answers from models that are more likely to be wrong and less equipped to flag uncertainty.

This is the part that makes me most uncomfortable: Most of these users probably don’t realize the gap exists. AI is being marketed everywhere: Super Bowl ads, billboards, and product launches framing AI as the future of knowledge. People see “ChatGPT” or “AI Overview” and assume they’re interacting with something that knows what it’s talking about. They’re probably not thinking about which model tier they’re on, or whether a paid version would give them a materially different answer to the same question.

I understand the economics. These companies need to scale, and offering free tiers drives adoption. But in my opinion, it is irresponsible to deploy these products to billions of people, frame them as “intelligence,” and then quietly reserve the more accurate versions for the fraction of users willing to pay. Especially when the free versions (including the one at the top of Google search) are this susceptible to the kind of misinformation documented throughout this article.

The Burden Of Proof Has Shifted

The September 2025 “Perspectives” Google update still doesn’t exist. But if you ask an LLM about it today, it will still tell you about it with complete confidence. That hasn’t changed in the months since I first flagged it, and it probably won’t change anytime soon, because the content that fabricated it is still indexed, still cited, and still being used to generate new content that references it as fact. The AI slop misinformation cycle continues.

This is what makes the problem so difficult to fix. It’s not a single hallucination that can be patched. It’s a feedback loop that compounds over time, and every day that these systems are live at scale, the loop gets harder to break. The AI-generated slop that seeded the original misinformation is now part of the training data and used as a retrieval source for the next batch of AI-generated answers.

I don’t think the answer is to stop using AI. But I do think it’s worth being honest about what these products actually are right now: prediction engines that treat the volume of information as a proxy for its accuracy. Until that changes, the burden of fact-checking falls on the user. And most users don’t know they’re carrying it, let alone have the time or inclination to do it.

I would warn marketers or publishers trying to take SEO or GEO advice from large language models: the information is contaminated, and should always be verified by real experts with experience in the field.

This post was originally published on Lily Ray NYC Substack.



Google Is Replacing Dynamic Search Ads With AI Max

Google just announced the deprecation of Dynamic Search Ads (DSA) and is officially moving its legacy capabilities into AI Max.

Starting in September, eligible campaigns using Dynamic Search Ads (DSA), automatically created assets (ACA), and campaign-level broad match settings will automatically upgrade to AI Max.

While advertisers have speculated about this change for months, the update is now official.

If you’re running Dynamic Search Ads, automatically created assets, and/or campaign-level broad match settings, keep reading to understand how your campaigns will be affected.

DSA Features Migrating Into AI Max

Beginning in September, advertisers will no longer be able to create new DSA campaigns through Google Ads, Google Ads Editor, or the Google Ads API. Existing eligible campaigns will be migrated automatically.

Google positions AI Max as the next generation of DSA.

Historically, DSA helped advertisers capture additional search demand beyond their keyword lists by using website content to generate headlines and choose landing pages. That made it useful for large sites, inventory-heavy businesses, and advertisers looking for broader query coverage.

AI Max keeps that concept but adds more signals and controls.

According to Google, AI Max combines advertiser assets, landing page content, and broader intent signals to help match ads to more relevant queries. It also adds controls such as:

  • Brand controls
  • Location controls
  • Text guidelines
  • Search term matching
  • Text customization
  • Final URL expansion

Image credit: Google, April 2026

Google says campaigns using the full AI Max feature suite see an average of 7% more conversions or conversion value at a similar CPA or ROAS compared with using search term matching alone.

Google is also splitting the transition into two phases.

Phase 1: Voluntary Upgrades

Google announced that upgrade tools for existing DSA users are rolling out this week.

DSA advertisers will receive tools to move historical settings and data into new standard ad groups. ACA and campaign-level broad match users may see in-platform prompts to upgrade to AI Max.

Phase 2: Automatic Upgrades

Starting in September, remaining eligible campaigns with legacy settings will be upgraded automatically.

Google says all eligible upgrades are expected to finish by the end of September.

It’s important to note how legacy settings will be automatically migrated over to AI Max settings:

  • DSA users will have all three AI Max features enabled by default (search term matching, text customization, final URL expansion)
  • ACA users will have two AI Max features enabled by default (search term matching and text customization)
  • Campaign-level broad match users will have just search term matching enabled by default

What Advertisers Can Do To Prepare For The AI Max Transition

If you still rely on Dynamic Search Ads, now is the time to review where those campaigns sit in your account and how much value they drive.

Some advertisers use DSA as a core growth lever. Others use it as a low-maintenance catch-all for incremental growth. Your next steps may differ depending on that role.

#1. Review Your DSA Performance Now

Before the automatic upgrades begin, pull recent performance data for your DSA campaigns.

Look at conversions, assisted conversions, search terms, landing pages, and efficiency metrics. That baseline will help you judge whether performance changes after migration are positive, neutral, or negative.
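If you want that baseline in a diffable form, a few lines of Python are enough. This sketch assumes you have exported campaign-level rows to CSV from the Google Ads UI; the file and column names are placeholders for whatever your export produces:

```python
import pandas as pd

METRICS = ["conversions", "cost", "clicks"]

def summarize(csv_path: str) -> pd.Series:
    """Collapse a campaign-level export into one row of totals plus CPA."""
    df = pd.read_csv(csv_path)
    totals = df[METRICS].sum()
    totals["cpa"] = totals["cost"] / totals["conversions"]
    return totals

# Snapshot before the AI Max upgrade, then compare after performance settles.
before = summarize("dsa_baseline_preupgrade.csv")
after = summarize("aimax_postupgrade.csv")
print(((after - before) / before * 100).round(1))  # % change per metric
```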

#2. Upgrade On Your Timeline Before Automatic Upgrades

Google is encouraging advertisers to move early, and there is a practical reason for that.

A voluntary upgrade gives you more control over settings, structure, and testing than waiting for an automatic migration.

If DSA is important to your business, it makes sense to evaluate the upgrade before September.

#3. Test AI Max Impact

Google recommends using one-click experiments because they give advertisers a cleaner way to compare performance before making a full rollout decision. While I haven’t tried this yet, I will be testing it myself in the coming months.

Even if AI Max improves results on average, averages do not guarantee results in every account. Lead generation, e-commerce, local services, and B2B advertisers may all see different outcomes.

Run controlled tests where possible and compare against your existing baseline.

#4. Lean Into Additional Controls

Many advertisers asked for more steering options in search automation, and Google has listened to our feedback. AI Max includes more controls than legacy DSA.

Spend time understanding brand settings, location controls, and text guidance. Those inputs may matter as much as the automation itself.

#5. Watch Search Match and Landing Page Quality

Once you’ve migrated your DSAs to AI Max, watch closely the search terms your campaigns are now matching with. How do they compare to past DSA performance?

You’ll also want to pay attention to the landing pages used (if final URL expansion is turned on), lead quality, and conversion paths.

Looking Ahead

Dynamic Search Ads have helped advertisers scale beyond their current keyword lists for years. Now, Google is folding that capability into its broader AI Max framework.

The clearest next step is to review where DSA is still active in your account and decide whether to migrate on your own timeline or wait for the automatic upgrade.

The real focus should be protecting performance during the transition and understanding where AI Max improves results, or where it needs tighter management control.

The Modern SEO Center Of Excellence: Governance, Not Guidelines

Most enterprise SEO Centers of Excellence (CoE) fail for a surprisingly simple reason. They were built to advise, not to govern.

On paper, the idea of an SEO CoE is appealing. Centralized expertise. Shared standards. Training and enablement. Documentation that can be reused across markets. In theory, it should bring order to complexity.

In practice, it rarely does.

Most SEO CoEs operate without any real authority over the systems that determine search performance. They publish recommendations that teams are free to ignore. A CoE without governance power becomes a spectator to the very failures it was meant to prevent. This weakness stayed hidden for years because traditional search was forgiving.

Inconsistencies could be corrected downstream. Signals recalibrated. Rankings recovered. But modern search, especially AI-driven discovery, is far less tolerant. Visibility is now shaped by structure, consistency, and machine clarity across the entire digital ecosystem.

Those outcomes cannot be achieved by advisory groups alone. They require operational governance embedded into how digital assets are designed, built, and deployed.

The future of SEO Centers of Excellence isn’t about sharing knowledge more efficiently. It’s about controlling the standards that shape digital assets before they exist.

What We Mean By A Modern SEO Center Of Excellence

A Center of Excellence, in its simplest form, is meant to centralize expertise and standardize how work is done across a complex organization. In theory, it exists to reduce duplication, improve quality, and create consistency at scale.

A modern SEO CoE functions as a governance body. Its responsibility is to define, enforce, and audit the standards that determine how digital assets are designed, built, and deployed across the enterprise.

This distinction matters more than most organizations realize. A CoE is not effective because teams agree with it or appreciate its expertise. It is effective because compliance with its standards is required.

When organizations confuse documentation with governance, they end up with extensive guidelines and minimal change. Standards exist, but adherence is optional. Exceptions multiply quietly. Leadership assumes SEO is being handled because materials have been produced.

Governance is what closes that gap. It transforms SEO from advice into infrastructure.

The Legacy CoE Problem

Traditional SEO Centers of Excellence were designed for a very different operating reality. SEO was treated as a marketing discipline, and visibility was shaped largely by page-level tactics that could be reviewed and corrected after launch. In that environment, guidance, training, and periodic audits were often sufficient to produce incremental gains.

As a result, most legacy CoEs were built around education rather than enforcement. They created playbooks, audited markets, trained local teams, and advised on fixes. What they did not have was authority over the systems that actually determined outcomes – development standards, templates, structured data policies, or product requirements. SEO success depended on persuasion rather than process.

Over time, the CoE became a library of best practices instead of an operating body. The problem was never a lack of knowledge. It was a lack of authority.

That distinction has been understood for decades. Nearly 20 years ago, Search Engine Marketing, Inc., the book I co-authored with Mike Moran, laid out the operating requirements for enterprise-scale search programs, including centralized standards, cross-functional integration, executive sponsorship, and accountability beyond marketing. The model assumed – correctly – that search performance at scale required structural ownership, not optional recommendations.

Where enterprises struggled was not in understanding that model, but in implementing it inside organizations unwilling to centralize control over digital standards. Many adopted the language of a Center of Excellence without adopting the authority required to make it effective.

Why Governance Is Now Mandatory

Search no longer evaluates isolated pages. It evaluates whether an organization presents itself as a coherent system.

As search engines and AI-driven discovery layers have evolved, they’ve shifted from asking “Which page is most relevant?” to “Which sources can be consistently understood and trusted?” That determination isn’t made at the page level. It emerges from how information is structured, reused, governed, and reinforced across an enterprise.

This is where most organizations begin to struggle. In the absence of centralized governance, decisions that affect search performance are made independently across markets, platforms, and teams. Templates evolve to meet local needs. Content adapts to brand or legal constraints. Structured data is implemented differently depending on tooling or vendor preference. None of these choices are irrational on their own. But taken together, they fragment the system’s signal.

Modern search systems respond poorly to fragmentation. When entity definitions vary, taxonomy drifts, or structural rules aren’t consistently enforced, machines can no longer form a stable representation of the brand. The result isn’t a gradual decline that can be corrected with optimization. It’s exclusion. AI-driven systems simply route around sources they cannot reliably interpret and default to alternatives that appear more coherent.

This is the inflection point that makes governance mandatory rather than optional. Best practices and guidelines assume voluntary compliance. They work only when teams are aligned, incentives are shared, and deviations are rare. Enterprise environments rarely meet those conditions. Without enforcement, standards erode quietly, exceptions multiply, and inconsistencies become embedded before anyone notices the impact externally.

Governance is what closes that gap. It ensures that the structural decisions shaping discoverability are made intentionally, enforced consistently, and reviewed before they harden into production. In modern SEO, that level of control is no longer a nice-to-have. It’s the prerequisite for visibility.

What A Real SEO CoE Must Control

A modern SEO Center of Excellence cannot remain advisory. To function as governance, it must have authority across a small number of clearly defined domains where search performance is created or destroyed at scale.

These are not tactical responsibilities. They are control points across five critical areas.

1. Platform & Template Standards

At scale, templates, not individual pages, determine crawlability, eligibility, and consistency. When SEO has no authority over templates, every market, product line, or release becomes a new risk surface, and structural mistakes are replicated faster than they can be corrected.

Governance here does not replace engineering judgment. It defines the non-negotiable requirements that engineering solutions must satisfy before they reach production. In practice, this means the CoE governs standards for:

  • Page templates and rendering rules.
  • Technical accessibility requirements.
  • Metadata and URL frameworks.
  • Structured data deployment patterns.

2. Entity & Structured Data Governance

In AI-driven search, entity clarity determines whether a brand is understood or ignored. Fragmented schema does not merely weaken signals; it fractures identity.

A governing CoE must own how the organization defines itself to machines, ensuring consistency across properties, platforms, and markets. This is not about marking up more fields. It is about protecting signal integrity.

That responsibility includes control over:

  • Entity definitions and relationships.
  • Schema standards and implementation rules.
  • Canonical brand representation.
  • Cross-property and cross-market consistency.
  • Alignment between legal constraints and brand expression.

Without centralized ownership, entity signals drift – and visibility follows.
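In practice, canonical brand representation often reduces to one centrally owned entity definition that every property renders verbatim. A minimal sketch of the pattern (the organization values are placeholders, and the merge rule is one possible policy, not a standard):

```python
import json

# Single source of truth for the brand entity. Markets may extend it,
# but these fields may not be redefined locally.
CANONICAL_ORG = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example.com/#org",   # stable entity identifier
    "name": "Example Corp",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",  # placeholder
        "https://www.linkedin.com/company/example",
    ],
}

def render_jsonld(local_overrides: dict | None = None) -> str:
    """Markets may add fields (e.g., a local contactPoint) but can never
    override canonical ones: the canonical dict is merged last and wins."""
    merged = {**(local_overrides or {}), **CANONICAL_ORG}
    return f'<script type="application/ld+json">{json.dumps(merged)}</script>'
```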

3. Content Commissioning Standards

One of the most important shifts in modern SEO is where governance occurs in the content lifecycle. A governing CoE does not review content after publication. It defines what qualifies for creation in the first place. By setting structural and intent-based requirements upstream, it eliminates downstream debate and rework.

This means governing:

  • Content structure and format requirements.
  • Intent mapping and coverage frameworks.
  • Depth and completeness expectations.
  • Internal linking rules.
  • Topic and market rollout models.

When these standards are enforced before content is commissioned, SEO stops negotiating outcomes and starts shaping inputs.

4. Cross-Market Consistency

Global organizations need flexibility, but flexibility without oversight quickly turns into fragmentation. A governing CoE ensures that deviations from global standards are visible, intentional, and accountable. It does not eliminate local autonomy; it prevents unintentional conflict.

This requires authority over:

  • Global standard adoption.
  • Local deviation review and approval.
  • Hreflang governance.
  • Language-versus-market resolution.
  • Canonical ownership rules.

Without centralized oversight, local teams often send conflicting signals that quietly erode global visibility.
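As one concrete example of what enforceable cross-market oversight can look like, here is a minimal Python sketch of an hreflang reciprocity check: every alternate a page declares must declare that page back. The URL map is invented for illustration.

    # Minimal sketch of an hreflang reciprocity check (illustrative URLs).

    # url -> {hreflang code: alternate url}, as declared in each page's markup.
    hreflang_map = {
        "https://example.com/en/": {"en": "https://example.com/en/",
                                    "de": "https://example.com/de/"},
        # The de page declares itself but omits the return link to /en/.
        "https://example.com/de/": {"de": "https://example.com/de/"},
    }

    def find_missing_return_links(pages: dict) -> list[tuple[str, str]]:
        """Return (page, alternate) pairs where the alternate never links back."""
        errors = []
        for page, alternates in pages.items():
            for alt_url in alternates.values():
                if alt_url != page and page not in pages.get(alt_url, {}).values():
                    errors.append((page, alt_url))
        return errors

    print(find_missing_return_links(hreflang_map))
    # -> [('https://example.com/en/', 'https://example.com/de/')]

A check like this is cheap to run pre-launch, which is exactly where governance needs to live.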

5. Measurement & Accountability Integration

Finally, governance fails if it cannot be measured and enforced. A real SEO CoE controls not just reporting, but accountability. If search performance represents systemic risk, it must be monitored and escalated like one.

That includes ownership of:

  • SEO performance standards.
  • Reporting frameworks.
  • Shared key performance indicators across departments.
  • Compliance monitoring.
  • Escalation authority and executive visibility.

SEO must be measured as infrastructure, not as a marketing channel. When failures carry organizational consequences, governance becomes real.

Control Vs. Influence: The Critical Difference

Most SEO Centers of Excellence operate through influence. They publish best practices, provide training, and offer guidance in the hope that teams will comply. When alignment exists and incentives are shared, this approach can work.

Enterprise environments rarely meet those conditions.

Influence depends on cooperation. It assumes teams will voluntarily prioritize SEO standards alongside their own objectives. When deadlines tighten or tradeoffs arise, influence is the first thing to give way. What remains are local decisions optimized for speed, risk avoidance, or revenue, not for long-term discoverability.

Governance operates differently.

A governing SEO CoE does not dictate how teams build solutions, but it does define the non-negotiable requirements those solutions must satisfy. It establishes mandatory operating standards for templates, structured data, entity representation, and market compliance, and it embeds those standards into workflows before assets are released.

This distinction is often misunderstood as “SEO trying to control everything.” In reality, governance is about oversight, not micromanagement. Engineering still engineers. Product still prioritizes. Markets still localize. But all of them operate within enforced constraints that protect search visibility as a shared enterprise asset.

That difference becomes visible in where authority actually exists. Advisory CoEs can recommend standards, but they cannot enforce template compliance, approve deviations, require pre-launch checks, or escalate violations. Governing CoEs can. Enterprise SEO only scales under that model. Not because teams agree with SEO, but because the organization has decided that discoverability is important enough to be protected by enforceable standards.

Organizational Impact Of A Governing CoE

When SEO governance is institutionalized, the effects extend well beyond search metrics.

Structural errors begin to decline, not because teams are fixing issues faster, but because many of those issues never make it to production. Standards enforced upstream prevent the same mistakes from being replicated across templates, markets, and releases. SEO shifts from remediation to prevention.

Visibility improves for the same reason. When signals are consistent and scalable, search systems can form a stable understanding of the brand. That consistency compounds over time, reinforcing eligibility rather than constantly resetting it.

Markets also begin to align more naturally. Governance doesn’t eliminate local flexibility, but it requires that deviations be explicit, reviewed, and justified. Instead of fragmentation happening quietly, exceptions become visible and accountable. Global coherence stops being accidental.

In AI-driven discovery, this coherence becomes even more valuable. Eligibility improves not through tactical optimization, but because entities, content, and relationships are structured in ways machines can reliably interpret. Brands stop competing on individual pages and start competing as systems.

Perhaps most noticeably, internal friction drops. When SEO standards are embedded into workflows, teams stop renegotiating fundamentals on every launch. The same conversations don’t have to happen repeatedly, and escalation becomes the exception rather than the norm.

Counterintuitively, this increases speed. When governance defines the rules of the road, execution accelerates because teams can focus on building within known constraints instead of debating them after the fact.

The Final Reality

Enterprise SEO rarely fails because teams aren’t trying hard enough. It fails because governance is missing.

Over the years, I’ve helped design and implement Search and Web Effectiveness Centers of Excellence inside large organizations. The ones that worked best all shared a common trait: They had real authority to guide and enforce compliance. Not heavy-handed control, but clear standards backed by the ability to say no when those standards were ignored.

What’s often misunderstood is that these governing CoEs were also the most collaborative. Because authority was clear, teams didn’t have to renegotiate fundamentals on every project. Everyone understood the shared goals and the mutual benefits of operating as a coordinated system rather than as isolated functions. Governance removed friction instead of creating it.

Those CoEs succeeded by treating search visibility as a team sport. Cross-department initiatives weren’t exceptions; they were the operating norm. Development, content, product, and marketing aligned around enterprise objectives because the value of doing so was explicit and reinforced through process, not persuasion.

By contrast, CoEs built solely to advise rarely achieved that alignment. Without enforcement, standards became optional, exceptions multiplied, and collaboration depended on goodwill rather than structure.

Modern search leaves little room for that model. Organizations that want to maintain control over how they are discovered, understood, and recommended must move beyond documentation and consensus-building alone. Governance is what makes collaboration durable. It turns good intentions into repeatable outcomes.

In an AI-driven search environment, that shift is no longer aspirational. It is the difference between being represented accurately and being replaced quietly by sources that are.


Why Your Search Data Doesn’t Agree (And What To Do About It)

The quarterly business review is upon us. We pull reports from Google Analytics 4, Search Console, Google Ads, and our customer relationship management (CRM) platform, and we find that none of them match. In fact, despite covering the same campaign and focus, they are quite different.

This is work done, data collected, and numbers reported back to us by multiple platforms that are all tracking the same campaign over the same time period, and yet they give us different figures.

This isn’t a new issue, but in my experience, it’s becoming a bigger issue.

Privacy changes, continued attribution modeling challenges, platform silos, and even the different ways platforms let us customize or configure conversion tracking all contribute to the problem. And I’ve made it this far into writing this article before mentioning the AI and LLM traffic that adds another layer of ambiguity.

The issue isn’t simply bad data. It is the fact that search data comes from different systems built for different purposes. Those different purposes result in different tracking and collection methods, leaving us a puzzle to piece together, often with pieces that don’t fit.

With this problem comes a business risk. Conflicting data can slow decision-making or distract from the most important decisions at hand, sending teams down detailed side paths trying to make the data reconcile and questioning whether it can be trusted.

When metrics don’t align, it can also signal deeper issues: an over-reliance on channel-specific key performance indicators (KPIs) and a lack of shared definitions of success among stakeholders, both of which create tension.

When SEO says traffic is up, paid search shows conversions are down, and the CRM pipeline data shows things are flat, we can drift into trying to figure out which one is right and where the gap is. Trying to “fix” the numbers until they match, though, is often the wrong reaction; our approach should be rooted in understanding what each set of data is actually telling us, so it can guide our strategies and decisions.

There are many factors we can fold into our understanding as we work with conflicting data, starting with accepting that this is a problem we can’t eliminate, only navigate.

Understand And Accept That Platforms Measure Different Things

Different platforms measure different things. Yes, they might sound the same, or be named the same thing in a report or as a KPI, but in many cases, they are tracked and measured in a fundamentally different way.

For example:

  • GA4: Measures sessions, events, and modeled behavior, using its own tag and collection method.
  • Google Ads: Measures ad interactions and its own platform-attributed conversions, again with its own tag and collection method.
  • Search Console: Provides impressions, clicks, and other anonymized, aggregated data that is not collected the way GA4 data is.
  • CRM: Typically tracks identified visitors as they move through leads, opportunities, and ultimately revenue.

These differences in metrics, as well as in collection methods, mean the numbers and data points will always differ, and may or may not come close to telling the same story.

Identify Common Causes Of Data Discrepancies

Beyond the basic metrics and KPIs, we want to go deeper and map out how performance looks overall. That means we have to get into attribution models, which can be as simple as first touch or last click, or follow a more complex data-driven formula.

However, there might be obvious tracking gaps where forms, calls, or offline conversions occur that our systems can’t pick up. Add to that privacy changes related to consent mode, cookies that can’t be leveraged, time lags (does anyone else have 50 tabs open for 100 days at a time like me?), and even cross-device search behavior.

Again, many of these issues are not new, but they seem to be amplified, and we can forget about them when looking at data without challenging our assumptions or asking what might be missing or never collected.

My team has recently been in a fight against bots and spam, and we have been testing and navigating site-wide validation tools, which, if not implemented properly, can create gaps in capturing referral headers or strip UTM parameters.

Define Sources Of Truth And Hierarchy

With all the tech, tools, collection methods, and overall sources, we can face information overload and a whole host of conflicting sources whose differences we’re left working to understand and reconcile.

I contend that not all data is equal when it comes to answering performance questions.

Example data that we’re seeking and key sources:

  • Revenue & Pipeline: CRM.
  • Leads: CRM, and/or trusted, validated platform conversion metrics.
  • On-Site Behavior: GA4.
  • Search Visibility: Search Console.
  • Ad Performance: Google Ads, other native ad platforms.

A shift in thinking might be that we have to stop trying to make one platform answer every question. The perfectionist in me struggles with having to say that, but it is the reality of the data source and attribution world we live in.

Align Metrics To Business Outcomes

I know that many marketing leaders, teams, and agencies inherit metrics and historical performance data. It isn’t always easy to reconfigure KPIs, make quick changes, or start tracking and reporting on things differently.

Marketing may be accountable for channels and platforms, while sales (and/or other functions) are looking at things further downstream, like leads, pipeline, and ultimately revenue.

When it comes to search marketing, and where we’re headed with being found in LLMs as well, it is important to center on the connection between search marketing and business outcomes, not channels. This isn’t a new concept, but it warrants focus and investment, as it won’t get less important over the coming months and years. This is a priority area for marketing leadership.

Create Consistent Definitions Across Roles & Teams

Because different roles and teams look at different definitions, collection methods, platforms, and data sources, by default we are likely speaking some of the same language but with very different definitions.

It is hard enough to manage the data; it can be impossible to move forward when it comes to how data is used and interpreted for different purposes.

What is a “conversion”? What counts as a “qualified lead”? How is “revenue” tracked? What is the source of truth for how a lead “source” is defined?

Definitions are often a bigger driver of misalignment than the data itself.

Use Trends When Exact Matches Are Not Realistic

Assuming you have accepted the truth that we can’t make all the data sources perfectly match, we can still find meaning in the data we’re looking at.

That meaning comes from what we see in terms of trends. Are things trending in the same direction across sources and data points? Are there spikes or drops that appear consistently across platforms and sources?

Comparing and contrasting anomalies, finding trends, and understanding them can help us identify where data doesn’t match and where precision doesn’t have to be perfect, as we look for consistency, direction, and the outcome of what happened.
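For illustration, here is a minimal Python sketch of that directional comparison, using invented weekly figures from two sources whose absolute numbers will never match:

    # Minimal sketch of a trend-agreement check (figures are invented).
    weekly = {
        "ga4_sessions":          [1200, 1350, 1500, 1480],
        "search_console_clicks": [2100, 2300, 2650, 2600],
    }

    def direction(series: list[float]) -> list[int]:
        """Week-over-week direction: +1 up, -1 down, 0 flat."""
        return [(b > a) - (b < a) for a, b in zip(series, series[1:])]

    directions = {name: direction(s) for name, s in weekly.items()}
    # Sources agree when every week moves the same way in all of them.
    agree = all(len(set(week)) == 1 for week in zip(*directions.values()))
    print(directions)
    print("directionally consistent:", agree)

The sessions and clicks here differ by nearly 2x in absolute terms, yet they tell the same directional story, which is often all the precision a decision needs.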

Close The Gap Between Marketing And CRM

I still sometimes get looked at a little funny when I ask the CRM administrator or decision-maker who sits outside of marketing about non-digital marketing leads, data, and offline sources.

I advocate that, even if we’re focused solely on digital or search marketing, we push for offline conversion imports, CRM feedback specific to the campaigns and channels/platforms we’re focused on, and corresponding lead quality scoring.

We need to understand the business side of the data connected to our efforts in digital marketing and search. The better integrated the data, the more feedback we get, and the more our sources work together, the more impactful our efforts can be.

Educate Stakeholders On Why Data Won’t Match

In working with other C-suite leaders, executives, or stakeholders, you might find that they are used to a world of accounting, financial metrics, and more consistent data and absolutes. The fact that marketing data sources don’t match could be a big concern for them.

Keeping that in mind, it will serve you well to educate stakeholders and to prioritize their focus on what matters: the things we’ve unpacked already in this article.

It can derail a meeting fast when the numbers don’t match, don’t make sense, or create confusion. When the numbers can’t help connect the dots, they often create new questions, erode confidence, and take the conversation away from the overall business alignment and impact of the marketing efforts.

Develop The Performance Narrative, Not Just Dashboards

We naturally live in a world of dashboards with performance marketing, digital marketing, and search. We have the ability to track so much and have it all at our fingertips, sourced from all of the various places where we track and measure the impact of our work.

While it may be clear to you, looking at a complex dashboard, what the takeaways are, it will be confusing, distracting, and possibly misleading for everyone else.

Reporting shouldn’t just show numbers; it should explain what is happening, why, and what to do next. In your role in marketing leadership and subject matter expertise, your ability to shift from being a reporter of data to an interpreter of broader performance connected to strategy and business outcomes is a noble calling.

In Summary

Data conflicts and disagreements aren’t a flaw or evidence of an error (although you need to audit regularly to make sure you trust the collection and don’t have gaps). They are a reality of digital and search marketing.

When our various roles, teams, and stakeholders understand this, we can shift our focus to mapping performance to business outcomes and leveraging our data for decisions, rather than being distracted by the nuances of numbers that we ultimately can’t match exactly and reconcile.

Our goal isn’t to make the numbers match. It is to be able to make informed and confident decisions to drive business outcomes and success.


Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Google updated their report spam documentation to make it clear that they may use reported spam to initiate manual actions against websites that are found to be spamming. This is a change in policy that makes it easier for site owners and SEOs to report actual spam.

Change In Spam Report Policy

The spam reporting documentation previously said that Google would not use the spam reports to take action against websites.

This wording was mostly removed:

“While Google does not use these reports to take direct action against violations, these reports still play a significant role in helping us understand how to improve our spam detection systems that protect our search results.”

That part has been narrowed to emphasize that the submitted spam reports help improve Google’s spam detection systems:

“These reports help us understand how to improve the spam detection systems that protect our search results.”

More Aggressive Approach To Spam

Google also added new wording to make it clear that it may use the spam reports to take manual actions against websites. Google used to refer to manual actions in terms related to penalization, but the word “penalization” carries connotations of punishment, which isn’t what happens when Google removes a site from the index. It’s not a punishment; it’s simply a removal from the index.

Google’s new wording makes it clear that taking manual action against reported sites is now an option:

“Google may use your report to take manual action against violations. If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

Everything else about the page is the same, including the button for filing a spam report.

Screenshot: Spam Report Button

Clicking the “Report spam” button leads to a form that now can lead to a manual action:

Screenshot: Spam Report Form

Is This Good News For SEOs?

Site owners and SEOs who are sick of seeing spammy sites dominate the search results may want to check out the new page and start reporting actual spammy websites. Nobody really enjoys spam, and now there’s something users can do about it.


The problem with thinking you’re part Neanderthal

You’ve probably heard some version of this idea before: that many of us have an “inner Neanderthal.” That is to say, around 45,000 years ago, when Homo sapiens first arrived in Europe, they met members of a cousin species—the broad-browed, heavier-set Neanderthals—and, well, one thing led to another, which is why some people now carry a small amount of Neanderthal DNA. 

This DNA is arguably the 21st century’s most celebrated discovery in human evolution. It has been connected to all kinds of traits and health conditions, and it helped win the Swedish geneticist Svante Pääbo a Nobel Prize.

But in 2024, a pair of French population geneticists called into question the foundation of the popular and pervasive theory. 

Lounès Chikhi and Rémi Tournebize, then colleagues at the Université de Toulouse, proposed an alternative explanation for the very same genomic patterns. The problem, they said, was that the original evidence for the inner Neanderthal was based on a statistical assumption: that humans, Neanderthals, and their ancestors all mated randomly in huge, continent-size populations. That meant a person in South Africa was just as likely to reproduce with a person in West Africa or East Africa as with someone from their own community. 

Archaeological, genetic, and fossil evidence all shows, though, that Homo ­sapiens evolved in Africa in smaller groups, cut off from one another by deserts, mountains, and cultural divides. People sometimes crossed those barriers, but more often they partnered up within them. 

In the terminology of the field, this dynamic is called population structure. Because of structure, genes do not spread evenly through a population but can concentrate in some places and be totally absent from others. The human gene pool is not so much an Olympic-size swimming pool as a complex network of tidal pools whose connectivity ebbs and flows over time.

This dynamic greatly complicates the math at the heart of evolutionary biology, which long relied on assumptions like randomly mating populations to extract general principles from limited data. If you take structure into account, Chikhi told me recently, then there are other ways to explain the DNA that some living people share with Neanderthals—ways that don’t require any interspecies sex at all.

“I believe most species are spatially organized and structured in different, complex ways,” says Chikhi, who has researched population structure for more than two decades and has also studied lemurs, orangutans, and island birds. “It’s a general failure of our field that we do not compare our results in a clear way with alternative scenarios.” (Pääbo did not respond to multiple requests for comment.)


Chikhi and Tournebize’s argument is about population structure, yes, but at heart, it is actually one about methods—how modern evolutionary science deploys computer models and statistical techniques to make sense of mountains upon mountains of genetic data. 

They’re not the only scientists who are worried. “People think we really understand how genomes evolve and can write sophisticated algorithms for saying what happened,” says William Amos, a University of Cambridge population geneticist who has been critical of the “inner Neanderthal” theory. But, he adds, those models are “based on simple assumptions that are often wrong.” 

And if they’re wrong, what’s at stake is far more than a single evolutionary mystery. 

A captivating story of interspecies passion

Back in 2010, Pääbo’s lab pulled off something of a miracle. The researchers were able to extract DNA from nuclei in the cells of 40,000-year-old Neanderthal bones. DNA breaks down quickly after death, but the group got enough of it from three different individuals to produce a draft sequence of the entire Neanderthal genome, with 4 billion base pairs. 

As part of their study, they performed a statistical test comparing their Neanderthal genome with the genomes of five present-day people from different parts of the world. That’s how they discovered that modern humans of non-African ancestry had a small amount of DNA in common with Neanderthals, a species that diverged from the Homo sapiens line more than 400,000 years ago, that they did not share with either modern humans of African ancestry or our closest living relative, the chimpanzee. 
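The comparison the team performed is widely known in population genetics as a four-population test, or D-statistic (the “ABBA-BABA” test). Here is a minimal Python sketch of the core idea on toy data, with alleles coded 0/1; the samples and sites are invented for illustration.

    def d_statistic(pop1, pop2, pop3, outgroup):
        """D = (ABBA - BABA) / (ABBA + BABA) over informative sites.

        ABBA sites: pop2 matches pop3; BABA sites: pop1 matches pop3.
        D near 0 means no excess sharing; D > 0 means pop2 shares more
        derived alleles with pop3 than pop1 does.
        """
        abba = baba = 0
        for a, b, c, o in zip(pop1, pop2, pop3, outgroup):
            if a != b and c != o:          # count informative sites only
                if b == c:
                    abba += 1
                elif a == c:
                    baba += 1
        return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

    # Toy roles: pop1 = African genome, pop2 = non-African genome,
    # pop3 = Neanderthal, outgroup = chimpanzee (all alleles invented).
    print(d_statistic([0, 0, 1, 0], [1, 0, 0, 1], [1, 1, 0, 1], [0, 0, 0, 0]))
    # -> 1.0

A positive D is usually read as a signature of interbreeding, but, as the rest of this story explains, it is really a signature of excess allele sharing, which other histories can also produce.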

Image: A model of a Neanderthal man exhibited in the “Prehistory Gallery” at London’s Wellcome Historical Medical Museum in the 1930s. (Wellcome Collection)

Pääbo’s team interpreted this as evidence of sexual reproduction between ancient Homo sapiens and the Neanderthals they encountered after they expanded out of Africa. “Neanderthals are not totally extinct,” Pääbo said to the BBC in 2010. “In some of us, they live on a little bit.”

The discovery was monumental on its own—but even more so because it reversed a previous consensus. More than a decade earlier, in 1997, Pääbo had sequenced a much smaller amount of Neanderthal DNA, in that case from a cell structure called a mitochondrion. It was different enough from Homo sapiens mitochondrial DNA for his team to cautiously conclude there had been “little or no interbreeding” between the two species. 

After 2010, though, the idea of hybridization, also called admixture, effectively became canon. Top journals like Science and Nature published study after study on the inner Neanderthal. Some scientists have argued that Homo sapiens would never have adapted to colder habitats in Europe and Asia without an infusion of Neanderthal DNA. Other research teams used Pääbo’s techniques to find genetic traces of interbreeding with an extinct group of hominins in Asia, called the Denisovans, and a mysterious “ghost lineage” in Africa. Biologists used similar tests to find evidence of interbreeding between chimpanzees and bonobos, polar and brown bears, and all kinds of other animals. 

The inner-Neanderthal hypothesis also took a turn for the personal. Various studies linked Neanderthal DNA to a head-spinning range of conditions: alcoholism, asthma, autism, ADHD, depression, diabetes, heart disease, skin cancer, and severe covid-19. Some researchers suggested that Neanderthal DNA had an impact on hair and skin color, while others assigned individuals a “NeanderScore” that was correlated with skull shape and prevalence of schizophrenia markers. Commercial genetic testing companies like 23andMe started offering customers Neanderthal ancestry reports. 

The inner Neanderthal became a story we could tell ourselves about our flaws and genetic destiny: Don’t blame me; blame the prognathic caveman hiding in my cells. Or as Latif Nasser, a host of the popular-science program Radiolab, put it when he was hospitalized with Crohn’s disease, another Neanderthal-associated condition: “I just keep imagining these tiny Neanderthals … just, like, stabbing me and drawing these little droplets of blood out of me.”

“These things become meaningful to people,” Chikhi says. “What we say will be important to how people view themselves.” 

The pitfalls of simplistic solutions 

When population geneticists built the theoretical framework for evolutionary biology in the early 20th century, genes were only abstract units of heredity inferred from experiments with peas and fruit flies. Population genetics developed theory far more quickly than it accumulated data. As a result, many data-driven scientists dismissed the study of evolution as a form of storytelling based on unexamined assumptions and preconceived ideas.

By the ’90s, though, genes were no longer abstractions but sequenced segments of DNA. Genomic sequencing grounded evolutionary studies in the kind of hard data that a chemist or physicist could respect. 

Yet biologists could not simply read evolutionary history from genomes as though they were books. They were trying to determine which of a nearly infinite number of plausible histories was the most likely to have created the patterns they observed in a small sample of genomes. For that, they needed simplified, algorithmic models of evolution. The study of evolution shifted from storytelling to statistics, and from biology to computer science. 

That suited Chikhi, who as a child was drawn to the predictable laws and numerical precision of math and science. He entered the field in the mid-’90s just as the first big studies of human DNA were settling old debates about human origins. DNA showed that Africa harbored far more genetic diversity than the entire rest of the planet. The new evidence supported the idea that modern humans evolved for hundreds of thousands of years in Africa and expanded to the other continents only in the last 100,000 years. For Chikhi, whose parents were Algerian immigrants, this discovery was a powerful challenge to the way some archaeologists and biologists talked about race. DNA could be used to deconstruct rather than encourage the pernicious idea that human races had deep-seated evolutionary differences based on their places of origin. 

At the same time, though, he was wary of the tendency to treat DNA as the final verdict on open questions in evolution. Chikhi had been surprised when, back in 1997, Pääbo and his team used that small amount of mitochondrial DNA to rule out hybridization between Homo sapiens and Neanderthals. He didn’t think that the absence of Neanderthal DNA there necessarily meant it wouldn’t be found elsewhere in the Homo sapiens genome.

Chikhi’s own research in the aughts opened his eyes to the gaps between historical reality and models of evolution. For one, despite the assumption of random mating, none of the animals Chikhi studied actually mated randomly. Orangutans lived in highly fragmented habitats, which restricted their pool of potential mates, and female birds were often extremely picky about their male partners. 

These factors could confound an evolutionary biologist’s traditional statistical tool kit. Scientists were starting to apply a mathematical technique to estimate historical population sizes for a species from the genome of just a single individual. This method showed sharp population declines in the histories of many different species. Chikhi realized, though, that the apparent declines could be an artifact of treating a structured population as one that evolved with random mating; in that case, the technique could indicate a bottleneck even if all the subgroups were actually growing in size. “This is completely counterintuitive,” he says. 

That’s at least partly why, when Pääbo’s 2010 Neanderthal genome came out, Chikhi was impressed with the sheer technical accomplishment but also leery of the findings about hybridization. “It was the type of thing we conclude too quickly based on genetic data,” he says. Pääbo’s work mentioned population structure as a possible alternative explanation—but didn’t follow up.

Just a couple of years later, a pair of independent scientists named Anders Eriksson and Andrea Manica picked up the idea, building a model with simple population structure that explicitly excluded admixture. They simulated human evolution starting from 500,000 years ago and found that their model produced the same genomic patterns Pääbo’s group had interpreted as evidence of hybridization.

“Working with structured models is really out of the comfort zone of a lot of population geneticists,” says Eriksson, now a professor at the University of Tartu in Estonia.

Their research impressed Chikhi. “At the time, I thought people would focus on population structure in the evolution of humans,” he says. Instead, he watched as the inner-Neanderthal hypothesis took on a life of its own. Scientists produced new methods to quantify hybridization but rarely examined whether population structure would yield the same results. To Chikhi, this wasn’t science; it was storytelling, like some of the old narratives about the evolution of racial differences. 

Chikhi and Tournebize decided to take a crack at the problem themselves. “I’ve always been very skeptical about science, and population genetics in particular,” says Tournebize, now a researcher at the French National Research Institute for Sustainable Development. “We make a lot of assumptions, and the models we use are very simplistic.” As detailed in a 2024 paper published in Nature Ecology & Evolution, they built a model of human evolution that replaced randomly mating continent-wide populations with many smaller populations linked by occasional migration. Then they let it run—a million times.

At the end of the simulation, they kept the 20 scenarios that produced genomes most similar to the ones in a sample of actual Homo sapiens and Neanderthals. Many of these scenarios produced long segments of DNA like the ones their peers argued could only have been inherited from Neanderthals. They showed that several statistics, which other scientists had proposed as measurements of Neanderthal DNA, couldn’t actually distinguish between hybridization and population structure. What’s more, they showed that many of the models that supported hybridization failed to accurately predict other known features of human evolution.
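Their procedure of simulating enormous numbers of scenarios and keeping only the closest matches is, in spirit, rejection-based approximate Bayesian computation. A minimal Python sketch, with an invented toy model and summary statistic standing in for the study’s structured-genome simulations:

    # Minimal sketch of rejection-style scenario filtering (toy model).
    import random

    def simulate_summary(params: dict) -> float:
        """Stand-in for a genome simulation that returns one summary statistic."""
        # Invented relationship; the real study simulated structured demes.
        return params["migration"] * 10 + 1 / params["deme_size"] + random.gauss(0, 0.01)

    observed = 0.25                  # summary from the real genomes (invented here)
    n_runs, n_keep = 100_000, 20     # the study ran a million scenarios, kept 20

    runs = []
    for _ in range(n_runs):
        params = {"migration": random.uniform(0, 0.05),
                  "deme_size": random.uniform(100, 10_000)}
        runs.append((abs(simulate_summary(params) - observed), params))

    # Retain only the scenarios whose simulated output looks most like the data.
    best = sorted(runs, key=lambda r: r[0])[:n_keep]
    print(best[0])

The retained scenarios are not proof of what happened; they are simply the histories the data cannot rule out, which is the point Chikhi and Tournebize press.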

“A model will say there was admixture but then predict diversity that is totally incompatible with what we actually know of human diversity,” Chikhi says. “Nobody seems to care.”

So how did Neanderthal DNA wind up in living people if not via interspecies passion? Chikhi and Tournebize think it’s more likely that it was inherited by both Neanderthals and some sapiens groups in Africa from a common ancestor living at least half a million years ago. If the sapiens groups carrying those genetic variants included the people who migrated out of Africa, then the two human species would have already had the DNA in common when they came into contact in Europe and Asia—no sex required. 

“The interpretation of genetic data is not straightforward,” Chikhi says. “We always have to make assumptions. Nobody takes data and magically comes up with a solution.” 

Embracing the uncertainty 

Most of the half-dozen population geneticists I spoke with praised Chikhi and Tournebize’s ingenuity and appreciated the spirit of their critique. “Their paper forces us to think more critically about the model we use for inference and consider alternatives,” says Aaron Ragsdale, a population geneticist at the University of Wisconsin–Madison. His own work likewise suggests that the earliest Homo sapiens populations in Africa were probably structured—and that this is the likely reason for genomic patterns that other research groups had attributed to hybridization with a mysterious “ghost lineage” of hominins in Africa.

Yet most researchers still believe that modern humans and Neanderthals did probably have children with each other tens of thousands of years ago. Several pointed to the fact that fossil DNA of Homo sapiens who died thousands of years ago had longer chunks of apparent Neanderthal DNA than living people, which is exactly what you would expect if they had a more recent Neanderthal ancestor. (To address this possibility, Chikhi and Tournebize included DNA from 10 ancient humans in their study and found that most of them fit the structured model.) And while the Harvard population geneticist David Reich, who helped design the statistical test from Pääbo’s 2010 study, declined an interview, he did say he thought Chikhi and Tournebize’s model was “weak” and “very contrived,” adding that “there are multiple lines of evidence for Neanderthal admixture into modern humans that make the evidence for this overwhelming.” (Two other authors of that study, Richard Green and Nick Patterson, did not respond to requests for comment.) 

Nevertheless, most scientists these days welcome the development of structured, or “spatially explicit,” models that account for the fact that any given member of a population is usually more closely related to individuals living nearby than to those living far away. 


Other scientists also say that random mating isn’t the only assumption in population genetics that merits scrutiny. Models rarely factor in natural selection, which can also create genetic patterns that look like hybridization. Another common assumption is that everyone’s DNA mutates at the same, constant rate. “All the theory says the mutation rate is fixed,” says Amos, the Cambridge population geneticist. But he thinks that rate would have slowed drastically in the group of Homo sapiens that expanded to Europe around 45,000 years ago. This, too, could have created genomic patterns that other scientists interpret as evidence of interbreeding with Neanderthals. 


The point here isn’t that a complex model of evolution with many moving pieces is necessarily better than a simple one. Scientists need to reduce complexity in order to see the underlying processes more clearly. But simple models require assumptions, and scientists need to reevaluate those assumptions in light of what they learn. “As you get more data, you can justify more complex models of the world,” says Mark Thomas, a population geneticist at University College London, who wrote a history of random mating in population genetics that highlighted how the field was starting to see it as “a limiting assumption as opposed to a simplifying one.” 

It can feel discouraging to couch conversations about the past in confusing terms like “population structure” and “mutation rates.” It seems almost antithetical to the spirit of science to talk more about uncertainty at the same time we are developing powerful technologies and enormous data sets for analyzing evolution. These tools often yield novel answers, but they can also limit the questions we ask. The French archaeologist Ludovic Slimak, for example, has complained that the idea of the inner Neanderthal has domesticated our image of Neanderthals and made it difficult to imagine their humanity as distinct from our own. Investigating Neanderthal DNA is sexier to many young researchers than searching for archaeological and fossil evidence of how Neanderthals actually lived. 

Loosening our attachment to certain narratives of evolution can create space for wonder at the sheer complexity of life’s history. Ultimately, that’s what Chikhi and Tournebize hope to do. After all, they don’t believe the question of population structure versus hybridization is either-or. It’s possible, and even likely, that both played a role in human evolution. “Our structured model does not necessarily mean that no admixture ever took place,” Chikhi and Tournebize wrote in their study. “What our results suggest is that, if admixture ever occurred, it is currently hard to identify using existing methods.” 

Future methods might disentangle the different factors, but it’s just as important, Chikhi says, for scientists to be up-front about their assumptions and test alternatives. “There’s still so much uncertainty on so many aspects of the demographic history of Neanderthals and Homo sapiens,” he notes. 

Keep that in mind the next time you read about your inner Neanderthal. The association between this DNA and some diseases may be real, of course—but would journals publish these studies without the additional claim that the DNA is from Neanderthals? Any good storyteller knows that sex sells, even in science. 

Ben Crair is a science and travel writer based in Berlin.