Can quantum computers now solve health care problems? We’ll soon find out.


I’m standing in front of a quantum computer built out of atoms and light at the UK’s National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik’s Cube–size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam. 

The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I’d be unlikely to get very far, though. It’s small but powerful—and so it’s very valuable. Infleqtion, the Colorado-based company that owns it, is hoping the machine’s abilities will win $5 million next week, at an event to be held in Marina del Rey, California. 

Infleqtion is one of six teams that have made it to the final stage of a 30-month-long quantum computing competition called Quantum for Bio (Q4Bio). Run by the nonprofit Wellcome Leap, it aims to show that today’s quantum computers, though messy and error-prone and far from the large-scale machines engineers hope to build, could actually benefit human health. Success would be a significant step forward in proving the worth of quantum computers. But for now, it turns out, that worth seems to be linked to harnessing and improving the performance of conventional (also called classical) computers in tandem, creating a quantum-classical hybrid that can exceed what’s possible on classical machines by themselves.

There are two prize categories. A prize of $2 million will go to any and all teams that can run a significantly useful health care algorithm on computers with 50 or more qubits (a qubit is the basic processing unit in a quantum computer). To win the $5 million grand prize, a team must successfully run a quantum algorithm that solves a significant real-world problem in health care, and the work must use 100 or more qubits. Winners have to meet strict performance criteria, and they must solve a health care problem that can’t be solved with conventional computers—a tough task.

Despite the scale of the challenge, most of the teams think some of this money could be theirs. “I think we’re in with a good shout,” says Jonathan D. Hirst, a computational chemist at the University of Nottingham, UK. “We’re very firmly within the criteria for the $2 million prize,” says Stanford University’s Grant Rotskoff, whose collaboration is investigating the quantum properties of the ATP molecule that powers biological cells. 

The grand prize is perhaps less of a sure thing. “This is really at the very edge of doable,” Rotskoff says. Insiders say the challenge is so difficult, given the state of quantum computing technology, that much of the money could stay in Wellcome Leap’s account. 

With most of the Q4Bio work unpublished and protected by NDAs, and the quantum computing field already rife with claims and counterclaims about performance and achievements, only the judges will be in a position to decide who’s right. 

A hybrid solution

The idea behind quantum computers is that they can use small-scale objects that obey the laws of quantum mechanics, such as atoms and photons of light, to simulate real-world processes too complex to model on our everyday classical machines. 

Researchers have been working for decades to build such systems, which could deliver insights for creating new materials, developing pharmaceuticals, and improving chemical processes such as fertilizer production. But dealing with quantum stuff like atoms is excruciatingly difficult. The biggest, shiniest applications require huge, robust machines capable of withstanding the environmental “noise” that can very easily disrupt delicate quantum systems. We don’t have those yet—and it’s unclear when we will. 

Wellcome Leap wanted to find out if the smaller-scale machines we have today can be made to do something—anything—useful for health care while we wait for the era of powerful, large-scale quantum computers. The group started the competition in 2024, offering $1.5 million in funding to each of the 12 selected teams.

The six Q4Bio finalists have taken a range of approaches. Crucially, they’ve all come up with ingenious ways to overcome quantum computing’s drawbacks. Faced with noisy, limited machines, they have learned how to outsource much of the computational load to classical processors running newly developed algorithms that are, in many cases, better than the previous state of the art. The quantum processors are then required only for the parts of the problem where classical methods don’t scale well enough as the calculation gets bigger.

For example, a team led by Sergii Strelchuk of Oxford University is using a quantum computer to map genetic diversity among humans and pathogens on complex graph-based structures. These will—the researchers hope—expose hidden connections and potential treatment pathways. “You can think about it as a platform for solving difficult problems in computational genomics,” Strelchuk says. 

The corresponding classical tools struggle to scale to even modestly large databases. Strelchuk’s team has built an automated pipeline that determines whether classical solvers will struggle with a particular problem, and how a quantum algorithm might reformulate the data so that it becomes solvable on a classical computer or tractable on a noisy quantum one. “You can do all this before you start spending money on computing,” Strelchuk says.

In collaboration with Cleveland Clinic, Helsinki-based Algorithmiq has used a superconducting quantum computer built by IBM to simulate a cancer drug that is triggered by specific types of light. “The idea is you take the drug, and it’s everywhere in your body, but it’s doing nothing, just sitting there, until there’s light on it of a certain wavelength,” says Guillermo García-Pérez, Algorithmiq’s chief scientific officer. Then it acts as a molecular bullet, attacking the tumor only at the location in the body where that light is directed. 

The drug with which Algorithmiq began its work is already in phase II clinical trials for treating bladder cancers. The quantum-computed simulation, which adapts and improves on classical algorithms, will allow it to be redesigned for treating other conditions. “It has remained a niche treatment precisely because it can’t be simulated classically,” says Sabrina Maniscalco, Algorithmiq’s CEO and cofounder. 

Maniscalco, who is also confident of walking away from the competition with prize money, believes the methods used to create the algorithm will have wide applications: “What we’ve done in the period of the Q4Bio program is something unique that can change how to simulate chemistry for health care and life sciences.”

Infleqtion’s entry, running on its cesium-powered machine, is an effort to improve the identification of cancer signatures in medical data. Together with collaborators at the University of Chicago and MIT, the company’s scientists have developed a quantum algorithm that mines huge data sets such as the Cancer Genome Atlas. 

The aim is to find patterns that allow clinicians to determine factors such as the likely origin of a patient’s metastasized cancer. “It’s very important to know where it came from because that can inform the best treatment,” says Teague Tomesh, a quantum software engineer who is Infleqtion’s Q4Bio project lead.

Unfortunately, those patterns are hidden inside data sets so large that they overwhelm classical solvers. Infleqtion uses the quantum computer to find correlations in the data that can reduce the size of the computation. “Then we hand the reduced problem back to the classical solver,” Tomesh says. “I’m basically trying to use the best of my quantum and my classical resources.”
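The division of labor Tomesh describes can be sketched in miniature. The sketch below is purely illustrative, not Infleqtion's pipeline: the correlation-screening step that would run on the quantum processor is stood in for by a classical agreement score over a toy binary data set, and the function names (`find_correlated_features`, `classical_solve`) are hypothetical.

```python
import itertools

def find_correlated_features(data, threshold=0.9):
    """Stand-in for the quantum subroutine: flag feature pairs whose
    values move together, so one of each redundant pair can be dropped.
    (In a real hybrid pipeline, this screening is the quantum step.)"""
    n_features = len(data[0])
    redundant = set()
    for i, j in itertools.combinations(range(n_features), 2):
        xi = [row[i] for row in data]
        xj = [row[j] for row in data]
        # crude agreement score for binary features
        agree = sum(a == b for a, b in zip(xi, xj)) / len(xi)
        if agree >= threshold:
            redundant.add(j)
    return redundant

def classical_solve(data, keep):
    """Stand-in classical solver, run only on the reduced feature set."""
    return [[row[k] for k in sorted(keep)] for row in data]

# Toy binary data set: feature 1 duplicates feature 0.
data = [[0, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 0]]
redundant = find_correlated_features(data)
keep = set(range(3)) - redundant
reduced = classical_solve(data, keep)
print(len(reduced[0]))  # 2: the duplicate feature was removed
```

The point of the pattern is in the last two lines: the expensive search for structure shrinks the problem, and the classical solver only ever sees the reduced version.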

The Nottingham-based team, meanwhile, is using quantum computing to nail down a drug candidate that can cure myotonic dystrophy, the most common adult-onset form of muscular dystrophy. One member of the team, David Brook, played a role in identifying the gene behind this condition in 1992. Over 30 years later, Brook, Hirst, and the others in their group—which includes QuEra, a Boston company developing a quantum computer based on neutral atoms—have now quantum-computed a way in which drugs can form chemical bonds with the protein that brings on the disease, blocking the mechanism that causes the problem.

Low expectations 

The entrants’ confidence might be high, but Shihan Sajeed’s is much lower. Sajeed, a quantum computing entrepreneur based in Waterloo, Ontario, is program director for Q4Bio. He believes the error-prone quantum machines the researchers must work with are unlikely to deliver on all the grand prize criteria. “It is very difficult to achieve something with a noisy quantum computer that a classical machine can’t do,” he says.

That said, he has been surprised by the progress. “When we started the program, people didn’t know about any use cases where quantum can definitely impact biology,” he says. But the teams have found promising applications, he adds: “We now know the fields where quantum can matter.” 

And the developments in “hybrid quantum-classical” processing that the entrants are using are “transformational,” Sajeed reckons.

Will it be enough to make him part with Wellcome Leap’s money? That’s down to a judging panel, whose members’ identities are a closely guarded secret to ensure that no one tailors their presentation to a particular kind of approach. But we won’t know the outcome for a while; the winner, or winners, will be announced in mid-April. 

If it does turn out that there are no winners, Sajeed has some words of comfort for the competitors. The goal has always been about running a useful algorithm on a machine that exists today, he points out; missing the mark doesn’t mean your algorithm won’t be useful on a future quantum computer. “It just means the machine you need doesn’t exist yet.”

The Download: Quantum computing for health, and why the world doesn’t recycle more nuclear waste

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A $5 million prize awaits proof that quantum computers can solve health care problems 

In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful—and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition next week. 

The prize will go to the quantum computer that can solve real health care problems that conventional “classical” computers are unable to solve. But there can be only one big winner—if there is a winner at all. Read the full story

—Michael Brooks 

Why the world doesn’t recycle more nuclear waste 

There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Recycling could reduce both the waste and the need to mine new material, but the process is costly, complicated, and not fully efficient. 

Find out why it’s such an issue—Casey Crownhart 

This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The FBI has confirmed it’s buying Americans’ location data
Director Kash Patel said it’s led to “valuable intelligence.” (Politico)
+ What AI “remembers” about you is privacy’s next frontier. (MIT Technology Review)

2 The first draft of a federal AI bill has been introduced
It aims to protect “children, creators, conservatives, and communities.” (Engadget)
+ A war is brewing over AI regulation in the US. (MIT Technology Review)

3 Google is pitching itself to the Pentagon as the perfect defense partner
It’s framing its AI as a safe alternative to OpenAI and Anthropic. (NYT $)
+ Here’s where OpenAI’s tech could show up in Iran. (MIT Technology Review)

4 A rogue AI agent at Meta leaked sensitive information to employees
The exposure lasted for hours before it was contained. (The Information $)
+ Don’t let AI agent hype get ahead of reality. (MIT Technology Review $)

5 Sony just removed 135,000 ‘deepfakes’ of its music
Fraudsters were impersonating the label’s artists on streaming services. (BBC)
+ AI works better as a collaborator than a creator. (MIT Technology Review)

6 The EU has backed a ban on nonconsensual sexualized deepfakes
It has reacted to Elon Musk’s Grok chatbot “nudifying” children. (Bloomberg $)

7 Two quantum cryptography pioneers have won the Turing Award
Their encryption method can (theoretically) never be broken. (Quanta)

8 Gamers are disgusted by Nvidia’s new rendering model
They’ve labeled it an “AI slop filter.” (The Verge)

9 The White House has registered the aliens.gov domain
It’s sparked speculation that Trump’s long-awaited UFO disclosure is imminent. (404 Media)
+ Meet the new biologists treating LLMs like ETs. (MIT Technology Review)

10 Silicon Valley has embraced a new buzzword: “taste”
As a USP amid the deluge of AI-driven recommendations. (The New Yorker $)

Quote of the day 

“Big tech and China win. The rest of us lose.” 

—Elizabeth Warren gives her take on the Trump administration allowing Nvidia to sell advanced chips to China. 

One More Thing 


Useful quantum computing is inevitable—and increasingly imminent 

Last year, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away. He also suggested that those computers would need Nvidia GPUs to function. But Huang’s predictions miss the mark—both on the timeline and the role his company’s technology will play.  

Quantum computing is rapidly converging on utility. And that’s good news, because the hope is that quantum computers will be able to perform calculations that no amount of AI or classical computation could ever achieve. Read the full story

—Peter Barrett 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ A self-described “mad scientist” has powered a car with vape batteries. 
+ Someone squeezed an Apple Mac Mini inside a classic LEGO computer. 
+ Watch thousands of satellites orbit Earth in real time with this mesmerizing interactive map.
+ This grilled cheese wall art looks good enough to eat. 

E.U. Law Tightens Marketplace Selling

Sellers on E.U. marketplaces face heightened identity checks, contract disclosures, and suspension risk. The changes stem from the Digital Services Act, enacted by the European Union in 2022 and enforceable as of February 2024, governing how online “intermediaries” handle illegal products.

The growth of such products is dramatic. Across all intermediaries — marketplaces, ecommerce sites, social media platforms, app stores — E.U. authorities in 2023 seized 152 million illegal items worth €3.4 billion ($3.9 billion), versus 66 million and €1.5 billion in 2020.

Seller Verification

The DSA requires marketplaces to verify sellers’ identities before allowing them to offer goods to E.U. consumers. Sellers provide:

  • Business registration documents
  • VAT or tax identification numbers
  • Physical address
  • Email and phone contact details
  • Self-certification confirming that products comply with E.U. law

Marketplaces must:

  • Suspend noncompliant sellers until they submit the required information. In practice, incomplete documentation during onboarding can trigger immediate suspension.
  • Display to consumers certain details of verified sellers. Most platforms now show seller contact information on public profile pages, typically including the business name, address, and an email or phone contact.
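As an illustration of how the suspension rule above might work in practice, here is a minimal onboarding check. It is a sketch only: the field names and the `onboarding_status` helper are hypothetical stand-ins, not part of the DSA text or any marketplace's API.

```python
# Illustrative DSA-style onboarding check. Field names are hypothetical.
REQUIRED_FIELDS = [
    "business_registration",
    "vat_or_tax_id",
    "physical_address",
    "contact_email",
    "contact_phone",
    "compliance_self_certification",
]

def onboarding_status(seller):
    """Return 'active' if every required field is present and nonempty,
    else 'suspended' plus the list of missing items."""
    missing = [f for f in REQUIRED_FIELDS if not seller.get(f)]
    return ("active", []) if not missing else ("suspended", missing)

seller = {
    "business_registration": "HRB 12345",
    "vat_or_tax_id": "DE999999999",
    "physical_address": "Example Str. 1, Berlin",
    "contact_email": "shop@example.com",
    # phone and self-certification not yet supplied
}
status, missing = onboarding_status(seller)
print(status, missing)  # suspended ['contact_phone', 'compliance_self_certification']
```

The sketch mirrors the point in the text: incomplete documentation during onboarding triggers suspension until the gaps are filled.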

Enforcement

The law imposes penalties on marketplaces of up to 6% of global annual sales for non-compliance, giving strong incentives to verify sellers and remove problematic listings promptly.

Trusted flaggers play an important role in detection. Regulators recognize these organizations, which include trade associations, consumer protection groups, and intellectual property enforcement bodies. Marketplaces must process the flaggers’ reports with priority.

Regulatory investigations increasingly target large marketplaces.

  • In July 2025, the European Commission said Temu had breached the DSA by failing to adequately assess the risk of illegal products on its platform.
  • AliExpress faced scrutiny in 2025 and agreed to improve trader verification and product monitoring.
  • Shein received regulatory information requests regarding illegal goods and consumer protection practices.

Such investigations focus on the platforms, but the operational response often affects sellers first.

GPSR Too

Marketplace sellers are also required to comply with the E.U.’s General Product Safety Regulation, effective December 2024.

The law focuses on product safety and supply-chain accountability, including product traceability and the identification of responsible economic operators.

Together, the regulations create a dual compliance framework:

  • DSA governs platform responsibilities and enforcement procedures.
  • GPSR governs product safety and legal accountability for goods sold to consumers.

Since they now bear legal responsibility for illegal listings, marketplaces must carefully verify sellers and respond quickly to violations. Regulatory pressure often drives the marketplaces’ verification policies.

For merchants, the requirements are straightforward: Maintain current registration, tax, and compliance documents, and ensure the contact information displayed on marketplace profiles is accurate.

Selling through E.U. marketplaces increasingly entails the same discipline as a regulated retail environment.

In short, verification and transparency are becoming routine for marketplace selling.

How CRO Drives Ecommerce Longevity

I’m the founder of a U.K.-based conversion optimization agency. In close to a decade in ecommerce, I’ve learned that the keys to longevity for any brand are diversified traffic sources and a robust optimization program to convert those visitors to customers.

Brands that rely too heavily on one source are exposed when costs rise, algorithms change, or consumer behaviour shifts. Top channels are typically:

  • Advertising for acquisition and retargeting,
  • Organic traffic across search engines and, now, genAI platforms,
  • Email marketing.

Here’s how conversion rate optimization impacts each channel.

Advertising

Done well, CRO analyzes what visitors engage with, when and where they buy, how much they spend, and what prevents them from converting. That data informs ad decisions.

Average order values, landing page conversions, abandoned pages: All factor into channel selection, audience targeting, and ad creative and messaging. The result is a lower cost-per-click, improved quality scores, better engagement rates, and ultimately a higher return on ad spend.

For example, we once observed that visitors to a potato chip company’s website frequently searched for ingredient info. That signal told us ingredients mattered to the company’s audience.

We incorporated an ingredient-focused hook into the client’s ad creative and landing page messaging. The result was an uplift in return on ad spend.

In short, CRO can help ensure ad campaigns drive the right traffic.

Organic Traffic

Optimizing for conversions closely parallels the fundamentals of search engine and generative AI optimization. CRO, SEO, and GEO focus on site structure, internal linking, content, clarity, images, and videos.

Clear, well-structured category hierarchies, FAQs, comparison tables, educational content, and product detail pages not only improve conversion rates but also enhance crawlability and visibility across search engines and AI platforms.

For instance, we found that including detailed nutritional information on the product pages of vitamin and supplement brands builds trust, provides answers, and thus improves conversions. And that same structured, detailed content increases discoverability across ChatGPT, Gemini, and other generative search environments.

CRO, SEO, and GEO reinforce each other.

Email Marketing

Email marketing brings shoppers back. It is one of the most powerful tools for understanding consumer behaviour across segments and journeys.

Many tools facilitate A/B tests on subject lines, offers, and send times. Yet true value emerges when segmentation connects to the onsite experience. Sending unique messages to audience segments who then land on tailored pages reflecting their interests increases engagement and revenue.

We once worked with a nail polish brand offering over 400 SKUs. The volume overwhelmed shoppers, leading to depressed conversion rates. We deployed an interactive quiz that asked visitors to enter preferences such as colour, nail shape, and skin tone.

The resulting personalization increased conversions. It also provided valuable first-party data that powered segmented email campaigns, improving click and purchase rates.

New Ecommerce Tools: March 18, 2026

We publish a rundown every week of new services for ecommerce merchants. This installment includes updates on reusable packaging, dropshipping, product feeds, finance, fulfillment platforms, B2C marketing, agentic commerce, and AI-powered contact centers.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

FedEx and Returnity launch reusable box solution for B2B shippers. FedEx has launched a reusable packaging system for B2B shippers, developed with Returnity, a packaging provider specializing in circular logistics. FedEx and Returnity have enabled FedEx’s U.S. B2B customers to switch from corrugated to reusable boxes without incurring additional handling fees. The program will expand to Australia and Europe soon, per FedEx.


Doba launches beta of an AI dropshipping agent, Pilot. Doba, a dropshipping platform, has launched a beta version of Pilot, an AI dropshipping agent. Key capabilities include product discovery, automated setup for Shopify stores, AI-generated product listings, and real-time inventory synchronization across suppliers and store platforms. The capabilities leverage Doba’s established supplier marketplace and dropshipping infrastructure.

Squarespace Balance simplifies business finances for merchants. Squarespace, a website-building platform, has launched Balance, a tool to help merchants manage finances and earn rewards within the Squarespace platform. Integrated with Squarespace Payments, Balance allows merchants to access funds within hours, earn cash rewards on balances, and spend with a business Visa. By keeping earnings, spending, and cash flow management in the same place, Squarespace says, Balance simplifies financial oversight and reduces operational complexity.

Ship.com integrates with Walmart Marketplace. Ship.com, a fulfillment and shipping platform, has announced its integration with Walmart Marketplace. The platform provides a centralized dashboard that syncs Walmart orders, generates batch shipping labels, and helps sellers manage fulfillment alongside other selling channels. According to Ship.com, the integration also gives merchants access to discounted carrier rates.


Spreetail introduces BEx, a marketplace visibility tool. Spreetail, an ecommerce marketplace accelerator, has launched BEx, a tool to provide real-time visibility into performance across marketplaces, advertising, and fulfillment operations. BEx creates a unified view of results from Spreetail’s tools, including Price Pulse, Promise Pro, and True Ads. BEx operates inside Spreetail’s live retail and fulfillment systems. Brands can track sales, inventory, and advertising performance with AI-driven recommendations.

Amazon introduces Shop Direct for products not on Amazon. Amazon’s new Shop Direct is an AI search experience that shows products not currently sold on Amazon, powered by third-party product feeds from providers such as Feedonomics, Salsify, and CedCommerce. Shoppers can click Shop Direct to purchase items directly through the external merchant’s store or, for some products, click Buy for Me to have Amazon purchase on their behalf.

DHL Supply Chain to transform U.K. facility into ecommerce fulfillment hub. DHL Supply Chain has announced an expansion of its logistics facility in Derby, U.K., into a shared-user ecommerce fulfillment center. The site will deploy advanced automation to support high-volume ecommerce operations, seasonal peaks, and wide product ranges. Once live, the site will handle up to 350,000 units per day, per DHL.


Meta introduces AI-powered features on Facebook Marketplace. Meta has introduced AI-powered selling features on Facebook Marketplace. Sellers can upload item images and automatically create a detailed listing, including a suggested price. Sellers can also offer shipping on listings, generate prepaid shipping labels, track orders, and use Meta AI to draft and send an automated reply with the listing description, availability, pickup location, and price.

Santander and Visa deliver payments powered by AI agents. Banco Santander and Visa have announced the successful completion of controlled agentic commerce transactions in Argentina, Brazil, Chile, Mexico, and Uruguay. Visa Intelligent Commerce, which enables consent-driven transactions initiated by AI agents on behalf of consumers, powered the pilot.

BlueConic launches Agent Studio for brands. BlueConic, a customer growth engine for B2C marketers, has announced Agent Studio, an AI decisioning system in which agents evaluate what should happen for each customer in real time and improve performance over time.


eBay and Liberis launch flexible growth financing for U.K. small businesses. eBay U.K. has partnered with finance provider Liberis to introduce Flexible Growth Financing, which allows sellers to access funds when a need or opportunity arises. According to eBay, the program’s pricing and repayments adjust with sales to align cash flow. The feature is available starting April 2026 as part of eBay’s Seller Capital program.

Loxa raises £2.7 million to scale product protection across European retail. Loxa, an insurtech enabling retailers to offer product protection at the point of sale, has closed a £2.7 million ($3.6 million) funding round, backed primarily by angels, including the Lazaroo-Hood Group, facilitated by Angel Investment Network, FundMyPitch, and the Entrepreneur’s Collective. Loxa can build and launch insurance tailored to its retail partners. The new funding will drive E.U. expansion, scale Loxa’s retail network, and broaden product categories.

Salesforce introduces Agentforce Contact Center. Salesforce has introduced Agentforce Contact Center, a self-service portal unifying voice, digital channels, customer data, and AI agents natively in a single system. AI agents pull from the same data source to understand the customer’s history and incorporate insights from voice conversations, chats, texts, purchases, and marketing activity. Human agents can access the full transcript and customer history instantly.


AI Drives Smarter Ecommerce Pricing

AI can shift ecommerce pricing from one-size-fits-all to systems that adapt to shopper behavior and context, preserving margins.

Promotions, coupon codes, and bundles have long meant that two shoppers often pay different amounts for the same item. But now those variations can be intentional, measured, and optimized.

For years enterprise retailers have benefited from dynamic pricing thanks to investments in technology and expertise. Many of those tools, including do-it-yourself versions, are now available to smaller merchants.

The result enables real-time pricing decisions based on signals such as shopper intent, timing, and history — less about short-term conversions and more about protecting margin across many transactions.

Personalized tools such as DynamicPricing.ai are available on the Shopify App Store for smaller companies.

Offer and Price

The rise of AI agents enables ecommerce pricing to operate as an offer-and-price system that evaluates each session and decides whether to intervene.

The system could answer three questions in real time:

  • Should the AI intervene at all?
  • What type of intervention is appropriate?
  • And what level of personalization is acceptable?

Thus, the system moves from static pricing to dynamic decision-making: converting a specific shopper while protecting margin. In effect, it is custom pricing in a less objectionable form.
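A minimal sketch of such a three-question decision loop might look like the following. Everything here is hypothetical: the `decide_offer` function, its thresholds, and the session fields are illustrative stand-ins, not any vendor's actual system.

```python
def decide_offer(session):
    """Hypothetical real-time offer decision for one shopper session.
    Answers the three questions: intervene? what kind? how personal?"""
    # 1. Should the AI intervene at all?
    # A returning shopper with items in the cart is likely to buy anyway,
    # so preserve the full-price transaction.
    if session["repeat_visits"] >= 3 and session["cart_value"] > 0:
        return None

    # 2. What type of intervention is appropriate?
    if session["cart_value"] > 0:
        offer = {"type": "discount", "percent": 10}   # nudge a stalled cart
    else:
        offer = {"type": "free_shipping"}             # lighter-touch perk

    # 3. What level of personalization is acceptable?
    # Segment-level signals only: no individual price changes, which
    # shoppers tend to read as "surveillance pricing".
    offer["scope"] = "segment"
    return offer

print(decide_offer({"repeat_visits": 4, "cart_value": 80}))  # None
print(decide_offer({"repeat_visits": 1, "cart_value": 80}))
```

Note that the sketch never alters the list price; it only decides whether to show an incentive, which is the distinction the discussion of shopper perception below turns on.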

Instead of blanket or rule-based promotions, merchants can offer incentives only to shoppers who would likely respond, while preserving full-price transactions where possible.

Over time, the AI would likely reduce unnecessary promotions and improve the margin per order.

Perception

Displaying different prices for the same item tends to create friction or even anger among shoppers.

Hence opponents of dynamic, personalized offers often call the practice “surveillance pricing” and argue that monitoring behavioral signals, such as repeat visits, browsing depth, and referral sources, is unseemly.

Many retailers disagree, but the concern is real. Shoppers do not evaluate prices purely on economic terms. They judge fairness, consistency, and intent.

Bernard Meyer, AI operations manager at Omnisend, the marketing platform, shares that concern. He told me, “Consumers might have made peace with AI helping them shop, but there’s a very clear line between assistance and manipulation. The practice of using AI to adjust prices…has drawn understandable criticism.

“Our data shows consumers will share personal information if it helps them make better decisions, but not if it’s used against them. After years of inflation and constant price changes, people have a much clearer sense of what’s reasonable, and they’re far less tolerant of anything that looks like they’re being taken advantage of,” Meyer added.

Discounts and perks, on the other hand, are easier to explain and usually more acceptable. The outcome is the same: margins are preserved. But a system that optimizes when to show discounts, rather than one that lowers list prices, feels fairer to shoppers.

Instead of offering 10% off to everyone, merchants can reserve incentives for targeted shoppers. That alone can protect margins.
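The margin arithmetic behind that claim is easy to sketch. The conversion rates, segment sizes, and prices below are assumed numbers chosen only to show the mechanism, not data from any retailer.

```python
# Toy comparison of blanket vs. targeted discounting (all numbers assumed).
shoppers = 1000
price, cost = 50.0, 30.0
base_conversion = 0.10   # conversion rate with no discount
lift = 0.04              # extra conversion a 10% discount buys
price_sensitive = 0.30   # share of shoppers who respond to discounts

# Blanket: 10% off for everyone. Only price-sensitive shoppers convert more,
# but every converting shopper takes the discount.
blanket_orders = shoppers * base_conversion + shoppers * price_sensitive * lift
blanket_margin = blanket_orders * (price * 0.9 - cost)

# Targeted: only the price-sensitive segment sees the discount;
# everyone else pays full price.
full_orders = shoppers * (1 - price_sensitive) * base_conversion
disc_orders = shoppers * price_sensitive * (base_conversion + lift)
targeted_margin = full_orders * (price - cost) + disc_orders * (price * 0.9 - cost)
```

Under these assumptions the blanket promotion earns about $1,680 in margin while the targeted one earns about $2,030 — the gap is exactly the full-price transactions the blanket discount gave away for free.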

You’re Not Scaling Content. You’re Scaling Disappointment

Every few years, the SEO industry discovers a new way to mass-produce content and convinces itself that this time it’ll work. That the sheer volume of pages will overwhelm Google’s ability to assess quality. That if you just publish enough, the numbers will carry you.

It never works. It has never worked. And the people selling you these approaches know it has never worked. They just need it to work long enough to collect the invoice.

The Pattern Has A Name. It’s Called “Not Learning”

Let’s walk through the timeline, because apparently, we need to do this again.

2008-2011: Content Spinning

The pitch was simple: Take one article, run it through software that swaps synonyms, and suddenly you have 50 “unique” articles. The word “unique” was doing a lot of heavy lifting in that sentence. These articles read like someone had fed a dictionary through a blender.

But even if the output had been polished, the premise was broken. Here’s what the content spinners never grasped, and what their successors still don’t: Uniqueness is trivially easy to produce. A monkey dropping its hands on a keyboard produces unique content. The string of characters has never existed before – congratulations, it’s original.

The hard part was never uniqueness. It was producing uniqueness that’s worth something. Unique and valuable are not synonyms, and the gap between them is where every scaling strategy falls apart.
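A minimal caricature of a 2008-era spinner makes the point in a few lines; the synonym table is invented for illustration.

```python
import random

# A toy "spinner": swap words for synonyms and call the result unique.
# The synonym table is invented for illustration.
SYNONYMS = {"good": ["great", "fine"], "fast": ["quick", "speedy"]}

def spin(text: str, seed: int) -> str:
    random.seed(seed)
    return " ".join(random.choice(SYNONYMS.get(w, [w])) for w in text.split())

variants = {spin("a good fast laptop", s) for s in range(50)}
# Every variant is a never-before-seen string, so each is technically
# "unique" -- yet none says anything the original didn't. Uniqueness
# without value, which is exactly the broken premise.
```

Fifty seeds, at most four distinct sentences, zero new information: the gap between unique and valuable in miniature.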

Google tolerated it for a while. Its systems simply hadn’t caught up yet. Then Panda arrived in February 2011, hit nearly 12% of all search queries, and content farms watched their traffic evaporate overnight … I was “fortunate” enough to watch it happen in real time. Demand Media, the poster child of the content-farm model, reported a $6.4 million loss the following year.

The lesson was supposed to be clear: You cannot industrialize quality. Volume without substance is a liability with a longer tail than most budgets can absorb.

2015-2022: Programmatic SEO

The pitch evolved. Instead of spinning existing articles, you’d build templates and fill them with structured data. “Best [X] in [City]” pages, generated by the thousand, each one a thin wrapper around a database query. Some of these actually provided value – if the underlying data was good and the template served genuine user needs. Most didn’t. Most were just doorway pages wearing a better outfit. Google spent years refining its ability to detect and demote templated content that existed primarily for indexing purposes rather than for humans.
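The programmatic pattern is just a cross-product over a template. The cities and categories below are placeholder data; the structure is what matters.

```python
from itertools import product

# One template, a cross-product of keywords, near-identical pages.
# Cities and categories are placeholder data for illustration.
template = "Best {thing} in {city}"
things = ["Coffee Shops", "Gyms", "SEO Agencies"]
cities = ["Austin", "Denver", "Portland"]

pages = [template.format(thing=t, city=c) for t, c in product(things, cities)]
# 3 x 3 keywords -> 9 pages. Scale the lists and you get thousands,
# each a thin wrapper around the same database query.
```

Whether those pages are doorway spam or genuinely useful depends entirely on what fills the template, which is the article's point: the mechanism scales effortlessly, the substance does not.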

The lesson was supposed to be reinforced: Scale works only when there’s substance underneath. Without it, you’re just building a bigger target.

2023-Present: AI-Generated Content At Scale

And here we are again. Same pitch, shinier tools. “We can produce 500 articles a month!” Wonderful. Can you produce 500 articles a month that are worth reading? That contain something a reader couldn’t get from the results already in the index? That demonstrate any form of expertise, experience, or original thought?

No? Then you’re not scaling content. You’re scaling crawl-budget waste.

And the pattern recognition failures are stunning. (This wasn’t subtle. Several of us noticed. No, we weren’t impressed.)

I recently came across an AI visibility tool – one that sells itself on helping you get discovered by AI systems – that had generated hundreds of pages following the pattern “best SEO agencies in {city}.” Déjà vu. Anyone who lived through programmatic SEO recognizes this immediately – it’s the 2017 playbook, except now the copy is written by an LLM. The template got a grammar upgrade and an “it’s AEO” stamp. The strategy didn’t.

Lily Ray flagged a similar case: a resume site with 500+ programmatic pages for “resume examples for {career}.” Every title following the exact same formula. Near-identical page templates. Misused AggregateRating schema. Obvious AI content throughout. Her summary was three words: “Worked until it didn’t.”

Image Credit: Pedro Dias

That phrase should be tattooed on every content scaling pitch deck. Worked until it didn’t. It always does. And then it doesn’t.

The irony of an AI optimization tool using mass-generated doorway pages to build its own visibility would be funny if it weren’t so perfectly on-brand for this industry.

The Qualitative Wall Doesn’t Move

Here’s what every generation of content scalers fails to understand: Google doesn’t evaluate content in isolation. It evaluates content relative to everything else in the index on the same topic.

Publishing 500 AI-generated articles about mortgage rates doesn’t make you an authority on mortgage rates. It makes you the 500th source saying the same thing in slightly different words. And Google already has 499 of those. It doesn’t need yours.

The qualitative wall is this: There is a minimum threshold of genuine value – original insight, lived experience, specific expertise, something the reader cannot get elsewhere – below which no amount of volume helps you. You can publish a million pages below that threshold. You’ll rank for nothing that matters.

And it gets worse. For the people scaling AI content specifically to gain visibility in AI-powered answer systems, the volume strategy doesn’t just fail; it actively backfires. A 2025 paper on retrieval evaluation for LLM-era systems introduces a metric that measures both helpful and distracting passages in retrieval. The finding that matters here: Low-utility content doesn’t sit quietly in the index waiting to be ignored. It can pull retrieval models off-track, degrading the quality of answers those systems produce.

Your 500 thin articles aren’t just invisible. They’re noise. And if your site also has genuinely useful pages buried in that noise, congratulations – you’ve built your own interference pattern. The volume you thought would help discovery is actively drowning the pages that might have earned it.

This isn’t a new insight. It’s the same insight that content spinners ignored in 2010, that programmatic SEO factories ignored in 2018, and that AI content mills are ignoring right now. The tools got better at producing text. The text still has nothing to say.

Google Told You. Repeatedly

Google’s spam policies define scaled content abuse as generating pages “for the primary purpose of search rankings and not helping users.” They explicitly list “using generative AI tools or other similar tools to generate many pages without adding value for users” as an example. This is not subtext. It’s text.

In June 2025, Google began issuing manual actions specifically for scaled content abuse, targeting sites that had been mass-publishing AI-generated content. Sites across the UK, US, and EU received Search Console notifications citing “aggressive spam techniques, such as large-scale content abuse.” Complete visibility drops. Pages didn’t slide down the rankings; they vanished.

The August 2025 spam update continued the enforcement. Subsequent core updates have kept tightening the screws. Each time, the same profile gets hit: high volume, low substance, no editorial oversight.

And each time, the affected site owners acted surprised. As if Google hadn’t been telling them this for 15 years.

‘But Our Content Is Ranking Well’

This is my favorite delusion. I’ve seen it at every stage of this cycle. “Our AI content is ranking, so it must be fine.” That ranking is often precisely what draws algorithmic updates and manual actions to your site. If your low-value content is ranking, the system hasn’t gotten to you yet. That’s all it means.

Google aggregates signals at the site level, not just the page level. You can have individual pages performing while the overall quality signal of your site degrades. And when the enforcement catches up (algorithmically or manually), it doesn’t pick off pages one by one. It hits the lot.

This is the content spinner’s fallacy, recycled: “It’s working right now, so it must be a strategy.” Demand Media’s content was ranking too. Right up until it wasn’t.

Lily captured this perfectly: “The case study: scaling AI content is working! The reality:” – followed by the traffic cliff that inevitably arrives. Every scaling success story is a snapshot taken before the correction. Nobody publishes the sequel.

Image Credit: Pedro Dias

The Economics Don’t Even Make Sense

Set aside the risk for a moment. Let’s talk about what you’re actually producing.

Five hundred AI-generated articles a month. Each one needs to be reviewed for accuracy – because LLMs hallucinate, and publishing incorrect information is a liability that extends well beyond SEO. Each one needs to be checked for originality – because if it reads like everything else in the index, it provides no added value; no competitive advantage. Each one needs editorial oversight to ensure it actually serves the audience you claim to serve.

If you’re doing all of that, the cost just moved – and possibly increased – while you convinced yourself you were being efficient. The “efficiency” of AI content generation evaporates the moment you apply the quality standards the content actually needs to meet.

And if you’re not doing any of that? You’re publishing unreviewed, unoriginal, potentially inaccurate content at scale under your brand name. I genuinely do not understand how anyone signs off on that.

Same Mistake, Better Tools

Content spinning. Programmatic SEO. AI-generated content at scale. Three different tools, one identical mistake: treating content as a manufacturing problem.

Manufacturing produces identical outputs at scale – that’s the point. Content derives its value from the opposite: from being specific, from being informed by experience, from saying something the rest of the index doesn’t. Every attempt to industrialize it crashes into that contradiction.

You can’t automate specificity. You can’t template experience. You can’t generate original thought by running a prompt through an LLM and hoping something useful comes out. And these constraints won’t be solved by the next model release. They’re baked into what makes content worth reading in the first place.

The people who keep chasing scale are optimizing for the wrong variable. They see “more content” as an input that produces “more traffic” as an output. But the function is not linear. It never was. It’s gated by quality, and no amount of volume bypasses the gate.

The Only Question That Matters

Before you publish anything (AI-assisted or otherwise), ask one question: What does this page offer that the reader cannot already get?

If the answer is “nothing, but we’ll have more pages indexed,” you’re not building a content strategy. You’re building a liability. And you’re doing it with the confidence of someone who has apparently never heard of Panda, never looked at what happened to programmatic SEO sites in 2022, and never read Google’s own spam policies.

You can convince yourself for as long as you want. But you’ll only fool everyone else for a while.

The wall is still there. It’s always been there. The tools keep changing. The wall doesn’t.

This post was originally published on The Inference.


Featured Image: Roman Samborskyi/Shutterstock

AI Search Barely Cites Syndicated News Or Press Releases via @sejournal, @MattGSouthern

A BuzzStream report analyzing 4 million AI citations found that press releases distributed through syndication channels barely appear in AI-generated answers.

Background

Press release distribution services have been marketing AI visibility as a selling point.

For example, ACCESS Newswire offers an “AI Visibility Checklist” for press releases. eReleases published a guide positioning press releases as tools for AI search visibility. Business Wire has written about optimizing releases for answer engine discovery.

BuzzStream’s data offers a different perspective.

What They Found

The report’s authors used XOFU, a citation monitoring tool from Citation Labs, to track where AI platforms pull their sources across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini. BuzzStream ran 3,600 prompts across 10 industries and collected data for one week.

Overall, news publications accounted for 14% of all citations in the dataset. But within that news category, the numbers drop off quickly for syndicated and distributed content.

Press releases published through syndication channels like Yahoo and MSN accounted for 0.32% of news citations and 0.04% of the entire dataset.

Direct citations from newswire services like PRNewswire made up 0.21% of the full dataset. They appeared most often in exploratory and informational prompts, but even there they only reached 0.37%.

Syndicated news content overall, including articles republished through MSN and Yahoo networks, accounted for 6.2% of news citations and 0.9% of the total dataset.
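The report’s figures are internally consistent, which is a quick sanity check worth running: scaling the news-category shares by the 14% overall news share reproduces the whole-dataset shares (up to rounding) quoted above.

```python
# Cross-checking the report's own percentages: a share of the news
# category, scaled by news' 14% share of all citations, should match
# the reported share of the full dataset.
news_share = 0.14

pr_of_news = 0.0032            # press releases: 0.32% of news citations
pr_of_total = pr_of_news * news_share      # ~0.045%, reported as 0.04%

synd_of_news = 0.062           # syndicated news: 6.2% of news citations
synd_of_total = synd_of_news * news_share  # ~0.87%, reported as 0.9%
```

The small discrepancies (0.045% vs. 0.04%, 0.87% vs. 0.9%) are consistent with the report rounding each figure independently.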

To identify syndicated content, BuzzStream cross-referenced author names against publications using its ListIQ tool and manually confirmed cases where the author name didn’t match the publication. The company acknowledged this method has limits, since some sites repost press releases without labeling them as such.

What The Data Shows About What Works

The report’s more interesting finding is what does get cited.

Original editorial content made up 81% of news citations in the dataset. Affiliate and review content accounted for the rest. The split held across prompt types, though affiliate content had its strongest showing in evaluative prompts at 39%.

The report broke prompts into three categories. Evaluative prompts like “Is Sony better than Bose?” generated the most news citations at 18% of all citations. Brand awareness prompts like “What is Chase known for?” generated the fewest at 7%. Informational prompts fell in between.

Editorial content that appeared most often in evaluative citations included head-to-head comparisons and cost analysis from outlets like Reuters, CNBC, and CNET.

The ChatGPT Newsroom Exception

One platform-level finding stood out. Internal press releases and newsroom content on company-owned domains accounted for 18% of ChatGPT’s citations in the dataset.

On Google’s AI platforms, that number dropped to around 3%.

BuzzStream cited examples including Iberdrola’s corporate press room and Target’s corporate subdomain. When prompted about Iberdrola’s role in renewables, ChatGPT cited a press release from Iberdrola’s own website. When asked about Target’s products, ChatGPT cited a 2015 press release from Target’s corporate domain.

BuzzStream said most earlier trends looked fairly uniform across platforms, with newsroom content on ChatGPT standing out as a clearer exception.

Why This Matters

The data challenges a premise that press release distribution services have been promoting. Multiple distribution platforms now market press releases as a path to AI visibility.

BuzzStream’s data suggests the distributed version of a press release, the one that lands on Yahoo Finance or MSN through a wire service, rarely becomes the version AI platforms cite. Original editorial coverage and owned newsroom content performed better by wide margins.

This connects to patterns we’ve been tracking. A BuzzStream report we covered in January found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots. Hostinger’s analysis of 66 billion bot requests showed AI training crawlers losing access while search bots expanded their reach.

The citation data suggests that even when syndicated content is accessible to AI crawlers, it rarely gets cited.

Google’s VP of Product for Search, Robby Stein, said in an interview we covered that being mentioned by other sites could help with AI recommendations, comparing AI’s behavior to how a human might research a question. That comparison favors earned editorial coverage over distributed press releases.

Adam Riemer made a related point in his Ask an SEO column, drawing a line between digital PR that builds brand coverage in publications and link building that focuses on placement metrics. BuzzStream’s data suggests that line extends to AI citations too.

For transparency, BuzzStream sells outreach and digital PR tools, so the finding that earned media outperforms distribution aligns with its business model. The company partnered with Citation Labs and used Citation Labs’ XOFU monitoring tool for the data collection.

Looking Ahead

This is part one of a multi-part analysis from BuzzStream. The single-week data window and large-brand focus are limits worth noting. Smaller brands with less existing editorial coverage may see different results.

Businesses investing in digital PR may want to look more closely at how different distribution channels perform in their category. The data suggests the channel you use can affect where your brand gets cited.


Featured Image: Cagkan Sayin/Shutterstock

How To Track AI Visibility & Prompts The Right Way via @sejournal, @lorenbaker

AI search has changed the rules, but has your tracking? 

How do you measure visibility without rankings?

Which prompts actually reflect real buyer intent?

And how do you avoid AI tracking data that looks useful, but isn’t?

Learn how to set up AI prompt tracking you can trust for smarter decisions.

ChatGPT, Google AI Overviews & Perplexity Are Reshaping Discoverability

In this on-demand webinar, Nick Gallagher, Sr. SEO Strategy Director at Conductor, breaks down how AI prompt tracking really works, why topics matter more than individual prompts, and how to avoid common mistakes that skew insights.

You’ll leave with a clear framework for measuring AI visibility in a way that reflects real user behavior and supports smarter search and content strategies.

You’ll Learn:

  • How AI prompt tracking works, and why setup matters more than volume
  • Best practices for choosing topics, prompts, and answer engines
  • Common mistakes that lead to inaccurate or misleading AI visibility data

Watch on-demand and learn how to track AI visibility and prompts in a way that reflects real user behavior and supports smarter search and content decisions.

View the slides below or check out the full webinar for all the details.

Vibe Coding Plugins? Validate With Official WordPress Plugin Checker via @sejournal, @martinibuster

Vibe coding WordPress plugins with AI can raise concerns about whether a plugin follows best practices for compatibility and security. WordPress.org’s Plugin Check Plugin offers a solution for those who wish to check whether a plugin conforms to the official standards. The latest version can now connect to AI.

The plugin is developed by WordPress.org, and it’s meant as a tool for plugin authors to test their own plugins using the same kinds of tests applied to new submissions in the official WordPress plugin repository, which can also help speed up the process of getting accepted into the repository.

According to the official plugin description:

“Plugin Check is a tool for testing whether your plugin meets the required standards for the WordPress.org plugin directory. With this plugin you will be able to run most of the checks used for new submissions, and check if your plugin meets the requirements.

Additionally, the tool flags violations or concerns around plugin development best practices, from basic requirements like correct usage of internationalization functions to accessibility, performance, and security best practices.”

The Plugin Check Plugin also has a Plug Namer feature that checks whether a plugin’s name is too similar to another plugin’s, whether it may violate a trademark, whether it complies with WordPress naming guidelines, and whether the name is too generic or broad.

The latest version of the plugin is version 1.9.0 and it adds the following new features:

  • Support for the new WordPress 7.0 AI connectors, so the plugin can work with the WordPress AI infrastructure.
  • Updated block compatibility check for WordPress 7.0.
  • Checks for external URLs in top-level admin menus to avoid admin issues.
  • Additional tweaks, enhancements, and improvements.

User reviews share positive experiences:

“This plugin helped me identify areas of my plugin that I thought I had taken care of. When developing my first plugin. I learned a lot through the feedback given and was able to re-run and eventually remove of all errors.”

“Useful tool for catching issues early. If you’re serious about plugin development, this is a must-have.”

Download the official WordPress Plugin Checker Tool here:

Plugin Check (PCP) By WordPress.org