Web Governance As A Growth Lever: Building A Center Of Excellence That Actually Works via @sejournal, @billhunt

In every digital transformation I’ve consulted on, from global banks to manufacturing giants, the failure point isn’t usually the strategy. It’s the governance.

Strategy defines where to go. Operations define how to get there. Governance is what keeps everyone moving in the same direction, at the same speed, without crashing into each other.

In my earlier Search Engine Journal articles, we built the foundation for this discussion.

This article closes the loop. Because until governance and accountability take hold, every strategy, no matter how visionary, remains a PowerPoint slide.

Governance As Guardrails For Growth

Governance has a branding problem. Too often, it’s mistaken for red tape – a set of rules designed to slow things down. In reality, good governance is what lets organizations move faster without flying apart. It’s a system of guardrails, not gates – a shared framework that protects creativity by keeping it aligned with purpose.

When done right, governance is the difference between freedom and anarchy. It ensures that every team – design, dev, content, and analytics – can innovate confidently within an agreed-upon structure of trust, compliance, and clarity.

Governance doesn’t limit autonomy; it enables responsible autonomy.

The most effective Centers of Excellence build their governance around three principles:

  1. Guardrails, not barriers – Standards prevent rework and confusion, not creativity.
  2. Enablement through clarity – When expectations are clear, teams spend less time negotiating and more time executing.
  3. Evolution, not enforcement – Governance must adapt with technology, markets, and now, AI systems.

This turns governance into a living framework – one that scales excellence, accelerates innovation, and protects enterprise value simultaneously.

The Cast Of Characters: Who Belongs In A Modern Center of Excellence

A true Center of Excellence (COE) isn’t a department—it’s an alignment mechanism.

Its power lies in uniting diverse roles around shared definitions of value, performance, and accountability.

  • Business Leadership (CEO, CFO, CMO) – Primary focus: direction, metrics, incentives. Key question: “Are our digital assets creating measurable enterprise value?”
  • Digital Operations (CTO, DevOps, Product) – Primary focus: infrastructure, scalability, uptime. Key question: “Can we deploy and measure at scale without friction?”
  • Marketing & Experience (SEO, UX, Content, CX) – Primary focus: discoverability, usability, trust. Key question: “Is our content findable, credible, and consistent across markets?”
  • Data & AI Enablement (Analytics, Schema, AI Strategy) – Primary focus: structuring and measuring the data layer. Key question: “Can machines – and humans – understand our brand at every level?”

An effective COE sits at the crossroads of these groups. It translates corporate objectives into digital guardrails, workflows, and shared KPIs.

And it does so through clarity of ownership – who decides, who executes, and who is accountable for outcomes.

Without that alignment, teams drift into the ownership gap I outlined in “Who Owns Web Performance?,” each optimizing their own slice while the organization loses system-level performance.

Anatomy Of A Working Center Of Excellence

A COE that works isn’t a poster on the wall but an ecosystem built around five components:

  1. Vision & Mandate – A clearly articulated purpose with executive sponsorship. Governance without mandate becomes optional. Tie the COE to measurable outcomes – revenue efficiency, cost avoidance, and risk reduction.
  2. Standards & Playbooks – Codified frameworks for content hierarchy, tagging, schema, and AI readiness. Standards remove friction when they’re written for usability, not perfection.
  3. Measurement & Accountability – Shared dashboards connecting digital KPIs to business KPIs. The CEO shouldn’t ask, “How’s SEO?” but “What’s the digital contribution to EBITDA?”
  4. Enablement & Knowledge Sharing – Training, automation, and playbooks that make compliance the natural outcome of good work, not an afterthought.
  5. Feedback & Evolution – Regular audits and retrospectives to ensure standards evolve as the technology – and the company – does.

A COE that only publishes rules is a library.

A COE that enforces and evolves them is a growth engine.

Effective governance transforms from control to enablement when standards become self-reinforcing. Instead of asking, “Did we follow the rules?” teams ask, “Do the rules help us move faster and smarter?” That’s the culture shift a Center of Excellence exists to create.

Corporate Judo: Turning Structure Into Strength

In “Epiphany 2 — Leverage Corporate Judo,” I wrote that the secret to lasting change isn’t fighting the system – it’s using its momentum. You don’t overpower corporate structure; you redirect it.

“The art of corporate judo is learning to use the organization’s own weight to create forward motion.”

Web governance works the same way. Rather than viewing process and policy as obstacles, a skilled COE converts them into leverage – turning approvals, reporting lines, and compliance requirements into tools for acceleration. A well-designed COE doesn’t rebel against structure; it channels it toward growth.

In this sense, governance becomes corporate aikido by absorbing friction and transforming it into alignment.

Cross-Channel Alignment: The Prerequisite For Performance

Before you can optimize, you must align. The most advanced analytics stack or SEO roadmap will fail if the organization itself is out of sync.

A functioning COE creates connective tissue between:

  • Search & Content – shared definitions of topics, authority, and metrics.
  • UX & Engineering – balance between design freedom and structural consistency.
  • Marketing & Analytics – unified measurement across paid, earned, and owned.
  • Corporate & Regional Teams – global templates with local flexibility.

In multinational environments, this alignment prevents the “geo-targeting misalignment” I’ve written about – where the wrong market page ranks, or translation replaces true localization. The COE becomes the referee between global efficiency and local relevance.

Why This Matters Even More In The AI Era

AI has raised the stakes for governance.
In the old world, poor governance hurt rankings.
In the new world, it hurts eligibility.

Search-grounded AI systems like Google’s AI Overviews and Bing Copilot rely on structured, accessible, and authoritative data to decide what’s trustworthy enough to include. If your schema, content, or infrastructure is inconsistent, the machine can’t reconcile your brand—and when it can’t reconcile, it omits you.

If SEO was about visibility, AI is about eligibility – and eligibility depends on governance.

As I argued in “Stop Retrofitting. Start Commissioning: The New Role of SEO in the Age of AI,” the role of SEO – and, by extension, of digital governance – has shifted from a reactive fix to a proactive design function. SEO is no longer the cleanup crew that patches gaps after launch. It must become the Commissioning Authority, the group that ensures what’s being built meets both user and machine standards before it ever goes live.

Governance, in this new context, isn’t back-office oversight. It’s front-office enablement.

It ensures that every digital asset – content, structure, schema, and technical architecture – is commissioned for machine interpretation, not just human readability.

Because in today’s AI-first ecosystem, the question isn’t simply, “Can users find us?” It’s “Can machines trust, understand, and use us?”

“The era of being brought in after launch is over.
Governance – and SEO – must move upstream to where strategy and systems are conceived.”

Good governance isn’t a final check; it’s a design ethos. It transforms your organization from retrofitting performance to commissioning excellence.

And that shift from reactivity to readiness is what separates the brands that survive AI disruption from the ones that silently vanish from the conversation.

Governance As Digital Operating Leverage

Governance may not sound glamorous, but it’s the lever that compounds returns across every other investment.

  • Revenue Growth – Faster launches, better discoverability, consistent brand experience.
  • Cost Efficiency – Reduced rework, redundant tools, and duplicated content.
  • Capital Efficiency – Shared systems and reusable frameworks across markets.
  • Risk & Resilience – Compliance, uptime, and data consistency.
  • Innovation & Optionality – Guardrails that enable safe experimentation with AI and automation.

In financial terms, governance converts digital activity into operating leverage by increasing output without proportionally growing cost. This means your overall Web Effectiveness is a shareholder issue, not a marketing one. Governance is how you turn that theory into muscle.

The Leadership Imperative

Ultimately, governance fails when it’s delegated. A COE can’t succeed without executive willpower and cross-functional buy-in.

The CEO owns shareholder value.
The CMO owns demand.
The CTO owns systems.
But the COE owns the connection between them.

If your website is the factory, your COE is the operations manual that keeps it producing value – efficiently, predictably, and at scale.

Web governance isn’t a brake pedal; it’s a steering system. It creates the clarity and confidence that allow innovation to scale safely. It’s how large organizations protect creativity without chaos — and how they turn complexity into compound value.

In the age of AI, alignment isn’t optional. Governance is growth.


How Recommender Systems Like Google Discover May Work via @sejournal, @martinibuster

Google Discover is largely a mystery to publishers and the search marketing community, even though Google has published official guidance about what it is and what it feels publishers should know about it. Nevertheless, it’s so mysterious that it’s generally not even considered a recommender system, yet that is what it is. This is a review of a classic research paper that shows how to scale a recommender system. Although the paper is about YouTube, it’s not hard to imagine how this kind of system can be adapted to Google Discover.

Recommender Systems

Google Discover belongs to the class of systems known as recommender systems. A classic recommender system I remember is MovieLens, from way back in 1997. It was a university science department project that let users rate movies and then used those ratings to recommend other movies to watch. It worked on the principle that people who like certain kinds of movies tend to also like certain other kinds of movies. But these kinds of algorithms have limitations that make them fall short of the scale necessary to personalize recommendations for YouTube or Google Discover.

Two-Tower Recommender System Model

The modern style of recommender system is sometimes referred to as the Two-Tower architecture, or the Two-Tower model. The Two-Tower model came about as a solution for YouTube, even though the original research paper (Deep Neural Networks for YouTube Recommendations) does not use the term.

It may seem counterintuitive to look to YouTube to understand how the Google Discover algorithm works, but the fact is that the system Google developed for YouTube became the foundation for how to scale a recommender system for an environment where massive amounts of content are generated every hour of the day, 24 hours a day.

It’s called the Two-Tower architecture because there are two representations that are matched against each other, like two towers.

In this model, which handles the initial “retrieval” of content from the database, a neural network processes user information to produce a user embedding, while content items are represented by their own embeddings. These two representations are matched using similarity scoring rather than being combined inside a single network.

To repeat: the research paper does not refer to the architecture as a Two-Tower architecture; the term was coined later to describe this kind of approach. So, while the paper doesn’t use the word tower, I’m going to keep using it because it makes it easier to visualize what’s going on in this kind of recommender system.

User Tower
The User Tower processes things like a user’s watch history, search tokens, location, and basic demographics. It uses this data to create a vector representation that maps the user’s specific interests in a mathematical space.

Item Tower
The Item Tower represents content using learned embedding vectors. In the original YouTube implementation, these were trained alongside the user model and stored for fast retrieval. This allows the system to compare a user’s “coordinates” against millions of video “coordinates” instantly, without having to run a complex analysis on every single video each time you refresh your feed.
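
To make the retrieval step concrete, here is a minimal sketch in Python with untrained weights and made-up dimensions (an illustration of the matching step, not Google’s code): the user tower maps user features to an embedding, and retrieval is a dot product against the item embeddings followed by a top-k selection.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 32
NUM_ITEMS = 10_000

# Item tower: one embedding per video; in practice these are learned, here random.
item_embeddings = rng.normal(size=(NUM_ITEMS, EMBED_DIM))

def user_tower(user_features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map user features (e.g., averaged embeddings of watched videos) to a user embedding."""
    hidden = np.maximum(user_features @ weights, 0.0)  # one ReLU layer stands in for the paper's deep MLP
    return hidden / np.linalg.norm(hidden)

def retrieve_top_k(user_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    scores = item_embeddings @ user_embedding  # similarity scoring between the two "towers"
    return np.argsort(-scores)[:k]             # candidate videos handed off to the ranking stage

weights = rng.normal(size=(EMBED_DIM, EMBED_DIM))
watch_history = item_embeddings[rng.integers(0, NUM_ITEMS, size=50)].mean(axis=0)
print(retrieve_top_k(user_tower(watch_history, weights)))
```

Because the item embeddings are precomputed, the expensive per-request work is just one pass through the user tower and a nearest-neighbor search, which is what lets this approach scale to millions of items.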

The Fresh Content Problem

Google’s research paper offers an interesting take on freshness. The problem of freshness is described as a tradeoff between exploitation and exploration. The YouTube recommendation system has to balance showing users content that is already known to be popular (exploitation) against exposing them to new and unproven content (exploration). What motivates Google to show new but unproven content, at least in the context of YouTube, is that users show a strong preference for new and fresh content.

The research paper explains why fresh content is important:

“Many hours worth of videos are uploaded each second to YouTube. Recommending this recently uploaded (“fresh”) content is extremely important for YouTube as a product. We consistently observe that users prefer fresh content, though not at the expense of relevance.”

This tendency to show fresh content seems to hold true for Google Discover, where Google tends to surface fresh content on the topics a user has recently shown interest in. Have you ever noticed how Google Discover tends to favor fresh content? The insights the researchers had about user preferences probably carry over to the Google Discover recommendation system. The takeaway here is that producing content on a regular basis could be helpful for getting web pages surfaced in Google Discover.

An interesting insight in this research paper – and I don’t know if it’s still true, but it’s still interesting – is that the researchers state that machine learning algorithms show an implicit bias toward older, existing content because they are trained on historical data.

They explain:

“Machine learning systems often exhibit an implicit bias towards the past because they are trained to predict future behavior from historical examples.”

The neural network is trained on past videos, so it learns that things from one or two days ago were popular, which creates a bias toward what happened in the past. The researchers solved the freshness issue by feeding the model the age of each training example as a feature. When the system is recommending videos to a user (serving), that time-based feature is set to zero days ago (or slightly negative). This signals to the model that it is making a prediction at the very end of the training window, essentially forcing it to predict what is popular right now rather than what was popular on average in the past.
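
Here is a minimal sketch, with made-up feature values (not the paper’s actual model code), of that example-age idea: the same feature the model sees during training is pinned to zero at serving time.

```python
import numpy as np

def make_features(base_features: np.ndarray, example_age_days: float) -> np.ndarray:
    """Append the 'example age' feature to the rest of the input vector."""
    return np.append(base_features, example_age_days)

video_signals = np.array([0.7, 0.2, 0.9])  # hypothetical stand-ins for the other inputs

# Training: each logged example carries its age, so the model can learn
# time-dependent popularity instead of averaging over the whole window.
training_input = make_features(video_signals, example_age_days=14.0)

# Serving: the feature is pinned to zero (or slightly negative), which asks the
# model for a prediction at the very end of the training window, i.e. "right now."
serving_input = make_features(video_signals, example_age_days=0.0)

print(training_input, serving_input)
```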

Accuracy Of Click Data

Google’s foundational research paper also provides insights about implicit user feedback signals, which is a reference to click data. The researchers say that this kind of data rarely provides accurate user satisfaction information.

The researchers write:

“Noise: Historical user behavior on YouTube is inherently difficult to predict due to sparsity and a variety of unobservable external factors. We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals. Furthermore, metadata associated with content is poorly structured without a well defined ontology. Our algorithms need to be robust to these particular characteristics of our training data.”

The researchers conclude the paper by stating that this approach to recommender systems helped increase user watch time and proved to be more effective than other systems.

They write:

“We have described our deep neural network architecture for recommending YouTube videos, split into two distinct problems: candidate generation and ranking.
Our deep collaborative filtering model is able to effectively assimilate many signals and model their interaction with layers of depth, outperforming previous matrix factorization approaches used at YouTube.

We demonstrated that using the age of the training example as an input feature removes an inherent bias towards the past and allows the model to represent the time-dependent behavior of popular videos. This improved offline holdout precision results and increased the watch time dramatically on recently uploaded videos in A/B testing.

Ranking is a more classical machine learning problem yet our deep learning approach outperformed previous linear and tree-based methods for watch time prediction. Recommendation systems in particular benefit from specialized features describing past user behavior with items. Deep neural networks require special representations of categorical and continuous features which we transform with embeddings and quantile normalization, respectively.”

Although this research paper is ten years old, it still offers insights into how recommender systems work and takes a little of the mystery out of recommender systems like Google Discover. Read the original research paper: Deep Neural Networks for YouTube Recommendations


NotificationX WordPress WooCommerce Plugin Vulnerabilities Impact 40k Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the NotificationX FOMO plugin for WordPress and WooCommerce sites, affecting more than 40,000 websites. The vulnerability, which is rated at a 7.2 (High) severity level, enables unauthenticated attackers to inject malicious JavaScript that can execute in a visitor’s browser when specific conditions are met.

NotificationX – FOMO Plugin

The NotificationX FOMO plugin is used by WordPress and WooCommerce site owners to display notification bars, popups, and real-time alerts such as recent sales, announcements, and promotional messages. The plugin is commonly deployed on marketing and e-commerce sites to create urgency and draw visitor attention through notifications.

Exposure Level

The vulnerability does not require the attacker to authenticate or acquire any user role before launching an attack. Attackers do not need a WordPress account or any prior access to the site to trigger the vulnerability. Exploitation relies on getting a victim to visit a specially crafted page that interacts with the vulnerable site.

Root Cause Of The Vulnerability

The issue is a DOM-based Cross-Site Scripting (XSS) vulnerability tied to how the plugin processes preview data. In the context of a WordPress plugin, a DOM-based XSS vulnerability happens when the plugin contains client-side JavaScript that processes data from an untrusted source (the “source”) in an unsafe way, usually by writing the data to the web page (the “sink”).

In the context of the NotificationX plugin, the vulnerability exists because the plugin’s scripts accept input through the nx-preview POST parameter but do not properly sanitize the input or escape the output before it is rendered in the browser. The security checks that should ensure user-supplied data is treated as plain text are missing. This allows an attacker to create a malicious web page that automatically submits a form to the victim’s site, forcing the victim’s browser to execute harmful scripts injected via that parameter.

The end result is that an attacker-controlled input can be interpreted as executable JavaScript instead of harmless preview content.
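
To make the source-to-sink pattern concrete, here is a minimal sketch of the escape-before-render fix. It is a generic Python illustration of the pattern, not the plugin’s actual JavaScript or PHP, and the markup is hypothetical; the only difference between the two functions is whether the untrusted value is escaped before it reaches the page.

```python
import html

def render_preview_unsafe(nx_preview: str) -> str:
    # Vulnerable pattern: untrusted input is written straight into markup,
    # so a payload like <img src=x onerror=...> executes in the visitor's browser.
    return f"<div class='nx-preview'>{nx_preview}</div>"

def render_preview_safe(nx_preview: str) -> str:
    # Fixed pattern: escape the value so the browser treats it as plain text.
    return f"<div class='nx-preview'>{html.escape(nx_preview)}</div>"

payload = "<img src=x onerror=alert(document.cookie)>"
print(render_preview_unsafe(payload))  # script-capable markup reaches the page
print(render_preview_safe(payload))    # &lt;img ...&gt; renders as inert text
```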

What Attackers Can Do

If exploited, the vulnerability enables attackers to execute arbitrary JavaScript in the context of the affected site. The injected script executes when a user visits a malicious page that automatically submits a form to the vulnerable NotificationX site.

This can allow attackers to:

  • Hijack logged-in administrator or editor sessions
  • Perform actions on behalf of authenticated users
  • Redirect visitors to malicious or fraudulent websites
  • Access sensitive information available through the browser

The official Wordfence advisory explains:

“The NotificationX – FOMO, Live Sales Notification, WooCommerce Sales Popup, GDPR, Social Proof, Announcement Banner & Floating Notification Bar plugin for WordPress is vulnerable to DOM-Based Cross-Site Scripting via the ‘nx-preview’ POST parameter in all versions up to, and including, 3.2.0. This is due to insufficient input sanitization and output escaping when processing preview data. This makes it possible for unauthenticated attackers to inject arbitrary web scripts in pages that execute when a user visits a malicious page that auto-submits a form to the vulnerable site.”

Affected Versions

All versions of NotificationX up to and including 3.2.0 are vulnerable. A patch is available and the vulnerability was addressed in NotificationX version 3.2.1, which includes security enhancements related to this issue.

Recommended Action

Site owners using NotificationX should update the plugin immediately to version 3.2.1 or later. Sites that cannot update should disable the plugin until the patched version can be applied. Leaving vulnerable versions active exposes visitors and logged-in users to client-side attacks that can be difficult to detect and mitigate.

One More Vulnerability

This plugin has another vulnerability, rated at a 4.3 (Medium) severity level. The Wordfence advisory for this one describes it like this:

“The NotificationX plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the ‘regenerate’ and ‘reset’ REST API endpoints in all versions up to, and including, 3.1.11. This makes it possible for authenticated attackers, with Contributor-level access and above, to reset analytics for any NotificationX campaign, regardless of ownership.”

The NotificationX WordPress plugin includes two REST API endpoints called “regenerate” and “reset.” These endpoints are used to manage campaign analytics, such as resetting or rebuilding the stats that show how a notification is performing.

The problem is that these endpoints do not properly check user permissions for modifying data. In this case, the plugin only checks whether a user is logged in with Contributor-level access or higher, not whether they are actually allowed to perform the action. Even though users with the Contributor level role normally have very limited permissions, this flaw lets them perform actions they should not be able to do.

In this case, the damage that an attacker can do is limited. For example, an attacker can’t take over a site. Updating to version 3.2.1 or higher (the same release that fixes the other vulnerability) will patch this vulnerability.

An attacker can:

  • Reset analytics for any NotificationX campaign
  • Do this even if they did not create or own the campaign
  • Repeatedly wipe or regenerate campaign statistics


What Profitable Google Ads Look Like in 2026 [Webinar] via @sejournal, @hethr_campbell

Google Ads’ Performance Max Smart Bidding is finally delivering real results for teams that know how to work with it.

As marketers are forced to give PMax more control, many are struggling to understand exactly how to structure automated Google Ads campaigns and accounts.

In this webinar, the marketing leadership team at DigiCom, a 2025 Inc. 5000-listed ecommerce growth agency, breaks down how they are running Google Ads at scale in 2026.

With hands-on experience managing PPC programs totaling $200M+ in ad spend across multiple accounts, they will share how high-growth brands are structuring paid search, Performance Max, and YouTube campaigns to meet shoppers where they are and drive consistent returns.

And they’re doing a live Google Ads audit during the webinar, so register today and submit your site.

What You’ll Learn

This webinar session will showcase how top brands are navigating Smart Bidding changes in 2026.

RSVP now, and learn:

  • How to structure Google Ads accounts to maintain control over ROAS in an automated landscape
  • The right creative and copy to feed into Google’s systems to capture high-intent shoppers
  • Proven ways to move beyond keyword-first strategies and focus on profit-driven outcomes

Why Attend?

You will gain practical PPC strategy frameworks you can apply immediately, along with the chance for select attendees to receive a live Google Ads audit during the webinar. If you are responsible for scaling paid media performance in 2026, these strategies are worth studying.

Register now to get a clear, founder-led Google Ads playbook for scaling profitably in 2026.

🛑 Can’t make it live? Register anyway, and we’ll send you the on-demand recording after the event.

WordPress Advanced Custom Fields Extended Plugin Vulnerability via @sejournal, @martinibuster

An advisory was published about a vulnerability in the popular Advanced Custom Fields: Extended WordPress plugin that is rated 9.8, affecting up to 100,000 installations.

The flaw enables unauthenticated attackers to register themselves with administrator privileges and gain full control of a website and all settings.

Advanced Custom Fields: Extended Plugin

The Advanced Custom Fields: Extended plugin is an add-on to the popular Advanced Custom Fields Pro plugin. It is used by WordPress site owners and developers to extend how custom fields work, manage front-end forms, create options pages, define custom post types and taxonomies, and customize the WordPress admin experience.

The plugin is widely used, with more than 100,000 active installations, and is commonly deployed on sites that rely on front-end forms and advanced content management workflows.

Who Can Exploit This Vulnerability

This vulnerability can be exploited by unauthenticated attackers, which means there is no barrier of first having to attain a higher permission level before launching an attack. If the affected version of the plugin is present with a specific configuration in place, anyone on the internet can attempt to exploit the flaw. That kind of exposure significantly increases risk because it removes the need for compromised credentials or insider access.

Privilege Escalation Exposure

The vulnerability is a privilege escalation flaw caused by missing role restrictions during user registration.

Specifically, the plugin’s insert_user function does not limit which user roles can be assigned when anyone creates a new user account. Under normal circumstances, WordPress should strictly control which roles users can select or be assigned during registration.

Because this check is missing, an attacker can submit a registration request that explicitly assigns the administrator role to the new account.

This issue only occurs when the site’s form configuration maps a custom field directly to the WordPress role field. When that condition is met, the plugin accepts the supplied role value without verifying that it is safe or permitted.

The flaw appears to be due to insufficient server-side validation against the form field’s “Choices.” The plugin seems to have relied on the HTML form to restrict which roles a user could select. For example, a developer could create a user sign-up form with only the “subscriber” role as an option. But there was no back-end verification that the role a user signed up with matched the roles the form was supposed to be limited to.

What was probably happening is that an unauthenticated attacker could inspect the form’s HTML, see the field responsible for the user role, and intercept the HTTP request so that, for example, instead of sending role=subscriber, the attacker could change the value to role=administrator. The code responsible for the insert_user action took this input and passed it directly to WordPress user creation functions. It did not check if “administrator” was actually one of the allowed options in the field’s “Choices” list.
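
The general fix is server-side allow-list validation. The sketch below is a hypothetical Python illustration of that pattern, not the plugin’s actual PHP: a submitted role is honored only if it appears in the field’s configured “Choices.”

```python
ALLOWED_ROLES = {"subscriber"}  # the "Choices" configured on the sign-up form

def insert_user(submitted_role: str) -> str:
    # Vulnerable pattern: trust whatever role value the request carries, so an
    # attacker can swap role=subscriber for role=administrator in transit.
    return submitted_role

def insert_user_validated(submitted_role: str) -> str:
    # Patched pattern: only accept values from the field's allow-list,
    # falling back to a safe default otherwise.
    return submitted_role if submitted_role in ALLOWED_ROLES else "subscriber"

print(insert_user("administrator"))            # administrator (privilege escalation)
print(insert_user_validated("administrator"))  # subscriber (tampered value rejected)
```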

The Changelog for the plugin lists the following entry as one of the patches to the plugin:

“Enforced front-end fields validation against their respective “Choices” settings.”

That entry in the changelog means the plugin now actively checks front-end form submissions to ensure the submitted value matches the field’s defined “Choices”, rather than trusting whatever value is posted.

There is also this entry in the changelog:

“Module: Forms – Added security measure for forms allowing user role selection”

This entry means the plugin added server-side protections to prevent abuse when a front-end form is allowed to set or select a WordPress user role.

Overall, the patches added stronger validation controls for front-end forms and made that validation more configurable.

What Attackers Can Gain

If successfully exploited, the attacker gains administrator-level access to the WordPress site.

That level of access allows attackers to:

  • Install or modify plugins and themes
  • Inject malicious code
  • Create backdoor administrator accounts
  • Steal or manipulate site data
  • Redirect visitors or distribute malware

Gaining administrator access is a full site takeover.

The Wordfence advisory describes the issue as follows:

“The Advanced Custom Fields: Extended plugin for WordPress is vulnerable to Privilege Escalation in all versions up to, and including, 0.9.2.1. This is due to the ‘insert_user’ function not restricting the roles with which a user can register. This makes it possible for unauthenticated attackers to supply the ‘administrator’ role during registration and gain administrator access to the site.”

As Wordfence describes, the plugin trusts user-supplied input for account roles when it should not. That trust allows attackers to bypass WordPress’s normal protections and grant themselves the highest possible permission level.

Wordfence also reports having blocked active exploitation attempts targeting this vulnerability, indicating that attackers are already probing sites for exposure.

Conditions Required For Exploitation

The vulnerability is not automatically exploitable on every site running the plugin.

Exploitation requires that:

  • The site uses a front-end form provided by the plugin
  • The form maps a custom field directly to the WordPress user role

Patch Status and What Site Owners Should Do

The vulnerability affects all versions up to and including 0.9.2.1. The issue is addressed in version 0.9.2.2, which introduces additional validation and security checks around front-end forms and user role handling.

The entries in the official changelog for ACF Extended Basic 0.9.2.2:

  • Module: Forms – Enforced front-end fields validation against their respective “Choices” settings
  • Module: Forms – Added security measure for forms allowing user role selection
  • Module: Forms – Added acfe/form/validate_value hook to validate fields individually on front
  • Module: Forms – Added acfe/form/pre_validate_value hook to bypass enforced validation

Site owners using this plugin should update immediately to the latest patched version. If updating is not possible, the plugin should be disabled until the fix can be applied.

Given the severity of the flaw and the lack of authentication required to exploit it, delaying action leaves affected sites exposed to a complete takeover.


The Smart Way To Take Back Control Of Google’s Performance Max [A Step-By-Step Guide]

This post was sponsored by Channable. The opinions expressed in this article are the sponsor’s own.

If you’ve ever watched your best-selling product devour your entire ad budget while dozens of promising SKUs sit in the dark, you’re not alone.

Google’s Performance Max (PMax) campaigns have transformed ecommerce advertising since launching in 2021.

For many advertisers, PMax introduced a significant challenge: a lack of transparency in budget allocation. Without clear insights into which placements, audiences, or assets are driving performance, it’s easy to feel like you’re flying blind.

The good news? You don’t have to stay there.

This guide walks you through a practical framework for reclaiming control over your Performance Max campaigns, allowing you to segment products by actual performance and make data-driven decisions rather than hope AI figures it out for you.

The Budget Black Hole: Where Your Performance Max Ad Spend Actually Goes

Most ecommerce brands start by organizing PMax campaigns around categories. Shoes in one campaign. Accessories in another. That seems logical and clean, but it can completely ignore how products actually perform.

Here’s what typically happens:

  • Top sellers monopolize budget. Google’s algorithm prioritizes products with strong historical performance, which means your star items keep getting the spotlight while everything else struggles for visibility.
  • New arrivals never get traction. Without performance history, fresh products can’t compete, so they never build the data they need to succeed.
  • “Zombie” products stay invisible. Some items might perform well if given the chance, but static segmentation never gives them that opportunity.
  • Manual adjustments eat your time. Every tweak requires you to dig through data, make changes, and hope for the best.

The result? Wasted potential, uneven budget distribution, and marketing teams stuck reacting instead of strategizing. You’re already doing the hard work; this framework helps that effort go further and helps you set and manage your PPC budget efficiently and effectively.

How To Fix It: Segment Campaigns By What’s Actually Working

Instead of organizing campaigns by category, segment by how products actually perform.

This approach creates dynamic groupings that automatically shift as performance data changes with no manual reshuffling.

Step 1: Classify Your Products into Three Groups

Start by categorizing your catalogue based on real performance metrics: ROAS, clicks, conversions, and visibility.


Star Products

These are your proven winners, with high ROAS, strong click-through rates, and consistent conversions. Your goal with stars is to maximize their potential while protecting margins.

  • Set higher ROAS targets (3x–5x or above based on your margins).
  • Allocate budget confidently.
  • Monitor to ensure profitability stays intact.

Zombie Products

These are the “invisible” items that haven’t had enough exposure to prove themselves. They might be underperformers, or they might be hidden gems waiting for their moment.

  • Set lower ROAS targets (0.5x–2x) to prioritize visibility.
  • Give them a dedicated budget to gather performance data.
  • Review regularly and promote graduates to the star category.

New Arrivals

Fresh products need their own ramp-up period before being judged against established items. Without historical data, they can’t compete fairly in a mixed campaign.

  • Create a separate campaign specifically for new launches.
  • Use dynamic date fields to automatically include recently added items.
  • Set goals focused on awareness and data collection rather than immediate ROAS.

Step 2: Define Your Performance Thresholds

Decide what metrics determine which bucket a product falls into. For example:

  • Stars: ROAS above 3x–5x, strong click volume, goal is maximizing profitability.
  • Zombies: ROAS below 2x or insufficient data, low click volume, goal is testing and learning.
  • New Arrivals: Date-based (for example, added within last 30 days), goal is building visibility.

Your thresholds will depend on your margins, industry, and historical benchmarks. The key is defining clear criteria so products can move between segments automatically as their performance changes.

Step 3: Shorten Your Analysis Window

Many advertisers default to 30-day lookback windows for performance analysis. For fast-moving catalogues, that’s too slow.

Consider shifting to a 14-day rolling window for better analysis. You’ll get:

  • Faster reactions to performance shifts
  • More accurate data for seasonal or trending items
  • Less wasted spend on products that peaked two weeks ago

This is especially important for fashion, home goods, and any category where trends move quickly.

Step 4: Apply Segmentation Across All Channels

Your segmentation logic shouldn’t stop at Google. The same star/zombie/new arrival framework can (and should) apply to:

  • Meta Ads
  • Pinterest
  • TikTok
  • Criteo
  • Amazon

Cross-channel consistency compounds your optimization efforts. A product that’s a “zombie” on Google might be a star on TikTok, or vice versa. Unified segmentation helps you connect products to the right audiences on the right channels and distribute budget accordingly.

Step 5: Build Rules That Move Products Automatically

Here’s where the real efficiency gains come in. Instead of manually reviewing every SKU, create rules that automatically shift products between campaigns based on performance.

For example:

  • If ROAS exceeds 3x–5x over your analysis window – Move to Stars campaign
  • If ROAS falls below 2x or clicks drop below your average (for example, 20 clicks in 14 days) – Move to Zombies campaign
  • If product was added within a set time limit (for example, the last 30 days) – Include in New Arrivals campaign

This dynamic automation ensures your campaigns stay optimized without requiring constant manual intervention.
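
Here is a minimal Python sketch of those rules, using the example thresholds from Steps 2 and 5. The SKU names and dates are invented, and the handling of mid-range performers (between the Zombie and Star thresholds) is an assumption to set according to your own policy.

```python
from datetime import date, timedelta

# Example thresholds from the steps above; tune them to your margins and benchmarks.
ROAS_STAR, ROAS_ZOMBIE, MIN_CLICKS, NEW_DAYS = 3.0, 2.0, 20, 30

def assign_campaign(roas: float, clicks: int, added_on: date, today: date) -> str:
    if (today - added_on).days <= NEW_DAYS:
        return "New Arrivals"                      # build visibility before judging ROAS
    if roas >= ROAS_STAR:
        return "Stars"                             # push budget behind proven winners
    if roas < ROAS_ZOMBIE or clicks < MIN_CLICKS:  # clicks measured over the 14-day window
        return "Zombies"                           # lower ROAS targets to gather data
    return "Stars"                                 # mid-range performers: an assumption, pick your own rule

today = date(2026, 1, 15)
catalogue = [
    ("running-shoe-01", 4.2, 310, today - timedelta(days=400)),
    ("scarf-17",        0.8,  12, today - timedelta(days=90)),
    ("jacket-new",      0.0,   0, today - timedelta(days=5)),
]
for sku, roas, clicks, added in catalogue:
    print(sku, "->", assign_campaign(roas, clicks, added, today))
```

Run on every feed sync or scheduled job, a rule like this re-labels each product automatically, which is what lets items graduate from Zombies to Stars without manual reshuffling.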

Get Smart: Let Intelligent Automation Do the Heavy Lifting


The steps above work—but implementing them manually across thousands of SKUs and multiple channels is time-consuming. Product-level performance data lives in different dashboards. Calculating ROAS at the SKU level requires combining data from multiple sources. And building automation rules from scratch takes technical resources most teams don’t have.

This is where the right feed management and PPC automation tooling really helps. For example, it can merge product-level performance data into a single view and let you build rules that automatically segment products based on criteria you define.

To see what this looks like in practice, Canadian fashion retailer La Maison Simons offers a useful reference point. They faced the same challenges – category-based campaigns where top sellers consumed the budget while newer items never gained traction.

After shifting to performance-based segmentation, they saw measurable improvements without increasing ad spend:

  • ROAS nearly doubled over a three-year period
  • Cost-per-click decreased while click-through rates improved
  • Average order value increased by 14%
  • Their dedicated new arrivals campaigns consistently outperformed expectations
  • Perhaps most notably, their previously “invisible” products became some of their strongest performers once they received dedicated visibility

The takeaway isn’t about any single tool; it’s that performance-driven segmentation works. When you stop letting one popular item take all the budget and start giving every product a fair shot based on data, the results tend to follow.

Learn more about the success story and the full details of their approach here.

Quick Principles to Keep in Mind

  • Segment by performance, not category: Budget flows to what works, not what’s familiar
  • Use 14-day windows for fast-moving catalogues: Capture fresher signals, reduce wasted spend
  • Give new products their own campaign: Build data before judging against established items
  • Automate product movement between segments: Save time and stay responsive without manual work
  • Apply logic across all paid channels: Compounding optimization across Google, Meta, TikTok, and more

Your Next Step

Performance Max doesn’t have to feel like handing Google your wallet and hoping for the best. With the right segmentation strategy, you can restore control, surface overlooked opportunities and make smarter decisions about where your budget goes.

Curious whether your product data is ready for this kind of optimization? A free feed and segmentation audit can help you find gaps and opportunities – no commitment, just clarity.

Because better data leads to better decisions. And better decisions lead to results you can actually control.



The Download: digitizing India, and scoring embryos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The man who made India digital isn’t done yet

Nandan Nilekani can’t stop trying to push India into the future. He started nearly 30 years ago, masterminding an ongoing experiment in technological state capacity that started with Aadhaar—the world’s largest digital identity system. 

Using Aadhaar as the bedrock, Nilekani and people working with him went on to build a sprawling collection of free, interoperating online tools that add up to nothing less than a digital infrastructure for society, covering government services, banking, and health care. They offer convenience and access that would be eye-popping in wealthy countries a tenth of India’s size. 

At 70 years old, Nilekani should be retired. But he has a few more ideas. Read our profile to learn about what he’s set his sights on next.

—Edd Gent

Embryo scoring is slowly becoming more mainstream

Many Americans agree that it’s acceptable to screen embryos for severe genetic diseases. Far fewer say it’s okay to test for characteristics related to a future child’s appearance, behavior, or intelligence. But a few startups are now advertising what they claim is a way to do just that.

This new kind of testing—which can cost up to $50,000—is incredibly controversial. Nevertheless, the practice has grown popular in Silicon Valley, and it’s becoming more widely available to everyone. Read the full story

—Julia Black
Embryo scoring is one of our 10 Breakthrough Technologies this year. Check out what else made the list, and scroll down to vote for the technology you think deserves the 11th slot.

Five AI predictions for 2026

What will surprise us most about AI in 2026?

Tune in at 12.30pm today to hear me, our senior AI editor Will Douglas Heaven and senior AI reporter James O’Donnell discuss our “5 AI Predictions for 2026”. This special LinkedIn Live event will explore the trends that are poised to transform the next twelve months of AI. The conversation will also offer a first glimpse at EmTech AI 2026, MIT Technology Review’s longest running AI event for business leadership. Sign up to join us later today! 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Europe is trying to build its own DeepSeek
That’s been a goal for a while, but US hostility is making those efforts newly urgent. (Wired $)
Plenty of Europeans want to wean off US technology. That’s easier said than done. (New Scientist $)
DeepSeek may have found a new way to improve AI’s ability to remember. (MIT Technology Review $)

2 Ship-tracking data shows China is creating massive floating barriers
The maneuvers show that Beijing can now rapidly muster large numbers of the boats in disputed seas. (NYT $)
Quantum navigation could solve the military’s GPS jamming problem. (MIT Technology Review)

3 The AI bubble risks disrupting the global economy, says the IMF
But it’s hard to see anyone pumping the brakes any time soon. (FT $)
British politicians say the UK is being exposed to ‘serious harm’ by AI risks. (The Guardian)
What even is the AI bubble? (MIT Technology Review)

4 Cryptocurrencies are dying in record numbers
In an era of one-off joke coins and pump and dump scams, that’s surely a good thing. (Gizmodo)
President Trump has pardoned a lot of people who’ve committed financial crimes. (NBC)

5 Threads has more global daily mobile users than X now
And once-popular alternative Bluesky barely even makes the charts. (Forbes)

6 The UK is considering banning under 16s from social media 
Just weeks after a similar ban took effect in Australia. (BBC)

7 You can burn yourself out with AI coding agents 
They could be set to make experienced programmers busier than ever before. (Ars Technica)
Why Anthropic’s Claude Code is taking the AI world by storm. (WSJ $)
AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)

8 Some tech billionaires are leaving California 👋
Not all though—the founders of Nvidia and Airbnb say they’ll stay and pay the 5% wealth tax. (WP $)
Tech bosses’ support for Trump is paying off for them big time. (FT $)

9 Matt Damon says Netflix tells directors to repeat movie plots
To accommodate all the people using their phones. (NME)

10 Why more people are going analog in 2026 🧶
Crafting, reading, and other screen-free hobbies are on the rise. (CNN)
Dumbphones are becoming popular too—but it’s worth thinking hard before you switch. (Wired $)

Quote of the day

“It may sound like American chauvinism…and it is. We’re done apologising about that.”

—Thomas Dans, a Trump appointee who heads the US Arctic Research Commission, tells the FT his boss is deadly serious about acquiring Greenland. 

One more thing


Inside the fierce, messy fight over “healthy” sugar tech

On the outskirts of Charlottesville, Virginia, a new kind of sugar factory is taking shape. The facility is being developed by a startup called Bonumose. It uses a processed corn product called maltodextrin that is found in many junk foods and is calorically similar to table sugar (sucrose). 

But for Bonumose, maltodextrin isn’t an ingredient—it’s a raw material. When it’s poured into the company’s bioreactors, what emerges is tagatose. Found naturally in small concentrations in fruit, some grains, and milk, it is nearly as sweet as sucrose but apparently with only around half the calories, and wider health benefits.

Bonumose’s process originated in a company spun out of the Virginia Tech lab of Yi-Heng “Percival” Zhang. When MIT Technology Review spoke to Zhang, he was sitting alone in an empty lab in Tianjin, China, after serving a two-year sentence of supervised release in Virginia for conspiracy to defraud the US government, making false statements, and obstruction of justice. If sugar is the new oil, the global battle to control it has already begun. Read the full story

—Mark Harris

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Paul Mescal just keeps getting cooler.
+ Make this year calmer with these evidence-backed tips. ($)
+ I can confirm that Lumie wake-up lamps really are worth it (and no one paid me to say so!)
+ There are some real gems in Green Day’s bassist Mike Dirnt’s favorite albums list.

The UK government is backing AI that can run its own lab experiments

A number of startups and universities that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work.

ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.
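
As a rough illustration of that loop (every function name and value below is a hypothetical stand-in, not ARIA’s definition or any team’s actual system), the control flow looks something like this:

```python
# Sketch of the hypothesize -> experiment -> analyze -> feed-back loop described above.
def propose_hypothesis(findings: list[str]) -> str:
    return f"hypothesis-{len(findings) + 1}"  # stand-in for LLM-driven ideation

def run_experiment(hypothesis: str) -> float:
    # Stand-in for dispatching a protocol to an automated lab and reading back a measurement.
    return (hash(hypothesis) % 100) / 100

def analyze(result: float) -> str:
    return "supported" if result > 0.5 else "rejected"

findings: list[str] = []
for _ in range(3):                            # the human sets the question; the loop does the grunt work
    h = propose_hypothesis(findings)
    outcome = analyze(run_experiment(h))
    findings.append(f"{h}: {outcome}")        # results feed the next iteration
print(findings)
```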

“There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer. 

ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.

Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.

“We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”

Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.

And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. The experiments will then be run in an automated lab by Sandia National Laboratories in the US.

Taking the temperature

Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to determine how the way science is done is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.   

Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”

For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”

Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.  

AI scientists sit in a layer above those human-made tools and call on those tools as needed, says Rowstron. “But there’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”

That’s still some way off, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.

There are also unsolved problems with agentic systems in general that limit how long they can run by themselves without going off track or making errors. For example, a study, titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included changes in the initial specifications and “overexcitement that declares success despite obvious failures.”

“Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”

“But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”

The era of agentic chaos and how data will save us

AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.

The agent explosion is coming

Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience. 

The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging. 

The reliability gap that’s holding AI back

Companies are investing heavily in AI, but the returns aren’t materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. However, the leaders reported they achieved five times the revenue increases and three times the cost reductions. Clearly, there is a massive premium for being a leader. 

What separates the leaders from the pack isn’t how much they’re spending or which models they’re using. Before scaling AI deployment, these “future-built” companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably. 

A framework for agent reliability: The four quadrants

To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance.

Take a simple example: an agent that orders you pizza. The model interprets your request (“get me a pizza”). The tool executes the action (calling the Domino’s or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?). 

Each dimension represents a potential failure point:

  • Models: The underlying AI systems that interpret prompts, generate responses, and make predictions
  • Tools: The integration layer that connects AI to enterprise systems, such as APIs, protocols, and connectors 
  • Context: The information agents need to understand the full business picture before making decisions, including customer histories, product catalogs, and supply chain networks
  • Governance: The policies, controls, and processes that ensure data quality, security, and compliance

This framework helps diagnose where reliability gaps emerge. When an enterprise agent fails, which quadrant is the problem? Is the model misunderstanding intent? Are the tools unavailable or broken? Is the context incomplete or contradictory? Or is there no mechanism to verify that the agent did what it was supposed to do?
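
As a rough sketch of how the four quadrants map onto the pizza example (all names and data here are hypothetical stand-ins, not any vendor’s implementation):

```python
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    delivered: bool = False

def model_interpret(prompt: str) -> str:
    # Models: interpret the request into an intent (a simple rule stands in for an LLM here).
    return "order_pizza" if "pizza" in prompt.lower() else "unknown"

def context_personalize(user_history: dict) -> str:
    # Context: pull the user's usual order from unified business data.
    return user_history.get("usual_topping", "margherita")

def tool_place_order(topping: str) -> Order:
    # Tools: stand-in for the integration layer calling an external ordering API.
    return Order(item=f"{topping} pizza", delivered=True)

def governance_validate(order: Order) -> bool:
    # Governance: verify the outcome actually happened and complies with policy.
    return order.delivered

history = {"usual_topping": "pepperoni"}
if model_interpret("get me a pizza") == "order_pizza":
    order = tool_place_order(context_personalize(history))
    print("validated:", governance_validate(order), "-", order.item)
```

A failure in any one of these functions maps to a different quadrant, which is the point of the diagnostic framework.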

Why this is a data problem, not a model problem

The temptation is to think that reliability will simply improve as models improve. And model capability is indeed advancing exponentially: the cost of inference has fallen nearly 900-fold in three years, hallucination rates are on the decline, and AI’s capacity to perform long tasks doubles every six months.

Tooling is also accelerating. Integration frameworks like the Model Context Protocol (MCP) make it dramatically easier to connect agents with enterprise systems and APIs.

If models are powerful and tools are maturing, then what is holding back adoption?

To borrow from James Carville, “It is the data, stupid.” The root cause of most misbehaving agents is misaligned, inconsistent, or incomplete data.

Enterprises have accumulated data debt over decades. Acquisitions, custom systems, departmental tools, and shadow IT have left data scattered across silos that rarely agree. Support systems do not match what is in marketing systems. Supplier data is duplicated across finance, procurement, and logistics. Locations have multiple representations depending on the source.

Drop a few agents into this environment, and they will perform wonderfully at first, because each one is given a curated set of systems to call. Add more agents and the cracks grow, as each one builds its own fragment of truth.

This dynamic has played out before. When business intelligence became self-serve, everyone started creating dashboards. Productivity soared, but reports failed to match. Now imagine that phenomenon not in static dashboards, but in AI agents that can take action. With agents, data inconsistency produces real business consequences, not just debates among departments.

Companies that build unified context and robust governance can deploy thousands of agents with confidence, knowing they’ll work together coherently and comply with business rules. Companies that skip this foundational work will watch their agents produce contradictory results, violate policies, and ultimately erode trust faster than they create value.

Leverage agentic AI without the chaos 

The question for enterprises centers on organizational readiness. Will your company prepare the data foundation needed to make agent transformation work? Or will you spend years debugging agents, one issue at a time, forever chasing problems that originate in infrastructure you never built?

Autonomous agents are already transforming how work gets done. But the enterprise will only experience the upside if those systems operate from the same truth. This ensures that when agents reason, plan, and act, they do so based on accurate, consistent, and up-to-date information. 

The companies generating value from AI today have built on fit-for-purpose data foundations. They recognized early that in an agentic world, data functions as essential infrastructure. A solid data foundation is what turns experimentation into dependable operations.

At Reltio, the focus is on building that foundation. The Reltio data management platform unifies core data from across the enterprise, giving every agent immediate access to the same business context. This unified approach enables enterprises to move faster, act smarter, and unlock the full value of AI.

Agents will define the future of the enterprise. Context intelligence will determine who leads it.

For leaders navigating this next wave of transformation, see Reltio’s practical guide:
Unlocking Agentic AI: A Business Playbook for Data Readiness. Get your copy now to learn how real-time context becomes the decisive advantage in the age of intelligence. 

Reimagining ERP for the agentic AI era

The story of enterprise resource planning (ERP) is really a story of businesses learning to organize themselves around the latest, greatest technology of the times. In the 1960s through the ’80s, mainframes, material requirements planning (MRP), and manufacturing resource planning (MRP II) brought core business data from file cabinets to centralized systems. Client-server architectures defined the ’80s and ’90s, taking digitization mainstream during the internet’s infancy. And in the 21st century, as work moved beyond the desktop, SaaS and cloud ushered in flexible access and elastic infrastructure.

The rise of composability and agentic AI marks yet another dawn—and an apt one for the nascent intelligence age. Composable architectures let organizations assemble capabilities from multiple systems in a mix-and-match fashion, so they can swap vendor gridlock for an à la carte portfolio of fit-for-purpose modules. On top of that architectural shift, agentic AI enables coordination across systems that weren’t originally designed to talk to one another.

Early indicators suggest that AI-enabled ERP will yield meaningful performance gains: One 2024 study found that organizations implementing AI-driven ERP solutions stand to gain around a 30% boost in user satisfaction and a 25% lift in productivity; another suggested that AI-driven ERP can lead to processing time savings of up to 45%, as well as improvements in decision accuracy to the tune of 60%.

These dual advancements address long-standing gaps that previous ERP eras fell short of delivering: freedom to innovate outside of vendor roadmaps, capacity for rapid iteration, and true interoperability across all critical functions. This shift signals the end of monolithic dependency as well as a once-in-a-generation opportunity for early movers to gain a competitive edge.

Key takeaways include:

  • Enterprises are moving away from monolithic ERP vendor upgrades in favor of modular architectures that allow them to change or modernize components independently while keeping a stable core for essential transactions.
  • Agentic AI is a timely complement to composability, functioning as a UX and orchestration layer that can coordinate workflows across disparate systems and turn multi-step processes into automated, cross-platform operations.
  • These dual shifts are finally enabling technology architecture to organize around the business, instead of the business around the ERP. Companies can modernize by reconfiguring and extending what they already have, rather than relying on ERP-centric upgrades.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.