What Profitable Google Ads Look Like in 2026 [Webinar] via @sejournal, @hethr_campbell

Google Ads’ Performance Max Smart Bidding is finally delivering real results for teams that know how to work with it.

As marketers are forced to give PMax more control, many are struggling to understand exactly how to structure automated Google Ads campaigns and accounts.

In this webinar, the marketing leadership team at DigiCom, a 2025 Inc. 5000-listed ecommerce growth agency, breaks down how they are running Google Ads at scale in 2026.

With hands-on experience managing PPC programs totaling $200M+ in ad spend across multiple accounts, they will share how high-growth brands are structuring paid search, Performance Max, and YouTube campaigns to meet shoppers where they are and drive consistent returns.

And they’re doing a live Google Ads audit during the webinar, so register today and submit your site.

What You’ll Learn

This webinar session will showcase how top brands are navigating Smart Bidding changes in 2026.

RSVP now, and learn:

  • How to structure Google Ads accounts to maintain control over ROAS in an automated landscape
  • The right creative and copy to feed into Google’s systems to capture high-intent shoppers
  • Proven ways to move beyond keyword-first strategies and focus on profit-driven outcomes

Why Attend?

You will gain practical PPC strategy frameworks you can apply immediately, along with the chance for select attendees to receive a live Google Ads audit during the webinar. If you are responsible for scaling paid media performance in 2026, these strategies are worth studying.

Register now to get a clear, founder-led Google Ads playbook for scaling profitably in 2026.

🛑 Can’t make it live? Register anyway, and we’ll send you the on-demand recording after the event.

WordPress Advanced Custom Fields Extended Plugin Vulnerability via @sejournal, @martinibuster

An advisory was published about a vulnerability in the popular Advanced Custom Fields: Extended WordPress plugin that is rated 9.8 out of 10 in severity and affects up to 100,000 installations.

The flaw enables unauthenticated attackers to register themselves with administrator privileges and gain full control of a website and all settings.

Advanced Custom Fields: Extended Plugin

The Advanced Custom Fields: Extended plugin is an add-on to the popular Advanced Custom Fields Pro plugin. It is used by WordPress site owners and developers to extend how custom fields work, manage front-end forms, create options pages, define custom post types and taxonomies, and customize the WordPress admin experience.

The plugin is widely used, with more than 100,000 active installations, and is commonly deployed on sites that rely on front-end forms and advanced content management workflows.

Who Can Exploit This Vulnerability

This vulnerability can be exploited by unauthenticated attackers, which means there is no barrier of first having to attain a higher permission level before launching an attack. If the affected version of the plugin is present with a specific configuration in place, anyone on the internet can attempt to exploit the flaw. That kind of exposure significantly increases risk because it removes the need for compromised credentials or insider access.

Privilege Escalation Exposure

The vulnerability is a privilege escalation flaw caused by missing role restrictions during user registration.

Specifically, the plugin’s insert_user function does not limit which user roles can be assigned when anyone creates a new user account. Under normal circumstances, WordPress strictly controls which roles users can select or be assigned during registration.

Because this check is missing, an attacker can submit a registration request that explicitly assigns the administrator role to the new account.

This issue only occurs when the site’s form configuration maps a custom field directly to the WordPress role field. When that condition is met, the plugin accepts the supplied role value without verifying that it is safe or permitted.

The flaw appears to be due to insufficient server-side validation of the form field’s “Choices” setting. The plugin seems to have relied on the HTML form to restrict which roles a user could select. For example, a developer could create a sign-up form that offers only the “subscriber” role. But the backend never verified that the role a user signed up with was one of the roles the form was supposed to allow.

What was probably happening is that an unauthenticated attacker could inspect the form’s HTML, see the field responsible for the user role, and intercept the HTTP request so that, for example, instead of sending role=subscriber, the attacker could change the value to role=administrator. The code responsible for the insert_user action took this input and passed it directly to WordPress user creation functions. It did not check if “administrator” was actually one of the allowed options in the field’s “Choices” list.
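The missing check is straightforward to illustrate. The sketch below (written in Python rather than the plugin’s actual PHP, with hypothetical names throughout) shows the kind of server-side validation that was absent: the submitted role is checked against the form’s configured choices instead of being trusted.

```python
# Illustrative sketch, in Python rather than the plugin's actual PHP,
# of the missing server-side check: validate the submitted role against
# the form's configured "Choices" instead of trusting the POSTed value.

ALLOWED_ROLES = {"subscriber"}  # roles this sign-up form is meant to offer

def validate_role(submitted_role: str) -> str:
    """Accept the role only if it is one of the form's allowed choices."""
    if submitted_role not in ALLOWED_ROLES:
        # A tampered request such as role=administrator is rejected here
        raise ValueError(f"role {submitted_role!r} is not permitted")
    return submitted_role
```

With a check like this in place, intercepting the request and swapping role=subscriber for role=administrator fails at the server regardless of what the HTML form displayed.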

The Changelog for the plugin lists the following entry as one of the patches to the plugin:

“Enforced front-end fields validation against their respective “Choices” settings.”

That entry in the changelog means the plugin now actively checks front-end form submissions to ensure the submitted value matches the field’s defined “Choices”, rather than trusting whatever value is posted.

There is also this entry in the changelog:

“Module: Forms – Added security measure for forms allowing user role selection”

This entry means the plugin added server-side protections to prevent abuse when a front-end form is allowed to set or select a WordPress user role.

Overall, the patches added stronger validation controls for front-end forms and made them more configurable.

What Attackers Can Gain

If successfully exploited, the attacker gains administrator-level access to the WordPress site.

That level of access allows attackers to:

  • Install or modify plugins and themes
  • Inject malicious code
  • Create backdoor administrator accounts
  • Steal or manipulate site data
  • Redirect visitors or distribute malware

Gaining administrator access amounts to a full site takeover.

The Wordfence advisory describes the issue as follows:

“The Advanced Custom Fields: Extended plugin for WordPress is vulnerable to Privilege Escalation in all versions up to, and including, 0.9.2.1. This is due to the ‘insert_user’ function not restricting the roles with which a user can register. This makes it possible for unauthenticated attackers to supply the ‘administrator’ role during registration and gain administrator access to the site.”

As Wordfence describes, the plugin trusts user-supplied input for account roles when it should not. That trust allows attackers to bypass WordPress’s normal protections and grant themselves the highest possible permission level.

Wordfence also reports having blocked active exploitation attempts targeting this vulnerability, indicating that attackers are already probing sites for exposure.

Conditions Required For Exploitation

The vulnerability is not automatically exploitable on every site running the plugin.

Exploitation requires that:

  • The site uses a front-end form provided by the plugin
  • The form maps a custom field directly to the WordPress user role

Patch Status and What Site Owners Should Do

The vulnerability affects all versions up to and including 0.9.2.1. The issue is addressed in version 0.9.2.2, which introduces additional validation and security checks around front-end forms and user role handling.

The official changelog for ACF Extended Basic 0.9.2.2 lists the following entries:

  • Module: Forms – Enforced front-end fields validation against their respective “Choices” settings
  • Module: Forms – Added security measure for forms allowing user role selection
  • Module: Forms – Added acfe/form/validate_value hook to validate fields individually on front
  • Module: Forms – Added acfe/form/pre_validate_value hook to bypass enforced validation

Site owners using this plugin should update immediately to the latest patched version. If updating is not possible, the plugin should be disabled until the fix can be applied.

Given the severity of the flaw and the lack of authentication required to exploit it, delaying action leaves affected sites exposed to a complete takeover.

Featured Image by Shutterstock/Art Furnace

The Smart Way To Take Back Control Of Google’s Performance Max [A Step-By-Step Guide]

This post was sponsored by Channable. The opinions expressed in this article are the sponsor’s own.

If you’ve ever watched your best-selling product devour your entire ad budget while dozens of promising SKUs sit in the dark, you’re not alone.

Google’s Performance Max (PMax) campaigns have transformed ecommerce advertising since launching in 2021.

For many advertisers, PMax introduced a significant challenge: a lack of transparency in budget allocation. Without clear insights into which placements, audiences, or assets are driving performance, it’s easy to feel like you’re flying blind.

The good news? You don’t have to stay there.

This guide walks you through a practical framework for reclaiming control over your Performance Max campaigns, allowing you to segment products by actual performance and make data-driven decisions rather than hope AI figures it out for you.

The Budget Black Hole: Where Your Performance Max Ad Spend Actually Goes

Most ecommerce brands start by organizing PMax campaigns around categories. Shoes in one campaign. Accessories in another. That seems logical and clean, but it completely ignores how products actually perform.

Here’s what typically happens:

  • Top sellers monopolize budget. Google’s algorithm prioritizes products with strong historical performance, which means your star items keep getting the spotlight while everything else struggles for visibility.
  • New arrivals never get traction. Without performance history, fresh products can’t compete, so they never build the data they need to succeed.
  • “Zombie” products stay invisible. Some items might perform well if given the chance, but static segmentation never gives them that opportunity.
  • Manual adjustments eat your time. Every tweak requires you to dig through data, make changes, and hope for the best.

The result? Wasted potential, uneven budget distribution, and marketing teams stuck reacting instead of strategizing. You’re already doing the hard work; this framework helps that effort go further and lets you set and manage your PPC budget efficiently and effectively.

How To Fix It: Segment Campaigns By What’s Actually Working

Instead of organizing campaigns by category, segment by how products actually perform.

This approach creates dynamic groupings that automatically shift as performance data changes, with no manual reshuffling required.

Step 1: Classify Your Products into Three Groups

Start by categorizing your catalogue based on real performance metrics: ROAS, clicks, conversions, and visibility.

Image created by Channable, January 2026

Star Products

These are your proven winners, with high ROAS, strong click-through rates, and consistent conversions. Your goal with stars is to maximize their potential while protecting margins.

  • Set higher ROAS targets (3x–5x or above based on your margins).
  • Allocate budget confidently.
  • Monitor to ensure profitability stays intact.

Zombie Products

These are the “invisible” items that haven’t had enough exposure to prove themselves. They might be underperformers, or they might be hidden gems waiting for their moment.

  • Set lower ROAS targets (0.5x–2x) to prioritize visibility.
  • Give them a dedicated budget to gather performance data.
  • Review regularly and promote graduates to the star category.

New Arrivals

Fresh products need their own ramp-up period before being judged against established items. Without historical data, they can’t compete fairly in a mixed campaign.

  • Create a separate campaign specifically for new launches.
  • Use dynamic date fields to automatically include recently added items.
  • Set goals focused on awareness and data collection rather than immediate ROAS.

Step 2: Define Your Performance Thresholds

Decide what metrics determine which bucket a product falls into. For example:

  • Stars: ROAS above 3x–5x, strong click volume, goal is maximizing profitability.
  • Zombies: ROAS below 2x or insufficient data, low click volume, goal is testing and learning.
  • New Arrivals: Date-based (for example, added within last 30 days), goal is building visibility.

Your thresholds will depend on your margins, industry, and historical benchmarks. The key is defining clear criteria so products can move between segments automatically as their performance changes.
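As a sketch of how these thresholds might be encoded, the following function classifies a product into one of the three buckets. All field names and cutoff values (3x ROAS, 20 clicks, 30 days) are illustrative assumptions, not any tool’s actual implementation; tune them to your own margins and benchmarks.

```python
# Hypothetical encoding of the Step 2 thresholds. Field names and
# cutoffs are illustrative assumptions.
from datetime import date

def classify(product: dict, today: date) -> str:
    if (today - product["added_on"]).days <= 30:
        return "new_arrival"   # date-based: gets a ramp-up period first
    if product["roas"] >= 3.0 and product["clicks"] >= 20:
        return "star"          # proven winner: maximize profitability
    return "zombie"            # low ROAS or too little data: test and learn
```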

Step 3: Shorten Your Analysis Window

Many advertisers default to 30-day lookback windows for performance analysis. For fast-moving catalogues, that’s too slow.

Consider shifting to a 14-day rolling window for better analysis. You’ll get:

  • Faster reactions to performance shifts
  • More accurate data for seasonal or trending items
  • Less wasted spend on products that peaked two weeks ago

This is especially important for fashion, home goods, and any category where trends move quickly.
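A rolling window is simple to compute once daily spend and revenue are available per product. A minimal sketch, assuming illustrative record fields:

```python
# Sketch of a 14-day rolling ROAS per product, assuming each daily
# record carries a date, spend, and revenue. Field names are illustrative.
from datetime import date, timedelta

def rolling_roas(daily_records: list, today: date, window_days: int = 14) -> float:
    cutoff = today - timedelta(days=window_days)
    recent = [r for r in daily_records if r["date"] > cutoff]
    spend = sum(r["spend"] for r in recent)
    revenue = sum(r["revenue"] for r in recent)
    return revenue / spend if spend else 0.0
```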

Step 4: Apply Segmentation Across All Channels

Your segmentation logic shouldn’t stop at Google. The same star/zombie/new arrival framework can (and should) apply to:

  • Meta Ads
  • Pinterest
  • TikTok
  • Criteo
  • Amazon

Cross-channel consistency compounds your optimization efforts. A product that’s a “zombie” on Google might be a star on TikTok, or vice versa. Unified segmentation helps you connect products to the right audiences on the right channels and distribute budget accordingly.

Step 5: Build Rules That Move Products Automatically

Here’s where the real efficiency gains come in. Instead of manually reviewing every SKU, create rules that automatically shift products between campaigns based on performance.

For example:

  • If ROAS exceeds 3x–5x over your analysis window – Move to Stars campaign
  • If ROAS falls below 2x or clicks drop below your average (for example, 20 clicks in 14 days) – Move to Zombies campaign
  • If product was added within a set time limit (for example, the last 30 days) – Include in New Arrivals campaign

This dynamic automation ensures your campaigns stay optimized without requiring constant manual intervention.
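The movement rules above can be expressed as simple data-driven predicates. A hypothetical sketch, where campaign names, field names, and thresholds are all assumptions:

```python
# Minimal sketch of the Step 5 movement rules as ordered predicates.
# Campaign names and thresholds are illustrative assumptions.

RULES = [
    ("Stars",   lambda p: p["roas"] >= 3.0),
    ("Zombies", lambda p: p["roas"] < 2.0 or p["clicks"] < 20),
]

def assign_campaign(product: dict) -> str:
    if product["days_since_added"] <= 30:
        return "New Arrivals"            # date rule takes precedence
    for campaign, predicate in RULES:
        if predicate(product):
            return campaign
    return product["current_campaign"]   # no rule fired: stay put
```

Products in the middle band (ROAS between 2x and 3x with healthy clicks) match no rule and simply stay where they are until the data pushes them one way or the other.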

Get Smart: Let Intelligent Automation Do the Heavy Lifting

Image created by Channable, January 2026

The steps above work—but implementing them manually across thousands of SKUs and multiple channels is time-consuming. Product-level performance data lives in different dashboards. Calculating ROAS at the SKU level requires combining data from multiple sources. And building automation rules from scratch takes technical resources most teams don’t have.

This is where the right feed management and PPC automation tooling really helps. For example, it can merge product-level performance data into a single view and let you build rules that automatically segment products based on criteria you define.

To see what this looks like in practice, Canadian fashion retailer La Maison Simons offers a useful reference point. They faced the same challenges: category-based campaigns where top sellers consumed the budget while newer items never gained traction.

After shifting to performance-based segmentation, they saw measurable improvements without increasing ad spend:

  • ROAS nearly doubled over a three-year period
  • Cost-per-click decreased while click-through rates improved
  • Average order value increased by 14%
  • Their dedicated new arrivals campaigns consistently outperformed expectations
  • Perhaps most notably, their previously “invisible” products became some of their strongest performers once they received dedicated visibility

The takeaway isn’t about any single tool; it’s that performance-driven segmentation works. When you stop letting one popular item take all the budget and start giving every product a fair shot based on data, the results tend to follow.

Learn more about the success story and the full details of their approach here.

Quick Principles to Keep in Mind

Image created by Channable, January 2026
  • Segment by performance, not category: Budget flows to what works, not what’s familiar
  • Use 14-day windows for fast-moving catalogues: Capture fresher signals, reduce wasted spend
  • Give new products their own campaign: Build data before judging against established items
  • Automate product movement between segments: Save time and stay responsive without manual work
  • Apply logic across all paid channels: Compounding optimization across Google, Meta, TikTok, and more

Your Next Step

Performance Max doesn’t have to feel like handing Google your wallet and hoping for the best. With the right segmentation strategy, you can restore control, surface overlooked opportunities and make smarter decisions about where your budget goes.

Curious whether your product data is ready for this kind of optimization? A free feed and segmentation audit can help you find gaps and opportunities, no commitment, just clarity.

Because better data leads to better decisions. And better decisions lead to results you can actually control.


Image Credits

Featured Image: Image by Channable Used with permission.

In-Post Images: Images by Channable. Used with permission.

The Download: digitizing India, and scoring embryos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The man who made India digital isn’t done yet

Nandan Nilekani can’t stop trying to push India into the future. He started nearly 30 years ago, masterminding an ongoing experiment in technological state capacity that started with Aadhaar—the world’s largest digital identity system. 

Using Aadhaar as the bedrock, Nilekani and people working with him went on to build a sprawling collection of free, interoperating online tools that add up to nothing less than a digital infrastructure for society, covering government services, banking, and health care. They offer convenience and access that would be eye-popping in wealthy countries a tenth of India’s size. 

At 70 years old, Nilekani should be retired. But he has a few more ideas. Read our profile to learn about what he’s set his sights on next.

—Edd Gent

Embryo scoring is slowly becoming more mainstream

Many Americans agree that it’s acceptable to screen embryos for severe genetic diseases. Far fewer say it’s okay to test for characteristics related to a future child’s appearance, behavior, or intelligence. But a few startups are now advertising what they claim is a way to do just that.

This new kind of testing—which can cost up to $50,000—is incredibly controversial. Nevertheless, the practice has grown popular in Silicon Valley, and it’s becoming more widely available to everyone. Read the full story.

—Julia Black
Embryo scoring is one of our 10 Breakthrough Technologies this year. Check out what else made the list, and scroll down to vote for the technology you think deserves the 11th slot.

Five AI predictions for 2026

What will surprise us most about AI in 2026?

Tune in at 12.30pm today to hear me, our senior AI editor Will Douglas Heaven, and senior AI reporter James O’Donnell discuss our “5 AI Predictions for 2026”. This special LinkedIn Live event will explore the trends that are poised to transform the next twelve months of AI. The conversation will also offer a first glimpse at EmTech AI 2026, MIT Technology Review’s longest-running AI event for business leadership. Sign up to join us later today!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Europe is trying to build its own DeepSeek
That’s been a goal for a while, but US hostility is making those efforts newly urgent. (Wired $)
Plenty of Europeans want to wean off US technology. That’s easier said than done. (New Scientist $)
DeepSeek may have found a new way to improve AI’s ability to remember. (MIT Technology Review $)

2 Ship-tracking data shows China is creating massive floating barriers
The maneuvers show that Beijing can now rapidly muster large numbers of the boats in disputed seas. (NYT $)
Quantum navigation could solve the military’s GPS jamming problem. (MIT Technology Review)

3 The AI bubble risks disrupting the global economy, says the IMF
But it’s hard to see anyone pumping the brakes any time soon. (FT $)
British politicians say the UK is being exposed to ‘serious harm’ by AI risks. (The Guardian)
What even is the AI bubble? (MIT Technology Review)

4 Cryptocurrencies are dying in record numbers
In an era of one-off joke coins and pump-and-dump scams, that’s surely a good thing. (Gizmodo)
President Trump has pardoned a lot of people who’ve committed financial crimes. (NBC)

5 Threads has more global daily mobile users than X now
And once-popular alternative Bluesky barely even makes the charts. (Forbes)

6 The UK is considering banning under-16s from social media
Just weeks after a similar ban took effect in Australia. (BBC)

7 You can burn yourself out with AI coding agents 
They could be set to make experienced programmers busier than ever before. (Ars Technica)
Why Anthropic’s Claude Code is taking the AI world by storm. (WSJ $)
AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)

8 Some tech billionaires are leaving California 👋
Not all though—the founders of Nvidia and Airbnb say they’ll stay and pay the 5% wealth tax. (WP $)
Tech bosses’ support for Trump is paying off for them big time. (FT $)

9 Matt Damon says Netflix tells directors to repeat movie plots
To accommodate all the people using their phones. (NME)

10 Why more people are going analog in 2026 🧶
Crafting, reading, and other screen-free hobbies are on the rise. (CNN)
Dumbphones are becoming popular too—but it’s worth thinking hard before you switch. (Wired $)

Quote of the day

“It may sound like American chauvinism…and it is. We’re done apologising about that.”

—Thomas Dans, a Trump appointee who heads the US Arctic Research Commission, tells the FT his boss is deadly serious about acquiring Greenland. 

One more thing

BRUCE PETERSON

Inside the fierce, messy fight over “healthy” sugar tech

On the outskirts of Charlottesville, Virginia, a new kind of sugar factory is taking shape. The facility is being developed by a startup called Bonumose. It uses a processed corn product called maltodextrin that is found in many junk foods and is calorically similar to table sugar (sucrose). 

But for Bonumose, maltodextrin isn’t an ingredient—it’s a raw material. When it’s poured into the company’s bioreactors, what emerges is tagatose. Found naturally in small concentrations in fruit, some grains, and milk, it is nearly as sweet as sucrose but apparently with only around half the calories, and wider health benefits.

Bonumose’s process originated in a company spun out of the Virginia Tech lab of Yi-Heng “Percival” Zhang. When MIT Technology Review spoke to Zhang, he was sitting alone in an empty lab in Tianjin, China, after serving a two-year sentence of supervised release in Virginia for conspiracy to defraud the US government, making false statements, and obstruction of justice. If sugar is the new oil, the global battle to control it has already begun. Read the full story.

—Mark Harris

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Paul Mescal just keeps getting cooler.
+ Make this year calmer with these evidence-backed tips. ($)
+ I can confirm that Lumie wake-up lamps really are worth it (and no one paid me to say so!)
+ There are some real gems in Green Day’s bassist Mike Dirnt’s favorite albums list.

The UK government is backing AI that can run its own lab experiments

A number of startups and universities that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work.

ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.
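The loop ARIA describes can be sketched schematically. Every function below is a placeholder for whatever models and lab automation a real system would plug in; none of the names come from ARIA itself.

```python
# Schematic sketch of the hypothesize -> experiment -> analyze loop.
# All callables are hypothetical stand-ins for real models and lab robots.

def ai_scientist_loop(question, propose, run_experiment, analyze, max_iters=10):
    hypotheses = propose(question, prior_results=[])
    results = []
    for _ in range(max_iters):
        for h in hypotheses:
            results.append((h, run_experiment(h)))   # automated lab work
        findings = analyze(results)
        if findings["conclusive"]:
            return findings                          # hand back to the human overseer
        # Feed results back in and run the loop again
        hypotheses = propose(question, prior_results=results)
    return analyze(results)
```

The human scientist sets the initial question and reviews the findings; everything between those two points is the “grunt work” the loop automates.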

“There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer. 

ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.

Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.

“We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”

Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.

And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. The experiments will then be run in an automated lab by Sandia National Laboratories in the US.

Taking the temperature

Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to determine how the way science is done is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.   

Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”

For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”

Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.  

AI scientists sit in a layer above those human-made tools and call on those tools as needed, says Rowstron. “But there’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”

That’s still some way off, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.

There are also unsolved problems with agentic systems in general, which limit how long they can run by themselves without going off track or making errors. For example, a study, titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included changes in the initial specifications and “overexcitement that declares success despite obvious failures.”

“Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”

“But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”

The era of agentic chaos and how data will save us

AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.

The agent explosion is coming

Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience. 

The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging. 

The reliability gap that’s holding AI back

Companies are investing heavily in AI, but the returns aren’t materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. The leaders among them, however, reported five times the revenue increases and three times the cost reductions of their peers. Clearly, there is a massive premium for being a leader.

What separates the leaders from the pack isn’t how much they’re spending or which models they’re using. Before scaling AI deployment, these “future-built” companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably. 

A framework for agent reliability: The four quadrants

To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance.

Take a simple example: an agent that orders you pizza. The model interprets your request (“get me a pizza”). The tool executes the action (calling the Domino’s or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?). 

Each dimension represents a potential failure point:

  • Models: The underlying AI systems that interpret prompts, generate responses, and make predictions
  • Tools: The integration layer that connects AI to enterprise systems, such as APIs, protocols, and connectors 
  • Context: The information agents need to understand the full business picture before making decisions, including customer histories, product catalogs, and supply chain networks
  • Governance: The policies, controls, and processes that ensure data quality, security, and compliance

This framework helps diagnose where reliability gaps emerge. When an enterprise agent fails, which quadrant is the problem? Is the model misunderstanding intent? Are the tools unavailable or broken? Is the context incomplete or contradictory? Or is there no mechanism to verify that the agent did what it was supposed to do?

Why this is a data problem, not a model problem

The temptation is to think that reliability will simply improve as models improve. And model capability is indeed advancing exponentially: the cost of inference has dropped nearly 900-fold in three years, hallucination rates are declining, and AI’s capacity to perform long tasks doubles every six months.

Tooling is also accelerating. Integration frameworks like the Model Context Protocol (MCP) make it dramatically easier to connect agents with enterprise systems and APIs.

If models are powerful and tools are maturing, then what is holding back adoption?

To borrow from James Carville, “It is the data, stupid.” The root cause of most misbehaving agents is misaligned, inconsistent, or incomplete data.

Enterprises have accumulated data debt over decades. Acquisitions, custom systems, departmental tools, and shadow IT have left data scattered across silos that rarely agree. Support systems do not match what is in marketing systems. Supplier data is duplicated across finance, procurement, and logistics. Locations have multiple representations depending on the source.

Drop a few agents into this environment, and they will perform wonderfully at first, because each one is given a curated set of systems to call. Add more agents and the cracks grow, as each one builds its own fragment of truth.

This dynamic has played out before. When business intelligence became self-serve, everyone started creating dashboards. Productivity soared, but reports failed to match one another. Now imagine that phenomenon not in static dashboards, but in AI agents that can take action. With agents, data inconsistency produces real business consequences, not just debates among departments.

Companies that build unified context and robust governance can deploy thousands of agents with confidence, knowing they’ll work together coherently and comply with business rules. Companies that skip this foundational work will watch their agents produce contradictory results, violate policies, and ultimately erode trust faster than they create value.

Leverage agentic AI without the chaos 

The question for enterprises centers on organizational readiness. Will your company prepare the data foundation needed to make agent transformation work? Or will you spend years debugging agents, one issue at a time, forever chasing problems that originate in infrastructure you never built?

Autonomous agents are already transforming how work gets done. But the enterprise will only experience the upside if those systems operate from the same truth. This ensures that when agents reason, plan, and act, they do so based on accurate, consistent, and up-to-date information. 

The companies generating value from AI today have built on fit-for-purpose data foundations. They recognized early that in an agentic world, data functions as essential infrastructure. A solid data foundation is what turns experimentation into dependable operations.

At Reltio, the focus is on building that foundation. The Reltio data management platform unifies core data from across the enterprise, giving every agent immediate access to the same business context. This unified approach enables enterprises to move faster, act smarter, and unlock the full value of AI.

Agents will define the future of the enterprise. Context intelligence will determine who leads it.

For leaders navigating this next wave of transformation, see Reltio’s practical guide:
Unlocking Agentic AI: A Business Playbook for Data Readiness. Get your copy now to learn how real-time context becomes the decisive advantage in the age of intelligence. 

Reimagining ERP for the agentic AI era

The story of enterprise resource planning (ERP) is really a story of businesses learning to organize themselves around the latest, greatest technology of the times. In the 1960s through the ’80s, mainframes, material requirements planning (MRP), and manufacturing resource planning (MRP II) brought core business data from file cabinets to centralized systems. Client-server architectures defined the ’80s and ’90s, taking digitization mainstream during the internet’s infancy. And in the 21st century, as work moved beyond the desktop, SaaS and cloud ushered in flexible access and elastic infrastructure.

The rise of composability and agentic AI marks yet another dawn—and an apt one for the nascent intelligence age. Composable architectures let organizations assemble capabilities from multiple systems in a mix-and-match fashion, so they can swap vendor gridlock for an à la carte portfolio of fit-for-purpose modules. On top of that architectural shift, agentic AI enables coordination across systems that weren’t originally designed to talk to one another.

Early indicators suggest that AI-enabled ERP will yield meaningful performance gains: One 2024 study found that organizations implementing AI-driven ERP solutions stand to gain around a 30% boost in user satisfaction and a 25% lift in productivity; another suggested that AI-driven ERP can lead to processing time savings of up to 45%, as well as improvements in decision accuracy to the tune of 60%.

These dual advancements deliver what previous ERP eras fell short of: freedom to innovate outside of vendor roadmaps, capacity for rapid iteration, and true interoperability across all critical functions. This shift signals the end of monolithic dependency, as well as a once-in-a-generation opportunity for early movers to gain a competitive edge.

Key takeaways include:

  • Enterprises are moving away from monolithic ERP vendor upgrades in favor of modular architectures that allow them to change or modernize components independently while keeping a stable core for essential transactions.
  • Agentic AI is a timely complement to composability, functioning as a UX and orchestration layer that can coordinate workflows across disparate systems and turn multi-step processes into automated, cross-platform operations.
  • These dual shifts are finally enabling technology architecture to organize around the business, instead of the business around the ERP. Companies can modernize by reconfiguring and extending what they already have, rather than relying on ERP-centric upgrades.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Best Writing Tools for Business in 2026

Good writing takes time, which is in short supply when you’re launching or running a business.

Fortunately, there are excellent online tools that can streamline composition, check grammar, and even help writers compensate for challenges such as dyslexia. AI capabilities, while not perfect, make these tools more powerful than ever.

Writing Aids

Grammarly

Category leader Grammarly is an all-purpose workhorse that checks for errors and style faults and suggests corrections and improvements. It integrates with Microsoft Office, Google Workspace, and the top web browsers on Windows, Mac, iOS, and Android platforms.

Grammarly offers a limited, free version and paid plans starting at $12 per user per month with a free trial.

Grammarly home page

Grammarly checks for errors and style faults and suggests corrections and improvements.

ProWritingAid

ProWritingAid goes beyond composition mechanics with features such as goal tracking, manuscript analysis, and the ability to compare your writing style to that of famous authors, such as Stephen King.

Subscriptions include a free basic version and paid plans starting at $120 per user per year.

Ginger

In addition to checking grammar and style across multiple platforms, programs, and devices, Ginger provides instant language translation into Spanish, French, German, and Japanese.

Ginger’s grammar checker is free. Paid versions start at $9.90 per month.

LanguageTool

LanguageTool caters to users who aren’t native speakers of the language they’re writing in. It claims to cover more than 30 languages, including Spanish, Dutch, German, Portuguese, Catalan, French, and six varieties of English.

A basic AI grammar and usage checker is free. Premium monthly plans start at $24.90 per user.

Hemingway Editor

Hemingway Editor tightens and simplifies prose and displays the changes with brightly colored highlights.

The basic online editor is free. Downloadable desktop versions for Mac and PC cost $19.99.

Otter

Otter is an automated notetaking tool that transcribes, outlines, and summarizes meetings and conversations. It integrates with popular platforms such as Google Workspace, HubSpot, Jira, Asana, and Zoom. In my testing, the raw AI-generated audio transcripts required a fair amount of cleanup, though the tool learns the voices of frequent speakers over time.

Otter offers a free, limited subscription. Paid plans start at $16.99 per user per month.

Reference Tools

AP Stylebook. Geared toward professional journalists, the Associated Press Stylebook offers usage recommendations, style suggestions, and grammar rules. The annual printed version is $34.95. The online Stylebook starts at $30 per user per year or $42 when bundled with Merriam-Webster Unabridged.

Home page of AP Stylebook

The Associated Press Stylebook offers usage recommendations, style suggestions, and grammar rules.

Merriam-Webster Unabridged. For a monthly subscription of $4.95, the authoritative dictionary-thesaurus is worthwhile for users needing more than built-in spellcheckers or free resources.

Chicago Manual of Style. Popular among professional editors and publishers, the Chicago Manual of Style from the University of Chicago Press delves into the fine points of grammar and usage. Various printed versions include a hardback book for $75. Subscriptions to the online version start at $48 per user per year.

Purdue OWL. Purdue University’s comprehensive, free Online Writing Lab is valuable for writers of all kinds. It includes grammar guides, plagiarism-avoidance tips, research and citation advice, overviews of subject-specific writing such as healthcare, and summaries of the most widely used style guides.

OpenAI Search Crawler Passes 55% Coverage In Hostinger Study via @sejournal, @MattGSouthern

Hostinger analyzed 66 billion bot requests across more than 5 million websites and found that AI crawlers are following two different paths.

LLM training bots are losing access to the web as more sites block them. Meanwhile, AI assistant bots that power search tools like ChatGPT are expanding their reach.

The analysis draws on anonymized server logs from three six-day windows, with bots classified according to the AI.txt project’s categories.

Training Bots Are Getting Blocked

The starkest finding involves OpenAI’s GPTBot, which collects data for model training. Its website coverage dropped from 84% to 12% over the study period.

Meta’s ExternalAgent was the largest training-category crawler by request volume in Hostinger’s data. Hostinger says this training-bot group shows the strongest declines overall, driven in part by sites blocking AI training crawlers.

These numbers align with patterns I’ve tracked through multiple studies. BuzzStream found that 79% of top news publishers now block at least one training bot. Cloudflare’s Year in Review showed GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives across top domains.

The data quantifies what those studies suggested. Hostinger interprets the drop in training-bot coverage as a sign that more sites are blocking those crawlers, even when request volumes remain high.

Assistant Bots Tell a Different Story

While training bots face resistance, the bots that power AI search tools are expanding access.

OpenAI’s OAI-SearchBot, which fetches content for ChatGPT’s search feature, reached 55.67% average coverage. TikTok’s bot grew to 25.67% coverage with 1.4 billion requests. Apple’s bot reached 24.33% coverage.

These assistant crawls are user-triggered and more targeted. They serve users directly rather than collecting training data, which may explain why sites treat them differently.

Classic Search Remains Stable

Traditional search engine crawlers held steady throughout the study. Googlebot maintained 72% average coverage with 14.7 billion requests. Bingbot stayed at 57.67% coverage.

The stability contrasts with changes in the AI category. Google’s main crawler faces a unique position since blocking it affects search visibility.

SEO Tools Show Decline

SEO and marketing crawlers saw declining coverage. Ahrefs maintained the largest footprint at 60% coverage, but the category overall shrank. Hostinger attributes this to two factors: these tools increasingly focus on sites actively doing SEO work, and website owners are blocking resource-intensive crawlers.

I reported on the resource concerns when Vercel data showed GPTBot generating 569 million requests in a single month. For some publishers, the bandwidth costs became a business problem.

Why This Matters

The data confirms a pattern that’s been building over the past year. Site operators are drawing a line between AI crawlers they’ll allow and those they won’t.

The decision comes down to function. Training bots collect content to improve models without sending traffic back. Assistant bots fetch content to answer specific user questions, which means they can surface your content in AI search results.

Hostinger suggests a middle path: block training bots while allowing assistant bots that drive discovery. This lets you participate in AI search without contributing to model training.

Looking Ahead

OpenAI recommends allowing OAI-SearchBot if you want your site to appear in ChatGPT search results, even if you block GPTBot.

OpenAI’s documentation clarifies the difference. OAI-SearchBot controls inclusion in ChatGPT search results and respects robots.txt. ChatGPT-User handles user-initiated browsing and may not be governed by robots.txt in the same way.
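In practice, that split maps to a few lines of robots.txt. The fragment below is an illustrative sketch, not OpenAI’s official recommended file: it disallows the training crawler while leaving the search crawler free to fetch pages. Verify the current user-agent tokens against OpenAI’s bot documentation before deploying.

```text
# Disallow the model-training crawler
User-agent: GPTBot
Disallow: /

# Allow the search crawler so pages remain eligible for ChatGPT search results
User-agent: OAI-SearchBot
Allow: /
```

Note that under the Robots Exclusion Protocol, a crawler follows the most specific group that matches its user agent and falls back to `User-agent: *` otherwise, so sites with an existing wildcard block need these named groups in addition.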

Hostinger recommends checking server logs to see what’s actually hitting your site, then making blocking decisions based on your goals. If you’re concerned about server load, you can use CDN-level blocking. If you want to potentially increase your AI visibility, review current AI crawler user agents and allow only the specific bots that support your strategy.
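As a starting point for that log check, a small shell sketch like the one below tallies requests per known AI crawler in a combined-format access log. The sample heredoc, the log path, and the list of bot tokens are all assumptions for illustration; swap in your real log file and the user agents you care about.

```shell
# Create a tiny sample access log (stand-in for e.g. /var/log/nginx/access.log).
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [10/Jan/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 1234 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"
5.6.7.8 - - [10/Jan/2026:10:01:00 +0000] "GET /a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"
9.9.9.9 - - [10/Jan/2026:10:02:00 +0000] "GET /b HTTP/1.1" 200 256 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"
EOF

# Extract known crawler tokens from the user-agent strings and count
# occurrences, busiest bot first.
grep -oiE 'GPTBot|OAI-SearchBot|ChatGPT-User|ClaudeBot|CCBot|Googlebot|Bingbot' /tmp/sample_access.log \
  | sort | uniq -c | sort -rn
```

Per-bot request counts like these make it easier to decide which user agents to allow, block, or rate-limit at the CDN.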


Featured Image: BestForBest/Shutterstock

The Great Decoupling via @sejournal, @Kevin_Indig

SEO died as a traffic channel the moment pipeline stopped following page views. Traffic is down for many sites, or growing at nowhere near the 2019-2022 rates, yet demos and pipeline are up for brands that shifted from chasing clicks to building authority.

What you’ll get in today’s memo:

  • Why traffic and pipeline decoupled.
  • What brand strength actually means in AI search.
  • How to reframe SEO with executives.
The old funnel has holes. Traffic and pipeline no longer move together. (Image Credit: Kevin Indig)

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

1. We’ve Hit Peak Search Volume For Traditional Queries

Image Credit: Kevin Indig

Short-head keyword demand is in permanent decline and likely contributing to slowed traffic growth or decline.

An analysis of roughly 10,000 short-head keywords shows that collective search volume grew only 1.2% over the last 12 months and is forecasted to decline by 0.74% over the next 12 months.

Two forces are driving it:

  • Fragmentation into long-tail: Demand did not disappear; it atomized into thousands of specific queries.
  • Bypass behavior: More users start in AI interfaces (AIOs, AI Mode, ChatGPT) instead of classic search.

This shift is irreversible for four structural reasons:

1. AI Overviews are here to stay. Google’s revenue model depends on keeping users inside the SERP. Zero-click search protects Google’s ad business. The company is not reverting to the 10 blue links.

2. LLM outputs are preferred starting points. Many users have conditioned themselves to expect direct answers. The behavior change is complete.

3. Zero-click is now the default expectation. Clicking through now feels like friction, not value. If the answer or solution isn’t easily acquired, the search experience failed.

4. Content supply exploded. There is significantly more content competing for the same queries than three years ago. AI-generated articles, Reddit threads, YouTube videos, and newsletters all compete for visibility. Even if visibility or “rankings” hold, CTR collapses under the weight of infinite options.

Optimizing for traffic growth in this environment is like optimizing for fax machine usage in 2010. The channel is structurally shifting – the products that people use to find answers have fundamentally changed.

2. Traffic And Pipeline Decoupled Because AI Ate The Click

The correlation between organic traffic and pipeline has broken. But it takes a bit more work to convince stakeholders and executives. We’re seeing this across the industry.

In December, Maeva Cifuentes reported traffic growth of 32% for one of her clients, while signups grew 75% over the same six-month period. Her post was in response to one from Gaetano DiNardi, who found no correlation between traffic and pipeline across multiple B2B SaaS companies he advises. Maeva’s client data shows you can grow pipeline 2.3x faster than traffic. Gaetano’s data shows you can grow pipeline while traffic stays flat or even declines.

Image Credit: Kevin Indig

The classic SEO model assumed a linear relationship: More rankings meant more clicks, more clicks meant more traffic, more traffic meant more leads.

Now, AI answers queries without sending clicks. The Growth Memo AI Mode Study found that when the search task was informational and non-transactional, external clicks to sources outside the AI Mode output were nearly zero across all user tasks. Users get the information they need – directly in their interface of choice – without ever visiting your site.

But buying intent didn’t disappear with the clicks.

SEO creates influence. It can still shape which brands buyers trust. It just doesn’t deliver the click anymore.

Education happens inside the AI interface. Brand selection happens after. Your traffic vanished, but the demand for your product/services didn’t.

This explains why Maeva noted she has clients whose traffic is declining, but demos are growing by double digits month-over-month.

Image Credit: Kevin Indig

The SEO work didn’t stop working; the measurement broke. Teams that optimized for clicks are being judged on a metric that no longer predicts business outcomes.

3. Strong Brands Still Win In AI Search, But “Brand Strength” Has A New Definition

In AI search, performance depends less on “more pages” and more on whether AI systems can confidently understand, trust, and cite you for a specific audience and context.

Brand strength in AI search has four components:

  1. Topical Authority: Complete ownership of the conceptual map (see topic-first SEO), not just keyword coverage.
  2. ICP Alignment: Answers tailored to specific buyer questions, prioritizing relevance over volume (see Personas are critical for AI search).
  3. Third-Party Validation: Citations from category-defining sources matter more than high-DA links (see the data in How AI weighs your links).
  4. Positioning Clarity: LLMs must recognize what a brand is known for. Vague positioning gets skipped; sharp positioning gets cited (covered in State of AI Search).

SEO teams that are structured for traffic optimization are now misaligned with business outcomes.

The conversation you need to have is “traffic and pipeline decoupled, here’s the data proving it, and here’s what we’re measuring instead.”

Move from keyword-first workflows to ICP-first workflows. Start with ICP research (what questions do your buyers ask and where do they ask them), positioning (what are you known for), and omnichannel distribution (SEO + Reddit + YouTube + earned media). SEO is no longer a standalone channel. It’s one input in a brand-building system.

Move from traffic reporting to influence reporting. Stop leading stakeholder conversations with sessions, impressions, and rankings. Report on brand lift (are more people searching for you by name?), pipeline influence (what percentage of demos started with organic touchpoints?), and LLM visibility rates (how often do AI systems mention your brand vs cite your content?).

4. The Uncomfortable Question: If SEO Doesn’t Drive Traffic Anymore, What Does It Do?

Here’s what SEO actually does and always did: It shapes mental availability and brand recognition, builds topic/category authority, frames the problem (and the solution), and reduces buyer uncertainty.

Traffic was a proxy for those things. The click was the observable action, but the trust was the outcome that mattered.

LLM-based search has removed the click but kept the trust-building. Users still learn from your content. It just happens inside an LLM interface instead of on your domain. Your content can still influence which brands buyers trust. Yes, it’s harder to measure because it’s invisible to analytics. But the outcome – buyers choosing your brand when they’re ready to buy – is the same.

SEO influences brand preference within the category. When buyers are in-market and researching solutions, SEO determines whether your brand is in the consideration set and whether AI systems recommend you.

Traffic was never the point. It was just the easiest thing to measure.


Featured Image: Paulo Bobita/Search Engine Journal