Ask A PPC: How Do I Avoid Cannibalization On Similar Products? via @sejournal, @navahf

There’s nothing worse than watching your own products compete against each other.

When your paid media strategy starts pitting your product lines against one another, you’re not just inflating costs; you’re undercutting your own chances at conversion.

That’s the question this month’s “Ask A PPC” will tackle:

“I work for a company that has three brands in the same niche with a high ticket item for house renovation. All companies have high spend on search ads, but we are targeting the same keywords and we are seeing cannibalization.

What can we do with our bidding strategy to try and reduce our CPC and still compete on the same products/keywords, but not cannibalize each other?”

Let’s break down how to avoid keyword cannibalization, particularly when dealing with premium products, and how to structure campaigns in a way that keeps everything working together.

The Hard Truth: You Can’t Avoid All Cannibalization

Let’s start here because this is what no one wants to hear: If you’re targeting the same non-branded keywords, the same geographies, and similar audiences with similar value props, some level of internal competition is inevitable.

Search campaigns don’t know your product lines are siblings. All they see are bids, relevance scores, and conversion data. Some keywords/ads will win. Some won’t.

The goal is to mitigate the internal crossfire and make strategic decisions that give every product its best shot to shine.

Prioritize: Which Products Get Which Keywords?

We don’t like to play favorites with our products, but when it comes to generic, high-volume keywords, you might have to.

Unless you have contractual obligations to spend equally across product lines (try to avoid this), you’ll need to assign certain non-branded queries to one product or another.

Here’s how you can do it:

  • Segment by market: Allocate geographic zones to different products based on performance trends, sales reps, or product-market fit.
  • Use keyword research as a compass: Both Google’s and Microsoft’s keyword planners can show you which search terms have better affinity with which product.
  • Establish thematic lanes: If Product A is more “entry-level” and Product B is the “pro version,” let them own different stages of the funnel.

Use Category Pages, Not Product Pages

One workaround, especially with Dynamic Search Ads (DSA) and Performance Max (PMax), is to avoid pushing people directly to product pages. Instead, drive them to category or collection pages.

Why this works:

  • It gives consumers options without forcing them to pick one.
  • You can still control targeting and ad creative at the campaign or asset group level.
  • It creates a more balanced distribution of visibility without inflating your cost per click (CPC) by bidding on the same SKUs.

DSAs and PMax campaigns do this particularly well. You’re not bidding on keywords in the traditional sense; you’re letting Google’s (or Microsoft’s) AI determine which queries to match based on content and intent.

On Google, AI Max lets you guide that intent more narrowly through ad group-level settings.

On Microsoft, PMax can do something similar, especially if you feed it clean, structured data and lean into visual creative.

Build A Branded Safety Net

You likely already have branded campaigns in place; if you don’t, setting them up should be a top priority.

Branded search and Shopping should ensure that anyone looking for a specific product by name sees only that product. This is where you can (and should) be strict about campaign segmentation.

Branded campaigns give you clean performance data, protect your CPCs from cannibalization, and provide the clearest attribution path.

Leverage Visual Differentiation

This is where platforms like Google Demand Gen and Microsoft Audience Ads really shine.

Visual content lets you sidestep keywords altogether and lean into product storytelling. You can target by interest, topic, or custom segments – not search intent – which means you can:

  • Run one campaign per product and assign each a budget.
  • Or run one big campaign and let the creative guide user choice.

You can use PMax here, too, especially on Microsoft, where PMax makes it more likely to secure Copilot placements across mobile and desktop.

Copilot has been shown to deliver 25% higher relevance than traditional search, according to Microsoft internal data.

The key is to treat these upper-funnel plays as audience builders. Then, once users engage, you can segment them with remarketing across both platforms.

Pro tip: On Microsoft, even just an impression is enough to build an audience. Which means your remarketing and exclusions can get very precise, very quickly.

As long as there’s at least one audience ad campaign among your impression-based remarketing sources, you can let PMax remarket to PMax traffic and Search/Shopping remarket to Search/Shopping traffic. In other words, you can capture intent from Copilot even if users didn’t engage with you there.

Does This Really Solve Cannibalization?

The only surefire way to fully prevent cannibalization would be to run entirely separate ad accounts, one per product. But that opens up a Pandora’s box of compliance risks.

Google and Microsoft are both very aware of efforts to double-serve, and if they perceive your accounts as trying to game the system – even if you’re just trying to stay organized – you could end up suspended.

So instead, your best move is to manage the overlap, not eliminate it. Focus on:

  • Using category pages for non-branded queries.
  • Owning branded queries with tightly segmented campaigns.
  • Differentiating products visually through audience-first formats.
  • Using geographic and thematic separation when assigning generic keywords.

When done right, the consumer makes the final decision, not your CPC strategy. That’s not cannibalization. That’s just a user choosing which of your great products fits their needs best. And either way? You win.

Final Takeaways

To recap:

  • You can’t fully eliminate cannibalization without risking platform policy violations.
  • Smart segmentation of campaigns by geography, theme, and intent helps mitigate overlap.
  • Category pages + visual ads can guide consumers to the right product without inflating CPCs.
  • Branded campaigns are your best friend; keep them clean, tight, and product-specific.
  • Audience-based targeting gives you control without competing on search terms.

At the end of the day, your campaigns should reflect how your users shop: exploring, comparing, deciding. Make that process easier for them, and less expensive for you.

Featured Image: Paulo Bobita/Search Engine Journal

6 AI Marketing Myths That Are Costing You Money [Webinar] via @sejournal, @duchessjenm

Stop letting AI drain your budget. Learn how to make it work for you.

Think AI can fully run your marketing strategy on autopilot? 

Or that AI-generated content should deliver instant results? 

It is time to bust the AI myths that are slowing you down and costing you money.

Join Bailey Beckham, Senior Partner Marketing Manager at CallRail, and Jennifer McDonald, Senior Marketing Manager at Search Engine Journal, on August 21, 2025, for an exclusive webinar. Get the insights you need to stop wasting time and money and start leveraging AI the right way.

Why this session is essential:

AI tools can’t run your strategy on autopilot. You need to make smarter decisions, ask the right questions, and guide your AI tools to work for you, not against you. 

This webinar will help you unlock AI’s full potential and optimize your content to improve your marketing performance.

Register now to learn how to get your content loved by AI, LLMs, and most importantly, your audience. Can’t attend live? Don’t worry, sign up anyway, and we will send you the on-demand recording.

The Download: OpenAI’s open-weight models, and the future of internet search

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI has finally released open-weight language models

The news: OpenAI has finally released its first open-weight large language models since 2019’s GPT-2. Unlike the models available through OpenAI’s web interface, these new open models can be freely downloaded, run, and even modified on laptops and other local devices.

Why it matters: These releases re-establish OpenAI as a presence for users of open models. That’s particularly notable at a time when Meta, which had previously dominated the American open-model landscape with its Llama models, may be reorienting toward closed releases—and when Chinese open models are becoming more popular than their American competitors. Read the full story

—Grace Huckins

MIT Technology Review Narrated: AI means the end of internet search as we’ve known it

The biggest change to the way search engines deliver information to us since the 1990s is happening right now. No more keyword searching. Instead, you can ask questions in natural language. And instead of links, you’ll increasingly be met with answers written by generative AI and based on live information from across the internet, delivered the same way.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Nvidia insists its AI chips don’t have a “kill switch”
After China’s Cyberspace Administration asked for security documentation. (CNBC)
+ The country’s ambitions to consolidate its chip giants aren’t going to plan. (FT $)
+ Two Chinese nationals have been charged with illegally shipping chips. (Reuters)

2 America’s new data centers are driving colossal electricity demand
And a handful of equipment makers are reaping the benefits. (FT $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

3 RFK Jr has cancelled close to $500 million in mRNA vaccine contracts 
Which could leave us dangerously underprepared for a future pandemic. (Politico)
+ We’re losing a key insight into global health. (Vox)
+ How measuring vaccine hesitancy could help health professionals tackle it. (MIT Technology Review)

4 Uber has a sexual assault problem
Newly-unveiled records show it gathered far more sexual assault and misconduct reports than previously revealed. (NYT $)

5 A British politician created an AI clone of himself
And although it provoked a backlash, other MPs may follow his lead. (WP $)
+ A former CNN journalist has interviewed an AI version of a mass-shooting victim. (The Guardian)

6 xAI’s new Grok Imagine tool has a “spicy” mode
Which seems to be code for non-consensual porn images. (The Verge)  
+ It’s already generated fake Taylor Swift nudes without being asked. (Ars Technica)

7 How does ChatGPT fare as a couple’s counselor?
It gets some stuff right. But it also gets some things really wrong. (NPR)
+ The AI relationship revolution is already here. (MIT Technology Review)

8 Syria’s refugees are returning to rebuild its tech industry
But sectarian violence and poor connectivity mean it’s an uphill battle. (Rest of World)

9 Sales of Ozempic have dropped
Rival Mounjaro seems to be more effective. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

10 Google Calendar rules college kids’ lives
They schedule everything from assignments to parties and hook ups. (WSJ $)

Quote of the day

“This is a bad day for science.”

—Scott Hensley, an immunologist at the University of Pennsylvania, criticizes the Department of Health and Human Services’ decision to cancel hundreds of millions of dollars in funding for mRNA vaccine projects, the New York Times reports.

One more thing

Future space food could be made from astronaut breath

The future of space food could be as simple—and weird—as a protein shake made with astronaut breath or a burger made from fungus.

For decades, astronauts have relied mostly on pre-packaged food during their forays off our planet. With missions beyond Earth orbit in sight, a NASA-led competition is hoping to change all that and usher in a new era of sustainable space food.

To solve the problem of feeding astronauts on long-duration missions, NASA asked companies to propose novel ways to develop sustainable foods for future missions. Around 200 rose to the challenge—creating nutritious (and outlandish) culinary creations in the process. Read the full story

—Jonathan O’Callaghan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ There are a lot of funny cat videos out there but honestly, this is top-drawer.
+ Check out this adorable website where people share what they see in clouds.
+ Babe you’re glowing! No seriously, you literally are
+ I loved watching this woman from London’s East End wax lyrical about the dawn of TV.

Five ways that AI is learning to improve itself

Last week, Mark Zuckerberg declared that Meta is aiming to achieve smarter-than-human AI. He seems to have a recipe for achieving that goal, and the first ingredient is human talent: Zuckerberg has reportedly tried to lure top researchers to Meta Superintelligence Labs with nine-figure offers. The second ingredient is AI itself.  Zuckerberg recently said on an earnings call that Meta Superintelligence Labs will be focused on building self-improving AI—systems that can bootstrap themselves to higher and higher levels of performance.

The possibility of self-improvement distinguishes AI from other revolutionary technologies. CRISPR can’t improve its own targeting of DNA sequences, and fusion reactors can’t figure out how to make the technology commercially viable. But LLMs can optimize the computer chips they run on, train other LLMs cheaply and efficiently, and perhaps even come up with original ideas for AI research. And they’ve already made some progress in all these domains.

According to Zuckerberg, AI self-improvement could bring about a world in which humans are liberated from workaday drudgery and can pursue their highest goals with the support of brilliant, hypereffective artificial companions. But self-improvement also creates a fundamental risk, according to Chris Painter, the policy director at the AI research nonprofit METR. If AI accelerates the development of its own capabilities, he says, it could rapidly get better at hacking, designing weapons, and manipulating people. Some researchers even speculate that this positive feedback cycle could lead to an “intelligence explosion,” in which AI rapidly launches itself far beyond the level of human capabilities.

But you don’t have to be a doomer to take the implications of self-improving AI seriously. OpenAI, Anthropic, and Google all include references to automated AI research in their AI safety frameworks, alongside more familiar risk categories such as chemical weapons and cybersecurity. “I think this is the fastest path to powerful AI,” says Jeff Clune, a professor of computer science at the University of British Columbia and senior research advisor at Google DeepMind. “It’s probably the most important thing we should be thinking about.”

By the same token, Clune says, automating AI research and development could have enormous upsides. On our own, we humans might not be able to think up the innovations and improvements that will allow AI to one day tackle prodigious problems like cancer and climate change.

For now, human ingenuity is still the primary engine of AI advancement; otherwise, Meta would hardly have made such exorbitant offers to attract researchers to its superintelligence lab. But AI is already contributing to its own development, and it’s set to take even more of a role in the years to come. Here are five ways that AI is making itself better.

1. Enhancing productivity

Today, the most important contribution that LLMs make to AI development may also be the most banal. “The biggest thing is coding assistance,” says Tom Davidson, a senior research fellow at Forethought, an AI research nonprofit. Tools that help engineers write software more quickly, such as Claude Code and Cursor, appear popular across the AI industry: Google CEO Sundar Pichai claimed in October 2024 that a quarter of the company’s new code was generated by AI, and Anthropic recently documented a wide variety of ways that its employees use Claude Code. If engineers are more productive because of this coding assistance, they will be able to design, test, and deploy new AI systems more quickly.

But the productivity advantage that these tools confer remains uncertain: If engineers are spending large amounts of time correcting errors made by AI systems, they might not be getting any more work done, even if they are spending less of their time writing code manually. A recent study from METR found that developers take about 20% longer to complete tasks when using AI coding assistants, though Nate Rush, a member of METR’s technical staff who co-led the study, notes that it only examined extremely experienced developers working on large code bases. Its conclusions might not apply to AI researchers who write up quick scripts to run experiments.

Conducting a similar study within the frontier labs could help provide a much clearer picture of whether coding assistants are making AI researchers at the cutting edge more productive, Rush says—but that work hasn’t yet been undertaken. In the meantime, just taking software engineers’ word for it isn’t enough: The developers METR studied thought that the AI coding tools had made them work more efficiently, even though the tools had actually slowed them down substantially.

2. Optimizing infrastructure

Writing code quickly isn’t that much of an advantage if you have to wait hours, days, or weeks for it to run. LLM training, in particular, is an agonizingly slow process, and the most sophisticated reasoning models can take many minutes to generate a single response. These delays are major bottlenecks for AI development, says Azalia Mirhoseini, an assistant professor of computer science at Stanford University and senior staff scientist at Google DeepMind. “If we can run AI faster, we can innovate more,” she says.

That’s why Mirhoseini has been using AI to optimize AI chips. Back in 2021, she and her collaborators at Google built a non-LLM AI system that could decide where to place various components on a computer chip to optimize efficiency. Although some other researchers failed to replicate the study’s results, Mirhoseini says that Nature investigated the paper and upheld the work’s validity—and she notes that Google has used the system’s designs for multiple generations of its custom AI chips.

More recently, Mirhoseini has applied LLMs to the problem of writing kernels, low-level functions that control how various operations, like matrix multiplication, are carried out in chips. She’s found that even general-purpose LLMs can, in some cases, write kernels that run faster than the human-designed versions.

Elsewhere at Google, scientists built a system that they used to optimize various parts of the company’s LLM infrastructure. The system, called AlphaEvolve, prompts Google’s Gemini LLM to write algorithms for solving some problem, evaluates those algorithms, and asks Gemini to improve on the most successful—and repeats that process several times. AlphaEvolve designed a new approach for running datacenters that saved 0.7% of Google’s computational resources, made further improvements to Google’s custom chip design, and designed a new kernel that sped up Gemini’s training by 1%.   

That might sound like a small improvement, but at a huge company like Google it equates to enormous savings of time, money, and energy. And Matej Balog, a staff research scientist at Google DeepMind who led the AlphaEvolve project, says that he and his team tested the system on only a small component of Gemini’s overall training pipeline. Applying it more broadly, he says, could lead to more savings.
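The propose-evaluate-refine loop described above can be sketched generically. The real AlphaEvolve system prompts Gemini and benchmarks generated algorithms; the sketch below is a toy stand-in, where `mutate` substitutes for "ask the LLM to improve the best candidate" and `evaluate` substitutes for benchmarking:

```python
import random

def evaluate(candidate):
    # Stand-in fitness function. In AlphaEvolve this would benchmark a
    # generated algorithm (e.g., kernel runtime); here we score a list
    # of numbers by how close their sum is to a target of 100.
    return -abs(sum(candidate) - 100)

def mutate(candidate):
    # Stand-in for "ask the LLM to improve the best candidate so far".
    new = candidate[:]
    i = random.randrange(len(new))
    new[i] += random.choice([-1, 1])
    return new

def evolve(seed, generations=500):
    best, best_score = seed, evaluate(seed)
    for _ in range(generations):
        child = mutate(best)
        score = evaluate(child)
        if score > best_score:  # keep only strict improvements
            best, best_score = child, score
    return best, best_score

random.seed(0)
best, score = evolve([10, 20, 30])
print(best, score)
```

The key design point is the same as in the article: candidates are always evaluated against a concrete metric, and only improvements survive, so the loop can only move forward.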

3. Automating training

LLMs are famously data hungry, and training them is costly at every stage. In some specific domains—unusual programming languages, for example—real-world data is too scarce to train LLMs effectively. Reinforcement learning with human feedback, a technique in which humans score LLM responses to prompts and the LLMs are then trained using those scores, has been key to creating models that behave in line with human standards and preferences, but obtaining human feedback is slow and expensive. 

Increasingly, LLMs are being used to fill in the gaps. If prompted with plenty of examples, LLMs can generate plausible synthetic data in domains in which they haven’t been trained, and that synthetic data can then be used for training. LLMs can also be used effectively for reinforcement learning: In an approach called “LLM as a judge,” LLMs, rather than humans, are used to score the outputs of models that are being trained. That approach is key to the influential “Constitutional AI” framework proposed by Anthropic researchers in 2022, in which one LLM is trained to be less harmful based on feedback from another LLM.

Data scarcity is a particularly acute problem for AI agents. Effective agents need to be able to carry out multistep plans to accomplish particular tasks, but examples of successful step-by-step task completion are scarce online, and using humans to generate new examples would be pricey. To overcome this limitation, Stanford’s Mirhoseini and her colleagues have recently piloted a technique in which an LLM agent generates a possible step-by-step approach to a given problem, an LLM judge evaluates whether each step is valid, and then a new LLM agent is trained on those steps. “You’re not limited by data anymore, because the model can just arbitrarily generate more and more experiences,” Mirhoseini says.
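The pipeline Mirhoseini describes, where one model proposes step-by-step plans, a judge model validates each step, and only validated trajectories become training data, can be sketched with stand-in model calls. The `propose_steps` and `judge_step` functions below are hypothetical placeholders for real LLM calls, not any published API:

```python
def propose_steps(problem):
    # Placeholder for the generator LLM: returns a candidate
    # step-by-step plan for the given problem.
    return [f"step {i}: work on {problem}" for i in range(1, 4)]

def judge_step(step):
    # Placeholder for the judge LLM: returns True if the step is valid.
    # A real judge would be prompted to check the step's reasoning.
    return "work on" in step

def build_training_data(problems):
    dataset = []
    for problem in problems:
        plan = propose_steps(problem)
        # Keep a trajectory only if the judge approves every step, so
        # the new agent is trained exclusively on validated traces.
        if all(judge_step(s) for s in plan):
            dataset.append({"problem": problem, "plan": plan})
    return dataset

data = build_training_data(["sort a list", "parse a date"])
print(len(data))  # 2
```

Because the generator can be run indefinitely, the size of the validated dataset is bounded by judge quality and compute rather than by scarce human-written examples, which is the point Mirhoseini makes.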

4. Perfecting agent design

One area where LLMs haven’t yet made major contributions is in the design of LLMs themselves. Today’s LLMs are all based on a neural-network structure called a transformer, which was proposed by human researchers in 2017, and the notable improvements that have since been made to the architecture were also human-designed. 

But the rise of LLM agents has created an entirely new design universe to explore. Agents need tools to interact with the outside world and instructions for how to use them, and optimizing those tools and instructions is essential to producing effective agents. “Humans haven’t spent as much time mapping out all these ideas, so there’s a lot more low-hanging fruit,” Clune says. “It’s easier to just create an AI system to go pick it.”

Together with researchers at the startup Sakana AI, Clune created a system called a “Darwin Gödel Machine”: an LLM agent that can iteratively modify its prompts, tools, and other aspects of its code to improve its own task performance. Not only did the Darwin Gödel Machine achieve higher task scores through modifying itself, but as it evolved, it also managed to find new modifications that its original version wouldn’t have been able to discover. It had entered a true self-improvement loop.

5. Advancing research

Although LLMs are speeding up numerous parts of the LLM development pipeline, humans may still remain essential to AI research for quite a while. Many experts point to “research taste,” or the ability that the best scientists have to pick out promising new research questions and directions, as both a particular challenge for AI and a key ingredient in AI development. 

But Clune says research taste might not be as much of a challenge for AI as some researchers think. He and Sakana AI researchers are working on an end-to-end system for AI research that they call the “AI Scientist.” It searches through the scientific literature to determine its own research question, runs experiments to answer that question, and then writes up its results.

One paper that it wrote earlier this year, in which it devised and tested a new training strategy aimed at making neural networks better at combining examples from their training data, was anonymously submitted to a workshop at the International Conference on Machine Learning, or ICML—one of the most prestigious conferences in the field—with the consent of the workshop organizers. The training strategy didn’t end up working, but the paper was scored highly enough by reviewers to qualify it for acceptance (it is worth noting that ICML workshops have lower standards for acceptance than the main conference). In another instance, Clune says, the AI Scientist came up with a research idea that was later independently proposed by a human researcher on X, where it attracted plenty of interest from other scientists.

“We are looking right now at the GPT-1 moment of the AI Scientist,” Clune says. “In a few short years, it is going to be writing papers that will be accepted at the top peer-reviewed conferences and journals in the world. It will be making novel scientific discoveries.”

Is superintelligence on its way?

With all this enthusiasm for AI self-improvement, it seems likely that in the coming months and years, the contributions AI makes to its own development will only multiply. To hear Mark Zuckerberg tell it, this could mean that superintelligent models, which exceed human capabilities in many domains, are just around the corner. In reality, though, the impact of self-improving AI is far from certain.

It’s notable that AlphaEvolve has sped up the training of its own core LLM system, Gemini—but that 1% speedup may not observably change the pace of Google’s AI advancements. “This is still a feedback loop that’s very slow,” says Balog, the AlphaEvolve researcher. “The training of Gemini takes a significant amount of time. So you can maybe see the exciting beginnings of this virtuous [cycle], but it’s still a very slow process.”

If each subsequent version of Gemini speeds up its own training by an additional 1%, those accelerations will compound. And because each successive generation will be more capable than the previous one, it should be able to achieve even greater training speedups—not to mention all the other ways it might devise to improve itself. Under such circumstances, proponents of superintelligence argue, an eventual intelligence explosion looks inevitable.
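The compounding claim is simple arithmetic: if every generation trains 1% faster than the last, relative training time shrinks geometrically.

```python
# If each model generation speeds up its successor's training by 1%,
# the relative training time after n generations is 0.99 ** n.
def relative_training_time(n_generations, speedup_per_gen=0.01):
    return (1 - speedup_per_gen) ** n_generations

for n in (1, 10, 50, 100):
    print(n, round(relative_training_time(n), 3))
```

Even a 1% per-generation gain compounds to roughly a 10% reduction after ten generations and about a 63% reduction after a hundred, before counting any of the other improvements each generation might devise.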

This conclusion, however, ignores a key observation: Innovation gets harder over time. In the early days of any scientific field, discoveries come fast and easy. There are plenty of obvious experiments to run and ideas to investigate, and none of them have been tried before. But as the science of deep learning matures, finding each additional improvement might require substantially more effort on the part of both humans and their AI collaborators. It’s possible that by the time AI systems attain human-level research abilities, humans or less-intelligent AI systems will already have plucked all the low-hanging fruit.

Determining the real-world impact of AI self-improvement, then, is a mighty challenge. To make matters worse, the AI systems that matter most for AI development—those being used inside frontier AI companies—are likely more advanced than those that have been released to the general public, so measuring o3’s capabilities might not be a great way to infer what’s happening inside OpenAI.

But external researchers are doing their best—by, for example, tracking the overall pace of AI development to determine whether or not that pace is accelerating. METR is monitoring advancements in AI abilities by measuring how long it takes humans to do tasks that cutting-edge systems can complete themselves. They’ve found that the length of tasks that AI systems can complete independently has, since the release of GPT-2 in 2019, doubled every seven months. 

Since 2024, that doubling time has shortened to four months, which suggests that AI progress is indeed accelerating. There may be unglamorous reasons for that: Frontier AI labs are flush with investor cash, which they can spend on hiring new researchers and purchasing new hardware. But it’s entirely plausible that AI self-improvement could also be playing a role.
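The difference between a seven-month and a four-month doubling time is larger than it sounds, as a quick compound-growth calculation shows:

```python
# The length of tasks AI can complete doubles every `doubling_months`
# months, so the multiplier over a horizon is 2 ** (months / doubling).
def task_length_multiplier(months, doubling_months):
    return 2 ** (months / doubling_months)

# Over two years: the pre-2024 7-month doubling vs. the 4-month one.
print(round(task_length_multiplier(24, 7), 1))  # ~10.8x
print(round(task_length_multiplier(24, 4), 1))  # 64.0x
```

Over the same two years, shortening the doubling time from seven months to four multiplies the end-of-period capability by roughly six times as much.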

That’s just one indirect piece of evidence. But Davidson, the Forethought researcher, says there’s good reason to expect that AI will supercharge its own advancement, at least for a time. METR’s work suggests that the low-hanging-fruit effect isn’t slowing down human researchers today, or at least that increased investment is effectively counterbalancing any slowdown. If AI notably increases the productivity of those researchers, or even takes on some fraction of the research work itself, that balance will shift in favor of research acceleration.

“You would, I think, strongly expect that there’ll be a period when AI progress speeds up,” Davidson says. “The big question is how long it goes on for.”

Google Ads Unveils RSA Asset Stats

A helpful reporting update is rolling out in Google Ads accounts. Advertisers can now view click and conversion data for each headline and description line of Responsive Search Ads, as well as aggregate RSA performance.

More Control

Advertisers have generally responded positively to RSAs. The format allows up to 15 headlines and four description lines, which Google mixes and matches into potentially thousands of combinations. With smart bidding, artificial intelligence, and personalization signals, Google shows the most likely-to-convert combination for each searcher.

Until now, however, advertisers could only see the overall RSA performance and total impressions of each asset and combination.

But click and conversion metrics for each asset now appear in the interface. The example below ranks the number of conversions from highest to lowest, along with their conversion rates and cost per conversion. Advertisers can easily identify which assets are meeting goals.


Google Ads now reports click and conversion metrics for each RSA asset. This example ranks the number of conversions from highest to lowest.

With the data, advertisers regain some control, although it’s essential to consider the bigger picture. More data doesn’t necessarily mean more changes.

Google’s AI optimizes for advertisers’ goals. A lower-performing asset could result from Google testing combinations. For instance, a headline could perform poorly for group A but well for group B when combined with description line C. Unfortunately, impressions remain the only available metric to advertisers when viewing RSA combinations.

Using the Data

Nonetheless, advertisers should not entirely defer to Google’s AI. Here are my typical action items.

Remove underperforming assets. I apply a filter to highlight poor performers, such as any asset with at least 100 clicks and zero conversions. It’s a quick rundown of headlines and descriptions to remove, as the message or landing page isn’t resonating with searchers.

Advertisers can view asset-level performance at the ad, ad group, and campaign levels. The ad level provides the most detail, but ad groups and campaigns are sufficient if the assets are identical. Regardless, ensure you have enough data for informed decisions — I aim for at least 50 clicks.
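The filter described above, flagging any asset with at least 100 clicks and zero conversions while requiring a minimum click volume before acting at all, is easy to reproduce on an exported asset report. The rows and column names below are illustrative, not Google’s exact export headers:

```python
# Illustrative asset rows, as might be exported from an RSA asset report.
assets = [
    {"asset": "Headline A",    "clicks": 250, "conversions": 12},
    {"asset": "Headline B",    "clicks": 140, "conversions": 0},
    {"asset": "Description M", "clicks": 40,  "conversions": 1},
    {"asset": "Description N", "clicks": 95,  "conversions": 0},
]

MIN_CLICKS = 50         # minimum data before making any decision
REMOVE_THRESHOLD = 100  # this many clicks with zero conversions -> remove

def classify(asset):
    if asset["clicks"] < MIN_CLICKS:
        return "not enough data"
    if asset["clicks"] >= REMOVE_THRESHOLD and asset["conversions"] == 0:
        return "remove"
    return "keep"

for a in assets:
    print(a["asset"], "->", classify(a))
```

Here Headline B would be cut, Description M would be left alone until it accumulates more clicks, and the rest would stay in rotation.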

Pin the best performers. Conversely, identify the most productive assets through pinning — locking specific headlines and descriptions, such as a headline with a better-than-average conversion rate or a description with a low cost per lead.

Creating a new RSA for the top three to seven assets is another option. For example, if headlines A, D, F, and description lines M and N perform well, create an RSA with only those assets.

Keep in mind that pinning assets will reduce an ad’s strength. To be sure, “ad strength” is a novelty metric, but it roughly aligns with the number of likely impressions. Thus pin assets selectively to ensure consistent traffic.

Find new messaging from AI Max. When turned on, AI Max ads reveal performance for its automated assets.

Recall that AI Max campaigns create assets from copy on an advertiser’s website, landing page, and other ads. If an automatically created asset performs well, consider creating a new RSA ad or adding it to an existing one.

Screenshot of an AI Max performance report

AI Max’s automatic headlines and descriptions are a source for new or existing RSAs.

Caution

More data can lead to bad decisions. Exercise caution. The Google Ads AI algorithm considers many variables to determine the best message for each searcher. Knowing the clicks and conversions for each headline and description is helpful, but it is only part of the bigger picture.

Charts: U.S. Small Business Trends Q3 2025

The U.S. Chamber of Commerce Small Business Index is published quarterly in conjunction with MetLife, the financial services firm, and based on unique online interviews with 760 small business owners and operators. The index captures owners’ views on the “economy, hiring, investment, and other key economic indicators.”

The index is a measure of owners’ sentiment across key topics with 0 = extremely negative, 100 = extremely positive, and 50 = neutral.

For Q2 2025, the index rose to 65.2, up from 62.3 in the previous quarter, reflecting growing optimism around business health and cash flow.

The National Small Business Association, a 65,000-member non-profit advocacy organization unaffiliated with the U.S. government, conducts an annual in-depth survey of small businesses nationwide on the state of their companies.

This year’s survey report (PDF), issued in May, is based on approximately 650 interviews conducted in April 2025 with small business owners across all 50 states and a range of industries. Economic uncertainty is the most significant challenge facing small businesses today, with 59% identifying it as their primary concern.

Despite the uncertainty, roughly 50% of surveyed owners expect their sales to increase this year.

U.S. Bank surveyed 1,000 small business owners with annual revenues of $25 million or less and between two and 99 employees to examine the main macroeconomic challenges they face and their use of digital tools and AI. The survey was carried out from March 14 to April 4, 2025, and published in the bank’s “2025 Small Business Perspective” report (PDF).

Per the survey, U.S. small business owners are adopting new payment options to serve their customers better. Although cash is still the preferred in-store method, other payment options are becoming increasingly popular, with 42% reporting tap-to-pay as a primary method.

Ecosia & Qwant Launch European Search Infrastructure via @sejournal, @MattGSouthern

Ecosia has begun delivering its own search results for the first time in its 16-year history, starting with users in France who will receive a portion of results from a new European search index developed jointly with Qwant.

The rollout marks the first implementation of the European Search Perspective (EUSP) joint venture, which has created Staan (Search Trusted API Access Network), a privacy-focused search infrastructure designed for Europe.

Current Implementation & Timeline

French users are now receiving search results directly from EUSP’s independent European index. Ecosia aims to serve 30% of French search queries through the new infrastructure by the end of 2025.

In a statement to Tech.eu, Christian Kroll, CEO of Ecosia, said:

“Having our own search infrastructure is a critical step for digital plurality and for building a sovereign European alternative. With more control over our offering, we can better serve users, develop ethical AI, and double down on our mission to build tech that benefits people and the planet.”

Technical Independence

Ecosia and Qwant have historically relied on syndication platforms from major US tech companies. The new infrastructure allows both companies to deliver results independently and make backend improvements without relying on external providers.

The broader goal is to reduce reliance on digital infrastructure controlled by foreign companies.

Open Index, Structured For Growth

EUSP isn’t limited to Ecosia and Qwant. The index is open to other companies building search or generative AI tools.

It is also structured to allow outside investment, unlike Ecosia’s steward-owned model, where 99.99% of shares belong to a foundation.

Kroll said the goal is to create an infrastructure that supports competition and innovation in Europe while maintaining strong privacy protections:

“This isn’t just about better search. It’s about the freedom to build and shape the future of tech in Europe.”

Looking Ahead

Ecosia’s partnership with Qwant could lead to more diversity in how European users access and interact with search.

While the initial rollout is limited to France, the infrastructure is designed to scale and support other companies and markets over time.


Featured Image: George Khelashvili/Shutterstock

Google Says AI Clicks Are Better, What Does Your Data Say? via @sejournal, @MattGSouthern

Google’s latest blog post claims AI is making Search more useful than ever. Google says people are asking new kinds of questions, clicking on more links, and spending more time on the content they visit.

But with no supporting data or clear definitions, the message reads more like reassurance than transparency.

Rather than take Google at its word or assume the worst, you can use your own analytics to understand how AI in Search is affecting your site.

Here’s how to do that.

Google Says: “Quality Clicks” Are Up

In the post, Google says total organic traffic is “relatively stable year over year,” but that quality has improved.

According to the company, “quality clicks” are those where users don’t bounce back immediately, indicating they’re finding value in the destination.

This sounds good in theory, but it raises a few questions:

  • What does “slightly more” quality clicks mean?
  • Which sites are gaining, and which are losing?
  • And how is click quality being measured?

You won’t find those answers in Google’s post. But you can find clues in your own data.

1. Track Click-Through Rate On High-Volume Queries

If you suspect your site has lost ground due to AI Overviews, your first stop should be Google Search Console.

Try this:

  • Filter for top queries from the past 12 months.
  • Look at CTR changes before and after May 2024 (when AI Overviews began expanding).
  • Pay attention to queries that are longer, question-based, or likely to trigger summaries.

You may find impressions are holding steady or rising while CTR declines. That suggests your content is still being surfaced, but users may be getting their answers directly in Google’s AI-generated response.
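The before/after comparison above can be sketched with pandas. The example uses toy inline data; in practice you would load your own Search Console export, and the column names here are assumptions:

```python
import pandas as pd

# Toy stand-in for a Search Console query export: one row per
# query per day. Column names are assumptions for illustration.
df = pd.DataFrame({
    "date": pd.to_datetime(
        ["2024-03-01", "2024-03-02", "2024-08-01", "2024-08-02"]
    ),
    "query": ["how to renovate a kitchen"] * 4,
    "clicks": [50, 60, 30, 25],
    "impressions": [1000, 1000, 1100, 1100],
})

cutoff = pd.Timestamp("2024-05-14")  # AI Overviews began expanding in May 2024
df["period"] = (df["date"] >= cutoff).map({False: "before", True: "after"})

# Sum clicks and impressions per query per period, then compute CTR
# from the totals (never average daily CTRs, which over-weights
# low-impression days).
agg = df.groupby(["query", "period"])[["clicks", "impressions"]].sum()
agg["ctr"] = agg["clicks"] / agg["impressions"]

# Pivot so each query shows before vs. after CTR side by side.
ctr = agg["ctr"].unstack("period")
ctr["delta"] = ctr["after"] - ctr["before"]
print(ctr.round(4))
```

In this toy data, impressions rise (2,000 to 2,200) while CTR falls (5.5% to 2.5%), which is exactly the pattern that suggests your content is still surfaced but answered in the AI Overview.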

2. Approximate “Quality Clicks” With Engagement Metrics

To test Google’s claim about higher quality clicks, you’ll need to look beyond Search Console.

In GA4, examine:

  • Engaged sessions (sessions lasting more than 10 seconds or including a conversion or multiple pageviews).
  • Average engagement time per session.
  • Scroll depth or video watch time, if applicable.

Compare these engagement metrics to the same period last year. If they’re improving, you may be getting more motivated visitors, supporting Google’s view.

But if they’re dropping, it could mean that AI Overviews are sending fewer, possibly less interested, visitors your way.
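GA4's engaged-session definition can be approximated on exported session data. This is a minimal sketch with toy data and assumed field names, not GA4's exact internal logic:

```python
import pandas as pd

# Toy session-level export. An engaged session is roughly one that
# lasts >10 seconds, includes a conversion, or has 2+ pageviews.
sessions = pd.DataFrame({
    "session_id": [1, 2, 3, 4, 5],
    "duration_s": [4, 45, 12, 8, 130],
    "pageviews": [1, 3, 1, 2, 5],
    "converted": [False, True, False, False, False],
})

engaged = (
    (sessions["duration_s"] > 10)
    | sessions["converted"]
    | (sessions["pageviews"] >= 2)
)
engagement_rate = engaged.mean()
print(f"Engaged sessions: {engaged.sum()} / {len(sessions)} "
      f"({engagement_rate:.0%})")  # → 4 / 5 (80%)
```

Computing the same rate for the comparable period last year gives you the year-over-year engagement comparison the article suggests.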

3. See Which Content Formats Are Gaining Visibility

Google says people are increasingly clicking on forums, videos, podcasts, and posts with “authentic voices.”

That aligns with its integration of Reddit and YouTube content into AI Overviews.

To see how this shift might be playing out for you:

  • Compare the performance of listicles, tutorials, and original reviews to more generic content.
  • If you create video or podcast content, track any uptick in referral traffic from Google.
  • Watch for changes in how your forum threads, product reviews, or community content perform compared to static pages.

You may find that narrative-style content, first-hand experiences, and multimedia formats are gaining traction, even if traditional evergreen pages are flat.

4. Watch For Redistribution, Not Just Declines

Google acknowledges that while overall traffic is stable, traffic is being redistributed.

That means some sites will lose while others gain, based on how well they align with evolving search behavior.

If your traffic has declined, it doesn’t necessarily mean your content isn’t ranking. It may be that the types of questions being asked and answered have changed.

Analyzing your top landing pages can help you spot patterns:

  • Are you seeing fewer entries on pages that used to rank for quick-answer queries?
  • Are in-depth or comparison-style pages gaining traffic?

The patterns you spot could help guide your content strategy.
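A simple way to spot that redistribution is to label each landing page by its year-over-year change in entries. The data, column names, and thresholds below are illustrative assumptions:

```python
import pandas as pd

# Toy year-over-year entries per landing page; replace with your
# own analytics export (column names are assumptions).
pages = pd.DataFrame({
    "page": ["/what-is-x", "/x-vs-y-comparison", "/x-deep-dive-guide"],
    "entries_prev": [5000, 1200, 800],
    "entries_now": [2100, 1900, 1150],
})

pages["change_pct"] = (pages["entries_now"] / pages["entries_prev"] - 1) * 100

# Quick-answer pages losing while in-depth and comparison pages gain
# points to redistribution rather than a ranking problem.
pages["trend"] = pages["change_pct"].apply(
    lambda p: "gaining" if p > 10 else "losing" if p < -10 else "flat"
)
print(pages[["page", "change_pct", "trend"]])
```

Here the quick-answer page loses more than half its entries while the comparison and deep-dive pages grow, which is the pattern the two questions above are probing for.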

Looking Ahead

When you rely on Search traffic, you deserve more than vague reassurances. Your analytics can help fill in the blanks.

By keeping an eye on your CTR, engagement, and how your content performs, you’ll get a better sense of whether AI in Search is helping you. This way, you can tweak your strategy to fit what works best for you.


Featured Image: Roman Samborskyi/Shutterstock

Study: Advanced Personalization Linked To Higher Conversions via @sejournal, @MattGSouthern

A new study commissioned by Meta and conducted by Deloitte finds that advanced personalization strategies are associated with a 16 percentage point increase in conversions compared to more basic efforts.

The research also introduces a maturity framework to help organizations evaluate their personalization capabilities and identify areas for improvement.

What the Data Shows

According to the study, 80% of U.S. consumers say they’re more likely to make a purchase when brands personalize their experiences. Consumers also report spending 50% more with brands that tailor interactions to their needs.

The report connects these behaviors to broader business outcomes. In the EU, Meta’s personalized advertising technologies were linked to €213 billion in economic activity and 1.4 million jobs.

While the economic impact data is specific to Meta, the findings reflect a wider trend in digital marketing: personalized engagement influences purchase decisions and brand loyalty.

Derya Matras, VP for Global Business Group at Meta, commented:

“As people want content and services that are more relevant to them, they are increasingly drawn to brands that make them feel understood.”

Maturity Model for Personalization

The report outlines a four-level maturity model to help you assess where you stand with personalization. The study links higher maturity levels with measurable business outcomes.

Level 1: Low Maturity

Data remains siloed, and messaging tends to be generic. Personalization, if present, is rule-based and limited to a few channels.

Level 2: Medium Maturity

Some systems are integrated, enabling basic audience segmentation and limited customization across channels. These organizations may also use analytics tools and consent management.

Level 3: High Maturity

Unified customer profiles and identity resolution enable greater personalization across multiple touchpoints. Predictive modeling and dynamic content are more common.

Level 4: Champion Maturity

Real-time personalization, generative AI, and clean-room tech support tailored omnichannel experiences. Teams collaborate across departments, with AI governance integrated into decisions.

Three Personalization Strategies

The study outlines three personalization strategies:

  1. Customer-based: Tailors experiences to individuals based on personal data and behavior.
  2. Cohort-based: Segments audiences based on shared traits or behaviors.
  3. Aggregated data-based: Uses anonymized, large-scale datasets to identify general trends.

The report doesn’t suggest a single best method. Instead, it offers examples to help you evaluate what fits your capabilities and goals.

Looking Ahead

For marketers assessing their next steps, the maturity framework offers a structured way to evaluate readiness across people, processes, and technology.

Rather than treating personalization as a software problem, the report frames it as a long-term shift in how organizations structure teams and manage data.

Effective SEO Organizational Structure For A Global Company via @sejournal, @motokohunt

Global companies today face a paradox. Search is more important than ever, yet how it’s managed across markets is often inconsistent, inefficient, and misaligned with broader digital goals.

Too often, SEO is seen as a localized effort, tactically delegated to regional teams or outsourced agencies.

While local knowledge is critical, international SEO success demands structure, governance, and repeatable processes. Otherwise, companies waste resources, duplicate efforts, and fail to capitalize on scalable gains.

This article offers a blueprint for designing an effective SEO organizational structure for global companies, rooted in real-world service-level governance.

We’ll explore what to centralize, what to localize, and how to balance best practices with market nuance.

Drawing from the Service Level Agreement (SLA)-based SEO model used at leading enterprises, we’ll break down the building blocks of a successful international SEO operation, from key performance indicators (KPIs) and tooling to budget models and agency management.

What To Centralize Vs. Localize

An effective SEO structure isn’t just about resourcing; it’s about allocation logic. Knowing which tasks belong at corporate, brand, or market levels prevents duplication, preserves strategic clarity, and empowers those closest to the customer.

This may be one of the most challenging aspects of international SEO operations, particularly for decentralized organizations. You’ll need to evaluate what must be done at each level thoughtfully.

Consider where content is created, how websites are maintained, how diverse market content truly is, and how mature your localization process is.

Unfortunately, there is no one-size-fits-all solution, not even a “one-size-fits-most” option. Each organization must assess its structure, workflows, and existing capabilities.

In many cases, it’s advisable to begin with a few uncontroversial initiatives, such as aligning on what is already established in brand or web standards, content themes, topical coverage, and entity research, and establishing consistent reporting.

Once those foundational elements are in place, you can move toward more sensitive and territorial elements such as Webmaster Tools account management, diagnostic methodology standardization, and global governance of webpage templates.

Centralized Functions (Corporate Center Of Excellence)

These activities are best housed within a corporate SEO function or Center of Excellence (CoE), where scale, tooling, and data access are leveraged across the enterprise:

  • International SEO strategy and policy.
  • Topical taxonomy and preferred landing page (PLP) models.
  • Searcher intent modeling and content framework development.
  • Enterprise reporting dashboards and KPIs.
  • Training and enablement of brand and market teams.
  • Tool governance and platform procurement.

Shared Responsibilities (BU And Editorial)

Some functions require cross-functional collaboration between the brand/business unit and central teams:

  • Editorial workflow integration.
  • Quarterly content planning tied to search trends.
  • Performance reviews of strategic campaigns.
  • Metadata refinement and topics alignment.
  • KPI alignment between SEO, PPC, and social media.

Localized Responsibilities (Market Or Regional Teams)

Localization is more than just translation. Market teams need autonomy in areas that require cultural fluency, deep customer knowledge, and search behavior insight:

  • Local-language topic and content research and mapping.
  • Regional optimization of content and metadata.
  • Management of local SEO agencies and freelancers.
  • Social listening integration with regional campaign planning.
  • Market opportunity modeling and gap assessments.

When Not To Localize

Not all localization adds value. Avoid local divergence when:

  • The infrastructure doesn’t support market-specific subdomains or folders.
  • The same product or offer is consistent across regions.
  • Central models can outperform local improvisation (e.g., PLPs).
  • There’s limited market-specific search volume or opportunity.

Standardization Of Best Practices

To succeed at scale, international SEO must rely on shared standards that create consistency and reduce avoidable errors.

Standardization accelerates execution and allows for cross-market insights.

Key Elements Of Standardization:

  • Enterprise SEO Playbook: Documented standards, processes, templates, and escalation paths.
  • SEO Training Curriculum: Modular training by role type, from content creators to developers.
  • Content Optimization Templates: Consistent formats for metadata, searcher intent, and markup.
  • Glossary and Taxonomy: Shared terminology dictionary and content tagging schema.
  • Governance Reviews: Scheduled audits of adherence to SEO standards by markets and BUs.

Standardization doesn’t mean rigidity. It means creating a foundation that enables innovation and agility at the local level while preserving enterprise-wide integrity.

KPIs That Matter At Each Level

Metrics must reflect both operational performance and business impact, and be meaningful at each layer of the organization.

In one real-world example, a company managing SEO through multiple agencies across markets experienced significant inefficiencies due to inconsistent reporting standards.

Regional and global teams were forced to spend time reconciling disparate metrics, definitions, and formats.

Enforcing consistent KPIs and using standardized reporting templates eliminated this wasted effort, freeing up time for analysis and action rather than reconciliation.

Corporate-Level KPIs

  • Organic market share growth.
  • Revenue or lead contributions.
  • Topical and answer shelf space across global regions.
  • Inclusion rates in major search engines.
  • Adoption of SEO standards across business units.

Brand/BU-Level KPIs

  • Strategic PLP performance and visibility.
  • SEO-driven lead generation or ecommerce conversion lift.
  • Funnel impact of natural search.
  • Content alignment with topics and intent models.

Market-Level KPIs

  • Local-language ranking performance and velocity.
  • Bounce rate and engagement on localized pages.
  • SEO uplift from localized content efforts.
  • Market-specific opportunity capture vs. baseline.

Cross-Cutting Diagnostic Metrics

  • Technical SEO issue trends.
  • Ratio of indexed vs. published pages.
  • Internal search and site experience feedback.
  • SEO vs. PPC vs. social synergy.

If data collection and presentation are consistent, it is easy to roll up data across markets and business units to see the total impact on the business, opportunities, and problems.

Consistent and business-oriented metrics are critical to making the business case for continued funding and support of your initiatives.

Ensure KPIs are actionable, standardized across teams and markets, and demonstrate business value to stakeholders.

Process Design & SLA Governance

Clearly defined processes eliminate ambiguity and ensure that SEO deliverables happen on time and with quality.

SLAs are formal commitments defining expected service levels, responsibilities, and response times across collaborating teams.

As organizations mature in their SEO operations, introducing SLAs becomes essential, especially when coordinating between global, brand, and market-level stakeholders.

For example, if a global or brand team is responsible for actions that impact a lower level, such as a local market, those responsibilities must be documented and bound to SLA metrics.

This not only clarifies accountability but reinforces cross-functional support. Consider a global product launch: If the worldwide team owns the standardized topic taxonomy, it must be delivered to local markets in time for localization and adaptation.

Failure to meet these timeframes puts pressure on markets at launch and risks missed visibility. An SLA helps prevent this by enforcing alignment through timelines and accountability.

Core SLA Components:

  • Defined Turnaround Times: For topical or taxonomy research, page audits, and performance reporting.
  • Prioritization Levels: Normal, high-priority, emergency, with response timelines.
  • Escalation Paths: For unmet KPIs or technical blockers.
  • Quarterly Review Cadence: For content clusters, PLPs, and editorial integration.
  • Feedback Loops: Structured inputs from local teams into topic and content models and optimization cycles.

All SLAs should be clearly documented and agreed upon by both internal stakeholders and external agency partners. This alignment ensures that expectations are mutually understood and that accountability is shared.

In addition, a defined escalation process, covering both operational delays and performance disputes, must be in place and visible across all participants in the SEO workflow.

Process governance should be transparent, with clear ownership between corporate, brand, and local roles.

Tool Utilization Strategy

A robust tool utilization strategy ensures consistency, visibility, and collaboration across geographies.

The proper tool structure minimizes duplication, improves time-to-insight, and supports efficient SEO workflows without impeding unique requirements at the market level.

Core Elements:

  • Centralized Tool Procurement: Licensing enterprise-grade platforms at scale and using automation or appropriate seat licenses for brands and markets.
  • Shared Access & Dashboards: Central teams provision access and enforce naming conventions and tagging protocols.
  • Integration With Tech Stack: SEO tools integrated into content management system (CMS), digital asset management (DAM), analytics, and campaign platforms.
  • Local Adaptation Guidelines: Empower markets to use supplementary tools while maintaining reporting standards.
  • Tool Governance Board: Review tool usage, redundancy, and sunset decisions regularly.

Tools should be centrally funded to ensure consistency, leverage volume-based pricing, and simplify vendor relationships.

When centralized funding is not feasible, a “tin cup” model may be used, with markets contributing based on utilization and need. This hybrid approach helps ensure broad access to necessary tools while aligning budgets to value creation.

A real-world example underscores the importance of strategic tooling governance. In one organization, the enterprise licensed a powerful SEO diagnostic platform, but with a cap on the number of URLs that could be crawled.

Since U.S.-based teams initiated most crawls, smaller markets were often excluded due to exhausted crawl credits.

This led to a lack of visibility into localized issues, missed global diagnostic signals, and an inability to surface SEO problems across the full portfolio.

Organizations must ensure tooling limits don’t inadvertently prioritize one region over another and that diagnostic equity is built into global processes.

Budget & Resource Allocation Models

Budgets must reflect strategic intent, balancing centralized enablement with market agility.

A key benefit of adopting a three-level management structure aligned to global and local goals is the ability to accurately identify actual resource needs.

This structure helps link local execution to global outcomes, providing the data and justification needed to support budget requests.

When budget allocation aligns with tactical needs and enterprise goals, securing executive sponsorship and scaling successful models becomes easier.

Funding Models:

  • Core CoE Funding: Covers training, shared tools, strategy, and reporting infrastructure.
  • Pay-for-Play Services: Market-funded services like local content research, link building, or page audits.
  • Joint-Funded Pilots: CoE co-invests with business units to explore new opportunities.
  • Agency Rate Cards: Pre-negotiated pricing and scope packages to streamline engagement.
  • ROI Justification Models: Frameworks to link SEO investment to lead gen, conversion uplift, and efficiency gains.

Allocating resources based on market opportunity modeling helps prioritize high-impact work and avoid waste.

Managing Local Agencies And Execution Partners

International SEO execution often depends on external support, but market inconsistency can erode gains.

A lack of coordination in one multinational SEO initiative involving multiple agencies led to numerous tickets being submitted for nearly identical issues.

Some tickets addressed the same problem using different approaches, while others attempted to undo recently completed work based on alternate recommendations from a local agency.

This fragmentation caused unnecessary backlog, confusion, and frustration, highlighting the need for strong alignment on how SEO issues and changes are approached.

Key guidelines may be integrated directly into contracts with external partners. One proven approach references the corporate SEO Center of Excellence playbooks, brand-specific standards, and Google’s SEO fundamentals as foundational compliance requirements.

These guidelines should be codified in contractual language, with a clause stating that any unapproved deviations will be corrected at the partner’s expense.

This ensures that new websites, SEO experiments, or localization practices do not introduce non-compliant structures or technical risks without visibility and alignment.

Best Practices:

  • Approved Vendor Lists: Curated list of pre-vetted agencies aligned with corporate standards.
  • Onboarding Templates: Playbooks for briefing agencies on brand voice, workflows, and KPIs.
  • Monthly Performance Reviews: Standard cadence of reporting and performance analysis.
  • SEO Task Scoping Tools: Templates for briefs, content, and searcher interest research requests, and content updates.
  • Audit Trail Protocols: Visibility into agency deliverables, implementation logs, and turnaround times.

With effective agency governance, local teams can move fast without compromising quality or consistency.

Transitioning To A Mature SEO Operating Model

A successful shift to an international SEO structure requires staged planning and executive alignment.

The saying goes that Rome wasn’t built in a day, and neither will your global search program be. However, the framework outlined here provides a structured starting point.

With the accelerating change in AI-driven search, having a uniform and consistent process that is well integrated across marketing, development, content creation, and all teams responsible for visibility and engagement is critical for future success.

Roadmap Elements:

  • Stakeholder Interviews: Capture local challenges, needs, and barriers to change.
  • Current-State Coverage Map: Understand what is done, where, by whom.
  • SEO Maturity Assessment: Evaluate readiness across people, process, tools, and performance.
  • Pilot Programs: Test governance, SLA models, and tooling structures with one region or BU.
  • Training & Change Management: Ongoing enablement to embed new practices and workflows.

Phased rollouts ensure learning loops and scalability.

Building An SEO Organization For Scale

As search becomes more multimodal and AI-driven, companies can no longer afford disjointed SEO practices.

A strong SEO organizational structure balances strategy and execution, global alignment and local nuance, standardization and innovation.

By embracing a service-level model, aligning KPIs to business outcomes, and establishing clear governance, global enterprises can:

  • Improve search visibility.
  • Reduce operational waste.
  • Enable consistent, scalable content performance.

Ultimately, SEO becomes not just a marketing function but a critical enabler of digital growth and global value creation.


Featured Image: NicoElNino/Shutterstock