Google’s Product Feed Strategy Points To The Future Of Retail Discovery via @sejournal, @brookeosmundson

For years, many advertisers treated product feeds as a channel task tied mainly to Shopping campaigns.

If you were running Shopping ads, feed optimization likely got attention. If you weren’t, it often slipped behind priorities for the PPC campaigns you were running.

Now, that approach is starting to show its age.

Google’s recent Ads Decoded podcast episode suggests that mindset may need to change. Product data was discussed in connection with free listings, AI-powered search experiences, YouTube formats, Lens, virtual try-on, and newer e-commerce surfaces still evolving.

That reflects a much broader role than many advertisers have historically assigned to their feed.

Google appears to be positioning product data as a larger part of how products are discovered across its platforms, not just how Shopping campaigns perform.

Advertisers who still view Merchant Center as a side task may be underestimating how much visibility now starts with product data.

The more interesting question is what that shift tells us about where Google wants retail advertising to go next.

Merchant Center Is Starting To Look Like Retail Infrastructure

What stood out most in the podcast was how broadly Google described the role of Merchant Center data.

Nadja Bissinger, General Product Manager of Retail on YouTube, described Merchant Center feeds as the “backbone that powers organic and ads experiences,” adding that merchants should submit the most robust product data possible to increase discoverability.

That is a wider role than many advertisers have traditionally associated with Merchant Center.

Google said in a 2025 retail insights piece that people shop across Google more than 1 billion times per day. It also highlighted Search, YouTube, Maps, and visual discovery as key parts of modern shopping journeys. That helps explain why reusable product data is becoming more valuable than channel-specific assets alone.

Google also said Google Lens now sees more than 20 billion visual searches per month, and 1 in 4 Lens searches carry commercial intent. That is another signal that structured product data is becoming more important outside traditional Shopping ads.

For years, many brands viewed Merchant Center as a necessary setup for Shopping campaigns. Google now appears to be positioning it as a core input for how products are surfaced across its platforms.

That should change how feed work is prioritized internally.

Feed optimization is no longer just a PPC responsibility. It can influence:

  • Organic visibility
  • Merchandising strategy
  • Creative presentation
  • Promotions
  • How products appear in newer AI-led experiences

For larger organizations, that may require closer coordination between paid media, SEO, e-commerce, merchandising, and product teams.

For smaller brands, it may be as simple as giving feed quality the same level of attention already given to ad copy, landing pages, and campaign structure.

Many advertisers still treat feed work as cleanup work. That mindset is becoming expensive as product data plays a larger role in who gets seen across Google.

Why Is Google Pushing Product Data So Hard Right Now?

Google’s direction here makes sense when you look at where its retail products are heading.

The company wants more e-commerce activity to happen across Search, YouTube, Maps, AI experiences, and future agentic tools. To support that expansion, it needs merchant data that is accurate, structured, and easy to reuse across different “surfaces,” as Google calls these placements.

Google has financial reasons to expand e-commerce activity beyond traditional ad clicks. In its Q4 2025 earnings release, the company reported 17% growth in Google Search revenue, with YouTube revenue across ads and subscriptions surpassing $60 billion.

A strong feed helps Google understand:

  • What a product is
  • Who it is for
  • What makes it different
  • Where it is available
  • What it costs
  • How the product should be presented
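As a rough illustration of what that means in practice, the sketch below checks a single product record for completeness. The attribute names (id, title, price, availability, and so on) follow Google's Merchant Center product data specification, but the required/recommended split and the sample product are illustrative assumptions, not Google's official rules.

```python
# Illustrative feed-completeness check. Attribute names follow Merchant
# Center's product data specification; the REQUIRED/RECOMMENDED split below
# is a simplified assumption, not Google's official requirement list.

REQUIRED = {"id", "title", "description", "link", "image_link", "price", "availability"}
RECOMMENDED = {"brand", "gtin", "condition", "product_type"}

def feed_gaps(product: dict) -> dict:
    """Return the required and recommended attributes missing from one row."""
    present = {k for k, v in product.items() if v not in (None, "")}
    return {
        "missing_required": sorted(REQUIRED - present),
        "missing_recommended": sorted(RECOMMENDED - present),
    }

product = {
    "id": "SKU-1042",
    "title": "Trail Running Shoes - Men's, Waterproof",
    "description": "Lightweight waterproof trail runners with a rock plate.",
    "link": "https://example.com/p/sku-1042",
    "image_link": "https://example.com/img/sku-1042.jpg",
    "price": "129.99 USD",
    "availability": "in_stock",
    "brand": "",  # an empty value counts as missing
}

print(feed_gaps(product))
# {'missing_required': [], 'missing_recommended': ['brand', 'condition', 'gtin', 'product_type']}
```

Running a check like this across the whole feed is one way to quantify the “robust product data” Google is asking merchants to submit.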

That matters even more as retail experiences, paid or organic, become more visual, more personalized, and more automated.

Traditional search ads leaned heavily on keywords, headlines, and landing pages. Newer e-commerce formats can also depend on product images, attributes, ratings, promotions, availability, shipping details, and other feed inputs that help match products to user intent.

Better data can lead to better experiences for users. It can also create more places where merchants can appear across Google’s properties.

Google is building more e-commerce surfaces, and product data is the fuel behind them. Advertisers who ignore that may keep optimizing campaigns while missing the larger shift happening around them.

Is Google Prepping For A More Strategic Shift?

From my perspective, there is a larger strategic shift behind Google’s product data push.

I don’t see this as a routine push for better feeds or cleaner campaign inputs. I see Google working to become more of a growth engine for advertisers, with a role that reaches beyond media buying and campaign delivery.

That expansion is moving into areas that shape business performance, including merchandising, product discovery, pricing visibility, local commerce, measurement, and newer purchase-ready experiences.

Google is not only trying to improve how ads run. It appears to be building a deeper position in how products are surfaced, how demand is created, how buying decisions are influenced, and how performance is measured.

My view is that the more Google becomes embedded across those moments, the more connected it becomes to broader business growth rather than media performance alone.

Why Many Advertisers Are Still Measuring Feed Value Wrong

One reason feed optimization still gets deprioritized is simple: many teams are using an outdated scorecard.

During the podcast discussion, Google cited a 33% conversion uplift for advertisers using Demand Gen with product feeds. Even if results vary by account, it is another sign that feed quality is being tied to campaign types beyond classic Shopping ads.

If the main question is whether Shopping ROAS improved last week, it becomes easy to undervalue the broader impact of stronger product data.

That measurement approach came from a time when feeds were more closely tied to Shopping campaigns. Google is now using the same data across a much wider set of retail experiences, including discovery surfaces, visual placements, AI-led results, and other formats that do not fit neatly into one campaign report.

That creates a gap between where feed work adds value and where many teams are looking for it.

A stronger title may improve discoverability. Better imagery can increase engagement in visual placements. Accurate pricing and promotions can improve click appeal. Richer attributes can help Google better understand relevance. Availability data can support local and omnichannel visibility.

Those gains may show up across multiple touchpoints, assisted paths, and blended performance trends rather than one Shopping dashboard.

That is why some advertisers continue to underinvest in feed quality. The value is there, but their reporting model was built for an earlier version of Google.

As Google expands where products can appear, feed optimization deserves to be measured more like a visibility and growth lever, not just a Shopping maintenance task.

One of the more important quotes from the podcast came from Ginny Marvin, Google Ads Liaison, as she wrapped up the episode:

Merchants with the most structured, high quality data foundations will be positioned to win.

Winning will not come from uploading a feed once and forgetting about it for months at a time.

It will come from treating product data as an ongoing optimization effort, just like your existing campaigns.

What Google’s AI Max Focus May Be Signaling About Search

One of the more revealing parts of the podcast was how often Search strategy was discussed through the lens of AI Max for Search, while traditional standard Search campaigns were barely mentioned.

During the episode, Firas Yaghi, Global Product Lead for Retail Solutions, talked about how advertisers should be thinking about different campaign types:

I think the role of each campaign really depends on your high-level objective, whether you’re prioritizing cross-channel efficiency, granular control, or a hybrid approach that balances top-line sales with OKRs.

He spoke mostly about Performance Max and Demand Gen, with only brief mention of AI Max for Search.

I would avoid treating that as proof that standard Search is going away. There is still clear value in campaigns built around tighter search control, brand protection, and proven high-intent terms.

At the same time, it’s hard to ignore the direction of Google’s messaging.

When Google talks about growth, expansion, and newer retail opportunities, the conversation increasingly centers on AI-assisted campaign types. We have seen similar signals elsewhere, including Google’s announcement that Dynamic Search Ads will upgrade into AI Max for Search and that AI Max represents the next step for search expansion.

My read is that standard Search remains important, but it is no longer the only story Google wants advertisers thinking about.

The company appears to be steering incremental growth toward campaign types that rely on broader matching, stronger inputs, automation, and first-party signals.

I think that Search strategies built around legacy structures will become less competitive over time. I’m not confident enough yet to say that standard Search campaigns will go away completely in the near future, but the increasing signals around keywordless technology have me thinking more changes to Search campaigns are bound to happen.

What This Means For Your Campaigns

The bigger risk for PPC managers is assuming the teams responsible for merchandising or product data already understand how much feed quality can affect campaign performance.

In many organizations, merchandising, e-commerce, product, or development teams control what goes into Merchant Center. Their priorities may be centered on inventory, pricing, site operations, or category management, not media efficiency or visibility across Google.

That is where PPC managers can add real value.

If product information is influencing how products appear across paid, organic, and AI-led surfaces, someone needs to connect those decisions to marketing outcomes. PPC managers are often in the best position to do that because they can see changes in impressions, traffic quality, conversion trends, and missed opportunities firsthand.

That may mean bringing examples into weekly meetings, showing where missing attributes are limiting reach, flagging weak imagery, highlighting pricing issues, or sharing results from tests that improved performance.

You may not own the feed, but you can help the business understand why it deserves greater priority and where better inputs can improve campaign results.

Put More Focus On Inputs That Can Scale Performance

Many teams spend valuable time on small bid changes, minor budget moves, or endless rounds of creative tweaks while core product data remains incomplete or outdated.

Those tasks still have value, but the upside is often limited when the underlying product information is weak.

If titles are thin, images are poor, attributes are missing, or product details are outdated, fixing those gaps may create more value than another round of minor account adjustments.

Add Feed Health To Regular Performance Reviews

Most reporting cycles focus on spend, ROAS, CPA, and conversion volume.

Those metrics are important, but they do not always show whether product data is helping or limiting visibility.

Feed health deserves a place in regular reviews. Look at disapprovals, missing fields, image quality, pricing accuracy, promotional coverage, and product-level gaps with the same discipline used for media metrics.
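For teams that want a number to put in the review deck, those checks can be rolled up into a simple summary. The sketch below is a hypothetical example: it assumes you can export item-level statuses (mirroring Merchant Center's approved/disapproved item statuses) and computes a disapproval rate; the export format and data are invented.

```python
# Hypothetical feed-health rollup for a performance review. Assumes an
# item-status export with "approved"/"disapproved" values, similar to
# Merchant Center's item-level statuses; the data here is made up.

from collections import Counter

def feed_health(items: list[dict]) -> dict:
    """Summarize disapprovals across an exported list of feed items."""
    status = Counter(item["status"] for item in items)
    total = len(items)
    return {
        "total_items": total,
        "disapproved": status["disapproved"],
        "disapproval_rate": round(status["disapproved"] / total, 3) if total else 0.0,
    }

items = [
    {"id": "SKU-1", "status": "approved"},
    {"id": "SKU-2", "status": "disapproved"},
    {"id": "SKU-3", "status": "approved"},
    {"id": "SKU-4", "status": "approved"},
]

print(feed_health(items))
# {'total_items': 4, 'disapproved': 1, 'disapproval_rate': 0.25}
```

Tracking a metric like this alongside spend and ROAS keeps feed quality visible in the same cadence as media metrics.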

Broaden How You Test For Growth

Many retail accounts still treat Search, Shopping, YouTube, and newer campaign types as separate lanes.

Google’s recent direction suggests those lines are becoming less rigid.

Growth testing should include where products can appear across newer surfaces, how feeds support Demand Gen and AI-led placements, and whether stronger product data can unlock reach that existing campaigns are not capturing today.

Treat Better Product Data As A Competitive Advantage

Some advertisers will wait until these newer placements are fully mature before investing seriously in feed quality.

That delay may prove costly for them, and being proactive now can pay off significantly for you.

What PPC Professionals Are Saying

Recent LinkedIn discussions suggest many practitioners are viewing feed quality as a larger performance lever.

Comments on the podcast episode have been positive overall, with many marketers agreeing that feed management needs to be routine.

Zhao Hanbo commented:

Really interesting to see how something that used to feel mostly like ad ops plumbing is now becoming core infra for AI commerce.

Sophie Westall had similar sentiments, stating that “feed quality is quickly becoming a core part of overall media strategy, not just a hygiene task.”

In a recent LinkedIn post, Menachem Ani said that by fixing a product feed, “campaigns start working harder without touching a single bid.”

More marketers appear to be focusing less on isolated settings and more on the quality of the data, regardless of whether they’re running paid campaigns or not.

What Comes Next For Retail Marketers

Some advertisers will hear Google’s renewed focus on product data and assume it mainly matters for brands running Shopping campaigns.

That interpretation misses how much wider the opportunity has become.

Google is quickly expanding how products can show up across paid placements, organic surfaces, visual experiences, and newer AI-led formats. As that happens, feed quality becomes more connected to visibility and performance than many teams have historically assumed.

In many organizations, product data still gets treated as maintenance work. It gets attention when something breaks or when Shopping results decline, then falls back down the priority list.

That approach may be harder to justify going forward.

Product data needs a larger role in planning, testing, and cross-functional discussions because it can influence far more than one campaign type.



Featured Image: Summit Art Creations/Shutterstock

Should You Use Auto-Generated Creative? – Ask A PPC via @sejournal, @navahf

It won’t surprise anyone that most advertisers are hesitant to use auto-generated creative from ad platforms. Auto-generated ads fall into the following categories:

  • Customer-in-the-loop (CITL): Assets are generated based on inputs like a website URL or a user prompt. The advertiser always has a choice as to whether or not they want to include these assets in their campaigns.
  • Dynamic composition: Ads are composed at serving time in different formats based on existing groups of assets, with performant winners selected and scaled (i.e., how Performance Max works). May or may not include AI-generated assets based on customer preferences.
  • Auto-generated: New assets or ads are generated after a campaign is launched based on inputs like URLs, search queries, or existing videos to improve performance. These assets are not reviewed and approved by advertisers before serving, but can generally be viewed and controlled in reporting.

Even advertisers who embrace automation in bidding, targeting, and budget allocation often draw a firm line when it comes to creative.

Image from author, April 2026

That resistance usually comes from a few places:

  • Quality concerns due to generic copy instead of product/service-specific.
  • Brand compliance requirements.
  • A strong desire to maintain creative ownership.
  • Discomfort with the idea of ads going live without a human signing off on every variation.

Yet, auto-generated creative can sometimes perform just as well as, if not better than, human-created assets. A 2025 study found that auto-generated ads had a 19% better CTR.

These performance gains aren’t new; AI ads have been meeting or exceeding human creative since as early as 2018.

Three text ads: one made by a human, the others autogenerated (Image from author, April 2026)
Results of three ads from a logistics company over 30 days (Image from author, April 2026)

That performance edge comes from two core advantages.

First, auto-generated creative is highly adaptable. It can flex across formats and placements in ways that would be time-consuming or impractical for humans to manage manually.

Second, it is free of human bias: it serves whichever creative is most likely to perform for profitable searches, rather than the wording we assume will succeed.

This article is not about declaring auto-generated creative right or wrong. There is no universal answer. Whether leaning into it makes sense will always depend on business constraints, brand rules, and personal comfort levels.

What we are going to do is walk through a practical framework you can use to decide whether auto-generated creative is worth testing for your business, and how to use platform tools to better understand how well your site and messaging are being interpreted by AI systems.

Before we get into it, an important disclosure. I am a Microsoft Advertising employee. The guidance here is intended to be platform-agnostic, but I will reference a few Microsoft-specific tools that are free to use and particularly helpful for understanding how your site is being interpreted by machines and humans alike.

The Case For Using Auto-Generated Creative

The number one reason to consider auto-generated creative is simple: time savings.

At its core, auto-generated creative takes your existing assets and adapts them to meet the formatting and placement needs of different inventory. Instead of building bespoke creative for every surface, you allow the system to reassemble what you already have in ways that let you reach more people with less manual effort.

The inputs for auto-generated creative typically come from your website, your existing ads, and, in some cases, proven concepts that are broadly applicable across advertisers. You can also apply brand style guides to ensure fonts, colors, and creative (including tone of voice) are compliant with brand standards.

Image from author, April 2026

Advertisers who are able to say yes to auto-generated creative often see faster campaign ramp-up. Eligibility for more placements means more opportunities to enter auctions, and fewer bottlenecks make it easier for the system to test and learn which creative works best in which contexts.

Because auto-generated creative allows advertisers to be eligible for more placements (with Ad Rank determining the ad shown), it naturally has access to more impressions. More impressions create more opportunities to win auctions, which can translate into incremental volume that would have been difficult to capture using tightly controlled, manually built assets alone.

Auto-generated creative does not have to be all-or-nothing. There is also a hybrid approach where humans partner with AI systems. That can mean using in-platform tools from Google or Microsoft, or external AI tools, to help generate ideas, headlines, or variations that are then reviewed, approved, and manually uploaded.

Some advertisers draw a distinction between AI-assisted ideation and auto-generated creative. In practice, if you are using AI at any point to help create or shape ad messaging, there is already an element of automation in the process.

The Case Against Using Auto-Generated Creative

There are absolutely valid reasons to opt out.

The most pressing is brand compliance. If your organization requires explicit approval for every piece of creative before spend can occur, allowing systems to dynamically generate variations may simply not be permissible.

That said, many platforms provide preview tools that show examples of how creative may appear.

Image from author, April 2026

If you are willing to explore those previews and lean into tools like brand kits that enforce fonts, colors, and tone, it may be possible to secure internal approval where it previously felt impossible.

Another reason advertisers shy away from auto-generated creative is reliance on proven assets with no tolerance for variation. Sometimes budget approval is contingent on using specific creative that has already demonstrated performance, and there is no room to test alternatives.

Image from author, April 2026

It is worth noting, however, that auto-generated creative already relies heavily on your existing assets. If the primary concern is avoiding untested messaging, allowing your site content and proven ads to inform the system can help mitigate that risk.

Bonus Tip: Using Auto-Generated Creative To Understand How AI Sees You

One of the most underrated benefits of campaigns like Performance Max, Dynamic Search Ads, and other feed-based or keywordless formats is that they reveal how well platforms understand your site and landing pages.

Image from author, April 2026

If you strongly disagree with the creative shown in previews for AI Max, Performance Max, or similar formats, that is a warning sign. Running budget to those pages risks confusing users if the system’s interpretation does not align with your intended messaging.

These tools can function as diagnostic instruments, not just delivery mechanisms.

Image from author, April 2026

You can go a step further by pairing them with behavioral analysis tools like Microsoft Clarity, which shows how users actually interact with your site. When creative interpretation and user behavior do not line up, the issue is often not the ads, but the underlying content.

Another advantage of modern campaign creation tools is their built-in AI editing capabilities. Even if you never allow auto-generated creative to go live, you can still use these tools to explore tone shifts, rewrites, and messaging ideas that inform your manual creative work.

Image from author, April 2026

There are many use cases for these systems beyond automation alone. Insight generation is one of the most valuable.

Final Takeaways

At its core, the decision to lean into auto-generated creative comes down to whether your brand is allowed to test.

If the answer is yes, there is little downside to experimenting. Auto-generated creative is largely built from your existing assets, and poor results are often a signal that your landing pages or messaging need refinement anyway.

If the answer is no, whether due to brand compliance, limited testing bandwidth, or the need to lock spend behind proven creative, it is entirely reasonable to opt out.

Used thoughtfully, it can save time, unlock scale, and surface insights about how your brand is understood by machines and users alike. Used blindly, it can create risk. The goal is not blind trust, but informed experimentation.

Hope you found this helpful, and I’ll see you next month for another edition of Ask A PPC.



Featured Image: Paulo Bobita/Search Engine Journal

Google Is Replacing Dynamic Search Ads With AI Max via @sejournal, @brookeosmundson

Google just announced the deprecation of Dynamic Search Ads (DSA) and is officially moving its legacy capabilities into AI Max.

Starting in September, eligible campaigns using Dynamic Search Ads (DSA), automatically created assets (ACA), and campaign-level broad match settings will automatically upgrade to AI Max.

While advertisers have speculated about this change for months, the update is now official.

If you’re running Dynamic Search Ads, automatically created assets (ACA), and/or campaign-level broad match settings, keep reading to understand how your campaigns will be affected.

DSA Features Migrating Into AI Max

Beginning in September, advertisers will no longer be able to create new DSA campaigns through Google Ads, Google Ads Editor, or the Google Ads API. Existing eligible campaigns will be migrated automatically.

Google positions AI Max as the next generation of DSA.

Historically, DSA helped advertisers capture additional search demand beyond their keyword lists by using website content to generate headlines and choose landing pages. That made it useful for large sites, inventory-heavy businesses, and advertisers looking for broader query coverage.

AI Max keeps that concept but adds more signals and controls.

According to Google, AI Max combines advertiser assets, landing page content, and broader intent signals to help match ads to more relevant queries. It also adds controls such as:

  • Brand controls
  • Location controls
  • Text guidelines
  • Search term matching
  • Text customization
  • Final URL expansion

Image credit: Google, April 2026

Google says campaigns using the full AI Max feature suite see an average of 7% more conversions or conversion value at a similar CPA or ROAS compared with using search term matching alone.

Google is also splitting the transition into two phases.

Phase 1: Voluntary Upgrades

Google announced that upgrade tools for existing DSA users are rolling out this week.

DSA advertisers will receive tools to move historical settings and data into new standard ad groups. ACA and campaign-level broad match users may see in-platform prompts to upgrade to AI Max.

Phase 2: Automatic Upgrades

Starting in September, remaining eligible campaigns with legacy settings will be upgraded automatically.

Google says all eligible upgrades are expected to finish by the end of September.

It’s important to note how legacy settings will be automatically migrated over to AI Max settings:

  • DSA users will have all three AI Max features enabled by default (search term matching, text customization, final URL expansion)
  • ACA users will have two AI Max features enabled by default (search term matching and text customization)
  • Campaign-level broad match users will have just search term matching enabled by default

What Advertisers Can Do To Prepare For The AI Max Transition

If you still rely on Dynamic Search Ads, now is the time to review where those campaigns sit in your account and how much value they drive.

Some advertisers use DSA as a core growth lever. Others use it as a low-maintenance catch-all for incremental growth. Your next steps may differ depending on that role.

#1. Review Your DSA Performance Now

Before the automatic upgrades begin, pull recent performance data for your DSA campaigns.

Look at conversions, assisted conversions, search terms, landing pages, and efficiency metrics. That baseline will help you judge whether performance changes after migration are positive, neutral, or negative.

#2. Upgrade On Your Timeline Before Automatic Upgrades

Google is encouraging advertisers to move early, and there is a practical reason for that.

A voluntary upgrade gives you more control over settings, structure, and testing than waiting for an automatic migration.

If DSA is important to your business, it makes sense to evaluate the upgrade before September.

#3. Test AI Max Impact

Google recommends using one-click experiments because they give advertisers a cleaner way to compare performance before making a full rollout decision. While I haven’t tried this yet, I will be testing it myself in the coming months.

Even if AI Max improves results on average, averages do not guarantee results in every account. Lead generation, e-commerce, local services, and B2B advertisers may all see different outcomes.

Run controlled tests where possible and compare against your existing baseline.

#4. Lean Into Additional Controls

Many advertisers asked for more steering options in search automation, and Google has listened to our feedback. AI Max includes more controls than legacy DSA.

Spend time understanding brand settings, location controls, and text guidance. Those inputs may matter as much as the automation itself.

#5. Watch Search Match and Landing Page Quality

Once you’ve migrated your DSAs to AI Max, watch closely which search terms your campaigns now match. How do they compare to past DSA performance?

You’ll also want to pay attention to the landing pages used (if final URL expansion is turned on), lead quality, and conversion paths.

Looking Ahead

Dynamic Search Ads have helped advertisers scale beyond their current keyword lists for years. Now, Google is folding that capability into its broader AI Max framework.

The clearest next step is to review where DSA is still active in your account and decide whether to migrate on your own timeline or wait for the automatic upgrade.

The real focus should be protecting performance during the transition and understanding where AI Max improves results, or where it needs tighter management control.

How To Measure PPC Performance When AI Controls The Auction via @sejournal, @brookeosmundson

For most of the history of paid search, performance measurement followed a clear cause-and-effect relationship.

Advertisers controlled the inputs inside their campaigns like bid strategies, keyword and campaign structure, ad copy, and landing pages. All these factors contributed to conversion performance in some shape or form.

When performance changed, the explanation was usually traceable: a new keyword theme improved conversion rates, or a bidding strategy increased efficiency.

That simple cause-and-effect framework is breaking down in real time, and has been for a while.

Over the past several months, Google has accelerated its transition toward AI-driven campaign types like Performance Max and Demand Gen, along with features inside them such as AI Max and AI-driven ad creative components.

Not only do these change how campaigns are set up and managed, but they also change how performance must be measured.

Advertisers increasingly receive conversions from queries they did not explicitly target, from creative assets that are automatically assembled, and from placements distributed across multiple channels. In this environment, measuring performance by analyzing individual campaign inputs becomes less useful.

The real challenge is understanding how automated systems generate outcomes.

This article provides a measurement framework for that reality. It explains what has changed in advertising platforms, how PPC teams can evaluate performance when automation controls more of the auction, and how practitioners can communicate results clearly to leadership.

The Current Measurement Crisis In PPC

Right now, most discussions about AI in PPC tend to focus on automation features like campaign types, targeting capabilities, ad creative development, and bid strategy expansion.

But there’s a deeper shift happening in measurement that isn’t talked about as much.

Automation introduces a larger set of variables influencing each auction. When platforms make targeting, bidding, and placement decisions dynamically, isolating the impact of individual campaign inputs becomes difficult.

Recent platform updates have not only changed how campaigns are managed, but also how performance should be interpreted. The connection between action and outcome is less direct, and in many cases, partially obscured.

Several platform developments illustrate why traditional measurement methods are becoming less reliable.

AI Max Expands Queries Beyond Keyword Lists

In my opinion, AI Max represents Google’s most aggressive step toward intent-driven matching.

Instead of relying solely on advertiser-defined keywords, AI systems evaluate contextual signals, user behavior patterns, and historical performance data to match ads with queries that may not exist in the account.

Not only that, but AI Max goes beyond search terms. It can also change your ad assets for more tailored messaging when Google deems it appropriate.

For PPC managers, this introduces a structural shift in how to measure performance. Conversions may originate from queries that were never explicitly targeted.

And we knew that something like this was coming. Back in 2023, Google first publicly used the word “keywordless” in communications when talking about Search and Performance Max.

Source: Mike Ryan, X.com, March 2026

For example, a retailer who bids on “trail running shoes” may now appear for search terms like:

  • “best shoes for rocky terrain running”
  • “ultra marathon footwear”
  • “durable hiking running hybrids”

These queries reflect the same intent, but they don’t map cleanly back to the original keyword strategy.

Instead of trying to force these queries into keyword-level reporting, try analyzing performance by grouping into intent clusters. By evaluating conversion rate and revenue at the category level, teams can maintain strategic clarity even as query matching expands.
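To illustrate the idea, here is a rough sketch of grouping search terms into intent clusters and rolling performance up to the cluster level; the terms, cluster names, and numbers are all hypothetical:

```python
from collections import defaultdict

# Hypothetical search-term rows: (query, intent_cluster, clicks, conversions)
rows = [
    ("best shoes for rocky terrain running", "trail running shoes", 120, 6),
    ("ultra marathon footwear",              "trail running shoes", 80,  5),
    ("durable hiking running hybrids",       "trail running shoes", 40,  1),
    ("running shoe repair near me",          "other",               30,  0),
]

# Aggregate clicks and conversions per intent cluster.
clusters = defaultdict(lambda: {"clicks": 0, "conversions": 0})
for query, cluster, clicks, convs in rows:
    clusters[cluster]["clicks"] += clicks
    clusters[cluster]["conversions"] += convs

# Evaluate conversion rate at the cluster level, not the query level.
for name, c in clusters.items():
    cvr = c["conversions"] / c["clicks"] if c["clicks"] else 0.0
    print(f"{name}: {c['clicks']} clicks, {c['conversions']} conv, CVR {cvr:.1%}")
```

The point of the sketch is the aggregation step: individual queries stay messy, but the cluster-level conversion rate is stable enough to act on.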

Google Ads already does a decent job of this in the Insights tab within the platform. They have a “Search terms insights” report that groups queries into “Search category,” where you can see conversions and search volume.

Screenshot by author, March 2026

Performance Max Distributes Spend Across Multiple Channels

Performance Max can further complicate measurement by distributing budget across Search, YouTube, Display, Discover, Gmail, and Maps.

Until last year, there was little to no transparency into how spend was allocated across those channels. In April 2025, Google launched the long-awaited channel reporting feature for the PMax campaign type, which now surfaces channel-level spend, better search terms data, and expanded asset performance metrics.

For example, say you have a $40,000 monthly PMax campaign budget and see this channel breakdown:

Channel    Spend      Conversions
Search     $18,500    310
YouTube    $10,200    82
Display    $7,100     45
Discover   $4,200     28

If Search drives the majority of conversions, but YouTube consumes a large portion of spend, PPC marketers could try the following:

  • Test separating out branded search outside of PMax.
  • Refine asset groups to improve search alignment.
  • Run controlled experiments comparing PMax vs. Search.

Measurement becomes an exercise in interpreting how the system allocates spend rather than controlling each placement.
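A quick cost-per-conversion calculation on the hypothetical figures above makes the imbalance easy to spot (this is illustrative arithmetic, not platform output):

```python
# Channel spend and conversions from the hypothetical $40,000/month PMax example.
channels = {
    "Search":   {"spend": 18_500, "conversions": 310},
    "YouTube":  {"spend": 10_200, "conversions": 82},
    "Display":  {"spend": 7_100,  "conversions": 45},
    "Discover": {"spend": 4_200,  "conversions": 28},
}

# Cost per conversion per channel highlights where spend is least efficient.
for name, c in channels.items():
    cpa = c["spend"] / c["conversions"]
    print(f"{name}: ${cpa:,.2f} per conversion")
```

In this example, YouTube's cost per conversion is more than double Search's, which is the kind of gap that justifies the experiments listed above.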

Ads Are Beginning To Appear Inside AI Conversations

Conversational search introduces an entirely new layer of complexity into PPC measurement.

Google is now testing shopping results embedded directly within AI Mode, allowing users to compare products without leaving the interface.

Google isn’t the only one doing this. ChatGPT announced on Jan. 16, 2026, that it would begin testing ads for its Free and Go users in the United States.

No matter which platform is running or testing ads in AI conversations, it’s clear that the measurement gap hasn’t been solved, which leaves many PPC managers with unanswered questions.

In my own recent search, I came across ads at the end of an AI Mode thread when I searched “noise cancelling headphones”:

So, if I were to click on one of those sponsored ads but convert at a later time, that attribution is unclear right now. Will my conversion be measured from the AI recommendation, the product listing click, or a later branded search?

These journeys challenge traditional attribution models, which were built around linear click paths rather than multi-step AI interactions.

Why Traditional PPC Metrics Are No Longer Enough

Many PPC reporting dashboards still rely on communicating metrics like impressions, clicks, conversion rate, and return on ad spend.

While some of those metrics remain useful, they no longer tell the full user story in automated, AI-driven environments.

These three shifts explain why.

1. Attribution Windows Are Expanding

AI-assisted search increases both the length and complexity of user journeys.

Research from Google and Boston Consulting Group shows that “4S behaviors” (streaming, scrolling, searching, and shopping) have completely reshaped how users discover and engage with brands.

When AI introduces product recommendations earlier in a user’s journey, the time between initial interaction and conversion often grows. This could be because that user is still at the beginning of their research phase. Introducing a product earlier does not mean the user will be ready to purchase it any earlier.

So, what can marketers do about that gap now? Here are a few helpful tips to better understand how users are engaging with your business:

  • Review conversion lag reports in Google Ads.
  • Analyze time-to-conversion in GA4. Are there any differences or shifts in the last three, six, or nine months?
  • Extend attribution windows to 60-90 days where appropriate.

This ensures automated systems receive more accurate feedback on what drives conversions, and when.

2. Organic Search Is Losing Click Share

Search results now include AI Overviews, scrollable shopping modules at the top, and expanded ad placements across all devices.

Where does that leave organic listings?

A study conducted by SparkToro and Datos found that nearly 60% of Google searches end without a click.

This reduces organic traffic even more and shifts more demand capture towards paid media.

From a measurement standpoint, PPC should be evaluated alongside organic performance when possible.

Tracking blended search revenue provides a more accurate view of total search performance, rather than isolating paid channels.

3. AI Systems Optimize For Outcomes Rather Than Inputs

Traditional PPC management focused on inputs like keywords, bids, and ad copy to influence performance directly.

AI systems work differently. Instead of optimizing individual levers, they evaluate large sets of signals in real time to determine which combinations are most likely to drive conversions.

This changes what measurement needs to do. Instead of asking which specific keyword or bid strategy adjustment improved performance, marketers need to evaluate whether the platform is producing the right business outcomes.

As platforms take over more of the execution, measurement has to focus less on the mechanics and more on whether automation is driving profitable, meaningful results.

The New Measurement Stack For AI-Driven PPC

If AI is now controlling more of the auction, then PPC teams need a different way to evaluate performance.

The old measurement stack was built around visibility into campaign inputs. You could look at keyword performance, search terms, ad copy, device segmentation, and bid adjustments to understand what was working. That model starts to fall apart when automation is making many of those decisions on your behalf.

The replacement is a new measurement stack built on four layers:

  • Profitability.
  • Incrementality.
  • Blended acquisition efficiency.
  • First-party conversion quality.

Together, these give marketers a more accurate picture of whether automation is actually helping the business grow.

Start With Profit, Not Just ROAS

ROAS still has value, but it should no longer be treated as the primary success metric in highly automated campaigns.

The problem is that AI-driven systems are often very good at capturing demand that already exists. That can make campaign efficiency look strong on paper, even if the business is not gaining much incremental value.

A campaign with a 700% ROAS may still be underperforming if it is primarily driving low-margin products, repeat purchasers, or orders that would have happened anyway.

That is why profitability should sit at the top of the measurement stack.

Instead of asking, “Did this campaign generate enough revenue?” marketers should be asking, “Did this campaign generate profitable revenue?”

For ecommerce brands, this could mean incorporating:

  • Contribution margin.
  • Product margin by category.
  • Average order profitability.
  • New customer revenue vs. returning customer revenue.

A simple starting point is to compare campaign revenue against both ad spend and cost of goods sold.
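As a minimal sketch of that starting point, using entirely hypothetical figures:

```python
def campaign_contribution(revenue: float, ad_spend: float, cogs: float) -> dict:
    """Compare campaign revenue against both ad spend and cost of goods sold."""
    roas = revenue / ad_spend
    # Contribution: what is left after paying for the goods and the media.
    contribution = revenue - cogs - ad_spend
    return {"roas": roas, "contribution": contribution}

# A 700% ROAS campaign can still lose money once margins are included.
result = campaign_contribution(revenue=70_000, ad_spend=10_000, cogs=62_000)
print(result)
```

Here the campaign posts a 7.0x ROAS yet a negative contribution, which is exactly the gap between platform efficiency and profitable revenue described above.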

For lead gen advertisers, the same principle applies, just with different inputs:

  • Qualified lead rate.
  • Sales acceptance rate.
  • Close rate by campaign.
  • Revenue per opportunity.

If AI is optimizing toward cheap conversions that never turn into revenue, the system is learning the wrong lesson.

Add Incrementality To Separate Demand Capture From Demand Creation

The second layer of the stack is incrementality. This is where many PPC measurement frameworks still fall short.

Automation can be highly effective at finding conversions, but that does not automatically mean it is generating new business. In many cases, AI systems are simply getting better at intercepting users who were already on their way to converting.

If your campaign is mostly capturing existing demand, performance may look strong inside the ad platform while actual business lift remains modest.

This is why incrementality testing has become much more important in the AI era.

For PPC teams, this means at least part of measurement should be designed to answer: “Would this conversion have happened without the ad?”

You don’t need enterprise-level media mix modeling to get started. A few practical approaches include:

  • Geo holdout tests. Pause or reduce spend in a small set of markets while maintaining normal activity elsewhere.
  • Use Google incrementality testing. Google reduced the minimum spend for incrementality testing in its platform to just $5,000, making it more affordable for many advertisers.
  • Branded search suppression tests. In select markets or windows, test the impact of reducing branded spend where brand demand is already strong.
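A geo holdout readout can be sketched in a few lines; the market sizes and conversion counts here are hypothetical:

```python
# Hypothetical geo holdout: ads paused in "holdout" markets, running in "active".
active  = {"users": 50_000, "conversions": 1_500}   # ads on
holdout = {"users": 50_000, "conversions": 1_200}   # ads off

cvr_active  = active["conversions"] / active["users"]
cvr_holdout = holdout["conversions"] / holdout["users"]

# Incremental lift: share of active-market conversions attributable to ads.
lift = (cvr_active - cvr_holdout) / cvr_active
print(f"Incremental lift: {lift:.0%}")
```

In this sketch, 80% of the conversions would have happened anyway; only the remaining share is genuine lift from the ads.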

Answering this question does not mean automation is bad. It means PPC teams need a better way to distinguish between platform efficiency and true business lift.

Use Blended CAC To Measure Search More Realistically

The third layer of the new measurement stack is blended acquisition efficiency.

As AI Overviews, AI Mode, and other search changes continue to reduce traditional organic click opportunities, PPC should not be measured in a vacuum.

That is especially true for brands where paid and organic search are increasingly working together to capture the same demand.

A campaign may appear less efficient in-platform while still playing a critical role in maintaining total search visibility and revenue.

That is where blended customer acquisition cost (CAC) becomes useful.

Blended CAC looks at total acquisition spend across relevant channels and divides it by the total number of new customers acquired.

The formula for this is simple:

Total acquisition spend ÷ total new customers = blended CAC

This gives leadership a much more realistic picture of what it actually costs to grow the business.
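Applying the formula is straightforward; the channel names and spend figures below are hypothetical:

```python
def blended_cac(spend_by_channel: dict, new_customers: int) -> float:
    """Total acquisition spend across channels divided by total new customers."""
    return sum(spend_by_channel.values()) / new_customers

# Hypothetical monthly acquisition spend across channels.
spend = {"paid_search": 40_000, "paid_social": 15_000, "seo_content": 5_000}
print(f"Blended CAC: ${blended_cac(spend, new_customers=600):,.2f}")
```

The useful property is that no single channel can game the number: shifting budget between paid and organic changes the mix, but the blended cost per new customer stays honest.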

It also helps PPC managers explain why paid search may need to carry more weight when organic search visibility declines due to AI-driven search features.

In other words, this metric helps move the conversation away from “Did Google Ads hit target ROAS?” and toward “What is it costing us to acquire a customer across modern search systems?”

Make First-Party Conversion Quality The Foundation

The final layer of the stack is first-party data quality. This is the part many advertisers still underestimate.

As platforms automate more of the targeting, bidding, and matching logic, the quality of the signals you send back becomes even more important. If the platform is deciding who to show ads to and which conversions to optimize toward, your job is to make sure it is learning from the right outcomes.

That means not all conversions should be treated equally.

If a lead form completion, low-value purchase, repeat customer order, and high-margin new customer sale are all fed back into the system the same way, automation will optimize toward volume, not value.

For PPC teams, that means the measurement stack should include a serious review of conversion quality inputs, including:

  • Offline conversion imports.
  • CRM-based revenue mapping.
  • New vs. returning customer segmentation.
  • Lead quality or opportunity-stage imports.
  • Customer lifetime value indicators where available.
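One way to keep automation from optimizing toward volume is to report differentiated conversion values for each outcome. The mapping below is purely illustrative; the outcome names and values are hypothetical, and a real setup would feed these back through offline conversion imports or value rules:

```python
# Hypothetical value weights for different conversion outcomes, used when
# reporting conversion value back to the ad platform (e.g., via offline import).
CONVERSION_VALUES = {
    "lead_form_completion":     5.0,    # early signal, low certainty
    "low_value_purchase":       20.0,
    "repeat_customer_order":    30.0,
    "high_margin_new_customer": 250.0,  # the outcome the business actually wants
}

def conversion_value(outcome: str) -> float:
    """Return the value to report for a given conversion outcome."""
    return CONVERSION_VALUES.get(outcome, 0.0)

print(conversion_value("high_margin_new_customer"))
```

With differentiated values in place, a bidding system has a reason to prefer one high-margin new customer over dozens of cheap form fills.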

This is where measurement and optimization start to overlap.

If the wrong conversions are being measured, the wrong outcomes will be optimized.

That is why first-party data is not just a reporting issue. It is the foundation of the entire AI-era measurement stack.

What To Show Your CMO Or Clients

One of the most difficult aspects of managing automated campaigns is explaining performance to leadership teams.

Executives often expect reporting frameworks built around the mechanics of traditional campaign management. In automated environments, those indicators tell only a small part of the story.

A more effective reporting structure focuses on three layers that connect advertising performance to business outcomes.

The first layer should always focus on the metrics that leadership teams care about most. Revenue growth, contribution margin, and customer acquisition cost provide a direct connection between marketing activity and company performance. These indicators allow executives to evaluate marketing investments in the same framework they use to evaluate other business decisions.

Instead of presenting keyword-level reports, PPC leaders should begin with a clear summary of how paid media contributed to revenue and profit during the reporting period. If revenue increased by 18% quarter over quarter while customer acquisition costs remained stable, that outcome provides a far more meaningful signal than any individual campaign metric.

The second layer of reporting should explain how paid media contributes to the broader acquisition ecosystem. As AI-driven search experiences reshape the visibility of organic results, paid media often carries a larger share of the responsibility for capturing demand.

Blended customer acquisition cost provides an effective way to communicate this relationship. By combining marketing spend across channels and dividing it by the total number of new customers acquired, organizations gain a clearer understanding of the overall efficiency of their acquisition strategy.

This approach also helps executives understand how paid search interacts with organic search, social advertising, and other marketing channels. Rather than evaluating PPC in isolation, leadership can see how the entire acquisition system performs.

The final layer of reporting should focus on experimentation and strategic insights. Automated systems constantly evolve, and the best way to evaluate them is through structured experimentation.

Reports should include summaries of campaign experiments, including:

  • The hypotheses tested.
  • The metrics evaluated.
  • The outcomes observed.

For example, if enabling AI-driven query expansion increased conversion volume while maintaining acceptable acquisition costs, that result provides valuable guidance for future campaign structure decisions.

Equally important is identifying metrics that are becoming less relevant.

Keyword-level performance reports, average ad position, and manual bid adjustments were once central components of PPC reporting. In automated campaign environments, those metrics often provide little strategic value. Continuing to emphasize them can distract leadership from the outcomes that truly matter.

Effective reporting in the AI era should emphasize growth, profitability, and strategic learning rather than operational mechanics.

Measurement Gaps That Still Exist

Despite improvements in automation and reporting transparency, several emerging advertising experiences remain difficult to measure.

One example is the growing presence of personalized offers within AI-driven shopping experiences. Google’s Direct Offers feature allows retailers to surface dynamic discounts during AI-generated shopping recommendations. While the feature may influence purchase decisions, advertisers currently have limited visibility into how frequently those offers appear or how strongly they influence conversion behavior.

Without that visibility, marketers cannot easily determine whether the discounts are generating incremental revenue or simply reducing margins on purchases that would have occurred anyway.

Another emerging measurement challenge involves conversational commerce. Google has begun exploring “agentic commerce” systems where AI assistants help users research and purchase products across multiple retailers.

In these environments, the user journey may involve several conversational prompts before a purchase occurs. The traditional concept of an ad impression or click may become less meaningful when AI systems guide the user through a multi-step research process.

As these experiences evolve, marketers will need new attribution models capable of evaluating influence across conversational journeys rather than isolated interactions.

These developments highlight the importance of ongoing experimentation and advocacy from advertisers. Measurement frameworks will need to evolve alongside the platforms themselves.

The Future Of PPC Measurement

Automation has changed the mechanics of paid advertising, but it has not eliminated the need for strategic oversight.

If anything, the role of human expertise has become more important.

AI systems are extremely effective at executing campaigns across large datasets and complex auctions. What they cannot do on their own is define the business outcomes that matter most or interpret performance within the broader context of organizational growth.

The most effective PPC teams are adapting to this reality. Instead of focusing exclusively on the mechanics of campaign management, they are investing more effort in defining profitability metrics, designing incrementality tests, and building reporting frameworks that connect advertising performance to business outcomes.

Measurement in the AI era will look different from the measurement frameworks that defined the early years of paid search. The focus will shift away from controlling individual campaign inputs and toward understanding how automated systems generate value for the business.

For PPC practitioners and marketing leaders alike, that shift represents the next stage in the evolution of paid media strategy.

Featured Image: Roman Samborskyi/Shutterstock

Google’s Push For Data Strength Is Really A Push For Better Bidding via @sejournal, @brookeosmundson

Google keeps coming back to the same message this year: your AI is only as good as the data feeding it.

That message has shown up across the Ads Decoded podcast, Data Manager updates, tagging guidance, partner integrations, and now even developer-focused content like the Ads DevCast podcast. It seems to reflect a broader shift in how Google expects campaigns to be built and optimized.

The issue is not that advertisers lack data. Most accounts have plenty of it. The problem is how that data has been structured, selected, and fed into bidding systems over time.

As Google leans further into AI-driven optimization, that gap becomes more visible for advertisers who don’t have a sound conversion setup. Campaign performance is increasingly tied to how clearly the system understands what success looks like.

Why Google Is Pushing Advertisers To Rethink Conversion Strategy

For years, many advertisers treated conversion tracking as something to expand rather than refine.

If a platform made it easy to track an action, it got added. If a CRM could send something back, it got imported. If a new conversion type became available, it often made its way into the account without much resistance.

On paper, that sounds like a more complete dataset. The more data, the better – right?

In reality, it’s created a lot of noise, making it harder for machines to learn what truly matters.

Campaigns are often optimized toward a mix of actions that do not share the same level of intent, value, or timing.

Some signals are high quality but might have low volume due to a delay in sales cycle activity. Others may be immediate but loosely tied to actual business outcomes. Many accounts end up blending all of them together under a single bidding strategy for the sake of measuring everything.

That worked well enough when automation was less dependent on precise inputs.

It becomes a bigger problem when bidding systems are expected to make decisions based on patterns in that data.

Where Most Conversion Setups Break Down

In one of the recent Ads Decoded podcast episodes, Google made clear what it is trying to correct with its guidance around lead generation. The focus is on mapping the full customer journey and identifying the conversion point that provides a usable signal for bidding.

That means looking at three things at the same time:

  1. How predictive the action is of real business value.
  2. How frequently it occurs.
  3. How quickly it happens after the initial interaction.

Many advertisers still default to the deepest possible conversion, assuming that optimizing toward the final sale will produce the best outcome across every campaign.

The issue isn’t the goal itself, but how usable that signal is for the system in a higher-funnel campaign. And this is where many conversion strategies start to fall apart.

If that action happens infrequently or takes weeks to materialize, it limits how much the bidding system can learn from it. The result is often slower optimization, higher volatility, and less efficient scaling.

On the other end, optimizing toward early-stage actions without considering quality can inflate volume without improving actual outcomes.

Selecting the right signal requires matching the conversion to the role the campaign plays and ensuring that signal is both meaningful and usable for bidding.

That shift requires more intentional decision-making than many accounts have historically applied to conversion setup. It also introduces a level of discipline that many advertisers have not needed when automation was less dependent on signal quality.

Why Is Google Putting So Much Weight On Data Strength?

Google is not being subtle about the Data Strength push. It’s showing up in product updates, integrations, tagging changes, and even in the way Google is speaking to both advertisers and developers.

Part of the reason is practical. Advertisers have lost visibility into many of the signals they used to rely on. Privacy changes, browser restrictions, and platform limitations have made measurement less complete than it used to be.

At the same time, Google’s bidding systems are being asked to do more with less. That puts more pressure on the signals that are still available.

This is where Data Strength comes in. Google is trying to make those signals more reliable, easier to connect, and more useful for optimization. Data Manager, tag gateway, and partner integrations all support that goal.

The expansion of integrations with platforms like HubSpot, Zapier, and Cloudflare also supports this effort. Instead of relying on custom implementations, advertisers can connect the systems where their data already exists with less effort.

This improves consistency in how data flows into bidding systems.

It also reinforces Google’s broader goal of making its automation more effective in a lower-signal environment.

Does This Point To A Broader Role For Google?

I also think there is a bigger shift underneath this push.

Google is moving closer to the systems where business outcomes actually happen, not just where ads are served. Connecting CRM data, offline conversions, and audience signals allows Google’s platforms to better understand what a “good” customer looks like beyond the initial click or form fill.

That can absolutely help advertisers improve performance.

At the same time, it positions Google as more than just an ads platform. It becomes more integrated into how businesses measure performance, define value, and connect marketing efforts back to real outcomes.

Where Does Server-Side Tagging Fit In With This?

There has been a lot of confusion around server-side tagging and how it relates to what Google is promoting today.

They are related, but they aren’t the same thing.

Google tag gateway focuses on how the Google tag is delivered and how requests are routed through first-party infrastructure. It is a way to make existing tagging setups more resilient and aligned with privacy expectations.

Server-side tagging is a broader architectural approach. It shifts data processing from the browser to a server environment that the advertiser controls. This can improve site performance, provide more control over data handling, and support more advanced use cases across multiple platforms.

In practical terms, tag gateway is often a more accessible first step for advertisers looking to improve data reliability without a full infrastructure overhaul.

Server-side tagging is a larger investment and tends to be more relevant for organizations with more complex data requirements or stricter governance needs.

The two approaches can work together, and Google documentation often recommends combining them for a more durable setup.

A Thoughtful Approach To Data Strength

The increased focus on Data Strength is directionally positive, but it does not remove the need for careful decision-making.

Simplifying setup does not automatically lead to better outcomes. If conversion actions are poorly defined or not aligned with campaign intent, connecting them more efficiently will not improve performance.

If you’re a marketer who isn’t directly involved with setting up conversions, it may be worthwhile to meet with your Analytics teams. Create a list of must-have conversion events or actions you need to track for campaigns (online and/or offline), and cross-check that list with what’s currently set up.

There is also a governance component to consider. As tagging becomes more automated and data collection expands, teams need to understand what is being captured, how it is being used, and how it aligns with internal policies.

Google has noted that expanded automatic event collection may result in additional data being sent to its systems, which should be reviewed as part of implementation.

Another consideration is how platform-specific improvements fit into a broader measurement strategy.

Google’s push around Data Strength is primarily focused on improving performance within its own arena. That is valuable, but it should be complemented by broader measurement approaches when making budget and channel decisions.

This is where initiatives like Meridian come into play. Google has positioned Meridian as an open-source marketing mix modeling solution to help advertisers evaluate performance across channels and connect those insights to budget planning.

How Google Is Reinforcing Data Strength Across The Industry

One of the more interesting aspects of this push is how consistently it’s showing up across different mediums.

Product updates are only one piece of it.

Google is also investing in education and communication around Data Strength, using formats that reach both marketers and developers. Ads Decoded continues to focus on practical campaign strategies, including how to map the customer journey and select the right conversion signals.

At the same time, newer initiatives like Ads DevCast are aimed at a more technical audience, with episodes focused on topics like the Data Manager API and data integration workflows. The goal seems to be to meet teams where they are, whether they are responsible for campaign strategy or the underlying implementation.

The Data Manager API itself reinforces this direction. Google is shifting workflows like Customer Match into a system designed specifically for data connectivity, privacy controls, and more consistent ingestion of first-party data.

That combination of product changes, partnerships, and education signals a coordinated effort to strengthen how data is collected, connected, and used across the entire advertising ecosystem.

What Advertisers Are Saying About The Data Strength Conversation

The discussion around Data Strength and lead quality has sparked a lot of needed conversations between Google and advertisers.

In reaction to the Ads Decoded episode “Beyond the Form Fill,” many advertisers are happy that B2B businesses are getting the attention they’ve been asking for. Melissa Mackey praised the episode, stating that “All lead gen advertisers should go listen.” A few marketers, including Robert Peck, noted the need to reduce or purge the bot leads they see in their B2B campaigns.

Google also did a series of posts and interviews with experts on the importance of data strength. All seemed to share a similar sentiment, and this is where I started seeing more and more advertisers connect the dots.

Adrija Bose commented on a discussion with Kamal Janardhan, Senior PM Director at Google, and Jeff Sauer, CEO of MeasureU:

What strikes me most is the framing of AI as the engine, not the strategy. Too many leaders conflate the two, expecting AI to compensate for weak signals. This post nails why high-quality data is non-negotiable for meaningful outcomes.

Jonathan Reed also voiced support for the renewed focus on data strength, stating that while it’s a full-time job for his team, they are “seeing dramatic increases in conversions, and dramatic decreases in cost!”

What Does This Mean For Your Campaigns?

This shift will show up pretty quickly once you look at how your campaigns are actually set up.

A lot of accounts still treat conversion tracking as something to build once and leave alone. But if the signals feeding your campaigns don’t match the intent behind the queries you’re targeting, it becomes harder for bidding to do its job well.

That usually shows up in ways you’ve probably already seen, where performance feels inconsistent and scaling becomes more difficult. Even small changes can create overly volatile swings.

None of that is coming from one setting or one campaign. It is usually a reflection of how the system is learning from the data it is given.

That is why this push toward Data Strength matters so much.

It forces a closer look at which signals are actually being used for optimization, how reliable they are, and whether they reflect real business outcomes.

In some cases, that means connecting better data from your CRM. In others, it is fixing how your tags are set up or how conversions are being defined in the first place.

As Google continues to lean into this direction, the gap will likely grow between accounts that are intentional about their data and those that aren’t.

Featured Image: Garun.Prdt/Shutterstock

From T-Shaped To M-Shaped: The PPC Career Evolution Nobody Is Talking About

Ask any PPC professional what career shape they are working toward, and most will say T-shaped. One deep specialism, broad supporting knowledge across adjacent areas. It became the dominant career framework in marketing over the last decade, and for good reason. In a world where platforms were simpler and clients valued versatility, the T-shaped practitioner was exactly what the market wanted.

That model is no longer enough.

Not because T-shaped practitioners are bad at their jobs or the model does not work anymore. Most are excellent. But the conditions that made T-shaped the right target have changed fundamentally, and the practitioners commanding the highest compensation in 2026 are not T-shaped. They are something more evolved: M-shaped. Two or three deep pillars of expertise, sitting on a broad foundation of knowledge across five to seven adjacent domains. It looks like a generalist from a distance and like a specialist up close, depending on which conversation you are in.

I want to make the case that M-shaped is not just an incremental upgrade on T-shaped. It is a fundamentally different career posture, built for a fundamentally different market.

Why T-Shaped Made Sense, And Why It Is No Longer Enough

The T-shaped model solved a real problem. Early in a career, being good at one thing gets you hired. Being good at only one thing gets you stuck. T-shaped gave practitioners a path: Go deep first, then build outward. It worked particularly well in agency environments where account managers needed enough breadth to have intelligent conversations across channels without needing to own them all.

The problem is that AI has quietly made T-shaped the new floor, not the ceiling. The State of PPC 2026 report, based on 1,306 responses, suggests that the skills now expected of a competent PPC manager include data analysis, first-party data activation, creative testing strategy, attribution modeling, prompt engineering, and scripting. That is not a job description for a specialist. It is the broad knowledge layer of a T-shaped practitioner, repackaged as the baseline requirement.

When the broad layer of your T becomes everyone’s minimum viable requirement, the T itself stops being a differentiator. What differentiates you now is what sits on top of it.

There is also a structural issue that the T-shaped model was never designed to address. A single deep specialism creates a single point of failure. If your specialism is automated, commoditised, or simply stops being valued by clients, you are exposed. Practitioners who built their identity around a single skill have already felt this. The M-shaped model spreads that risk across multiple pillars without sacrificing depth.

What M-Shaped Actually Means In PPC

M-shaped is not a new term, but it has barely been applied to paid media specifically. In talent and HR circles, it describes a senior professional with multiple areas of genuine depth connected by a wide base of contextual knowledge. Think of the shape literally: two or three peaks, not one, all sitting on the same broad foundation.

In a PPC context, the broad foundation could cover seven domains. Not mastery of each, but enough fluency to be credible, to ask the right questions, and to connect dots across them:

The broad knowledge layer (the base of the M), and what fluency looks like in practice:

  • Google Ads and paid search fundamentals: Understanding platform mechanics, bid strategy, and campaign architecture at a working level.
  • Creative strategy: Briefing creative from a performance hypothesis, not an aesthetic preference.
  • Data and analytics fundamentals: Enough to interpret a dataset, build a basic model in Google Sheets or Looker Studio, and know when the numbers you are looking at are telling you something real versus something misleading.
  • Audience and first-party data: Knowing what signals matter and how first-party data integrates.
  • Business fundamentals: Reading a P&L, understanding margin, talking to a CFO.
  • Reporting and data visualisation: Turning raw data into a decision, not just a dashboard.
  • CRO basics: Enough to understand where paid traffic lands and why conversion rate affects the economics of every campaign you run.

On top of that base, the M-shaped PPC professional has two or three peaks. These are not sub-specializations within PPC. They are complementary disciplines that sit alongside it. The difference matters. Going deeper on Smart Bidding or Performance Max is valuable, but it is still PPC. Building genuine expertise in data engineering, CRO, SEO, business consulting, or marketing attribution is something different. It takes you into rooms and conversations that pure PPC expertise does not open. That is what the second and third peaks are for.

My own peaks are measurement and attribution strategy, AI-driven automation and scripting, and high-value commercial consulting. Importantly, these are not just deeper layers within PPC. They are distinct disciplines in their own right, each requiring a different knowledge base and opening access to different conversations. Attribution sits at the intersection of PPC and broader data strategy. Automation and scripting sit at the intersection of PPC and engineering. Consulting sits at the intersection of all of it and commercial strategy. That is the point. The peaks of an M-shaped profile should take you somewhere your PPC foundation alone cannot reach.

The specific peaks will differ for every practitioner. What matters is that they are genuinely deep, that they are visible, and that they are connected to each other and to the broad base in a way that makes sense commercially.

A sample M-shaped skillset could look like this:

Image from author, March 2026

Why M-Shaped Is Where The Premium Compensation Actually Lives

The salary data backs this up in a way that is hard to ignore. Duane Brown’s PPC Salary Survey 2026 shows that U.S. freelancers with 10 to 15 years of experience earn a median of $202,895, compared to $123,545 for agency practitioners at the same experience level. That is a gap of nearly $80,000 for the same years on the clock.

That premium is not explained by experience alone. It is explained by the ability to operate across disciplines. The practitioners earning at that level are not running campaigns for retainer fees. They are being engaged as experts who can bridge PPC with adjacent high-value problems: a consultant who understands both automation and business strategy, a specialist who can speak to attribution in a language the CFO recognises, a practitioner who can connect first-party data infrastructure to paid media outcomes. The peaks make that possible. The base alone does not.

The in-house data tells a similar story. The same survey shows a median of $170,000 for in-house practitioners with six to nine years of experience, against $90,000 for their agency counterparts at the same stage. That $80,000 gap reflects something structural: in-house senior roles, particularly growth-oriented ones, tend to be built around practitioners who own multiple critical functions rather than managing a portfolio of client accounts. They are hired for their peaks, not their base.

Agencies have to spread expertise across too many clients to let anyone go truly deep. In-house is where M-shaped profiles find the room to build.

This is worth sitting with if you work in an agency. Agency environments are excellent for building a range. You see more campaigns, more industries, more budget levels in two years at a good agency than you would in five years in-house. But agencies have a structural ceiling on depth: there are too many clients, too many accounts, too much context-switching for any one practitioner to genuinely own a problem from end to end. The practitioners who break through that ceiling are the ones who build their peaks outside the day job, through side projects, consulting work, speaking, writing, and building tools, and use the agency as the base, not the destination.

The Counterargument Worth Addressing

The obvious pushback to all of this is that M-shaped sounds good in theory but is unrealistic in practice. Most practitioners do not have the time or the organizational support to develop multiple genuine areas of deep expertise while also managing a full workload. And they are right that it cannot happen overnight.

But I think this objection confuses building M-shaped with being M-shaped. You do not arrive at M-shaped by trying to become an expert in three things simultaneously. You arrive there by going deep in one area first, then, once that pillar is solid enough to be commercially useful, identifying a second area where your first pillar gives you a natural edge. Measurement and attribution, for example, becomes a much more tractable second pillar once you already understand automation. If you know how Performance Max actually allocates budget, what signals Smart Bidding consumes, and where platform reporting diverges from reality, you are not approaching attribution as an abstract measurement problem. You are solving a specific one: how do you build a framework that accounts for what you already know the platform is doing wrong? That prior knowledge makes you faster, more credible, and harder to replace than someone who learned attribution in isolation.

The progression is not linear, and it is not fast. But the practitioners commanding $150,000 to $200,000 in this industry did not get there by deepening a single specialism forever. They got there by building a second peak, and then finding a way to connect the two.

What This Means For Where You Invest Next

If the argument holds that T-shaped is the new floor and M-shaped is where the premium lives, then the practical question is how to identify which second or third peak to build.

My honest advice is to start from your first peak and ask what adjacent problems your clients or employers consistently struggle with that you are currently not equipped to solve. If your peak is campaign automation, the adjacent problem is probably measurement: clients who have great automation in place but no reliable way to attribute outcomes to it. If your peak is creative performance, the adjacent problem is probably first-party data and audience strategy: clients who are producing great creative but targeting it at the wrong signals.

The peaks that compound best are the ones that are genuinely complementary, where depth in one makes you better at the other and more valuable to the businesses you work with. That is what separates M-shaped from simply having two T-shapes that happen to coexist in the same person.

The State of PPC 2026 report is unambiguous on the wider context: the performance gap between sophisticated advertisers and the average is wider than it has ever been. Platforms are not becoming more transparent, privacy constraints are not loosening, and competition is not decreasing. In that environment, the practitioners who will win are not the ones who are good at everything. They are the ones who are indispensable at two or three things that matter deeply to the businesses they serve.

T-shaped got a lot of us to where we are. M-shaped is what gets us to where the market is heading, and to a point where your career becomes genuinely difficult to commoditise or replace.

One last thing worth saying clearly: Do not be discouraged by this. M-shaped is not a certification you earn or a checklist you complete in a training sprint. It is the professional identity you build over a career.

The practitioners I know who have reached it did not set out to become M-shaped. They went deep on one thing, got good enough that it opened a door to something adjacent, walked through it, and repeated the process. That takes years, sometimes a decade or more. The fact that it takes that long is precisely why it is worth building. Anything that can be acquired in two or three years can be acquired by everyone. What you are working toward is something that cannot.

Featured Image: Roman Samborskyi/Shutterstock

ChatGPT Ads: New Acquisition Channel Or Just Another Brand Tax? via @sejournal, @brookeosmundson

A lot of PPC managers are going to get asked about ChatGPT Ads over the next few months.

That was probably inevitable the moment OpenAI moved beyond testing ads and started building a real monetization story around them. The initial pilot was easy enough for most advertisers to ignore. It was invite-only, expensive, and limited enough that it felt more like a premium media test than something the average paid media team needed to factor into a media plan.

It’s going to be harder for PPC pros to ignore with the newest announcement from OpenAI.

OpenAI is reportedly preparing to launch self-serve advertiser capabilities in April while also expanding its ads pilot into additional countries. That does not automatically make ChatGPT Ads a serious channel for every advertiser. It does, however, make this the first point where more paid media teams may actually have to form a view on it.

And that view should probably be more skeptical than enthusiastic.

Because while the headlines around ChatGPT Ads are easy to frame as momentum, that is not the same thing as proving this is already a channel worth real budget.

For a lot of advertisers, the more useful question is not whether OpenAI can sell ads. It clearly can. The better question is whether this becomes a meaningful new acquisition channel or just another place brands feel pressure to pay for visibility before the economics are fully there.

That is the part worth taking seriously.

What OpenAI’s First Ads Pilot Told Us

The first version of ChatGPT Ads was never built for broad advertiser adoption.

OpenAI said in January that it would begin testing ads in the U.S. for logged-in adult users on Free and Go plans, while keeping Plus, Pro, Business, Enterprise, and Education ad-free. It also made a point of saying ads would not influence answers, would remain clearly separated from responses, and would not involve selling user conversations to advertisers.

That setup was important, because OpenAI was clearly trying to introduce monetization without damaging trust in the product. In practical terms, though, it also meant the pilot looked much closer to a controlled brand environment than a normal PPC channel.

The early economics reinforced that. Reuters reported in March that Criteo had been pitching advertiser commitments in the $50,000 to $100,000 range as OpenAI expanded the U.S. pilot, while other early reporting around the first wave of access pointed to premium CPMs and high barriers to entry.

That is not how platforms behave when they are trying to onboard the average mid-market advertiser. That is how they behave when they are trying to keep the test small, high-value, and manageable.

Some advertisers reported clickthrough rates on ChatGPT ads as low as 0.91%, compared to an average benchmark of 6.4% on Google Search. That is a metric marketers will want to watch closely as they work out how ChatGPT fits into their marketing strategy and set realistic expectations.

The context of those details matters, because some of the current reaction to ChatGPT Ads skips too quickly past what the pilot actually was. It was not broad proof of market fit.

At the same time, it would be too dismissive to treat the pilot as nothing more than a PR-friendly experiment.

OpenAI has a massive user base, a product people are already using in research and discovery behaviors, and enough advertiser demand to justify moving beyond the first phase. That does not prove long-term channel value, but it does suggest there is more here than novelty.

What About the Reported $100 Million Annualized Revenue From The Pilot?

The most repeated number in the current conversation is Reuters’ report that OpenAI’s U.S. ads pilot exceeded $100 million in annualized revenue within six weeks. That is a strong headline, and on its face, it suggests there is real advertiser demand. Reuters also reported that the pilot has expanded to more than 600 advertisers, with nearly 80% of small and medium-sized businesses signaling interest.

For a limited pilot, that seems to be a meaningful revenue pace. Even allowing for premium pricing and controlled access, it tells you this is not a fringe experiment with a handful of novelty buyers. Advertisers are interested, and OpenAI has clearly found enough demand to justify building this out further.

It also suggests there may be real commercial value in conversational inventory if the platform can maintain trust while expanding scale.

But let’s take a deeper look at what the claim of annualized revenue actually means.

What Does Annualized Revenue Mean?

“Annualized revenue” is not the same thing as saying OpenAI booked $100 million in actual revenue in six weeks. It means the current pace of revenue, if sustained over a year, would exceed that number.

That is still notable, especially for a limited pilot. But it is also one of the easiest ways to make an early-stage business line sound bigger and more mature than it may actually be.
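The arithmetic behind the distinction is simple enough to sketch. The $100 million annualized figure comes from the reporting above; the six-week revenue it implies is a back-of-envelope illustration, not a number OpenAI has disclosed.

```python
# Back-of-envelope arithmetic for "annualized revenue" claims. The $100M
# annualized figure is from the reporting above; the implied six-week
# revenue is an illustration, not a disclosed number.

WEEKS_PER_YEAR = 52

def annualize(period_revenue: float, weeks_observed: float) -> float:
    """Project a short window's revenue pace over a full year."""
    return period_revenue * (WEEKS_PER_YEAR / weeks_observed)

def implied_period_revenue(annualized: float, weeks_observed: float) -> float:
    """Revenue actually booked in the window behind an annualized figure."""
    return annualized * (weeks_observed / WEEKS_PER_YEAR)

# A $100M annualized run rate after six weeks implies roughly $11.5M
# of revenue actually booked in that window.
print(f"${implied_period_revenue(100_000_000, 6):,.0f}")
```

In other words, the headline number is a pace, not a total, which is exactly why it can flatter an early-stage business line.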

There are a few reasons to be careful about what it does and does not prove.

For one, premium pilot economics can make early revenue look healthier than a scaled platform would be able to sustain. If access is limited, inventory is scarce, and pricing is high, you can build a very attractive short-term revenue story without proving that the platform is broadly investable for normal advertisers.

Second, Reuters reported that while about 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily. That gives OpenAI room to increase monetization, but it also means the current revenue run rate is still being generated in a fairly controlled environment.

Third, the $100 million figure tells us very little about advertiser outcomes. It tells us advertisers are willing to buy in.

It does not tell us yet whether those advertisers are seeing meaningful incremental conversions, efficient customer acquisition, or strong downstream value relative to other channels.

So, while the revenue number is worth paying attention to, it shouldn’t be treated as proof that ChatGPT Ads are already a mature or “must-test” channel for most advertisers.

How Will The Self-Serve Ads Platform Change The Conversation?

In its newest development, OpenAI is reportedly preparing to open self-serve advertiser access in April.

That changes the conversation because self-serve is what turns a tightly controlled pilot into something more PPC managers may be expected to evaluate, budget for, or at least have an opinion on. Reuters also reported that OpenAI plans to expand the pilot beyond the U.S. into Canada, Australia, and New Zealand, which further signals that this is moving out of “contained experiment” territory.

A premium pilot mostly tells you whether a company can sell scarce inventory to selected advertisers. A self-serve platform is the first stage where advertisers can start evaluating whether the product behaves like a usable media channel at all.

That’s where the real learning begins again.

There’s a legitimate case for why some advertisers will want to pay close attention. If ChatGPT continues to become a place where people compare products, explore options, and work through buying decisions, then ad placements in that environment could eventually matter in a way that does not map cleanly to either search or paid social.

That possibility is real, it just has not been fully proven yet.

Why ChatGPT Ads Could Become A Meaningful Channel

If ChatGPT Ads are going to matter, the case for why is not hard to understand.

People are already using AI tools for research, planning, troubleshooting, product comparisons, and early-stage decision-making. That behavior is commercially important because it sits in a part of the journey that many advertisers care about but do not always capture especially well.

  • Search often captures explicit demand.
  • Paid social often creates or interrupts demand.
  • ChatGPT (or other AI platforms down the road) may end up sitting somewhere in-between.

A user in ChatGPT is often not just typing a keyword. They are explaining a situation, asking for options, and narrowing a decision. That creates a different kind of commercial context.

In theory, that should be valuable to advertisers, especially in categories where buyers need more information, more confidence, or more help evaluating tradeoffs before they convert.

If OpenAI can build an ad product that fits that behavior without damaging trust, there is a reasonable case that this becomes a genuinely useful environment rather than just another place to buy impressions.

Could The Hype Of ChatGPT Ads Be Overrated?

AI platforms have gotten a lot of hype over the past few years, and the space can feel like a race to the top.

Now that ads are being placed into ChatGPT, the market anticipation may get ahead of what the platform has actually proven.

That tends to happen whenever a platform has three things at once:

  • Cultural momentum
  • Advertiser curiosity
  • Enough scale to make marketers nervous about being absent

That combination can create pressure to show up before the underlying economics are fully understood.

And that is where the “brand tax” concern comes in.

A brand tax shows up when advertisers feel compelled to buy visibility because the platform is becoming too important to ignore, even if the measurement is still fuzzy and the performance case is still incomplete.

That does not mean the spend is automatically wasteful. But the motivation behind the spend can shift from strategic fit to defensive presence if it is not clearly thought through.

This is why I think the right posture for most advertisers is curiosity, not urgency.

What Types Of Advertisers Could Benefit First?

If ChatGPT Ads are going to work well, they are most likely to work first for businesses that already benefit from longer, more thoughtful buying journeys.

That includes categories where users are naturally looking for help evaluating options, understanding tradeoffs, or narrowing a set of choices.

Think along the lines of:

  • B2B software
  • Education
  • Travel
  • Home improvement
  • Higher-consideration e-commerce categories (like furniture)
  • Services where buyers need more confidence before converting

These are the kinds of businesses where the user journey is not always driven by a clean keyword and an immediate click. Often, the person is still trying to figure out what they need, what the differences are, or what is worth paying for.

That is where a conversational interface could eventually become commercially valuable.

If your ideal buyer tends to ask detailed, open-ended questions before making a decision, ChatGPT is a much more natural fit than it would be for a business relying on urgency, impulse, or low-friction conversion volume.

Why Many Mid-Market Advertisers Should Probably Wait

This is the part that will probably matter most to a lot of teams.

Most mid-market advertisers do not need to rush into ChatGPT Ads the moment self-serve opens.

That is not because the platform is irrelevant, but because most mid-market advertisers still have far more obvious growth opportunities in channels they already understand better.

If your search account structure is still messy, your paid social creative testing is inconsistent, your landing pages are underperforming, or your measurement setup is still weak, ChatGPT Ads are probably not the next smartest dollar.

That is especially true for advertisers that depend on:

  • Short purchase windows
  • Lower-ticket conversion volume
  • Aggressive CPA efficiency
  • Highly predictable scale

Those businesses may eventually find a role for ChatGPT Ads. But in the near term, it is hard to make the case that they should prioritize it over more proven opportunities.

That is where a lot of marketers get into trouble with new platforms. They confuse early visibility with early fit.

And those are not the same thing.

What Should PPC Teams Do Right Now?

For most PPC managers, the smartest move is not to force a test. It is to build a more useful framework for evaluating whether ChatGPT Ads deserve one later.

That starts with a few practical questions.

First, is your category one where conversational research behavior is likely to influence purchase decisions in a meaningful way?

Second, if you were to test this, what would success actually look like? Not in vague terms, but in measurable ones.

Would you be looking for qualified traffic? Stronger engagement? Assisted conversion value? Branded search lift? Lead quality? Or net-new customer acquisition?

If you cannot answer that before testing, then the test is probably not ready.

Third, do you have the measurement maturity to evaluate a channel that may sit somewhere between search, content discovery, and assisted decision support?

Because that is likely where ChatGPT Ads will live if they work at all.

A lot of teams will either under-credit this type of channel or over-excuse it. Neither is especially useful.

What Should PPC Managers Take From This?

ChatGPT Ads are worth paying attention to, even if your brand isn’t ready to test them yet.

It is unclear whether they will become a durable acquisition channel, a useful upper- to mid-funnel complement, or simply another place where advertisers feel pressure to buy visibility before the performance case is fully established.

Right now, there is evidence for more than one possible outcome.

There is enough here to justify serious interest. OpenAI has the user scale, advertiser demand, and product usage patterns to make this more than a passing media story.

There is also enough uncertainty here to justify restraint. The platform still has a lot to prove around advertiser outcomes, economics, and where it truly fits in the paid media mix.

That is why the smartest response is probably not to rush in or write it off.

Watch the rollout carefully and pay attention to where category-specific fit starts to emerge. Then, be honest about whether your business has a reason to test beyond the fact that the platform is new.

That is a much better standard than hype, and a much better one than reflexive skepticism too.

Featured Image: Saeedreza/Shutterstock

How To Identify And Solve Click Fraud In Paid Media – Ask A PPC via @sejournal, @navahf

This week’s Ask a PPC addresses one of advertisers’ most frustrating fears:

“I suspect my account has click fraud. What checks can I do to confirm this, and what can I do about it?”

Click fraud is easily one of the most frustrating pitfalls in managing a paid media account. Whether it shows up as bots on low‑quality apps, suspicious display placements, or highly sophisticated schemes that mimic real search behavior, click fraud is real.

That said, not every odd click pattern, low cost-per-click, or disappointing conversion rate is the result of fraud. In many cases, what looks like click fraud is actually the outcome of campaign settings, targeting choices, or creative mismatches.

In this article, we will cover:

  • How to distinguish click fraud from human‑driven performance issues.
  • What ad platforms proactively do to protect advertisers.
  • What you can do when click fraud is genuinely present.

A quick note on perspective: I am a Microsoft Ads employee. This article is platform‑agnostic, and the guidance shared here applies broadly across paid media platforms.

1. Distinguishing Click Fraud From Human Error

Before assuming malicious intent, it is critical to audit whether your own campaign setup could be creating performance patterns that resemble click fraud.

There are several common scenarios where human behavior can look suspicious at first glance.

Start With Where Your Budget Is Going

The first question to ask is simple: Is the majority of my spend going to placements I intentionally targeted?

If the answer is no, that is your first red flag.

  • Review placement and domain reports carefully.
  • Identify whether spend is flowing to sites, apps, or partner placements you do not recognize.
  • If you see unfamiliar placements, open those URLs on a device or browser where you are comfortable evaluating risk.

If a placement feels spammy, low‑quality, or clearly misaligned with your brand, exclude it immediately. If the placement appears legitimate but you cannot realistically see how a user would engage with the ad, that may indicate fraudulent behavior.

In either case, exclusion is the right move, followed by a conversation with platform support. Ad platforms have a vested interest in removing low‑quality or fraudulent inventory.

Review Location Targeting Settings Closely

Location targeting is one of the most common sources of perceived click fraud.

When advertisers enable “People who show interest in your target locations,” they are effectively allowing global eligibility. This can lead to traffic from regions with higher bot activity or from users who appear suspicious simply because they are unlikely to convert.

If you choose to use “showing interest in,” consider adding an additional layer of geographic exclusions to ensure your ads only serve where you truly intend.

Evaluate Creative For Accidental Click Risk

Ad creative can also create misleading signals.

  • Display ads with prominent buttons can invite accidental clicks.
  • Creative that does not clearly communicate value may generate curiosity clicks without intent.
  • Small screens increase the risk of fat‑finger clicks.

In these cases, the issue is not fraud. It is design. Adjusting creative can often resolve the problem.

2. What Ad Platforms Proactively Do To Prevent Click Fraud

While I cannot speak for every ad platform, there are shared principles across the industry.

Platforms Are Incentivized To Protect Inventory Quality

If inventory performs poorly, advertisers stop investing. That creates a strong incentive for platforms to maintain secure, valuable placements.

One example from Microsoft Ads is a policy requiring Search Partner publishers to implement Microsoft Clarity. This allows deeper insight into user behavior and helps identify invalid or fraudulent activity before advertisers are exposed to it.

Other platforms have similar verification and monitoring systems in place, even if the tools differ.

Advertisers Are Not Charged For Invalid Clicks

Another core principle is that advertisers should not pay for fraudulent activity.

Most platforms continuously review clicks. When invalid or fraudulent clicks are detected, those costs are credited back to the advertiser. These credits may not appear immediately, as click validation takes time, but they are visible in platform reporting.

If you believe a significant spike in fraudulent clicks was missed, you should contact support. Platforms expect and encourage those conversations.

3. What You Can Do When Click Fraud Is Real

Once you have ruled out configuration and creative issues, and click fraud still appears present, there are concrete actions you can take.

Consider Click Fraud Mitigation Tools

If fraudulent clicks represent 40% or more of your traffic, I would recommend investing in a third‑party solution.

These tools typically focus on:

  • IP‑based blocking for simpler threats.
  • Behavioral pattern detection for advanced schemes.

Be aware that consent requirements can complicate implementation in certain regions, particularly where third‑party cookie consent is required. In markets with fewer restrictions, these tools are easier to deploy.

Use AI And Automation Where Possible

Some advertisers choose to build their own systems using AI to identify patterns and automatically exclude malicious IPs. This can be effective when done carefully and within privacy and consent guidelines.

Set Expectations Around Risky Placements And Markets

Certain placements and regions carry higher click fraud risk. If you choose to invest in them, transparency matters.

A practical approach is to communicate a 10% variance buffer to clients or stakeholders. This acknowledges that temporary spikes may occur before credits are issued.

You should not ultimately pay for click fraud, but there may be short periods where spend looks inflated before reconciliation. Monitoring billing closely matters so you can catch any overcharges during those windows.

Remember That Fraud Is Not Limited To Clicks

Some of the most damaging fraud never happens at the click level.

Account takeovers, My Client Center (MCC) compromises, and phishing attempts are real threats. Protect yourself by:

  • Only opening emails from trusted senders.
  • Verifying suspicious messages with peers or platform support.
  • Avoiding login links unless you are certain of their legitimacy.

A well‑run account can unravel quickly if access is compromised.

Final Thoughts

Click fraud is frustrating, but it is manageable. The key is separating perception from reality, understanding how platforms protect advertisers, and knowing when to take action.

If you found this helpful, I would love to hear from you. And as always, stay tuned for next month’s Ask the PPC.

More Resources:


Featured Image: Paulo Bobita/Search Engine Journal

Building An In-House PPC Team: Why A Hybrid Model May Protect Your Ad Spend via @sejournal, @LisaRocksSEM

AI and automation in ad platforms are well established. Google Ads and Microsoft Advertising are heavily invested in automated features, and the technical barrier to entry has never been lower. However, that accessibility comes with a tradeoff.

Two common challenges surface when bringing a PPC team in-house:

  1. Campaigns are easier to launch than they are to explain and analyze.
  2. Machine-driven decisions risk going unquestioned without an outside perspective.

Those challenges point to something CMOs probably already know: Automation doesn’t eliminate the need for human judgment. It raises the requirements for it. Even with strong AI tools in place, experienced PPC practitioners are still writing strategy, creating ad copy, and manually updating targeting.

This article covers two structural paths for managing that reality.

  1. All in-house means your internal team manages PPC end-to-end, with no agency or external consultant involved.
  2. Hybrid means your internal team handles day-to-day execution and internal oversight while an external specialist or consultant provides strategy, auditing, and a second set of eyes.

Both models can work. The goal is to match machine automation with human accountability and independent performance checks. Without that structure, an in-house team can end up in a bubble where the ad platform’s suggestions dictate all of the optimization decisions.

Is Your Organization Ready? What To Assess Before You Hire

Before you post a job description, determine whether your company is ready to manage the technical work that comes with modern PPC search ads. Hiring an internal team is a long-term commitment.

The Shift In Daily Tasks

The role of the search marketer is shifting from manual campaign creation to evaluating and guiding automated systems. The human role is increasingly about checking what the AI creates and stepping in to do the work the ad platform can’t do well on its own.

That last part matters far more than most job descriptions reflect. In my experience, AI-generated ad copy is often not platform-ready, and strategy still requires a human who understands the brand, the profit model, and the customer. If your candidates are only talking about managing manual bids and features, they may not be ready for the current landscape. You need people who can navigate automated systems and know when to override them.

Input And Data Quality

Because AI success depends on signal strength, an in-house PPC team’s value is directly tied to their ability to connect and maintain clean data. Ad platforms rely on:

  • Conversion tracking.
  • CRM integration.
  • Audience modeling.
  • Bidding inputs.

Tools such as Google Ads Data Manager (connecting external products inside Google Ads) and offline conversion uploads mean managing data should be a core responsibility of in-house PPC specialists.

Poorly configured conversion tracking or incomplete data signals can lead automated bidding to optimize toward low-value actions. You can’t expect a machine to give you good results if you’re feeding it bad information.
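
As a minimal sketch of the data hygiene this implies (the field names are hypothetical, not Google's actual upload schema), conversion rows can be screened before any upload so automated bidding never learns from broken records:

```python
def validate_conversion_rows(rows, required=("click_id", "conversion_time", "value")):
    """Split conversion rows into clean and rejected sets before upload.

    Rows missing a required field, or carrying a non-positive value,
    would teach automated bidding the wrong lesson, so they are dropped
    and returned with the reason for rejection.
    """
    clean, rejected = [], []
    for row in rows:
        missing = [f for f in required if not row.get(f)]
        if missing or row.get("value", 0) <= 0:
            rejected.append((row, missing or ["non-positive value"]))
        else:
            clean.append(row)
    return clean, rejected

rows = [
    {"click_id": "abc", "conversion_time": "2025-01-01 12:00", "value": 49.0},
    {"click_id": "", "conversion_time": "2025-01-01 13:00", "value": 20.0},
]
clean, rejected = validate_conversion_rows(rows)
print(len(clean), len(rejected))  # 1 1
```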

If You Are Hiring, Look For These Skills

If you’ve decided to build fully in-house, hiring criteria should shift toward business data management and the ability to work alongside AI without taking every single suggestion.

1. Understanding Business Margins

Most PPC managers haven’t had to think in depth about COGS (Cost of Goods Sold) or return rates, but that’s changing.

The bar is rising for in-house hires. A team that can connect ad spend to net profit, not just revenue, is far better positioned to make smart decisions as automation takes over the mechanical work.
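
To make the revenue-versus-profit distinction concrete (all figures are hypothetical), compare revenue-based ROAS with a profit-based view of the same spend:

```python
def profit_on_ad_spend(revenue, cogs, returns_value, ad_spend):
    """Contrast revenue-based ROAS with a profit-based view (POAS)."""
    profit = revenue - cogs - returns_value - ad_spend
    roas = revenue / ad_spend
    poas = profit / ad_spend
    return roas, poas

# $50k revenue at 60% COGS with $3k in returns on $10k of ad spend:
# a 5.0 ROAS looks healthy, but only $0.70 of profit per ad dollar remains.
roas, poas = profit_on_ad_spend(50_000, 30_000, 3_000, 10_000)
print(round(roas, 2), round(poas, 2))  # 5.0 0.7
```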

2. Owning The Post-Click Experience

The PPC team must care about what happens after the user lands on the site. Creative quality and landing page performance are directly tied to conversions and what the algorithm learns over time.

AI-driven traffic efficiency can be thrown off by a poor landing page experience. Your internal hires should have a working knowledge of landing page testing and website user experience.

3. Ad Copy And Strategic Judgment

AI can generate ad copy, but it can create variations that are missing marketing strategy or brand-ready messaging. Your team needs to evaluate, rewrite, and at times reject what the ad platform produces.

The same applies to strategy. Automated systems optimize toward the goals you set, but setting the right goals and interpreting performance still require a skilled human. Hire for that judgment, not just ad platform knowledge.

4. Technical Data Strategy

Your team needs to know how to build and maintain first-party data connections, such as CRM data and customer match uploads.

Your team’s job is to ensure the right signals are flowing to the right campaigns at the right time. Technical data competency should be a core requirement for the job.

Why A Hybrid Model May Work Better

Even when hiring and data processes are going well, blind spots can happen inside fully internal teams. Three issues can show up:

  • Brand blindness from working primarily inside a single account.
  • Lack of independent auditing on spend and profit.
  • Difficulty pushing back on ad platform pressure.

An external perspective adds accountability that internal teams can struggle to provide for themselves. In an environment where so many features are automated, that accountability matters even more, because teams rarely dig deeply into the automations on their own.

1. The Problem With Brand Blindness

Internal teams are focused on one brand. That focus builds deep expertise, but it can limit perspective. For example, when performance changes, it’s difficult to determine whether the change reflects a platform-wide trend, an industry shift, or a campaign-specific issue.

Working across many industries gives specialist consultants a reference point that internal teams may not have. They can tell you if a performance drop is happening to everyone in the industry or just to you.

2. The Need For Independent Auditing

An external partner acts as an independent auditor for your search spend. They can help confirm that internal goals line up with actual business profit rather than ad platform metrics.

It’s easy for internal teams to grow comfortable and focus on vanity metrics like ROAS (Return on Ad Spend). An objective third party can help show you exactly how much actual profit your search spend is generating.

3. Managing Ad Platform Pressure

Internal teams are the primary target for PPC ad platform representatives. These reps frequently push recommendations, such as auto-applied suggestions and Display Network serving, that eat up budgets and prioritize the platform’s revenue over your business.

Independent experts are less likely to follow these suggestions without questioning them. They provide the pushback needed to ensure spend is justified by performance, not the platform’s optimization score.

Structuring The Partnership For Success

Consider a division of labor that draws on internal brand knowledge and external expertise. This hybrid approach offers the most protection for your ad spend.

What The In-House Team Should Own

  • Data Ownership: Managing the privacy and quality of your customer signals.
  • Creative Guidance: Ensuring brand voice stays consistent across AI-generated ads.
  • Ad Copy and Strategy: Writing, evaluating, and refining what the ad platform produces.
  • Sales Coordination: Connecting PPC spend with internal inventory levels and sales cycles.

What The External Specialist Should Own

  • Strategic Roadmap: Providing a long-term view of where the search industry is heading.
  • Advanced Analysis: Proving the true value of your spend through profit-based measurement.
  • Objective Auditing: Serving as an independent check against ad platform recommendations.

Successful PPC teams in an AI-first search environment won’t be worried about who automated the fastest. They’ll be more thoughtful and strategic about defining what the machine does and what a human approves.

Matching Structure To Accountability

The decision to go fully in-house or hybrid isn’t permanent. What matters is that your structure matches the level of accountability your ad spend requires.

If your team has clean data, strong hiring, and the ability to question what the ad platform suggests, a fully in-house model can work. But if no one is challenging the machine’s recommendations, you have a gap that’s hard to fix from the inside.

A hybrid model doesn’t mean your internal team isn’t capable. It means you’re building in a check that protects your budget from blind spots.

Whatever you choose, the people managing your PPC need to understand your business at the profit level, not just the platform level. Automation handles the mechanics. Your team handles the judgment.

More Resources:


Featured Image: ImageFlow/Shutterstock

Google Adds Scenario Planner, Performance Max Updates, And Veo – PPC Pulse via @sejournal, @brookeosmundson

Welcome to this week’s PPC Pulse.

This week’s updates focus on Performance Max visibility improvements, new budget planning tools in Google Analytics, and generative video now built directly into Google Ads.

Here’s what was announced this week and why it matters for your campaigns.

Google Adds More Visibility and Control To Performance Max

Google rolled out several updates to Performance Max aimed at two ongoing gaps: control and reporting.

Advertisers can now exclude first-party customer lists. This gives teams running acquisition-focused campaigns a cleaner way to avoid spending on existing users.

On the reporting side, Google added:

  • Budget report
  • Expanded audience insights, including demographic breakdowns
  • Placement reporting segmented by network

Why This Matters For Advertisers

Audience exclusions help reduce overlap between prospecting and retention, assuming your customer lists are accurate. The reporting updates are more practical. Advertisers get better visibility into spend pacing, who campaigns are reaching, and where ads are showing.

For teams already using Performance Max, this improves day-to-day oversight. It does not turn it into a fully controllable campaign type.

What PPC Professionals Are Saying

Anthony Simonetti is “very excited for more insight” into PMax campaigns, while Optifeed voiced its support for the update: “Love seeing PMax get more transparent!”

Google Analytics Introduces Scenario Planner and Projections

Google Analytics launched two new tools as part of its cross-channel budgeting feature:

  • Scenario Planner for building forward-looking budget models
  • Projections for tracking whether live campaigns are pacing toward goals

Both tools use historical data to estimate conversions, revenue, and spend across channels, including non-Google platforms if cost data is imported.

Access is currently limited while the feature is in beta. Advertisers need at least one year of data across multiple channels, along with a few other eligibility requirements.
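
Google's tooling models conversions and revenue from historical data, but the underlying pacing idea can be illustrated with a naive linear projection (the tolerance and figures here are hypothetical):

```python
def projected_spend(spend_to_date, days_elapsed, days_total):
    """Naive linear projection of end-of-period spend."""
    return spend_to_date / days_elapsed * days_total

def pacing_status(spend_to_date, days_elapsed, days_total, budget, tolerance=0.05):
    """Classify a live campaign as over-pacing, under-pacing, or on track."""
    proj = projected_spend(spend_to_date, days_elapsed, days_total)
    if proj > budget * (1 + tolerance):
        return "over-pacing"
    if proj < budget * (1 - tolerance):
        return "under-pacing"
    return "on track"

# $4,500 spent after 15 of 30 days against a $10,000 budget
# projects to $9,000, i.e., under-pacing by more than 5%.
print(pacing_status(4_500, 15, 30, 10_000))  # under-pacing
```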

Why This Matters For Advertisers

Planning and performance have traditionally lived in separate places. These tools bring them closer together, especially for marketers who manage more than just Google Ads.

Advertisers can now model budgets and monitor pacing in the same platform used for reporting. That can help teams managing multiple channels make faster adjustments during a campaign.

The tradeoff is reliability. Outputs depend entirely on data quality and historical consistency. For many accounts, that will limit how actionable these projections actually are.

Veo Brings AI Video Creation Into Google Ads

Google introduced Veo, its generative video model, inside Asset Studio in Google Ads.

Advertisers can start by uploading just three static images and generate short-form videos, then package them into ads for formats like Demand Gen.

From each uploaded image, Veo can generate a video up to 10 seconds long.

Google is positioning this around speed and creative variation, and it can be used in conjunction with the rollout of Nano Banana Pro. The goal is to make it easier to produce multiple video assets without traditional production.

Why This Matters For Advertisers

Creative production has been a bottleneck for many teams, especially for video.

Veo lowers that barrier immensely for brands. Advertisers can generate variations faster and test more creative without additional resources.

The bigger shift is volume. Google continues to push toward having multiple creative variations in-market at all times. This gives advertisers another way to keep up with that expectation, even if the output still needs review and refinement.

What PPC Professionals Are Saying

This got a lot of traction from advertisers, drawing 70 comments and over 340 reposts on its LinkedIn announcement.

André Felizol shared:

The key here will be the brands that could create something different. With AI facilitating the creation of videos based on images, everything will be similar. So, the companies that will invest more in creativity with different and creative approaches to show their products will win in the long run.

Brooke Hess is “looking forward to testing” for her agency’s clients while Thomas Eccel has already dug in and created a live demo test of Veo 3.

Personally, I’m excited to test it out after being introduced to the first version of Veo at the 2025 Google Marketing Live event last year.

Theme of the Week: More Ways To Plan, Steer, And Build

This week’s updates all support a more hands-on role for advertisers.

Google added more steering and reporting inside Performance Max, more planning functionality inside Analytics, and more creative production tools inside Google Ads.

Advertisers are getting more ways to shape performance instead of just reacting to it after the fact.

More Resources:


Featured Image: Djile/Shutterstock; Paulo Bobita/Search Engine Journal