How to Use Google Ads Performance Max Channel Reporting via @sejournal, @brookeosmundson

For years, marketers have asked for better visibility into how individual channels contribute to Performance Max results.

Google has released a tutorial walking advertisers through its new Performance Max channel reporting. This reporting feature offers more transparency into how campaigns perform across Search, YouTube, Display, Gmail, Discover, and Maps.

With this new report, you can now dig deeper into performance by channel and format, making it easier to analyze results and troubleshoot.

Here’s a look at how to find the report and what you can do with it.

Where to Find Channel Performance Reporting

To find and access the channel reporting, head to your Google Ads account.

From there, navigate to: Campaign >> Insights & Reports >> Channel Performance

Image credit: Google, April 2025

Once you’re there, you’ll see these items:

  • A performance summary overview
  • A channel-to-goals visualization
  • A channel distribution table

These items provide more than just a static view of performance. You’re able to click on specific channels to drill down into related reports, like placements on the Google Display Network, or Search Terms from the Search channel.

Exploring the Reports and Visualizations

The channel performance page isn’t just a high-level dashboard. It provides several views and reports that give you more context on how your ads are performing across Google’s network. Here’s a closer look at the most useful areas:

Ad Format Views

Not every ad performs the same across channels, which is why Google lets you break results down by ad format.

For example, you can see how video ads perform on YouTube compared to product ads shown on Search. This helps you spot whether one creative type is pulling more weight and whether you need to adjust your creative mix or budgets to support higher-performing formats.

Product-Driven Insights

If you’re running Shopping or retail campaigns, this section shows how ads tied to product data perform across channels.

You can see Shopping ads on Search as well as dynamic remarketing ads on Display. This gives ecommerce advertisers a clearer picture of how product feeds contribute to results beyond just one channel.

Channel Distribution Table

This table is one of the most detailed reports in the new view. It includes impressions, clicks, interactions, conversions, conversion value, and cost, all broken down by channel.

You can customize the table to highlight the metrics that matter most to your goals, such as ROAS or CPA, and even segment results by ad format (like video versus product ads).

Since the table is downloadable, you can also share it with teams or clients for transparent reporting.

Status Column and Diagnostics

The status column acts as a built-in troubleshooting tool. It surfaces issues or recommendations related to specific channels or formats, such as diagnostic warnings if ads aren’t serving as expected.

By reviewing these, you can quickly identify where performance may be limited and take action to resolve issues before they affect results at scale.

Reviewing Single-Channel vs. Cross-Channel CPA

One important takeaway from Google’s tutorial is that looking at average CPA or ROAS for a single channel doesn’t tell the full story.

Performance Max uses marginal ROI optimization, bidding in real time for the most cost-efficient conversions across all channels.

Since users don’t interact with just one channel, this cross-channel view helps advertisers see the broader picture of how campaigns drive results.

That means when evaluating effectiveness, Google recommends prioritizing your goals and audiences over individual channel performance.

How Advertisers Can Benefit From Performance Max Channel Reporting

The new reporting doesn’t change how Performance Max works behind the scenes, but it does help you:

  • Understand which channels support your goals most effectively
  • Identify areas where specific ad formats or channels may need creative or budget adjustments
  • Communicate results more clearly with stakeholders by showing cross-channel contributions

With Search Partner Network reporting coming in the future, Google is signaling a continued investment in giving advertisers deeper visibility.

Performance Max remains a cross-channel campaign type, but channel reporting is a welcome step toward transparency. By digging into these reports, advertisers can better understand how ads perform across Google properties and make smarter optimization decisions.

Google Adds Guidance On JavaScript Paywalls And SEO via @sejournal, @martinibuster

Google is apparently having trouble identifying paywalled content due to a common way publishers, like news sites, handle it. It’s asking publishers with paywalled content to change the way they block content so as to help Google identify what is and isn’t paywalled.

Search Related JavaScript Problems

Google updated their guidelines with a call for publishers to consider changing how they block users from paywalled content. It’s fairly common for publishers to use a script to block non-paying users with an interstitial although the full content is still there in the code. This may be causing issues for Google in properly identifying paywalled content.

In a recent addition to its search documentation about JavaScript issues related to search, Google wrote:

“If you’re using a JavaScript-based paywall, consider the implementation.

Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed. This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”
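The difference between the two patterns can be sketched in a framework-free way. This is a hypothetical illustration, not Google’s guidance verbatim or any real CMS’s code; the names (`ARTICLES`, `render_discouraged`, `render_recommended`) are invented for the example.

```python
# Hypothetical sketch of the two paywall patterns Google describes.
# All names here are illustrative, not from any real CMS.

ARTICLES = {
    "story-1": {
        "teaser": "First paragraph, free for everyone.",
        "body": "The full paywalled article text.",
    }
}


def render_discouraged(slug: str) -> str:
    """Anti-pattern: the full body is always in the server response,
    and client-side JavaScript merely hides it until the subscription
    check passes. Google says this makes it hard to determine which
    content is actually paywalled."""
    article = ARTICLES[slug]
    return (
        f'<article>{article["teaser"]}'
        f'<div class="paywalled" hidden>{article["body"]}</div>'
        f"</article>"
    )


def render_recommended(slug: str, is_subscriber: bool) -> str:
    """Recommended: the server includes the full body only once the
    subscription status is confirmed; everyone else gets the teaser."""
    article = ARTICLES[slug]
    body = article["body"] if is_subscriber else ""
    return f'<article>{article["teaser"]}{body}</article>'
```

The key difference is that in the recommended version the paywalled text never reaches a non-subscriber’s browser at all, so there is nothing for client-side JavaScript to hide.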

The documentation doesn’t say what problems Google itself is having, but a changelog documenting the change offers more context about why they are asking for this change:

“Adding guidance for JavaScript-based paywalls

What: Added new guidance on JavaScript-based paywall considerations.

Why: To help sites understand challenges with the JavaScript-based paywall design pattern, as it makes it difficult for Google to automatically determine which content is paywalled and which isn’t.”

The changelog makes it clear that the way some publishers use JavaScript for blocking paywalled content is making it difficult for Google to know if the content is or is not paywalled.

The change was an addition to a numbered list of JavaScript problems publishers should be aware of, item number 10 on their “Fix Search-related JavaScript Problems” page.

Featured Image by Shutterstock/Kues

TablePress WordPress Plugin Vulnerability Affects 700,000+ Sites via @sejournal, @martinibuster

A vulnerability in the TablePress WordPress plugin enables attackers to inject malicious scripts that run when someone visits a compromised page. It affects all versions up to and including version 3.2.

TablePress WordPress plugin

The TablePress plugin is used on more than 700,000 websites. It enables users to create and manage tables with interactive features like sorting, pagination, and search.

What Caused The Vulnerability

The problem came from missing input sanitization and output escaping in how the plugin handled the shortcode_debug parameter. These are basic security steps that protect sites from harmful input and unsafe output.

The Wordfence advisory explains:

“The TablePress plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the ‘shortcode_debug’ parameter in all versions up to, and including, 3.2 due to insufficient input sanitization and output escaping.”

Input Sanitization

Input sanitization filters what users type into forms or fields. It blocks harmful input, like malicious scripts. TablePress didn’t fully apply this security step.

Output Escaping

Output escaping is similar, but it works in the opposite direction, filtering what gets output onto the website. Output escaping prevents the website from publishing characters that can be interpreted by browsers as code.

That’s exactly what can happen with TablePress: insufficient input sanitization enables an attacker to upload a script, and insufficient output escaping fails to prevent the website from injecting that malicious script into live pages. Together, these gaps enable stored cross-site scripting (XSS) attacks.

Because both protections were missing, someone with Contributor-level access or higher could upload a script that gets stored and runs whenever the page is visited. The requirement of Contributor-level access or higher mitigates the potential for attack to a certain extent.
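The two missing protections can be illustrated in general terms. This is a generic sketch, not TablePress’s actual code, and the function names are invented for the example.

```python
import html


def sanitize_input(raw: str) -> str:
    # Input sanitization (illustrative): reject obviously dangerous
    # markup before the value is stored. Real plugins typically use
    # allow-lists rather than a single substring check.
    if "<script" in raw.lower():
        raise ValueError("disallowed markup in input")
    return raw


def escape_output(stored: str) -> str:
    # Output escaping: convert characters a browser could interpret
    # as code (<, >, &, quotes) into harmless HTML entities before
    # the stored value is printed into the page.
    return html.escape(stored)
```

With both steps missing, a stored value like `<script>…</script>` is saved verbatim and emitted verbatim, executing in visitors’ browsers, which is precisely a stored XSS.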

Plugin users are advised to update the plugin to version 3.2.1 or higher.

Featured Image by Shutterstock/Nithid

WordPress Ocean Extra Vulnerability Affects Up To 600,000 Sites via @sejournal, @martinibuster

An advisory was issued for the Ocean Extra WordPress plugin, which is susceptible to stored cross-site scripting, enabling attackers to upload malicious scripts that execute when a user visits the affected website.

Ocean Extra WordPress Plugin

The vulnerability affects only the Ocean Extra plugin by oceanwp, a plugin that extends the popular OceanWP WordPress theme. The plugin adds extra features to the OceanWP theme, such as the ability to easily host fonts locally, additional widgets, and expanded navigation menu options.

According to the Wordfence advisory, the vulnerability is due to insufficient input sanitization and output escaping.

Input Sanitization

Input sanitization is the term used to describe the process of filtering what’s input into WordPress, like in a form or any field where a user can input something. The goal is to filter out unexpected kinds of input, like malicious scripts. This is the protection the plugin is said to apply insufficiently.

Output Escaping

Output escaping is kind of like input sanitization but in the other direction, a security process that makes sure that whatever is being output from WordPress is safe. It checks that the output doesn’t have characters that can be interpreted by a browser as code and subsequently executed, such as what is found in a stored cross-site scripting (XSS) exploit. This is the other thing that the Ocean Extra plugin was missing.

Together, the insufficient input sanitization and insufficient output escaping enable attackers to upload a malicious script and have it output on the WordPress site.

Users Urged To Update Plugin

The vulnerability only affects authenticated users with contributor-level privileges or higher, to a certain extent mitigating the threat level of this specific exploit. This vulnerability affects versions up to and including version 2.4.9. Users are advised to update their plugin to the latest version, currently 2.5.0.

Featured Image by Shutterstock/Nithid

Google: AI Max For Search Has No Conversion Minimums via @sejournal, @MattGSouthern

Google states that AI Max for Search can run in low-volume accounts, confirming there’s no minimum conversion recommendation.

However, you must use a conversion-based Smart Bidding strategy for search-term matching to work.

The clarification was provided during Google’s Ads Decoded podcast, where product managers discussed recent launches.

What Google Said

In the “Ads Decoded” podcast episode, Ginny Marvin, Google’s Ads Product Liaison, addressed whether low-volume accounts can use AI Max.

Marvin stated:

“In earlier testing, we’ve seen that AI Max can be effective for accounts of varied sizes… And there’s no minimum conversion recommendation to enable AI Max, but keep in mind that you do need to use a conversion-based smart bidding strategy in order for search term matching to work.”

This smart bidding requirement ensures the system has signals to work with, even if conversion volume is low.

Hear the full response in the video below:

Where Smaller Accounts May See Gains

Google says advertisers “mostly using exact and phrase match keywords tend to see the highest uplift in conversions and conversion value” after enabling AI Max.

Keywordless matching can help smaller advertisers find opportunities without extensive research. AI Max identifies relevant search terms based on landing page content and existing ads.

For local campaigns, advertisers can use simple keywords instead of creating separate ones for each location. AI Max handles the geographic matching.

How AI Max Works In Search

AI Max pulls from more than just landing pages. It also uses ad assets and ad-group keywords to expand coverage and tailor RSA copy.

For English content, it’s capable of generating ad variations within brand guardrails.

Product manager Karen Zang described AI Max as an enhancer to existing work:

“I would view AI Max as an amplifier on the work that you’ve already put in… we’re just leveraging that to customize your ads.”

Product manager Tal Kabas framed AI Max as bringing Performance Max-level technology into Search:

“If you’re using all the best practices with AI Max… then it is PMax technology for Search. We wanted to basically bring that value to advertisers wherever they want to buy.”

Implementation Considerations

Small advertisers considering AI Max should take these preparation steps into account.

First, ensure landing pages are current, as the AI uses them to generate ad variations. Poor or outdated landing page content can negatively impact the output, regardless of account size.

Second, use conversion tracking even if volume is low. While there are no minimums, having any conversion data helps. Smart bidding strategies, such as Target CPA or Target ROAS, must be in place for full functionality.

Third, start with campaigns that use exact and phrase match keywords, as Google’s data shows they benefit the most from AI Max.

Looking Ahead

AI Max is accessible to advertisers of all sizes.

The one-click implementation allows you to test AI Max without restructuring your campaigns. If results don’t meet your expectations, the feature can be disabled.

Google indicated this is the first phase of AI Max development, with more features planned.

Research Shows How To Optimize For Google AIO And ChatGPT via @sejournal, @martinibuster

New research from BrightEdge shows that Google AI Overviews, AI Mode, and ChatGPT recommend different brands nearly 62% of the time. BrightEdge concludes that each AI search platform is interpreting the data in different ways, suggesting different ways of thinking about each AI platform.

Methodology And Results

BrightEdge’s analysis was conducted with its AI Catalyst tool, using tens of thousands of the same queries across ChatGPT, Google AI Overviews (AIO), and Google AI Mode. The research documented a 61.9% overall disagreement rate, with only 33.5% of queries showing the exact same brands in all three AI platforms.

Google AI Overviews averaged 6.02 brand mentions per query, compared to ChatGPT’s 2.37. Commercial intent search queries containing phrases like “buy,” “where,” or “deals” generated brand mentions 65% of the time across all platforms, suggesting that these kinds of high-intent keyword phrases continue to be reliable for ecommerce, just like in traditional search engines. Understandably, e-commerce and finance verticals achieved 40% or more brand-mention coverage across all three AI platforms.

Three Platforms Diverge

The three AI platforms in the study did not always agree. Many identical queries led to very different brand recommendations depending on the platform.

BrightEdge shares that:

  • ChatGPT cites trusted brands even when it’s not grounding on search data, indicating that it’s relying on LLM training data.
  • Google AI Overviews cites brands 2.5 times more than ChatGPT.
  • Google AI Mode cites brands less often than both ChatGPT and AIO.

The research indicates that ChatGPT favors trusted brands, Google AIO emphasizes breadth of coverage with more brand mentions per query, and Google AI Mode selectively recommends brands.

Next we untangle why these patterns exist.

Differences Exist

BrightEdge asserts that this split across the three platforms is not random. I agree that there are differences, but I disagree that “authority” has anything to do with it and offer an alternate explanation later on.

These are the conclusions that they draw from the data:

  • The Brand Authority Play:
    ChatGPT’s reliance on training data means established brands with strong historical presence can capture mentions without needing fresh citations. This creates an “authority dividend” that many brands don’t realize they’re already earning—or could be earning with the right positioning.
  • The Volume Opportunity:
    Google AI Overview’s hunger for brand mentions means there are 6+ available slots per relevant query, with clear citation paths showing exactly how to earn visibility. While competitors focus on traditional SEO, innovative brands are reverse-engineering these citation networks.
  • The Quality Threshold:
    Google AI Mode’s selectivity means fewer brands make the cut, but those that do benefit from heavy citation backing that reinforces their authority across the web.

Not Authority – It’s About Training Data

BrightEdge refers to “authority signals” within ChatGPT’s underlying LLM. My opinion differs with regard to an LLM’s generated output, as opposed to retrieval-augmented responses that pull in live citations. I don’t think there are any signals in the sense of ranking-related signals. In my opinion, the LLM is simply reaching for the entity (brand) related to a topic.

What looks like “authority” to someone with their SEO glasses on is more likely about frequency, prominence, and contextual embedding strength.

  • Frequency:
    How often the brand appears in the training data.
  • Prominence:
    How central the brand is in those contexts (headline vs. footnote).
  • Contextual Embedding Strength:
    How tightly the brand is associated with certain topics based on the model’s training data.

If a brand appears widely in appropriate contexts within the training data, then, in my opinion, it is more likely to be generated as a brand mention by the LLM, because this reflects patterns in the training data and not authority.

That said, I agree with BrightEdge that being authoritative is important, and that quality shouldn’t be minimized.

Patterns Emerge

The research data suggests that there are unique patterns across all three platforms that can behave as brand citation triggers. One pattern all three share is that keyword phrases with a high commercial intent generate brand mentions in nearly two-thirds of cases. Industries like e-commerce and finance achieve higher brand coverage, which, in my opinion, reflects the ability of all three platforms to accurately understand the strong commercial intents for keywords inherent to those two verticals.

A little sunshine in a partly cloudy publishing environment is the finding that comparison queries for “best” products generate 43% brand citations across all three AI platforms, again reflecting the ability of those platforms to understand user query contexts.

Citation Network Effect

BrightEdge has an interesting insight about creating presence in all three platforms that it calls a citation network effect. BrightEdge asserts that earning citations in one platform could influence visibility in the others.

They share:

“A well-crafted piece… could:
Earn authority mentions on ChatGPT through brand recognition

Generate 6+ competitive mentions on Google AI Overview through comprehensive coverage

Secure selective, heavily-cited placement on Google AI Mode through third-party validation

The citation network effect means that earning mentions on one platform often creates the validation needed for another.”

Optimizing For Traditional Search Remains

Nevertheless, I agree with BrightEdge that there’s a strategic opportunity in creating content that works across all three environments, and I would make it explicit that SEO, optimizing for traditional search, is the keystone upon which the entire strategy is crafted.

Traditional SEO is still the way to build visibility in AI search. BrightEdge’s data indicates that this is directly effective for AIO and has a more indirect effect for AI Mode and ChatGPT.

ChatGPT can cite brand names directly from training data and from live data. It also cites brands directly from the LLM, which suggests that generating strong brand visibility tied to specific products and services may be helpful, as that is what eventually makes it into the AI training data.

BrightEdge’s conclusion about the data leans heavily into the idea that AI is creating opportunities for businesses that build brand awareness in the topics they want to be surfaced in.
They share:

“We’re witnessing the emergence of AI-native brand discovery. With this fundamental shift, brand visibility is determined not by search rankings but by AI recommendation algorithms with distinct personalities and preferences.

The brands winning this transition aren’t necessarily the ones with the biggest SEO budgets or the most content. They’re the ones recognizing that AI disagreement creates more paths to visibility, not fewer.

As AI becomes the primary discovery mechanism across industries, understanding these platform-specific triggers isn’t optional—it’s the difference between capturing comprehensive brand visibility and watching competitors claim the opportunities you didn’t know existed.

The 62% disagreement gap isn’t breaking the system. It’s creating one—and smart brands are already learning to work it.”

BrightEdge’s report:

ChatGPT vs Google AI: 62% Brand Recommendation Disagreement

Featured Image by Shutterstock/MMD Creative

WordPress Trademark Applications Rejected By USPTO via @sejournal, @martinibuster

The United States Patent and Trademark Office has rejected the WordPress Foundation’s applications for trademarks on the phrases “Managed WordPress” and “Hosted WordPress.” But WordPress isn’t walking away just yet.

The Trademark Office published the following notice for the “Hosted WordPress” trademark application:

“A final Office action refusing registration has been sent (issued) because the applicant neither satisfied nor overcame all requirements and/or refusals previously raised….

SUMMARY OF ISSUES MADE FINAL that applicant must address:

• Disclaimer Requirement

• Identification of Goods and Services

• Applicant Domicile Requirement

DISCLAIMER REQUIREMENT Applicant must disclaim the wording ‘MANAGED’ because it is merely descriptive of an ingredient, quality, characteristic, function, feature, purpose, or use of applicant’s goods and services….

Applicant may respond by submitting a disclaimer in the following format: No claim is made to the exclusive right to use ‘MANAGED’ apart from the mark as shown.”

Screenshot of Document Close-Up

The USPTO also found that the WordPress Foundation’s description of goods and services is too vague and overly broad, especially regarding the phrase “website development software,” and asks them to clarify whether it is downloadable (Class 9) or offered as online services (Class 42). The USPTO suggested acceptable wording that they can adopt, as long as it accurately reflects what they provide.

The Trademark Office also issued the following response for the trademark application for Managed WordPress:

“DISCLAIMER REQUIREMENT
Applicant must disclaim the wording ‘MANAGED’ because it is merely descriptive of an ingredient, quality, characteristic, function, feature, purpose, or use of applicant’s goods and services…. Applicant may respond by submitting a disclaimer in the following format:

No claim is made to the exclusive right to use ‘MANAGED’ apart from the mark as shown.”

The Process Is Not Over

The WordPress Foundation is continuing its efforts to obtain trademarks for both “Managed WordPress” and “Hosted WordPress.” It has filed a Request for Reconsideration after Final Action for each trademark application, which asks the USPTO to reconsider its refusals based on amendments, arguments, or evidence. These requests are a final procedural step before an appeal, although they are not themselves appeals.

Google Says GSC Sitemap Uploads Don’t Guarantee Immediate Crawls via @sejournal, @martinibuster

Google’s John Mueller answered a question about how many sitemaps to upload, and then said there are no guarantees that any of the URLs will be crawled right away.

A member of the r/TechSEO community on Reddit asked if it’s enough to upload the main sitemap.xml file, which then links to the more granular sitemaps. What prompted the question was their concern over recently changing their website page slugs (URL file names).

That person asked:

“I submitted “sitemap.xml” to Google Search Console, is this sufficient or do I also need to submit page-sitemap.xml and sitemap-misc.xml as separate entries for it to work?
I recently changed my website’s page slugs, how long will it take for Google Search Console to consider the sitemap”

Mueller responded that uploading the sitemap index file (sitemap.xml) was enough and that Google would proceed from there. He also shared that it wasn’t necessary to upload the individual granular sitemaps.

What was of special interest were his comments indicating that uploading sitemaps didn’t “guarantee” that all the URLs would be crawled and that there is no set time for when Googlebot would crawl the sitemap URLs. He also suggested using the Inspect URL tool.

He shared:

“You can submit the individual ones, but you don’t really need to. Also, sitemaps don’t guarantee that everything is recrawled immediately + there’s no specific time for recrawling. For individual pages, I’d use the inspect URL tool and submit them (in addition to sitemaps).”
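For reference, a sitemap index file like the one the Reddit poster described follows the sitemaps.org protocol: it simply lists the child sitemaps, which Google then discovers on its own. The domain below is a placeholder; the child filenames match those mentioned in the question.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/page-sitemap.xml</loc>
    <lastmod>2025-08-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-misc.xml</loc>
  </sitemap>
</sitemapindex>
```

Submitting this one index file to Search Console is what Mueller describes as sufficient.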

Is There Value In Uploading All Sitemaps?

According to John Mueller, it’s enough to upload the index sitemap file. However, from our side of the Search Console, I think most people would agree that it’s better not to leave it to chance that Google will or will not crawl a URL. For that reason, SEOs may decide it’s reassuring to go ahead and upload all sitemaps that contain the changed URLs.

The URL Inspection tool is a solid approach because it enables SEOs to request crawling for a specific URL. The downside of the tool is that you can only request this for one URL at a time. Google’s URL Inspection tool does not support bulk URL submissions for indexing.

See also: Bing Recommends lastmod Tags For AI Search Indexing

Featured Image by Shutterstock/Denis OREA

LinkedIn Study: Professionals Trust Their Networks Over AI & Search via @sejournal, @MattGSouthern

LinkedIn reports that professionals are more likely to seek workplace advice from people they know than from AI tools or search engines.

A new LinkedIn study finds that 43% turn to their networks first, with nearly two-thirds saying colleagues help them decide faster and with more confidence.

Key Findings

LinkedIn’s research indicates that professional networks rank ahead of AI and search for advice at work, with 43% naming their network as the first stop.

Sixty-four percent say colleagues improve the quality and speed of decision-making. The study also notes an 82% rise in posts about feeling overwhelmed or navigating change, suggesting that people are looking for clarity from trusted human voices.

Pressure To Learn AI

Learning about AI is causing stress for many people. Over half (51%) say upskilling feels like a second job, 33% feel embarrassed about their knowledge, and 35% feel nervous discussing AI at work.

Additionally, 41% say the fast pace of AI changes affects their well-being. Younger workers, especially Gen Z, are more likely to exaggerate their AI skills compared to Gen X.

Among those aged 18 to 24, 75% believe AI cannot replace the intuition from trusted colleagues. This aligns with the finding that people prefer advice from known experts, especially when the stakes are high.

Implications For B2B Buying And Marketing

The study shows that 77% of B2B marketing leaders say audiences rely on both a company’s channels and their professional networks. Millennials and Gen Z now represent 71% of B2B buyers, leading marketers to invest in trusted individuals within those networks.

Eighty percent of marketers plan to increase spending on community-driven content featuring creators, employees, and experts. They believe that trusted creators are key to building credibility with younger buyers.

This highlights that social discovery and community participation matter as much as search rankings. Content that’s easy to share and linked to recognized experts may reach more people than generic brand messages.

Why This Matters

As professionals turn to their networks for advice, you may need to adjust how you build trust and generate demand.

You can do this by encouraging your employees to share messages, working with trusted creators, and creating expert-led content that’s easy to find on social media.

While traditional SEO and paid ads still matter, networks can affect how people find, discuss, and validate your content before they visit your website.

Looking Ahead

As more people use AI, professionals are learning to combine new tools with their own judgment. Marketers can gain lasting benefits by focusing on building real relationships, rather than just mastering AI tools.

Methodology

The findings are based on research commissioned by LinkedIn and conducted by Censuswide. The study included 19,268 professionals and 7,000 B2B marketers from 14 countries, conducted from July 3 to July 15, 2025.

The percentages and program details mentioned above are taken directly from LinkedIn’s pressroom post.


Featured Image: Nurulliaa/Shutterstock