Interaction To Next Paint: 9 Content Management Systems Ranked via @sejournal, @martinibuster

Interaction to Next Paint (INP) is a meaningful Core Web Vitals metric because it represents how quickly a web page responds to user input. It is so important that the HTTP Archive publishes a comparison of INP across content management systems. The following are the top content management systems ranked by Interaction to Next Paint.

What Is Interaction To Next Paint (INP)?

INP measures how responsive a web page is to user interactions during a visit. Specifically, it measures interaction latency, which is the time between when a user clicks, taps, or presses a key and when the page visually responds.

This is a more accurate measurement of responsiveness than the older metric it replaced, First Input Delay (FID), which only captured the first interaction. INP is more comprehensive because it evaluates all clicks, taps, and key presses on a page and then reports a representative value based on the longest meaningful latency.

The INP score is representative of the page’s responsive performance. For that reason, extreme outliers are filtered out of the calculation so that the score reflects typical worst-case responsiveness.

Web pages with poor INP scores create a frustrating user experience that increases the risk of page abandonment. Fast responsiveness enables a smoother experience that supports higher engagement and conversions.

INP Scores Have Three Ratings:

  • Good: Below or at 200 milliseconds
  • Needs Improvement: Above 200 milliseconds and below or at 500 milliseconds
  • Poor: Above 500 milliseconds
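For context on how these ratings are gathered, INP can be measured in the field with Google’s web-vitals JavaScript library. Here is a minimal sketch, assuming web-vitals version 3 or later; the analytics endpoint is a hypothetical placeholder:

  // Minimal field measurement of INP (assumes `npm install web-vitals`, v3+).
  import { onINP } from 'web-vitals';

  onINP(({ value, rating }) => {
    // `value` is the interaction latency in milliseconds; `rating` maps it to
    // the thresholds above: 'good', 'needs-improvement', or 'poor'.
    console.log(`INP: ${Math.round(value)} ms (${rating})`);

    // In production, beacon the metric to your own endpoint (hypothetical URL).
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/analytics/inp', JSON.stringify({ value, rating }));
    }
  });

The HTTP Archive comparison that follows is based on Chrome UX Report field data, which measures this same metric from real Chrome users.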

Content Management System INP Champions

The latest Interaction to Next Paint (INP) data shows that all major content management systems improved from June to July, though only incrementally.

Joomla posted the largest gain with a 1.12% increase in sites achieving a good score. WordPress followed with a 0.88% increase, while Wix and Drupal improved by 0.70% and 0.64%, respectively.

Duda and Squarespace also improved, though by smaller margins of 0.46% and 0.22%, respectively. Even small percentage changes can reflect real improvements in how users experience responsiveness on these platforms, so it’s encouraging that every publishing platform in this comparison is improving.

CMS INP Ranking By Monthly Improvement

  1. Joomla: +1.12%
  2. WordPress: +0.88%
  3. Wix: +0.70%
  4. Drupal: +0.64%
  5. Duda: +0.46%
  6. Squarespace: +0.22%

Which CMS Has The Best INP Scores?

Month-to-month improvement shows who is doing better, but that’s not the same as which CMS is doing the best. The July INP results show a different ranking order of content management systems when viewed by overall INP scores.

Squarespace leads with 96.07% of sites achieving a good INP score, followed by Duda at 93.81%. This is a big difference from the Core Web Vitals rankings, where Duda is consistently ranked number one. When it comes to arguably the most important Core Web Vitals metric, Squarespace takes the lead as the number one ranked CMS for Interaction to Next Paint.

Wix and WordPress are ranked in the middle with 87.52% and 86.77% of sites showing a good INP score, while Drupal, with a score of 86.14%, is ranked in fifth place, just a fraction behind WordPress.

Ranking in sixth place in this comparison is Joomla, trailing the other five with a score of 84.47%. That score is not so bad considering that it’s only two to three percentage points behind Wix and WordPress.

CMS INP Rankings for July 2025

  1. Squarespace: 96.07%
  2. Duda: 93.81%
  3. Wix: 87.52%
  4. WordPress: 86.77%
  5. Drupal: 86.14%
  6. Joomla: 84.47%

These rankings show that even platforms that lag in INP performance, like Joomla, are still improving, and Joomla may yet best the other platforms if it keeps up its rate of improvement.

In contrast, Squarespace, which already performs well, posted the smallest gain. This indicates that performance improvement is uneven, with systems advancing at different speeds. Nevertheless, the latest Interaction to Next Paint (INP) data shows that all six content management systems in this comparison improved from June to July. That upward performance trend is a positive sign for publishers.

What About Shopify’s INP Performance?

Shopify has strong Core Web Vitals performance, but how well does it compare to these six content management systems? This might seem like an unfair comparison because shopping platforms require features, images, and videos that can slow a page down. But Duda, Squarespace, and Wix offer ecommerce solutions, so it’s actually a fair and reasonable comparison.

We see that the rankings change when Shopify is added to the INP comparison:

Shopify Versus Everyone

  1. Squarespace: 96.07%
  2. Duda: 93.81%
  3. Shopify: 89.58%
  4. Wix: 87.52%
  5. WordPress: 86.77%
  6. Drupal: 86.14%
  7. Joomla: 84.47%

Shopify is ranked number three. Now look at what happens when we compare the three shopping platforms against each other:

Top Ranked Shopping Platforms By INP

  1. BigCommerce: 95.29%
  2. Shopify: 89.58%
  3. WooCommerce: 87.99%

BigCommerce is the number-one-ranked shopping platform for the important INP metric among the three in this comparison.

Lastly, we compare the INP performance scores for all the platforms together, leading to a surprising comparison.

CMS And Shopping Platforms Comparison

  1. Squarespace: 96.07%
  2. BigCommerce: 95.29%
  3. Duda: 93.81%
  4. Shopify: 89.58%
  5. WooCommerce: 87.99%
  6. Wix: 87.52%
  7. WordPress: 86.77%
  8. Drupal: 86.14%
  9. Joomla: 84.47%

All three ecommerce platforms feature in the top five rankings of content management systems, which is remarkable given the resource-intensive demands of ecommerce websites. WooCommerce, a WordPress-based shopping platform, ranks fifth, but it’s so close to Wix that the two are virtually tied.

Takeaways

INP measures the responsiveness of a web page, making it a meaningful indicator of user experience. The latest data shows that while every CMS is improving, Squarespace, BigCommerce, and Duda outperform all other content platforms in this comparison by meaningful margins.

All of the platforms in this comparison show high percentages of good INP scores. The number four-ranked Shopify is only 6.49 percentage points behind the top-ranked Squarespace, and 84.47% of the sites published with the bottom-ranked Joomla show a good INP score. These results show that all platforms are delivering a quality experience for users.

View the results here (must be logged into a Google account to view).

Featured Image by Shutterstock/Roman Samborskyi

Google Avoids Breakup As Judge Bars Exclusive Default Search Deals via @sejournal, @MattGSouthern

A federal judge outlined remedies in the U.S. search antitrust case that bar Google from using exclusive default search deals but stop short of forcing a breakup.

Reuters reports that Google won’t have to divest Chrome or Android, but it may have to share some search data with competitors under court-approved terms.

Google says it will appeal.

What The Judge Ordered

Judge Amit P. Mehta barred Google from entering or maintaining exclusive agreements that tie the distribution of Search, Chrome, Google Assistant, or the Gemini app to other apps, licenses, or revenue-share arrangements.

The ruling allows Google to continue paying for placement but prohibits exclusivity that could block rivals.

The order also envisions Google making certain search and search-ad syndication services available to competitors at standard rates, alongside limited data sharing for “qualified competitors.”

Mehta ordered Google to share some search data with competitors under specific protections to help them improve their relevance and revenue. Google argued this could expose its trade secrets and plans to appeal the decision.

The judge directed the parties to meet and submit a revised final judgment by September 10. Once entered, the remedies would take effect 60 days later, run for six years, and be overseen by a technical committee. Final language could change based on the parties’ filing.

How We Got Here

In August 2024, Mehta found Google illegally maintained a monopoly in general search and related text ads.

Mehta wrote in that opinion:

“Google is a monopolist, and it has acted as one to maintain its monopoly.”

This decision established the need for remedies. Today’s order focuses on distribution and data access, rather than breaking up the company.

What’s Going To Change

Ending exclusivity changes how contracts for default placements can be made across devices and browsers. Phone makers and carriers may need to update their agreements to follow the new rules.

However, the ruling doesn’t require any specific user experience change, like a choice screen. The results will depend on how new contracts are created and approved by the court.

Next Steps

Expect a gradual rollout if the final judgment follows today’s outline.

Here are the next steps to watch for:

  • The revised judgment that the parties will submit by September 10.
  • Changes to contracts between Google and distribution partners to meet the non-exclusivity requirement.
  • Any pilot programs or rules that specify who qualifies as a “qualified competitor” and what data they can access.

Separately, Google faces a remedies trial in the ad-tech case in late September. This trial could lead to changes that affect advertising and measurement.

Looking Ahead

If the parties submit a revised judgment by September 10, changes could start about 60 days after the court’s final order. This might shift if Google gets temporary relief during an appeal.

In the short term, expect contract changes rather than product updates.

The final judgment will determine who can access data and which types are included. If the program is limited, it may not significantly affect competition. If broader, competitors might enhance their relevance and profit over the six-year period.

Also watch the ad tech remedies trial this month. Its results, along with the search remedies, will shape how Google handles search and ads in the coming years.

How to Use Google Ads Performance Max Channel Reporting via @sejournal, @brookeosmundson

For years, marketers have asked for better visibility into how individual channels contribute to Performance Max results.

Google has released a tutorial walking advertisers through its new Performance Max channel reporting. This reporting feature offers more transparency into how campaigns perform across Search, YouTube, Display, Gmail, Discover, and Maps.

With this new report, you can now dig deeper into performance by channel and format, making it easier to analyze results and troubleshoot.

Here’s a look at how to find the report and what you can do with it.

Where to Find Channel Performance Reporting

To find and access the channel reporting, head to your Google Ads account.

From there, navigate to: Campaign >> Insights & Reports >> Channel Performance

Screenshot of Performance Max channel reporting in Google Ads. Image credit: Google, April 2025

Once you’re there, you’ll see these items:

  • A performance summary overview
  • A channel-to-goals visualization
  • A channel distribution table

These items provide more than just a static view of performance. You’re able to click on specific channels to drill down into related reports, like placements on the Google Display Network or search terms from the Search channel.

Exploring the Reports and Visualizations

The channel performance page isn’t just a high-level dashboard. It provides several views and reports that give you more context on how your ads are performing across Google’s network. Here’s a closer look at the most useful areas:

Ad Format Views

Not every ad performs the same across channels, which is why Google lets you break results down by ad format.

For example, you can see how video ads perform on YouTube compared to product ads shown on Search. This helps you spot whether one creative type is pulling more weight and whether you need to adjust your creative mix or budgets to support higher-performing formats.

Product-Driven Insights

If you’re running Shopping or retail campaigns, this section shows how ads tied to product data perform across channels.

You can see Shopping ads on Search as well as dynamic remarketing ads on Display. This gives ecommerce advertisers a clearer picture of how product feeds contribute to results beyond just one channel.

Channel Distribution Table

This table is one of the most detailed reports in the new view. It includes impressions, clicks, interactions, conversions, conversion value, and cost, all broken down by channel.

You can customize the table to highlight the metrics that matter most to your goals, such as ROAS or CPA, and even segment results by ad format (like video versus product ads).

Since the table is downloadable, you can also share it with teams or clients for transparent reporting.

Status Column and Diagnostics

The status column acts as a built-in troubleshooting tool. It surfaces issues or recommendations related to specific channels or formats, such as diagnostic warnings if ads aren’t serving as expected.

By reviewing these, you can quickly identify where performance may be limited and take action to resolve issues before they affect results at scale.

Reviewing Single-Channel vs. Cross-Channel CPA

One important takeaway from Google’s tutorial is that looking at average CPA or ROAS for a single channel doesn’t tell the full story.

Performance Max uses marginal ROI optimization, bidding in real time for the most cost-efficient conversions across all channels.

Since users don’t interact with just one channel, this cross-channel view helps advertisers see the broader picture of how campaigns drive results.
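To make that concrete, here is a hypothetical illustration, with invented numbers, of why a single channel’s CPA can mislead when the campaign optimizes across channels:

  // Hypothetical, invented numbers: single-channel CPA vs. blended CPA.
  const channels = [
    { name: 'Search', cost: 500, conversions: 25 },  // $20 CPA
    { name: 'YouTube', cost: 300, conversions: 6 },  // $50 CPA
    { name: 'Display', cost: 200, conversions: 9 },  // ~$22 CPA
  ];

  for (const c of channels) {
    console.log(`${c.name} CPA: $${(c.cost / c.conversions).toFixed(2)}`);
  }

  const totalCost = channels.reduce((sum, c) => sum + c.cost, 0);
  const totalConversions = channels.reduce((sum, c) => sum + c.conversions, 0);

  // The campaign-level figure the bidding system actually optimizes toward:
  console.log(`Blended CPA: $${(totalCost / totalConversions).toFixed(2)}`); // $25.00

In this sketch, YouTube looks expensive in isolation, but pausing it could also remove interactions that assist cheaper conversions on other channels, which is why the campaign-level view matters.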

That means when evaluating effectiveness, Google recommends prioritizing your goals and audiences over individual channel performance.

How Advertisers Can Benefit From Performance Max Channel Reporting

The new reporting doesn’t change how Performance Max works behind the scenes, but it does help you:

  • Understand which channels support your goals most effectively
  • Identify areas where specific ad formats or channels may need creative or budget adjustments
  • Communicate results more clearly with stakeholders by showing cross-channel contributions

With Search Partner Network reporting coming in the future, Google is signaling a continued investment in giving advertisers deeper visibility.

Performance Max remains a cross-channel campaign type, but channel reporting is a welcome step toward transparency. By digging into these reports, advertisers can better understand how ads perform across Google properties and make smarter optimization decisions.

Google Adds Guidance On JavaScript Paywalls And SEO via @sejournal, @martinibuster

Google is apparently having trouble identifying paywalled content because of a common way publishers, such as news sites, implement their paywalls. It’s asking publishers with paywalled content to change the way they block content so as to help Google out.

Search Related JavaScript Problems

Google updated their guidelines with a call for publishers to consider changing how they block users from paywalled content. It’s fairly common for publishers to use a script to block non-paying users with an interstitial, even though the full content is still there in the code. This may be causing issues for Google in properly identifying paywalled content.

In a recent addition to their search documentation about JavaScript issues related to search, they wrote:

“If you’re using a JavaScript-based paywall, consider the implementation.

Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed. This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”
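As a rough sketch of the pattern Google describes, where the selectors and endpoint are invented for illustration, the problematic approach ships the full article in the HTML and merely hides it client-side:

  // Anti-pattern: the full article is already in the server's HTML response.
  // This script only hides it, so the content remains readable in the page
  // source regardless of subscription status.
  const article = document.querySelector('.article-body');    // invented selector
  const overlay = document.querySelector('.paywall-overlay'); // invented selector

  fetch('/api/subscription-status') // hypothetical endpoint
    .then((response) => response.json())
    .then(({ isSubscriber }) => {
      if (isSubscriber) {
        overlay.remove();
      } else {
        article.style.display = 'none'; // hidden, but still in the markup
      }
    });

Google’s guidance points to the server-side alternative: return only a preview in the initial response and load the full text only after the subscription check succeeds, so non-subscribers never receive the paywalled content in the markup.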

The documentation doesn’t say what problems Google itself is having, but a changelog documenting the change offers more context about why they are asking for this change:

“Adding guidance for JavaScript-based paywalls

What: Added new guidance on JavaScript-based paywall considerations.

Why: To help sites understand challenges with the JavaScript-based paywall design pattern, as it makes it difficult for Google to automatically determine which content is paywalled and which isn’t.”

The changelog makes it clear that the way some publishers use JavaScript for blocking paywalled content is making it difficult for Google to know if the content is or is not paywalled.

The change was an addition to a numbered list of JavaScript problems publishers should be aware of, item number 10 on their “Fix Search-related JavaScript Problems” page.

Featured Image by Shutterstock/Kues

TablePress WordPress Plugin Vulnerability Affects 700,000+ Sites via @sejournal, @martinibuster

A vulnerability in the TablePress WordPress plugin enables attackers to inject malicious scripts that run when someone visits a compromised page. It affects all versions up to and including version 3.2.

TablePress WordPress Plugin

The TablePress plugin is used on more than 700,000 websites. It enables users to create and manage tables with interactive features like sorting, pagination, and search.

What Caused The Vulnerability

The problem came from missing input sanitization and output escaping in how the plugin handled the shortcode_debug parameter. These are basic security steps that protect sites from harmful input and unsafe output.

The Wordfence advisory explains:

“The TablePress plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the ‘shortcode_debug’ parameter in all versions up to, and including, 3.2 due to insufficient input sanitization and output escaping.”

Input Sanitization

Input sanitization filters what users type into forms or fields. It blocks harmful input, like malicious scripts. TablePress didn’t fully apply this security step.

Output Escaping

Output escaping is similar, but it works in the opposite direction, filtering what gets output onto the website. Output escaping prevents the website from publishing characters that can be interpreted by browsers as code.
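As a generic illustration of what output escaping does (shown in JavaScript for consistency with the other examples here; a WordPress plugin would use PHP helpers such as esc_html() and sanitize_text_field()):

  // Minimal sketch of output escaping: convert characters a browser would
  // interpret as markup into harmless HTML entities before rendering.
  function escapeHtml(value) {
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }

  // A stored value like this would execute as code if echoed unescaped:
  const stored = '<script>alert("xss")</script>';
  console.log(escapeHtml(stored));
  // -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;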

That’s exactly what can happen with TablePress: insufficient input sanitization enables an attacker to submit a script, and insufficient output escaping fails to stop the site from rendering that script as code. Together, those gaps enable stored cross-site scripting (XSS) attacks.

Because both protections were missing, someone with Contributor-level access or higher could upload a script that gets stored and runs whenever the page is visited. The fact that Contributor-level access is required mitigates the potential for an attack to a certain extent.

Plugin users are advised to update the plugin to version 3.2.1 or higher.

Featured Image by Shutterstock/Nithid

WordPress Ocean Extra Vulnerability Affects Up To 600,000 Sites via @sejournal, @martinibuster

An advisory was issued for the Ocean Extra WordPress plugin, which is susceptible to stored cross-site scripting, a flaw that enables attackers to upload malicious scripts that execute when a user visits an affected page.

Ocean Extra WordPress Plugin

The vulnerability affects only the Ocean Extra plugin by oceanwp, a plugin that extends the popular OceanWP WordPress theme. The plugin adds extra features to the OceanWP theme, such as the ability to easily host fonts locally, additional widgets, and expanded navigation menu options.

According to the Wordfence advisory, the vulnerability is due to insufficient input sanitization and output escaping.

Input Sanitization

Input sanitization is the term used to describe the process of filtering what’s input into WordPress, like in a form or any field where a user can input something. The goal is to filter out unexpected kinds of input, like malicious scripts, for example. This is one of the protections the plugin is said to be missing (insufficient).

Output Escaping

Output escaping is kind of like input sanitization but in the other direction, a security process that makes sure that whatever is being output from WordPress is safe. It checks that the output doesn’t have characters that can be interpreted by a browser as code and subsequently executed, such as what is found in a stored cross-site scripting (XSS) exploit. This is the other thing that the Ocean Extra plugin was missing.

Together, the insufficient input sanitization and insufficient output escaping enable attackers to upload a malicious script and have it output on the WordPress site.

Users Urged To Update Plugin

The vulnerability is only exploitable by authenticated users with Contributor-level privileges or higher, which to a certain extent mitigates the threat level of this specific exploit. It affects versions up to and including version 2.4.9. Users are advised to update their plugin to the latest version, currently 2.5.0.

Featured Image by Shutterstock/Nithid

Google: AI Max For Search Has No Conversion Minimums via @sejournal, @MattGSouthern

Google states that AI Max for Search can run in low-volume accounts, confirming there’s no minimum conversion recommendation.

However, you must use a conversion-based Smart Bidding strategy for search-term matching to work.

The clarification was provided during Google’s Ads Decoded podcast, where product managers discussed recent launches.

What Google Said

In the episode, Ginny Marvin, Google’s Ads Product Liaison, addressed whether low-volume accounts can use AI Max.

Marvin stated:

“In earlier testing, we’ve seen that AI Max can be effective for accounts of varied sizes… And there’s no minimum conversion recommendation to enable AI Max, but keep in mind that you do need to use a conversion-based smart bidding strategy in order for search term matching to work.”

This smart bidding requirement ensures the system has signals to work with, even if conversion volume is low.

Hear her full response in the video below:

Where Smaller Accounts May See Gains

Google says advertisers “mostly using exact and phrase match keywords tend to see the highest uplift in conversions and conversion value” after enabling AI Max.

Keywordless matching can help smaller advertisers find opportunities without extensive research. AI Max identifies relevant search terms based on landing page content and existing ads.

For local campaigns, advertisers can use simple keywords instead of creating separate ones for each location. AI Max handles the geographic matching.

How AI Max Works In Search

AI Max pulls from more than just landing pages. It also uses ad assets and ad-group keywords to expand coverage and tailor RSA copy.

For English content, it’s capable of generating ad variations within brand guardrails.

Product manager Karen Zang described AI Max as an enhancer to existing work:

“I would view AI Max as an amplifier on the work that you’ve already put in… we’re just leveraging that to customize your ads.”

Product manager Tal Kabas framed AI Max as bringing Performance Max-level technology into Search:

“If you’re using all the best practices with AI Max… then it is PMax technology for Search. We wanted to basically bring that value to advertisers wherever they want to buy.”

Implementation Considerations

Small advertisers considering AI Max should take these preparation steps into account.

First, ensure landing pages are current, as the AI uses them to generate ad variations. Poor or outdated landing page content can negatively impact the output, regardless of account size.

Second, use conversion tracking even if volume is low. While there are no minimums, having any conversion data helps. Smart bidding strategies, such as Target CPA or Target ROAS, must be in place for full functionality.

Third, start with campaigns that use exact and phrase match keywords, as Google’s data shows they benefit the most from AI Max.

Looking Ahead

AI Max is accessible to advertisers of all sizes.

The one-click implementation allows you to test AI Max without restructuring your campaigns. If results don’t meet your expectations, the feature can be disabled.

Google indicated this is the first phase of AI Max development, with more features planned.

Research Shows How To Optimize For Google AIO And ChatGPT via @sejournal, @martinibuster

New research from BrightEdge shows that Google AI Overviews, AI Mode, and ChatGPT recommend different brands nearly 62% of the time. BrightEdge concludes that each AI search platform is interpreting the data in different ways, suggesting different ways of thinking about each AI platform.

Methodology And Results

BrightEdge’s analysis was conducted with its AI Catalyst tool, running tens of thousands of identical queries across ChatGPT, Google AI Overviews (AIO), and Google AI Mode. The research documented a 61.9% overall disagreement rate, with only 33.5% of queries showing the exact same brands across all three AI platforms.

Google AI Overviews averaged 6.02 brand mentions per query, compared to ChatGPT’s 2.37. Commercial intent search queries containing phrases like “buy,” “where,” or “deals” generated brand mentions 65% of the time across all platforms, suggesting that these kinds of high-intent keyword phrases continue to be reliable for ecommerce, just like in traditional search engines. Understandably, the ecommerce and finance verticals achieved 40% or more brand-mention coverage across all three AI platforms.

Three Platforms Diverge

Not all was agreement between the three AI platforms in the study. Many identical queries led to very different brand recommendations depending on the AI platform.

BrightEdge shares that:

  • ChatGPT cites trusted brands even when it’s not grounding on search data, indicating that it’s relying on LLM training data.
  • Google AI Overviews cites brands 2.5 times more than ChatGPT.
  • Google AI Mode cites brands less often than both ChatGPT and AIO.

The research indicates that ChatGPT favors trusted brands, Google AIO emphasizes breadth of coverage with more brand mentions per query, and Google AI Mode selectively recommends brands.

Next we untangle why these patterns exist.

Differences Exist

BrightEdge asserts that this split across the three platforms is not random. I agree that there are differences, but I disagree that “authority” has anything to do with it and offer an alternate explanation later on.

These are the conclusions that they draw from the data:

  • The Brand Authority Play:
    ChatGPT’s reliance on training data means established brands with strong historical presence can capture mentions without needing fresh citations. This creates an “authority dividend” that many brands don’t realize they’re already earning—or could be earning with the right positioning.
  • The Volume Opportunity:
    Google AI Overview’s hunger for brand mentions means there are 6+ available slots per relevant query, with clear citation paths showing exactly how to earn visibility. While competitors focus on traditional SEO, innovative brands are reverse-engineering these citation networks.
  • The Quality Threshold:
    Google AI Mode’s selectivity means fewer brands make the cut, but those that do benefit from heavy citation backing that reinforces their authority across the web.

Not Authority – It’s About Training Data

BrightEdge refers to “authority signals” within ChatGPT’s underlying LLM. My opinion differs with regard to an LLM’s generated output, as opposed to retrieval-augmented responses that pull in live citations. I don’t think there are any signals in the sense of ranking-related signals. In my opinion, the LLM is simply reaching for the entity (brand) related to a topic.

What looks like “authority” to someone with their SEO glasses on is more likely about frequency, prominence, and contextual embedding strength.

  • Frequency:
    How often the brand appears in the training data.
  • Prominence:
    How central the brand is in those contexts (headline vs. footnote).
  • Contextual Embedding Strength:
    How tightly the brand is associated with certain topics based on the model’s training data.

If a brand appears widely in appropriate contexts within the training data, then, in my opinion, it is more likely to be generated as a brand mention by the LLM, because this reflects patterns in the training data and not authority.

That said, I agree with BrightEdge that being authoritative is important, and that quality shouldn’t be minimized.

Patterns Emerge

The research data suggests that there are unique patterns across all three platforms that can behave as brand citation triggers. One pattern all three share is that keyword phrases with a high commercial intent generate brand mentions in nearly two-thirds of cases. Industries like ecommerce and finance achieve higher brand coverage, which, in my opinion, reflects the ability of all three platforms to accurately understand the strong commercial intent of keywords inherent to those two verticals.

A little sunshine in a partly cloudy publishing environment is the finding that comparison queries for “best” products generate 43% brand citations across all three AI platforms, again reflecting the ability of those platforms to understand user query contexts.

Citation Network Effect

BrightEdge has an interesting insight about creating presence in all three platforms that it calls a citation network effect. BrightEdge asserts that earning citations in one platform could influence visibility in the others.

They share:

“A well-crafted piece… could:
Earn authority mentions on ChatGPT through brand recognition

Generate 6+ competitive mentions on Google AI Overview through comprehensive coverage

Secure selective, heavily-cited placement on Google AI Mode through third-party validation

The citation network effect means that earning mentions on one platform often creates the validation needed for another.”

Optimizing For Traditional Search Remains

Nevertheless, I agree with BrightEdge that there’s a strategic opportunity in creating content that works across all three environments, and I would make it explicit that SEO, optimizing for traditional search, is the keystone upon which the entire strategy is crafted.

Traditional SEO is still the way to build visibility in AI search. BrightEdge’s data indicates that this is directly effective for AIO and has a more indirect effect for AI Mode and ChatGPT.

ChatGPT can cite brand names from live data as well as directly from its training data. That it cites brands straight from the LLM suggests that generating strong brand visibility tied to specific products and services may be helpful, as that visibility is what eventually makes it into the AI training data.

BrightEdge’s conclusion about the data leans heavily into the idea that AI is creating opportunities for businesses that build brand awareness in the topics they want to be surfaced in.

They share:

“We’re witnessing the emergence of AI-native brand discovery. With this fundamental shift, brand visibility is determined not by search rankings but by AI recommendation algorithms with distinct personalities and preferences.

The brands winning this transition aren’t necessarily the ones with the biggest SEO budgets or the most content. They’re the ones recognizing that AI disagreement creates more paths to visibility, not fewer.

As AI becomes the primary discovery mechanism across industries, understanding these platform-specific triggers isn’t optional—it’s the difference between capturing comprehensive brand visibility and watching competitors claim the opportunities you didn’t know existed.

The 62% disagreement gap isn’t breaking the system. It’s creating one—and smart brands are already learning to work it.”

BrightEdge’s report:

ChatGPT vs Google AI: 62% Brand Recommendation Disagreement

Featured Image by Shutterstock/MMD Creative

WordPress Trademark Applications Rejected By USPTO via @sejournal, @martinibuster

The United States Patent and Trademark Office has rejected the WordPress Foundation’s applications for trademarks on the phrases “Managed WordPress” and “Hosted WordPress.” But WordPress isn’t walking away just yet.

The Trademark Office published the following notice for the “Hosted WordPress” trademark application:

“A final Office action refusing registration has been sent (issued) because the applicant neither satisfied nor overcame all requirements and/or refusals previously raised….

SUMMARY OF ISSUES MADE FINAL that applicant must address:

• Disclaimer Requirement

• Identification of Goods and Services

• Applicant Domicile Requirement

DISCLAIMER REQUIREMENT Applicant must disclaim the wording ‘MANAGED’ because it is merely descriptive of an ingredient, quality, characteristic, function, feature, purpose, or use of applicant’s goods and services….

Applicant may respond by submitting a disclaimer in the following format: No claim is made to the exclusive right to use ‘MANAGED’ apart from the mark as shown.”

Screenshot of Document Close-Up

The USPTO also found that the WordPress Foundation’s description of goods and services is too vague and overly broad, especially regarding the phrase “website development software,” and asks them to clarify whether it is downloadable (Class 9) or offered as online services (Class 42). The USPTO suggested acceptable wording that they can adopt, as long as it accurately reflects what they provide.

The Trademark Office also issued the following response for the trademark application for Managed WordPress:

“DISCLAIMER REQUIREMENT
Applicant must disclaim the wording ‘MANAGED’ because it is merely descriptive of an ingredient, quality, characteristic, function, feature, purpose, or use of applicant’s goods and services…. Applicant may respond by submitting a disclaimer in the following format:

No claim is made to the exclusive right to use ‘MANAGED’ apart from the mark as shown.”

The Process Is Not Over

The WordPress Foundation is continuing its efforts to obtain trademarks for both “Managed WordPress” and “Hosted WordPress.” It has filed a Request for Reconsideration after Final Action for each trademark application, which asks the USPTO to reconsider its refusals based on amendments, arguments, or evidence. These requests are a final procedural step before an appeal, although they are not themselves appeals.