Automattic’s Response To WP Engine Lawsuit Reframes Narrative via @sejournal, @martinibuster

Lawyers for Matt Mullenweg and Automattic filed a motion to dismiss the lawsuit from WP Engine, offering a different perspective on the dispute’s underlying causes.

The motion to dismiss claims that the party causing harm isn’t Mullenweg and Automattic but WP Engine, asserting that WP Engine is trying to compel the defendants to provide resources and support free of charge and to restrict Mullenweg’s ability to express his opinions about WP Engine’s practices.

The motion to dismiss begins by accusing WP Engine of selectively choosing recent events as the basis for its complaint. It then fills in the parts that were left out, beginning with the founding of WordPress over two decades ago, when Mullenweg co-founded a way to create websites that democratized Internet publishing in the process. The motion outlines how his organization devoted thousands of person-years to growing the platform, eventually getting it to the point where it now generates an estimated $10 billion per year for thousands of companies and freelancers.

The point of the first part of the motion is to explain that Mullenweg and Automattic support the open source WordPress project, and that the project depends on a “symbiotic” relationship with those in the WordPress community, including web hosts like WP Engine.

“But the success and vitality of WordPress depends on a supportive and symbiotic relationship with those in the WordPress community.”

After establishing what the community is, how it was founded, and the role of Mullenweg and Automattic as strong supporters of it, the motion paints a picture of WP Engine as a company that reaps huge benefits from volunteer work and donated time without adequately giving back. This is the context Mullenweg and Automattic feel is left out of WP Engine’s complaint: that Mullenweg was expressing his opinion that WP Engine should provide more support to the community, and that he was responding to the threat posed by the plaintiff’s behavior.

The motion explains:

“Plaintiff WP Engine’s conduct poses a threat to that community. WP Engine is a website hosting service built on the back of WordPress software and controlled by the private equity firm Silver Lake, which claims over $100B of assets under management.

…In addition to WordPress software, WP Engine also uses various of the free resources on the Website, and its Complaint alleges that access to the Website is now, apparently, critical for its business.”

Finally, the opening part of the motion, which lays out the defendants’ side of the dispute, asserts that their behavior was entirely within their legal rights because no agreement exists between WordPress and WP Engine that guarantees access to WordPress resources, and because WP Engine at no time tried to secure those rights.

The document continues:

“But the Complaint does not (and cannot) allege that WP Engine has any agreement with Matt (or anyone else for that matter) that gives WP Engine the right to use the Website’s resources. The Complaint does not (and cannot) allege that WP Engine at any time has attempted to secure that right from Matt or elsewhere.

Instead, WP Engine has exploited the free resources provided by the Website to make hundreds of millions of dollars annually. WP Engine has done so while refusing to meaningfully give back to the WordPress community, and while unfairly trading off the goodwill associated with the WordPress and WooCommerce trademarks.”

Accusation Of Trademark Infringement

The motion to dismiss filed by Mullenweg and Automattic accuses WP Engine of trademark infringement, a claim that has been at the heart of Mullenweg’s dispute and one the legal response says Mullenweg attempted to resolve amicably in private.

The legal document asserts:

“In 2021, for the first time, WP Engine incorporated the WordPress trademark into the name of its own product offering which it called “Headless WordPress,” infringing that trademark and violating the express terms of the WordPress Foundation Trademark Policy, which prohibits the use of the WordPress trademarks in product names. And, over time, WP Engine has progressively increased its use and prominence of the WordPress trademark throughout its marketing materials, ultimately using that mark well beyond the recognized limits of nominative fair use.”

What Triggered The Dispute

The defendants claim that WP Engine benefited from the open source community but declined to become an active partner in it. They claim they tried to bring WP Engine into the community as part of the symbiotic relationship, but WP Engine refused.

The motion to dismiss is interesting because it first argues that WP Engine had no agreement with Automattic for use of the WordPress trademark, nor did it have an agreement granting rights of access to WordPress resources. It then shows how the defendants tried to reach an agreement, and argues that WP Engine’s refusal to “meaningfully give back to the WordPress community” and come to an agreement with Automattic is what triggered the dispute.

The document explains:

“Matt has attempted to raise these concerns with WP Engine and to reach an amicable resolution for the good of the community. In private, Matt also has encouraged WP Engine to give back to the ecosystem from which it has taken so much. Preserving and maintaining the resources made available on the Website requires considerable effort and investment—an effort and investment that Matt makes to benefit those with a shared sense of mission. WP Engine does not embrace that mission.

WP Engine and Silver Lake cannot expect to profit off the back of others without carrying some of the weight—and that is all Matt has asked of them. For example, Matt suggested that WP Engine either execute a license for the Foundation’s WordPress trademarks or dedicate eight percent of its revenue to the further development of the open source WordPress software.”

Mullenweg Had Two Choices

The above is what Mullenweg and Automattic claim is at the heart of the dispute: the unwillingness of WP Engine to reach an agreement with Automattic and become a stronger partner with the community. The motion to dismiss says that WP Engine’s refusal to reach an agreement left Mullenweg few choices about what to do next, as the motion explains:

“When it became abundantly clear to Matt that WP Engine had no interest in giving back, Matt was left with two choices: (i) continue to allow WP Engine to unfairly exploit the free resources of the Website, use the WordPress and WooCommerce trademarks without authorization, which would also threaten the very existence of those trademarks, and remain silent on the negative impact of its behavior or (ii) refuse to allow WP Engine to do that and demand publicly that WP Engine do more to support the community.”

Disputes Look Different From Each Side

Matt Mullenweg and Automattic have been portrayed in an unflattering light since the dispute with WP Engine burst into public view. The motion to dismiss communicates that Mullenweg’s motivations were in defense of the WordPress community, illustrating that every dispute looks different depending on who is telling the story. Now it’s up to the judge to decide.

Featured Image by Shutterstock/santypan

Google Chrome DevTools Adds Advanced CLS Debugging Tool via @sejournal, @MattGSouthern

Chrome has introduced a new debugging tool in its Canary build to help developers identify and fix website layout stability issues.

  • Chrome Canary has added a new “Layout Shift Culprits” feature that visually identifies page layout problems.
  • Developers can now see and replay layout shifts in real-time to pinpoint specific issues.
  • The tool will move from Chrome Canary to regular Chrome in a future release, though no date has been announced.
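Until that lands in stable Chrome, layout shift culprits can already be surfaced with the standard Layout Instability API. Below is a minimal sketch (TypeScript for the browser; the LayoutShift interfaces are declared locally since they may not be present in your DOM type definitions) that logs the elements responsible for each shift, which is roughly the kind of information the new DevTools feature is meant to visualize.

```typescript
// Log layout shift "culprits" using the standard Layout Instability API.
// Paste the compiled JavaScript into the console, or add it to a debug build.

interface LayoutShiftAttribution {
  node: Node | null;             // the element that moved
  previousRect: DOMRectReadOnly; // where it was
  currentRect: DOMRectReadOnly;  // where it ended up
}

interface LayoutShift extends PerformanceEntry {
  value: number;                 // this shift's contribution to CLS
  hadRecentInput: boolean;       // shifts right after user input don't count toward CLS
  sources: LayoutShiftAttribution[];
}

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShift[]) {
    if (entry.hadRecentInput) continue; // ignore expected, input-driven shifts
    for (const source of entry.sources) {
      console.log('Layout shift culprit:', source.node, 'shift score:', entry.value.toFixed(4));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```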
6 SEO Practices You Need To Stop Right Now via @sejournal, @martinibuster

Some SEO practices haven’t kept pace with changes in search engines and may now be self-defeating, leading to content that fails to rank. Here are six SEO practices that hinder ranking and suggestions for more effective approaches.

1. Redundant SEO Practices

The word redundant means superfluous: no longer effective or necessary. The following are three redundant SEO practices.

A. Expired Domains

For example, some SEOs think that buying expired domains is a relatively new tactic, but it’s actually well over twenty years old. Old school SEOs stopped buying them in 2003, when Google figured out how to reset the PageRank on expired domains. Everyone holding expired domains at that time felt it when those domains stopped working.

This is the announcement in 2003 about Google’s handling of expired domains:

“Hey, the index is going to be coming out real soon, so I wanted to give people some idea of what to expect for this index. Of course it’s bigger and deeper (yay!), but we’ve also put more of a focus on algorithmic improvements for spam issues. One resulting improvement with this index is better handling of expired domains–the authority for a domain will be reset when a domain expires, even though dangling links to the expired domain are still out on the web. We’ll be rolling this change in over the next few months starting with this index.”

In 2005, Google became domain name registrar #895 in order to gain access to domain name registration information and “increase the quality” of the search results. Becoming a domain name registrar gave Google real-time access to when domain names were registered, who registered them, and what web hosting address they pointed to.

It surprises relative newcomers to SEO when I say that Google has a handle on expired domains, but it’s not news to those of us who were among the very first SEOs to buy them. Buying expired domains for ranking purposes is an example of a redundant SEO practice.

B. Google And Paid Links

Another example is paid links. I know for a fact that some paid links will push a site to rank better; this has been the case for many years and still is. But those rankings are temporary. Most sites don’t get a manual action, they just stop ranking.

A likely reason is that Google’s infrastructure and algorithms can neutralize the PageRank flowing from paid links, allowing the site to rank where it’s supposed to rank without disrupting the site owner’s business by penalizing the site. That wasn’t always the case.

The recent HCU updates have been a bloodbath. But the 2012 Google Penguin algorithm update was cataclysmic on a scale several orders of magnitude larger than what many are experiencing today. It affected big brand sites, affiliate sites, and everything in between. Thousands and thousands of websites lost their rankings; nobody was spared.

The paid link business never returned to the mainstream status it formerly enjoyed, when so-called white hats endorsed paid links on the rationalization that they weren’t bad because they’re “advertising.” Wishful thinking.

Insiders at the paid link sellers informed me that a significant number of paid links didn’t work because Google was able to unravel the link networks. As early as 2005, Google was using statistical analysis to identify unnatural link patterns. In 2006, Google applied for a patent on a process that used a Reduced Link Graph to map out the link relationships of websites, including identifying link spam networks.

If you understand the risk, have at it. Most people who aren’t interested in burning a domain and building another one should avoid it. Paid links are another form of redundant SEO.

C. Robots Index, Follow

The epitome of redundant SEO is the use of “follow, index” in the meta robots tag.

This is why index, follow is redundant:

  • Indexing pages and following links are Googlebot’s default mode. Telling it to do that is redundant, like telling yourself to breathe.
  • Meta robots tags are directives. Googlebot can’t be forced to index content and follow links.
  • Google’s Robots Meta documentation only lists nofollow and noindex as valid directives.
  • “index” and “follow” are ignored because you can’t use a directive to force a search engine to follow or index a page.
  • Leaving those values there is a bad look in terms of competence.

Validation:

Google’s Special Tags documentation specifically says that those tags aren’t needed because crawling and indexing are the default behavior.

“The default values are index, follow and don’t need to be specified.”

Here’s the part that’s a head scratcher. Some WordPress SEO plugins add the “index, follow” robots meta tag by default. So if you use one of these SEO plugins, it’s not your fault if “index, follow” is on your web page. SEO plugin makers should know better.
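As a quick check, a snippet like the hypothetical one below, pasted into the browser console, will show whether your page (or a plugin) is emitting the redundant directives:

```typescript
// Flag redundant "index" / "follow" values in the robots meta tag.
// These restate Googlebot's default behavior and can safely be dropped.
const robotsMeta = document.querySelector<HTMLMetaElement>('meta[name="robots"]');

if (robotsMeta) {
  const directives = robotsMeta.content
    .toLowerCase()
    .split(',')
    .map((d) => d.trim());

  const redundant = directives.filter((d) => d === 'index' || d === 'follow');

  console.log('robots meta content:', robotsMeta.content);
  console.log(
    redundant.length > 0
      ? `Redundant directives found: ${redundant.join(', ')}`
      : 'No redundant index/follow directives found.'
  );
} else {
  console.log('No robots meta tag found (crawling and indexing are the default anyway).');
}
```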

2. Scraping Google’s Search Features

I’m not saying to avoid using Google’s search features for research. That’s fine. What this is about is using that data verbatim “because it’s what Google likes.” I’ve audited many sites that were hit by Google’s recent updates and that exact-match these scraped keywords across their entire website. While that’s not the only thing wrong with the content, I feel it generates a signal that the site was made for search engines, something Google warns about.

Scraping Google’s search features like People Also Ask and People Also Search For can be a way to get related topics to write about. But in my opinion it’s probably not a good idea to exact match those keywords across the entire website or in an entire web page.

It feels like keyword spamming and building web pages for search engines, two negative signals that Google says it uses.

3. Questionable Keyword Use

Many SEO strategies begin with keyword research and end with adding keywords to content. That’s an old school way of content planning that ignores the fact that Google is a natural language search engine.

If the content is about the keyword, then yes, put your keywords in there. Use headings to describe what each section is about and the title to say what the page is about. Because Google is a natural language search engine, it should recognize your phrasing as meaning what a reader is asking about. That’s what BERT is about: understanding what a user means.

The decades old practice of regarding headings and titles as a dumping ground for keywords is deeply ingrained. It’s something I encourage you to take some time to think about because a hard focus on keywords can become an example of SEO that gets in the way of SEO.

4. Copy Your Competitors But Do It Better?

A commonly accepted SEO tactic is to analyze competitors’ top-ranked content, then use insights about that content to create the exact same content but better. On the surface it sounds reasonable, but it doesn’t take much thinking to recognize the absurdity of a strategy predicated on copying someone else’s content but “doing it better.” And then people ask why Google discovers their content but declines to index it.

Don’t overthink it. Overthinking leads to unnecessary things like the whole author bio E-E-A-T fad the industry recently cycled through. Just use your expertise, your experience, and your knowledge to create content that you know will leave readers satisfied and make them buy more stuff.

5. Adding More Content Because Google

When a publisher acts on the belief that “this is what Google likes,” they’re almost certainly headed in the wrong direction. One example is a misinterpretation of Google’s Information Gain patent, which some take to mean that Google ranks sites that contain more content on related topics than what’s already in the search results.

That’s a poor understanding of the patent. More to the point, doing what’s described in a patent is generally naïve because ranking is a multi-system process; focusing on one thing will generally not be enough to get a site to the top.

The context of the Information Gain patent is ranking web pages in AI chatbots. The invention of the patent, what makes it new, is that it anticipates what the next natural language question will be and then has those answers ready to show in the AI search results, or shows those additional results after the original answers.

The key point about that patent is that it’s about anticipating what the next question will be in a series of questions. So if you ask an AI chatbot how to build a bird house, the next question the AI Search can anticipate is what kind of wood to use. That’s what information gain is about. Identifying what the next question may be and then ranking another page that answers that additional question.

The patent is not about ranking web pages in the regular organic search results. That’s a misinterpretation caused by cherry picking sentences out of context.

Publishing content that’s aligned with your knowledge, experience and your understanding of what users need is a best practice. That’s what expertise and experience is all about.

6. Basing Decisions On Research Of Millions Of Google Search Results

One of the longtime bad practices in SEO, going back decades, is the one where someone does a study of millions of search results and then draws conclusions about factors in isolation. Drawing conclusions about links, word counts, structured data, and third-party domain rating metrics ignores the fact that there are multiple systems at work to rank web pages, including some systems that completely re-rank the search results.

Here’s why SEO “research studies” should be ignored:

A. Isolating one factor in a “study” of millions of search results ignores the reality that pages are ranked due to many signals and systems working together.

B. Examining millions of search results overlooks the ranking influence of natural language-based analysis by systems like BERT and the influence they have on the interpretation of queries and web documents.

C. Search results studies present their conclusions as if Google still ranks ten blue links. Search features with images, videos, featured snippets, shopping results are generally ignored by these correlation studies, making them more obsolete than at any other time in SEO history.

It’s time the SEO industry considered sticking a fork in search results correlation studies and then snapping the handle off.

SEO Is Subjective

SEO is subjective. Everyone has an opinion. It’s up to you to decide what is reasonable for you.

Featured Image by Shutterstock/Roman Samborskyi

YouTube Expands AI-Generated Video Summaries, Adds New Tools via @sejournal, @MattGSouthern

YouTube announced an expansion of its AI-generated video summaries feature alongside several platform updates.

AI Summary Expansion

AI-generated summaries, previously tested on select English-language videos, will now reach a broader global audience.

According to YouTube’s official announcement:

“These video summaries use generative AI to create a short basic summary of a YouTube video which provides viewers with a quick glimpse of what to expect.”

The company emphasized that these AI summaries “do not replace or impact a Creator’s ability to write their own video descriptions” but serve as complementary content to help viewers find relevant information more efficiently.

Studio Mobile

YouTube announced a restructured content management system for creators.

The revamped Studio mobile interface organizes content by format-specific shelves, including videos, Shorts, livestreams, and playlists.

Notable changes include:

  • A new list view option for each content format
  • Simplified visibility of monetization status
  • Scheduled content filter that appears only when relevant

Community Engagement Updates

YouTube is rolling out changes to its community engagement tools.

The former “comments” tab is being rebranded as “Community” and will feature enhanced audience metrics and moderation capabilities.

Notable additions include a community spotlight feature highlighting engaged viewers and AI-powered comment reply suggestions.

YouTube notes this feature “will be limited to a small number of creators while we test the feature.”

Creator Support Chatbot

YouTube is testing an AI-powered support chatbot on Studio desktop.

The feature appears as a clickable icon next to the search field, though currently limited to eligible creators during the testing phase.

Availability

According to the announcement, these features will be rolled out gradually “over the coming weeks and months.”

YouTube requests feedback from creators and viewers as the new features become available, particularly regarding the AI-generated summaries.

Featured Image: Rokas Tenys/Shutterstock

Meta Takes Step To Replace Google Index In AI Search via @sejournal, @martinibuster

Meta is reportedly developing a search engine index for its AI chatbot to reduce reliance on Google for AI-generated summaries of current events. Meta AI appears to be evolving to the next stage of becoming a fully independent AI search engine.

Meta-ExternalAgent

Meta has been crawling the Internet since at least this past summer with a user agent called Meta-ExternalAgent. There have been multiple reports in various forums about excessive amounts of crawling, with one person on Hacker News reporting 50,000 hits from the bot. A post in the WebmasterWorld bot crawling forum notes that although the documentation for Meta-ExternalAgent says it respects robots.txt, it wouldn’t have made a difference because the bot never visited the file.

It may be that the bot wasn’t fully ready earlier this year and that its poor behavior has since settled down.

The purpose of the bot is to gather data for search summaries and, according to the reports, to reduce reliance on Google and Bing for search results.

Is This A Challenge To Google?

It may be that this is indeed the prelude to a challenge to Google (and other search engines) in AI search. The information available at this time suggests that this is about creating a search index to complement Meta AI. As reported in The Verge, Meta is crawling sites for search summaries to be used within the Meta AI chatbot:

“The search engine would reportedly provide AI-generated search summaries of current events within the Meta AI chatbot.”

The Meta AI chatbot looks like a search engine and it’s clear that it’s still using Google’s search index.

For example, a query to Meta AI about the recent game four of the World Series returned a summary with an accurate answer that included a link to Google.

Screenshot Of Meta AI With Link To Google Search

Here’s a close up showing the link to Google search results and a link to the sources:

Screenshot Of Close-Up Of Meta AI Results

Clicking on the View Sources button spawns a popup with links to Google Search.

Screenshot Of Meta AI View Sources Pop-Up

Read the original reports:

A report was posted in The Verge, based on an earlier report published by The Information.

Featured Image by Shutterstock/Skorzewiak

Google Q3 Report: AI Drives Growth Across Search, Cloud, & YouTube via @sejournal, @MattGSouthern

Alphabet Inc. reported its third-quarter earnings, with revenues reaching $88.3 billion, a 15% increase from last year.

The Google parent company’s operating margin expanded to 32% from 27.8% year-over-year, while net income rose 34% to $26.3 billion.

During the earnings call, the company highlighted the growing role of AI across its products and services.

Google Cloud revenue increased 35% to $11.4 billion, while YouTube surpassed $50 billion in combined advertising and subscription revenue over the past four quarters.

Several operational changes occurred during the quarter, including the reorganization of Google’s AI teams and the expansion of AI features across its products.

The company also reported improvements in AI infrastructure efficiency and increased deployment of AI-powered search capabilities.

Highlights

AI

CEO Sundar Pichai emphasized how AI transforms the search experience, telling investors that “new AI features are expanding what people can search for and how they search for it.”

Google’s AI infrastructure investments are yielding efficiency gains. According to Pichai, over a quarter of all new code at Google is now generated by AI and then reviewed by engineers, accelerating development cycles.

Google has reduced AI Overview query costs by 90% over 18 months while doubling the Gemini model size. These improvements extend across seven Google products, each serving over 2 billion monthly users.

Cloud

The Google Cloud division reported operating income of $1.95 billion, marking an increase from $266 million in the same quarter last year.

Company leadership attributed this growth to increased adoption of AI infrastructure and generative AI solutions among enterprise customers.

In an organizational move, Google announced it will transfer its Gemini consumer AI team to Google DeepMind, signaling a deeper integration of AI development across the company.

YouTube

YouTube achieved a notable milestone: its combined advertising and subscription revenues exceeded $50 billion over the past four quarters.

YouTube ads revenue grew to $8.9 billion in Q3, while the broader Google subscriptions, platforms, and devices segment reached $10.7 billion.

Financials

  • Net income increased 34% to $26.3 billion
  • Operating margin expanded to 32% from 27.8% last year
  • Earnings per share rose 37% to $2.12
  • Total Google Services revenue grew 13% to $76.5 billion

What This Means

Google’s Q3 results point to shifts in search that SEO professionals and businesses need to watch.

With AI Overviews now reaching over 1 billion monthly users, we’re seeing changes in search behavior.

According to CEO Sundar Pichai, users are submitting longer and more complex queries, exploring more websites, and increasing their search activity as they become familiar with AI features.

For publishers, the priorities are clear: create content that addresses complex queries and monitor how AI Overviews affect traffic patterns.

We can expect further advancements across services with Google’s heavy investment in AI. The key will be staying agile and continually testing new features as they roll out.


Featured Image: QubixStudio/Shutterstock

Google Loses €2.4B Battle Against Small Business Founders via @sejournal, @MattGSouthern

A British couple’s legal battle against Google’s search practices has concluded.

Europe’s highest court upheld a €2.4 billion fine against Google, marking a victory for small businesses in the digital marketplace.

Background

Shivaun and Adam Raff launched Foundem, a price comparison website, in June 2006.

On launch day, Google’s automated spam filters hit the site, pushing it deep into search results and cutting off its primary traffic source.

“Google essentially disappeared us from the internet,” says Shivaun Raff.

The search penalties remained in place despite Foundem later being recognized by Channel 5’s The Gadget Show as the UK’s best price comparison website.

From Complaint To Major Investigation

After two years of unanswered appeals to Google, the Raffs took their case to regulators.

Their complaint led to a European Commission investigation in 2010, which revealed similar issues affecting approximately 20 other comparison shopping services, including Kelkoo, Trivago, and Yelp.

The investigation concluded in 2017 with the Commission ruling that Google had illegally promoted its comparison shopping service while demoting competitors, resulting in the €2.4 billion fine.

Here’s a summary of what happened next.

Timeline: From Initial Fine to Final Ruling (2017-2024)

2017

  • European Commission issues €2.4 billion fine against Google
  • Google implements changes to its shopping search results
  • Google files initial appeal against the ruling

2021

  • General Court of the European Union upholds the fine
  • Google launches second appeal to the European Court of Justice

March 2024

  • European Commission launches new investigation under Digital Markets Act
  • Probe examines whether Google continues to favor its services in search results

September 2024

  • European Court of Justice rejects Google’s final appeal
  • The €2.4 billion fine is definitively upheld
  • Marks the end of the main legal battle after 15 years

The seven-year legal process highlights the challenges small businesses face in seeking remedies for anti-competitive practices, despite having clear evidence.

Google’s Response

Google maintains its 2017 compliance changes resolved the issues.

A company spokesperson stated:

“The changes we made have worked successfully for more than seven years, generating billions of clicks for more than 800 comparison shopping services.”

What’s Next?

While the September 2024 ruling validates the Raffs’ claims, it comes too late for Foundem, which closed in 2016.

In March 2024, the European Commission launched a new investigation into Google’s current practices under the Digital Markets Act.

The Raffs are now pursuing a civil damages claim against Google, scheduled for 2026.

Why This Matters

This ruling confirms that Google’s search rankings can be subject to regulatory oversight and legal challenges.

The case has already influenced new digital marketplace regulations, including the EU’s Digital Markets Act.

Although Foundem’s story concluded with the company’s closure in 2016, the legal precedent it set will endure.


Featured Image: Pictrider/Shutterstock

Core Web Vitals Documentation Updated via @sejournal, @martinibuster

The official documentation for how Core Web Vitals are scored was recently updated with new insights into how the Interaction to Next Paint (INP) scoring thresholds were chosen, offering a better understanding of the metric.

Interaction to Next Paint (INP)

Interaction to Next Paint (INP) is a relatively new metric, officially becoming a Core Web Vital in the spring of 2024. It measures how long it takes a site to respond to interactions like clicks, taps, and keyboard presses (physical or onscreen).

The official Web.dev documentation defines it:

“INP observes the latency of all interactions a user has made with the page, and reports a single value which all (or nearly all) interactions were beneath. A low INP means the page was consistently able to respond quickly to all—or the vast majority—of user interactions.”

INP measures the latency of all the interactions on the page, which is different from the now-retired First Input Delay (FID) metric, which only measured the delay of the first interaction. INP is considered a better measurement than FID because it provides a more accurate picture of what the actual user experience is.

INP Core Web Vitals Score Thresholds

The main change to the documentation is an explanation of how the performance thresholds for “good,” “needs improvement,” and “poor” scores were chosen.

One of the choices to make was whether to score devices differently, because it’s easier to achieve good INP scores on a desktop than on a mobile device; external factors like network speed and device capabilities heavily favor desktop environments.

But the user experience is not device dependent, so rather than create different thresholds for different kinds of devices, they settled on a single threshold based on mobile devices.

The new documentation explains:

“Mobile and desktop usage typically have very different characteristics as to device capabilities and network reliability. This heavily impacts the “achievability” criteria and so suggests we should consider separate thresholds for each.

However, users’ expectations of a good or poor experience is not dependent on device, even if the achievability criteria is. For this reason the Core Web Vitals recommended thresholds are not segregated by device and the same threshold is used for both. This also has the added benefit of making the thresholds simpler to understand.

Additionally, devices don’t always fit nicely into one category. Should this be based on device form factor, processing power, or network conditions? Having the same thresholds has the side benefit of avoiding that complexity.

The more constrained nature of mobile devices means that most of the thresholds are therefore set based on mobile achievability. They more likely represent mobile thresholds—rather than a true joint threshold across all device types. However, given that mobile is often the majority of traffic for most sites, this is less of a concern.”

These are scores Chrome settled on:

  • Scores of under 200 ms (milliseconds) were chosen to represent a “good” score.
  • Scores between 200 ms and 500 ms represent a “needs improvement” score.
  • Scores over 500 ms represent a “poor” score.
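For reference, here is a minimal sketch of how those thresholds map to field measurements, assuming the open-source web-vitals JavaScript library is available (installed via npm); the rating function simply restates the thresholds listed above.

```typescript
// Report the page's INP and rate it against the thresholds above.
// Assumes the open-source `web-vitals` library is installed (npm install web-vitals).
import { onINP } from 'web-vitals';

type Rating = 'good' | 'needs improvement' | 'poor';

function rateINP(ms: number): Rating {
  if (ms < 200) return 'good';               // under 200 ms
  if (ms <= 500) return 'needs improvement'; // 200 ms to 500 ms
  return 'poor';                             // over 500 ms
}

onINP((metric) => {
  // metric.value is the page's INP in milliseconds.
  console.log(`INP: ${Math.round(metric.value)} ms (${rateINP(metric.value)})`);
});
```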

Screenshot Of An Interaction To Next Paint Score

Interaction To Next Paint (INP) Core Web Vitals Score

Lower End Devices Were Considered

Chrome was focused on choosing achievable metrics. That’s why the thresholds for INP had to be realistic for lower-end mobile devices, because so many of them are used to access the Internet.

They explained:

“We also spent extra attention looking at achievability of passing INP for lower-end mobile devices, where those formed a high proportion of visits to sites. This further confirmed the suitability of a 200 ms threshold.

Taking into consideration the 100 ms threshold supported by research into the quality of experience and the achievability criteria, we conclude that 200 ms is a reasonable threshold for good experiences”

Most Popular Sites Influenced INP Thresholds

Another interesting insight in the new documentation is that real-world achievability of the scores, measured in milliseconds (ms), was another consideration for the INP thresholds. The Chrome team examined the performance of the top 10,000 websites, which make up the vast majority of website visits, in order to dial in the right threshold for poor scores.

What they discovered is that the top 10,000 websites struggled to achieve performance scores of 300 ms. The CrUX data that reports real-world user experience showed that 55% of visits to the most popular sites were at the 300 ms threshold. That meant the Chrome team had to choose a higher millisecond score that was achievable by the most popular sites.

The new documentation explains:

“When we look at the top 10,000 sites—which form the vast majority of internet browsing—we see a more complex picture emerge…

On mobile, a 300 ms “poor” threshold would classify the majority of popular sites as “poor” stretching our achievability criteria, while 500 ms fits better in the range of 10-30% of sites. It should also be noted that the 200 ms “good” threshold is also tougher for these sites, but with 23% of sites still passing this on mobile this still passes our 10% minimum pass rate criteria.

For this reason we conclude a 200 ms is a reasonable “good” threshold for most sites, and greater than 500 ms is a reasonable “poor” threshold.”

Barry Pollard, a Web Performance Developer Advocate on Google Chrome who is a co-author of the documentation, added a comment to a discussion on LinkedIn that offers more background information:

“We’ve made amazing strides on INP in the last year. Much more than we could have hoped for. But less than 200ms is going to be very tough on low-end mobile devices for some time. While high-end mobile devices are absolute power horses now, the low-end is not increasing at anywhere near that rate…”

A Deeper Understanding Of INP Scores

The new documentation offers a better understanding of how Chrome chooses achievable metrics and takes some of the mystery out of the relatively new INP Core Web Vital metric.

Read the updated documentation:

How the Core Web Vitals metrics thresholds were defined

Featured Image by Shutterstock/Vectorslab

Google Proposes New Shipping Structured Data via @sejournal, @martinibuster

Google published a proposal in the Schema.org project’s GitHub repository for an update to Schema.org that would expand the shopping structured data so that merchants can provide more shipping information, which would likely show up in Google Search and other systems.

Shipping Schema.org Structured Data

The proposed new structured data type can be used by merchants to provide more shipping details. The proposal also adds the flexibility of sitewide shipping structured data that can be nested within the Organization structured data, avoiding the need to repeat the same information thousands of times across a website.

The initial proposal states:

“This is a proposal from Google to support a richer representation of shipping details (such as delivery cost and speed) and make this kind of data explicit. If adopted by schema.org and publishers, we consider it likely that search experiences and other consuming systems could be improved by making use of such markup.

This change introduces a new type, ShippingService, that groups shipping constraints (delivery locations, time, weight and size limits and shipping rate). Redundant fields from ShippingRateSettings are therefore been deprecated in this proposal.

As a consequence, the following changes are also proposed:

some fields in OfferShippingDetails have moved to ShippingService;
ShippingRateSettings has more ways to specify the shipping rate, proportional to the order price or shipping weight;
linking from the Offer should now be done with standard Semantic Web URI linking.”
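For orientation, here is a minimal sketch of the product-level shipping markup merchants can publish today with the existing schema.org OfferShippingDetails type, written as a JSON-LD object in TypeScript. The property names come from the current schema.org vocabulary, the product values are hypothetical, and under the proposal the constraint-style fields (destination, delivery time, rate conditions) would move into the new ShippingService type, whose exact shape is still being discussed.

```typescript
// Current-style product shipping markup (pre-proposal), for comparison.
// The proposal would let constraint fields live in a shared ShippingService
// at the Organization level instead of being repeated on every product.
const productJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Example widget', // hypothetical product
  offers: {
    '@type': 'Offer',
    price: '29.99',
    priceCurrency: 'USD',
    shippingDetails: {
      '@type': 'OfferShippingDetails',
      shippingRate: { '@type': 'MonetaryAmount', value: '4.99', currency: 'USD' },
      shippingDestination: { '@type': 'DefinedRegion', addressCountry: 'US' },
      deliveryTime: {
        '@type': 'ShippingDeliveryTime',
        handlingTime: { '@type': 'QuantitativeValue', minValue: 0, maxValue: 1, unitCode: 'DAY' },
        transitTime: { '@type': 'QuantitativeValue', minValue: 2, maxValue: 5, unitCode: 'DAY' },
      },
    },
  },
};

// Emit as a <script type="application/ld+json"> tag when rendering the page.
const jsonLd = `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
```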

The proposal is open for discussion and many stakeholders are offering opinions on how the updated and new structured data would work.

For example, one person involved in the discussion asked how a sitewide structured data type placed at the Organization level could be superseded when individual products had different information, and someone else provided an answer.

A participant in the GitHub discussion named Tiggerito posted:

“I re-read the document and what you said makes sense. The Organization is a place where shared ShippingConditions can be stored. But the ShippingDetails is always at the ProductGroup or Product level.

This is how I currently deal with Shipping Details:

In the back end the owner can define a global set of shipping details. Each contains the fields Google currently support, like location and times, but not specifics about dimensions. Each entry also has conditions for what product the entry can apply to. This can include a price range and a weight range.

When I’m generating the structured data for a page I include the entries where the product matches the conditions.

This change looks like it will let me change from filtering out the conditions on the server, to including them in the Structured Data on the product page.

Then the consumers of the data can calculate which ShippingConditions are a match and therefore what rates are available when ordering a specific number of the product. Currently, you can only provide prices for shipping one.

The split also means it’s easier to provide product specific information as well as shared shipping information without the need for repetition.

Your example in the document at the end for using Organization. It looks like you are referencing ShippingConditions for a product that are on a shipping page. This cross-referencing between pages could greatly reduce the bloat this has on the product page, if supported by Google.”

The Googler responded to Tiggerito:

“@Tiggerito

The Organization is a place where shared ShippingConditions can be stored. But the ShippingDetails is always at the ProductGroup or Product level.

Indeed, and this is already the case. This change also separates the two meanings of eg. width, height, weight as description of the product (in ShippingDetails) and as constraints in the ShippingConditions where they can be expressed as a range (QuantitativeValue has min and max).

In the back end the owner can define a global set of shipping details. Each contains the fields Google currently support, like location and times, but not specifics about dimensions. Each entry also has conditions for what product the entry can apply to. This can include a price range and a weight range.

When I’m generating the structured data for a page I include the entries where the product matches the conditions.

This change looks like it will let me change from filtering out the conditions on the server, to including them in the Structured Data on the product page.

Then the consumers of the data can calculate which ShippingConditions are a match and therefore what rates are available when ordering a specific number of the product. Currently, you can only provide prices for shipping one.

Some shipping constraints are not available at the time the product is listed or even rendered on a page (eg. shipping destination, number of items, wanted delivery speed or customer tier if the user is not logged in). The ShippingDetails attached to a product should contain information about the product itself only, the rest gets moved to the new ShippingConditions in this proposal.
Note that schema.org does not specify a cardinality, so that we could specify multiple ShippingConditions links so that the appropriate one gets selected at the consumer side.

The split also means it’s easier to provide product specific information as well as shared shipping information without the need for repetition.

Your example in the document at the end for using Organization. It looks like you are referencing ShippingConditions for a product that are on a shipping page. This cross-referencing between pages could greatly reduce the bloat this has on the product page, if supported by Google.

Indeed. This is where we are trying to get at.”

Discussion On LinkedIn

LinkedIn member Irina Tuduce (LinkedIn profile), a software engineer at Google Shopping, initiated a discussion that received multiple responses demonstrating interest in the proposal.

Andrea Volpini (LinkedIn profile), CEO and Co-founder of WordLift, expressed his enthusiasm for the proposal in his response:

“Like this Irina Tuduce it would streamline the modeling of delivery speed, locations, and cost for large organizations.”

Another member, Ilana Davis (LinkedIn profile), developer of the JSON-LD for SEO Shopify App, posted:

“I already gave my feedback on the naming conventions to schema.org which they implemented. My concern for Google is how exactly merchants will get this data into the markup. It’s nearly impossible to get exact shipping rates in the SD if they fluctuate. Merchants can enter a flat rate that is approximate, but they often wonder if that’s acceptable. Are there consequences to them if the shipping rates are an approximation (e.g. a price mismatch in GMC disapproves a product)?”

Inside Look At Development Of New Structured Data

The ongoing LinkedIn discussion offers a peek at how stakeholders feel about the proposal. The official Schema.org GitHub discussion not only provides a view of how the proposal is progressing, it also offers stakeholders an opportunity to provide feedback that will shape what the structured data ultimately looks like.

There is also a public Google Doc titled, Shipping Details Schema Change Proposal, that has a full description of the proposal.

Featured Image by Shutterstock/Stokkete