Google Workspace Support: Unclear If Opting Out AI Features Avoids Price Hike via @sejournal, @MattGSouthern

Google has made its AI-powered features in Gmail, Docs, Sheets, and Meet free for all Workspace users, but questions remain around pricing adjustments and feature visibility for specific accounts.

AI Now Included Without Extra Cost

Google announced that its full suite of AI tools, previously available only through the $20-per-user-per-month Gemini for Workspace plan, is now included in its standard offerings at no additional charge.

AI capabilities like automated email summaries, meeting note-taking, spreadsheet design suggestions, and the Gemini chatbot are now accessible to all customers.

However, this announcement comes with a catch: Workspace plans will see a $2 price hike per user per month.

The new pricing structure raises the base cost of the Workspace Business Standard plan from $12 to $14 per user, effective immediately for new customers.

Starting March 17, existing customers will see the change reflected. Small business accounts are currently exempt from this adjustment.

Confusion Over Pricing & Settings

While the price increase has been widely reported, statements from Google Workspace support suggest it may not apply uniformly to all users.

According to support representatives, it’s unclear whether organizations that opt out of AI features will still face the increased costs. Official guidance on this matter has yet to be issued, leaving many customers uncertain.

Screenshot from Google support chat, January 2025.

Chats between Google Workspace reps and the Search Engine Journal development team reveal that opting out of AI features isn’t straightforward.

The settings to turn off AI features like Gemini aren’t visible by default for business accounts.

Administrators must contact Google support to enable access to these settings. For enterprise customers, the settings are accessible directly within the Workspace admin console.

Competitive Push Against Microsoft

Google’s move to bundle AI features into its standard Workspace offerings mirrors Microsoft’s recent decision to integrate its Copilot Pro AI tools into the standard Microsoft 365 subscription.

Both companies aim to attract more users to their AI-powered productivity platforms while simplifying pricing structures.

Key Takeaways

For organizations using Google Workspace, here are the critical points to note:

  1. AI Features Are Enabled by Default: Gemini and other AI tools will be active for most accounts unless explicitly disabled.
  2. Opt-Out Process Is Complicated: Business account holders must contact Google support to access and change the AI feature settings. Enterprise accounts can manage these settings directly.
  3. Pricing Uncertainty: It’s unclear whether the $2 price increase will still apply if you opt out of AI tools. Google has stated that further updates on this issue are forthcoming.

Businesses are advised to monitor their Workspace settings closely and contact Google support for clarification.

Google Causes Global SEO Tool Outages via @sejournal, @martinibuster

Google cracked down on web scrapers that harvest search results data, triggering global outages at many popular rank tracking tools, like SEMRush, that depend on fresh data from search results pages.

What happens if Google’s SERPs are completely blocked? A certain amount of the data provided by tracking services has long been extrapolated by algorithms from a variety of data sources. It’s possible that one way around the current block is to extrapolate the data from other sources.

SERP Scraping Prohibited By Google

Google’s guidelines have long prohibited automated rank checking in the search results, but Google has apparently also allowed many companies to scrape its search results and charge for access to the ranking data for keyword and rank tracking purposes.

According to Google’s guidelines:

“Machine-generated traffic (also called automated traffic) refers to the practice of sending automated queries to Google. This includes scraping results for rank-checking purposes or other types of automated access to Google Search conducted without express permission. Machine-generated traffic consumes resources and interferes with our ability to best serve users. Such activities violate our spam policies and the Google Terms of Service.”

Blocking Scrapers Is Complex

It’s highly resource intensive to block scrapers, especially because they can respond to blocks by changing their IP addresses and user agents. Another approach is to target specific behaviors, such as how many pages a user requests; an excessive number of page requests can trigger a block. The problem with that approach is that keeping track of all the blocked IP addresses, which can quickly number in the millions, becomes resource intensive in itself.
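To make the behavior-based approach concrete, here is a minimal Python sketch of per-IP rate limiting. The thresholds are illustrative assumptions, not anything Google has disclosed, and a production system would also need to handle the IP and user-agent rotation described above:

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Illustrative thresholds; real systems tune these per traffic profile.
WINDOW_SECONDS = 60      # length of the sliding window
MAX_REQUESTS = 120       # page requests allowed per window
BLOCK_SECONDS = 3600     # how long an offender stays blocked

request_log = defaultdict(deque)  # client IP -> timestamps of recent requests
blocked_until = {}                # client IP -> time the block expires

def is_allowed(ip: str, now: Optional[float] = None) -> bool:
    """Return False if this client should be blocked."""
    now = now if now is not None else time.time()

    # Still inside an active block?
    if blocked_until.get(ip, 0.0) > now:
        return False

    # Discard timestamps older than the window, then record this request.
    hits = request_log[ip]
    while hits and hits[0] < now - WINDOW_SECONDS:
        hits.popleft()
    hits.append(now)

    # Excessive requests in the window trigger a block. Note that the
    # blocked_until table grows with every offender, which is the memory
    # cost described above when blocked IPs number in the millions.
    if len(hits) > MAX_REQUESTS:
        blocked_until[ip] = now + BLOCK_SECONDS
        return False
    return True
```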

Reports On Social Media

A post in the private SEO Signals Lab Facebook Group announced that Google was striking hard against web scrapers, with one member commenting that the Scrape Owl tool wasn’t working for them and others noting that SEMRush’s data had not updated.

Another post, this time on LinkedIn, listed multiple tools that weren’t refreshing their data, but noted that the blocking hadn’t affected all providers: Sistrix and MonitorRank were still working. Someone from a company called HaloScan reported that they had made adjustments and resumed scraping data from Google, and someone else reported that another tool, MyRankingMetrics, was still reporting data.

So whatever Google is doing, it isn’t currently affecting all scrapers. It may be that Google is targeting certain scraping behaviors, learning from the responses, and improving its blocking ability. The coming weeks may reveal whether Google is broadly improving its ability to block scrapers or only targeting the biggest ones.

Another post on LinkedIn speculated that the blocking may raise costs for data providers and, in turn, the fees charged to end users of SaaS SEO tools. The author posted:

“This move from Google is making data extraction more challenging and costly. As a result, users may face higher subscription fees.”

Ryan Jones tweeted:

“Google seems to have made an update last night that blocks most scrapers and many APIs.

Google, just give us a paid API for search results. we’ll pay you instead.”

No Announcement By Google

So far Google has made no announcement, but the online chatter may force someone at Google to consider making a statement.

Featured Image by Shutterstock/Krakenimages.com

Google Study: 29% In The U.S. & Canada Used AI Last Year via @sejournal, @MattGSouthern

A new Google-Ipsos report shows AI adoption is increasing globally, especially in emerging markets.

However, the study reveals challenges like regional divides, gender disparities, and slower adoption in developed countries.

Critics, including Nate Hake, founder of Travel Lemming, point out that Google overlooks these challenges in its coverage of the report.

While optimism around AI is rising, it’s not resonating with everyone.

Here’s a closer look at the report and what the numbers indicate.

AI Is Growing, But Unevenly

Globally, 48% of people used generative AI last year, with countries like Nigeria, Mexico, and South Africa leading adoption. These regions also show the most excitement about AI’s potential to boost economies and improve lives.

Adoption lags at 29% in developed nations like the U.S. and Canada, meaning that 71% of people in these regions haven’t knowingly engaged with generative AI tools.

Screenshot: Google-Ipsos Study ‘Our life with AI: From innovation to application,’ January 2025.

Optimism Outweighs Concerns

Globally, 57% of people are excited about AI, compared to 43% who are concerned—a shift from the year prior, when excitement and concerns were evenly split.

People cite AI’s potential in science (72%) and medicine (71%) as reasons for their optimism. Respondents see opportunities for breakthroughs in healthcare and research.

However, in the U.S., skepticism lingers—only 52% believe AI will directly benefit “people like them,” compared to the global average of 59%.

Gender Gaps Persist

The report highlights a gender gap in AI usage: 55% of global AI users are men compared to 45% women.

The disparity widens in workplace adoption, where only 41% of professional AI users are women.

Emerging Markets Are Leading the Way

Emerging markets are using AI more and are more optimistic about its potential.

In regions like Nigeria and South Africa, people are more likely to believe AI will transform their economies.

Meanwhile, developed countries like the U.S. and U.K. remain cautious.

Only 53% of Americans prioritize AI innovation, compared to much higher enthusiasm in emerging markets.

Non-Generative AI

While generative AI tools like chatbots and content generators grab headlines, the public is more appreciative of non-generative AI applications.

These include AI for healthcare, fraud detection, flood forecasting, and other practical, high-impact use cases.

Generative AI, on the other hand, gets mixed reviews.

Writing, summarizing, or customer service applications don’t resonate as strongly with the public as AI’s potential to tackle bigger societal issues.

AI at Work: Young, Affluent, and Male-Dominated

AI is making its way into the workplace: 74% of AI users use it professionally for writing, brainstorming, and problem-solving tasks.

However, workplace AI adoption is skewed toward younger, wealthier, and male workers.

Blue-collar workers and older professionals are catching up (67% of blue-collar AI users and 68% of AI users aged 50-74 use it at work), but the gender gap remains pronounced.

Trust in AI Is Growing

Trust in AI governance is improving, with 61% of people confident their governments can regulate AI responsibly (up from 57% in 2023).

72% support collaboration between governments and companies to manage AI’s risks and maximize its benefits.

Takeaway

AI use is growing worldwide, though many people in North America still see little reason to use it.

To increase AI’s adoption, companies must build trust and clearly communicate the technology’s benefits.

For more details, check out the full report at Google Public Policy.


Featured Image: Stokkete/Shutterstock

Evidence That Google Detects AI-Generated Content via @sejournal, @martinibuster

A sharp-eyed Australian SEO spotted indirect confirmation of Google’s use of AI detection as part of search rankings, hiding in plain sight for years. Although Google is fairly transparent about content policies, the information on a Googler’s LinkedIn profile adds a little more detail.

Gagan Ghotra tweeted:

“Important FYI Googler Chris Nelson from Search Quality team his LinkedIn says He manages global team that build ranking solutions as part of Google Search ‘detection and treatment of AI generated content’.”

Googler And AI Content Policy

The Googler, Chris Nelson, works in Google’s Search Ranking department and is listed as a co-author of Google’s guidance on AI-generated content, which makes it worth knowing a little bit about him.

The relevant work experience at Google is listed as:

“I manage a large, global team that builds ranking solutions as part of Google Search and direct the following areas:

-Prevent manipulation of ranking signals (e.g., anti-abuse, spam, harm)
-Provide qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)
-Address novel content issues (e.g., detection and treatment of AI-generated content)
-Reward satisfying, helpful content”

There are no search-ranking-related research papers or patents listed under his name, but that’s probably because his educational background is in business administration and economics.

What may be of special interest to publishers and digital marketers are the following two sections:

1. He lists addressing “detection and treatment of AI-generated content”

2. He provides “qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)”

While user interactions and insights might seem unrelated to the detection and treatment of AI-generated content, that work is in the service of understanding search quality issues, which is related.

His role is defined as the evaluation and analysis of quality issues in Google’s Search Ranking department. “Quantitative understanding” refers to analyzing data, while “qualitative understanding” is a more subjective part of his job, likely about insights: understanding the “why” and “how” of the observed data.

Co-Author Of Google’s AI-Generated Content Policy

Chris Nelson is listed as a co-author of Google’s guidance on AI-generated content. The guidance doesn’t prohibit the use of AI for published content; it only advises that AI shouldn’t be used to create content that violates Google’s spam guidelines. That may sound contradictory, because AI is virtually synonymous with scaled automated content, which Google has historically considered spam.

The answers are in the nuance of Google’s policy, which encourages publishers to prioritize user-first content over a search-engine-first approach. In my opinion, a strong focus on writing about the most popular search queries in a topic, instead of writing about the topic itself, can lead to search-engine-first content; that was a common approach among sites I’ve audited that contained relatively high-quality content but lost rankings in the 2024 Google updates.

Google’s advice (and presumably Chris Nelson’s) for those considering AI-generated content is:

“…however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.”

Why Doesn’t Google Ban AI-Generated Content Outright?

Google’s documentation, which Chris Nelson co-authored, states that automation has always been a part of publishing, citing examples like dynamically inserted sports scores, weather forecasts, scaled meta descriptions, and date-dependent content and entertainment-related products.

The documentation states:

“…For example, about 10 years ago, there were understandable concerns about a rise in mass-produced yet human-generated content. No one would have thought it reasonable for us to declare a ban on all human-generated content in response. Instead, it made more sense to improve our systems to reward quality content, as we did.

…Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. …Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways.”

Why Does Google Detect AI-Generated Content?

The documentation that Nelson co-authored states that Google doesn’t differentiate based on how low-quality content is generated, which seemingly contradicts his LinkedIn profile’s statement that the “detection and treatment of AI-generated content” is part of his job.

The AI-generated content guidance states:

“Poor quality content isn’t a new challenge for Google Search to deal with. We’ve been tackling poor quality content created both by humans and automation for years. We have existing systems to determine the helpfulness of content. …Our systems continue to be regularly improved.”

How do we reconcile that part of his job is detecting AI-generated content and Google’s policy states that it doesn’t matter how low quality content is generated?

Context is everything; that’s the answer. Here’s the context from his work profile:

“Address novel content issues (e.g., detection and treatment of AI-generated content)”

The phrase “novel content issues” means content quality issues that Google hasn’t previously encountered: new types of AI-generated content, presumably spam, and how to detect and “treat” them. Given that the context is “detection and treatment,” the implied subject could very well be low-quality content; it just wasn’t expressly stated, probably because he didn’t think his LinkedIn profile would be parsed by SEOs for a better understanding of how Google detects and treats AI-generated content (meta!).

Guidance Authored By Chris Nelson Of Google

A list of articles published by Chris Nelson shows that he may have played a role in many of the most important updates of the past five years, from the helpful content update and the site reputation abuse policy to detecting search-engine-first AI-generated content.

List of Articles Authored By Chris Nelson (LinkedIn Profile)

Updating our site reputation abuse policy

What web creators should know about our March 2024 core update and new spam policies

Google Search’s guidance about AI-generated content

What creators should know about Google’s August 2022 helpful content update

Featured Image by Shutterstock/3rdtimeluckystudio

Google Rejects EU’s Call For Fact-Checking In Search & YouTube via @sejournal, @MattGSouthern

Google has reportedly told the EU it won’t add fact-checking to search results or YouTube videos, nor will it use fact-checks to influence rankings or remove content.

This decision defies new EU rules aimed at tackling disinformation.

Google Says No to EU’s Disinformation Code

In a letter to Renate Nikolay of the European Commission, Google’s global affairs president, Kent Walker, said fact-checking “isn’t appropriate or effective” for Google’s services.

The EU’s updated Disinformation Code, part of the Digital Services Act (DSA), would require platforms to include fact-checks alongside search results and YouTube videos and to bake them into their ranking systems.

Walker argued Google’s current moderation tools—like SynthID watermarking and AI disclosures on YouTube—are already effective.

He pointed to last year’s elections as proof Google can manage misinformation without fact-checking.

Google also confirmed it plans to fully exit all fact-checking commitments in the EU’s voluntary Disinformation Code before it becomes mandatory under the DSA.

Context: Major Elections Ahead

This refusal from Google comes ahead of several key European elections, including:

  • Germany’s Federal Election (Feb. 23)
  • Romania’s Presidential Election (May 4)
  • Poland’s Presidential Election (May 18)
  • Czech Republic’s Parliamentary Elections (Sept.)
  • Norway’s Parliamentary Elections (Sept. 8)

These elections will likely test how well tech platforms handle misinformation without stricter rules.

Tech Giants Backing Away from Fact-Checking

Google’s decision follows a larger trend in the industry.

Last week, Meta announced it would end its fact-checking program on Facebook, Instagram, and Threads and shift to a crowdsourced model like X’s (formerly Twitter) Community Notes.

Elon Musk has drastically reduced moderation efforts on X since buying the platform in 2022.

What It Means

As platforms like Google and Meta move away from active fact-checking, concerns are growing about how misinformation will spread—especially during elections.

While tech companies say transparency tools and user-driven features are enough, critics argue they’re not doing enough to combat disinformation.

Google’s pushback signals a growing divide between regulators and platforms over how to manage harmful content.


Featured Image: Wasan Tita/Shutterstock

.AI Domain Migrated To A More Secure Platform via @sejournal, @martinibuster

The Dot AI domain has migrated to a new domain name registry, giving all registrants of .AI domains stronger security and more stability, with greater protection against outages.

Dot AI Domain

.AI is a country-code top-level domain (ccTLD), which is distinct from a generic top-level domain (gTLD). A ccTLD is a two-letter domain reserved for a specific country or territory; .US, for example, is reserved for the United States of America. .AI is reserved for Anguilla, a British Overseas Territory in the Caribbean.

.AI Is Now Handled By Identity Digital

The .AI domain was previously handled by a small local business named DataHaven.net but has now fully migrated to the Identity Digital platform, making .AI domains available through over 90% of all registrars worldwide with a 100% availability guarantee. The migration also means .AI domain updates propagate in milliseconds, with greater resistance to denial-of-service attacks.

According to the announcement:

“Beginning today, .AI is exclusively being served on the Identity Digital platform, and we couldn’t be more thrilled for what this means for Anguilla.

The quick migration brings important enhancements to the .AI TLD like 24/7 global support, and a growing list of features that will benefit registrars, businesses and entrepreneurs today and in the years to come.”

Read the full announcement:

.ai Completes a Historic Migration to the Identity Digital Platform

Featured Image by Shutterstock/garagestock

Google Shopping Rankings: Key Factors For Retailers via @sejournal, @MattGSouthern

A new study analyzing 5,000 Google Shopping keywords sheds light on the factors that correlate with higher rankings.

The research, conducted by Jeff Oxford, Founder of 180 Marketing, reveals trends that could help ecommerce stores improve their visibility in Google’s free Shopping listings.

Amazon Dominates Google Shopping

Amazon ranks in the #1 position for 52% of Google Shopping searches, outpacing Walmart (6%) and Home Depot (3%).

Beyond Amazon’s dominance, the study found a strong correlation between website authority and rankings, with higher-ranking sites often belonging to well-established brands.

Takeaway: Building your brand and earning trust is vital to ranking well on Google Shopping.

Backlinks, Reviews, & Pricing

The study identified several trends that separate higher-ranking pages from the rest:

  • Referring Domains: Product pages in the top two positions had more backlinks than lower-ranking pages. Interestingly, most product pages analyzed (98%) had no backlinks at all.
  • Customer Reviews: Product pages with customer reviews ranked higher, and stores with star ratings below 3.5 struggled to rank well.
  • Pricing: Lower-priced products tended to rank higher, with top-performing listings often featuring prices below the category average.

Takeaway: Building backlinks, collecting customer reviews, and offering competitive pricing can make a difference.

Meta Descriptions A Top Signal

Among on-page factors, meta descriptions had the strongest correlation with rankings.

Pages that included exact-match keywords in their meta descriptions consistently ranked higher.

While keyword usage in title tags and H1 headers showed some correlation, the impact was much smaller.

Takeaway: Optimize meta descriptions and product copy with target keywords to improve rankings.
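As a practical illustration, here is a small Python sketch that checks whether a page’s meta description contains an exact-match keyword, the on-page factor the study found most correlated with rankings. The sample HTML and keyword are hypothetical:

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Pull the content of <meta name="description"> out of page HTML."""

    def __init__(self) -> None:
        super().__init__()
        self.description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "") or ""

def has_exact_match(page_html: str, keyword: str) -> bool:
    """True if the meta description contains the keyword verbatim."""
    parser = MetaDescriptionParser()
    parser.feed(page_html)
    return keyword.lower() in parser.description.lower()

# Hypothetical example usage:
html = '<meta name="description" content="Buy ergonomic office chairs with free shipping.">'
print(has_exact_match(html, "ergonomic office chairs"))  # True
```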

Structured Data Findings

Structured data showed mixed results in the study.

Product structured data had little to no correlation with rankings, and Amazon, despite dominating the top spots, doesn’t use structured data on its product pages.

However, pages using review structured data performed better.

Takeaway: Focus on collecting customer reviews and using review structured data, which appears more impactful than product structured data.
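For reference, review structured data is typically embedded as schema.org JSON-LD. Here is a hypothetical Python sketch that builds an AggregateRating block for a product page; the product name and rating figures are made up for illustration:

```python
import json

# Hypothetical product data; in practice this comes from your review system.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.6,  # average star rating across collected reviews
        "reviewCount": 128,  # total number of reviews
    },
}

# Embed the markup in the page's HTML as a JSON-LD script tag.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(product_jsonld, indent=2)
    + "\n</script>"
)
print(script_tag)
```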

Shipping & Returns Scores

Google Shopping evaluates stores on shipping, returns, and website quality metrics.

The study found that stores with “Exceptional” or “Great” scores for shipping and returns were more likely to rank higher, especially in the top 10 positions.

Takeaway: Prioritize fast shipping and clear return policies to boost your Google Shopping scores.

What Does This Mean?

According to these findings, success in Google Shopping correlates with strong customer reviews, competitive pricing, and fast service.

Optimizing for traditional SEO—like backlinks and well-written metadata—can benefit both organic search and Shopping rankings.

Retailers should prioritize the customer experience, as Google’s scoring for shipping, returns, and website quality affects visibility.

Lastly, remember that correlation doesn’t equal causation—test changes thoughtfully and focus on delivering value to your customers.

Mullenweg’s Grip On WordPress Challenged In New Court Filing via @sejournal, @martinibuster

A Motion to Intervene has been filed in the WP Engine lawsuit against Automattic and Matt Mullenweg, alleging fifteen claims and seeking monetary awards along with changes to WordPress.org’s governance structure.

A motion to intervene is a legal request by a third party seeking to join an ongoing lawsuit; its success hinges on proving that the party has a significant interest in the outcome of the case.

Legal Filing Seeks To Take Control Of WordPress

Among the requests made in the legal filing is one that would compel Matt Mullenweg to create a WordPress Oversight Board to oversee the governance of the WordPress Foundation, WordPress.org, and other related entities.

“D. Order Defendant Matt Mullenweg to establish a Governance Oversight Board as defined in the Proposed Order For Contempt filed by Michael Willman;”

Moderator Of WPDrama Subreddit

The person filing the court motion is Michael Willman, a WordPress web developer, moderator of the r/WPDrama subreddit, and CEO of Redev, a WordPress development and SEO company. He alleges that Mullenweg banned him, causing him to lose two clients and a significant amount of earnings.

Michael explained what happened in a message to Search Engine Journal:

“Near the start of this dispute, I lost a large ($14,500) contract as a direct result of being banned by Matt along with everyone else loosely associated. We had just closed the contract mere days before and the client is just seeing all these stories, and they back out. Losing that revenue would eventually make us unable to serve our largest client at the time, and we lost them too.

I took this all personally, and I tried to take to his #ranting channel on Slack to respond to his inane blog posts and share how his actions had damaged me and got me to the point of being ready to sue him as well.

He then banned me in retaliation for that and afterwards claimed a message saying I was going to go to Houston to file other legal documents was a “physical threat.”

He has a long history of inconsistent application of the Code of Conduct and I don’t think he can show that his actions here were justified, my own reading of the Code of Conduct implies that some type of warning in private is the first step.”

That last part about the allegedly false claim that he made a physical threat against Matt Mullenweg is now a part of the new motion.

Post On Reddit

Mr. Willman posted about his motion on Reddit, saying that he will donate 5% of any monetary awards to WordPress.

Members of the Reddit WordPress community were supportive, with one member named JonOlds posting:

“A client backing out of a signed contract ($14,500) because you being banned created a significant change is the most clear-cut example of harm from the WPE bans that I’ve seen so far. Fuck MM, and I really hope this is granted.”

Another person wrote:

“Dude you’re my hero ❤

And I’m sorry for all this stuff that’s happened to you, it’s awful. I genuinely admire how well you’ve handled all this, while moderating this sub too.”

Claims For Relief

Section D of the filing lists fifteen claims, among them that Mullenweg’s retaliatory actions disrupted existing client contracts and the ability to cultivate new clients. It also alleges attempted extortion, libel, and trade libel, among other claims.

Five of the claims made in the motion:

“1. Intentional Interference with Contractual Relations
Defendant Matt Mullenweg’s actions, including banning Michael Willman from the Make.WordPress.org Slack workspace and retaliating against him, disrupted existing contractual relationships. Some specific examples are the $14,500 website development contract that was canceled due to Michael Willman being banned from WordPress.org, the remainder of another contract with Trellis that was lost valued at $5,526.35, and an ongoing relationship with Trellis that included active retainers valued at $4,700 per month in addition to regular ad-hoc work, the combination of which generated $77,638.65 in invoices in 2024.

2. Intentional Interference with Prospective Economic Relations
By targeting and banning Michael Willman from essential WordPress platforms, Defendants interfered with potential business opportunities. The absence of new website development projects, loss of existing relationships and the unease expressed by clients about the WordPress ecosystem are direct results of these retaliatory actions.

4. Attempted Extortion
During discussions, Matt Mullenweg offered to refer clients to Michael Willman’s business on the condition that he cease working with WP Engine and join Automattic’s affiliate program. This constitutes coercive conduct aimed at disrupting Michael Willman’s business relationships.

6. Libel
Matt Mullenweg publicly claimed that Michael Willman made threats of physical violence, a statement that is objectively false and defamatory. This damaged Michael Willman’s reputation within the WordPress community and beyond.

7. Trade Libel
Public statements by Matt Mullenweg disparaged Michael Willman’s professional services and integrity, causing harm to his business relationships and reputation.”

Possible Outcome Of New Court Motion

The motion to intervene contains serious allegations of abuse of authority by the single most influential person in the open-source WordPress project, a worldwide ecosystem of business users, publishers, plugin and theme developers, and thousands of volunteers who contribute to the development of the WordPress content management software.

The filing not only seeks restitution; it also asks the court to change WordPress governance and remove Matt Mullenweg from his position of power at WordPress.

Read The Reddit Post And Legal Document

A link to the legal document is posted on a Reddit discussion about the filing:

Motion to Intervene & Motion for Contempt Filed in WPEngine, Inc. v. Automattic Inc.

Featured Image by Shutterstock/Rose Tamani

Google Retires Web Vitals Extension, Moves Everything to DevTools via @sejournal, @MattGSouthern

Google has officially shut down its Web Vitals Chrome extension with the release of Chrome 132.

All its key features are now fully integrated into DevTools’ Performance panel, making it the go-to tool for measuring Core Web Vitals.

While the extension had nearly 200,000 users, the Chrome team has decided to focus solely on DevTools, which offers a more powerful and centralized platform for debugging site performance.

Why the Extension Was Retired

The Web Vitals extension was great for early Core Web Vitals monitoring, but DevTools now offers the same functionality—and more.

By moving everything into DevTools, Google provides developers with a more seamless performance optimization workflow.

What’s New in DevTools?

The Performance panel in DevTools now replicates and expands on the extension’s capabilities:

  • Live Metrics: Real-time Core Web Vitals data for your local tests.
  • Field Data: Compare local metrics to CrUX data for URLs and origins, including desktop and mobile views.
  • Largest Contentful Paint (LCP) Details: Find the specific element behind your LCP score and see phase breakdowns such as Time to First Byte (TTFB) and render delay.
  • Interaction To Next Paint (INP) Interaction Log: Track interactions contributing to INP with detailed timing for input delay, processing, and presentation.
  • Cumulative Layout Shift (CLS) Log: See grouped layout shifts contributing to your CLS score.
  • Diagnostic Metrics: Includes TTFB and First Contentful Paint (FCP).

DevTools provides everything the extension did, plus advanced debugging tools, all in one place.

What Developers Should Do Next

If you’re still using the Web Vitals extension, it’s time to switch to DevTools.

Google has even created a migration guide to make the transition easier.

For those who can’t migrate, Google has shared instructions for maintaining a local copy of the extension.

However, the CrUX API key tied to the extension will soon be revoked, so field data integration may break unless you generate a new key through the CrUX API docs.
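If you do keep a local copy, here is a minimal Python sketch of querying the CrUX API directly with your own key; the key and origin below are placeholders:

```python
import json
import urllib.request

API_KEY = "YOUR_CRUX_API_KEY"  # placeholder; create one per the CrUX API docs
ENDPOINT = (
    "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=" + API_KEY
)

# Query field data for an origin; use "url" instead of "origin" for one page.
payload = {"origin": "https://example.com", "formFactor": "PHONE"}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    record = json.load(response)["record"]

# Print the 75th-percentile value of each metric in the record, the
# threshold Core Web Vitals assessments are based on.
for metric, data in record["metrics"].items():
    print(metric, data.get("percentiles", {}).get("p75"))
```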

Looking Ahead

This move signals Google’s commitment to making DevTools the best performance monitoring tool for developers. The Performance panel covers everything from Core Web Vitals to advanced diagnostics, and more updates are coming.

The Web Vitals extension was a helpful tool, but its best features now live in DevTools, making it easier for developers to monitor and optimize site performance from one place.

For more details, check out the official announcement or the GitHub repository.


Featured Image: William Potter/Shutterstock