Google’s Documentation Update Contains Hidden SEO Insights via @sejournal, @martinibuster

Google quietly updated their Estimated Salary (Occupation) Structured Data page with subtle edits that make the information more relevant and easily understood. The changes show how a page can be analyzed for weaknesses and subsequently improved.

Subtle Word Shifts Make A Difference

The art of writing is something SEOs should consider now more than ever. It’s been important for at least the past six years, but in my opinion it’s never been more important than it is today because of the precision of natural language queries for AI Overviews and AI assistants.

Three Takeaways About Content

  1. The words used on a page can exert a subtle influence on how a reader and a machine understand the page.
  2. Relevance is commonly understood as whether a web page is a match for a user’s search query and the user’s intent, which is an outdated way to think about it, in my opinion.
  3. A query is just a question and the answer is never a web page. The answer is generally a passage in a web page.

Google’s update to their “Estimated Salary (Occupation) Structured Data” web page offers a view of how Google updated one of their own web pages to be more precise.

The changes were so seemingly minimal that they didn’t merit a mention in the documentation changelog; Google simply updated the page and pushed it live without notice.

But the changes do make a difference in how precise the page is on the topic.

First Change: Focus Of Content

Google refers to “enriched search results” as different search experiences, like the recipe search experience, the event search experience, and the job experience.

The original version of the “Estimated Salary (Occupation) Structured Data” documentation focused on the Job Experience search results. The updated version removes all references to the Job Experience and focuses on the “estimated salary rich result,” a more precise phrase.

This is the original version:

“Estimated salaries can appear in the job experience on Google Search and as a salary estimate rich result for a given occupation.”

This is the updated version:

“Adding Occupation structured data makes your content eligible to appear in the estimated salary rich result in Google Search results:”

Second Change: Refreshed Image And Simplified

The second change refreshes an example image.

The change has three notable qualities:

  1. Precisely models a search result
  2. Aligns with removal of “job experience”
  3. Simplifies message

The original image contained a screenshot of a laptop with a search result and a closeup of the search result overlaid. The image looks more at home on a product page than on an informational page. Someone spent a lot of time creating an attractive image, but it’s too complex and neglects the number one rule of content: all content must communicate its message quickly.

All content, whether text or image, is like a glass of water: the important part is the water, not the glass.

Screenshot Of Attractive But Slightly Less Effective Image

The image that replaced it is literally an example of the actual rich result. It’s not fancy but it doesn’t have to be. It just has to do the job of communicating.

Screenshot Of Google’s More Effective Image

The other thing this change accomplishes is that it removes the phrase “job experience” and replaces it with a sentence that aligns with the apparent goal of making this page about the Occupation structured data.

This is the new text:

“Adding Occupation structured data makes your content eligible to appear in the estimated salary rich result in Google Search results:”

Third Change: Replace Confusing Sentence

The third change corrected a sentence that was grammatically incorrect and confusing.

Original version:

“You must include the required properties for your content to be eligible for display the job experience on Google and rich results.”

Google corrected the grammar error, made the sentence specific to the ‘estimated salary’ rich result, and removed the reference to Job Experience, aligning it more strongly with estimated salary rich results.

This is the updated version:

“You must include the required properties for your content to be eligible for display in the estimated salary rich result.”
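To make the required-properties point concrete, here is a minimal sketch of what the Occupation structured data behind the estimated salary rich result might look like, built in Python for clarity. The property names follow schema.org’s Occupation type, but the values are hypothetical placeholders, and Google’s documentation remains the authoritative source for which properties are required.

```python
import json

# A minimal, hypothetical Occupation structured data object for the
# estimated salary rich result. Property names follow schema.org's
# Occupation type; the values are placeholders.
occupation = {
    "@context": "https://schema.org/",
    "@type": "Occupation",
    "name": "Software Developer",
    "estimatedSalary": [{
        "@type": "MonetaryAmountDistribution",
        "name": "base",
        "currency": "USD",
        "duration": "P1Y",  # annual figures, as an ISO 8601 duration
        "median": 110000,
        "percentile10": 80000,
        "percentile90": 150000,
    }],
    "occupationLocation": [{
        "@type": "City",
        "name": "Mountain View",
    }],
}

# Serialized for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(occupation, indent=2)
print(json_ld)
```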

Three Examples For Updating Web Pages

On one level the changes were literally about removing the focus on one topic and reinforcing a slightly different one. On another level it’s an example of giving users a better experience by communicating more precisely. Writing for humans is not just a creative art, it’s also a technical one. All writers, even novelists, understand that the craft of writing is technical because one of the most important factors is communicating ideas. Other issues like being comprehensive or fancy don’t matter as much as the communication part.

I think that the revisions Google made fit into what Google means when it says to make content for humans, not search engines.

Read the updated documentation here:

Estimated salary (Occupation) structured data

Compare it to the archived original version.

Featured Image by Shutterstock/Lets Design Studio

AI Search Optimization: Make Your Structured Data Accessible via @sejournal, @MattGSouthern

A recent investigation has uncovered a problem for websites relying on JavaScript for structured data.

This data, often in JSON-LD format, is difficult for AI crawlers to access if not in the initial HTML response.

Crawlers like GPTBot (used by ChatGPT), ClaudeBot, and PerplexityBot can’t execute JavaScript and miss any structured data added later.

This creates challenges for websites using tools like Google Tag Manager (GTM) to insert JSON-LD on the client side, as many AI crawlers can’t read dynamically generated content.

Key Findings About JSON-LD & AI Crawlers

Elie Berreby, the founder of SEM King, examined what happens when JSON-LD is added using Google Tag Manager (GTM) without server-side rendering (SSR).

He explained why this type of structured data is often not seen by AI crawlers:

  1. Initial HTML Load: When a crawler requests a webpage, the server returns the first HTML version. If structured data is added with JavaScript, it won’t be in this initial response.
  2. Client-Side JavaScript Execution: JavaScript runs in the browser and changes the Document Object Model (DOM) for users. At this stage, GTM can add JSON-LD to the DOM.
  3. Crawlers Without JavaScript Rendering: AI crawlers that can’t run JavaScript cannot see changes in the DOM. This means they miss any JSON-LD added after the page loads.

In summary, structured data added only through client-side JavaScript is invisible to most AI crawlers.
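The three steps above can be illustrated with a minimal sketch: a crawler that can’t execute JavaScript only ever sees the server’s raw HTML, so a simple check of that HTML shows whether JSON-LD is visible to it. Both HTML samples here are hypothetical, including the GTM container ID.

```python
import re

# Hypothetical raw HTML as returned by the server: this is all a
# non-JS crawler sees. JSON-LD is injected later by GTM, so it's absent.
raw_html = """
<html><head><title>Jobs</title>
<script src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXX"></script>
</head><body><h1>Software Developer Salaries</h1></body></html>
"""

# The same page after client-side JavaScript has run: what a browser,
# or a JS-rendering crawler like Googlebot, sees.
rendered_html = raw_html.replace(
    "</head>",
    '<script type="application/ld+json">{"@type": "Occupation"}</script></head>',
)

def has_json_ld(html: str) -> bool:
    """Return True if the HTML contains an inline JSON-LD block."""
    return re.search(r'<script[^>]+application/ld\+json', html) is not None

print(has_json_ld(raw_html))       # False: no structured data in the initial HTML
print(has_json_ld(rendered_html))  # True: present only after JS execution
```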

Why Traditional Search Engines Are Different

Traditional search crawlers like Googlebot can read JavaScript and process changes made to a webpage after it loads, including JSON-LD data injected by Google Tag Manager (GTM).

In contrast, many AI crawlers can’t read JavaScript and only see the raw HTML from the server. As a result, they miss dynamically added content, like JSON-LD.

Google’s Warning on Overusing JavaScript

This challenge ties into a broader warning from Google about the overuse of JavaScript.

In a recent podcast, Google’s Search Relations team discussed the growing reliance on JavaScript. While it enables dynamic features, it’s not always ideal for essential SEO elements like structured data.

Martin Splitt, Google’s Search Developer Advocate, explained that websites range from simple pages to complex applications. It’s important to balance JavaScript use with making key content available in the initial HTML.

John Mueller, another Google Search Advocate, agreed, noting that developers often turn to JavaScript when simpler options, like static HTML, would be more effective.

What To Do Instead

Developers and SEO professionals should ensure structured data is accessible to all crawlers to avoid issues with AI search crawlers.

Here are some key strategies:

  1. Server-Side Rendering (SSR): Render pages on the server to include structured data in the initial HTML response.
  2. Static HTML: Use schema markup directly in the HTML to limit reliance on JavaScript.
  3. Prerendering: Offer prerendered pages where JavaScript has already been executed, providing crawlers with fully rendered HTML.

These approaches align with Google’s advice to prioritize HTML-first development and include important content like structured data in the initial server response.
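As a sketch of the static HTML approach, the JSON-LD can be rendered directly into the server’s response so it is present before any JavaScript runs. The template function and values below are illustrative, not a specific framework’s API.

```python
import json

def render_page(title: str, structured_data: dict) -> str:
    """Render an HTML page with JSON-LD inlined in the initial response,
    so crawlers that can't execute JavaScript still see it."""
    json_ld = json.dumps(structured_data)
    return (
        "<html><head>"
        f"<title>{title}</title>"
        f'<script type="application/ld+json">{json_ld}</script>'
        "</head><body>"
        f"<h1>{title}</h1>"
        "</body></html>"
    )

# Hypothetical example values.
page = render_page("Software Developer Salaries", {
    "@context": "https://schema.org/",
    "@type": "Occupation",
    "name": "Software Developer",
})
assert "application/ld+json" in page  # present without any JavaScript
```

The same idea underlies server-side rendering and prerendering: whatever the tooling, the structured data ends up in the first HTML response rather than being added by the client.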

Why This Matters

AI crawlers will only grow in importance, and they play by different rules than traditional search engines.

If your site depends on GTM or other client-side JavaScript for structured data, you’re missing out on opportunities to rank in AI-driven search results.

By shifting to server-side or static solutions, you can future-proof your site and ensure visibility in traditional and AI searches.


Featured Image: nexusby/Shutterstock

TikTok Ban Sparks 5000% Surge In Alternative App Searches via @sejournal, @MattGSouthern

The recent TikTok ban drama in the U.S. has caused a surge in search activity as people look for answers, alternatives, and workarounds.

The app temporarily shut down over the weekend and was restored after President-elect Donald Trump announced a 90-day extension. This led to a notable rise in search interest.

An SEO consultant named Sobhi Smat compiled a collection of search data and shared it on LinkedIn.

Here’s what the data shows about people’s reactions and what it means for marketers.

The Context: TikTok’s Uncertain Future

On January 17, the U.S. Supreme Court upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA). The original deadline for compliance was January 19.

In response, on January 18, TikTok began shutting down its services in the U.S., removing the app from app stores and displaying service discontinuation notices.

On January 19, President-elect Donald Trump announced plans for a 90-day extension via executive order, allowing TikTok to temporarily restore operations while negotiations continue.

Search Behavior: Three Key Trends Emerge

Analysis of Google search data from January 1 to January 16 reveals three dominant categories of search behavior related to the TikTok ban:

  1. Staying Informed
  2. Exploring Alternatives
  3. Circumventing the Ban

1. Staying Informed

One of the largest spikes in search activity was caused by people trying to understand the reasons behind the ban and stay informed about recent developments.

Queries like “TikTok ban update,” “Supreme Court ruling on TikTok,” and “Is the TikTok ban extended?” saw a breakout, with search interest increasing by over 5000%.

2. Exploring TikTok Alternatives: A Battle for User Attention

As fears of TikTok’s potential shutdown grew, people turned to Google to explore alternative platforms.

The search term “TikTok alternatives” saw explosive growth, alongside interest in specific apps such as RedNote, Lemon8, Clapper, and Fanbase.

RedNote: The Rising Star

Among alternatives, RedNote attracted the most attention, with breakout search terms like “What is RedNote?”, “Is RedNote safe?”, and “TikTok vs RedNote”.

However, RedNote’s surge in popularity exposed its challenges, particularly in delivering high-quality English-language content and addressing translation issues. This led to a related search spike for “Chinese to English translation.”

Other Notable Alternatives

Other apps like Lemon8, Clapper, and Fanbase also saw increased search interest:

  • Lemon8: Questions included “What is the Lemon8 app?” and “Will Lemon8 be banned, too?”
  • Clapper: Searches like “what is Clapper social media” and “is Clapper safe” highlighted curiosity about this lesser-known platform.
  • Fanbase: Users searched for “how to invest in Fanbase” and “Isaac Hayes Fanbase app,” showing interest in the app’s unique monetization features.

3. Circumventing the Ban

Another trend involved users searching for ways to continue accessing TikTok despite the shutdown.

Queries like “Can I use TikTok with VPN?”, “How to change location on TikTok?”, and “VPN for TikTok?” spiked dramatically.

The interest in VPNs shows TikTok’s user base is determined to bypass restrictions and maintain access to the platform.

Deletion Trends

While people explored TikTok replacements, search trends indicate they were quickly disappointed.

A spike in searches like “how to delete RedNote account” and “delete Lemon8 app” suggests that not all alternatives met user expectations.

Potential Buyers

Search trends also reflect public curiosity about potential U.S. buyers, with queries mentioning various high-profile figures, including Mr. Beast, Elon Musk, and even Dolly Parton.

This aligns with the legislative requirement for ByteDance to sell to a U.S. company or cease operations.

What This Means for Marketers

For digital marketers, current events show that relying on one platform is risky.

Marketers should monitor these developments closely, whether TikTok is sold, banned, or granted an extension.

This situation is a reminder of how legislative actions can influence online behavior and disrupt the market.


Featured Image: RKY Photo/Shutterstock

FTC: GoDaddy Hosting Was “Blind” To Security Threats via @sejournal, @martinibuster

The United States Federal Trade Commission (FTC) charged GoDaddy with violations of the Federal Trade Commission Act for allegedly maintaining “unreasonable” security practices that led to multiple security breaches. The FTC’s proposed settlement order will require GoDaddy to take reasonable steps to tighten security and engage third-party security assessments.

FTC Charged GoDaddy With Security Failures

The FTC complaint charged GoDaddy with misrepresenting itself as a secure web host through marketing on its website, in emails, and in its “Trust Center,” alleging that GoDaddy provided customers with “lax data security” in its web hosting environment.

The FTC complaint (PDF) stated:

“Since at least 2015, GoDaddy has marketed itself as a secure choice for customers to host their websites, touting its commitment to data security and careful threat monitoring practices in multiple locations, including its main website for hosting services, its “Trust Center,” and in email and online marketing.

In fact, GoDaddy’s data security program was unreasonable for a company of its size and complexity. Despite its representations, GoDaddy was blind to vulnerabilities and threats in its hosting environment. Since 2018, GoDaddy has violated Section 5 of the FTC Act by failing to implement standard security tools and practices to protect the environment where it hosts customers’ websites and data, and to monitor it for security threats.”

Proposed Settlement

The FTC is proposing that GoDaddy implement a security program to settle charges that it failed to secure its web hosting services, endangering its customers and the people who visited its customers’ compromised websites during major security breaches between 2019 and 2022.

The settlement proposes the following to settle the charges with GoDaddy:

“Prohibit GoDaddy from making misrepresentations about its security and the extent to which it complies with any privacy or security program sponsored by a government, self-regulatory, or standard-setting organization, including the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks;

Require GoDaddy to establish and implement a comprehensive information-security program that protects the security, confidentiality, and integrity of its website-hosting services; and

Mandate that GoDaddy hire an independent third-party assessor who conducts an initial and biennial review of its information-security program.”

Read the FTC statement:

FTC Takes Action Against GoDaddy for Alleged Lax Data Security for Its Website Hosting Services

Featured Image by Shutterstock/Photo For Everything

OpenAI Secretly Funded Benchmarking Dataset Linked To o3 Model via @sejournal, @martinibuster

Revelations that OpenAI secretly funded and had access to the FrontierMath benchmarking dataset are raising concerns about whether the dataset was used to train its o3 reasoning model, and about the validity of the model’s high scores.

In addition to accessing the benchmarking dataset, OpenAI funded its creation, a fact that was withheld from the mathematicians who contributed to developing FrontierMath. Epoch AI belatedly disclosed OpenAI’s funding only in the final paper published on Arxiv.org, which announced the benchmark. Earlier versions of the paper omitted any mention of OpenAI’s involvement.

Screenshot Of FrontierMath Paper

Closeup Of Acknowledgement

Previous Version Of Paper That Lacked Acknowledgement

OpenAI o3 Model Scored Highly On FrontierMath Benchmark

The news of OpenAI’s secret involvement is raising questions about the high scores achieved by the o3 reasoning AI model and causing disappointment with the FrontierMath project. Epoch AI responded with transparency about what happened and what it is doing to check whether the o3 model was trained with the FrontierMath dataset.

Giving OpenAI access to the dataset was unexpected because the whole point of a benchmark is to test AI models, which can’t be done if the models know the questions and answers beforehand.

A post in the r/singularity subreddit expressed this disappointment and cited a document that claimed that the mathematicians didn’t know about OpenAI’s involvement:

“Frontier Math, the recent cutting-edge math benchmark, is funded by OpenAI. OpenAI allegedly has access to the problems and solutions. This is disappointing because the benchmark was sold to the public as a means to evaluate frontier models, with support from renowned mathematicians. In reality, Epoch AI is building datasets for OpenAI. They never disclosed any ties with OpenAI before.”

The Reddit discussion cited a publication that revealed OpenAI’s deeper involvement:

“The mathematicians creating the problems for FrontierMath were not (actively)[2] communicated to about funding from OpenAI.

…Now Epoch AI or OpenAI don’t say publicly that OpenAI has access to the exercises or answers or solutions. I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation.”

Tamay Besiroglu (LinkedIn Profile), associate director at Epoch AI, acknowledged that OpenAI had access to the datasets but asserted that there was a “holdout” dataset that OpenAI didn’t have access to.

He wrote in the cited document:

“Tamay from Epoch AI here.

We made a mistake in not being more transparent about OpenAI’s involvement. We were restricted from disclosing the partnership until around the time o3 launched, and in hindsight we should have negotiated harder for the ability to be transparent to the benchmark contributors as soon as possible. Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset. We own this error and are committed to doing better in the future.

Regarding training usage: We acknowledge that OpenAI does have access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities. However, we have a verbal agreement that these materials will not be used in model training.

OpenAI has also been fully supportive of our decision to maintain a separate, unseen holdout set—an extra safeguard to prevent overfitting and ensure accurate progress measurement. From day one, FrontierMath was conceived and presented as an evaluation tool, and we believe these arrangements reflect that purpose. “

More Facts About OpenAI & FrontierMath Revealed

Elliot Glazer (LinkedIn profile/Reddit profile), the lead mathematician at Epoch AI, confirmed that OpenAI has the dataset and was allowed to use it to evaluate OpenAI’s o3 large language model, the company’s next state-of-the-art AI, referred to as a reasoning model. He offered his opinion that the high scores obtained by the o3 model are “legit,” and said that Epoch AI is conducting an independent evaluation to determine whether o3 had access to the FrontierMath dataset for training, which could cast the model’s high scores in a different light.

He wrote:

“Epoch’s lead mathematician here. Yes, OAI funded this and has the dataset, which allowed them to evaluate o3 in-house. We haven’t yet independently verified their 25% claim. To do so, we’re currently developing a hold-out dataset and will be able to test their model without them having any prior exposure to these problems.

My personal opinion is that OAI’s score is legit (i.e., they didn’t train on the dataset), and that they have no incentive to lie about internal benchmarking performances. However, we can’t vouch for them until our independent evaluation is complete.”

Glazer had also shared that Epoch AI was going to test o3 using a “holdout” dataset that OpenAI didn’t have access to, saying:

“We’re going to evaluate o3 with OAI having zero prior exposure to the holdout problems. This will be airtight.”

Another post on Reddit by Glazer described how the “holdout set” was created:

“We’ll describe the process more clearly when the holdout set eval is actually done, but we’re choosing the holdout problems at random from a larger set which will be added to FrontierMath. The production process is otherwise identical to how it’s always been.”
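The holdout process Glazer describes, choosing problems at random from a larger pool so the evaluated model has no prior exposure to them, can be sketched roughly like this. The pool and split sizes are hypothetical, not Epoch AI’s actual procedure.

```python
import random

def split_holdout(problems: list, holdout_size: int, seed: int = 0):
    """Randomly select a holdout set from a larger problem pool.
    The holdout stays unseen by the model vendor; the rest may be shared."""
    rng = random.Random(seed)
    shuffled = problems[:]
    rng.shuffle(shuffled)
    return shuffled[:holdout_size], shuffled[holdout_size:]

# Hypothetical pool of benchmark problem IDs.
pool = [f"problem_{i}" for i in range(100)]
holdout, shared = split_holdout(pool, holdout_size=20)

assert len(holdout) == 20 and len(shared) == 80
assert not set(holdout) & set(shared)  # no overlap: holdout stays unseen
```

The point of the random draw is that the holdout problems are statistically interchangeable with the shared ones, so a score gap between the two sets would suggest contamination.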

Waiting For Answers

That’s where the drama stands until the Epoch AI evaluation is completed, which will indicate whether OpenAI trained its reasoning model with the dataset or only used it for benchmarking.

Featured Image by Shutterstock/Antonello Marangi

Confirmed: Google Is Requiring JavaScript To Block SEO Tools via @sejournal, @martinibuster

Google has made a change to how its search results are served, which will also help secure them against bots and scrapers. Whether this will further affect SEO tools, or whether they can use a headless Chrome that executes JavaScript, remains an open question at the moment. It’s likely that Google is using rate limiting to throttle how many pages can be requested within a set period of time.

Google Search Now Requires JavaScript

Google quietly updated their search box to require all users, including bots, to have JavaScript turned on when searching.

Surfing Google Search without JavaScript turned on results in the following message:

Turn on JavaScript to keep searching
The browser you’re using has JavaScript turned off. To continue your search, turn it on.

Screenshot Of Google Search JavaScript Message

In an email to TechCrunch, a Google spokesperson shared the following details:

“Enabling JavaScript allows us to better protect our services and users from bots and evolving forms of abuse and spam, …and to provide the most relevant and up-to-date information.”

JavaScript possibly enables personalization in the search experience, which is what that spokesperson may mean by providing the most relevant information. But JavaScript can also be used for blocking bots.

Using the latest version of Chrome I copied some of the JavaScript and ran it through ChatGPT to ask what it does. One part of it may relate to restricting abusive requests for documents.

Screenshot Of Chrome Dev Tools

ChatGPT gave me the following feedback:

“Core Functionalities
Randomized Value Generation (rdb)

Generates a random value based on properties (D_d, idc, and p4b) of the input object a, constrained by p7d.
This may be used for rate-limiting, exponential backoff, or similar logic.

Purpose and Context
From its components, the script:

Likely handles request retries or access control for web resources.

Implements a policy enforcement system, where:

Policies determine if requests are valid.

Errors are logged and sometimes retried based on rules.

Randomized delays or limits might control the retry mechanism.

Appears optimized for error handling and resilience in distributed or high-traffic systems, possibly within a Google service or API.”

ChatGPT said that the code may use rate-limiting which is a way to limit the number of actions a user or a system can take within a specific time period.

Rate-Limiting:

Used to enforce a limit on the number of actions (e.g., API requests) a user or system can perform within a specific time frame.
In this code, the random values generated by rdb could be used to introduce variability in when or how often requests are allowed, helping to manage traffic effectively.

Exponential Backoff:

ChatGPT explained that exponential backoff is a way to limit the amount of retries for a failed action a user or system is allowed to make. The time period between retries for a failed action increases exponentially.

Similar Logic:

ChatGPT explained that random value generation could be used to manage access to resources to prevent abusive requests.
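For illustration, a generic version of exponential backoff with jitter, the kind of retry logic ChatGPT speculated about, might look like the sketch below. This is a conceptual example, not Google’s actual code; the base delay, cap, and seed are arbitrary.

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0, seed: int = 0):
    """Exponentially growing retry delays with full jitter, capped at `cap` seconds."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))  # 0.5, 1, 2, 4, 8, ... up to cap
        delays.append(rng.uniform(0, ceiling))     # jitter spreads retries apart
    return delays

# A client would sleep for each delay between failed attempts, e.g.:
#   for delay in backoff_delays(5): time.sleep(delay); retry()
print(backoff_delays(5))
```

The random jitter is what keeps many failing clients from retrying in lockstep, which is the same reason randomized values are useful for rate limiting.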

I don’t know for certain that this is what that specific JavaScript is doing; that’s what ChatGPT explained, and it matches the information Google shared that it is using JavaScript as part of its strategy for blocking bots.

Google Workspace Support: Unclear If Opting Out AI Features Avoids Price Hike via @sejournal, @MattGSouthern

Google has made its AI-powered features in Gmail, Docs, Sheets, and Meet free for all Workspace users, but questions remain around pricing adjustments and feature visibility for specific accounts.

AI Now Included Without Extra Cost

Google announced that its full suite of AI tools, previously available only through the $20-per-user-per-month Gemini for Workspace plan, is now included in its standard offerings at no additional charge.

AI capabilities like automated email summaries, meeting note-taking, spreadsheet design suggestions, and the Gemini chatbot are now accessible to all customers.

However, this announcement comes with a catch: Workspace plans will see a $2 price hike per user per month.

The new pricing structure raises the base cost of the Workspace Business Standard plan from $12 to $14 per user, effective immediately for new customers.

Starting March 17, existing customers will see the change reflected. Small business accounts are currently exempt from this adjustment.

Confusion Over Pricing & Settings

While the price increase has been widely reported, Google Workspace support has offered additional clarification, indicating that it may not apply to all users.

According to support representatives, it’s unclear whether organizations that opt out of AI features will still face the increased costs. Official guidance on this matter has yet to be issued, leaving many customers uncertain.

Screenshot from Google support chat, January 2025.

Chats between Google Workspace reps and the Search Engine Journal development team reveal that opting out of AI features isn’t straightforward.

The settings to turn off AI features like Gemini aren’t visible by default for business accounts.

Administrators must contact Google support to enable access to these settings. For enterprise customers, the settings are accessible directly within the Workspace admin console.

Competitive Push Against Microsoft

Google’s move to bundle AI features into its standard Workspace offerings mirrors Microsoft’s recent decision to integrate its Copilot Pro AI tools into the standard Microsoft 365 subscription.

Both companies aim to attract more users to their AI-powered productivity platforms while simplifying pricing structures.

Key Takeaways

For organizations using Google Workspace, here are the critical points to note:

  1. AI Features Are Enabled by Default: Gemini and other AI tools will be active for most accounts unless explicitly disabled.
  2. Opt-Out Process Is Complicated: Business account holders must contact Google support to access and change the AI feature settings. Enterprise accounts can manage these settings directly.
  3. Pricing Uncertainty: It’s unclear whether the $2 price increase will still apply if you opt out of AI tools. Google has stated that further updates on this issue are forthcoming.

Businesses are advised to monitor their Workspace settings closely and contact Google support for clarification.

Google Causes Global SEO Tool Outages via @sejournal, @martinibuster

Google cracked down on web scrapers that harvest search results data, triggering global outages at many popular rank tracking tools, like SEMRush, that depend on providing fresh data from search results pages.

What happens if Google’s SERPs are completely blocked? A certain amount of the data provided by tracking services has long been extrapolated by algorithms from a variety of data sources. It’s possible that one way around the current block is to extrapolate the data from other sources.

SERP Scraping Prohibited By Google

Google’s guidelines have long prohibited automated rank checking in the search results, but apparently Google has also allowed many companies to scrape its search results and charge for access to ranking data for the purposes of tracking keywords and rankings.

According to Google’s guidelines:

“Machine-generated traffic (also called automated traffic) refers to the practice of sending automated queries to Google. This includes scraping results for rank-checking purposes or other types of automated access to Google Search conducted without express permission. Machine-generated traffic consumes resources and interferes with our ability to best serve users. Such activities violate our spam policies and the Google Terms of Service.”

Blocking Scrapers Is Complex

It’s highly resource-intensive to block scrapers, especially because they can respond to blocks by changing their IP addresses and user agents. Another way to block scrapers is to target specific behaviors, such as how many pages a user requests; excessive page requests can trigger a block. The problem with that approach is that keeping track of all the blocked IP addresses, which can quickly number in the millions, is itself resource-intensive.
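A behavior-based block of the kind described, counting page requests per client within a sliding window and blocking heavy requesters, could be sketched like this. It’s a conceptual illustration, not how Google actually does it; the thresholds and IP address are made up.

```python
from collections import defaultdict, deque

class RequestMonitor:
    """Block a client once it exceeds `max_requests` within `window` seconds."""
    def __init__(self, max_requests: int, window: float):
        self.max_requests = max_requests
        self.window = window
        self.history = defaultdict(deque)  # client IP -> request timestamps
        self.blocked = set()

    def allow(self, ip: str, now: float) -> bool:
        if ip in self.blocked:
            return False
        q = self.history[ip]
        # Drop timestamps that fell out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        if len(q) > self.max_requests:
            # The blocklist itself must be stored, which is what gets
            # costly when blocked IPs number in the millions.
            self.blocked.add(ip)
            return False
        return True

monitor = RequestMonitor(max_requests=3, window=60.0)
results = [monitor.allow("203.0.113.7", t) for t in (0, 1, 2, 3, 4)]
print(results)  # the 4th request inside the window trips the block
```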

Reports On Social Media

A post in the private SEO Signals Lab Facebook Group announced that Google was striking hard against web scrapers, with one member commenting that the Scrape Owl tool wasn’t working for them, while others noted that SEMRush’s data had not updated.

Another post, this time on LinkedIn, noted multiple tools that weren’t refreshing their content, but also noted that the blocking hadn’t affected all data providers: Sistrix and MonitorRank were still working. Someone from a company called HaloScan reported making adjustments to resume scraping data from Google, and someone else reported that another tool, MyRankingMetrics, was still reporting data.

So whatever Google is doing, it’s not currently affecting all scrapers. It may be that Google is targeting certain scraping behavior, learning from the responses, and improving its blocking ability. The coming weeks may reveal whether Google is improving its ability to block scrapers or is only targeting the biggest ones.

Another post on LinkedIn speculated that blocking may result in higher resources and fees charged to end users of SaaS SEO tools. They posted:

“This move from Google is making data extraction more challenging and costly. As a result, users may face higher subscription fees. “

Ryan Jones tweeted:

“Google seems to have made an update last night that blocks most scrapers and many APIs.

Google, just give us a paid API for search results. we’ll pay you instead.”

No Announcement By Google

So far there has been no announcement from Google, but the chatter online may force someone at Google to consider making a statement.

Featured Image by Shutterstock/Krakenimages.com

Google Study: 29% In The U.S. & Canada Used AI Last Year via @sejournal, @MattGSouthern

A new Google-Ipsos report shows AI adoption is increasing globally, especially in emerging markets.

However, the study reveals challenges like regional divides, gender disparities, and slower adoption in developed countries.

Critics, including Nate Hake, founder of Travel Lemming, point out how Google overlooks these challenges in its report coverage.

While optimism around AI is rising, it’s not resonating with everyone.

Here’s a closer look at the report and what the numbers indicate.

AI Is Growing, But Unevenly

Globally, 48% of people used generative AI last year, with countries like Nigeria, Mexico, and South Africa leading adoption. These regions also show the most excitement about AI’s potential to boost economies and improve lives.

Adoption lags at 29% in developed nations like the U.S. and Canada, meaning that 71% of people in these regions haven’t knowingly engaged with generative AI tools.

Screenshot: Google-Ipsos Study ‘Our life with AI: From innovation to application,’ January 2025.

Optimism Outweighs Concerns

Globally, 57% of people are excited about AI, compared to 43% who are concerned—a shift from the year prior, when excitement and concerns were evenly split.

People cite AI’s potential in science (72%) and medicine (71%) as reasons for their optimism. Respondents see opportunities for breakthroughs in healthcare and research.

However, in the U.S., skepticism lingers—only 52% believe AI will directly benefit “people like them,” compared to the global average of 59%.

Gender Gaps Persist

The report highlights a gender gap in AI usage: 55% of global AI users are men compared to 45% women.

The disparity is even wider in workplace adoption, where only 41% of professional AI users are women.

Emerging Markets Are Leading the Way

Emerging markets are using AI more and are more optimistic about its potential.

In regions like Nigeria and South Africa, people are more likely to believe AI will transform their economies.

Meanwhile, developed countries like the U.S. and U.K. remain cautious.

Only 53% of Americans prioritize AI innovation, compared to much higher enthusiasm in emerging markets.

Non-Generative AI

While generative AI tools like chatbots and content generators grab headlines, the public is more appreciative of non-generative AI applications.

These include AI for healthcare, fraud detection, flood forecasting, and other practical, high-impact use cases.

Generative AI, on the other hand, gets mixed reviews.

Writing, summarizing, or customer service applications don’t resonate as strongly with the public as AI’s potential to tackle bigger societal issues.

AI at Work: Young, Affluent, and Male-Dominated

AI is making its way into the workplace: 74% of AI users use it professionally for writing, brainstorming, and problem-solving tasks.

However, workplace AI adoption is skewed toward younger, wealthier, and male workers.

Blue-collar workers and older professionals are catching up—67% of blue-collar AI users and 68% of workers aged 50-74 use AI at work—but the gender gap remains pronounced.

Trust in AI Is Growing

Trust in AI governance is improving, with 61% of people confident their governments can regulate AI responsibly (up from 57% in 2023).

72% support collaboration between governments and companies to manage AI’s risks and maximize its benefits.

Takeaway

AI use is growing worldwide, though many people in North America still see little reason to use it.

To increase AI’s adoption, companies must build trust and clearly communicate the technology’s benefits.

For more details, check out the full report at Google Public Policy.


Featured Image: Stokkete/Shutterstock

Evidence That Google Detects AI-Generated Content via @sejournal, @martinibuster

A sharp-eyed Australian SEO spotted indirect confirmation about Google’s use of AI detection as part of search rankings that was hiding in plain sight for years. Although Google is fairly transparent about content policies, the new data from a Googler’s LinkedIn profile adds a little more detail.

Gagan Ghotra tweeted:

“Important FYI Googler Chris Nelson from Search Quality team his LinkedIn says He manages global team that build ranking solutions as part of Google Search ‘detection and treatment of AI generated content’.”

Googler And AI Content Policy

The Googler, Chris Nelson, works in Google’s Search Ranking department and is listed as a co-author of Google’s guidance on AI-generated content, which makes knowing a little bit about him worthwhile.

The relevant work experience at Google is listed as:

“I manage a large, global team that builds ranking solutions as part of Google Search and direct the following areas:

-Prevent manipulation of ranking signals (e.g., anti-abuse, spam, harm)
-Provide qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)
-Address novel content issues (e.g., detection and treatment of AI-generated content)
-Reward satisfying, helpful content”

There are no search ranking related research papers or patents listed under his name but that’s probably because his educational background is in business administration and economics.

What may be of special interest to publishers and digital marketers are the following two sections:

1. He lists addressing “detection and treatment of AI-generated content”

2. He provides “qualitative and quantitative understanding of quality issues (e.g., user interactions, insights)”

While the user interactions and insights part might seem unrelated to the detection and treatment of AI-generated content, it is in the service of understanding search quality issues, which is related.

His role is defined as evaluation and analysis of quality issues in Google’s Search Ranking department. “Quantitative understanding” refers to analyzing data, while “qualitative understanding” is the more subjective part of his job, likely about insights: understanding the “why” and “how” behind the observed data.

Co-Author Of Google’s AI-Generated Content Policy

Chris Nelson is listed as a co-author of Google’s guidance on AI-generated content. The guidance doesn’t prohibit the use of AI for published content, but advises that it shouldn’t be used to create content that violates Google’s spam guidelines. That may sound contradictory because AI is virtually synonymous with scaled automated content, which Google has historically considered spam.

The answer is in the nuance of Google’s policy, which encourages publishers to prioritize user-first content over a search-engine-first approach. In my opinion, putting a strong focus on writing about the most popular search queries in a topic, instead of writing about the topic itself, can lead to search-engine-first content. That’s a common approach among sites I’ve audited that contained relatively high-quality content but lost rankings in the 2024 Google updates.

Google’s advice (and presumably Chris Nelson’s) for those considering AI-generated content is:

“…however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.”

Why Doesn’t Google Ban AI-Generated Content Outright?

Google’s documentation, which Chris Nelson co-authored, states that automation has always been a part of publishing, citing examples like dynamically inserted sports scores, weather forecasts, scaled meta descriptions, and date-dependent content and products related to entertainment.

The documentation states:

“…For example, about 10 years ago, there were understandable concerns about a rise in mass-produced yet human-generated content. No one would have thought it reasonable for us to declare a ban on all human-generated content in response. Instead, it made more sense to improve our systems to reward quality content, as we did.

…Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. …Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways.”

Why Does A Googler Detect AI-Generated Content?

The documentation that Nelson co-authored states that Google doesn’t differentiate based on how low-quality content is generated, which seemingly contradicts his LinkedIn profile, which lists “detection and treatment of AI-generated content” as part of his job.

The AI-generated content guidance states:

“Poor quality content isn’t a new challenge for Google Search to deal with. We’ve been tackling poor quality content created both by humans and automation for years. We have existing systems to determine the helpfulness of content. …Our systems continue to be regularly improved.”

How do we reconcile that part of his job is detecting AI-generated content and Google’s policy states that it doesn’t matter how low quality content is generated?

The answer is context. Here’s the context from his work profile:

“Address novel content issues (e.g., detection and treatment of AI-generated content)”

The phrase “novel content issues” means content quality issues that Google hasn’t previously encountered. This refers to new types of AI-generated content, presumably spam, and how to detect and “treat” it. Given that the context is “detection and treatment,” the implied subject may well be low-quality content; it probably wasn’t expressly stated because he didn’t expect his LinkedIn profile to be parsed by SEOs for a better understanding of how Google detects and treats AI-generated content (meta!).

Guidance Authored By Chris Nelson Of Google

A list of articles published by Chris Nelson shows that he may have played a role in many of the most important updates of the past five years, from the Helpful Content update and the site reputation abuse policy to detecting search-engine-first AI-generated content.

List of Articles Authored By Chris Nelson (LinkedIn Profile)

Updating our site reputation abuse policy

What web creators should know about our March 2024 core update and new spam policies

Google Search’s guidance about AI-generated content

What creators should know about Google’s August 2022 helpful content update

Featured Image by Shutterstock/3rdtimeluckystudio