Google Search History Can Now Power Gemini AI Answers via @sejournal, @martinibuster

Google announced an update to its Gemini personal AI assistant that increases the personalization of responses so that Gemini anticipates users’ needs and feels more like a natural personal assistant than a tool. Examples of how the new Gemini will help users include brainstorming travel ideas and making personalized recommendations.

The new feature rolls out first to desktop and then to mobile apps.

Gemini With Personalization

Google announced a new version of Gemini that adapts responses to a user’s unique interests. It does this based on their search history, which enables Gemini to deliver responses with a higher level of contextual relevance and personalization. Google intends to expand personalization by integrating other Google apps and services, naming Photos and YouTube as examples.

Google explained:

“In the coming months, Gemini will expand its ability to understand you by connecting with other Google apps and services, including Photos and YouTube. This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you.”

How Personalization Works

Users can share their personal preferences and details, like dietary requirements or a partner’s name, in order to obtain a greater degree of personalization in responses that feel specific to the individual. Advanced users can allow Gemini to access past chats to further improve the relevance of responses.

Google’s access to search history and data from other apps may give it an advantage that competing apps like ChatGPT may not be able to match.

Personalization Is Opt-In

There are four key points to understand about personalization in Gemini:

  1. Personalization is currently an opt-in feature that’s labeled “experimental.”
  2. Users need to choose to use Personalization from the model drop-down menu in order to activate it.
  3. Gemini asks for permission to connect to search history and other Google services and apps before it uses them for personalization.
  4. Users can also disconnect from the feature.

That means that millions of Gemini users won’t suddenly begin getting an increasing amount of their information from a contextual AI assistant instead of search. But it does mean the door to that happening exists, and the next step is for Google users to open it.

What Publishers Need To Know

This update increasingly blurs the line between traditional Search and Google’s assistant while simultaneously making information more accessible, in a way that publishers and SEOs should be concerned enough about to research how to respond.

Privacy considerations may keep Google from turning personalization into an opt-out feature. Personalization is currently opt-in from a drop-down menu because it’s still an experimental feature, but once it matures it’s not unreasonable to assume that Google may begin nudging users to adopt it.

Even though this is an experimental feature, publishers and SEOs may want to understand how it impacts them. For example, will it be possible to track personalized Gemini referral traffic, or will that traffic be masked because of privacy considerations? Will answers from Gemini reduce the need for clicks to publisher sites?

Read Google’s announcement:

Gemini gets personal, with tailored help from your Google app

Featured Image by Shutterstock/Tada Images

Google Begins Rolling Out March Core Algorithm Update via @sejournal, @MattGSouthern

Google has officially begun rolling out its March 2025 core algorithm update, according to an announcement posted to the Google Search Status Dashboard today.

The update commenced at 9:23 AM PDT and is expected to fully deploy across all of Google’s search systems in up to two weeks.

The company provided minimal details beyond the timing and expected duration of the rollout.

The official announcement reads:

“Released the March 2025 core update. The rollout may take up to 2 weeks to complete.”

What This Means For SEO Professionals

Core updates are comprehensive changes to Google’s main search algorithms and systems.

Unlike smaller updates that focus on specific issues, core updates typically produce noticeable changes to search rankings across the web.

Website owners and SEO professionals should expect fluctuations in search visibility and rankings over the coming weeks as the update gradually rolls out.

These changes often stabilize once the update is fully implemented, though permanent shifts in positioning can occur based on how the new algorithm evaluates content quality and relevance.

Preparing For Algorithm Changes

As with previous core updates, Google hasn’t provided specific details about changes made to its ranking systems. The company typically advises creating high-quality content rather than trying to fix particular issues when rankings drop after an update.

Monitor your analytics during this period to identify any significant changes in traffic or rankings. Documenting these changes can help determine whether adjustments are needed once the update has been fully implemented.

Search Engine Journal will continue to monitor the impact of this update and provide additional information as it becomes available.

Origins of Google Shopping’s AI Vision Match

Google Shopping added generative artificial intelligence to fashion listings this month, changing how some shoppers discover apparel items and reinforcing ecommerce fundamentals.

Shoppers are often of two minds, according to Google. Some have only a vague idea of what they want. Others have a clear vision.

“It can be hard to translate a vision for an item that fits your personal style (say, ‘colorful midi dress with big daisies’) into something you can buy and have in your closet by Friday,” wrote Lilian Rincon, a Google vice president, in a March 5, 2025 blog post.

Vision Match

Google Shopping’s new AI image generator aims to help shoppers find what they want. Called “Vision Match” in Google’s documentation, the feature is labeled “Create & Shop” on the customer-facing front end.

A shopper can type or speak a description, such as Rincon’s “colorful midi dress with big daisies.” The Vision Match AI generates images based on that description — flowered dresses in this case — and shares shoppable product listings similar to the generated images.

Vision Match ingests a text description and generates images, such as Google’s example of a green dress with daisies.

Vision Match may function as a bridge spanning a shopper’s abstract idea and an actual product for sale.

Moreover, Vision Match pairs well with other Google features that deploy shopping data to improve ad performance and product discovery, including:

  • Google Lens, which allows users to search for products by uploading images or taking photos.
  • GenAI search in Google Shopping, such as tools that help shoppers find products.
  • Google Shopping image search and style matching for fashion and home décor.
  • Virtual try-on for beauty and apparel, allowing users to see how products look on models.

Improved Shopping

Google Shopping’s various AI tools will almost certainly improve consumers’ experiences. Folks use Google to shop more than a billion times a day, and the company has an excellent store of data.

Google knows what products are available via its Shopping Graph, which had 45 billion listings as of October 2024, as well as what shoppers want, e.g., a “colorful midi dress with big daisies.”

For example, the press kit Google’s media relations team shared with journalists ahead of the Vision Match announcement included a “trends” document that stated:

  • “Cheetah print jeans” and “leopard jeans” are the top trending types of jeans.
  • In April 2024, search interest in “baggy jeans” surpassed that of “skinny jeans” for the first time, and “baggy jeans” have remained on top ever since.
  • “Shell skirt” is at an all-time high for the second consecutive month.
  • Idaho is the only U.S. state where purple lipstick is the most popular.

For better or worse, Google knows much about shoppers (and advertisers). Google Shopping can find the needle in a haystack of 45 billion products.

Optimizing for AI

With Vision Match, Google is not reinventing ecommerce but becoming better at using the data.

Optimizing products for Google’s AI features typically includes:

  • Aligning product listings for AI. Vision Match and other AI features use data from the Shopping Graph (see the structured-data sketch after this list).
  • Creating superior product descriptions. Describe the product’s physical specs and primary benefits.
  • Using quality images. AI tools analyze product images for colors, features, and more.
  • Advertising. Use Performance Max campaigns to ensure products appear across Google Shopping, Search, and YouTube.
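
To make the first two tactics concrete, here is a minimal sketch of schema.org Product markup, the kind of structured product data the Shopping Graph draws on. The product, prices, and URLs are invented for illustration; this is not a required Google format:

```python
# Build a hypothetical schema.org Product JSON-LD block for a product page.
# All field values are invented for illustration.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Colorful Midi Dress With Big Daisies",
    "description": "Knee-length green midi dress with an all-over daisy print.",
    "image": ["https://www.example.com/images/daisy-dress.jpg"],
    "offers": {
        "@type": "Offer",
        "price": "59.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://www.example.com/products/daisy-dress",
    },
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```

The name and description fields are where the physical specs and primary benefits recommended above belong.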

None of these tactics, however, are novel. They are fundamental to selling products online. Since 1995 — the year Amazon and eBay launched — sellers have needed structured, descriptive, and visual product information promoted by advertising.

Thus Google Shopping’s AI initiatives are, in a sense, sensible ecommerce practices and an opportunity for merchants. What has worked well — an online seller’s existing tactics — is the path to success in an AI-driven future.

Google Publishes New Robots.txt Explainer via @sejournal, @martinibuster

Google published a new Robots.txt refresher explaining how Robots.txt enables publishers and SEOs to control search engine crawlers and other bots (that obey Robots.txt). The documentation includes examples of blocking specific pages (like shopping carts), restricting certain bots, and managing crawling behavior with simple rules.

From Basics To Advanced

The new documentation offers a quick introduction to what Robots.txt is and gradually progresses to increasingly advanced coverage of what publishers and SEOs can do with Robots.txt and how it benefits them.

The main point of the first part of the document is to introduce robots.txt as a stable web protocol with a 30-year history that’s widely supported by search engines and other crawlers.

Google Search Console will report a 404 error message if the Robots.txt file is missing. It’s okay for that to happen, but if it bugs you to see that in GSC, you can wait 30 days and the warning will drop off. An alternative is to create a blank Robots.txt file, which is also acceptable to Google.

Google’s new documentation explains:

“You can leave your robots.txt file empty (or not have one at all) if your whole site may be crawled, or you can add rules to manage crawling.”

From there it covers the basics like custom rules for restricting specific pages or sections.

The advanced uses of Robots.txt cover these capabilities (illustrated in the sample file after this list):

  • Targeting specific crawlers with different rules.
  • Blocking URL patterns, such as PDFs or search pages.
  • Exercising granular control over specific bots.
  • Adding comments for internal documentation.
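
For example, a short Robots.txt file exercising all four capabilities might look like this (the bot name and paths are hypothetical):

```
# Internal note: keep all crawlers out of the cart and search pages
User-agent: *
Disallow: /cart/
Disallow: /search/
Disallow: /*.pdf$

# Granular control: give one specific bot its own, stricter rule
User-agent: ExampleBot
Disallow: /
```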

The new documentation finishes by describing how simple it is to edit the Robots.txt file: it’s a text file with simple rules, so all you need is a simple text editor. Many content management systems have a way to edit it, and there are tools available for testing whether the Robots.txt file is using the correct syntax.
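
One such test can be run with Python’s standard library. This is a minimal sketch, not the tooling Google’s documentation refers to; the rules, bot name, and URLs are hypothetical:

```python
# Sanity-check robots.txt rules with Python's standard library.
# Note: the stdlib parser handles simple prefix rules only (no * or $ wildcards).
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /cart/

User-agent: ExampleBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "https://www.example.com/cart/checkout"))  # False
print(parser.can_fetch("Googlebot", "https://www.example.com/blog/post"))      # True
print(parser.can_fetch("ExampleBot", "https://www.example.com/blog/post"))     # False
```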

Read the new documentation here:

Robots Refresher: robots.txt — a flexible way to control how machines explore your website

Featured Image by Shutterstock/bluestork

Google AIO: 4 Ways To Find Out If Your Brand Is Visible In Generative AI [With Prompts] via @sejournal, @bright_data

This post was sponsored by Bright Data. The opinions expressed in this article are the sponsor’s own.

Imagine this in the time of Google AIO: A potential customer asks Google Gemini, ChatGPT, or Perplexity AI for the best SEO tools, top e-commerce platforms, or leading digital agencies.

Your brand has dominated traditional search rankings for years. But your company isn’t mentioned when AI generates an answer.

No ranking. No link. No Google AIO visibility.

This is the new reality of AI-driven search, and most SEOs aren’t tracking it.


For years, you may have relied on keyword rankings, organic traffic, and SERP features to measure success.

But as AI-powered search engines reshape how information is delivered, these traditional SEO tracking methods are no longer enough.

How can brands ensure they are visible, accurately represented, and competitive if AI-generated answers influence user decisions without linking to websites?

With new challenges come new solutions. As AI answer engines continue to evolve, SEO professionals and rank tracking platforms must adapt by finding ways to monitor AI-generated search results in real time.

The ability to track brand mentions, analyze AI-driven recommendations, and compare competitor visibility is becoming just as critical as traditional keyword tracking.

In this article, we’ll explore:

  • Why traditional SEO tracking is becoming obsolete in the AI era.
  • The key queries brands should monitor in AI-generated search results.
  • How Bright Data’s Web Scraper API provides a unique solution for AI search tracking.
  • Why SEO pros must demand AI-ready tracking tools, and why rank tracking platforms must evolve to meet this need.

Why Traditional SEO Tracking Doesn’t Work For AIO

For years, SEO tracking has revolved around keyword rankings, organic traffic, and SERP features like featured snippets and People Also Ask (PAA).

However, AI-generated search results don’t follow these traditional ranking structures.

How AIO Affects SERPs

In a standard Google search, ranking in the top three positions means high visibility and traffic. But in AI-generated search, there are no numbered rankings, just a synthesized response that may or may not include your brand.

For example, if a user asks “What are the best SEO tools?”, Google Gemini or ChatGPT might generate a list of tools based on their training data and real-time web sources.

If your brand isn’t included in that response, you’re invisible to the user, regardless of how well you rank in traditional search.

How AI Affects Search Engine Optimization Tools

Without a way to track how AI search engines mention brands, you may be flying blind, and rank tracking tools are missing a critical data layer.

How To Track Your Brand In Generative AI Search & Artificial Intelligence Overviews (AIOs)

Tracking AI-generated search results isn’t as simple as checking keyword rankings.

Since AI models don’t rank pages but generate answers, you have to rethink what you measure.

Here are the four key query types that can reveal how AI search engines perceive and present your brand:

1. How To Find Out If AIO Knows Your Brand Exists

If AI answer engines don’t mention your brand when users ask about your industry, you’re invisible in AI search.

Even worse, if they misrepresent your brand, you could be losing trust without realizing it.

For example, if a user asks “What is [Your Brand] known for?”, the AI’s response could shape public perception. If it pulls outdated or incorrect information, you need to intervene.

Google AIO response, screenshot from Google, March 11, 2025
ChatGPT response, screenshot from ChatGPT, March 11, 2025

2. Listicles & Perception Terms: Are You a Top Recommendation?

AI answer engines frequently generate list-based recommendations like:

  • “Best SEO platforms for enterprise businesses.”
  • “Top marketing automation tools in 2025.”

If your brand doesn’t appear in these AI-generated lists, you’re missing out on potential customers who rely on AI search for recommendations.

3. Competitor Comparisons: How Do You Stack Up?

AI search engines dynamically compare brands, often answering queries like:

  • “Is [Your Brand] better than [Competitor]?”
  • “Best alternative to [Competitor]?”

If AI consistently recommends a competitor over your brand, you need to adjust your positioning and content strategy to improve your AI search presence.

Try it!

Pick a prompt from above and run it in the AI answer engine of your choice.

How Bright Data’s Web Scraper API Enables AIO Tracking & Answer Engine Tracking

Bright Data provides the data collection infrastructure that can extract AI-generated search data for:

  • SEO platforms.
  • Rank tracking tools.
  • Enterprises.

Key Capabilities Of Bright Data’s Web Scraper API:

  • Access AI Answer Engines – Extracts real-time data from Google Gemini, OpenAI’s GPT, Claude, and Perplexity AI.
  • Customizable Data Extraction – Enables platforms to collect AI-generated responses for specific queries, industries, or competitors.
  • Seamless Integration – Allows rank tracking platforms and SEO tools to integrate AI search data into their existing dashboards.
  • Scalable & Real-Time – Provides continuous monitoring of AI-generated search results to track brand mentions, sentiment, and competitor positioning.

By leveraging Bright Data’s Web Scraper API, you can gain visibility into AI search, ensuring you stay ahead in an evolving search landscape.
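
As a rough illustration of that workflow, a tracking integration could submit a query and scan the AI-generated answer for brand mentions. This sketch is hypothetical: the endpoint, payload fields, and token are placeholders, not Bright Data’s documented API:

```python
# Hypothetical sketch of AI-answer brand tracking. The endpoint and payload
# fields are placeholders; consult Bright Data's documentation for the real API.
import requests

API_TOKEN = "YOUR_API_TOKEN"                     # placeholder credential
ENDPOINT = "https://api.example.com/ai-answers"  # placeholder endpoint

def brand_mentioned(query: str, brand: str) -> bool:
    """Submit a query to an AI answer engine and check for a brand mention."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"engine": "gemini", "prompt": query},  # hypothetical fields
        timeout=30,
    )
    response.raise_for_status()
    answer_text = response.json().get("answer", "")
    return brand.lower() in answer_text.lower()

queries = [
    "What are the best SEO tools?",
    "Top marketing automation tools in 2025",
]
for q in queries:
    print(q, "->", brand_mentioned(q, "YourBrand"))
```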

The Future of Rank Tracking In An AI-Integrated SERP

As AI-generated search results become more dominant, SEOs are already demanding AI search tracking capabilities from their tools, and rank tracking platforms must evolve to meet this need.

  • Traditional keyword rankings will decline in importance as AI-generated answers take up more space in search results.
  • SEO tools must adapt to track brand presence, AI-generated citations, and dynamic competitor comparisons.
  • SEOs should begin integrating AI search tracking now to stay ahead of the curve.

AI search tracking is no longer optional; it’s essential.

Bright Data’s Web Scraper API provides the data collection infrastructure that enables brands and SEO platforms to monitor their presence in AI-generated search results across multiple platforms.

🔗 Explore Bright Data’s Web Scraper API
🔗 Read more on optimizing for generative AI search


Image Credits

Featured Image: Image by Bright Data. Used with permission.

Using SEO To Capture Growth Opportunities In Emerging Markets via @sejournal, @motokohunt

With the increase in AI-generated search results and the growing popularity of answer engines, multinational businesses experience declining organic traffic and revenue from organic searches in more established markets.

It might be time to take a page from the investment market’s playbook and focus resources to target emerging markets.

Market Opportunity

Market volatility in the U.S. and the potential for trade wars have many multinationals diversifying production away from China, resulting in analysts advocating a significant shift in capital investments into emerging markets.

Emerging markets and developing economies are expected to drive global economic growth through 2035 at an average rate of 4.06% compared to 1.59% in advanced economies.

India is one of the fastest-growing major economies, with projections of 6.5% growth. It is closely followed by Southeast Asia’s 5% growth.

India

  • India represents one of the world’s fastest-growing digital markets, with a 27% CAGR. India’s ecommerce market is expected to reach $350 billion by 2030. The market presents unique revenue opportunities due to its massive scale, rapid digital adoption, and evolving consumer behaviors.
  • With over 750 million active internet users and over 600 million smartphone users, this mobile-first market favors app-based shopping (60%) over websites.
  • A strong marketplace presence through Amazon and Flipkart.

Southeast Asia (Particularly Indonesia, Vietnam, And The Philippines)

  • Southeast Asia offers significant growth potential, with high ecommerce adoption rates representing a $100 billion ecommerce market projected to reach $1 trillion by 2030.
  • With 64% of internet users shopping online and purchasing $73 billion in goods in 2023, Indonesia represents a significant addressable market of high-intent customers.
  • By 2030, the region’s median age will be 30.5, with a growing middle class that has disposable income.
  • Mobile-first markets, with 80% of online purchases made using mobile phones.

Latin America (Brazil, Mexico, And Colombia)

  • Latin America represents a compelling growth opportunity, with its digital market reaching $57.7 million in 2023 and projected to grow at a 27.1% CAGR from 2023 to 2030.
  • The region’s rapid digital transformation post-Covid has created a strong ecommerce infrastructure, with 74.63% of internet users regularly purchasing online.
  • A mobile-first region with over 85% of ecommerce via smartphones.
  • The online marketplace Mercado Libre (Meli) controls over 25% of ecommerce transactions, representing 40 million monthly transactions, offering a solid entry point into the region.

Organic Traffic Diversification

Gartner has predicted a 25% decline in traditional search volume by 2026, growing to as much as 50% by 2028, as consumers embrace generative AI-powered search, including various AI agents.

Organic traffic diversification through SEO and AI optimization in less saturated markets will help mitigate traffic and revenue risks associated with algorithmic and click volatility in established markets.

As AI-powered search adoption grows globally, emerging markets offer a unique landscape for early movers to capitalize on.

In emerging markets like India, China, and Vietnam, generative AI adoption outpaces developed economies as countries’ investments in digital transformation leverage AI to leapfrog traditional growth paths.

Businesses can tap into new audience segments and gain a competitive edge by focusing on SEO strategies tailored to emerging markets and including AI agents.

Moreover, by establishing a strong presence in emerging markets early on, companies can build brand recognition and authority, which are increasingly important factors in organic search engine rankings.

First-Mover Advantage

All companies are wrestling with leveraging AI-driven search and new platforms, and globally focused CMOs cannot do this in a vacuum or only for mature markets; they must do it for all markets simultaneously.

Making AI optimization a global initiative creates a significant opportunity for multinationals to benefit from first-mover advantages in emerging markets.

  1. Establishing Authority: By being among the first to optimize for emerging search platforms and AI-driven search in new markets, CMOs can establish their brands as authorities in their respective industries.
  2. Capturing Market Share: Early adoption of AI optimization and advanced SEO strategies in emerging markets allows CMOs to capture a larger share of the search landscape before competitors enter the space.
  3. Building Brand Recognition: As search engines increasingly favor recognized brands, entering a market early can help CMOs build strong brand recognition, translating into higher search rankings and increased visibility.

By focusing on SEO to capture growth opportunities in emerging markets, CMOs can drive significant revenue growth and establish a strong presence before competitors.

This approach aligns with the evolving role of CMOs, who are increasingly expected to focus on revenue growth and collaborate across departments and markets to drive business success.

Resource Allocation & Budget Planning

To capture this opportunity, businesses need to think globally and refactor their search marketing budgets, strategies, and programs to be more globally focused.

This will require a shift in the size and types of investments in talent, tools, and research to support the following key areas:

  • Product Insights: Identify products and solutions that can be adapted and aligned to the needs and wants of local market consumers.
  • Local Language Optimization: Localizing your content is critical, as 73% of consumers want reviews in their local language, and 40% will not buy from websites that aren’t in their local language.
  • Mobile-First SEO: In emerging markets, smartphones dominate internet access and purchases, making mobile-first websites and optimizing for mobile commerce critical for success.
  • AI Optimization: Expertise is needed to create and refactor content and optimize web infrastructure for AI results, leveraging emerging platforms like ChatGPT, Perplexity, or AI-enhanced region-specific search engines and marketplaces to gain a competitive edge in these markets.
  • International & Technical SEO: Expertise and collaboration will be essential for implementing a team to manage international SEO architecture.
  • Regional And Local Consultants: Local experts will be a great addition to your team for local market nuances, regional search behaviors, localization, and link building.

The Strategic Imperative For CMOs

For forward-thinking multinational CMOs, the shift toward AI-driven search should not be viewed as a threat but as a strategic inflection point.

The traditional search landscape is evolving, and emerging markets present a unique opportunity to future-proof SEO efforts while driving long-term business growth.

Companies that proactively invest in SEO and AI optimization in high-growth regions will position themselves ahead of competitors who hesitate to expand beyond their core markets.

The digital transformation unfolding in emerging economies creates an environment where early movers can establish dominance, capture market share, and future-proof their search visibility.

This is not just an SEO initiative – it’s a strategic business imperative.

As AI continues reshaping search behaviors, those who recognize and act on the potential of emerging markets today will reap the rewards of sustained digital growth tomorrow.

By allocating resources to SEO in these regions, CMOs can help their organizations mitigate risks associated with declining search volumes in traditional markets while building new revenue streams in markets poised for explosive digital adoption.

Now is the time to pivot, invest, and lead the charge in capturing the next wave of organic search-driven growth.

Featured Image: Who is Danny/Shutterstock

‘Do’ Queries Are an SEO Priority

Search engine optimizers have long segregated searchers’ intent into three types: to gain information, make a purchase, or locate a business or a person.

Google’s latest quality raters’ guidelines offer a different intent approach: “know simple,” “know,” and “do.” Google’s method helps prioritize optimization efforts for today’s AI-driven search results.

Here’s how to adjust your organic search strategy based on the new guidelines.

‘Know simple’ queries

Per Google, searchers needing a quick answer have a “know simple” intent. Examples include “weather,” “when was the Empire State Building constructed,” or “how much protein in an egg.”

Those are low-priority keywords because Google now provides very satisfying answers in search results, removing the need to click (especially with AI Overviews).

That doesn’t mean ignoring “know simple” questions on your site. Respond to visitor queries to keep them from leaving, but don’t expect the answers to drive traffic.

‘Know’ queries

“Know” queries demand longer, more detailed answers. Before AI Overviews, Google served only “featured snippets” for quick answers that required searchers to click links for the full explanation.

AI Overviews provide much more detail than featured snippets, and there’s often no need to click elsewhere. Consider, for example, the AI Overviews response to “why take probiotics.” The response includes links, but its thoroughness suggests no need for further research.

Nonetheless, creating and optimizing content for “know” queries could generate relevant traffic and, like “know simple” answers, help visitors.

The response in AI Overviews to “why take probiotics” is thorough and detailed.

‘Do’ queries

“Do” searches imply an action and represent huge organic search potential. The action could be to purchase an item (“magnesium online”) or, say, to take a vacation (“what to see in Hawaii”).

Google states some “do” queries are “open-ended,” meaning many types of content could help. For example, searchers of “bathroom organization ideas” may want to browse images and videos, read how-to guides, or both.

Many “do” queries are impossible to answer in an AI Overview or featured snippet. Searchers will likely click off the results to perform the action.

That is why such queries have solid SEO potential.

The intent of “do” queries could be commercial or informational; both are important for SEO. Informational queries offer an opportunity to show products in context, such as a cabinet seller responding to the “bathroom ideas” search. This problem-solving content can drive sales while helping AI understand your items.

For example, Home Depot produces many how-to guides that list and link to relevant products. For a “drywall repair” query, Home Depot’s tutorial ranks number 1 in organic search on Google, explaining how to patch and repair drywall and linking to those products.

The tutorial includes a video, difficulty level, and duration to complete. Users can add products without leaving the page. The tutorial responds to a “do” query and generates organic search traffic that drives conversions.

Is Google’s Use Of Compressibility An SEO Myth? via @sejournal, @martinibuster

I recently came across an SEO test that attempted to verify whether compression ratio affects rankings. It seems there may be some who believe that higher compression ratios correlate with lower rankings. Understanding compressibility in the context of SEO requires reading both the original source on compression ratios and the research paper itself before drawing conclusions about whether or not it’s an SEO myth.

Search Engines Compress Web Pages

Compressibility, in the context of search engines, refers to how much web pages can be compressed. Shrinking a document into a zip file is an example of compression. Search engines compress indexed web pages because it saves space and results in faster processing. It’s something that all search engines do.

Websites & Host Providers Compress Web Pages

Web page compression is a good thing because it helps search crawlers quickly access web pages, which in turn signals to Googlebot that the server isn’t strained and it’s okay to grab even more pages for indexing.

Compression speeds up websites, providing site visitors a high-quality user experience. Most web hosts automatically enable compression because it’s good for websites and site visitors, and also good for web hosts because it reduces bandwidth loads. Everybody wins with website compression.
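
You can check whether a host serves compressed responses with a quick request. Here is a minimal sketch using Python’s requests library, with example.com standing in for a real site:

```python
# Check whether a server responds with HTTP compression.
import requests

r = requests.get(
    "https://www.example.com/",
    headers={"Accept-Encoding": "gzip"},  # advertise the encoding we accept
)
# requests decompresses the body automatically, but the response header
# still reports which encoding the server used, e.g. "gzip".
print(r.headers.get("Content-Encoding"))
```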

High Levels Of Compression Correlate With Spam

Researchers at a search engine discovered that highly compressible web pages correlated with low-quality content. The study, called Spam, Damn Spam, and Statistics: Using Statistical Analysis to Locate Spam Web Pages (PDF), was conducted in 2006 by two of the world’s leading researchers, Marc Najork and Dennis Fetterly.

Najork currently works at DeepMind as a Distinguished Research Scientist. Fetterly, a software engineer at Google, is an author of many important research papers related to search, content analysis, and other related topics. This research paper isn’t just any research paper; it’s an important one.

What the research paper shows is that 70% of web pages that compress at a ratio of 4.0 or higher tended to be low-quality pages with a high level of redundant word usage. The average compression level of sites was around 2.0.

Here are the averages of normal web pages listed by the research paper (a sketch of the measurement follows the list):

  • Mode of 2.0: the most frequently occurring compression ratio in the dataset is 2.0.
  • Median of 2.1: half of the pages have a compression ratio below 2.1, and half have a compression ratio above it.
  • Mean of 2.11: on average, the compression ratio of the pages analyzed is 2.11.
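
A compression ratio in this sense is simply a page’s uncompressed size divided by its compressed size. Here is a minimal sketch of the measurement, assuming gzip as the compressor (the sample strings are invented):

```python
# Compute a compression ratio: uncompressed bytes / gzip-compressed bytes.
import gzip

def compression_ratio(text: str) -> float:
    raw = text.encode("utf-8")
    return len(raw) / len(gzip.compress(raw))

varied = ("The study measured how well page text compresses. Pages with "
          "varied wording shrink less than pages that repeat phrases.")
repetitive = "buy cheap widgets best cheap widgets buy now " * 200

print(round(compression_ratio(varied), 2))      # lower ratio for varied text
print(round(compression_ratio(repetitive), 2))  # far higher ratio for repetition
```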

Compressibility would be an easy first-pass way to filter out obvious content spam, so it makes sense that search engines would use it to weed out heavy-handed content spam. But weeding out spam is more complicated than simple solutions. Search engines use multiple signals because doing so results in a higher level of accuracy.

The researchers reported that 70% of sites with a compression level of 4.0 or higher were spam. That means that the other 30% were not spam sites. There are always outliers in statistics, and that 30% of non-spam sites is why search engines tend to use more than one signal.

Do Search Engines Use Compressibility?

It’s reasonable to assume that search engines use compressibility to identify heavy handed obvious spam. But it’s also reasonable to assume that if search engines employ it they are using it together with other signals in order to increase the accuracy of the metrics. Nobody knows for certain if Google uses compressibility.

Is There Proof That Compression Is An SEO Myth?

Some SEOs have published research analyzing the rankings of thousands of sites for hundreds of keywords. They found that both the top-ranking and bottom-ranked sites had a compression ratio of about 2.4. The difference between their compression ratios was just 2%, meaning the scores were essentially equal. Those results are close to the normal average range of 2.11 reported in the 2006 scientific study.

The SEOs claimed that the mere 2% higher compression levels of the top-ranked sites over the bottom-ranked sites prove that compressibility is an SEO myth. Of course, that claim is incorrect. The average compression ratio of normal sites in 2006 was 2.11, which means the average 2.4 ratio in 2025 falls well within the range of normal, non-spam websites.

The ratio for spam sites is 4.0, so the fact that both sets of top and bottom ranked sites are about 2.4 ratio is meaningless since both scores fall within the range of normal.

If we assume that Google is using compressibility, a site would have to produce a compression ratio of 4.0, plus send other low-quality signals, to trigger an algorithmic action. If that happened, those sites wouldn’t be in the search results at all because they wouldn’t be in the index, and therefore there is no way to test that with the SERPs, right?

It would be reasonable to assume that the sites with high 4.0 compression ratios were removed. But we don’t know that; it’s not a certainty.

Is Compressibility An SEO Myth?

Compressibility may not be an SEO myth. But it’s probably not anything publishers or SEOs should worry about as long as they’re avoiding heavy-handed tactics like keyword stuffing or repetitive cookie-cutter pages.

Google uses de-duplication, which removes duplicate pages from its index and consolidates PageRank signals to whichever page it chooses as the canonical page (if it chooses one). Publishing duplicate pages will likely not trigger any kind of penalty, including anything related to compression ratios, because, as was already mentioned, search engines don’t use signals in isolation.

U.S. DOJ Antitrust Filing Proposes 4 Ways To Break Google’s Monopoly via @sejournal, @martinibuster

The plaintiffs in an antitrust lawsuit against Google filed a revised proposed final judgment for the judge in the case to consider. The proposal comes after a previous ruling where the court determined that Google broke antitrust laws by illegally maintaining its monopoly.

The legal filing by the plaintiffs, the United States Department of Justice and State Attorneys General, argues that Google has maintained monopolies in search services and text advertising through anticompetitive practices.

The filing proposes four ways to loosen Google’s monopolistic hold on search and advertising.

  1. Requiring Google to separate Chrome from its business—this could mean selling it or spinning it off into an independent company.
  2. Limiting Google’s payments to companies like Apple for making Google the default search engine, reducing its ability to secure exclusive deals.
  3. Stopping Google from favoring its own products over competitors in search results and other services, ensuring a more level playing field.
  4. Increasing transparency in Google’s advertising and data practices so competitors have fairer access to key information.

The proposal asks that Google be subjected to continuous oversight through mandatory reporting to ensure transparency in Google’s advertising and data practices:

“Google must provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes.”

It also suggests ongoing enforcement to guarantee that Google doesn’t impose new restrictions that undermine transparency requirements:

“Google must not limit the ability of advertisers to export in real time (by downloading through an interface or API access) data or information relating to their entire portfolio of ads or advertising campaigns bid on, placed through, or purchased through Google.”

The goal of the above section is to increase transparency in Google’s advertising system and make it easier for advertisers to analyze their ad performance.

Real-time access ensures advertisers can make immediate adjustments to their campaigns instead of waiting for delayed reports, and it ensures advertisers aren’t locked into Google’s advertising system by being held hostage to their historical data.

The legal filing seeks government-imposed restrictions and changes to Google’s advertising business practices. It proposes remedies for how Google should be regulated or restructured following the court’s earlier ruling that Google engaged in monopolistic practices. However, this is not the final judgment; the court must still decide whether to adopt, modify, or reject these proposed remedies.

Why Google May Adopt Vibe Coding For Search Algorithms via @sejournal, @martinibuster

A new trend in Silicon Valley, Vibe Coding, is driving an exponential acceleration in how quickly engineers can develop products and algorithms. This approach aligns with principles outlined by Google co-founder Sergey Brin in a recent email to DeepMind engineers.

Top Silicon Valley insiders call Vibe Coding the “dominant way to code,” and Brin’s message suggests that Google will embrace it to dramatically speed up AI development. Given its potential, this approach may also extend to Google’s search algorithms, leading to more changes to how search results are ranked.

Vibe Coding Is Here To Stay

Four Y Combinator executives agreed that vibe coding is a very big deal but were surprised at how fast it has overtaken the industry. Jared Friedman observed that it’s like something out of the fairy tale Jack and the Beanstalk, where world-changing magic beans sprout into gigantic beanstalks overnight.

Garry Tan agreed, saying:

“I think our sense right now is this isn’t a fad. This isn’t going away. This is actually the dominant way to code, and if you’re not doing it, you might be left behind. This is here to stay.”

What Is Vibe Coding?

Vibe coding is software engineering with AI:

  • Software engineers use AI to generate code rather than writing it manually.
  • They rely on natural language prompts to guide software development.
  • They prioritize speed and iteration.
  • Time isn’t spent on debugging, as code is simply regenerated until it works.
  • The focus of software engineering shifts from writing code to choosing which problems to solve.
  • AI is leveraged for rapid code regeneration instead of traditional debugging.
  • Coding is speeding up exponentially.

Vibe coding is a way of creating code with AI, with an emphasis on speed. That means it’s increasingly less necessary to debug code because an engineer can simply re-roll the code generation multiple times until the AI gets it right.

A recent tweet by Andrej Karpathy kicked off a wave of excitement in Silicon Valley. Karpathy, a prominent AI researcher and former director of AI at Tesla, described what Vibe Coding is and explained why it’s the fastest way to code with AI. It’s so reliable that he doesn’t even check the modifications the AI makes (referred to as “diffs”).

Karpathy tweeted:

“There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.

Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore.

When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while.

Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing.

I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

Sergey Brin Emphasizes Vibe Coding Principles

A recent email from Google co-founder Sergey Brin to DeepMind engineers emphasized the need to integrate AI into their workflow to reduce time spent on coding. The email states that code matters most and that AI will improve itself, advising that if it’s simpler to prompt an AI for a solution, then that’s preferable to training an entirely new model. Brin describes this as highly important for becoming efficient coders. These principles align with Vibe Coding, which prioritizes speed, simplicity, and AI-driven development.

Brin also recommends using first-party code (code developed by Google) instead of relying on open-source or third-party software. This strongly suggests that Google intends to keep its AI advancements proprietary rather than open-source. That may mean any advancements created by Google will not be open-sourced and may not show up in research papers but instead may be discoverable through patent filings.

Brin’s message de-emphasizes the use of LoRA, a machine learning technique used to fine-tune AI models efficiently. This implies that he wants DeepMind engineers to prioritize efficient workflows rather than spending excessive time fine-tuning models. This also suggests that Google is shifting focus toward simpler, more scalable approaches like vibe coding which rely on prompt engineering.

Sergey Brin wrote:

“Code matters most — AGI will happen with takeoff, when the AI improves itself. Probably initially it will be with a lot of human help so the most important is our code performance. Furthermore this needs to work on our own 1p code. We have to be the most efficient coders and AI scientists in the world by using our own AI.

Simplicity — Lets use simple solutions where we can. Eg if prompting works, just do that, don’t posttrain a separate model. No unnecessary technical complexities (such as lora). Ideally we will truly have one recipe and one model which can simply be prompted for different uses.

Speed — we need our products, models, internal tools to be fast. Can’t wait 20 minutes to run a bit of python on borg.”

Those statements align with the principles of vibe coding, so it’s important to understand what it is and how it may affect how Google develops search algorithms and AI that may be used for the purposes of ranking websites.

Software Engineers Transitioning To Product Engineers

A recent podcast by Y Combinator, a Silicon Valley startup accelerator company, discussed how vibe coding is changing what it means to be a software engineer and how it will affect hiring practices.

The podcast hosts quoted multiple people:

Leo Paz, Founder of Outlit observed:

“I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Abhi Aiyer of Mastra shared how their coding practices changed:

“I don’t write code much. I just think and review.”

One of the podcast hosts, Jared Friedman, Managing Partner at Y Combinator, said:

“This is a super technical founder whose last company was also a dev tool. He’s extremely able to code, and so it’s fascinating to have people like that saying things like this.”

The hosts next quoted Abhi Balijepalli of Copycat:

“I am far less attached to my code now, so my decisions on whether we decide to scrap or refactor code are less biased. Since I can code 3 times as fast, it’s easy for me to scrap and rewrite if I need to.”

Garry Tan, President & CEO, Y Combinator commented:

“I guess the really cool thing about this stuff is it actually parallelizes really well.”

He quoted Yoav Tamir of Casixty:

“I write everything with Cursor. Sometimes I even have two windows of Cursor open in parallel and I prompt them on two different features.”

Tan commented that this makes sense and asked why not have three instances of Cursor open in order to accomplish even more.

The panelists on the podcast then cited Jackson Stokes of Trainloop, who explained the exponential scale of how fast coding has become:

“How coding has changed: six months ago to one month ago, a 10X speedup. One month ago to now, a 100X speedup. Exponential acceleration. I’m no longer an engineer, I’m a product person.”

Garry Tan commented:

“I think that might be something that’s happening broadly. You know, it really ends up being two different roles you need. It actually maps to how engineers sort of self assign today, in that either you’re front-end or backend. And then backend ends up being about actually infrastructure and then front-end is so much more actually being a PM (product manager)…”

Harj Taggar, Managing Partner, Y Combinator, observed that the LLMs are going to push people toward the role of making choices, and that the actual writing of code will become less important.

Why Debugging With AI Is Unnecessary

An interesting wrinkle in vibe coding is that one of the ways it speeds up development is that software engineers no longer have to spend long hours debugging; they can skip traditional debugging entirely and simply regenerate the code. This means they are able to push code out the door faster than ever before.

Tan commented on how poor AI is at debugging:

“…one thing the survey did indicate is that this stuff is terrible at debugging. And so… the humans have to do the debugging still. They have to figure out well, what is the code actually doing?

There doesn’t seem to be a way to just tell it, debug. You were saying that you have to be very explicit, like as if giving instructions to a first time software engineer.”

Jarede offered his observation on AI’s ability to debug:

“I have to really spoon feed it the instructions to get it to debug stuff. Or you can kind of embrace the vibes. I’d say Andrej Karpathy style, sort of re-roll, just like tell it to try again from scratch.

It’s wild how your coding style changes when actually writing the code becomes a 1000x cheaper. Like, as a human you would never just like blow away something that you’d worked on for a very long time and rewrite from scratch because you had a bug. You’d always fix the bug. But for the LLM, if you can just rewrite a thousand lines of code in just six seconds, like why not?”

Tan observed that it’s like how people use AI image generators: if there’s something they don’t like, they don’t even change the prompt; they simply click re-roll five times, and on the fifth time it works.

Vibe Coding And Google’s Search Algorithms

While Sergey Brin’s email does not explicitly mention search algorithms, it advocates AI-driven, prompt-based development at scale and high speed. Since Vibe Coding is now the dominant way to code, it is likely that Google will adopt this methodology across its projects, including the development of future search algorithms.

Watch the Y Combinator video roundtable:

Vibe Coding Is The Future

Featured Image by Shutterstock/bluestork