Google Begins Rolling Out March Core Algorithm Update via @sejournal, @MattGSouthern

Google has officially begun rolling out its March 2025 core algorithm update, according to an announcement posted to the Google Search Status Dashboard today.

The update commenced at 9:23 AM PDT and is expected to fully deploy across all of Google’s search systems in up to two weeks.

The company provided minimal details beyond the timing and expected duration of the rollout.

The official announcement reads:

“Released the March 2025 core update. The rollout may take up to 2 weeks to complete.”

What This Means For SEO Professionals

Core updates are comprehensive changes to Google’s main search algorithms and systems.

Unlike more minor updates that might focus on specific issues, core updates typically produce noticeable changes to search rankings across the web.

Website owners and SEO professionals should expect fluctuations in search visibility and rankings over the coming weeks as the update gradually rolls out.

These changes often stabilize once the update is fully implemented, though permanent shifts in positioning can occur based on how the new algorithm evaluates content quality and relevance.

Preparing For Algorithm Changes

As with previous core updates, Google hasn’t provided specific details about changes made to its ranking systems. The company typically advises creating high-quality content rather than trying to fix particular issues when rankings drop after an update.

Monitor your analytics during this period to identify any significant changes in traffic or rankings. Documenting these changes can help determine whether adjustments are needed once the update has been fully implemented.

Search Engine Journal will continue to monitor the impact of this update and provide additional information as it becomes available.

WordPress Backup Plugin Vulnerability Affects 5+ Million Websites via @sejournal, @martinibuster

A high-severity vulnerability was discovered and patched in the All-in-One WP Migration and Backup plugin, which has over five million installations. The vulnerability requires no user authentication, which would ordinarily make it easier for an attacker to compromise a website, but the risk is mitigated by the restricted conditions needed to trigger it.

The vulnerability was assigned a severity rating of 7.5 (High), which is below the highest severity level, labeled Critical.

Unauthenticated PHP Object Injection

The vulnerability is an unauthenticated PHP object injection, but it’s less severe than a typical unauthenticated PHP object injection, where an attacker could exploit the flaw directly. This specific vulnerability requires a user with administrator-level credentials to export and restore a backup with the plugin in order to trigger the exploit.

The way this kind of vulnerability works is that the WordPress plugin processes potentially malicious data during backup restoration without properly verifying it. Because the attack opportunity is so narrow, exploiting it is less straightforward.
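To illustrate the general class of flaw, here is a minimal conceptual sketch in Python rather than the plugin’s actual PHP code: deserializing untrusted input hands control to whatever objects an attacker embeds in it. (In PHP, the attacker additionally needs a “POP chain” of exploitable magic methods from some other installed plugin or theme, which, as Wordfence notes below, isn’t present in this plugin on its own.)

```python
import os
import pickle

# Conceptual analogy only: Python's pickle, not the plugin's PHP code.
# Deserializing untrusted data lets an attacker smuggle in an object whose
# behavior runs when it is deserialized.

class Malicious:
    def __reduce__(self):
        # When unpickled, ask the loader to call os.system() with our payload.
        return (os.system, ("echo attacker-controlled code runs here",))

untrusted_bytes = pickle.dumps(Malicious())  # what an attacker might supply

# A vulnerable "restore" routine that blindly deserializes whatever it is given:
pickle.loads(untrusted_bytes)  # executes the attacker's payload
```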

Nevertheless, if the right conditions are met, an attacker can delete files, access sensitive information, and run malicious code.

According to a report by Wordfence:

“The All-in-One WP Migration and Backup plugin for WordPress is vulnerable to PHP Object Injection in all versions up to, and including, 7.89 via deserialization of untrusted input in the ‘replace_serialized_values’ function.

This makes it possible for unauthenticated attackers to inject a PHP Object. No known POP chain is present in the vulnerable software. If a POP chain is present via an additional plugin or theme installed on the target system, it could allow the attacker to delete arbitrary files, retrieve sensitive data, or execute code. An administrator must export and restore a backup in order to trigger the exploit.”

The vulnerability affects versions up to and including 7.89. Users of the plugin are advised to update to the latest version, which at the time of writing is 7.90.

Read the Wordfence vulnerability advisory:

All in One WP Migration <= 7.89 – Unauthenticated PHP Object Injection

Google Publishes New Robots.txt Explainer via @sejournal, @martinibuster

Google published a new Robots.txt refresher explaining how Robots.txt enables publishers and SEOs to control search engine crawlers and other bots (that obey Robots.txt). The documentation includes examples of blocking specific pages (like shopping carts), restricting certain bots, and managing crawling behavior with simple rules.

From Basics To Advanced

The new documentation offers a quick introduction to what Robots.txt is and gradually progresses to increasingly advanced coverage of what publishers and SEOs can do with Robots.txt and how it benefits them.

The main point of the first part of the document is to introduce robots.txt as a stable web protocol with a 30-year history that’s widely supported by search engines and other crawlers.

Google Search Console will report a 404 error message if the robots.txt file is missing. That’s fine, but if seeing the error in GSC bothers you, you can wait 30 days and the warning will drop off. An alternative is to create a blank robots.txt file, which is also acceptable to Google.

Google’s new documentation explains:

“You can leave your robots.txt file empty (or not have one at all) if your whole site may be crawled, or you can add rules to manage crawling.”

From there it covers the basics like custom rules for restricting specific pages or sections.

The advanced uses of robots.txt cover these capabilities (see the example after this list):

  • Targeting specific crawlers with different rules.
  • Blocking URL patterns such as PDFs or search pages.
  • Granular control over specific bots.
  • Comments for internal documentation.
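
As an illustration (this example is ours, not taken from Google’s documentation, and “ExampleBot” is a made-up crawler name), a robots.txt file combining those capabilities might look like this:

```
# Illustrative robots.txt ("ExampleBot" is a made-up crawler name)

# Rules for all crawlers: keep the cart, internal search, and PDF files out
User-agent: *
Disallow: /cart/
Disallow: /search/
Disallow: /*.pdf$

# Stricter rules for one specific crawler
User-agent: ExampleBot
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```

The * and $ pattern-matching characters used above are supported by Google, though not every crawler honors them.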

The new documentation finishes by describing how simple it is to edit the robots.txt file (it’s a text file with simple rules), so all you need is a plain text editor. Many content management systems provide a way to edit it, and there are tools for testing whether the robots.txt file uses the correct syntax.

Read the new documentation here:

Robots Refresher: robots.txt — a flexible way to control how machines explore your website

Featured Image by Shutterstock/bluestork

AI Model Showdown: Top Choices For Text, Image, & Video Generation via @sejournal, @MattGSouthern

With so many AI models available today, it’s tough to decide where to begin. A recent study from Quora’s Poe provides guidance for those unsure about which models to choose.

The study analyzes millions of interactions to highlight the most popular tools for generating text, images, and videos.

With nearly every tech company offering an AI solution, it’s easy to get overwhelmed by choices. Poe’s data clarifies which models are trusted and widely used.

Whether you’re new to AI or experienced, this report shows trends that can help you find the best models. Remember that this data represents Poe subscribers and may not reflect the broader AI community.

Text Generation Trends

A Two-Way Race

The study shows that among Poe subscribers, Anthropic models are quickly becoming as popular as OpenAI’s, especially after the release of Claude 3.5 Sonnet. The usage of text models from both providers is now almost evenly split.

Rapid Adoption of New Releases

Poe users often switch to the latest models, even if loyal to a specific brand. For example, people rapidly move from OpenAI’s GPT-4 to GPT-4o or from Claude 3 to Claude 3.5.

Emerging Players

DeepSeek’s R1 and V3 have captured about 7% of the messages on Poe. Google’s Gemini family has seen a slight decline in use among Poe subscribers but remains a key player.

Image Generation Trends

Market Share of Early Movers

DALL-E-3 and StableDiffusion were once leaders in image generation, but their shares have dropped by about 80%. This decline occurred as the number of image generation models increased from three to around 25.

Leading Models

The FLUX family from BlackForestLabs is now the leading image model, holding a nearly 40% share, while Google’s Imagen3 family has about a 30% share.

Smaller Models

Smaller image providers like Playground and Ideogram update their services frequently, which helps them maintain a loyal user base. However, they only account for about 10% of Poe’s image generation usage.

Video Generation Trends

An Emerging Industry

Video generation was almost nonexistent on Poe until late 2024, but it has quickly grown in popularity. Now, at least eight providers offer this ability.

Runway: Most Used Model

Runway’s single video model handles 30–50% of video generation requests. Although its usage is lower than it used to be, many people still choose this brand.

New Player: Veo-2

Since launching on Poe, Google’s Veo-2 has gained about 40% of the market, showing how quickly customer preferences can change. Other new models, such as Kling-Pro v1.5, Hailuo-AI, HunyuanVideo, and Wan-2.1, have captured around 15% of the market.

Key Takeaway & Looking Ahead

The data shows a clear pattern of newer models replacing older ones in user preference. If you want the best performance, use the latest version rather than sticking with familiar but outdated models.

Whether these usage patterns will hold steady or continue to shift remains to be seen. At some point, cost will be a barrier to adoption, as new models tend to get more expensive with every release.

In future reports, Poe plans to share insights on how different models fit various tasks and price points.


Featured Image: stokkete/Shutterstock

New Wix Automations Makes It Easy To Grow Revenue And Engagement via @sejournal, @martinibuster

Wix announced Automations, a new automation builder that enables businesses to create and manage custom actions, like sending emails based on customer activity. Users are able to create these automations with an easy-to-use visual interface and track their performance from a dashboard.

Wix Automations Is Powerful But Easy To Use

There are four key features:

  • Intuitive Automation Design
    Simplifies the process of creating advanced automations.
  • Advanced Customization
    Supports conditions and formulas for creating highly customizable automations.
  • Centralized Automation Management
    Users can track key metrics, adjust settings in real time, and manage all automations, no matter which apps they’re connected to.
  • Email Automation Insights
    Provides detailed reporting on email success rates and engagement which enables businesses to fine-tune their email messaging.

The new Automations feature integrates with Wix Services, so businesses can use customer data to set up personalized automations like custom discounts based on what customers buy.

A user-friendly interface makes it easy to click and build advanced automations based on site visitor actions. Wix Automations supports conditions and formulas for creating customizable automations. What makes Wix Automations powerful is that these features enable users to easily set up complex, multi-step actions.

For example, a customer purchase can be the trigger to check a condition, such as whether the total is over $50. If the condition is met, a formula calculates a 10% discount, and the automation sends the customer an email with a discount code for a future purchase.
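
Here is a rough sketch of that trigger, condition, formula, and action flow as generic Python, not Wix’s actual API or configuration format:

```python
# Generic sketch of a trigger -> condition -> formula -> action automation.
# This is not Wix code or its configuration format, just an illustration.

def send_email(to, subject, body):
    """Stand-in for whatever email service the automation would use."""
    print(f"To: {to}\nSubject: {subject}\n\n{body}")

def on_purchase(order):
    """Trigger: runs whenever a customer completes a purchase."""
    # Condition: only act on orders over $50
    if order["total"] > 50:
        # Formula: calculate a 10% discount for a future purchase
        discount = round(order["total"] * 0.10, 2)
        # Action: email the customer a discount code
        send_email(
            to=order["customer_email"],
            subject="Thanks for your order! Here's 10% off your next one",
            body=f"Use code THANKS10 to save ${discount} on a future purchase.",
        )

on_purchase({"customer_email": "shopper@example.com", "total": 72.00})
```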

According to the press release:

“The builder’s clear and intuitive design makes it easier than ever to build and manage automations, significantly improving efficiency by streamlining and automating tasks and, ultimately, enhancing overall user experience.

With the addition of conditions and formulas, the automations builder now allows users to create more accurate, highly tailored workflows that adapt to their business needs allowing businesses to operate more smoothly and effectively.”

Mullenweg Considers Delaying WordPress Releases Through 2027 via @sejournal, @martinibuster

A leaked WordPress Slack chat shows that Matt Mullenweg is considering limiting future WordPress releases to just one per year from now through 2027 and insists that the only way to get Automattic to contribute more is to pressure WP Engine to drop its lawsuit. One WordPress developer who read that message characterized it as blackmail.

WordPress Core Development

Mullenweg’s Automattic had already reduced its contributions to core, prompting a WordPress developer attending WordCamp Asia 2025 to plead with Matt Mullenweg to increase Automattic’s contributions to WordPress because his and so many other businesses depend on WordPress. Mullenweg smiled and said no without actually saying the word no.

Automattic’s January 2025 statement about reducing contributions:

“…Automattic will reduce its sponsored contributions to the WordPress project. This is not a step we take lightly. It is a moment to regroup, rethink, and strategically plan how Automatticians can continue contributing in ways that secure the future of WordPress for generations to come. Automatticians who contributed to core will instead focus on for-profit projects within Automattic, such as WordPress.com, Pressable, WPVIP, Jetpack, and WooCommerce. Members of the “community” have said that working on these sorts of things should count as a contribution to WordPress.

As part of this reset, Automattic will match its volunteering pledge with those made by WP Engine and other players in the ecosystem, or about 45 hours a week that qualify under the Five For the Future program as benefitting the entire community and not just a single company. These hours will likely go towards security and critical updates.

We’ve made the decision to reallocate resources due to the lawsuits from WP Engine. This legal action diverts significant time and energy that could otherwise be directed toward supporting WordPress’s growth and health. We remain hopeful that WP Engine will reconsider this legal attack, allowing us to refocus our efforts on contributions that benefit the broader WordPress ecosystem.

WP Engine’s historically slim contributions underscore the imbalance that must be addressed for the health of WordPress. We believe in fairness and shared responsibility, and we hope this move encourages greater participation across all organizations that benefit from WordPress.”

Leaked Slack Post

The post on Slack blamed WP Engine for the slowdown and encouraged others to put pressure on WP Engine to drop the suit.

The following is a leaked quote of Mullenweg’s post on the WordPress Slack channel, as posted in the Dynamic WordPress Facebook Group (must join the Facebook group to read the post) by a reliable source:

“Would like to put together a Zoom for core committers to discuss future release schedule, hopefully bringing together some of the conversations happening the past 6 weeks:
6.8 includes a lot of “overhang” contributions from Automatticians, including 890+ enhancements and bug fixes in Gutenberg.

I’d like to make sure we get extra testing on 6.8 from web hosts, especially if they can upgrade perhaps their company blogs or something, employee sites, etc to make sure upgrades and everything work well in all environments and with the most popular plugins without regressions.
The Chromecast update issues today (https://x.com/james_dunthorne/status/1898871402049999126) remind us how easily this can happen.

I’m willing to commit people to early roll-out to WP .com to provide widespread testing with hundreds of thousands of users. This is very resource-intensive, but has contributed a lot to making sure releases are stable before they deploy to the wider array of non-engaged web hosts in the past.

We should consider modifying the release schedule: Other corporate sponsors are protesting WPE’s actions by pulling back contributions, which I think will effect some of the other largest contributors after Automattic.

The court schedule in the WP Engine lawsuit against Automattic, me, and WordPress .org (https://cloudup.com/c33IWQHdNMj) goes to jury trial in 2027. WPE appears to be unresponsive to public pressure to resolve things earlier. (As I said at WC Asia, I’m ready to end it yesterday.)

We are approaching 7.0 in two releases, which has an emotional valence and I’d rather not be purely maintenance. (Nor do I want to break our naming structure and do 6.10.)
One approach would be delaying 6.8 and making it the only release this year, 6.9 in 2026, and then aim for a 7.0 in late 2027 assuming a positive outcome of the jury trial.

FWIW I would estimate WPE is spending the equivalent of 60 engineers full-time salary at 250k/yr as plaintiffs / attackers, and Automattic a similar amount on defense. Imagine the project could do for democratizing publishing and competing against proprietary alternatives if that were going into core and community development.
Drop any other thoughts or agenda items you may have in this thread.”

Response to Mullenweg’s leaked post:

One Facebook user accused Mullenweg of trying to “blackmail” the WordPress community into pressuring WP Engine (WPE). They wrote that the community is more sympathetic to WPE than to Mullenweg. But in general, Mullenweg’s statement was met with a shrug: commenters feel a slower release schedule will give core contributors a chance to catch up on maintaining the core, which to them is a greater priority than adding more features to Gutenberg, which many of the developers in the group apparently don’t use.

One lone commenter in the Facebook discussion asked if anyone in the discussion had made a positive contribution to WordPress. At the time of writing, nobody had responded.

Is Google’s Use Of Compressibility An SEO Myth? via @sejournal, @martinibuster

I recently came across an SEO test that attempted to verify whether compression ratio affects rankings. It seems there may be some who believe that higher compression ratios correlate with lower rankings. Understanding compressibility in the context of SEO requires reading both the original source on compression ratios and the research paper itself before drawing conclusions about whether or not it’s an SEO myth.

Search Engines Compress Web Pages

Compressibility, in the context of search engines, refers to how much web pages can be compressed. Shrinking a document into a zip file is an example of compression. Search engines compress indexed web pages because it saves space and results in faster processing. It’s something that all search engines do.

Websites & Host Providers Compress Web Pages

Web page compression is a good thing because it helps search crawlers access web pages quickly, which in turn signals to Googlebot that crawling won’t strain the server and that it’s okay to fetch even more pages for indexing.

Compression speeds up websites, providing site visitors with a high-quality user experience. Most web hosts automatically enable compression because it’s good for websites and site visitors, and also good for web hosts because it saves on bandwidth. Everybody wins with website compression.

High Levels Of Compression Correlate With Spam

Researchers at a search engine discovered that highly compressible web pages correlated with low-quality content. The study, called Spam, Damn Spam, and Statistics: Using Statistical Analysis to Locate Spam Web Pages (PDF), was conducted in 2006 by two of the world’s leading researchers, Marc Najork and Dennis Fetterly.

Najork currently works at DeepMind as a Distinguished Research Scientist. Fetterly, a software engineer at Google, is an author of many important research papers related to search, content analysis, and other related topics. This research paper isn’t just any research paper; it’s an important one.

What the research paper shows is that 70% of web pages that compress at a ratio of 4.0 or higher tended to be low-quality pages with a high level of redundant word usage. The most common compression ratio across normal sites was around 2.0.

Here are the averages for normal web pages listed by the research paper (a sketch for computing such a ratio follows the list):

  • Compression ratio of 2.0:
    The mode: the most frequently occurring compression ratio in the dataset is 2.0.
  • Compression ratio of 2.1:
    The median: half of the pages have a compression ratio below 2.1, and half have one above it.
  • Compression ratio of 2.11:
    The mean: on average, the compression ratio of the pages analyzed is 2.11.
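
If you’re curious how your own pages compare, here is a minimal sketch of computing a gzip-based compression ratio in Python. It approximates the idea rather than reproducing the researchers’ exact methodology, and the thresholds quoted above come from their paper, not from this script:

```python
import gzip
import sys
from urllib.request import urlopen

def compression_ratio(html: str) -> float:
    """Uncompressed size divided by gzip-compressed size."""
    raw = html.encode("utf-8")
    return len(raw) / len(gzip.compress(raw))

if __name__ == "__main__":
    # Usage: python compression_ratio.py https://example.com/
    url = sys.argv[1]
    html = urlopen(url).read().decode("utf-8", errors="replace")
    print(f"{url} compresses at a ratio of {compression_ratio(html):.2f}")
    # Ratios around 2.0-2.1 were typical of normal pages in the study;
    # 4.0 or higher correlated with redundant, low-quality content.
```

A ratio near 2.0 simply means the page shrinks to about half its size when compressed; repetitive, keyword-stuffed text compresses much further.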

Compression ratio would be an easy first-pass way to filter out obvious, heavy-handed content spam, so it makes sense that search engines would use it that way. But weeding out spam is more complicated than simple solutions. Search engines use multiple signals because doing so results in a higher level of accuracy.

The researchers reported that 70% of sites with a compression ratio of 4.0 or higher were spam. That means the other 30% were not spam sites. There are always outliers in statistics, and that 30% of non-spam sites is why search engines tend to use more than one signal.

Do Search Engines Use Compressibility?

It’s reasonable to assume that search engines use compressibility to identify heavy-handed, obvious spam. But it’s also reasonable to assume that if search engines employ it, they use it together with other signals in order to increase accuracy. Nobody knows for certain whether Google uses compressibility.

Is There Proof That Compression Is An SEO Myth?

Some SEOs have published research analyzing the rankings of thousands of sites for hundreds of keywords. They found that both the top-ranking and bottom-ranked sites had a compression ratio of about 2.4. The difference between their compression ratios was just 2%, meaning the scores were essentially equal. Those results are close to the normal average range of 2.11 reported in the 2006 scientific study.

The SEOs claimed that the mere 2% higher compression levels of the top-ranked sites over the bottom-ranked sites prove that compressibility is an SEO myth. Of course, that claim is incorrect. The average compression ratio of normal sites in 2006 was 2.11, which means the average 2.4 ratio in 2025 falls well within the range of normal, non-spam websites.

The ratio for spam sites is 4.0, so the fact that both the top-ranked and bottom-ranked sites average a ratio of about 2.4 is meaningless, since both scores fall within the normal range.

If we assume that Google is using compressibility, a site would have to produce a compression ratio of 4.0, plus send other low-quality signals, to trigger an algorithmic action. If that happened, those sites wouldn’t be in the search results at all because they wouldn’t be in the index, and therefore there is no way to test for that in the SERPs.

It would be reasonable to assume that the sites with high 4.0 compression ratios were removed. But we don’t know that; it’s not a certainty.

Is Compressibility An SEO Myth?

Compressibility may not be an SEO myth. But it’s probably not anything publishers or SEOs should worry about as long as they’re avoiding heavy-handed tactics like keyword stuffing or repetitive, cookie-cutter pages.

Google uses de-duplication which removes duplicate pages from their index and consolidates the PageRank signals to whichever page they choose to be the canonical page (if they choose one). Publishing duplicate pages will likely not trigger any kind of penalty, including anything related to compression ratios, because, as was already mentioned, search engines don’t use signals in isolation.

U.S. DOJ Antitrust Filing Proposes 4 Ways To Break Google’s Monopoly via @sejournal, @martinibuster

The plaintiffs in an antitrust lawsuit against Google filed a revised proposed final judgment for the judge in the case to consider. The proposal comes after a previous ruling where the court determined that Google broke antitrust laws by illegally maintaining its monopoly.

The legal filing by the plaintiffs, the United States Department of Justice and State Attorneys General, argues that Google has maintained monopolies in search services and text advertising through anticompetitive practices.

The filing proposes four ways to loosen Google’s monopolistic hold on search and advertising.

  1. Requiring Google to separate Chrome from its business—this could mean selling it or spinning it off into an independent company.
  2. Limiting Google’s payments to companies like Apple for making Google the default search engine, reducing its ability to secure exclusive deals.
  3. Stopping Google from favoring its own products over competitors in search results and other services, ensuring a more level playing field.
  4. Increasing transparency in Google’s advertising and data practices so competitors have fairer access to key information.

The proposal asks that Google be subjected to continuous oversight through mandatory reporting to ensure transparency in Google’s advertising and data practices:

“Google must provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes.”

It also suggests ongoing enforcement to guarantee that Google doesn’t impose new restrictions that undermine transparency requirements:

“Google must not limit the ability of advertisers to export in real time (by downloading through an interface or API access) data or information relating to their entire portfolio of ads or advertising campaigns bid on, placed through, or purchased through Google.”

The goal of the above section is to increase transparency in Google’s advertising system and make it easier for advertisers to analyze their ad performance.

Real-time access ensures advertisers can make immediate adjustments to their campaigns instead of waiting for delayed reports, and it ensures that advertisers aren’t locked into Google’s advertising system by being held hostage to their historical data.

The legal filing calls for government-imposed restrictions and changes to Google’s advertising business practices. It proposes remedies for how Google should be regulated or restructured following the court’s earlier ruling that Google engaged in monopolistic practices. However, this is not the final judgment, and the court must still decide whether to adopt, modify, or reject these proposed remedies.

YouTube’s Creator Liaison Shares Advice For Mid-Roll Ad Changes via @sejournal, @MattGSouthern

YouTube Creator Liaison Rene Ritchie has advised content creators on adapting to YouTube’s upcoming mid-roll advertising changes.

These changes take effect on May 12 and will alter how ads appear within videos.

Background

Starting May 12, YouTube will implement a new system prioritizing mid-roll ad placements during natural content breaks rather than at potentially disruptive moments.

YouTube will automatically place ads at natural transitions in videos, but creators can manually control ad placements if they prefer.

This update introduces a hybrid approach, allowing creators to use automatic and manual mid-roll placements simultaneously.

According to YouTube’s early testing, channels adopting this combined approach have seen an average increase in ad revenue of 5%.

Ritchie’s Adaptation Strategy

Sharing his approach on X, Ritchie outlined specific steps he’s taking with his own YouTube channel:

“I’m turning on auto mid-rolls, since that system will continue to be improved and optimized by launch and over time. For new videos, I’m manually inserting additional slots if and as needed where I think it’ll provide the best experience for viewers.”

For existing content, Ritchie recommends a prioritized approach, stating:

“For back catalog, I’m sorting by current watch time and doing the same for the top 20-50 most-watched videos.”

Maintaining Creator Control

Ritchie addressed concerns about YouTube potentially removing manual placement options:

“No one is taking away manual mid-roll placements. Creators can still put slots wherever and whenever we want.”

He reminded creators that designated ad slots don’t guarantee ad placement but indicate where ads can potentially appear.

Ritchie drew a parallel to YouTube’s retention analytics and explained how the new ad feedback tool provides valuable insights.

“In the days before the retention graph in Analytics, my 10-second long intro might have caused a ton of people to dip from the video and I never knew it. Similarly, I can still put that mid-roll slot anywhere I want, but now I’m getting data about how it will perform.”

Ongoing Improvements

YouTube is actively refining the automatic detection system and will continue improving it after the May launch.

Ritchie notes there’s a mutual interest in making mid-rolls more effective:

“YouTube and creators share revenue, so it’s in everyone’s best interest to make mid-rolls work better.”

What Creators Should Do Now

Based on both YouTube’s official guidance and Ritchie’s recommendations, creators should:

  • Enable automatic mid-roll placement while maintaining manual control where needed
  • Review high-performing back catalog content first
  • Use the new feedback tool to identify potentially disruptive ad placements

Creators should also continue providing feedback to YouTube as the system develops; this interaction with Ritchie shows the team is listening.


Featured Image: Alejo Bernal/Shutterstock

Why Google May Adopt Vibe Coding For Search Algorithms via @sejournal, @martinibuster

A new trend in Silicon Valley, Vibe Coding, is driving an exponential acceleration in how quickly engineers can develop products and algorithms. This approach aligns with principles outlined by Google co-founder Sergey Brin in a recent email to DeepMind engineers.

Top Silicon Valley insiders call Vibe Coding the “dominant way to code,” and Brin’s message suggests that Google will embrace it to dramatically speed up AI development. Given its potential, this approach may also extend to Google’s search algorithms, leading to more changes to how search results are ranked.

Vibe Coding Is Here To Stay

The four Y Combinator executives on a recent podcast (covered below) agreed that vibe coding is a very big deal but were surprised at how fast it has overtaken the industry. Jarede Friedman observed that it’s like something out of the fairy tale Jack and the Beanstalk, where the world-changing magic beans sprout into gigantic beanstalks overnight.

Garry Tan agreed, saying:

“I think our sense right now is this isn’t a fad. This isn’t going away. This is actually the dominant way to code, and if you’re not doing it, you might be left behind. This is here to stay.”

What Is Vibe Coding?

Vibe coding is software engineering with AI:

  • Software engineers use AI to generate code rather than writing it manually.
  • Rely on natural language prompts to guide software development.
  • Prioritize speed and iteration.
  • Time isn’t spent on debugging as code is simply regenerated until it works.
  • Vibe coding shifts software engineering focus from writing code to choosing what kinds of problems to solve.
  • Leverage AI for rapid code regeneration instead of traditional debugging.
  • It is exponentially speeding up coding.

Vibe coding is a way of creating code with AI with an emphasis on speed. That means it’s increasingly less necessary to debug code because an engineer can simply re-roll the code generation multiple times until the AI gets it right.
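
A toy sketch of that “regenerate until it works” loop appears below. The llm_generate helper is hypothetical, standing in for whichever code-generation tool is used (Karpathy mentions Cursor Composer); the point is that a failing attempt gets re-rolled rather than debugged:

```python
import subprocess

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for whatever code-generating tool is used."""
    raise NotImplementedError("plug in your own LLM call here")

def vibe_code(prompt: str, test_command: list[str], max_rolls: int = 5):
    """Re-roll generated code until the tests pass instead of debugging it."""
    for _ in range(max_rolls):
        code = llm_generate(prompt)
        with open("generated.py", "w") as f:
            f.write(code)
        result = subprocess.run(test_command, capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it works; ship it without reading the diff
        # On failure, don't debug: paste the error back in and regenerate
        prompt += f"\n\nThe previous attempt failed with:\n{result.stdout}{result.stderr}"
    return None

# Usage (illustrative): vibe_code("Write a function that ...", ["pytest", "-q"])
```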

A recent tweet by Andrej Karpathy kicked off a wave of excitement in Silicon Valley. Karpathy, a prominent AI researcher and former director of AI at Tesla, described what Vibe Coding is and explained why it’s the fastest way to code with AI. It’s so reliable that he doesn’t even check the modifications the AI makes (referred to as “diffs”).

Karpathy tweeted:

“There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.

Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore.

When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while.

Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing.

I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

Sergey Brin Emphasizes Vibe Coding Principles

A recent email from Google co-founder Sergey Brin to DeepMind engineers emphasized the need to integrate AI into their workflow to reduce time spent on coding. The email states that code matters most and that AI will improve itself, advising that if it’s simpler to prompt an AI for a solution, then that’s preferable to training an entirely new model. Brin describes this as highly important for becoming efficient coders. These principles align with Vibe Coding, which prioritizes speed, simplicity, and AI-driven development.

Brin also recommends using first-party code (code developed by Google) instead of relying on open-source or third-party software. This strongly suggests that Google intends to keep its AI advancements proprietary rather than open-source. That may mean any advancements created by Google will not be open-sourced and may not show up in research papers but instead may be discoverable through patent filings.

Brin’s message de-emphasizes the use of LoRA, a machine learning technique used to fine-tune AI models efficiently. This implies that he wants DeepMind engineers to prioritize efficient workflows rather than spending excessive time fine-tuning models. This also suggests that Google is shifting focus toward simpler, more scalable approaches like vibe coding which rely on prompt engineering.

Sergey Brin wrote:

“Code matters most — AGI will happen with takeoff, when the AI improves itself. Probably initially it will be with a lot of human help so the most important is our code performance. Furthermore this needs to work on our own 1p code. We have to be the most efficient coder and AI scientists in the world by using our own AI.

Simplicity — Lets use simple solutions where we can. Eg if prompting works, just do that, don’t posttrain a separate model. No unnecessary technical complexities (such as lora). Ideally we will truly have one recipe and one model which can simply be prompted for different uses.

Speed — we need our products, models, internal tools to be fast. Can’t wait 20 minutes to run a bit of python on borg.”

Those statements align with the principles of vibe coding, so it’s important to understand what it is and how it may affect the way Google develops search algorithms and the AI that may be used to rank websites.

Software Engineers Transitioning To Product Engineers

A recent podcast by Y Combinator, a Silicon Valley startup accelerator company, discussed how vibe coding is changing what it means to be a software engineer and how it will affect hiring practices.

The podcast hosts quoted multiple people:

Leo Paz, Founder of Outlit observed:

“I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Abhi Aiyer of Mastra shared how their coding practices changed:

“I don’t write code much. I just think and review.”

One of the podcast hosts, Jarede Friedman, Managing Partner, Y Combinator said:

“This is a super technical founder whose last company was also a dev tool. He’s extremely able to code, and so it’s fascinating to have people like that saying things like this.”

They next quoted Abhi Balijepalli of Copycat:

“I am far less attached to my code now, so my decisions on whether we decide to scrap or refactor code are less biased. Since I can code 3 times as fast, it’s easy for me to scrap and rewrite if I need to.”

Garry Tan, President & CEO, Y Combinator commented:

“I guess the really cool thing about this stuff is it actually parallelizes really well.”

He quoted Yoav Tamir of Casixty:

“I write everything with Cursor. Sometimes I even have two windows of Cursor open in parallel and I prompt them on two different features.”

Tan commented on how much sense that makes and asked why not have three instances of Cursor open in order to accomplish even more.

The panelists on the podcast then cited Jackson Stokes of Trainloop, who explained the exponential scale of how much faster coding has become:

“How coding has changed six to one months ago: 10X speedup. One month ago to now: 100X speedup. Exponential acceleration. I’m no longer an engineer, I’m a product person.”

Garry Tan commented:

“I think that might be something that’s happening broadly. You know, it really ends up being two different roles you need. It actually maps to how engineers sort of self assign today, in that either you’re front-end or backend. And then backend ends up being about actually infrastructure and then front-end is so much more actually being a PM (product manager)…”

Harj Taggar, Managing Partner at Y Combinator, observed that LLMs are going to push people toward the role of making choices, and that the actual writing of code will become less important.

Why Debugging With AI Is Unnecessary

An interesting wrinkle in vibe coding is that one of the ways it speeds up development is that software engineers no longer have to spend long hours debugging. In fact, they don’t have to debug at all; they regenerate the code instead. This means they are able to push code out the door faster than ever before.

Tan commented on how poor AI is at debugging:

“…one thing the survey did indicate is that this stuff is terrible at debugging. And so… the humans have to do the debugging still. They have to figure out well, what is the code actually doing?

There doesn’t seem to be a way to just tell it, debug. You were saying that you have to be very explicit, like as if giving instructions to a first time software engineer.”

Jarede offered his observation on AI’s ability to debug:

“I have to really spoon feed it the instructions to get it to debug stuff. Or you can kind of embrace the vibes. I’d say Andrej Karpathy style, sort of re-roll, just like tell it to try again from scratch.

It’s wild how your coding style changes when actually writing the code becomes a 1000x cheaper. Like, as a human you would never just like blow away something that you’d worked on for a very long time and rewrite from scratch because you had a bug. You’d always fix the bug. But for the LLM, if you can just rewrite a thousand lines of code in just six seconds, like why not?”

Tan observed that it’s like how people use AI image generators: if there’s something they don’t like, they reiterate without even changing the prompt, simply clicking re-roll five times until, on the fifth try, it works.

Vibe Coding And Google’s Search Algorithms

While Sergey Brin’s email does not explicitly mention search algorithms, it advocates AI-driven, prompt-based development at scale and high speed. Since Silicon Valley insiders now call vibe coding the dominant way to code, it is likely that Google will adopt this methodology across its projects, including the development of future search algorithms.

Watch the Y Combinator Video Roundtable

Vibe Coding Is The Future

Featured Image by Shutterstock/bluestork