Mullenweg Considers Delaying WordPress Releases Through 2027 via @sejournal, @martinibuster

A leaked WordPress Slack chat shows that Matt Mullenweg is considering limiting future WordPress releases to just one per year from now through 2027 and insists that the only way to get Automattic to contribute more is to pressure WP Engine to drop its lawsuit. One WordPress developer who read that message characterized it as blackmail.

WordPress Core Development

Mullenweg’s Automattic has already reduced its contributions to core, prompting a WordPress developer attending WordCamp Asia 2025 to plead with Matt Mullenweg to increase Automattic’s contributions to WordPress because his business, and so many others, depend on it. Mullenweg smiled and said no without actually saying the word no.

Automattic’s January 2025 statement about reducing contributions:

“…Automattic will reduce its sponsored contributions to the WordPress project. This is not a step we take lightly. It is a moment to regroup, rethink, and strategically plan how Automatticians can continue contributing in ways that secure the future of WordPress for generations to come. Automatticians who contributed to core will instead focus on for-profit projects within Automattic, such as WordPress.com, Pressable, WPVIP, Jetpack, and WooCommerce. Members of the “community” have said that working on these sorts of things should count as a contribution to WordPress.

As part of this reset, Automattic will match its volunteering pledge with those made by WP Engine and other players in the ecosystem, or about 45 hours a week that qualify under the Five For the Future program as benefitting the entire community and not just a single company. These hours will likely go towards security and critical updates.

We’ve made the decision to reallocate resources due to the lawsuits from WP Engine. This legal action diverts significant time and energy that could otherwise be directed toward supporting WordPress’s growth and health. We remain hopeful that WP Engine will reconsider this legal attack, allowing us to refocus our efforts on contributions that benefit the broader WordPress ecosystem.

WP Engine’s historically slim contributions underscore the imbalance that must be addressed for the health of WordPress. We believe in fairness and shared responsibility, and we hope this move encourages greater participation across all organizations that benefit from WordPress.”

Leaked Slack Post

The post on Slack blamed WP Engine for the slowdown and encouraged others to put pressure on WP Engine to drop the suit.

The following is a leaked quote of Mullenweg’s post on the WordPress Slack channel, as posted in the Dynamic WordPress Facebook Group (you must join the Facebook group to read the post) by a reliable source:

“Would like to put together a Zoom for core committers to discuss future release schedule, hopefully bringing together some of the conversations happening the past 6 weeks:
6.8 includes a lot of “overhang” contributions from Automatticians, including 890+ enhancements and bug fixes in Gutenberg.

I’d like to make sure we get extra testing on 6.8 from web hosts, especially if they can upgrade perhaps their company blogs or something, employee sites, etc to make sure upgrades and everything work well in all environments and with the most popular plugins without regressions.
The Chromecast update issues today (https://x.com/james_dunthorne/status/1898871402049999126) remind us how easily this can happen.

I’m willing to commit people to early roll-out to WP .com to provide widespread testing with hundreds of thousands of users. This is very resource-intensive, but has contributed a lot to making sure releases are stable before they deploy to the wider array of non-engaged web hosts in the past.

We should consider modifying the release schedule: Other corporate sponsors are protesting WPE’s actions by pulling back contributions, which I think will effect some of the other largest contributors after Automattic.

The court schedule in the WP Engine lawsuit against Automattic, me, and WordPress .org ( https://cloudup.com/c33IWQHdNMj ) goes to jury trial in 2027. WPE appears to be unresponsive to public pressure to resolve things earlier. (As I said at WC Asia, I’m ready to end it yesterday.)

We are approaching 7.0 in two releases, which has an emotional valence and I’d rather not be purely maintenance. (Nor do I want to break our naming structure and do 6.10.)
One approach would be delaying 6.8 and making it the only release this year, 6.9 in 2026, and then aim for a 7.0 in late 2027 assuming a positive outcome of the jury trial.

FWIW I would estimate WPE is spending the equivalent of 60 engineers full-time salary at 250k/yr as plaintiffs / attackers, and Automattic a similar amount on defense. Imagine the project could do for democratizing publishing and competing against proprietary alternatives if that were going into core and community development.
Drop any other thoughts or agenda items you may have in this thread.”

Response To Mullenweg’s Leaked Post

One Facebook user accused Mullenweg of trying to “blackmail” the WordPress community into pressuring WP Engine (WPE). They wrote that the community is more sympathetic to WPE than to Mullenweg. In general, though, Mullenweg’s statement was met with a shrug. Commenters feel that a slower release schedule will give core contributors the chance to catch up on maintaining the core, which to them is a greater priority than adding more features to Gutenberg, which many of the developers in this group apparently don’t use.

One lone commenter asked whether anyone in the discussion had made a positive contribution to WordPress. At the time of writing, nobody had responded.

Is Google’s Use Of Compressibility An SEO Myth? via @sejournal, @martinibuster

I recently came across an SEO test that attempted to verify whether compression ratio affects rankings. It seems there may be some who believe that higher compression ratios correlate with lower rankings. Understanding compressibility in the context of SEO requires reading both the original source on compression ratios and the research paper itself before drawing conclusions about whether or not it’s an SEO myth.

Search Engines Compress Web Pages

Compressibility, in the context of search engines, refers to how much web pages can be compressed. Shrinking a document into a zip file is an example of compression. Search engines compress indexed web pages because it saves space and results in faster processing. It’s something that all search engines do.
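To make the concept concrete, here is a minimal sketch (in Python, using the standard gzip module) of how a compression ratio can be calculated as the uncompressed size divided by the compressed size. The sample pages are made up for illustration, and the research paper’s exact measurement method may differ in the details.

    import gzip

    def compression_ratio(html: str) -> float:
        """Return the uncompressed size divided by the gzip-compressed size."""
        raw = html.encode("utf-8")
        return len(raw) / len(gzip.compress(raw))

    # Repetitive, keyword-stuffed text compresses far more tightly than ordinary
    # prose, which pushes its compression ratio higher.
    normal_page = " ".join(f"Sentence {i} covers a distinct topic in plain language." for i in range(200))
    spammy_page = " ".join(["buy cheap widgets best cheap widgets online"] * 200)

    print(round(compression_ratio(normal_page), 2))  # lower ratio
    print(round(compression_ratio(spammy_page), 2))  # much higher ratio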

Websites & Host Providers Compress Web Pages

Web page compression is a good thing because it helps search crawlers access pages quickly, which in turn signals to Googlebot that the crawl won’t strain the server and that it’s okay to grab even more pages for indexing.

Compression speeds up websites, providing site visitors with a high-quality user experience. Most web hosts automatically enable compression because it’s good for websites and site visitors, and it’s also good for web hosts because it saves on bandwidth. Everybody wins with website compression.

High Levels Of Compression Correlate With Spam

Researchers at a search engine discovered that highly compressible web pages correlated with low-quality content. The study, Spam, Damn Spam, and Statistics: Using Statistical Analysis to Locate Spam Web Pages (PDF), was conducted in 2006 by two of the world’s leading researchers, Marc Najork and Dennis Fetterly.

Najork currently works at DeepMind as a Distinguished Research Scientist. Fetterly, a software engineer at Google, is an author of many important research papers related to search, content analysis, and related topics. This isn’t just any research paper; it’s an important one.

The research paper shows that 70% of web pages that compress at a ratio of 4.0 or higher tended to be low-quality pages with a high level of redundant word usage. Normal pages, by contrast, compressed at a ratio of around 2.0.

Here are the compression ratio statistics for normal web pages reported in the research paper:

  • Mode of 2.0:
    The most frequently occurring compression ratio in the dataset is 2.0.
  • Median of 2.1:
    Half of the pages have a compression ratio below 2.1, and half have a compression ratio above it.
  • Mean of 2.11:
    On average, the compression ratio of the pages analyzed is 2.11.

Compression ratio would be an easy first-pass way to filter out obvious content spam, so it makes sense that search engines would use it to weed out heavy-handed spam. But weeding out spam is more complicated than simple solutions, and search engines use multiple signals because doing so results in a higher level of accuracy.

The researchers reported that 70% of sites with a compression ratio of 4.0 or higher were spam, which means that the other 30% were not spam sites. There are always outliers in statistics, and that 30% of non-spam sites is why search engines tend to use more than one signal, as illustrated in the sketch below.
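Here is a hypothetical sketch of what a multi-signal check might look like; the signals, thresholds, and rule are made up for illustration and are not a description of how any search engine actually works. The point is simply that a high compression ratio on its own does not flag a page:

    def looks_like_spam(compression_ratio: float, duplicate_phrase_share: float,
                        thin_content_score: float) -> bool:
        """Hypothetical multi-signal spam check; signals and thresholds are illustrative."""
        signals = [
            compression_ratio >= 4.0,       # highly compressible, repetitive text
            duplicate_phrase_share >= 0.5,  # half the page repeats the same phrases
            thin_content_score >= 0.7,      # mostly boilerplate with little unique content
        ]
        # Require at least two independent signals before flagging, which spares
        # the roughly 30% of highly compressible pages that are not spam.
        return sum(signals) >= 2

    print(looks_like_spam(4.2, 0.1, 0.2))  # False: a high ratio alone isn't enough
    print(looks_like_spam(4.2, 0.6, 0.3))  # True: two signals agree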

Do Search Engines Use Compressibility?

It’s reasonable to assume that search engines use compressibility to identify heavy-handed, obvious spam. But it’s also reasonable to assume that if search engines employ it, they use it together with other signals in order to increase accuracy. Nobody knows for certain whether Google uses compressibility.

Is There Proof That Compression Is An SEO Myth?

Some SEOs have published research analyzing the rankings of thousands of sites for hundreds of keywords. They found that both the top-ranking and bottom-ranked sites had a compression ratio of about 2.4. The difference between their compression ratios was just 2%, meaning the scores were essentially equal. Those results are close to the normal average range of 2.11 reported in the 2006 scientific study.

The SEOs claimed that the mere 2% higher compression levels of the top-ranked sites over the bottom-ranked sites prove that compressibility is an SEO myth. Of course, that claim is incorrect. The average compression ratio of normal sites in 2006 was 2.11, which means the average 2.4 ratio in 2025 falls well within the range of normal, non-spam websites.

The ratio associated with spam sites is 4.0 or higher, so the fact that both the top- and bottom-ranked sites come in at about 2.4 is meaningless, since both scores fall within the range of normal.

If we assume that Google is using compressibility, a site would have to produce a compression ratio of 4.0, plus send other low-quality signals, to trigger an algorithmic action. If that happened, those sites wouldn’t be in the search results at all because they wouldn’t be in the index, and therefore there is no way to test for that in the SERPs, right?

It would be reasonable to assume that the sites with high 4.0 compression ratios were removed. But we don’t know that, it’s not a certainty.

Is Compressibility An SEO Myth?

Compressibility may not be an SEO myth. But it’s probably not anything publishers or SEOs should worry about as long as they’re avoiding heavy-handed tactics like keyword stuffing or repetitive, cookie-cutter pages.

Google uses de-duplication, which removes duplicate pages from its index and consolidates the PageRank signals to whichever page it chooses as the canonical page (if it chooses one). Publishing duplicate pages will likely not trigger any kind of penalty, including anything related to compression ratios, because, as was already mentioned, search engines don’t use signals in isolation.

U.S. DOJ Antitrust Filing Proposes 4 Ways To Break Google’s Monopoly via @sejournal, @martinibuster

The plaintiffs in an antitrust lawsuit against Google filed a revised proposed final judgment for the judge in the case to consider. The proposal comes after a previous ruling where the court determined that Google broke antitrust laws by illegally maintaining its monopoly.

The legal filing by the plaintiffs, the United States Department of Justice and State Attorneys General, argues that Google has maintained monopolies in search services and text advertising through anticompetitive practices.

The filing proposes four ways to loosen Google’s monopolistic hold on search and advertising.

  1. Requiring Google to separate Chrome from its business—this could mean selling it or spinning it off into an independent company.
  2. Limiting Google’s payments to companies like Apple for making Google the default search engine, reducing its ability to secure exclusive deals.
  3. Stopping Google from favoring its own products over competitors in search results and other services, ensuring a more level playing field.
  4. Increasing transparency in Google’s advertising and data practices so competitors have fairer access to key information.

The proposal asks that Google be subjected to continuous oversight through mandatory reporting to ensure transparency in Google’s advertising and data practices:

“Google must provide to the Technical Committee and Plaintiffs a monthly report outlining any changes to its search text ads auction and its public disclosure of those changes.”

It also suggests ongoing enforcement to guarantee that Google doesn’t impose new restrictions that undermine transparency requirements:

“Google must not limit the ability of advertisers to export in real time (by downloading through an interface or API access) data or information relating to their entire portfolio of ads or advertising campaigns bid on, placed through, or purchased through Google.”

The goal of the above section is to increase transparency in Google’s advertising system and make it easier for advertisers to analyze their ad performance.

Real-time access ensures advertisers can make immediate adjustments to their campaigns instead of waiting for delayed reports, and it ensures that advertisers aren’t locked into Google’s advertising system by being held hostage to their historical data.

The legal filing calls for government-imposed restrictions and changes to Google’s advertising business practices. It proposes remedies for how Google should be regulated or restructured following the court’s earlier ruling that Google engaged in monopolistic practices. However, this is not the final judgment, and the court must still decide whether to adopt, modify, or reject these proposed remedies.

YouTube’s Creator Liaison Shares Advice For Mid-Roll Ad Changes via @sejournal, @MattGSouthern

YouTube Creator Liaison Rene Ritchie has advised content creators on adapting to YouTube’s upcoming mid-roll advertising changes.

These changes take effect on May 12 and will alter how ads appear within videos.

Background

Starting May 12, YouTube will implement a new system prioritizing mid-roll ad placements during natural content breaks rather than at potentially disruptive moments.

YouTube will automatically place ads at natural transitions in videos, but creators can manually control ad placements if they prefer.

This update introduces a hybrid approach, allowing creators to use automatic and manual mid-roll placements simultaneously.

According to YouTube’s early testing, channels adopting this combined approach have seen an average increase in ad revenue of 5%.

Ritchie’s Adaptation Strategy

Sharing his approach on X, Ritchie outlined specific steps he’s taking with his own YouTube channel:

“I’m turning on auto mid-rolls, since that system will continue to be improved and optimized by launch and over time. For new videos, I’m manually inserting additional slots if and as needed where I think it’ll provide the best experience for viewers.”

For existing content, Ritchie recommends a prioritized approach, stating:

“For back catalog, I’m sorting by current watch time and doing the same for the top 20-50 most-watched videos.”

Maintaining Creator Control

Ritchie addressed concerns about YouTube potentially removing manual placement options:

“No one is taking away manual mid-roll placements. Creators can still put slots wherever and whenever we want.”

He reminded creators that designated ad slots don’t guarantee ad placement but indicate where ads can potentially appear.

Ritchie drew a parallel to YouTube’s retention analytics and explained how the new ad feedback tool provides valuable insights:

“In the days before the retention graph in Analytics, my 10-second long intro might have caused a ton of people to dip from the video and I never knew it. Similarly, I can still put that mid-roll slot anywhere I want, but now I’m getting data about how it will perform.”

Ongoing Improvements

YouTube is actively refining the automatic detection system and will continue improving it after the May launch.

Ritchie notes there’s a mutual interest in making mid-rolls more effective:

“YouTube and creators share revenue, so it’s in everyone’s best interest to make mid-rolls work better.”

What Creators Should Do Now

Based on both YouTube’s official guidance and Ritchie’s recommendations, creators should:

  • Enable automatic mid-roll placement while maintaining manual control where needed
  • Review high-performing back catalog content first
  • Use the new feedback tool to identify potentially disruptive ad placements

Creators should also continue providing feedback to YouTube as the system develops; this kind of interaction with Ritchie shows the team is listening.


Featured Image: Alejo Bernal/Shutterstock

Why Google May Adopt Vibe Coding For Search Algorithms via @sejournal, @martinibuster

A new trend in Silicon Valley, Vibe Coding, is driving an exponential acceleration in how quickly engineers can develop products and algorithms. This approach aligns with principles outlined by Google co-founder Sergey Brin in a recent email to DeepMind engineers.

Top Silicon Valley insiders call Vibe Coding the “dominant way to code,” and Brin’s message suggests that Google will embrace it to dramatically speed up AI development. Given its potential, this approach may also extend to Google’s search algorithms, leading to more frequent changes in how search results are ranked.

Vibe Coding Is Here To Stay

The four Y Combinator executives on the company’s podcast (covered below) agreed that vibe coding is a very big deal but were surprised at how fast it has overtaken the industry. Jared Friedman observed that it’s like something out of the fairy tale Jack and the Beanstalk, where the world-changing magic beans sprout into gigantic beanstalks overnight.

Garry Tan agreed, saying:

“I think our sense right now is this isn’t a fad. This isn’t going away. This is actually the dominant way to code, and if you’re not doing it, you might be left behind. This is here to stay.”

What Is Vibe Coding?

Vibe coding is software engineering with AI:

  • Software engineers use AI to generate code rather than writing it manually.
  • They rely on natural language prompts to guide software development.
  • They prioritize speed and iteration.
  • Time isn’t spent on debugging; code is simply regenerated until it works.
  • The focus shifts from writing code to choosing which problems to solve.
  • AI handles rapid code regeneration in place of traditional debugging.
  • It is exponentially speeding up coding.

Vibe coding is a way of creating code with AI, with an emphasis on speed. Debugging becomes increasingly unnecessary because an engineer can simply re-roll the code generation multiple times until the AI gets it right.

A recent tweet by Andrej Karpathy kicked off a wave of excitement in Silicon Valley. Karpathy, a prominent AI researcher and former director of AI at Tesla, described what Vibe Coding is and explained why it’s the fastest way to code with AI. It’s so reliable that he doesn’t even check the modifications the AI makes (referred to as “diffs”).

Karpathy tweeted:

“There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.

Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore.

When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while.

Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing.

I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

Sergey Brin Emphasizes Vibe Coding Principles

A recent email from Google co-founder Sergey Brin to DeepMind engineers emphasized the need to integrate AI into their workflow to reduce time spent on coding. The email states that code matters most and that AI will improve itself, advising that if it’s simpler to prompt an AI for a solution, then that’s preferable to training an entirely new model. Brin describes this as highly important for becoming efficient coders. These principles align with Vibe Coding, which prioritizes speed, simplicity, and AI-driven development.

Brin also recommends using first-party code (code developed by Google) instead of relying on open-source or third-party software. This strongly suggests that Google intends to keep its AI advancements proprietary rather than open-source. That may mean any advancements created by Google will not be open-sourced and may not show up in research papers but instead may be discoverable through patent filings.

Brin’s message de-emphasizes the use of LoRA, a machine learning technique used to fine-tune AI models efficiently. This implies that he wants DeepMind engineers to prioritize efficient workflows rather than spending excessive time fine-tuning models. This also suggests that Google is shifting focus toward simpler, more scalable approaches like vibe coding which rely on prompt engineering.

Sergey Brin wrote:

“Code matters most — AGI will happen with takeoff, when the AI improves itself. Probably initially it will be with a lot of human help so the most important is our code performance. Furthermore this needs to work on our own 1p code. We have to be the most efficient coder and AI scientists in the world by using our own AI.

Simplicity — Lets use simple solutions where we can. Eg if prompting works, just do that, don’t posttrain a separate model. No unnecessary technical complexities (such as lora). Ideally we will truly have one recipe and one model which can simply be prompted for different uses.

Speed — we need our products, models, internal tools to be fast. Can’t wait 20 minutes to run a bit of python on borg.”

Those statements align with the principles of vibe coding, so it’s important to understand what it is and how it may affect the way Google develops search algorithms and AI that may be used for ranking websites.

Software Engineers Transitioning To Product Engineers

A recent podcast by Y Combinator, a Silicon Valley startup accelerator company, discussed how vibe coding is changing what it means to be a software engineer and how it will affect hiring practices.

The podcast hosts quoted multiple people:

Leo Paz, Founder of Outlit, observed:

“I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Abhi Aiyer of Mastra shared how their coding practices changed:

“I don’t write code much. I just think and review.”

One of the podcast hosts, Jared Friedman, Managing Partner at Y Combinator, said:

“This is a super technical founder whose last company was also a dev tool. He’s extremely able to code and so it’s fascinating to have people like that saying things like this.”

They next quoted Abhi Balijepalli of Copycat:

“I am far less attached to my code now, so my decisions on whether we decide to scrap or refactor code are less biased. Since I can code 3 times as fast, it’s easy for me to scrap and rewrite if I need to.”

Garry Tan, President & CEO, Y Combinator commented:

“I guess the really cool thing about this stuff is it actually parallelizes really well.”

He quoted Yoav Tamir of Casixty:

“I write everything with Cursor. Sometimes I even have two windows of Cursor open in parallel and I prompt them on two different features.”

Tan commented on how much sense that makes, asking why not have three instances of Cursor open in order to accomplish even more.

The panelists on the podcast then cited Jackson Stokes of Trainloop, who explained the exponential scale of how fast coding has become:

“How coding has changed six to one months ago: 10X speedup. One month ago to now: 100X speedup. Exponential acceleration. I’m no longer an engineer, I’m a product person.”

Garry Tan commented:

“I think that might be something that’s happening broadly. You know, it really ends up being two different roles you need. It actually maps to how engineers sort of self assign today, in that either you’re front-end or backend. And then backend ends up being about actually infrastructure and then front-end is so much more actually being a PM (product manager)…”

Harj Taggar, Managing Partner, Y Combinator observed that the LLMs are going to push people to the role of making choices, that the actual writing of the code will become less important.

Why Debugging Becomes Unnecessary With Vibe Coding

An interesting wrinkle in vibe coding is that one of the ways it speeds up development is that software engineers no longer have to spend long hours debugging. Rather than fixing bugs, they can simply regenerate the code until it works, which means they are able to push code out the door faster than ever before.

Tan commented on how poor AI is at debugging:

“…one thing the survey did indicate is that this stuff is terrible at debugging. And so… the humans have to do the debugging still. They have to figure out well, what is the code actually doing?

There doesn’t seem to be a way to just tell it, debug. You were saying that you have to be very explicit, like as if giving instructions to a first time software engineer.”

Jared offered his observation on AI’s ability to debug:

“I have to really spoon feed it the instructions to get it to debug stuff. Or you can kind of embrace the vibes. I’d say Andrej Karpathy style, sort of re-roll, just like tell it to try again from scratch.

It’s wild how your coding style changes when actually writing the code becomes a 1000x cheaper. Like, as a human you would never just like blow away something that you’d worked on for a very long time and rewrite from scratch because you had a bug. You’d always fix the bug. But for the LLM, if you can just rewrite a thousand lines of code in just six seconds, like why not?”

Tan observed that it’s like how people use AI image generators: if there’s something they don’t like, they reiterate without even changing the prompt, simply clicking re-roll five times until, on the fifth try, it works.

Vibe Coding And Google’s Search Algorithms

While Sergey Brin’s email does not explicitly mention search algorithms, it advocates AI-driven, prompt-based development at scale and speed. If vibe coding is indeed becoming the dominant way to code, as insiders claim, it is likely that Google will adopt this methodology across its projects, including the development of future search algorithms.

Watch the Y Combinator Video Roundtable

Vibe Coding Is The Future

Featured Image by Shutterstock/bluestork

AI Writing Fingerprints: How To Spot (& Fix) AI-Generated Content via @sejournal, @MattGSouthern

New research shows that ChatGPT, Claude, and other AI systems leave distinctive “fingerprints” in their writing.

Here’s how you can use this knowledge to identify AI content and improve your AI-assisted output.

The AI Fingerprint: What You Need to Know

Researchers have discovered that different AI writing systems produce text with unique, identifiable patterns.

By analyzing these patterns, researchers achieved 97.1% accuracy in determining which AI wrote a particular piece of content.

The study (PDF link) reads:

“We find that a classifier based upon simple fine-tuning text embedding models on LLM outputs is able to achieve remarkably high accuracy on this task. This indicates the clear presence of idiosyncrasies in LLMs.”
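As a rough illustration of that approach (not the researchers’ actual code), the sketch below embeds a few LLM-generated texts and trains a simple classifier to predict which model produced them. It assumes the sentence-transformers and scikit-learn packages; the embedding model name, the labels, and the tiny toy dataset are placeholders, and a real experiment would need thousands of labeled samples.

    # Rough sketch: embed LLM outputs and train a classifier to predict the source model.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    texts = [
        "Certainly! Below is a step-by-step overview of the process.",     # ChatGPT-style
        "Based on the text, here is a summary of the key point.",          # Claude-style
        "Here's a breakdown of the key improvements, which are crucial.",  # DeepSeek-style
    ]
    labels = ["chatgpt", "claude", "deepseek"]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
    X = embedder.encode(texts)

    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict(embedder.encode(["Overall, such as in the example below, certainly."])))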

This matters for two reasons:

  • For readers: As the web becomes increasingly saturated with AI-generated content, knowing how to spot it helps you evaluate information sources.
  • For writers: Understanding these patterns can help you better edit AI-generated drafts to sound more human and authentic.

How To Spot AI-Generated Content By Model

Each major AI system has specific writing habits that give it away.

The researchers discovered these patterns remain even in rewritten content:

“These patterns persist even when the texts are rewritten, translated, or summarized by an external LLM, suggesting that they are also encoded in the semantic content.”

1. ChatGPT

Characteristic Phrases

  • Frequently uses transition words like “certainly,” “such as,” and “overall.”
  • Sometimes begins answers with phrases like “Below is…” or “Sure!”
  • Periodically employs qualifiers (e.g., “typically,” “various,” “in-depth”).

Formatting Habits

  • Utilizes bold or italic styling, bullet points, and headings for clarity.
  • Often includes explicit step-by-step or enumerated lists to organize information.

Semantic/Stylistic Tendencies

  • Provides more detailed, explanatory, and context-rich answers.
  • Prefers a somewhat formal, “helpful explainer” tone, often giving thorough background details.

2. Claude

Characteristic Phrases

  • Uses language like “according to the text,” “based on,” or “here is a summary.”
  • Tends to include shorter transitions: “while,” “both,” “the text.”

Formatting Habits

  • Relies on simple bullet points or minimal lists rather than elaborate markdown.
  • Often includes direct references back to the prompt or text snippet.

Semantic/Stylistic Tendencies

  • Offers concise and direct explanations, focusing on the key point rather than lengthy detail.
  • Adopts a practical, succinct voice, prioritizing clarity over elaboration.

3. Grok

Characteristic Phrases

  • May use words like “remember,” “might,” “but also,” or “helps in.”
  • Occasionally starts with “which” or “where,” creating direct statements.

Formatting Habits

  • Uses headings or enumerations but may do so sparingly.
  • Less likely to embed rich markdown elements compared to ChatGPT.

Semantic/Stylistic Tendencies

  • Often thorough in explanations but uses a more “functional” style, mixing direct instructions with reminders.
  • Doesn’t rely heavily on nuance phrases like “certainly” or “overall,” but rather more factual connectors.

4. Gemini

Characteristic Phrases

  • Known to use “below,” “example,” “for instance,” sometimes joined with “in summary.”
  • Might employ exclamation prompts like “certainly! below.”

Formatting Habits

  • Integrates short markdown-like structures, such as bullet points and occasional headers.
  • Occasionally highlights key instructions in enumerated lists.

Semantic/Stylistic Tendencies

  • Balances concise summaries with moderately detailed explanations.
  • Prefers a clear, instructional tone, sometimes with direct language like “here is how…”

5. DeepSeek

Characteristic Phrases

  • Uses words like “crucial,” “key improvements,” “here’s a breakdown,” “essentially,” “etc.”
  • Sometimes includes transitional phrases like “at the same time” or “also.”

Formatting Habits

  • Frequently employs enumerations and bullet points for organization.
  • May have inline emphasis (e.g., “key improvements”) but not always.

Semantic/Stylistic Tendencies

  • Generally thorough responses that highlight the main takeaways or “breakdowns.”
  • Maintains a relatively explanatory style but can be more succinct than ChatGPT.

6. Llama (Instruct Version)

Characteristic Phrases

  • “Including,” “such as,” “explanation the,” “the following,” which signal examples or expansions.
  • Sometimes references step-by-step guides or “how-tos” within text.

Formatting Habits

  • Levels of markdown usage vary; often places important points in numbered lists or bullet points.
  • Can include simple headers (e.g., “## Topic”) but less likely to use intricate formatting than ChatGPT.

Semantic/Stylistic Tendencies

  • Maintains a somewhat formal, academic tone but can shift to more conversational for instructions.
  • Sometimes offers deeper analysis or context (like definitions or background) embedded in the response.

7. Gemma (Instruct Version)

Characteristic Phrases

  • Phrases like “let me,” “know if,” or “remember” often appear.
  • Tends to include “below is,” “specific,” or “detailed” within clarifications.

Formatting Habits

  • Similar to Llama, frequently uses bullet points, enumerations, and occasionally bold headings.
  • May incorporate transitions (e.g., “## Key Points”) to segment content.

Semantic/Stylistic Tendencies

  • Blends direct instructions with explanatory detail.
  • Often partial to a more narrative approach, referencing how or why a task is done.

8. Qwen (Instruct Version)

Characteristic Phrases

  • Includes “certainly,” “in summary,” or “title” for headings.
  • May appear with transitions like “comprehensive,” “based,” or “example use.”

Formatting Habits

  • Uses lists (sometimes nested) for clarity.
  • Periodically includes short code blocks or snippet-like formatting for technical explanations.

Semantic/Stylistic Tendencies

  • Detailed, with emphasis on step-by-step instructions or bullet-labeled points.
  • Paraphrase-friendly structure, meaning it can rephrase or re-organize content extensively if prompted.

9. Mistral (Instruct Version)

Characteristic Phrases

  • Words like “creating,” “absolutely,” “subject,” or “yes” can appear early in responses.
  • Tends to rely on direct verbs for commands (e.g., “try,” “build,” “test”).

Formatting Habits

  • Usually applies straightforward bullet points without heavy markdown.
  • Occasionally includes headings but often keeps the structure minimal.

Semantic/Stylistic Tendencies

  • Prefers concise, direct instructions or overviews.
  • Focuses on brevity while still aiming to be thorough, giving core details in an organized manner.

How to Make AI-Generated Content More Human

The study revealed that word choice is a primary identifier of AI-generated text:

“After randomly shuffling words in the LLM-generated responses, we observe a minimal decline in classification accuracy. This suggests that a substantial portion of distinctive features is encoded in the word-level distribution.”
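A much cruder, word-level version of the same idea can be illustrated by counting the characteristic phrases listed above. This is purely illustrative and nowhere near the accuracy of the study’s embedding-based classifier; the phrase lists are a small sample taken from this article, not a complete fingerprint.

    import re

    # A few phrases the article associates with each model (illustrative, not exhaustive).
    MARKERS = {
        "chatgpt": ["certainly", "such as", "overall", "below is"],
        "claude": ["according to the text", "based on", "here is a summary"],
        "deepseek": ["crucial", "key improvements", "here's a breakdown"],
    }

    def marker_counts(text: str) -> dict:
        """Count how often each model's characteristic phrases appear in a draft."""
        lowered = text.lower()
        return {
            model: sum(len(re.findall(re.escape(phrase), lowered)) for phrase in phrases)
            for model, phrases in MARKERS.items()
        }

    draft = "Certainly! Overall, the key improvements are crucial, such as faster load times."
    print(marker_counts(draft))  # {'chatgpt': 3, 'claude': 0, 'deepseek': 2}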

If you’re using AI writing tools, here are practical steps to reduce these telltale patterns:

  • Vary your beginnings: The research found that first words are highly predictable in AI content. Edit opening sentences to avoid typical AI starters.
  • Replace characteristic phrases: Watch for and replace model-specific phrases mentioned above.
  • Adjust formatting patterns: Each AI has distinct formatting preferences. Modify these to break recognizable patterns.
  • Restructure content: AI tends to follow predictable organization. Rearrange sections to create a more unique flow.
  • Add personal elements: Incorporate your own experiences, opinions, and industry-specific insights that an AI couldn’t generate.

Top Takeaway

While this research focuses on distinguishing different AI models, it also demonstrates how AI-generated text differs from human writing.

As search engines improve their ability to spot AI content, heavily templated AI writing may lose value.

By understanding how to identify AI text, you can create content that rises above the average chatbot output, appealing to both readers and search engines.

Combining AI’s efficiency with human creativity and expertise is the best approach.

Featured Image: Pixel-Shot/Shutterstock

Google’s Martin Splitt Warns Against Redirecting 404s To Homepage via @sejournal, @MattGSouthern

Google has released a new episode in its “SEO Office Hours Shorts” video series, in which Developer Advocate Martin Splitt addresses a question many website owners face: Should all 404 error pages be redirected to the homepage?

The Clear Answer: Don’t Do It

In the latest installment of the condensed Q&A format, Splitt responds to a question from a user named Chris about whether “redirecting all 404 pages to the homepage with 301 redirects can have a negative impact on rankings or overall website performance in search.”

Splitt’s response was unambiguous: “Yes, and also it annoys me as a user.”

Why 404s Serve A Purpose

404 error pages signal to users and search engine crawlers that a URL is broken or nonexistent. This transparency helps people understand what they’re dealing with rather than being unexpectedly redirected to an unrelated page.

Splitt explained:

“A 404 is a very clear signal this link is wrong and broken or this URL no longer exists because maybe the product doesn’t exist or something has changed.”

Impact on Search Crawlers

Splitt says blanket redirects to the homepage can disrupt search engine crawlers’ efficiency.

When crawlers encounter a legitimate 404, they recognize that the content no longer exists and can move on to other URLs. However, redirecting them to the homepage creates a confusing loop.

Splitt noted:

“For a crawler, they go like homepage and then click through or basically crawl through your website, finding content, and eventually they might run into a URL that doesn’t exist.

But if you redirect, they’re kind of like being redirected, and then it all starts over again.”

Best Practices for Handling Missing Content

Splitt offered clear guidance on proper redirects:

  1. If content has moved to a new location, use a redirect to that specific new URL
  2. If content is truly gone, maintain the 404 status code
  3. Don’t redirect to the homepage or what you think is the “closest” match

Splitt emphasized:

“If it moved somewhere else, use a redirect. If it’s gone, don’t redirect me to the homepage.”

This latest guidance aligns with Google’s longstanding recommendation to maintain accurate HTTP status codes to help users and search engines understand your site structure.
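For site owners who want to audit their own error handling, a quick check along these lines can reveal whether removed URLs return a proper not-found status or silently bounce to the homepage (a “soft 404”). This is a minimal sketch that assumes the third-party requests package, and the URLs are placeholders:

    import requests

    def check_dead_url(url: str, homepage: str) -> str:
        """Report whether a removed URL returns a not-found status or redirects to the homepage."""
        response = requests.get(url, allow_redirects=True, timeout=10)
        if response.status_code in (404, 410):
            return "OK: returns a proper not-found status"
        if response.url.rstrip("/") == homepage.rstrip("/"):
            return "Problem: redirects to the homepage (a 'soft 404')"
        return f"Check manually: final status {response.status_code} at {response.url}"

    # Placeholder URLs for illustration only.
    print(check_dead_url("https://example.com/removed-page", "https://example.com"))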

New Format

The SEO Office Hours Shorts format is a new approach from Google’s Search Relations team.

The original format was a live show where anyone could call in and get their questions answered in real time.

This format then transitioned to recorded sessions where Google personnel responded to a selection of pre-approved questions.

Now, SEO Office Hours is presented as short videos. If you prefer one of the previous formats, Splitt encourages feedback in the comments section of the video below:


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, March 2025.

Google Discontinues Controversial “Page Annotations” On iOS via @sejournal, @MattGSouthern

Google has discontinued its Page Annotations feature in the iOS app, which automatically added search links to webpages.

The feature was introduced late last year and converted certain website text into Google Search links without publisher consent.

Recent updates confirm that it’s no longer supported.

Short-Lived & Controversial

The Page Annotations feature was first announced in November and attracted attention for its potential to divert traffic away from websites.

This feature automatically converted on-page text into tappable links that directed visitors to Google Search results.

Unlike similar features in Google’s ecosystem, Page Annotations used an opt-out model, meaning publishers didn’t need to opt in.

If you didn’t want Google to insert links into your content, you had to submit an opt-out form, and the changes would take effect within 30 days.

Silent Removal

Google has removed all mentions of the Page Annotations feature from its official documentation on “Control what you share with Google.”

The updated text outlines various methods for publishers to control their content’s appearance in search results but does not mention the now-discontinued feature.

Additionally, the announcement thread on Google’s community forums has been removed.

An archived version of the announcement remains available. See it in the screenshot below:

Screenshot from: web.archive.org, March 2025.

Why the Reversal?

While Google hasn’t publicly stated reasons for discontinuing Page Annotations, the feature’s introduction came at a sensitive time for the company, which has been facing increased scrutiny over its search and advertising practices.

The feature raised concerns about Google’s relationship with publishers. By inserting its links into others’ content without explicit permission, Google influenced how people interacted with websites within its app.

Why This Matters

Google’s quick discontinuation of Page Annotations suggests it may be reevaluating its publisher relationships due to ongoing antitrust concerns.

Publishers no longer need to worry about Google adding links to their content in the iOS app.


Featured Image: Below The Sky/Shutterstock

Google CTR Study: AI Overviews Rise As Click Rates Decline via @sejournal, @MattGSouthern

A new study on Google search behavior examines changes in clickthrough rates across industries. The data correlates with increased AI Overviews (AIOs) in Google’s search results.

Research from Advanced Web Ranking (AWR) reveals that AIOs appeared in 42.51% of search results in Q4, up 8.83 percentage points from the previous quarter.

With this increase, clickthrough rates for informational queries dropped significantly.

Websites in the top four positions for searches using terms like what, when, where, and how saw a combined decrease of 7.31 percentage points in desktop clickthrough rates.

Study author Dan Popa states:

“This surge in AI Overviews may be impacting clickthrough rates for organic listings, as informational content is increasingly getting overrun by these AI-generated summaries.”

Here’s more about the study and what the findings mean for your website.

Industry CTR Gap

The study reveals SEO success is becoming increasingly industry-dependent.

For example, law and politics sites recorded a 38.45% CTR in position one, while science sites got 19.06% for the same ranking. That gap nearly tripled in a single quarter.

CTR shifts were observed in the following sectors:

  • Law & Politics: Recorded Q4’s highest position-specific increase with a 7.39 percentage point CTR gain for top desktop positions, alongside 68.66% higher search demand.
  • Science: Recorded Q4’s largest CTR decline with top desktop positions dropping 6.03 percentage points, while experiencing a 37.63% decrease in search demand.
  • Careers: Despite search demand more than tripling (+334.36%), top three desktop positions lost a combined 4.34 percentage points in CTR.
  • Shopping: The holiday season brought a 142.88% surge in search demand, yet top-ranked sites saw CTR declines of 1.39 and 1.96 percentage points on desktop and mobile, respectively.
  • Education: Mixed bag with top positions gaining nearly 6% in CTR while positions 2-3 declined, all during a traffic increase.

Only the business sector and the style and fashion sector saw both increased search demand and improved CTRs, making them rare bright spots in a challenging market.

Desktop vs. Mobile

The report also looks at behavior patterns between devices.

While desktop CTR for informational queries declined, mobile showed opposing trends, with top-ranked sites gaining 1.81 percentage points.

Similar device-specific shifts appeared across multiple industries. For example, arts and entertainment websites saw a 1.01 percentage point drop in desktop CTR but a 2.28 percentage point mobile gain for position one.

Query length also influenced click behavior differently across devices.

Long-tail queries (four or more keywords) experienced CTR declines on desktop for positions 2-3. In contrast, single-word queries gained nearly two percentage points in CTR on mobile for top positions.

Why This Study Matters

These findings demonstrate that ranking #1 doesn’t guarantee the same traffic it once did. Your industry, query type, and SERP features (especially AI Overviews) all impact click potential.

AWR suggests tracking pixel depth (how far users must scroll to see your listing) alongside rankings for more accurate traffic forecasting.

It’s important to account for these widening performance gaps, particularly for informational content competing with Google’s AIOs.

Study Methodology

Advanced Web Ranking’s research compared CTR averages from Q4 2024 to Q3 2024. It included data from markets like the US and UK, linking CTR shifts with industry search demand.

Using AWR’s free AIO tool, the study found an 8.83 percentage point rise in AI Overview presence. Queries were categorized by intent, length, and 21 industry verticals to identify user behavior patterns.

For more, read the full study.


Featured Image: jack_the_sparrow/Shutterstock

WordPress’s Next Phase: Mullenweg Shares What’s Ahead via @sejournal, @martinibuster

In a recent podcast interview, Matt Mullenweg shared his informal plans for ensuring the future of WordPress. He outlined several areas where WordPress is taking advantage of technological changes, including security, AI integration, and reducing technical debt. He also addressed the long-term future of WordPress leadership, emphasizing the importance of decisive vision.

Mullenweg outlined four ways WordPress is improving in the near future:

  1. Plugins and themes will become more secure.
  2. The suitability of AI integration with WordPress ensures its continued relevance.
  3. WordPress is addressing technical debt.
  4. Governance and succession planning will help maintain WordPress’s strength.

WordPress Will Become More Secure

One of WordPress’s strengths is the third-party themes and plugins that enable publishers to create exactly the kind of website they need. It’s also a shortcoming because the vast majority of vulnerabilities discovered in WordPress stem from coding flaws in plugins and themes, as well as user failure to keep third-party software updated.

Mullenweg mentions current security measures like bug bounties, which are payments made to individuals who discover and responsibly disclose vulnerabilities. The implication of his answer is that relying on humans to find vulnerabilities isn’t enough because the scale of the problem exceeds human capabilities.

He anticipates plugin and theme vulnerabilities becoming less problematic due to new AI code-scanning capabilities that can analyze millions of lines of code to identify patterns consistent with common flaws that lead to vulnerabilities.

Mullenweg shared his thoughts:

“… many of these plugins and themes don’t have the same sort of robust security and review process that core has. So that’s where when you hear about security issues with WordPress, it’s very rarely in core, anymore. We haven’t had a remote exploit in like… I think five years, six years something.

But in the plugins it can be somewhat more frequent. And so one thing I’m very, very excited about, the next year or two, is actually more automated scanning. Because obviously that code base is so many tens of millions, maybe over a hundred million lines of code at this point. It’s impossible for humans to review that.

So we kind of rely on developers to review that and manage. And of course we have like bug bounties and everything so that when things are reported we fix it quickly.

But I can’t wait for more automated scanning there, and I think that could vastly upgrade the security of open source.”

AI-Powered Website Building

Another development Matt sees for WordPress is further integration of AI into WordPress so that it becomes an engine that an AI uses to develop websites for users. Matt acknowledges that this is already happening and he’s right. Some web hosts are already leveraging AI to assist users in building websites through a chatbot interface.

He explains that writing the code is a strength of AI but that maintaining the code base is a problem that WordPress solves. Software like WordPress currently relies on PHP and other technologies to power websites and make them interactive, but those technologies are constantly improving, which means the software that runs on them must also be maintained. Mullenweg explains that AI can treat platforms like WordPress as engines that power what it creates, building on top of them without having to worry about maintaining the underlying technology that makes them work.

He said that this scenario of building on top of open source is more powerful than leveraging a closed source system. The unspoken implication is that open source projects like WordPress are not threatened by AI; rather, they stand to benefit greatly from it. Thus, Matt foresees a strong future for WordPress as AI technology progresses.

Matt explained:

“The other thing that’s really exciting is that right now, you see people building apps and stuff and it’s custom generated code. But I think the next generation of these models… as everyone knows, just writing the code is one part of it. It’s maintaining it that really becomes the life cycle of it.

And I think that if, and they’re starting to do that, is when the open source model, you say, build me a website, it actually installs WordPress and builds on top of that and customizes on top of that. Then you get for free, that core engine that’s always being edited and updated and getting passkey support, whatever the new things are, sort of continuously, and the new custom stuff can be on top of that. Which I think is a lot more powerful than sort of building something proprietary or custom from the ground up.”

Technical Debt Needs To Be Addressed

At this point, the podcast’s host, Lenny, observes that everything you acquire carries the burden of having to maintain it; it all has that hidden cost. Mullenweg agreed, saying that WordPress has a similar issue called technical debt, which WordPress is addressing in order to improve. Technical debt refers to the accumulated burden of outdated code, complexity, and past development decisions that make future changes more difficult.

Mullenweg said:

“Well, that’s why I think technical debt is one of the most interesting concepts. You know, there’s so many companies …that maybe have like big market caps. But I feel like they might have billions or tens of billions of dollars of technical debt. …how their products interface with themselves.

And I think about that a lot in our own company. We definitely have some products, …we have some variable quality around some of our things right now. …There are parts of WordPress and WordPress.com that we’re a little embarrassed and ashamed of… we kind of have to…. we have a really large surface area that we cover with relatively few people. So there are some parts that we haven’t looked at in a little while that we need to get around to.

And it’s our big focus for us this year, is actually going back to basics, back to core. And improving all of those nooks and crannies… and also ruthlessly editing and and cutting as much as possible. Because we’ve just launched a lot of stuff over the past 21 years that isn’t as relevant today or doesn’t need to be there.”

Governance and Leadership

Mullenweg also debunked the idea of WordPress as an entity led by a single person and shared his vision for how WordPress will be governed in the future. He said that WordPress is a true community where most of the decisions are made by committees formed by core contributors. He also affirmed his belief that for WordPress to succeed it must have a strong leader who serves as the final decision-maker, and that this doesn’t make the project weaker; it makes it stronger.

On the points of project leadership and succession he shared:

“If you look at the daily commits and activity and everything, it is run by the community. So it’s hundreds of volunteers everyday that are actually doing the day-to-day work and making the data decisions, everything happens.

…There has been a radical delegation. However, there’s ultimately a hierarchy, and I’m kind of… I’m like a final, final decision-maker.

And you know, I definitely think about succession planning, everything like that, but if for when I’m gone, I don’t want to pass it to a committee, I want to pass it to someone else who could have a role somewhere to mine and really sort of try to be a steward.”

Takeaways

WordPress Security

Matt Mullenweg discussed four plans for improving WordPress in the near future, acknowledging that plugins and themes remain the biggest security risks for WordPress but that advancements in AI technology will enable greater mitigation of those issues.

WordPress Set To Remain The Market Leader

He also said that WordPress is ideally suited for becoming the engine that powers website development in the future, an advantage over closed source systems in that companies will be able to develop layers of AI-powered functionality and conveniences on top of the free WordPress open source CMS.

Addressing Technical Debt

Mullenweg acknowledged that WordPress has many years of technical debt to address and that WordPress is prioritizing the reduction of outdated code and complexity this year.

His statements suggest that WordPress’s long-term stability and viability rest on technological advancements, adaptability, and a greater focus on code efficiency.

WordPress Leadership

Lastly, he addressed WordPress governance, insisting that it is led by the community because the overwhelming majority of decisions are made by individual contributors, and that his role is more along the lines of a final decision-maker. He argued that the best software is created through a combination of committees and strong leadership that oversees the long-term direction of the project. Interestingly, he also said that the community serves as a system of checks and balances because contributors are always free to leave and fork their own version of the project.

Watch the interview here:

Matt Mullenweg on the future of open source and why he’s taking a stand

Featured image is a screenshot from the interview.