Newfold Digital Sells MarkMonitor As Part Of Strategic Refocus via @sejournal, @martinibuster

London-headquartered corporate domain management company Com Laude announced the acquisition of its competitor, MarkMonitor, previously one of the holdings of Newfold Digital.

Newfold Digital Simplifies Portfolio

Newfold Digital owns many top Internet brands, including Yoast, Bluehost, Register.com, and Domain.com, all of which focus on small and medium-sized businesses (SMBs). This divestiture may be a sign that Newfold Digital is shifting away from the enterprise market and focusing its portfolio of web services on the SMB end of the market.

The official Newfold Digital press release states:

“The sale is part of Newfold Digital’s strategy to simplify its portfolio and double down on the areas where it can deliver the greatest value to customers – its core brands, Bluehost and Network Solutions.”

Stu Homan, Head of MarkMonitor, commented:

“With this acquisition, Markmonitor has found owners who value our dedicated corporate services as much as our customers do. Com Laude is deeply committed to preserving and building upon our ability to continue to deliver industry-leading customer service while growing to new levels with dedication and investment.

Our entire team is excited to bring Com Laude’s advanced tools and services to our customers, and to be part of the most exciting development in corporate domain services since Markmonitor invented the white glove service model twenty-six years ago.”

Prior to the acquisition, Com Laude was a competitor of MarkMonitor, offering similar services but with key differences, such as an AI-powered domain management dashboard.

Com Laude is headquartered in London, United Kingdom, while MarkMonitor is based in Boise, Idaho, which is not commonly regarded as a center of Internet commerce or technology but is a growing regional technology hub.

Benjamin Crawford, CEO of Com Laude, remarked:

“Markmonitor is the best-known name in domain services for corporate customers, having virtually invented the category twenty-six years ago, and since then grown a long list of blue-chip customers with its “white glove” customer service. Com Laude offers market leading advanced tools and bespoke services in domains and online brand protection, developed for the world’s largest companies and most valuable brands. Together we will be uniquely positioned to protect and grow the digital presence of any company that needs assistance with its domain names, internet infrastructure and security, online brand protection, internet policy and compliance, and online strategy.”

Read Com Laude’s announcement:

Com Laude to Acquire Markmonitor in a Landmark Transaction

Featured Image by Shutterstock/thodonal88

Are AI Search Summaries Making Evergreen Articles Obsolete? via @sejournal, @martinibuster

Ahrefs’ Tim Soulo recently posted that AI is making evergreen content obsolete and no longer worth the investment because AI summaries leave fewer clicks for publishers. He posits that it may be more profitable to focus on trending topics, an approach he calls Fast SEO. Is publishing evergreen content no longer a viable content strategy?

The Reason For Evergreen Content

Evergreen content covers basic topics that generally don’t change much from year to year. For example, the answer to how to change a tire will essentially always be the same.

The promise of evergreen content is that it represents a steady source of traffic. Once a web page ranks for an evergreen topic, publishers basically just have to keep it updated if the topic changes in some way.

Does AI Break The Evergreen Content Promise?

Tim Soulo suggests that evergreen content, which is often easy to answer with a summary, is less likely to earn a click because AI summarizes the answer and satisfies the user, who may then have no need to visit a website.

Soulo tweeted:

“The era of “evergreen SEO content” is over. We’re entering the era of “fast SEO.”

There’s little point in writing yet another “Ultimate Guide To ___.” Most evergreen topics have already been covered to death and turned into common knowledge. Google is therefore happy to give an AI answer, and searchers are fine with that.

Instead, the real opportunity lies in spotting and covering new trends — or even setting them yourself.”

Is Fast SEO The Future Of Publishing?

Fast SEO is another way of describing trending topics. Trending topics have always been around; it’s why Google invented the freshness algorithm, to satisfy users with up-to-date content when a “query deserves freshness.”

Soulo’s idea is that trending topics are not the kind of content that AI summarizes. Perplexity is the exception; it has an entire content discovery section called Perplexity Discover that’s dedicated to showing trending news articles.

Fast SEO is about spotting and seizing short-lived content opportunities. These can be new developments, shifts in the industry or perceptions, or cultural moments.

His tweet captures the current feeling within the SEO and publishing communities that AI is the reason for diminishing traffic from Google.

The Evergreen Content Situation Is Worse Than Imagined

A technical issue that Soulo didn’t mention but is relevant here is that it’s challenging to create an “Ultimate Guide To X, Y, Z” or the “Definitive Guide To Bla, Bla, Bla” and expect it to be fresh and different from what is already published.

The barrier to entry for evergreen content is higher now than it’s ever been for several reasons:

  • There are more people publishing content.
  • People are consuming multiple forms of content (text, audio, and video).
  • Search algorithms are focused on quality, which shuts out those who focus harder on SEO than they do on people.
  • User behavior signals are more reliable than traditional link signals, and SEOs still haven’t caught on to this, making it harder to rank.
  • Query Fan-Out is causing a huge disruption in SEO.

Why Query Fan-Out Is A Disruption

Publishing evergreen content is an uphill struggle, compounded by the seeming inevitability that AI will summarize the content and, because of Query Fan-Out, possibly send the click to another website that is cited because it answers a follow-up question to the initial search query.

Query Fan-Out displays answers to the initial query as well as to follow-up questions. If the user is satisfied with the summary of the initial query, they may become interested in one of the follow-up questions, and a page cited for that follow-up will get the click, not a page ranking for the initial query.

This completely changes what it means to target a search query. How does an SEO target a follow-up question? Perhaps, instead of targeting the main high-traffic query, it makes sense to target the follow-up queries with evergreen content.

Evergreen Content Publishing Still Has Life

There is another side to this story, and it’s about user demand. Foundational questions stick around for a long time. People will always search “how to tie a bowtie” or “how to set up WordPress.” Many users prefer the stability of an established guide that has been reviewed and updated by a trusted brand. It’s not about being a brand; it’s about being the kind of site that is trusted, well-liked, and recommended.

A strong resource can become the canonical source for a topic, ranking for years and generating the kind of user behavior signals that reinforce its authority and signal the quality of being trusted.

Trend-driven content, by contrast, often delivers only a brief spike before fading. A newsroom model is difficult to maintain because it requires constant work to be first and be the best.

The Third Way: Do It All

The choice between producing evergreen content and trending topics doesn’t have to be binary; there’s a third option where you can do it all. Evergreen and trending topics can complement each other because each side provides opportunities for driving traffic to the other. Fresh, trend-driven content can link back to the evergreen pages, and the evergreen pages can in turn send readers to the fresh content.

Trend-driven content sometimes becomes evergreen itself. But in general, creating evergreen content requires deep planning, quality execution, and marketing. Somebody’s going to get the click from evergreen content; it might as well be you.

Featured Image by Shutterstock/Stokkete

Pew: Most Americans Want AI Labels, Few Trust Detection via @sejournal, @MattGSouthern

A new Pew Research Center survey reveals a gap between people’s desire to know when AI is used in content and their confidence in being able to identify it.

Seventy-six percent say it’s extremely or very important to know whether pictures, videos, or text were made by AI or by people. Only 12% feel confident they could tell the difference themselves.

Pew Research Center wrote:

“Americans feel strongly that it’s important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don’t trust their own ability to spot AI-generated content.”

This confidence gap reflects a rising unease with AI.

Half of Americans believe that the increased presence of AI in daily life raises more concerns than excitement, while just 10% are more excited than worried.

What Pew Research Found

People Want More Control

About 60% of Americans want more control over AI in their lives, an increase from 55% last year.

They’re open to AI helping with daily tasks, but still want clarity on where AI ends and human involvement begins.

When People Accept vs. Reject AI

Most support the use of AI in data-intensive tasks, such as weather prediction, financial crime detection, fraud investigation, and drug development.

About two-thirds oppose AI in personal areas such as religious guidance and matchmaking.

Younger Audiences Are More Aware

Awareness of AI is highest among adults under 30, with 62% claiming they’ve heard a lot about it, compared to only 32% of those 65 and older.

But this awareness doesn’t lead to optimism. Younger adults are more likely than seniors to believe that AI will negatively impact creative thinking and the development of meaningful relationships.

Creativity Concerns

More Americans believe AI will harm essential human skills than expect it to improve them.

Fifty-three percent think it will reduce creative thinking, and 50% feel it will hinder the ability to connect with others, with only a few expecting improvements.

This suggests labeling alone isn’t sufficient. Human input must also be evident in the work.

Why This Matters

People are generally not against AI, but they do want to know when AI is involved. Being open about AI use can help build trust.

Brands that go the transparent route might find themselves at an advantage in creating connections with their audience.

For more insights, see the full report.


Featured Image: Roman Samborskyi/Shutterstock

Review Signals Gain Influence In Top Google Local Rankings via @sejournal, @MattGSouthern

A new analysis from Search Atlas quantifies the interaction between proximity and reviews in local rankings.

Proximity drives visibility overall, while review signals become stronger differentiators in the highest positions.

This study examines 3,269 businesses across the food, health, law, and beauty sectors.

It shows that for positions 1–21, proximity influences 55% of decisions, while review count accounts for 19%. In the top ten, proximity’s influence decreases to 36%, but review count increases to 26%, with review keyword relevance reaching 22%.

Search Atlas writes:

“Proximity is the top driver of local visibility.”

The study also notes:

“Proximity does not always dominate in elite positions.”

What It Means

You’ll have a better chance of achieving top results by focusing on earning more reviews and naturally incorporating service-specific terms into reviews, rather than relying on your pin’s location on the map.

The report suggests that Google understands review text semantically. Using service-specific language in reviews can help your rankings for high-value queries.

How To Apply This

Think of proximity as your default setting. It’s fixed, so focus your attention on the inputs you can control.

When crafting your review requests, aim for natural, service-specific language. For instance, “best dentist for whitening” tends to work better than “great service.”

Also, ensure that your GBP name and profile details are aligned. The research shows that matching your business name to the search intent, such as “Downtown Dental Clinic” for someone searching “dentist near me,” can make a positive difference.

Sector Behavior

While the overall pattern remains consistent, shoppers can exhibit different behaviors across categories.

Per the report:

  • For Law, proximity tends to be the most important factor, with reviews playing a secondary role.
  • In Beauty, reputation signals are more influential. While proximity is still key, review volume and keywords are also important.
  • When it comes to Food, review content and profile relevance become especially valuable, particularly in crowded markets.
  • Health balances proximity with strong reviews and service alignment in reviews.

Looking Ahead

This study quantifies something practitioners have long suspected: proximity earns you a look, but review content helps you secure the top spot in a close contest.

If you can’t change your location, shape the language around it.

For more data on GBP ranking factors, see the full report.

Methods & Limits

The authors applied XGBoost to grid visibility, GBP metadata, website content, and reviews, achieving a global model that explains approximately 92–93% of the variance.

They emphasize that feature importance indicates correlation, not causation. Additionally, they warn that proximity might be overstated due to fixed grid collection and note that their results represent a snapshot in time.
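The report doesn’t publish its modeling code, so here is only a minimal sketch of what a feature-importance analysis of this kind might look like in Python. The dataset, column names, and target below are hypothetical placeholders for illustration, not the study’s actual data or pipeline.

```python
# Hypothetical sketch: fit an XGBoost model on local-ranking features and
# inspect feature importances. The CSV file and column names are illustrative
# assumptions, not the Search Atlas study's actual data or code.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("local_rankings.csv")  # hypothetical: one row per business/grid point

features = ["proximity_meters", "review_count", "review_keyword_score", "gbp_name_match"]
X, y = df[features], df["rank_position"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# R^2 on held-out data (the report cites roughly 92-93% variance explained globally).
print("R^2:", model.score(X_test, y_test))

# Feature importances show which inputs the model leaned on most.
# As the report cautions, importance indicates correlation, not causation.
for name, score in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```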

Use these insights as guidance, not a strict rulebook.


Featured Image: Roman Samborskyi/Shutterstock

LLMs.txt For AI SEO: Is It A Boost Or A Waste Of Time? via @sejournal, @martinibuster

Many popular WordPress SEO plugins and content management platforms offer the ability to generate an LLMs.txt file for the purpose of improving visibility in AI search platforms. With so many tools offering LLMs.txt functionality, one might come away with the impression that it is the new frontier of SEO. The fact, however, is that LLMs.txt is just a proposal, and no AI platform has signed on to use it.

So why are so many companies rushing to support a standard that no one actually uses? Some SEO tools offer it because their users are asking for it, while many users feel they need to adopt LLMs.txt simply because their favorite tools provide it. A recent Reddit discussion on this very topic is a good place to look for answers.

Third-Party SEO Tools And LLMs.txt

Google’s John Mueller addressed the LLMs.txt confusion in a recent Reddit discussion. The person asking the question was concerned because an SEO tool flagged the file as missing (404). The user had the impression that the tool implied it was needed.

Their question was:

“Why is SEMRush showing that the /llm.txt is a 404? Yes, I. know I don’t have one for the website, but, I’ve heard it’s useless and not needed. Is that true?

If i need it, how do i build it?

Thanks”

The Redditor seems to be confused by the Semrush audit, which appears to imply that they need an LLMs.txt. I don’t know what they saw in the audit, but this is what the official Semrush audit documentation shares about the usefulness of LLMs.txt:

“If your site lacks a clear llms.txt file it risks being misrepresented by AI systems.

…This new check makes it easy to quickly identify any issues that may limit your exposure in AI search results.”

Their documentation says that it’s a "risk" to not have an LLMs.txt, but the fact is that there is absolutely no risk because no AI platform uses it. And that may be why the Redditor was asking the question, "If i need it, how do I build it?"
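For readers wondering what the file even looks like: the llms.txt proposal describes a plain markdown file served at the site root that lists a site’s most important pages for AI systems. A minimal, hypothetical example following the proposed format might look like this (the site name, URLs, and descriptions are placeholders):

```markdown
# Example Site

> One-sentence summary of what the site is about, written for LLMs.

## Docs

- [Getting started](https://example.com/docs/getting-started.md): Setup guide
- [API reference](https://example.com/docs/api.md): Endpoint documentation

## Optional

- [Blog](https://example.com/blog.md): Recent articles, lower priority for LLMs
```

Whether or not the format is followed correctly, no AI platform has committed to reading the file.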

LLMs.txt Is Unnecessary

Google’s John Mueller confirmed that LLMs.txt is unnecessary.

He explained:

“Good catch! Especially in SEO, it’s important to catch misleading & bad information early, before you invest time into doing something unnecessary. Question everything.”

Why AI Platforms May Choose Not To Use LLMs.txt

Aside from John Mueller’s many informal statements about the uselessness of LLMs.txt, I don’t think there are any formal statements from AI platforms as to why they don’t use LLMs.txt and its associated .md markdown files. There are, however, many good reasons why an AI platform would choose not to use it.

The biggest reason not to use LLMs.txt is that it is inherently untrustworthy. On-page content is relatively trustworthy because it is the same for users as it is for an AI bot.

A sneaky SEO could add things to structured data and markdown texts that don’t exist in the regular HTML content in order to get their content to rank better. It is naive to think that an SEO or publisher would not use .md files to trick AI platforms.

For example, unscrupulous SEOs add hidden text and AI prompts within HTML content. A research paper from 2024 (Adversarial Search Engine Optimization for Large Language Models) showed that manipulation of LLMs was possible using a technique they called Preference Manipulation Attacks.

Here’s a quote from that research paper (PDF):

“…an attacker can trick an LLM into promoting their content over competitors. Preference Manipulation Attacks are a new threat that combines elements from prompt injection attacks… Search Engine Optimization (SEO)… and LLM ‘persuasion.’

We demonstrate the effectiveness of Preference Manipulation Attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). Our attacks are black-box, stealthy, and reliably manipulate the LLM to promote the attacker’s content. For example, when asking Bing to search for a camera to recommend, a Preference Manipulation Attack makes the targeted camera 2.5× more likely to be recommended by the LLM.”

The point is that if there’s a loophole to be exploited, someone will think it’s a good idea to take advantage of it, and that’s the problem with creating a separate file for AI chatbots: people will see it as the ideal place to spam LLMs.

It’s safer to rely on on-page content than on a markdown file that can be altered exclusively for AI. This is why I say that LLMs.txt is inherently untrustworthy.

What SEO Plugins Say About LLMs.txt

The makers of the Squirrly WordPress SEO plugin acknowledge that they provided the feature only because their users asked for it, and they assert that it has no influence on AI search visibility.

They write:

“I know that many of you love using Squirrly SEO and want to keep using it. Which is why you’ve asked us to bring this feature.

So we brought it.

But, because I care about you:

– know that LLMs txt will not help you magically appear in AI search. There is currently zero proof that it helps with being promoted by AI search engines.”

They strike a good balance between giving users what they want while also letting them know it’s not actually needed.

While Squirrly is at one end saying (correctly) that LLMs.txt doesn’t boost AI search visibility, Rank Math is on the opposite end saying that AI chatbots actually use the curated version of the content presented in the markdown files.

Rank Math is generally correct in its description of what an LLMs.txt is and how it works, but it overstates the usefulness by suggesting that AI chatbots use the curated LLMs.txt and the associated markdown files.

They write:

“So when an AI chatbot tries to summarize or answer questions based on your site, it doesn’t guess—it refers to the curated version you’ve given it. This increases your chances of being cited properly, represented accurately, and discovered by users in AI-powered results.”

We know for a fact that AI chatbots do not use a curated version of the content. They don’t even use structured data; they just use the regular HTML content.

Yoast SEO is a little more conservative, occupying a position in the center between Squirrly and Rank Math, explaining the purpose of LLMs.txt while hedging with words like “can” and “could” so as not to overstate the benefits. That is a fair way to describe LLMs.txt, although I like Squirrly’s approach that says: you asked for it, here it is, but don’t expect a boost in search performance.

The LLMs.txt Misinformation Loop

The conversation around LLMs.txt has become a self-reinforcing loop: business owners and SEOs are anxious about AI visibility, feel they must do something, and view LLMs.txt as the something they can do.

SEO tool providers are compelled to provide the LLMs.txt option, reinforcing the belief that it’s a necessity, unintentionally perpetuating the cycle of misunderstanding.

Concern over AI visibility has led to the adoption of LLMs.txt, which at this stage is only a proposal for a standard that no AI platform currently uses.

Featured Image by Shutterstock/James Delia

Google Answers SEO Question About Keyword Cannibalization via @sejournal, @martinibuster

Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.

What Is Keyword/Content Cannibalization?

There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.

The problem with saying that something is keyword cannibalization is that it does not identify anything specific about the content that is wrong. That is why people keep asking John Mueller about it: it is an ill-defined and unhelpful SEO concept.

SEO Confusion

The SEO was confused about the recent &num=100 change, where Google is blocking rank trackers from scraping the search results (SERPs) at the rate of 100 results at a time. Some rank trackers are floating the idea of only showing ranking data for the top 20 search results. The change affects rank trackers’ ability to scrape the SERPs but has no effect on Google Search Console other than showing more accurate results.

The SEO was under the wrong impression that Search Console was no longer showing impressions from results beyond the top twenty. This is false.

Mueller didn’t address that question; it is just a misunderstanding on the part of the SEO.

Here is the question that was asked:

“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?

If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”

Different Pages Ranking For Same Query

Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.

Mueller explained:

“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)

All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1”. You need to look at the details, you need to know your site, and your potential users.

Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”

Actual SEO Problems

Multiple pages ranking for the same keyword phrases is not a problem; it’s a good thing and not a reason for concern. Multiple pages not ranking for keywords is a problem.

Here are some real reasons why pages on the same topic may fail to rank:

  • The pages are too long and consequently are unfocused.
  • The pages contain off-topic passages.
  • The pages are insufficiently linked internally.
  • The pages are thin.
  • The pages are virtually duplicates of the other pages in the group.

The above are just a few real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring that they are cannibalizing each other does not identify a real problem. It’s not something to worry about, because keyword cannibalization is just a catchall phrase that masks the actual reasons listed above.

Takeaway

The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.

Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.

What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.

Featured Image by Shutterstock/Roman Samborskyi

Internal WordPress Conflict Spills Out Into The Open via @sejournal, @martinibuster

An internal dispute within the WordPress core contributor team spilled into the open, causing major confusion among people outside the organization. The friction began with a post published more than a week ago and culminated in a remarkable outburst, exposing latent tensions within the core contributor community.

Mary Hubbard Announcement Triggers Conflict

The incident seemingly began with a September 15 announcement by Mary Hubbard, the Executive Director of WordPress. She announced a new Core Program Team meant to improve how Core contributor teams coordinate and collaborate with each other. But this was just the trigger for the conflict, which was part of a longer-term friction.

Hubbard explained the role of the new team:

“The goal of this team is to strengthen coordination across Core, improve efficiency, and make contribution easier. It will focus on documenting practices, surfacing roadmaps, and supporting new teams with clear processes.

The Core Program Team will not set product direction. Each Core team remains autonomous. The Program Team’s role is to listen, connect, and reduce friction so contributors can collaborate more smoothly.”

That announcement was met with the following response by a member of the documentation team (Jenni McKinnon), which was eventually removed:

“For the public record: This Core Program Team announcement was published during an active legal and procedural review that directly affects the structural governance of this project.

I am not only subject to this review—I am one of the appointed officials overseeing it under my legal duty as a recognized lead within SSRO (Strategic Social Resilience Operations). This is a formal governance, safety, and accountability protocol—bound by national and international law—not internal opinion.

Effective immediately:
• This post and the program it outlines are to be paused in full.
• No action is to be taken under the name of this Core Program Team until the review concludes and clearance is formally issued.
• Mary Hubbard holds no valid authority in this matter. Any influence, instruction, or decision traced to her is procedurally invalid and is now part of a legal evidentiary record.
• Direction, oversight, and all official governance relating to this matter is held by SSRO, myself, and verified leadership under secured protocol.

This directive exists to protect the integrity of WordPress contributors, prevent governance sabotage, and ensure future decisions are legally and ethically sound.

Further updates will be provided only through secured channels or when review concludes. Thank you for respecting this freeze and honoring the laws and values that underpin open source.”

The post was followed by astonishment and questions in various Slack and Facebook WordPress groups. The roots of the friction go back to events from a week earlier, centered on documentation team participation.

Documentation Team Participation

A September 10 post by documentation team member Estela Rueda informed the Core contributor community that the WordPress 6.9 release squad is experimenting with a smaller team that excludes documentation leads, with only a temporary “Docs Liaison” in place. Her post explained why this exclusion is a problem, detailed the importance of documentation in the release cycle, and urged that a formal documentation lead role be reinstated in future releases.

Estela Rueda wrote (in the September 10 post):

“The release team does not include representation from the documentation team. Why is this a problem? Because often documentation gets overlooked in release planning and project-wide coordination: Documentation is not a “nice-to-have,” it is a survival requirement. It’s not something we might do if someone has time; it’s something we must do — or the whole thing breaks down at scale. Removing the role from the release squad, we are not just sending the message that documentation is not important, we are showing new contributors that working on docs will never get them to the top of the credits page, therefore showing that we don’t even appreciate contributing to the Docs.”

Jenni McKinnon, who is a member of the docs team, responded with her opinions:

“This approach isn’t in line with genuine open-source values — it’s exclusionary and risks reinforcing harmful, cult-like behaviors.

By removing the Docs Team from the release squad under the guise of “reducing overhead,” this message sends a stark signal: documentation is not essential. That’s not just unfair — it actively erodes the foundations of transparency, contributor morale, and equitable participation.”

She added further comments, culminating in the post below that accused WordPress Executive Director Mary Hubbard of being behind a shift toward “top-down” control:

“While this post may appear collaborative on the surface, it’s important to state for the record — under Chatham House Rule, and in protection of those who have been directly impacted — that this proposal was pushed forward by Mary Hubbard, despite every Docs Team lead, and multiple long-time contributors, expressing concerns about the ethics, sustainability, and power dynamics involved.

Framing this as ‘streamlining’ or ‘experimenting’ is misleading. What’s happening is a shift toward top-down control and exclusion, and it has already resulted in real harm, including abusive behavior behind the scenes.”


Documentation Team Member Asked To Step Away

Today’s issue appears to have been triggered by a post published earlier in the day announcing that Jenni McKinnon was asked to “step away.”

Milana Cap wrote a post today titled “The stepping away of a team member” that explained why McKinnon was asked to step away:

“The Documentation team’s leadership has asked Jenni McKinnon to step away from the team.

Recent changes in the structure of the WordPress release squad started a discussion about the role of the Documentation team in documenting the release. While the team was working with the Core team, the release squad, and Mary Hubbard to find a solution for this and future releases, Jenni posted comments that were out of alignment with the team, including calls for broad changes across the project and requests to remove certain members from leadership roles.

This ran counter to the Documentation team’s intentions. Docs leadership reached out privately in an effort to de-escalate the situation and asked Jenni to stop posting such comments, but this behaviour did not stop. As a result, the team has decided to ask her to step away for a period of time to reassess her involvement. We will work with her to explore rejoining the team in the future, if it aligns with the best outcomes for both her and the team.”

And that post may have been what precipitated today’s blow-up in the comments section of Mary Hubbard’s post.

Zooming Out: The Big Picture

What happened today is, on its own, an isolated incident. But some in the WordPress community have confided their opinion that WordPress core’s technical debt has grown larger and expressed concern that the big picture is being ignored. Separately, in comments on her September 10 post (Docs team participation in WordPress releases), Estela Rueda alluded to the issue of burnout among WordPress contributors:

“…the number of contributors increases in waves depending on the releases or any special projects we may have going. The ones that stay longer, we often feel burned out and have to take breaks.”

Taken together, to an outsider, today’s friction contributes to the appearance that cracks are starting to show in the WordPress project.

DOJ Seeks Google Ad Manager Break Up As Remedies Trial Begins via @sejournal, @MattGSouthern

Google returns to court on Monday for the remedies phase of the Department of Justice’s ad-tech antitrust case, where the government is asking the judge to order a divestiture of Google Ad Manager.

The remedies trial follows a ruling that found Google illegally monopolized the publisher ad server and ad exchange markets, while rejecting claims about advertiser ad networks and Google’s past acquisitions.

In a statement published today, Google said it will appeal the earlier decision and argued the DOJ’s proposed remedies “go far beyond the Court’s liability decision and the law.”

What The DOJ Is Seeking

The Justice Department will seek structural remedies, which could include selling parts of Google’s ad-tech stack.

Based on reports and filings, the DOJ appears to be pushing for a divestiture of AdX, and possibly DFP, which are now combined within Google Ad Manager.

The remedies trial is scheduled to start Monday in Alexandria, Virginia, before U.S. District Judge Leonie M. Brinkema.

Google’s Counter

Google says a breakup would disrupt publishers and raise costs for advertisers.

The company proposes a behavioral fix focused on interoperability rather than divestiture.

In Google’s words:

“DOJ’s proposed changes go far beyond the Court’s liability decision and the law, and risk harming businesses across the country.”

“We propose building on Ad Manager’s interoperability, letting publishers use third-party tools to access our advertiser bids in real-time.”

These elements reflect Google’s May filing, which proposed making AdX’s real-time bids available to rival ad servers and phasing out Unified Pricing Rules for open-web display.

What The Court Already Decided

Judge Brinkema’s April opinion found Google violated the Sherman Act in the publisher ad server and ad exchange markets and unlawfully tied DFP and AdX.

The court didn’t find a monopoly in advertiser ad networks and rejected claims tied to Google’s acquisitions.

Why This Matters

Should the court decide on divestiture, you might see changes in how open-web display inventory is auctioned and served, along with costs for transitioning off integrated tools.

If the judge backs Google’s interoperability plan, you can expect required access to real-time bids and rule changes that could make multi-stack setups easier without a corporate split.

Looking Ahead

Google plans to appeal the liability decision, so any ordered remedies may be delayed until the appeal is reviewed.


Featured Image: Roman Samborskyi/Shutterstock

SEO For Paws Live Stream Conference: Free Tickets Out Now via @sejournal, @theshelleywalsh

The next SEO For Paws will be held on Sept. 25, 2025.

The live stream features a stellar speaker list that includes some of the industry’s best SEO professionals and personalities, including Andrey Lipattsev, David Carrasco, Judith Lewis, and Jamie Indigo.

SEO for Paws is a live-streamed fundraiser founded by Anton Shulke, an expert at organizing events, to help a charity close to his heart.

Anton has tirelessly continued his support for his favorite charity, which aids the many pets that were left behind in Kyiv after war broke out in Ukraine. The previous event in March generated approximately $7,000 for the worthy cause, with all funds going straight to the shelters where they’re needed.

Anton is well-known for his love of cats. Dynia, who traveled across Europe with Anton’s family after escaping Kyiv, is a regular feature on his social media channels.

Photo of Dynia the cat. Image from Anton Shulke, September 2025

One Cat Turned Into A Shelter Of 50

Among the many pet shelters that SEO For Paws has helped is an apartment run by Alya, who cares for up to 50 animals.

Alya has always cared for animals, and meeting an old, sick cat she called Fox was the start of what became an organized shelter.

In 2016, she started with five cats living in her apartment, and today has 50 alongside 15 of her grandmother’s cats.

There’s a lot involved in caring for this many animals, including feeding, cleaning, washing litter boxes, replacing litter, and performing hygiene or medical procedures when needed.

Running a home-based shelter is not easy. Sometimes it’s sad, sometimes it’s exhausting. But Alya says that looking around at all the little whiskered faces, the furry bodies sprawled across the furniture, makes it worth it. Giving them a life of warmth, food, and love is worth every challenge.

To keep supporting individuals like Alya, we need your help. You can donate via Anton’s Buy Me a Coffee.

SEO For Paws – Cat Lovers, Dog Lovers, And SEO

The upcoming “SEO for Paws” livestream aims to continue fundraising efforts. The event, which runs from 12:00 p.m. to 4:30 p.m. ET, will offer actionable SEO and digital marketing advice from experts while raising money for the animal shelters.

Headline speakers who have donated their time to support his cause include Andrey Lipattsev, David Carrasco, Olga Zarr, Judith Lewis, James Wirth, Zach Chahalis, Jamie Indigo, and Lee Elliott.

Attendance is free, but participants are encouraged to donate to help the charity.

Event Highlights

  • Date and Time: September 25, 2025, from 12:00 p.m. to 4:30 p.m. ET.
  • Access: Free registration with the option to join live and participate in Q&A sessions; a recording will be made available on YouTube.
  • Speakers: The live stream will feature SEO and digital marketing experts, who will share actionable insights.

How To Make A Difference

The “SEO for Paws” live stream is an opportunity to make a meaningful difference while listening to excellent speakers.

All money raised is donated to help cats and dogs in Ukraine.

You can register for the event here.

And you can help support the charity by buying coffee.

Search Engine Journal is proud to be sponsoring the event.



Featured Image: Anton Shulke/SEO For Paws