LLMs.txt For AI SEO: Is It A Boost Or A Waste Of Time? via @sejournal, @martinibuster

Many popular WordPress SEO plugins and content management platforms offer the ability to generate LLMs.txt for the purpose of improving visibility in AI search platforms. With so many popular SEO plugins and CMS platforms offering LLMs.txt functionality, one might come away with the impression that it is the new frontier of SEO. The fact, however, is that LLMs.txt is just a proposal, and no AI platform has signed on to use it.

So why are so many companies rushing to support a standard that no one actually uses? Some SEO tools offer it because their users are asking for it, while many users feel they need to adopt LLMs.txt simply because their favorite tools provide it. A recent Reddit discussion on this very topic is a good place to look for answers.

Third Party SEO Tool And LLMs.txt

Google’s John Mueller addressed the LLMs.txt confusion in a recent Reddit discussion.  The person asking the question was concerned because an SEO tool flagged it as 404, missing. The user had the impression that the tool implied it was needed.

Their question was:

“Why is SEMRush showing that the /llm.txt is a 404? Yes, I. know I don’t have one for the website, but, I’ve heard it’s useless and not needed. Is that true?

If i need it, how do i build it?

Thanks”

The Redditor seems to be confused by the Semrush audit that appears to imply that they need an LLMs.txt. I don’t know what they saw in the audit but this is what the official Semrush audit documentation shares about the usefulness of LLMs.txt:

“If your site lacks a clear llms.txt file it risks being misrepresented by AI systems.

…This new check makes it easy to quickly identify any issues that may limit your exposure in AI search results.”

Their documentation says that it’s a “risk” to not have an LLMs.txt but the fact is that there is absolutely no risk because no AI platform uses it. And that may be why the Redditor was asking the question, “If i need it, how do I build it?”

LLMs.txt Is Unnecessary

Google’s John Mueller confirmed that LLMs.txt is unnecessary.

He explained:

“Good catch! Especially in SEO, it’s important to catch misleading & bad information early, before you invest time into doing something unnecessary. Question everything.”

Why AI Platforms May Choose To Not Use LLMs.txt

Aside from John Mueller’s many informal statements about the uselessness of LLMs.txt, I don’t think there are any formal statements from AI platforms as to why they don’t use LLMs.txt and their associated .md markdown texts. There are, however, many good reasons why an AI platform would choose not to use it.

The biggest reason not to use LLMs.txt is that it is inherently untrustworthy. On-page content is relatively trustworthy because it is the same for users as it is for an AI bot.

A sneaky SEO could add things to structured data and markdown texts that don’t exist in the regular HTML content in order to get their content to rank better. It is naive to think that an SEO or publisher would not use .md files to trick AI platforms.

For example, unscrupulous SEOs add hidden text and AI prompts within HTML content. A research paper from 2024 (Adversarial Search Engine Optimization for Large Language Models) showed that manipulation of LLMs was possible using a technique they called Preference Manipulation Attacks.

Here’s a quote from that research paper (PDF):

“…an attacker can trick an LLM into promoting their content over competitors. Preference Manipulation Attacks are a new threat that combines elements from prompt injection attacks… Search Engine Optimization (SEO)… and LLM ‘persuasion.’

We demonstrate the effectiveness of Preference Manipulation Attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). Our attacks are black-box, stealthy, and reliably manipulate the LLM to promote the attacker’s content. For example, when asking Bing to search for a camera to recommend, a Preference Manipulation Attack makes the targeted camera 2.5× more likely to be recommended by the LLM.”

The point is that if there’s a loophole to be exploited, someone will think it’s a good idea to take advantage of it, and that’s the problem with creating a separate file for AI chatbots: people will see it as the ideal place to spam LLMs.

It’s safer to rely on on-page content than on a markdown file that can be altered exclusively for AI. This is why I say that LLMs.txt is inherently untrustworthy.

What SEO Plugins Say About LLMs.txt

The makers of Squirrly WordPress SEO plugin acknowledge that they provided the feature only because their users asked for it, and they assert that it has no influence on AI search visibility.

They write:

“I know that many of you love using Squirrly SEO and want to keep using it. Which is why you’ve asked us to bring this feature.

So we brought it.

But, because I care about you:

– know that LLMs txt will not help you magically appear in AI search. There is currently zero proof that it helps with being promoted by AI search engines.”

They strike a good balance between giving users what they want while also letting them know it’s not actually needed.

While Squirrly is at one end saying (correctly) that LLMs.txt doesn’t boost AI search visibility, Rank Math is on the opposite end saying that AI chatbots actually use the curated version of the content presented in the markdown files.

Rank Math is generally correct in its description of what an LLMs.txt is and how it works, but it overstates the usefulness by suggesting that AI chatbots use the curated LLMs.txt and the associated markdown files.

They write:

“So when an AI chatbot tries to summarize or answer questions based on your site, it doesn’t guess—it refers to the curated version you’ve given it. This increases your chances of being cited properly, represented accurately, and discovered by users in AI-powered results.”

We know for a fact that AI chatbots do not use a curated version of the content. They don’t even use structured data; they just use the regular HTML content.

Yoast SEO is a little more conservative, occupying a position in the center between Squirrly and Rank Math, explaining the purpose of LLMs.txt but not overstating the benefits by hedging with words like “can” and “could.” That is a fair way to describe LLMs.txt, although I like Squirrly’s approach that says, you asked for it, here it is, but don’t expect a boost in search performance.

The LLMs.txt Misinformation Loop

The conversation around LLMs.txt has become a self-reinforcing loop: business owners and SEOs feel anxiety over AI visibility and feel they must do something, viewing LLMs.txt as the something they can do.

SEO tool providers are compelled to provide the LLMs.txt option, reinforcing the belief that it’s a necessity, unintentionally perpetuating the cycle of misunderstanding.

Concern over AI visibility has led to the adoption of LLMs.txt which at this stage is only a proposal for a standard that no AI platform currently uses.

Featured Image by Shutterstock/James Delia

Google Answers SEO Question About Keyword Cannibalization via @sejournal, @martinibuster

Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.

What Is Keyword/Content Cannibalization?

There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.

The problem with saying that something is keyword cannibalization is that it does not identify something specific about the content that is wrong. That is why there are people asking John Mueller about it, simply because it is an ill-defined and unhelpful SEO concept.

SEO Confusion

The SEO was confused about the recent &num=100 change, where Google is blocking rank trackers from scraping the search results (SERPs) at the rate of 100 results at a time. Some rank trackers are floating the idea of only showing ranking data for the top 20 search results. This affects rank trackers’ ability to scrape the SERPs and has no effect on Google Search Console other than to show more accurate results.

The SEO was under the wrong impression that Search Console was no longer showing impressions from results beyond the top twenty. This is false.

Mueller didn’t address that question; it is just a misunderstanding on the part of the SEO.

Here is the question that was asked:

“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?

If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”

Different Pages Ranking For Same Query

Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.

Mueller explained:

“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)

All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1″. You need to look at the details, you need to know your site, and your potential users.

Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”

Actual SEO Problems

Multiple pages ranking for the same keyword phrases is not a problem; it’s a good thing and not a reason for concern. Multiple pages not ranking for keywords is a problem.

Here are some real reasons why pages on the same topic may fail to rank:

  • The pages are too long and consequently are unfocused.
  • The pages contain off-topic passages.
  • The pages are insufficiently linked internally.
  • The pages are thin.
  • The pages are virtually duplicates of the other pages in the group.

The above are just a few real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring they are cannibalizing each other is not real. It’s not something to worry about because keyword cannibalization is just a catchall phrase that masks all the actual reasons I just listed.

Takeaway

The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.

Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.

What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.

Featured Image by Shutterstock/Roman Samborskyi

Internal WordPress Conflict Spills Out Into The Open via @sejournal, @martinibuster

An internal dispute within the WordPress core contributor team spilled into the open, causing major confusion among people outside the organization. The friction began with a post from more than a week ago and culminated in a remarkable outburst, exposing latent tensions within the core contributor community.

Mary Hubbard Announcement Triggers Conflict

The incident seemingly began with a September 15 announcement by Mary Hubbard, the Executive Director of WordPress. She announced a new Core Program Team that is meant to improve how Core contributor groups coordinate with each other and improve collaboration between Core contributor teams. But this was just the trigger for the conflict, which was actually part of a longer-term friction.

Hubbard explained the role of the new team:

“The goal of this team is to strengthen coordination across Core, improve efficiency, and make contribution easier. It will focus on documenting practices, surfacing roadmaps, and supporting new teams with clear processes.

The Core Program Team will not set product direction. Each Core team remains autonomous. The Program Team’s role is to listen, connect, and reduce friction so contributors can collaborate more smoothly.”

That announcement was met with the following response by a member of the documentation team (Jenni McKinnon), which was eventually removed:

“For the public record: This Core Program Team announcement was published during an active legal and procedural review that directly affects the structural governance of this project.

I am not only subject to this review—I am one of the appointed officials overseeing it under my legal duty as a recognized lead within SSRO (Strategic Social Resilience Operations). This is a formal governance, safety, and accountability protocol—bound by national and international law—not internal opinion.

Effective immediately:
• This post and the program it outlines are to be paused in full.
• No action is to be taken under the name of this Core Program Team until the review concludes and clearance is formally issued.
• Mary Hubbard holds no valid authority in this matter. Any influence, instruction, or decision traced to her is procedurally invalid and is now part of a legal evidentiary record.
• Direction, oversight, and all official governance relating to this matter is held by SSRO, myself, and verified leadership under secured protocol.

This directive exists to protect the integrity of WordPress contributors, prevent governance sabotage, and ensure future decisions are legally and ethically sound.

Further updates will be provided only through secured channels or when review concludes. Thank you for respecting this freeze and honoring the laws and values that underpin open source.”

The post was followed by astonishment and questions in various Slack and Facebook WordPress groups. The roots of the friction begin with events from a week ago centered on documentation team participation.

Documentation Team Participation

A September 10 post by documentation team member Estela Rueda informed the Core contributor community that the WordPress 6.9 release squad is experimenting with a smaller team that excludes documentation leads, with only a temporary “Docs Liaison” in place. Her post explained why this exclusion is a problem, detailed the importance of documentation in the release cycle, and urged that a formal documentation lead role be reinstated in future releases.

Estela Rueda wrote (in the September 10 post):

“The release team does not include representation from the documentation team. Why is this a problem? Because often documentation gets overlooked in release planning and project-wide coordination: Documentation is not a “nice-to-have,” it is a survival requirement. It’s not something we might do if someone has time; it’s something we must do — or the whole thing breaks down at scale. Removing the role from the release squad, we are not just sending the message that documentation is not important, we are showing new contributors that working on docs will never get them to the top of the credits page, therefore showing that we don’t even appreciate contributing to the Docs.”

Jenni McKinnon, who is a member of the docs team, responded with her opinions:

“This approach isn’t in line with genuine open-source values — it’s exclusionary and risks reinforcing harmful, cult-like behaviors.

By removing the Docs Team from the release squad under the guise of “reducing overhead,” this message sends a stark signal: documentation is not essential. That’s not just unfair — it actively erodes the foundations of transparency, contributor morale, and equitable participation.”

She added further comments, culminating in the post below that accused WordPress Executive Director Mary Hubbard of being behind a shift toward “top-down” control:

“While this post may appear collaborative on the surface, it’s important to state for the record — under Chatham House Rule, and in protection of those who have been directly impacted — that this proposal was pushed forward by Mary Hubbard, despite every Docs Team lead, and multiple long-time contributors, expressing concerns about the ethics, sustainability, and power dynamics involved.

Framing this as ‘streamlining’ or ‘experimenting’ is misleading. What’s happening is a shift toward top-down control and exclusion, and it has already resulted in real harm, including abusive behavior behind the scenes.”

Screenshot Of September 10 Comment

Documentation Team Member Asked To Step Away

Today’s issue appears to have been triggered by a post from earlier today announcing that Jenni McKinnon was asked to “step away.”

Milana Cap wrote a post today titled, “The stepping away of a team member” that explained why McKinnon was asked to step away:

“The Documentation team’s leadership has asked Jenni McKinnon to step away from the team.

Recent changes in the structure of the WordPress release squad started a discussion about the role of the Documentation team in documenting the release. While the team was working with the Core team, the release squad, and Mary Hubbard to find a solution for this and future releases, Jenni posted comments that were out of alignment with the team, including calls for broad changes across the project and requests to remove certain members from leadership roles.

This ran counter to the Documentation team’s intentions. Docs leadership reached out privately in an effort to de-escalate the situation and asked Jenni to stop posting such comments, but this behaviour did not stop. As a result, the team has decided to ask her to step away for a period of time to reassess her involvement. We will work with her to explore rejoining the team in the future, if it aligns with the best outcomes for both her and the team.”

And that post may have been what precipitated today’s blow-up in the comments section of Mary Hubbard’s post.

Zooming Out: The Big Picture

What happened today is an isolated incident. But some in the WordPress community have confided their opinion that the WordPress core technical debt has grown larger and expressed concern that the big picture is being ignored. Separately, in comments on her September 10 post (Docs team participation in WordPress releases), Estela Rueda alluded to the issue of burnout among WordPress contributors:

“…the number of contributors increases in waves depending on the releases or any special projects we may have going. The ones that stay longer, we often feel burned out and have to take breaks.”

Taken together, to an outsider, today’s friction contributes to the appearance of cracks starting to show in the WordPress project.

DOJ Seeks Google Ad Manager Break Up As Remedies Trial Begins via @sejournal, @MattGSouthern

Google returns to court on Monday for the remedies phase of the Department of Justice’s ad-tech antitrust case, where the government is asking the judge to order a divestiture of Google Ad Manager.

The remedies trial follows a ruling that found Google illegally monopolized the publisher ad server and ad exchange markets, while rejecting claims about advertiser ad networks and Google’s past acquisitions.

In a statement published today, Google said it will appeal the earlier decision and argued the DOJ’s proposed remedies “go far beyond the Court’s liability decision and the law.”

What The DOJ Is Seeking

The Justice Department will seek structural remedies, which could include selling parts of Google’s ad-tech stack.

Based on reports and filings, the DOJ appears to be pushing for a divestiture of AdX, and possibly DFP, which are now combined within Google Ad Manager.

The remedies trial is scheduled to start Monday in Alexandria, Virginia, before U.S. District Judge Leonie M. Brinkema.

Google’s Counter

Google says a breakup would disrupt publishers and raise costs for advertisers.

The company proposes a behavioral fix focused on interoperability rather than divestiture.

In Google’s words:

“DOJ’s proposed changes go far beyond the Court’s liability decision and the law, and risk harming businesses across the country.”

“We propose building on Ad Manager’s interoperability, letting publishers use third-party tools to access our advertiser bids in real-time.”

These elements reflect Google’s May filing, which proposed making AdX’s real-time bids available to rival ad servers and phasing out Unified Pricing Rules for open-web display.

What The Court Already Decided

Judge Brinkema’s April opinion found Google violated the Sherman Act in the publisher ad server and ad exchange markets and unlawfully tied DFP and AdX.

The court didn’t find a monopoly in advertiser ad networks and rejected claims tied to Google’s acquisitions.

Why This Matters

Should the court decide on divestiture, you might see changes in how open-web display inventory is auctioned and served, along with costs for transitioning off integrated tools.

If the judge backs Google’s interoperability plan, you can expect required access to real-time bids and rule changes that could make multi-stack setups easier without a corporate split.

Looking Ahead

Google plans to appeal the liability decision, so any ordered remedies may be delayed until the appeal is reviewed.


Featured Image: Roman Samborskyi/Shutterstock

SEO For Paws Live Stream Conference: Free Tickets Out Now via @sejournal, @theshelleywalsh

The next SEO For Paws will be held on Sept. 25, 2025.

The live stream features a stellar speaker list that includes some of the industry’s best SEO professionals and personalities, including Andrey Lipattsev, David Carrasco, Judith Lewis, and Jamie Indigo.

SEO for Paws, is a live-streamed fundraiser founded by Anton Shulke, an expert at organizing events, to help a charity close to his heart.

Anton has tirelessly continued his support for his favorite charity, which aids the many pets that were left behind in Kyiv after war broke out in Ukraine. The previous event in March managed to generate approx $7,000 for the worthy cause, with all funds going straight to the shelters where it’s needed.

Anton is well-known for his love of cats. Dynia, who traveled across Europe with Anton’s family after escaping Kyiv, is a regular feature on his social media channels.

a photo of Dynia the catImage from Anton Shulke, September 2025

One Cat Turned Into A Shelter Of 50

Among the many pet shelters that SEO For Paws has helped is an apartment run by Alya, who cares for up to 50 animals.

Alya has always cared for animals, and meeting an old, sick cat she called Fox was the start of becoming an organized shelter.

In 2016, she started with five cats living in her apartment, and today has 50 alongside 15 of her grandmother’s cats.

There’s a lot involved in care for this many animals, including the feeding, cleaning, washing litter boxes, replacing litter, and performing hygiene or medical procedures when needed.

Running a home-based shelter is not easy. Sometimes it’s sad, sometimes it’s exhausting. But Alya says that looking around at all the little whiskered faces, the furry bodies sprawled across the furniture, makes it worth it. Giving them a life of warmth, food, and love is worth every challenge.

To keep supporting individuals like Alya, we need your help. You can donate via Anton’s Buy Me a Coffee.

SEO For Paws – Cat Lovers, Dog Lovers, And SEO

The upcoming “SEO for Paws” livestream aims to continue fundraising efforts. The event, which runs from 12:00 p.m. to 4:30 p.m. ET, will offer actionable SEO and digital marketing advice from experts while raising money for the animal shelters.

Headline speakers who have donated their time to support his cause include Andrey Lipattsev, David Carrasco, Olga Zarr, Judith Lewis, James Wirth, Zach Chahalis, Jamie Indigo, and Lee Elliott.

Attendance is free, but participants are encouraged to donate to help the charity.

Event Highlights

  • Date and Time: September 25, 2025, from 12:00 p.m. to 4:30 p.m. ET.
  • Access: Free registration with the option to join live, participate in Q&A sessions, and a recording will be made available on YouTube.
  • Speakers: The live stream will feature SEO and digital marketing experts, who will share actionable insights.

How To Make A Difference

The “SEO for Paws” live stream is an opportunity to make a meaningful difference while listening to excellent speakers.

All money raised is donated to help cats and dogs in Ukraine.

You can register for the event here.

And you can help support the charity by buying coffee.

Search Engine Journal is proud to be sponsoring the event.

More Resources:


Featured Image: Anton Shulke/SEO For Paws

The Future Of Rank Tracking Can Go Two Ways via @sejournal, @martinibuster

Digital marketers are providing more evidence that Google’s disabling of the num=100 search parameter correlates exactly with changes in Google Search Console impression rates. What looked like reliable data may, in fact, have been a distorted picture shaped by third-party SERP crawlers. It’s becoming clear that squeezing meaning from the top 100 search results is increasingly a thing of the past and that this development may be a good thing for SEO.

Num=100 Search Parameter

Google recently disabled the use of a search parameter that caused web searches to display 100 organic search results for a given query. Search results keyword trackers depended on this parameter for efficiently crawling Google’s search results. By eliminating the search parameter, Google is forcing data providers into an unsustainable position that requires them to scale their crawling by ten times in order to extract the top 100 search results.

Rank Tracking: Fighting To Keep It Alive

Mike Roberts, founder of SpyFu, wrote a defiant post saying that they will find a way to continue bringing top 100 data to users.

His post painted an image of an us versus them moment:

“We’re fighting to keep it alive. But this hits hard – delivering is very expensive.

We might even lose money trying to do this… but we’re going to try anyway.

If we do this alone, it’s not sustainable. We need your help.

This isn’t about SpyFu vs. them.

If we can do it – the way the ecosystem works – all your favorite tools will be able to do it. If nothing else, then by using our API (which has 100% of our keyword and ranking data).”

Rank Tracking: Where The Wind Is Blowing

Tim Soulo, CMO of Ahrefs, sounded more pragmatic about the situation, tweeting that the future of ranking data will inevitably be focused on the Top 20 search results.

Tim observed:

“Ramping up the data pulls by 10x is just not feasible, given the scale at which all SEO tools operate.

So the question is:

‘Do you need keyword data below Top 20?’

Because most likely it’s going to come at a pretty steep premium going forward.

Personally, I see it this way:

▪️ Top 10 – is where all the traffic is at. Definitely a must-have.

▪️ Top 20 – this is where “opportunity” is at, both for your and your competitors. Also must-have.

▪️ Top 21-100 – IMO this is merely an indication that a page is “indexed” by Google. I can’t recall any truly actionable use cases for this data.”

Many of the responses to his tweet were in agreement, as am I. Anything below the top 20, as Tim suggested, only tells you that a site is indexed. The big picture, in my opinion, is that it doesn’t matter whether a site is ranked in position 21 or 91; they’re pretty much equivalently suffering from serious quality or relevance issues that need to be worked out. Any competitors in that position shouldn’t be something to worry about because they are not up and coming; they’re just limping their way in the darkness of page three and beyond.

Page two positions, however, provide actionable and useful information because they show that a page is relevant for a given keyword term but that the sites ranked above it are better in terms of quality, user experience, and/or relevance. They could even be as good as what’s on page one but, in my experience, it’s less about links and more often it’s about user preference for the sites in the top ten.

Distorted Search Console Data

It’s becoming clear that search results scraping distorted Google’s Search Console data. Users are reporting that Search Console keyword impression data is significantly lower since Google blocked the Num=100 search parameter. Impressions are the times when Google shows a web page in the search results, meaning that the site is ranking for a given keyword phrase.

SEO and web developer Tyler Gargula (LinkedIn profile) posted the results of an analysis of over three hundred Search Console properties, showing that 87.7% of the sites experienced drops in impressions. 77.6% of the sites in the analysis experienced losses in query counts, losing visibility for unique keyword phrases.

Tyler shared:

“Keyword Length: Short-tail and mid-tail keywords experienced the largest drops in impressions, with single word keywords being much lower than I anticipated. This could be because short and mid-tail keywords are popular across the SEO industry and easier to track/manage within popular SEO tracking tools.

Keyword Ranking Positions: There has been reductions in keywords ranking on page 3+, and in turn an increase in keywords ranking in the top 3 and page 1. This suggests keywords are now more representative of their actual ranking position, versus receiving skewed positions from num=100.”

Google Is Proactively Fighting SERP Scraping

Disabling the num=100 search parameter is just the prelude to a bigger battle. Google is hiring an engineer to assist in statistical analysis of SERP patterns and to work together with other teams to develop models for combating scrapers. It’s obvious that this activity negatively affects Search Console data, which in turn makes it harder for SEOs to get an accurate reading on search performance.

What It Means For The Future

The num=100 parameter was turned off in a direct attack on the scraping that underpinned the rank-tracking industry. Its removal is forcing the search industry to reconsider the value of data beyond the top 20 results. This may be a turning point toward better attribution and and clearer measures of relevance.

Featured Image by Shutterstock/by-studio

Google Brings AI Mode To Chrome’s Address Bar via @sejournal, @MattGSouthern

Google is rolling out AI Mode to the address bar in Chrome for U.S. users.

This move is part of a series of AI updates, including Gemini in Chrome, page-aware question prompts, improved scam protection, and instant password changes.

See Google’s launch video below:

What’s New

Google Chrome will enable you to access AI Mode directly from the search bar on desktop, ask follow-up questions, and explore the web more in-depth.

Additionally, Google is introducing contextual prompts that are connected to the page you’re currently viewing. When you use these prompts, an AI Overview will appear on the right side of the screen, allowing you to continue using AI Mode without leaving the page.

For now, this feature is available in English in the U.S., with plans to expand internationally.

Gemini In Chrome

Gemini in Chrome is rollout out to to Mac and Windows users in the U.S.

You can ask it to clarify complex information across multiple tabs, summarize open tabs, and consolidate details into a single view.

With integrations with Calendar, YouTube, and Maps, you can jump to a specific point in a video, get location details, or set meetings without switching tabs.

Google plans to add agentic capabilities in the coming months. Gemini will be able to perform tasks for you on the web, such as booking appointments or placing orders, with the option to stop it at any time.

Regarding availability, Google notes that business access will be available “in the coming weeks” through Workspace with enterprise-grade protections.

Security Enhancements

Enhanced protection in Safe Browsing now uses Gemini Nano to detect tech-support-style scams, making browsing safer. Google is also working on extending this protection to block fake virus alerts and fake giveaways.

Chrome is using AI to help reduce annoying spammy site notifications and to lower the prominence of intrusive permission prompts.

Additionally, Chrome will soon serve as a password helper, automatically changing compromised passwords with a single click on supported sites.

Why This Matters

Adding AI Mode to the omnibox makes it easier to ask conversational questions and follow-ups.

Content that answers related questions and compares options side by side may align better with these types of searches. Page-aware prompts also create new ways to explore related topics from article pages, which could change how people click through to other content.

Looking Ahead

Google frames this as “the biggest upgrade to Chrome in its history,” with staged rollouts and more countries and languages to come.


Featured Image: Photo Agency / Shutterstock

Google Introduces Three-Tier Store Widget Program For Retailers via @sejournal, @MattGSouthern

Google is expanding its store widget program into three eligibility-based tiers that you can embed on your site to display ratings, policies, and reviews, helping customers make informed decisions.

Google announces:

“When shoppers are online, knowing which store to buy from can be a tough decision. The new store widget powered by Google brings valuable information directly to a merchant’s website, which can turn shopper hesitation into sales. It addresses two fundamental challenges ecommerce retailers face: boosting visibility and establishing legitimacy.”

What’s New

Google now offers three versions of the widget, shown based on your current standing in Merchant Center: Top Quality store widget, Store rating widget, and a generic store widget for stores still building reputation.

This replaces the earlier single badge and expands access to more merchants.

Google’s announcement continues:

“It highlights your store’s quality to shoppers by providing visual indicators of excellence and quality. Besides your store rating on Google, the widget can also display other important details, like shipping and return policies, and customer reviews. The widget is displayed on your website and stays up to date with your current store quality ratings.

Google says sites using the widget saw up to 8% higher sales within 90 days compared to similar businesses without it.

Implementation

You add the widget by embedding Google’s snippet on any page template, similar to adding analytics or chat tools.

It’s responsive and updates automatically from your Merchant Center data, which means minimal maintenance after setup.

Check eligibility in Google Merchant Center, then place your badge wherever reassurance can influence conversion.

Context

Google first announced a store widget last year. Today’s update introduces the three-tier structure, which is why Google is framing it as a “new” development.

Why This Matters

Bringing trusted signals from Google onto your product and checkout pages can reduce hesitation and help close sales that would otherwise bounce.

You can surface store rating, shipping and returns, and recent reviews without manual updates, since the widget reflects your current store quality data from Google.


Featured Image: Roman Samborskyi/Shutterstock

Google Discover Adds Social Media Posts & Follow Buttons via @sejournal, @MattGSouthern

Google is updating Discover with two changes that could change how your content finds readers.

You can now follow publishers directly in Discover, and Google will start showing social posts from platforms like X, Instagram, and YouTube Shorts in the feed.

Rollout begins today, with social integrations coming in the weeks ahead.

Layla Amjadi, Senior Director of Product Management for Search, wrote:

“We’re updating Discover to make it even easier to find, follow and engage with the content and creators you care about most. From creators to news publishers, Discover will be a more helpful and personalized jumping-off point for exploring the web.

What’s New Today

Signed-in users can follow publishers and creators right inside Discover.

When someone taps a name in Discover, they’ll see a dedicated space with that source’s content across formats. If they like what they see, they can follow to get more from that source over time.

Here’s an example of a dedicated publisher space in Google Discover:

Screenshot from: : blog.google/products/search/discover-updates-september-2025, September 2025.

Google’s announcement reads:

“Now, you can “follow” publishers or creators right on Discover to see more of their content. You can preview a publisher or creator’s content — including articles, YouTube videos and posts from social channels — before you follow. Just tap their name to find a new, dedicated space for their content.”

Social Posts In Discover

Google Discover will soon begin showing posts from X and Instagram, along with YouTube Shorts, with more platforms planned.

Google’s announcement reads:

“In the coming weeks, you’ll start to see more types of content in Discover from publishers and creators across the web, such as posts from X and Instagram and YouTube Shorts, with more platforms to come. In our research, people told us they enjoyed seeing a mix of content in Discover, including videos and social posts, in addition to articles.”

Here’s an example of an in-feed social media post in Google Discover:

Screenshot from: : blog.google/products/search/discover-updates-september-2025, September 2025.

Why This Matters

If you publish across web + social, your posts could reach Discover audiences without an extra tap into each platform.

The Follow button gives you a direct, opt-in signal that may help stabilize Discover traffic over time.

This follows Google’s preferred sources feature in Top Stories, which lets people pick news outlets they want to see more often for timely queries.

Looking Ahead

The follow button is available starting today for signed-in users. Social posts will appear in the coming weeks as the integration rolls out.

Together, these updates point to more user-directed personalization across Search surfaces.