Why Do Web Standards Matter? Google Explains SEO Benefits via @sejournal, @MattGSouthern

Google Search Relations team members recently shared insights about web standards on the Search Off the Record podcast.

Martin Splitt and Gary Illyes explained how these standards are created and why they matter for SEO. Their conversation reveals details about Google’s decisions that affect how we optimize websites.

Why Some Web Protocols Become Standards While Others Don’t

Google has formally standardized robots.txt through the Internet Engineering Task Force (IETF). However, they left the sitemap protocol as an informal standard.

This difference illustrates how Google determines which protocols require official standards.

Illyes explained during the podcast:

“With robots.txt, there was a benefit because we knew that different parsers tend to parse robots.txt files differently… With sitemap, it’s like ‘eh’… it’s a simple XML file, and there’s not that much that can go wrong with it.”

This statement from Illyes reveals Google’s priorities. Protocols that different parsers handle inconsistently receive more attention than those that work well without formal standards.

The Benefits of Protocol Standardization for SEO

The standardization of robots.txt created several clear benefits for SEO:

  • Consistent implementation: Robots.txt files are now interpreted more consistently across search engines and crawlers.
  • Open-source resources: “It allowed us to open source our robots.txt parser and then people start building on it,” Illyes noted.
  • Easier to use: According to Illyes, standardization means “there’s less strain on site owners trying to figure out how to write the damned files.”

These benefits make technical SEO work more straightforward and more effective, especially for teams managing large websites.
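
For reference, here is a minimal robots.txt showing the kind of directives the formalized standard (RFC 9309) pins down. The paths are hypothetical and example.com is a placeholder:

    User-agent: *
    Disallow: /search/
    Allow: /search/about

    User-agent: Googlebot-Image
    Disallow: /photos/

    Sitemap: https://www.example.com/sitemap.xml

Before standardization, parsers disagreed on details like the Allow/Disallow overlap above; RFC 9309 resolves it with a longest-match rule, so /search/about remains crawlable here.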

Inside the Web Standards Process

The podcast also revealed how web standards are created.

Standards groups, such as the IETF, W3C, and WHATWG, work through open processes that often take years to complete. This slow pace ensures security, clear language, and broad compatibility.

Illyes explained:

“You have to show that the thing you are working on actually works. There’s tons of iteration going on and it makes the process very slow—but for a good reason.”

Both Google engineers emphasized that anyone can participate in these standards processes. This creates opportunities for SEO professionals to help shape the protocols they use on a daily basis.

Security Considerations in Web Standards

Standards also address important security concerns. When developing the robots.txt standard, Google included a 500-kilobyte limit specifically to prevent potential attacks.

Illyes explained:

“When I’m reading a draft, I would look at how I would exploit stuff that the standard is describing.”

This demonstrates how standards establish security boundaries that safeguard both websites and the tools that interact with them.
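
As a practical illustration (a minimal sketch, not Google’s tooling), the following Python snippet checks whether a site’s robots.txt exceeds the 500-kilobyte parsing limit; the origin URL is a placeholder:

    import urllib.request

    ROBOTS_SIZE_LIMIT = 500 * 1024  # the 500 KB limit; crawlers may drop rules past this point

    def robots_txt_size(origin: str) -> int:
        """Fetch a site's robots.txt and return its size in bytes."""
        with urllib.request.urlopen(f"{origin}/robots.txt") as response:
            return len(response.read())

    size = robots_txt_size("https://www.example.com")  # placeholder origin
    if size > ROBOTS_SIZE_LIMIT:
        print(f"robots.txt is {size} bytes; rules beyond the limit may be ignored by crawlers.")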

Why This Matters

For SEO professionals, these insights indicate several practical strategies to consider:

  • Be precise when creating robots.txt directives, since Google has invested heavily in this protocol.
  • Use Google’s open-source robots.txt parser to check your work (a quick alternative is sketched after this list).
  • Know that sitemaps offer more flexibility with fewer parsing concerns.
  • Consider joining web standards groups if you want to help shape future protocols.
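
Google’s open-source parser is a C++ library (github.com/google/robotstxt). For a quick sanity check without building it, Python’s standard-library parser is a reasonable stand-in, though its edge-case handling may not match Google’s exactly; the URLs are placeholders:

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")  # placeholder URL
    parser.read()  # fetch and parse the live file

    # Confirm a directive behaves the way you intended for a given crawler.
    print(parser.can_fetch("Googlebot", "https://www.example.com/search/about"))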

As search engines continue to prioritize technical quality, understanding the underlying principles behind web protocols becomes increasingly valuable for achieving SEO success.

This conversation shows that even simple technical specifications involve complex considerations around security, consistency, and ease of use, all factors that directly impact SEO performance.

Hear the full discussion on the Search Off the Record podcast.

AI Use Jumps to 78% Among Businesses As Costs Drop via @sejournal, @MattGSouthern

Stanford University’s latest AI Index Report reveals a significant increase in AI adoption among businesses.

Now 78% of organizations use AI, up from 55% a year ago. At the same time, the cost of using AI has dropped, becoming 280 times cheaper in less than two years.

More Businesses Than Ever Are Using AI

The latest report, now in its eighth year, shows a turning point for AI in business.

The number of organizations using generative AI in at least one business area more than doubled, from 33% in 2023 to 71% in 2024.

“Business is all in on AI, fueling record investment and usage,” the report states.

In 2024, U.S. companies invested $109.1 billion in AI, nearly 12 times more than China’s $9.3 billion and 24 times more than the U.K.’s $4.5 billion.

AI Costs Are Dropping

One reason more companies are using AI is that it’s becoming increasingly affordable. The report indicates that the cost of running AI queries has decreased significantly.

The report highlights:

“The cost of querying an AI model that performs like GPT-3.5 dropped from $20.00 per million tokens in November 2022 to just $0.07 per million tokens by October 2024.”

That works out to roughly 280 times cheaper ($20.00 ÷ $0.07 ≈ 286) in under two years.

Prices have dropped between 9 and 900 times per year, depending on the use case for AI. This makes powerful AI tools much more affordable for companies of all sizes.

Regional Differences and Business Impact

Different regions are adopting AI at different rates.

North America remains the leader, but Greater China has shown the most significant jump, with a 27-point increase in company AI use. Europe was next with a 23-point increase.

For marketing teams, AI is starting to show financial benefits. About 71% of companies using AI in marketing and sales report increased revenue, although most say the increase is less than 5%.

This suggests that while AI is helping, most companies are still figuring out how to use it best.

What This Means for Marketers & SEO Pros

These findings matter for several reasons:

  1. The drop in AI costs means powerful tools are getting more affordable, even for smaller teams.
  2. Companies report that AI boosts productivity and helps bridge skill gaps. This can enable you to accomplish more with limited resources.
  3. The report notes that “smaller models drive stronger performance.” Today’s models are 142 times smaller than the 2022 versions, so more AI tools can run on regular computers.

The 2025 AI Index Report makes it clear that AI is no longer an experimental technology; it’s a mainstream business tool. For marketers, the question isn’t whether to use AI, but how to use it effectively to stay ahead of competitors.

For more insights, see the full report.


Featured Image: kanlaya wanon/Shutterstock

OpenAI Expresses Interest In Buying Chrome Browser via @sejournal, @martinibuster

Nick Turley, Head of Product at ChatGPT, testified that OpenAI would be interested in acquiring the Chrome browser should a judge decide to break it off from Alphabet, Google’s parent company.

According to a report in Reuters:

“ChatGPT head of product Nick Turley made the statement while testifying at trial in Washington, where the U.S. Department of Justice seeks to require Google to undertake far-reaching measures [to] restore competition in online search.”

Perplexity Comes Out Against Chrome Divestiture

On Monday, Perplexity CEO Aravind Srinivas wrote a post on X (formerly Twitter) stating that he intends to testify in support of Google at the U.S. government’s antitrust trial.

Perplexity simultaneously published an article explaining that its position isn’t so much about supporting Google as about supporting the future of web browsers and a more open Android ecosystem. Srinivas argues these two things will preserve a high level of browser quality and create more opportunity and innovation on mobile devices, a win for both consumers and businesses.

The United States Department of Justice wants to split Chrome off from Google to curb a monopoly position that, it asserts, spans multiple industries and harms competition. Srinivas argues that separating Chrome from Google would have the opposite effect.

Srinivas laid out his two key concerns:

“1. Google should not be broken up. Chrome should remain within and continue to be run by Google. Google deserves a lot of credit for open-sourcing Chromium, which powers Microsoft’s Edge and will also power Perplexity’s Comet. Chrome has become the dominant browser due to incredible execution quality at the scale of billions of users.

2. Android should become more open to consumer choice. There shouldn’t be a tight coupling to the default apps set by Google, and the permission for OEMs to have the Play Store and Maps. Consumers should have the choice to pick who they want as a default search and default voice assistant, and OEMs should be able to offer consumers this choice without having to be blocked by Google on the ability to have the Play Store and other Google apps (Maps, YouTube).”

Takeaways

OpenAI Expresses Interest In Chrome Browser

  • Nick Turley, Head of Product at ChatGPT, stated OpenAI would be interested in purchasing Chrome if a court orders Google to divest it.
  • His statement was made during testimony in the U.S. Department of Justice’s antitrust trial against Google.

Perplexity AI’s Position Against Chrome Divestiture

  • Perplexity CEO Aravind Srinivas publicly opposed the idea of separating Chrome from Google.
  • He announced plans to testify in support of Google in the antitrust case.
  • Perplexity emphasized that their stance is focused on preserving innovation.

Call for a More Open Android Ecosystem

  • Srinivas advocated for a more open Android ecosystem.
  • He proposed that consumers should freely choose their default search engine and voice assistant.
  • He criticized Google’s practice of requiring OEMs to bundle Google services like the Play Store and Maps.
  • He urged regulators to focus on increasing consumer choice on Android rather than breaking up Chrome.

Featured Image by Shutterstock/Prathmesh T

DOJ’s Google Search Trial: What If Google Must Sell Chrome? via @sejournal, @MattGSouthern

The next phase of the DOJ’s antitrust case against Google started Monday. Both sides presented different views on the future of search and AI.

This follows Judge Amit Mehta’s ruling last year that Google illegally kept its dominance by making exclusive deals with device makers.

DOJ Wants Major Changes to Break Google’s Control

Assistant Attorney General Gail Slater made the government’s position clear:

“Each generation has called for the DOJ to challenge a behemoth that crushed competition. In the past, it was Standard Oil and AT&T. Today’s behemoth is Google.”

The Justice Department wants several changes, including:

  • Making Google sell the Chrome browser
  • Ending exclusive search deals with Apple and Samsung
  • Forcing Google to share search results with competitors
  • Limiting Google’s AI deals
  • Possibly selling off Android if other changes don’t work

DOJ attorney David Dahlquist stated that the court needs to look ahead to prevent Google from expanding its search power into AI. He revealed that Google pays Samsung a monthly sum to install Gemini AI on its devices.

Dahlquist said:

“Now is the time to tell Google and all other monopolists that there are consequences when you break the antitrust laws.”

Google Says These Ideas Would Hurt Innovation

Google disagrees with the DOJ’s plans. Attorney John Schmidtlein called them “a wishlist for competitors looking to get the benefits of Google’s extraordinary innovations.”

In a blog post before the trial, Google VP Lee-Anne Mulholland warned:

“DOJ’s proposal would also hamstring how we develop AI and have a government committee regulate our products. That would hold back American innovation when we’re in a race with China for technology leadership.”

Google also claims that sharing search data would risk user privacy. They say ending distribution deals would make devices more expensive and hurt companies like Mozilla.

Perplexity Suggests “Choice” as Better Solution

AI search startup Perplexity offers a middle-ground approach.

CEO Aravind Srinivas doesn’t support forcing Google to sell Chrome, posting:

“We don’t believe anyone else can run a browser at that scale without a hit on quality.”

Instead, Perplexity focuses on Android’s restrictive environment. In a blog post called “Choice is the Remedy,” the company argues:

“Google stays dominant by paying to force a subpar experience on consumers–not by building better products.”

Perplexity wants to separate Android from the requirements to include all Google apps. They also want to end penalties for carriers that offer alternatives.

AI Competition Takes Center Stage

The trial shows how important AI has become to search competition. OpenAI’s ChatGPT product head, Nick Turley, will testify Tuesday, highlighting how traditional search and AI are now connected.

The DOJ argues that Google’s search monopoly enhances its AI products, which then direct users back to Google search, creating a cycle that stifles competition.

What’s Next?

The trial is expected to last several weeks, with testimony from representatives of Mozilla, Verizon, and Apple. Google plans to appeal after the final judgment.

This case represents the most significant tech antitrust action since Microsoft in the late 1990s. It shows that both political parties are serious about addressing the market power of Big Tech. Slater noted that the case was “filed during President Trump’s first term and litigated across three administrations.”


Featured Image: Muhammad khoidir/Shutterstock

Google Ads 2024 Safety Report Unveils AI Protections via @sejournal, @brookeosmundson

Google has released its 2024 Ads Safety Report, and the message is clear: accountability is scaling fast thanks to AI.

With billions of ads removed and millions of accounts suspended, the report paints a picture of an advertising ecosystem under tighter scrutiny than ever.

For marketers, especially those managing significant media budgets, these shifts aren’t just background noise.

They directly impact strategy, spend efficiency, and brand safety. Here’s a closer look at the biggest takeaways and how marketers should respond.

A Record-Setting Year in Ad Removals and Account Suspensions

Google removed 5.1 billion ads in 2024, up slightly from the previous year.

The real eye-opener was the surge in account suspensions. Over 39 million advertiser accounts were shut down, more than triple the number from 2023.

That figure tells us two things:

  • Enforcement is no longer just about the ads themselves.
  • Google is focusing upstream, stopping abuse at the account level before it can scale.

In addition to individual ad removals, 9.1 billion ads were restricted (meaning they were limited in where and how they could serve). Google also took action on over 1.3 billion publisher pages and issued site-level enforcements across 220,000 sites in the ad network.

Whether you’re running Search, Display, or YouTube campaigns, this scale of enforcement can influence delivery, reach, and trust signals in subtle ways.

AI is Doing the Heavy Lifting

The scale of these removals wouldn’t be possible without automation. In 2024, Google leaned heavily on AI, introducing over 50 improvements to its large language models (LLMs) for ad safety.

One notable example: Google is now using AI to detect patterns in illegitimate payment information during account setup. This enables enforcement to occur before an ad even goes live.

And as concerns around deepfakes and impersonation scams continue to grow, Google formed a specialized team to target AI-generated fraud. They focused on content that mimicked public figures, brands, and voices.

The result? Over 700,000 advertiser accounts were permanently disabled under updated misrepresentation rules, and reports of impersonation scams dropped by 90%.

AI isn’t just a marketing tool anymore. It’s a core part of how ad platforms decide what gets to run.

A Shift in Ad Policy That Marketers Shouldn’t Overlook

One of the more under-the-radar updates was a policy change made in April 2025 to Google’s long-standing Unfair Advantage rules.

Previously, the policy prevented a single advertiser from having more than one ad appear in a given results-page auction. The update now allows the same brand to serve multiple ads on the same search page, as long as they appear in different placements.

This creates both opportunity and risk. Larger brands with multiple Google Ads accounts or aggressive agency strategies can now gain more real estate.

For smaller brands or advertisers with limited budgets, this may lead to increased competition for top spots and inflated CPCs.

Even though this change is meant to address transparency and competition, it could cause performance swings in high-intent keyword auctions.

It’s the kind of change that may not be immediately obvious in your dashboard but can quietly reshape performance over time.

What Advertisers Should Keep in Mind Moving Forward

Staying compliant isn’t just about avoiding policy violations.

It’s now about being proactive with AI and understanding how enforcement impacts delivery.

Here are a few ways to stay ahead:

1. Know your ad strength tools, but don’t rely on them blindly

AI is behind many of Google’s enforcement and performance scoring systems, including Ad Strength and Asset Diagnostics. These are helpful tools, but they’re not guarantees of policy compliance.

Always cross-check new ad formats or copy variants against the most recent policy updates.

2. Double-check account structures if you’re running multiple brands or regions

With the rise in multi-account suspensions, it’s more important than ever to document relationships between brands, resellers, and advertisers.

Google’s systems are increasingly adept at pattern recognition, and even unintentional overlap could flag your account.

3. Be careful with impersonation-style creative or influencer tie-ins

If you’re featuring people in ads (especially public figures), ensure that the usage rights are clear.

AI-generated content that resembles celebrities or influencers, even if satirical, could trip enforcement filters.

When in doubt, opt for original or clearly branded creative.

4. Review how recent policy changes could affect your real estate in search results

Marketers should test how often their brand appears on a single search page now that the Unfair Advantage update allows more flexibility.

Use tools like Ad Preview and multi-account diagnostics to understand if your visibility is shifting.

Wrapping It Up

Google’s latest Ads Safety Report is a reminder that digital advertising is becoming more regulated, more automated, and more tied to platform-defined trust.

Google’s tolerance for risk is dropping fast. And enforcement isn’t just about bad actors anymore. It’s about building an ecosystem where consumers trust what they see.

Marketers who pay attention to these shifts, stay flexible, and put transparency front and center will be in a stronger position. Those who assume “business as usual” are more likely to be caught off guard.

Don’t wait for a suspension notice to rethink your ads strategy.

Have you noticed any account changes as a result of Google’s ad safety updates?

AI Overviews Glitch May Hint at Google’s Algorithm via @sejournal, @martinibuster

A glitch in Google’s AI Overviews may inadvertently expose how Google’s algorithm understands search queries and chooses answers. Bugs in Google Search are useful to examine because they may expose parts of Google’s algorithms that are normally unseen.

AI-Splaining?

Lily Ray re-posted a tweet showing that typing a nonsense phrase into Google prompts AI Overviews to essentially make up an answer. She called it “AI-Splaining.”

User Darth Autocrat (Lyndon NA) responded:

“It shows how G have broken from ‘search’.

It’s not ‘finding relevant’ or ‘finding similar’, it’s literally making stuff up, which means G are not

a) A search engine
b) An answer engine
c) A recommendation engine

They are now

d) A potentially harmful joke”

Google has a long history of search bugs, but this one is different because an LLM is summarizing answers based on grounding data (the web, the knowledge graph, and so on) as well as on the LLM itself. So the search marketer known as Darth Autocrat has a point: this Google search bug is on an entirely different level than anything seen before.

Yet one thing remains the same: search bugs represent an opportunity to see something going on behind the search box that isn’t normally viewable.

AI Bug Is Not Limited To Google AIO

What I think is happening is that Google’s systems parse the words to understand what the user means. When a query is vague, the LLM decides what the user is asking based on several likely meanings, like a decision tree in machine learning: it maps out likely meanings, prunes the branches that are least likely, and predicts the most likely meaning.

I was reading a patent that Google recently filed that’s on a related theme, where an AI tries to guess what a user means by guiding a user through a decision tree and then storing that information for future interactions with them or with others. This patent, Real-Time Micro-Profile Generation Using a Dynamic Tree Structure, is for AI voice assistants, but it gives an idea of how an AI will try to guess what a user means and then proceed.
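
To make the decision-tree idea concrete, here is a toy Python sketch, not Google’s actual system; the candidate interpretations and their scores are invented for the example:

    query = "parallel puppy fishing technique for striped bass"

    # Invented interpretation scores; a real system would derive these from a model.
    candidates = {
        "walking the dog (a topwater retrieve)": 0.35,
        "casting parallel to the shoreline": 0.30,
        "a blend of the two tactics": 0.25,
        "no such technique exists": 0.10,
    }

    # Prune the less likely branches and keep the single best guess.
    best, score = max(candidates.items(), key=lambda item: item[1])

    if score < 0.5:
        print(f"Low confidence for {query!r}: ask a clarifying question instead of answering.")
    else:
        print(f"Answer as if the user meant: {best}")

An LLM that confidently answers from the top branch behaves like AIO and ChatGPT did here; one that notices the low confidence and asks a clarifying question behaves like Claude and Gemini.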

So I tested Google, ChatGPT and Claude and discovered that all three make similar mistakes of inferring what a user’s query means and confidently providing a wrong answer.

The sample question was:

What is the parallel puppy fishing technique for striped bass?

There is no such thing as a “parallel puppy fishing technique” but there is a technique called “walking the dog” and there is another technique where an angler in a kayak or a boat will cast parallel to the shore or some other structure.

AI Overviews (AIO)

AIO confidently offered a hallucinated response that is incorrect because it assumed the user was confusing multiple actual fishing tactics, so it blended several tactics and lure types to invent a non-existent technique.

AI Overviews gave the following wrong answer:

“The “parallel puppy” technique for striped bass involves a specific retrieve method for topwater plugs, often referred to as “walking the dog”. It’s characterized by a zig-zagging motion imparted to the lure by sweeping the rod side to side, often while maintaining it parallel to the water. This action mimics a wounded baitfish, which can be highly attractive to striped bass.”

Screenshot Of AIO’s Hallucinated Answer

ChatGPT 4o

ChatGPT made the same mistake that Google’s AIO did and hallucinated a complex fishing tactic.

ChatGPT gave the following wrong answer:

“The parallel puppy fishing technique for striped bass is a specialized casting and retrieval method often used when targeting striped bass in shallow water near shoreline structure like jetties, sod banks, or rocky points.

Here’s a breakdown of what it typically involves:

What it is:
“Parallel” refers to casting your lure or bait parallel to the shoreline, rather than out into deeper water.

“Puppy” is often a nickname for small soft plastics, swimbaits, or small-profile plugs, typically in the 3″ to 5″ range—mimicking juvenile baitfish”.

That last sentence is interesting because it’s completely wrong: no style of lure is nicknamed “puppy,” but there is a technique called “walking the dog.”

Screenshot Of ChatGPT’s Incorrect Answer

Anthropic Claude

Anthropic Claude, using the latest 3.7 Sonnet model, provided a correct answer. It said it didn’t recognize a “legitimate fishing technique” with that name, then proceeded on the presumption that the user wants to learn striped bass fishing tactics and offered a list of techniques from which the user could select a follow-up topic.

Screenshot Of Anthropic Claude’s Correct Answer

Google Gemini Pro 2.5

Lastly, I queried Google Gemini, using the latest Pro 2.5 model. Gemini also offered a correct answer, plus a decision-tree output that lets the user decide whether they are:

A. Misunderstanding fishing tactics

B. Referring to a highly localized tactic

C. Combining multiple fishing tactics

D. Confusing a tactic meant for another species of fish

Screenshot of Correct Gemini Pro 2.5 Answer

What’s interesting about that decision tree, which resembles the decision-tree approach in the unrelated Google patent, is that those possibilities roughly reflect what Google’s AI Overviews LLM and ChatGPT may have considered when trying to answer the question. Both may have worked through a decision tree, chosen option C, that the user is combining fishing tactics, and based their answers on that.

Claude and Gemini, by contrast, were confident enough to pick what amounts to an unlisted option E, that the user doesn’t know what they’re talking about, and resorted to a decision tree to guide the user toward the right question.

What Does This Mean About AI Overviews (AIO)?

Google recently announced it’s rolling out Gemini 2.0 for advanced math, coding, and multimodal queries, but the hallucinations in AIO suggest that the model Google is using to answer text queries may be inferior to Gemini 2.5.

That’s probably what is happening with gibberish queries and, as I said, it offers an interesting insight into how Google AIO actually works.

Featured Image by Shutterstock/Slladkaya

SEOFOMO Survey Shows How Ecommerce SEOs Use AI In 2025 via @sejournal, @martinibuster

Aleyda Solis’ SEOFOMO published a survey of ecommerce owners and SEOs that reveals a wide range of AI uses, reflecting both popular SEO tactics and novel ways to increase productivity. It also shows that a significant share of respondents have yet to fully adopt the technology because they are still figuring out where it fits in their workflow. Very few respondents said they were not considering AI at all.

The survey responses showed that there are five popular category uses for AI:

  1. Content
  2. Analysis & Research
  3. Technical SEO
  4. User Experience & Conversion Rate Optimization
  5. Generate Client Documentation, Education & Learning

Content Creation

The survey respondents used AI for important tasks like product listings and descriptions, as well as for scaling meta descriptions, titles, and alt text. Other uses included creating content outlines, grammar checks, and other assistive applications of AI.

But some also used it for blog content, landing pages, and generating FAQ content. There are no details on how extensively AI was used for blog content, but a case could be made against fully generating main content with AI (if that’s how some people are using it), given Google’s recent cautionary guidance about extensive use of AI for main content. Google’s Danny Sullivan cautioned against low-effort content lacking in originality at the recent Search Central NYC event.

Other reported uses of AI were grammar and clarity checks, which are excellent applications. Even here, care should be taken, because AI has a style that can get injected into the content even during something as simple as a grammar check.

Another interesting use of AI is revising content so that it matches a company’s “brand voice,” which involves checking word choices, tone, and even sentence structure.

Lastly, the ecommerce survey respondents reported using AI to brainstorm content ideas, another excellent application.

Analysis & Research

The part about keyword analysis is interesting because the report lists keyword research and clustering among the uses. Clustering keywords by similarity is a good practice because writing one page per keyword phrase about closely related things is repetitive and borders on spammy when one strong page covering the entire topic is enough.

Focusing on keywords for SEO has been around longer than Google, and Google itself has evolved from using keywords to understand content to also understanding queries and content as topics. This is seen in the fact that Google uses core topicality systems as part of its ranking algorithm. So it’s somewhat curious that topicality research wasn’t mentioned as one of the uses, unless keyword clustering is considered part of it. Nevertheless, data analysis is a great use of AI.
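
As a hedged illustration of what keyword clustering can look like in practice (one common lightweight approach, not necessarily what the respondents used), this Python sketch groups keywords using TF-IDF vectors and KMeans from scikit-learn; the keywords and cluster count are invented:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    keywords = [
        "running shoes for flat feet",
        "best shoes for flat feet",
        "flat feet running shoes",
        "trail running backpacks",
        "lightweight running backpacks",
    ]

    # Vectorize the keyword phrases, then group them into two clusters.
    vectors = TfidfVectorizer().fit_transform(keywords)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Each cluster is a candidate for one strong page covering the whole topic.
    for label, keyword in sorted(zip(labels, keywords)):
        print(label, keyword)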

Technical SEO

Technical SEO is a fantastic application of AI because it’s all about automating repetitive SEO tasks, but also about assisting with decisions about what to do. There are lots of ways to do this, including uploading a set of guidelines and/or charts and asking AI to analyze them for specific things. Apps like Screaming Frog allow integration with OpenAI, so it’s leaving money and time on the table not to investigate all the ways AI can integrate with your tools, as well as simply asking it to analyze data. See Screaming Frog’s tutorial: https://www.screamingfrog.co.uk/seo-spider/tutorials/how-to-crawl-with-chatgpt/

For example, one of the uses reported in the survey was for generating an internal linking strategy.
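
Here is a sketch of that kind of workflow: handing a few crawl-export rows to an LLM and asking for internal-link suggestions. It assumes the official openai Python package and an OPENAI_API_KEY in the environment; the model name and the crawl rows are placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder rows, e.g., exported from a site crawler.
    crawl_rows = """url,title,inlinks
    /striped-bass-lures,Striped Bass Lures,12
    /walking-the-dog-retrieve,Walking The Dog Retrieve,2
    /kayak-casting-tips,Kayak Casting Tips,1"""

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a technical SEO assistant."},
            {
                "role": "user",
                "content": f"Suggest internal links for the under-linked pages in this crawl export:\n{crawl_rows}",
            },
        ],
    )
    print(response.choices[0].message.content)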

User Experience (UX) & Conversion Rate Optimization (CRO)

Another way ecommerce store owners are using AI is for improving the user experience and CRO.

The survey reports:

  • “AI-powered product recommendations
  • Chatbots for product discovery or customer support
  • CRO/UX audits based on user behavior”

Training & Education

Lastly, an increasing number of the ecommerce respondents reported using AI for generating training documentation for internal use and for creating customer documentation.

The survey reports:

“Less common but growing:

  • Learning how AI tools function
  • Using AI to create training material or SEO learning resources”

Not Using AI Or Limited Use

What was surprising is the number of SEOs who are not using AI in a meaningful way. 31% of respondents said they are not using AI but plan to, 3% were digging in their heels and flatly refusing to use AI in any way, and an additional 4% answered that they weren’t sure.

That makes a combined 38% who aren’t using AI in any meaningful way. Looked at another way, 31% of respondents were getting ready to adopt AI into their workflow. Many managed WordPress hosting companies are integrating AI into their WordPress builder workflows, as are some WordPress site builders. AI can be integrated via WordPress SEO plugins as well. Wix has already integrated AI into its customer workflow through its proprietary Astro chatbot, and companies like Shopify are also planning meaningful and useful ways to integrate AI.

The SEOFOMO survey makes it clear that AI is a significant part of the SEO and ecommerce workflow. Those who don’t use AI shouldn’t feel they have to. But if you’re unsure how to integrate it, one way to think about it is to ask: what kinds of tasks would you hand off to an intern? Those are the kinds of tasks AI excels at, potentially letting one worker produce several times more than they could without it.

Read the SEOFOMO ecommerce survey results:

The SEOFOMO Ecommerce SEO in 2025 Survey Results

Featured Image by Shutterstock/tete_escape

Google Says LLMs.Txt Comparable To Keywords Meta Tag via @sejournal, @martinibuster

Google’s John Mueller answered a question about LLMs.txt, a proposed standard for showing website content to AI agents and crawlers. He downplayed its usefulness, comparing it to the useless keywords meta tag and confirming the experience of others who have used it.

LLMs.txt

LLMs.txt has been described as a robots.txt for large language models, but that’s 100% incorrect. The main purpose of robots.txt is to control how bots crawl a website. The LLMs.txt proposal is not about controlling bots; that would be superfluous because a standard for it already exists in robots.txt.

The proposal for LLMs.txt is generally about showing content to LLMs via a text file in markdown format so that they can consume just the main content of a web page, completely devoid of advertising and site navigation. Markdown is a human- and machine-readable format that indicates headings with the pound sign (#) and lists with the hyphen (-). LLMs.txt adds a few conventions on top of that, and that’s all it is.
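
Based on the proposal’s conventions, a minimal LLMs.txt might look like the sketch below; the site name, URLs, and descriptions are hypothetical:

    # Example Tackle Shop

    > An ecommerce store selling fishing tackle, with how-to guides for anglers.

    ## Guides

    - [Topwater techniques](https://example.com/guides/topwater.md): retrieves for striped bass
    - [Tackle care](https://example.com/guides/tackle-care.md): cleaning and storing gear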

What LLMs.txt is:

  • LLMs.txt is not a way to control AI bots.
  • LLMs.txt is a way to show the main content to AI bots.
  • LLMs.txt is just a proposal and not a widely used and accepted standard.

That last part is important because it relates to what Google’s John Mueller said:

LLMs.txt Is Comparable To Keywords Meta Tag

Someone started a discussion on Reddit about LLMs.txt to ask if anyone else shared their experience that the AI bots were not checking their LLMs.txt files.

They wrote:

“I’ve submitted to my blog’s root an LLM.txt file earlier this month, but I can’t see any impact yet on my crawl logs. Just curious to know if anyone had a tracking system in place, or just if you picked up on anything going on following the implementation.

If you haven’t implemented it yet, I am curious to hear your thoughts on that.”

One person in that discussion shared that they host over 20,000 domains and that no AI agents or bots are downloading the LLMs.txt files, only niche bots like one from BuiltWith is grabbing those files.

The commenter wrote:

“Currently host about 20k domains. Can confirm that no bots are really grabbing these apart from some niche user agents…”

John Mueller answered:

“AFAIK none of the AI services have said they’re using LLMs.TXT (and you can tell when you look at your server logs that they don’t even check for it). To me, it’s comparable to the keywords meta tag – this is what a site-owner claims their site is about … (Is the site really like that? well, you can check it. At that point, why not just check the site directly?)”

He’s right: none of the major AI services (Anthropic, OpenAI, and Google) have announced support for the proposed LLMs.txt standard. If none of them are actually using it, then what’s the point?

Mueller also raises the point that an LLMs.txt file is redundant: why use the markdown file if the original content (and structured data) has already been downloaded? A bot that uses LLMs.txt would have to check the other content anyway to make sure it’s not spam, so why bother?

Lastly, what’s to stop a publisher or SEO from showing one set of content in LLMs.txt to spam AI agents and another set of content for users and search engines? It’s too easy to generate spam this way, essentially cloaking for LLMs.

In that regard, it is very similar to the keywords meta tag, which no search engine uses because it would be too sketchy to trust a site’s claim that it’s really about those keywords. Search engines are better and more sophisticated nowadays at parsing content to understand what it’s about.

Read the Reddit discussion here:

LLM.txt – where are we at?

Featured Image by Shutterstock/Jemastock

Google Found Guilty of Illegal Ad Tech Monopoly in Court Ruling via @sejournal, @MattGSouthern

A federal judge has ruled that Google maintained illegal monopolies in the digital advertising technology market.

In a landmark case brought by the Department of Justice and 17 states, Google was found liable for antitrust violations.

Federal Court Finds Google Violated Sherman Act

U.S. District Judge Leonie Brinkema ruled that Google illegally monopolized two key markets in digital advertising:

  • The publisher ad server market
  • The ad exchange market

The 115-page ruling states Google violated Section 2 of the Sherman Antitrust Act by “willfully acquiring and maintaining monopoly power.”

It also found that Google unlawfully tied its publisher ad server (DFP) and ad exchange (AdX) together.

Judge Brinkema wrote in the ruling:

“Plaintiffs have proven that Google possesses monopoly power in the publisher ad server for open-web display advertising market. Google’s publisher ad server DFP has a durable and ‘predominant share of the market’ that is protected by high barriers both to entry and expansion.”

Google’s Dominant Market Position

The court found that Google controlled approximately 91% of the worldwide publisher ad server market for open-web display advertising from 2018 to 2022.

In the ad exchange market, Google’s AdX handled between 54% and 65% of total transactions, roughly nine times larger than its closest competitor.

The judge cited Google’s pricing power as evidence of its monopoly. Google maintained a 20% take rate for its ad exchange services for over a decade, despite competitors charging only 10%.

The ruling states:

“Google’s ability to maintain AdX’s 20% take rate under these market conditions is further direct evidence of the firm’s sustained and substantial power.”

Illegal Tying of Services Found

A key part of the ruling focused on Google’s practice of tying its publisher ad server (DFP) to its ad exchange (AdX).

The court determined that Google effectively forced publishers to use DFP if they wanted access to real-time bidding with AdWords advertisers, a crucial feature of AdX.

Judge Brinkema wrote, quoting internal Google communications:

“By tying DFP to AdX, Google took advantage of its ‘owning the platform, the exchange, and a huge network’ of advertising demand.”

This was compared to “Goldman or Citibank own[ing] the NYSE [i.e., the New York Stock Exchange].”

Case History & State Involvement

The Department of Justice initially filed this lawsuit in January 2023, with eight states. Nine more states later joined, bringing the total to 17 states challenging Google’s practices.

Michigan Attorney General Dana Nessel explained why states joined the case:

“The power that Google wields in the digital advertising space has had the effect of either pushing smaller companies out of the market or making them beholden to Google ads.”

Google has consistently denied wrongdoing. Dan Taylor, Vice President of Global Ads, stated that the DOJ’s lawsuit would “reverse years of innovation, harming the broader advertising sector.”

What This Means for Digital Marketers

This ruling has implications for the digital marketing world:

  1. For publishers: If Google must restructure its ad tech business, the decision could give publishers more control over ad inventory and potentially higher revenue shares.
  2. For advertisers: Changes to Google’s ad tech stack may lead to more transparent bidding and lower costs over time.
  3. For marketing agencies: Using a variety of ad tech providers may become more important as Google faces these challenges.

What’s Next?

Judge Brinkema has yet to decide on penalties for Google’s violations. Soon, the court will “set a briefing schedule and hearing date to determine the appropriate remedies.”

Possible penalties include forcing Google to sell parts of its ad tech business. This would dramatically change the digital advertising landscape.

This ruling signals that changes may be coming for marketers relying on Google’s integrated advertising system.

Google intends to appeal the decision, a move it announced from its newsroom account on X. The appeal could extend the legal battle for years.


Featured Image: sirtravelalot/Shutterstock