Google AI Max For Search Goes Global In Beta via @sejournal, @MattGSouthern

Google’s AI Max for Search campaigns is now available worldwide in beta across Google Ads, Google Ads Editor, Search Ads 360, and the Google Ads API.

AI Max packages Google’s AI features as a one-click suite inside Search campaigns. New built-in experiments allow you to test the impact with minimal setup.

Image Credit: Google

What’s New

One-Click Experiments

AI Max is positioned as a faster path to smarter optimization inside Search campaigns.

New one-click experiments are integrated in the campaign flow, so you can compare performance without rebuilding campaigns.

Availability spans all major surfaces, including the API for teams that automate workflows.

How The Built-In Experiments Work

AI Max experiments are run within the same Search campaign by splitting traffic between a control (with AI Max off) and a trial (with AI Max on).

Since the test doesn’t clone the campaign, you’ll avoid sync errors and can ramp up faster. Once the experiment ends, review the performance and decide whether to apply the change or discard it.

Controls You Can Tweak During A Test

By default, your experiment starts with Search term matching and Asset optimization enabled, but it’s easy to customize these settings.

You can choose to turn off Search term matching at the ad group level or disable Asset optimization at the campaign level if that better suits your goals.

For more control over your landing pages, consider using URL exclusions at the campaign level and URL inclusions at the ad group level.

Brand controls are also available for added flexibility: you can set brand inclusions or exclusions at the campaign level, and specify brand inclusions within ad groups.

The “locations of interest” feature at the ad group level offers more geographic targeting precision.

Reporting Surfaces

Results appear under Experiments with an expanded Experiment summary.

AI Max also adds transparency across reports. These include “AI Max” match-type indicators in Search terms and Keywords reports, plus combined views that show the matched term, headlines, and landing URLs.

Auto-Apply Option

If you want, you can set the experiment to auto-apply when results are favorable. Otherwise, apply manually from the Experiments table or enable AI Max from Campaign settings after the test concludes.

Setup Limits To Know

You can’t create an AI Max experiment via this flow if the campaign:

  • Has legacy features like text customization (old ACA), brand inclusions/exclusions, or ad-group location inclusion already configured
  • Targets the Display Network
  • Uses a Portfolio bid strategy
  • Uses Shared budgets

Coming Soon: Text Guidelines

Google is working on a feature that will provide text guidelines to help AI create brand-safe content that meets your business needs.

This will be available to more advertisers this fall for both AI Max and Performance Max. In the meantime, stick to your usual brand approvals and policy checks.

Getting Started

Google recommends checking out a best-practices guide and Think Week materials if you’re interested in getting started with AI Max.

If you’re already handling Search at scale, API support makes it easier to standardize experiments and compare results against your existing setup.
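
For teams scripting against the API, pulling experiment data into reporting comes down to a GAQL query. Below is a minimal sketch using the official google-ads Python client; the customer ID is a placeholder, and since Google hasn’t published AI Max-specific API fields in this announcement, it only lists generic experiment data:

    from google.ads.googleads.client import GoogleAdsClient

    # Assumes a standard google-ads.yaml with OAuth and developer-token config.
    client = GoogleAdsClient.load_from_storage("google-ads.yaml")
    ga_service = client.get_service("GoogleAdsService")

    # Generic experiment listing; swap in your own customer ID.
    query = """
        SELECT experiment.name, experiment.status, experiment.type
        FROM experiment
        ORDER BY experiment.name
    """

    for batch in ga_service.search_stream(customer_id="1234567890", query=query):
        for row in batch.results:
            print(row.experiment.name, row.experiment.status, row.experiment.type)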

Looking Ahead

Expect more controls around creative and safety as text guidelines roll out. Until then, low-lift experiments let you measure AI Max without committing your entire account.

Google Uses Infinite 301 Redirect Loops For Missing Documentation via @sejournal, @martinibuster

Google removed outdated structured data documentation, but instead of returning a 404 response, it chose to redirect the old URLs to a changelog that links back to those same URLs, creating an infinite loop between the two pages. That’s technically not a soft 404, but it’s an unusual use of a 301 redirect for a missing web page, and it’s not how SEOs typically handle missing pages and 404 server responses. Did Google make a mistake?

Google Removed Structured Data Documentation

Google quietly published a changelog note announcing it had removed obsolete structured data documentation. The removal was announced three months ago, in June, and the obsolete pages have now been taken down.

The missing pages are for the following structured data that is no longer supported:

  • Course info
  • Estimated salary
  • Learning video
  • Special announcement
  • Vehicle listing

Those pages are completely missing. Gone, and likely never coming back. The usual procedure in that kind of situation is to return a 404 Page Not Found server response. But that’s not what is happening.

Instead of a 404 response, Google is returning a 301 redirect back to the changelog. What makes this setup somewhat weird is that the changelog links back to the missing web page, which then redirects back to the changelog, creating an infinite loop between the two pages.

Screenshot Of Changelog

In the above screenshot, I’ve underlined in red the link to the Course Info structured data.

The words “course info” are a link to this URL:
https://developers.google.com/search/docs/appearance/structured-data/course-info

Which redirects right back to the changelog here:
https://developers.google.com/search/updates#september-2025

Which of course contains the links to the five URLs that no longer exist, essentially causing an infinite loop.
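
You can confirm the loop yourself with a few lines of Python (a rough sketch using the requests library; the URLs are the two quoted above):

    import requests
    from urllib.parse import urljoin

    removed = "https://developers.google.com/search/docs/appearance/structured-data/course-info"

    # Step 1: the removed page answers with a 301 pointing at the changelog.
    resp = requests.get(removed, allow_redirects=False)
    print(resp.status_code, resp.headers.get("Location"))

    # Step 2: fetch the redirect target and check whether it links back to
    # the removed URL (the changelog may use a relative href, so matching
    # on the path is the safer test).
    target = requests.get(urljoin(removed, resp.headers.get("Location", "")))
    if "/structured-data/course-info" in target.text:
        print("Loop confirmed: the redirect target links back to the removed URL.")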

It’s not a good user experience and it’s not good for crawlers. So the question is, why did Google do that? 

301 redirects are an option for pages that are missing, so Google is technically correct to use a 301 redirect. However, 301 redirects are generally used to point “to a more accurate URL” which generally means a redirect to a replacement page, one that serves the same or similar purpose.

Technically they didn’t create a soft 404. But the way they handled the missing pages creates a loop that sends crawlers back and forth between a missing web page and the changelog. It seems that it would have been a better user and crawler experience to instead link to the June 2025 blog post that explains why these structured data types are no longer supported rather than create an infinite loop.

I don’t think it’s anything most SEOs or publishers would do, so why does Google think it’s a good idea?

Featured Image by Shutterstock/Kues

Google Gemini Adds Audio File Uploads After Being Top User Request via @sejournal, @MattGSouthern

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Google Drops Search Console Reporting For Six Structured Data Types via @sejournal, @MattGSouthern

Google will stop reporting six deprecated structured data types in Search Console and remove them from the Rich Results Test and appearance filters.

  • Search Console and Rich Results Test stop reporting on deprecated structured data types.
  • Rankings are unaffected; you can keep the markup, it just won’t show rich results.
  • The API continues returning data through December.

Anthropic Agrees To $1.5B Settlement Over Pirated Books via @sejournal, @MattGSouthern

Anthropic agreed to a proposed $1.5 billion settlement in Bartz v. Anthropic over claims it downloaded pirated books to help train Claude.

If approved, plaintiffs’ counsel says it would be the largest U.S. copyright recovery to date. A preliminary approval hearing is set for today.

In June, Judge William Alsup held that training on lawfully obtained books can qualify as fair use, while copying and storing millions of pirated books is infringement. That order set the stage for settlement talks.

Settlement Details

The deal would pay about $3,000 per eligible title, with an estimated class size of roughly 500,000 books (500,000 × $3,000 ≈ $1.5 billion, matching the headline figure). Plaintiffs allege Anthropic pulled at least 7 million copies from the piracy sites Library Genesis and Pirate Library Mirror.

Justin Nelson, counsel for the authors, said:

“As best as we can tell, it’s the largest copyright recovery ever.”

How Payouts Would Work

According to the Authors Guild’s summary, the fund is paid in four tranches after court approvals: $300M soon after preliminary approval, $300M after final approval, then $450M at 12 months and $450M at 24 months, with interest accruing in escrow.

A final “Works List” is due October 10, which will drive a searchable database for claimants.

The Guild notes the agreement requires destruction of pirated copies and resolves only past conduct.

Why This Matters

If you rely on AI tools in content workflows, provenance now matters more. Expect more licensing deals and clearer disclosures from vendors about training data sources.

For publishers and creators, the per-work payout sets a reference point that may strengthen negotiating leverage in future licensing talks.

Looking Ahead

The judge will consider preliminary approval today. If granted, the notice process begins this fall and payments to rightsholders would follow final approval and claims processing, funded on the installment schedule above.


Featured Image: Tigarto/Shutterstock

Google Publishes Exact Gemini Usage Limits Across All Tiers via @sejournal, @MattGSouthern

Google has published exact usage limits for Gemini Apps across the free tier and paid Google AI plans, replacing earlier vague language with concrete numbers marketers can plan around.

The Help Center update covers daily caps for prompts, images, Deep Research, video generation, and context windows, and notes that you’ll see in-product notices when you’re close to a limit.

What’s New

Until recently, Google’s documentation used general phrasing about “limited access” without specifying amounts.

The Help Center page now lists per-tier allowances for Gemini 2.5 Pro prompts, image generation, Deep Research, and more. It also clarifies that practical caps can vary with prompt complexity, file sizes, and conversation length, and that limits may change over time.

Google’s Help Center states:

“Gemini Apps has usage limits designed to ensure an optimal experience for everyone… we may at times have to cap the number of prompts, conversations, and generated assets that you can have within a specific timeframe.”

Free vs. Paid Tiers

On the free experience, you can use Gemini 2.5 Pro for up to five prompts per day.

The page lists general access to 2.5 Flash and includes:

  • 100 images per day
  • 20 Audio Overviews per day
  • Five Deep Research reports per month (using 2.5 Flash)

Because overall app limits still apply, actual throughput depends on how long and complex your prompts are and how many files you attach.

Google AI Pro increases ceilings to:

  • 100 prompts per day on Gemini 2.5 Pro
  • 1,000 images per day
  • 20 Deep Research reports per day (using 2.5 Pro)

Google AI Ultra raises those to:

  • 500 prompts per day
  • 200 Deep Research reports per day
  • Deep Think, with 10 prompts per day at a 192,000-token context window, for more complex reasoning tasks

Context Windows and Advanced Features

Context windows differ by tier. The free tier lists a 32,000-token context size, while Pro and Ultra show 1 million tokens, which is helpful when you need longer conversations or to process large documents in one go.

Ultra’s Deep Think is separate from the 1M context and is capped at 192k tokens for its 10 daily prompts.

Video generation is currently in preview with model-specific limits. Pro shows up to three videos per day with Veo 3 Fast (preview), while Ultra lists up to five videos per day with Veo 3 (preview).

Google indicates some features receive priority or early access on paid plans.

Availability and Requirements

The Gemini app in Google AI Pro and Ultra is available in 150+ countries and territories for users 18 or older.

Upgrades are tied to select Google One paid plans for personal accounts, which consolidate billing with other premium Google services.

Why This Matters

Clear ceilings make it easier to scope deliverables and budgets.

If you produce a steady stream of social or ad creative, the image caps and prompt totals are practical planning inputs.

Teams doing competitive analysis or longer-form research can evaluate whether the free tier’s five Deep Research reports per month cover occasional needs or if Pro’s daily allotment, Ultra’s higher limit, and Deep Think are a better fit for heavier workloads.

The documentation also emphasizes that caps can vary with usage patterns, so it’s worth watching the in-app limit warnings on busy days.

Looking Ahead

Google notes that limits may evolve. If your workflows depend on specific daily counts or large context windows, it’s sensible to review the Help Center page periodically and adjust plans as features move from preview to general availability.


Featured Image: Evolf/Shutterstock

Google: Your Login Pages May Be Hurting Your SEO Performance via @sejournal, @MattGSouthern

Google’s Search Relations team says generic login pages can confuse indexing and hurt rankings.

When many private URLs all show the same bare login form, Google may treat them as duplicates and show the login page in search.

In a recent “Search Off the Record” episode, John Mueller and Martin Splitt explained how this happens and what to do about it.

Why It Happens

If different private URLs all load the same login screen, Google sees those URLs as the same page.

Mueller said on the podcast:

“If you have a very generic login page, we will see all of these URLs that show that login page that redirect to that login page as being duplicates… We’ll fold them together as duplicates and we’ll focus on indexing the login page because that’s kind of what you give us to index.”

That means people searching for your brand may land on a login page instead of helpful information.

“We regularly see Google services getting this wrong,” Mueller admitted, noting that with many teams, “you invariably run across situations like that.”

Search Console fixed this by sending logged-out visitors to a marketing page with a clear sign-in link, which gave Google indexable context.

Don’t Rely On robots.txt To Hide Private URLs

Blocking sensitive areas in robots.txt can still let those URLs appear in search with no snippet. That’s risky if the URLs expose usernames or email addresses.

Mueller warned:

“If someone does something like a site query for your site… Google and other search engines might be like, oh, I know about all of these URLs. I don’t have any information on what’s on there, but feel free to try them out essentially.”

If it’s private, avoid leaking details in the URL, and use noindex or a login redirect instead of robots.txt.

What To Do Instead

If content must stay private, serve a noindex on private endpoints or redirect requests to a dedicated login or marketing page.
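
As an illustration of that pattern, here’s a minimal Flask sketch: private endpoints redirect logged-out visitors to the login page and send an X-Robots-Tag noindex header, rather than relying on robots.txt. The route, login check, and login URL are all placeholders for your own setup:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    def is_logged_in() -> bool:
        # Placeholder: substitute your real session/auth check.
        return "session" in request.cookies

    @app.route("/account/<path:subpath>")
    def account_area(subpath):
        if not is_logged_in():
            # Logged-out visitors land on the login page rather than a
            # bare form served from every private URL.
            return redirect("/login", code=302)
        resp = app.make_response("...private account content...")
        # Keep private pages out of the index outright.
        resp.headers["X-Robots-Tag"] = "noindex"
        return resp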

Don’t load private text into the page and then hide it with JavaScript. Screen readers and crawlers may still access it.

If you want restricted pages indexed, use the paywall structured data. It allows Google to fetch the full content while understanding that regular visitors will hit an access wall.

Paywall structured data isn’t only for paid content, Mueller explains:

“It doesn’t have to be something that’s behind like a clear payment thing. It can just be something like a login or some other mechanism that basically limits the visibility of the content.”
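
For reference, Google’s documented paywalled-content pattern marks the page with isAccessibleForFree and points a CSS selector at the gated section. Here’s a minimal sketch that emits the JSON-LD; the headline and .paywall selector are placeholders:

    import json

    markup = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "Example gated article",  # placeholder
        "isAccessibleForFree": False,
        "hasPart": {
            "@type": "WebPageElement",
            "isAccessibleForFree": False,
            "cssSelector": ".paywall",  # placeholder for your gated section
        },
    }
    print(f'<script type="application/ld+json">{json.dumps(markup)}</script>')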

Lastly, add context to login experiences. Include a short description of the product or the section someone is trying to reach.

As Mueller advised:

“Put some information about what your service is on that login page.”

A Quick Test

Open an incognito window. While logged out, search for your brand or service and click the top results.

If you land on bare login pages with no context, you likely need updates. You can also search for known URL patterns from account areas to see what shows up.
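
If you’d rather script that second check, a rough sketch like this (Python with requests; the URLs are placeholders for your own account-area patterns) flags pages that resolve to bare login forms:

    import requests

    # Placeholders: swap in known URL patterns from your account areas.
    urls = [
        "https://example.com/account/settings",
        "https://example.com/account/billing",
    ]

    for url in urls:
        resp = requests.get(url, timeout=10)
        # Crude heuristic: a final URL on /login serving a tiny page
        # suggests a bare, context-free login form.
        if "/login" in resp.url and len(resp.text) < 5000:
            print(f"Check {url}: lands on {resp.url} ({len(resp.text)} bytes)")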

Looking Ahead

As more businesses use subscriptions and gated experiences, access design affects SEO.

Use clear patterns (noindex, proper redirects, and paywall structured data where needed) and make sure public entry points provide enough context to rank for the right queries.

Small changes to login pages and redirects can prevent duplicate grouping and improve how your site appears in search.


Featured Image: Roman Samborskyi/Shutterstock

AI Search Sends Users to 404 Pages Nearly 3X More Than Google via @sejournal, @MattGSouthern

New research examining 16 million URLs aligns with Google’s predictions that hallucinated links will become an issue across AI platforms.

An Ahrefs study shows that AI assistants send users to broken web pages nearly three times more often than Google Search.

The data arrives six months after Google’s John Mueller raised awareness about this issue.

ChatGPT Leads In URL Hallucination Rates

ChatGPT creates the most fake URLs among the AI assistants tested. The study found that 1% of the ChatGPT URLs people clicked led to 404 pages, while Google’s rate is just 0.15%.

The problem gets worse when looking at all URLs ChatGPT mentions, not just clicked ones. Here, 2.38% lead to error pages. Compare this to Google’s top search results, where only 0.84% are broken links.

Claude came in second with 0.58% broken links for clicked URLs. Copilot had 0.34%, Perplexity 0.31%, and Gemini 0.21%. Mistral had the best rate at 0.12%, but it also sends the least traffic to websites.

Why Does This Happen?

The research found two main reasons why AI creates fake links.

First, some URLs used to exist but don’t anymore. When AI relies on old information instead of searching the web in real-time, it might suggest pages that have been deleted or moved.

Second, AI sometimes invents URLs that sound right but never existed.

Ryan Law from Ahrefs shared examples from their own site. AI assistants created fake URLs like “/blog/internal-links/” and “/blog/newsletter/” because these sound like pages Ahrefs might have. But they don’t actually exist.

Limited Impact on Overall Traffic

The problem may seem significant, but most websites won’t notice much impact. AI assistants only bring in about 0.25% of website traffic. Google, by comparison, drives 39.35% of traffic.

This means fake URLs affect a tiny portion of an already small traffic source. Still, the issue might grow as more people use AI for research and information.

The study also found that 74% of new web pages contain AI-generated content. When this content includes fake links, web crawlers might index them, spreading the problem further.

Mueller’s Prediction Proves Accurate

These findings match what Google’s John Mueller predicted in March. He forecasted a “slight uptick of these hallucinated links being clicked” over the next 6-12 months.

Mueller suggested focusing on better 404 pages rather than chasing accidental traffic.

His advice to collect data before making big changes looks smart now, given the small traffic impact Ahrefs found.

Mueller also predicted the problem would fade as AI services improve how they handle URLs. Time will tell if he’s right about this, too.

Looking Forward

For now, most websites should focus on two things. Create helpful 404 pages for users who hit broken links. Then, set up redirects only for fake URLs that get meaningful traffic.

This allows you to handle the problem without overreacting to what remains a minor issue for most sites.
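
To find which fake URLs actually get traffic, you can scan access logs for 404s arriving from AI-assistant referrers. Here’s a rough sketch, assuming combined log format; the referrer hostnames are illustrative, not an exhaustive list:

    import re
    from collections import Counter

    # Illustrative AI-assistant referrer hostnames; extend as needed.
    AI_REFERRERS = ("chatgpt.com", "perplexity.ai", "gemini.google.com")

    # Combined log format: "GET /path HTTP/1.1" status size "referrer" "agent"
    LINE = re.compile(
        r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3}) \S+ "(?P<ref>[^"]*)"'
    )

    hits = Counter()
    with open("access.log") as log:
        for line in log:
            m = LINE.search(line)
            if m and m.group("status") == "404" and any(
                host in m.group("ref") for host in AI_REFERRERS
            ):
                hits[m.group("path")] += 1

    # Only paths with meaningful traffic are worth a redirect.
    for path, count in hits.most_common(20):
        print(count, path)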

Google Antitrust Case: AI Overviews Use FastSearch, Not Links via @sejournal, @martinibuster

A sharp-eyed search marketer discovered the reason why Google’s AI Overviews showed spammy web pages. The recent Memorandum Opinion in the Google antitrust case features a passage that offers a clue as to why that happened, and it has prompted speculation that this reflects Google’s move away from links as a prominent ranking factor.

Ryan Jones, founder of SERPrecon (LinkedIn profile), called attention to a passage in the recent Memorandum Opinion that shows how Google grounds its Gemini models.

Grounding Generative AI Answers

The passage occurs in a section about grounding answers with search data. Ordinarily, it’s fair to assume that links play a role in ranking the web pages an AI model retrieves when it sends a search query to an internal search engine. So when someone asks Google’s AI Overviews a question, the system queries Google Search and then creates a summary from those search results.

But apparently, that’s not how it works at Google. Google has a separate algorithm that retrieves fewer web documents and does so at a faster rate.

The passage reads:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. Rem. Tr. at 3509:23–3511:4 (Reid). FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. Id. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Ryan Jones shared these insights:

“This is interesting and confirms both what many of us thought and what we were seeing in early tests. What does it mean? It means for grounding Google doesn’t use the same search algorithm. They need it to be faster but they also don’t care about as many signals. They just need text that backs up what they’re saying.

…There’s probably a bunch of spam and quality signals that don’t get computed for fastsearch either. That would explain how/why in early versions we saw some spammy sites and even penalized sites showing up in AI overviews.”

He goes on to share his opinion that links aren’t playing a role here because the grounding uses semantic relevance.

What Is FastSearch?

Elsewhere the Memorandum shares that FastSearch generates limited search results:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

Now the question is, what’s the RankEmbed model?

The Memorandum explains that RankEmbed is a deep-learning model. In simple terms, a deep-learning model identifies patterns in massive datasets and can, for example, identify semantic meanings and relationships. It does not understand anything in the same way that a human does; it is essentially identifying patterns and correlations.

The Memorandum has a passage that explains:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets. …(Allan)

…Google has developed various “top-level” signals that are inputs to producing the final score for a web page. Id. at 2793:5–2794:9 (Allan) (discussing RDXD-20.018). Among Google’s top-level signals are those measuring a web page’s quality and popularity. Id.; RDX0041 at -001.

Signals developed through deep-learning models, like RankEmbed, also are among Google’s top-level signals.”

User-Side Data

RankEmbed uses “user-side” data. The Memorandum, in a section about the kind of data Google should provide to competitors, describes RankEmbed (which FastSearch is based on) in this manner:

“User-side Data used to train, build, or operate the RankEmbed model(s); “

Elsewhere it shares:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: _____% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

Then:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms. PXR0171 at -086 (“Embedding based retrieval is effective at semantic matching of docs and queries”);

…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…RankEmbed particularly helped Google improve its answers to long-tail queries.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.

…The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.

…RankEmbedBERT needs to be retrained to reflect fresh data…”

A New Perspective On AI Search

Is it true that links do not play a role in selecting web pages for AI Overviews? Google’s FastSearch prioritizes speed. Ryan Jones theorizes that it could mean Google uses multiple indexes, with one specific to FastSearch made up of sites that tend to get visits. That may be a reflection of the RankEmbed part of FastSearch, which is said to be a combination of “click-and-query data” and human rater data.

Regarding human rater data: with billions or trillions of pages in an index, it would be impossible for raters to manually rate more than a tiny fraction. So it follows that the human rater data is used to provide quality-labeled examples for training. Labeled data are examples a model is trained on so that the patterns that distinguish high-quality pages from low-quality pages become more apparent.
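
For readers unfamiliar with the term, embedding-based retrieval means matching queries to documents by vector similarity instead of exact keyword overlap. The toy sketch below shows only the mechanics (embed, score, take the top-k); the stand-in encoder here is random, whereas a trained encoder is exactly the part that makes the matching semantic. It is purely illustrative and not Google’s RankEmbed:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in encoder: a real system uses a trained neural model that
        # places semantically similar text near each other in vector space.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    docs = [
        "how to fix a 404 page",
        "structured data for courses",
        "best pizza dough recipe",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    query_vec = embed("repairing broken links")
    scores = doc_vecs @ query_vec            # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:2]       # top-k candidates for grounding
    for i in top:
        print(round(float(scores[i]), 3), docs[i])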

Featured Image by Shutterstock/Cookie Studio

Google Quietly Raised Ad Prices, Court Orders More Transparency via @sejournal, @MattGSouthern

Google raised ad prices incrementally through internal “pricing knobs” that advertisers couldn’t detect, according to federal court documents.

  • Google raised ad prices 5-15% at a time using “pricing knobs” that made increases look like normal auction fluctuations.
  • Google’s surveys showed advertisers noticed higher costs but didn’t realize Google was causing the increases.
  • A federal judge now requires Google to publicly disclose auction changes that could raise advertiser costs.