Google Gemini Adds Audio File Uploads After Being Top User Request via @sejournal, @MattGSouthern

Google’s Gemini app now accepts audio file uploads, answering what the company acknowledges was its most requested feature.

For marketers and content teams, it means you can push recordings straight into Gemini for analysis, summaries, and repurposed content without jumping between tools.

Josh Woodward, VP at Google Labs and Gemini, announced the change on X:

“You can now upload any file to @GeminiApp. Including the #1 request: audio files are now supported!”

What’s New

Gemini can now ingest audio files in the same multi-file workflow you already use for documents and images.

You can attach up to 10 files per prompt, and files inside ZIP archives are supported, which helps when you want to upload raw tracks or several interview takes together.

Limits

  • Free plan: total audio length up to 10 minutes per prompt; up to 5 prompts per day.
  • AI Pro and AI Ultra: total audio length up to 3 hours per prompt.
  • Per prompt: up to 10 files across supported formats. Details are listed in Google’s Help Center.

Why This Matters

If your team works with podcasts, webinars, interviews, or customer calls, this closes a gap that often forced a separate transcription step.

You can upload a full interview and turn it into show notes, pull quotes, or a working draft in one place. It also helps meeting-heavy teams: a recorded strategy session can become action items and a brief without exporting to another tool first.

For agencies and networks, batching multiple episodes or takes into one prompt reduces friction in weekly workflows.

The practical win is fewer handoffs: source audio goes in, and the outlines, summaries, and excerpts you need come out, all inside the same system you already use for text prompting.

Quick Tip

Upload your audio together with any supporting context in the same prompt. That gives Gemini the grounding it needs to produce cleaner summaries and more accurate excerpts.

If you’re testing on the free tier, plan around the 10-minute ceiling; longer content is best on AI Pro or Ultra.

Looking Ahead

Google’s limits pages do change, so keep an eye on total length, file-count rules, and any new guardrails that affect longer recordings or larger teams. Also watch for deeper Workspace tie-ins (for example, easier handoffs from Meet recordings) that would streamline getting audio into Gemini without manual uploads.


Featured Image: Photo Agency/Shutterstock

Google Drops Search Console Reporting For Six Structured Data Types via @sejournal, @MattGSouthern

Google will stop reporting six deprecated structured data types in Search Console and remove them from the Rich Results Test and appearance filters.

  • Search Console and Rich Results Test stop reporting on deprecated structured data types.
  • Rankings are unaffected; you can keep the markup, but it will no longer generate rich results.
  • The API will continue returning data through December.
Anthropic Agrees To $1.5B Settlement Over Pirated Books via @sejournal, @MattGSouthern

Anthropic agreed to a proposed $1.5 billion settlement in Bartz v. Anthropic over claims it downloaded pirated books to help train Claude.

If approved, plaintiffs’ counsel says it would be the largest U.S. copyright recovery to date. A preliminary approval hearing is set for today.

In June, Judge William Alsup held that training on lawfully obtained books can qualify as fair use, while copying and storing millions of pirated books is infringement. That order set the stage for settlement talks.

Settlement Details

The deal would pay about $3,000 per eligible title, with an estimated class size of roughly 500,000 books. Plaintiffs allege Anthropic pulled at least 7 million copies from piracy sites Library Genesis and Pirate Library Mirror.

Justin Nelson, counsel for the authors, said:

“As best as we can tell, it’s the largest copyright recovery ever.”

How Payouts Would Work

According to the Authors Guild’s summary, the fund is paid in four tranches after court approvals: $300M soon after preliminary approval, $300M after final approval, then $450M at 12 months and $450M at 24 months, with interest accruing in escrow.

A final “Works List” is due October 10, which will drive a searchable database for claimants.

The Guild notes the agreement requires destruction of pirated copies and resolves only past conduct.

Why This Matters

If you rely on AI tools in content workflows, provenance now matters more. Expect more licensing deals and clearer disclosures from vendors about training data sources.

For publishers and creators, the per-work payout sets a reference point that may strengthen negotiating leverage in future licensing talks.

Looking Ahead

The judge will consider preliminary approval today. If granted, the notice process begins this fall and payments to rightsholders would follow final approval and claims processing, funded on the installment schedule above.


Featured Image: Tigarto/Shutterstock

Google Publishes Exact Gemini Usage Limits Across All Tiers via @sejournal, @MattGSouthern

Google has published exact usage limits for Gemini Apps across the free tier and paid Google AI plans, replacing earlier vague language with concrete numbers marketers can plan around.

The Help Center update covers daily caps for prompts, images, Deep Research, video generation, and context windows, and notes that you’ll see in-product notices when you’re close to a limit.

What’s New

Until recently, Google’s documentation used general phrasing about “limited access” without specifying amounts.

The Help Center page now lists per-tier allowances for Gemini 2.5 Pro prompts, image generation, Deep Research, and more. It also clarifies that practical caps can vary with prompt complexity, file sizes, and conversation length, and that limits may change over time.

Google’s Help Center states:

“Gemini Apps has usage limits designed to ensure an optimal experience for everyone… we may at times have to cap the number of prompts, conversations, and generated assets that you can have within a specific timeframe.”

Free vs. Paid Tiers

On the free experience, you can use Gemini 2.5 Pro for up to five prompts per day.

The page lists general access to 2.5 Flash and includes:

  • 100 images per day
  • 20 Audio Overviews per day
  • Five Deep Research reports per month (using 2.5 Flash).

Because overall app limits still apply, actual throughput depends on how long and complex your prompts are and how many files you attach.

Google AI Pro increases ceilings to:

  • 100 prompts per day on Gemini 2.5 Pro
  • 1,000 images per day
  • 20 Deep Research reports per day (using 2.5 Pro).

Google AI Ultra raises those to:

  • 500 prompts per day
  • 200 Deep Research reports per day
  • Includes Deep Think with 10 prompts per day at a 192,000-token context window for more complex reasoning tasks.

Context Windows and Advanced Features

Context windows differ by tier. The free tier lists a 32,000-token context size, while Pro and Ultra show 1 million tokens, which is helpful when you need longer conversations or to process large documents in one go.

Ultra’s Deep Think is separate from the 1M context and is capped at 192k tokens for its 10 daily prompts.

Video generation is currently in preview with model-specific limits. Pro shows up to three videos per day with Veo 3 Fast (preview), while Ultra lists up to five videos per day with Veo 3 (preview).

Google indicates some features receive priority or early access on paid plans.

Availability and Requirements

The Gemini app in Google AI Pro and Ultra is available in 150+ countries and territories for users 18 or older.

Upgrades are tied to select Google One paid plans for personal accounts, which consolidate billing with other premium Google services.

Why This Matters

Clear ceilings make it easier to scope deliverables and budgets.

If you produce a steady stream of social or ad creative, the image caps and prompt totals are practical planning inputs.

Teams doing competitive analysis or longer-form research can evaluate whether the free tier’s five Deep Research reports per month cover occasional needs or if Pro’s daily allotment, Ultra’s higher limit, and Deep Think are a better fit for heavier workloads.

The documentation also emphasizes that caps can vary with usage patterns, so it’s worth watching the in-app limit warnings on busy days.

Looking Ahead

Google notes that limits may evolve. If your workflows depend on specific daily counts or large context windows, it’s sensible to review the Help Center page periodically and adjust plans as features move from preview to general availability.


Featured Image: Evolf/Shutterstock

Google: Your Login Pages May Be Hurting Your SEO Performance via @sejournal, @MattGSouthern

Google’s Search Relations team says generic login pages can confuse indexing and hurt rankings.

When many private URLs all show the same bare login form, Google may treat them as duplicates and show the login page in search.

In a recent “Search Off the Record” episode, John Mueller and Martin Splitt explained how this happens and what to do about it.

Why It Happens

If different private URLs all load the same login screen, Google sees those URLs as the same page.

Mueller said on the podcast:

“If you have a very generic login page, we will see all of these URLs that show that login page that redirect to that login page as being duplicates… We’ll fold them together as duplicates and we’ll focus on indexing the login page because that’s kind of what you give us to index.”

That means people searching for your brand may land on a login page instead of helpful information.

“We regularly see Google services getting this wrong,” Mueller admitted, noting that with many teams, “you invariably run across situations like that.”

Search Console fixed this by sending logged-out visitors to a marketing page with a clear sign-in link, which gave Google indexable context.

Don’t Rely On robots.txt To Hide Private URLs

Blocking sensitive areas in robots.txt can still let those URLs appear in search with no snippet. That’s risky if the URLs expose usernames or email addresses.

Mueller warned:

“If someone does something like a site query for your site… Google and other search engines might be like, oh, I know about all of these URLs. I don’t have any information on what’s on there, but feel free to try them out essentially.”

If it’s private, avoid leaking details in the URL, and use noindex or a login redirect instead of robots.txt.

What To Do Instead

If content must stay private, serve a noindex on private endpoints or redirect requests to a dedicated login or marketing page.
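
As a rough illustration, here’s a minimal sketch of both options using Express with TypeScript; the route paths, the isLoggedIn check, and the /welcome marketing page are hypothetical stand-ins, not anything Google prescribes.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical auth check; replace with your real session logic.
function isLoggedIn(req: Request): boolean {
  return Boolean(req.headers["x-session-token"]);
}

// Option 1: keep private endpoints out of the index with a noindex header.
app.use("/account", (_req: Request, res: Response, next: NextFunction) => {
  res.setHeader("X-Robots-Tag", "noindex");
  next();
});

// Option 2: send logged-out visitors to a marketing page that describes
// the service and links to sign-in, so Google has real context to index.
app.use("/account", (req: Request, res: Response, next: NextFunction) => {
  if (!isLoggedIn(req)) {
    return res.redirect(302, "/welcome");
  }
  next();
});

app.get("/welcome", (_req, res) => {
  res.send("A short description of the product, plus a clear sign-in link.");
});

app.listen(3000);
```

The redirect approach mirrors what the Search Console team did: logged-out visitors land on a page that explains the product and links to sign-in, which gives Google something indexable.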

Don’t load private text into the page and then hide it with JavaScript. Screen readers and crawlers may still access it.

If you want restricted pages indexed, use the paywall structured data. It allows Google to fetch the full content while understanding that regular visitors will hit an access wall.

Paywall structured data isn’t only for paid content, Mueller explains:

“It doesn’t have to be something that’s behind like a clear payment thing. It can just be something like a login or some other mechanism that basically limits the visibility of the content.”
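
As a reference point, here’s a minimal sketch of that markup pattern, built in TypeScript for a browser context; the article details and the .paywall selector are hypothetical, and the fields follow the documented isAccessibleForFree/hasPart structure.

```typescript
// Sketch: inject paywalled-content structured data into the page.
// Assumes the gated text lives inside an element with the class "paywall".
const paywallSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example gated article",  // hypothetical
  "isAccessibleForFree": false,         // content sits behind an access wall
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": false,
    "cssSelector": ".paywall",          // hypothetical selector for the gated section
  },
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(paywallSchema);
document.head.appendChild(script);
```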

Lastly, add context to login experiences. Include a short description of the product or the section someone is trying to reach.

As Mueller advised:

“Put some information about what your service is on that login page.”

A Quick Test

Open an incognito window. While logged out, search for your brand or service and click the top results.

If you land on bare login pages with no context, you likely need updates. You can also search for known URL patterns from account areas to see what shows up.

Looking Ahead

As more businesses use subscriptions and gated experiences, access design affects SEO.

Use clear patterns (noindex, proper redirects, and paywalled markup where needed) and make sure public entry points provide enough context to rank for the right queries.

Small changes to login pages and redirects can prevent duplicate grouping and improve how your site appears in search.


Featured Image: Roman Samborskyi/Shutterstock

AI Search Sends Users to 404 Pages Nearly 3X More Than Google via @sejournal, @MattGSouthern

New research examining 16 million URLs aligns with Google’s predictions that hallucinated links will become an issue across AI platforms.

An Ahrefs study shows that AI assistants send users to broken web pages nearly three times more often than Google Search.

The data arrives six months after Google’s John Mueller raised awareness about this issue.

ChatGPT Leads In URL Hallucination Rates

ChatGPT creates the most fake URLs among all AI assistants tested. The study found that 1% of URLs people clicked led to 404 pages. Google’s rate is just 0.15%.

The problem gets worse when looking at all URLs ChatGPT mentions, not just clicked ones. Here, 2.38% lead to error pages. Compare this to Google’s top search results, where only 0.84% are broken links.

Claude came in second with 0.58% broken links for clicked URLs. Copilot had 0.34%, Perplexity 0.31%, and Gemini 0.21%. Mistral had the best rate at 0.12%, but it also sends the least traffic to websites.

Why Does This Happen?

The research found two main reasons why AI creates fake links.

First, some URLs used to exist but don’t anymore. When AI relies on old information instead of searching the web in real-time, it might suggest pages that have been deleted or moved.

Second, AI sometimes invents URLs that sound right but never existed.

Ryan Law from Ahrefs shared examples from their own site. AI assistants created fake URLs like “/blog/internal-links/” and “/blog/newsletter/” because these sound like pages Ahrefs might have. But they don’t actually exist.

Limited Impact on Overall Traffic

The problem may seem significant, but most websites won’t notice much impact. AI assistants only bring in about 0.25% of website traffic. Google, by comparison, drives 39.35% of traffic.

This means fake URLs affect a tiny portion of an already small traffic source. Still, the issue might grow as more people use AI for research and information.

The study also found that 74% of new web pages contain AI-generated content. When this content includes fake links, web crawlers might index them, spreading the problem further.

Mueller’s Prediction Proves Accurate

These findings match what Google’s John Mueller predicted in March. He forecasted a “slight uptick of these hallucinated links being clicked” over the next 6-12 months.

Mueller suggested focusing on better 404 pages rather than chasing accidental traffic.

His advice to collect data before making big changes looks smart now, given the small traffic impact Ahrefs found.

Mueller also predicted the problem would fade as AI services improve how they handle URLs. Time will tell if he’s right about this, too.

Looking Forward

For now, most websites should focus on two things. Create helpful 404 pages for users who hit broken links. Then, set up redirects only for fake URLs that get meaningful traffic.

This allows you to handle the problem without overreacting to what remains a minor issue for most sites.
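
As a sketch of that second step, here’s a minimal Express/TypeScript example that 301-redirects known hallucinated paths and lets everything else fall through to a helpful 404 page; the path-to-target mapping is hypothetical, modeled on the Ahrefs examples above.

```typescript
import express from "express";

const app = express();

// Hypothetical map of hallucinated URLs (seen getting real clicks in analytics)
// to the closest genuine page. Only add entries that receive meaningful traffic.
const hallucinatedRedirects: Record<string, string> = {
  "/blog/internal-links/": "/blog/internal-links-guide/", // hypothetical target
  "/blog/newsletter/": "/newsletter/",                    // hypothetical target
};

app.use((req, res, next) => {
  const target = hallucinatedRedirects[req.path];
  if (target) {
    return res.redirect(301, target); // permanent redirect to the real page
  }
  next();
});

// Everything else falls through to a helpful 404 page.
app.use((_req, res) => {
  res.status(404).send("Page not found. Try the blog index or site search.");
});

app.listen(3000);
```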

Google Antitrust Case: AI Overviews Use FastSearch, Not Links via @sejournal, @martinibuster

A sharp-eyed search marketer discovered why Google’s AI Overviews showed spammy web pages. A passage in the recent Memorandum Opinion in the Google antitrust case offers a clue as to how that happened and prompts speculation that it reflects Google’s move away from links as a prominent ranking factor.

Ryan Jones, founder of SERPrecon (LinkedIn profile), called attention to a passage in the recent Memorandum Opinion that shows how Google grounds its Gemini models.

Grounding Generative AI Answers

The passage occurs in a section about grounding answers with search data. Ordinarily, it’s fair to assume that links play a role in ranking the web pages that an AI model retrieves from a search query to an internal search engine. So when someone asks Google’s AI Overviews a question, the system queries Google Search and then creates a summary from those search results.

But apparently, that’s not how it works at Google. Google has a separate algorithm that retrieves fewer web documents and does so at a faster rate.

The passage reads:

“To ground its Gemini models, Google uses a proprietary technology called FastSearch. Rem. Tr. at 3509:23–3511:4 (Reid). FastSearch is based on RankEmbed signals—a set of search ranking signals—and generates abbreviated, ranked web results that a model can use to produce a grounded response. Id. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Ryan Jones shared these insights:

“This is interesting and confirms both what many of us thought and what we were seeing in early tests. What does it mean? It means for grounding Google doesn’t use the same search algorithm. They need it to be faster but they also don’t care about as many signals. They just need text that backs up what they’re saying.

…There’s probably a bunch of spam and quality signals that don’t get computed for fastsearch either. That would explain how/why in early versions we saw some spammy sites and even penalized sites showing up in AI overviews.”

He goes on to share his opinion that links aren’t playing a role here because the grounding uses semantic relevance.

What Is FastSearch?

Elsewhere the Memorandum shares that FastSearch generates limited search results:

“FastSearch is a technology that rapidly generates limited organic search results for certain use cases, such as grounding of LLMs, and is derived primarily from the RankEmbed model.”

Now the question is, what’s the RankEmbed model?

The Memorandum explains that RankEmbed is a deep-learning model. In simple terms, a deep-learning model identifies patterns in massive datasets and can, for example, identify semantic meanings and relationships. It does not understand anything in the same way that a human does; it is essentially identifying patterns and correlations.

The Memorandum has a passage that explains:

“At the other end of the spectrum are innovative deep-learning models, which are machine-learning models that discern complex patterns in large datasets. …(Allan)

…Google has developed various “top-level” signals that are inputs to producing the final score for a web page. Id. at 2793:5–2794:9 (Allan) (discussing RDXD-20.018). Among Google’s top-level signals are those measuring a web page’s quality and popularity. Id.; RDX0041 at -001.

Signals developed through deep-learning models, like RankEmbed, also are among Google’s top-level signals.”

User-Side Data

RankEmbed uses “user-side” data. The Memorandum, in a section about the kind of data Google should provide to competitors, describes RankEmbed (which FastSearch is based on) in this manner:

“User-side Data used to train, build, or operate the RankEmbed model(s); “

Elsewhere it shares:

“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: _____% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”

Then:

“The RankEmbed model itself is an AI-based, deep-learning system that has strong natural-language understanding. This allows the model to more efficiently identify the best documents to retrieve, even if a query lacks certain terms. PXR0171 at -086 (“Embedding based retrieval is effective at semantic matching of docs and queries”);

…RankEmbed is trained on 1/100th of the data used to train earlier ranking models yet provides higher quality search results.

…RankEmbed particularly helped Google improve its answers to long-tail queries.

…Among the underlying training data is information about the query, including the salient terms that Google has derived from the query, and the resultant web pages.

…The data underlying RankEmbed models is a combination of click-and-query data and scoring of web pages by human raters.

…RankEmbedBERT needs to be retrained to reflect fresh data…”

A New Perspective On AI Search

Is it true that links do not play a role in selecting web pages for AI Overviews? Google’s FastSearch prioritizes speed. Ryan Jones theorizes that it could mean Google uses multiple indexes, with one specific to FastSearch made up of sites that tend to get visits. That may be a reflection of the RankEmbed part of FastSearch, which is said to be a combination of “click-and-query data” and human rater data.

Regarding human rater data, with billions or trillions of pages in an index, it would be impossible for raters to manually rate more than a tiny fraction. So it follows that the human rater data is used to provide quality-labeled examples for training. Labeled data are examples that a model is trained on so that the patterns inherent to identifying a high-quality page or low-quality page can become more apparent.
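
To make the idea of embedding-based retrieval concrete, here’s a toy TypeScript sketch that scores documents by the cosine similarity of their embeddings to a query embedding and returns the top matches; the vectors and URLs are invented for illustration, and this is not a description of how FastSearch or RankEmbed actually work.

```typescript
// Toy example of embedding-based retrieval: score documents by the cosine
// similarity of their embedding to the query embedding, then take the top k.
// In a real system the vectors come from a trained model; these are made up.

type Doc = { url: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function retrieveTopK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}

// Made-up three-dimensional embeddings purely for illustration.
const docs: Doc[] = [
  { url: "https://example.com/a", embedding: [0.9, 0.1, 0.0] },
  { url: "https://example.com/b", embedding: [0.2, 0.8, 0.1] },
  { url: "https://example.com/c", embedding: [0.1, 0.2, 0.9] },
];

console.log(retrieveTopK([0.85, 0.15, 0.05], docs, 2).map((d) => d.url));
```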

Featured Image by Shutterstock/Cookie Studio

Google Quietly Raised Ad Prices, Court Orders More Transparency via @sejournal, @MattGSouthern

Google raised ad prices incrementally through internal “pricing knobs” that advertisers couldn’t detect, according to federal court documents.

  • Google raised ad prices 5-15% at a time using “pricing knobs” that made increases look like normal auction fluctuations.
  • Google’s surveys showed advertisers noticed higher costs but didn’t realize Google was causing the increases.
  • A federal judge now requires Google to publicly disclose auction changes that could raise advertiser costs.
Interaction To Next Paint: 9 Content Management Systems Ranked via @sejournal, @martinibuster

Interaction to Next Paint (INP) is a meaningful Core Web Vitals metric because it represents how quickly a web page responds to user input. It is so important that the HTTPArchive has a comparison of INP across content management systems. The following are the top content management systems ranked by Interaction to Next Paint.

What Is Interaction To Next Paint (INP)?

INP measures how responsive a web page is to user interactions during a visit. Specifically, it measures interaction latency, which is the time between when a user clicks, taps, or presses a key and when the page visually responds.

This is a more accurate measurement of responsiveness than the older metric it replaced, First Input Delay (FID), which only captured the first interaction. INP is more comprehensive because it evaluates all clicks, taps, and key presses on a page and then reports a representative value based on the longest meaningful latency.

The INP score is representative of the page’s responsiveness. For that reason, extreme outliers are filtered out of the calculation so that the score reflects typical worst-case responsiveness.

Web pages with poor INP scores create a frustrating user experience that increases the risk of page abandonment. Fast responsiveness enables a smoother experience that supports higher engagement and conversions.

INP Scores Have Three Ratings:

  • Good: Below or at 200 milliseconds
  • Needs Improvement: Above 200 milliseconds and below or at 500 milliseconds
  • Poor: Above 500 milliseconds
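
If you want to see how a page scores against these thresholds for real visitors, here’s a minimal sketch using the open-source web-vitals JavaScript library (assuming version 3 or later, which exposes onINP); the analytics endpoint is a hypothetical placeholder.

```typescript
import { onINP } from "web-vitals";

// Reports the page's INP once the browser has a representative interaction
// latency (typically when the page is backgrounded or unloaded).
onINP((metric) => {
  // metric.value is the latency in milliseconds; metric.rating is
  // "good", "needs-improvement", or "poor", based on the same
  // 200 ms / 500 ms thresholds listed above.
  console.log(`INP: ${Math.round(metric.value)} ms (${metric.rating})`);

  // Optionally beacon the result to your own analytics endpoint (hypothetical URL).
  navigator.sendBeacon(
    "/analytics/inp",
    JSON.stringify({ value: metric.value, rating: metric.rating })
  );
});
```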

Content Management System INP Champions

The latest Interaction to Next Paint (INP) data shows that all major content management systems improved from June to July, though only incrementally.

Joomla posted the largest gain with a 1.12% increase in sites achieving a good score. WordPress followed with a 0.88% increase in the number of sites posting a good score, while Wix and Drupal improved by 0.70% and 0.64%.

Duda and Squarespace also improved, though by smaller margins of 0.46% and 0.22%. Even small percentage changes can reflect real improvements in how users experience responsiveness on these platforms, so it’s encouraging that every publishing platform in this comparison is improving.

CMS INP Ranking By Monthly Improvement

  1. Joomla: +1.12%
  2. WordPress: +0.88%
  3. Wix: +0.70%
  4. Drupal: +0.64%
  5. Duda: +0.46%
  6. Squarespace: +0.22%

Which CMS Has The Best INP Scores?

Month-to-month improvement shows who is doing better, but that’s not the same as which CMS is doing the best. The July INP results show a different ranking order of content management systems when viewed by overall INP scores.

Squarespace leads with 96.07% of sites achieving a good INP score, followed by Duda at 93.81%. This is a big difference from the Core Web Vitals rankings, where Duda is consistently ranked number one. When it comes to arguably the most important Core Web Vital metric, Squarespace takes the lead as the number one ranked CMS for Interaction to Next Paint.

Wix and WordPress are ranked in the middle with 87.52% and 86.77% of sites showing a good INP score, while Drupal, with a score of 86.14%, is ranked in fifth place, just a fraction behind WordPress.

Ranking in sixth place in this comparison is Joomla, trailing the other five with a score of 84.47%. That score is not so bad considering that it’s only two to three percentage points behind Wix and WordPress.

CMS INP Rankings for July 2025

  1. Squarespace – 96.07%
  2. Duda: 93.81%
  3. Wix: 87.52%
  4. WordPress: 86.77%
  5. Drupal: 86.14%
  6. Joomla: 84.47%

These rankings show that even platforms that lag in INP performance, like Joomla, are still improving, and Joomla could eventually best the other platforms if it keeps up its pace of improvement.

In contrast, Squarespace, which already performs well, posted the smallest gain. This indicates that performance improvement is uneven, with systems advancing at different speeds. Nevertheless, the latest Interaction to Next Paint (INP) data shows that all six content management systems in this comparison improved from June to July. That upward performance trend is a positive sign for publishers.

What About Shopify’s INP Performance?

Shopify has strong Core Web Vitals performance, but how well does it compare to these six content management systems? This might seem like an unfair comparison because shopping platforms require features, images, and videos that can slow a page down. But Duda, Squarespace, and Wix offer ecommerce solutions, so it’s actually a fair and reasonable comparison.

We see that the rankings change when Shopify is added to the INP comparison:

Shopify Versus Everyone

  1. Squarespace: 96.07%
  2. Duda: 93.81%
  3. Shopify: 89.58%
  4. Wix: 87.52%
  5. WordPress: 86.77%
  6. Drupal: 86.14%
  7. Joomla: 84.47%

Shopify is ranked number three. Now look at what happens when we compare the three shopping platforms against each other:

Top Ranked Shopping Platforms By INP

  1. BigCommerce: 95.29%
  2. Shopify: 89.58%
  3. WooCommerce: 87.99%

BigCommerce is the number-one-ranked shopping platform for the important INP metric among the three in this comparison.

Lastly, we compare the INP performance scores for all the platforms together, leading to a surprising comparison.

CMS And Shopping Platforms Comparison

  1. Squarespace: 96.07%
  2. BigCommerce: 95.29%
  3. Duda: 93.81%
  4. Shopify: 89.58%
  5. WooCommerce: 87.99%
  6. Wix: 87.52%
  7. WordPress: 86.77%
  8. Drupal: 86.14%
  9. Joomla: 84.47%

All three ecommerce platforms feature in the top five rankings of content management systems, which is remarkable because of the resource-intensive demands of ecommerce websites. WooCommerce, a WordPress-based shopping platform, ranks in position five, but it’s so close to Wix that the two are virtually tied.

Takeaways

INP measures the responsiveness of a web page, making it a meaningful indicator of user experience. The latest data shows that while every CMS is improving, Squarespace, BigCommerce, and Duda outperform all other content platforms in this comparison by meaningful margins.

All of the platforms in this comparison show high percentages of good INP scores. The fourth-ranked Shopify is only 6.49 percentage points behind the top-ranked Squarespace, and 84.47% of the sites published with the bottom-ranked Joomla show a good INP score. These results show that all platforms are delivering a quality experience for users.

View the results here (must be logged into a Google account to view).

Featured Image by Shutterstock/Roman Samborskyi

Google Avoids Breakup As Judge Bars Exclusive Default Search Deals via @sejournal, @MattGSouthern

A federal judge outlined remedies in the U.S. search antitrust case that bar Google from using exclusive default search deals but stop short of forcing a breakup.

Reuters reports that Google won’t have to divest Chrome or Android, but it may have to share some search data with competitors under court-approved terms.

Google says it will appeal.

What The Judge Ordered

Judge Amit P. Mehta barred Google from entering or maintaining exclusive agreements that tie the distribution of Search, Chrome, Google Assistant, or the Gemini app to other apps, licenses, or revenue-share arrangements.

The ruling allows Google to continue paying for placement but prohibits exclusivity that could block rivals.

The order also envisions Google making certain search and search-ad syndication services available to competitors at standard rates, alongside limited data sharing for “qualified competitors.”

Mehta ordered Google to share some search data with competitors under specific protections to help them improve their relevance and revenue. Google argued this could expose its trade secrets and plans to appeal the decision.

The judge directed the parties to meet and submit a revised final judgment by September 10. Once entered, the remedies would take effect 60 days later, run for six years, and be overseen by a technical committee. Final language could change based on the parties’ filing.

How We Got Here

In August 2024, Mehta found Google illegally maintained a monopoly in general search and related text ads.

Judge Amit P. Mehta wrote in his August 2024 opinion:

“Google is a monopolist, and it has acted as one to maintain its monopoly.”

This decision established the need for remedies. Today’s order focuses on distribution and data access, rather than breaking up the company.

What’s Going To Change

Ending exclusivity changes how contracts for default placements can be made across devices and browsers. Phone makers and carriers may need to update their agreements to follow the new rules.

However, the ruling doesn’t require any specific user experience change, like a choice screen. The results will depend on how new contracts are created and approved by the court.

Next Steps

Expect a gradual rollout if the final judgment follows today’s outline.

Here are the next steps to watch for:

  • The revised judgment that the parties will submit by September 10.
  • Changes to contracts between Google and distribution partners to meet the non-exclusivity requirement.
  • Any pilot programs or rules that specify who qualifies as a “qualified competitor” and what data they can access.

Separately, Google faces a remedies trial in the ad-tech case in late September. This trial could lead to changes that affect advertising and measurement.

Looking Ahead

If the parties submit a revised judgment by September 10, changes could start about 60 days after the court’s final order. This might shift if Google gets temporary relief during an appeal.

In the short term, expect contract changes rather than product updates.

The final judgment will determine who can access data and which types are included. If the program is limited, it may not significantly affect competition. If broader, competitors might enhance their relevance and profit over the six-year period.

Also watch the ad tech remedies trial this month. Its results, along with the search remedies, will shape how Google handles search and ads in the coming years.