Google Links To Itself: 43% Of AI Overviews Point Back To Google via @sejournal, @MattGSouthern

New research shows that Google’s AI Overviews often link to Google, contributing to the walled garden effect that encourages users to stay longer on Google’s site.

A study by SE Ranking examined Google’s AI Overviews in five U.S. locations. It found that 43% of these AI answers contain links that redirect users to Google’s search results. Each answer typically includes 4-6 links to Google.

This aligns with recent data indicating that Google users make 10 clicks before visiting other websites. These patterns suggest that Google is working to keep users within its ecosystem for longer periods.

Google Citing Itself in AI Answers

The SE Ranking study analyzed 100,013 keywords across five U.S. locations: Colorado, Texas, California, New York, and Washington, D.C.

It tracked how Google’s AI summaries function in different regions. Although locations showed slight variance, the study found that Google.com is the most-cited website in AI Overviews.

Google appears in about 44% of all AI answers, significantly ahead of the next most-cited sources (YouTube, Reddit, Quora, and Wikipedia), which appear in about 13% of answers.

The research states:

“Based on the data combined from all five states (141,507 total AI Overview appearances), our data analysis shows that 43.42% (61,437 times) of AI Overview responses contain links to Google organic results, while 56.58% of responses do not.”

Image Credit: SE Ranking

Building on the Walled Garden Trend

These findings complement a recent analysis from Momentic, which found that Google’s “pages per visit” has reached 10, indicating users make significantly more clicks within Google before visiting other sites.

Overall, this research reveals Google is creating a more self-contained search experience:

  • AI Overviews appear in approximately 30% of all searches
  • Nearly half of these AI answers link back to Google itself
  • Users now make 10 clicks within Google before leaving
  • Longer, more specific queries trigger AI Overviews more frequently

Google still drives substantial traffic outward: 175.5 million visits in March, according to Momentic.

However, it’s less effective at sending users to external sites than ChatGPT. Google produces just 0.6 external visits per user, while ChatGPT generates 1.4.

More Key Stats from the Study

The SE Ranking research uncovered several additional findings:

  • AI Overviews almost always appear alongside other SERP features (99.25% of the time), most commonly with People Also Ask boxes (98.5%)
  • The typical AI Overview consists of about 1,766 characters (roughly 254 words) and cites an average of 13.3 sources
  • Medium-difficult keywords (21-40 on the difficulty scale) most frequently trigger AI Overviews (33.4%), whereas highly competitive terms (81-100) rarely generate them (just 3.7%)
  • Keywords with CPC values between $2 and $5 produce the highest rate of AI Overviews (32%), while expensive keywords ($10+) yield them the least (17.3%)
  • Fashion and Beauty has the lowest AI Overview appearance rate (just 1.4%), followed by E-Commerce (2.1%) and News/Politics (3.8%)
  • The longer an AI Overview’s answer, the more sources it cites. Responses under 600 characters cite about five sources, while those over 6,600 characters cite around 28 sources.

These statistics further emphasize how Google’s AI Overviews are reshaping search behavior.

This data stresses the need to optimize for multiple traffic sources while remaining visible within Google’s results pages.

U.S. Copyright Office Cites Legal Risk At Every Stage Of Generative AI via @sejournal, @martinibuster

The United States Copyright Office released a pre-publication version of a report on the use of copyrighted materials for training generative AI, outlining a legal and factual case that identifies copyright risks at every stage of generative AI development.

The report was created in response to public and congressional concern about the use of copyrighted content, including pirated versions, by AI systems without first obtaining permission. While the Copyright Office doesn’t make legal rulings, the reports it creates offer legal and technical guidance that can influence legislation and court decisions.

The report offers four reasons AI technology companies should be concerned:

  1. The report states that many acts of data acquisition (the process of creating datasets from copyrighted works) and training could “constitute prima facie infringement.”
  2. It challenges the common industry defense that training models does not involve “copying,” noting that the process of creating datasets involves making multiple copies and that the resulting model weights can also contain copies of those works. The report cites reported instances where AI reproduces copyrighted works, either word for word or as “near identical” copies.
  3. It states that the training process implicates the right of reproduction, one of the exclusive rights granted to copyright holders, and emphasizes that memorization and regurgitation of copyrighted content by models may constitute infringement, even if unintended.
  4. Transformative use, in which a new work adds new meaning or purpose to the original, is an important consideration in fair use analysis. The report acknowledges that “some uses of copyrighted works in AI training are likely to be transformative,” but it “disagrees” with the argument that AI training is transformative simply because it resembles “human learning,” such as when a person reads a book and learns from it.

Copyright Implications At Every Stage of AI Development

Perhaps the most damning part of the report is where it identifies potential copyright issues at every stage of AI development, listing each stage and explaining where it may infringe.

A. Data Collection and Curation

The steps required to produce a training dataset containing copyrighted works clearly implicate the right of reproduction…

B. Training

The training process also implicates the right of reproduction. First, the speed and scale of training requires developers to download the dataset and copy it to high-performance storage prior to training. Second, during training, works or substantial portions of works are temporarily reproduced as they are “shown” to the model in batches.

Those copies may persist long enough to infringe the right of reproduction, depending on the model at issue and the specific hardware and software implementations used by developers.

Third, the training process—providing training examples, measuring the model’s performance against expected outputs, and iteratively updating weights to improve performance—may result in model weights that contain copies of works in the training data. If so, then subsequent copying of the model weights, even by parties not involved in the training process, could also constitute prima facie infringement.

C. RAG

RAG also involves the reproduction of copyrighted works. Typically, RAG works in one of two ways. In one, the AI developer copies material into a retrieval database, and the generative AI system can later access that database to retrieve relevant material and supply it to the model along with the user’s prompt. In the other, the system retrieves material from an external source (for example, a search engine or a specific website). Both methods involve making reproductions, including when the system copies retrieved content at generation time to augment its response.
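
For readers unfamiliar with the mechanics being described, here is a minimal sketch of the first pattern (our illustration, not part of the report): source material is copied into a retrieval store ahead of time, and retrieved passages are copied again into the prompt at generation time. The documents, relevance scoring, and prompt format below are hypothetical.

    # Minimal sketch of database-style RAG (hypothetical data, toy relevance scoring).
    # Step 1: source material is copied into a retrieval store ahead of time.
    documents = [
        "Full text copied from copyrighted source A ...",
        "Full text copied from copyrighted source B ...",
    ]

    def retrieve(query, docs, top_k=1):
        # Toy relevance score: number of words shared between the query and a document.
        def overlap(doc):
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(docs, key=overlap, reverse=True)[:top_k]

    def build_prompt(user_question):
        # Step 2: at generation time, the retrieved text is reproduced again inside
        # the prompt that is supplied to the model along with the user's question.
        context = "\n".join(retrieve(user_question, documents))
        return f"Context:\n{context}\n\nQuestion: {user_question}"

    print(build_prompt("What does source A say?"))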

D. Outputs

Generative AI models sometimes output material that replicates or closely resembles copyrighted works. Users have demonstrated that generative AI can produce near exact replicas of still images from movies, copyrightable characters, or text from news stories. Such outputs likely infringe the reproduction right and, to the extent they adapt the originals, the right to prepare derivative works.

The report finds infringement risks at every stage of generative AI development, and while its findings are not legally binding, they could be used to create legislation and serve as guidance for courts.

Takeaways

  • AI Training And Copyright Infringement:
    The report argues that both data acquisition and model training can involve unauthorized copying, possibly constituting “prima facie infringement.”
  • Rejection Of Industry Defenses:
    The Copyright Office disputes common AI industry claims that training does not involve copying and that AI training is analogous to human learning.
  • Fair Use And Transformative Use:
    The report disagrees with the broad application of transformative use as a defense, especially when based on comparisons to human cognition.
  • Concern About All Stages Of AI Development:
    Copyright concerns are identified at every stage of AI development, from data collection and training to retrieval-augmented generation (RAG) and model outputs.
  • Memorization and Model Weights:
    The Office warns that AI models may retain copyrighted content in weights, meaning even use or distribution of those weights could be infringing.
  • Output Reproduction and Derivative Works:
    The ability of AI to generate near-identical outputs (e.g., movie stills, characters, or articles) raises concerns about violations of both reproduction and derivative work rights.
  • RAG-Specific Infringement Risk:
    Both methods of RAG, copying content into a database or retrieving from external sources, are described as involving potentially infringing reproductions.

The U.S. Copyright Office report describes multiple ways that generative AI development may infringe copyright law, challenging the legality of using copyrighted data without permission at every technical stage, from dataset creation to model outputs. It rejects the analogy to human learning as a defense and the industry’s broad application of fair use. Although the report doesn’t have the same force as a judicial finding, it can serve as guidance for lawmakers and courts.

Featured Image by Shutterstock/Treecha

Google Reminds That Hreflang Tags Are Hints, Not Directives via @sejournal, @MattGSouthern

A recent exchange between SEO professional Neil McCarthy and Google Search Advocate John Mueller has highlighted how Google treats hreflang tags.

McCarthy observed pages intended for Belgian French users (fr-be) appearing in France. Mueller clarified that hreflang is a suggestion, not a guarantee.

Here’s what this interaction shows us about hreflang, canonical tags, and international SEO.

French-Belgian Pages in French Search Results

McCarthy noticed that pages tagged for French-Belgian audiences were appearing in searches conducted from France.

In a screenshot shared on Bluesky, Google explained that the result:

  • Contains the search terms
  • Is in French
  • “Seems coherent with this search, even if it usually appears in searches outside of France”

McCarthy asked whether Google was ignoring his hreflang instructions.

What Google Says About hreflang

Mueller replied:

“hreflang doesn’t guarantee indexing, so it can also just be that not all variations are indexed. And, if they are the same (eg fr-fr, fr-be), it’s common that one is chosen as canonical (they’re the same).”

In a follow-up, he added:

“I suspect this is a ‘same language’ case where our systems just try to simplify things for sites. Often hreflang will still swap out the URL, but reporting will be on the canonical URL.”

Key Takeaways

Hreflang is a Hint, Not a Command

Google uses hreflang as a suggestion for which regional URL to display. It doesn’t require that each version be indexed or shown separately.
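
For reference, hreflang annotations for the scenario McCarthy describes would typically look something like the markup below; the domain and paths are placeholders, not his site. Each regional page lists all of its alternates, including itself:

    <link rel="alternate" hreflang="fr-be" href="https://example.com/fr-be/page/" />
    <link rel="alternate" hreflang="fr-fr" href="https://example.com/fr-fr/page/" />
    <link rel="alternate" hreflang="x-default" href="https://example.com/page/" />

Even with correct annotations like these, Mueller’s point stands: if the fr-be and fr-fr pages are effectively identical, Google may fold them into a single canonical URL.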

Canonical Tags Can Override Variations

Google may select one as the canonical URL when two pages are nearly identical. That URL then receives all the indexing and reporting.

“Same Language” Simplification

If two pages share the same language, Google’s systems may group them. Even if hreflang presents the correct variant to users, metrics often consolidate into the canonical version.

What This Means for International SEO Teams

Add unique elements to each regional page. The more distinct the content, the less likely Google is to group it under one canonical URL.

In Google Search Console, verify which URL is shown as canonical. Don’t assume that hreflang tags alone will generate separate performance data.

Use VPNs or location-based testing tools to search from various countries. Ensure Google displays the correct pages for the intended audience.

Review Google’s official documentation on hreflang, sitemaps, and HTTP headers. Remember that hreflang signals are hints that work best alongside a solid site structure.

Next Steps for Marketers

International SEO can be complex, but clear strategies help:

  1. Audit Your hreflang Setup: Check tag syntax, XML sitemaps, and HTTP header configurations (see the reciprocity-check sketch after this list).
  2. Review Page Similarity: Ensure each language-region version serves a unique user need.
  3. Monitor Continuously: Set up alerts for unexpected traffic patterns or drops in regional performance.
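
For the first step, a lightweight reciprocity check catches the most common hreflang mistake: pages that declare alternates that don’t link back. The sketch below is illustrative only; it assumes hreflang is implemented via <link> tags in the HTML head (not sitemaps or HTTP headers), and the URL is a placeholder.

    # Illustrative hreflang reciprocity check (placeholder URL; <link> tags only).
    import requests
    from html.parser import HTMLParser

    class HreflangParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.alternates = {}  # hreflang value -> href

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "link" and a.get("rel") == "alternate" and "hreflang" in a:
                self.alternates[a["hreflang"].lower()] = a.get("href", "")

    def get_alternates(url):
        parser = HreflangParser()
        parser.feed(requests.get(url, timeout=10).text)
        return parser.alternates

    def check_reciprocity(start_url):
        for lang, alt_url in get_alternates(start_url).items():
            if alt_url == start_url:
                continue  # skip the page's self-referencing annotation
            returns = get_alternates(alt_url).values()
            status = "OK" if start_url in returns else "MISSING return annotation"
            print(f"{start_url} -> {alt_url} ({lang}): {status}")

    check_reciprocity("https://example.com/fr-be/")  # placeholder URL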

SEO teams can set realistic goals and fine-tune their international strategies by understanding hreflang’s limits and Google’s approach to canonical tags. Regular testing, precise localization, and vigilant monitoring will keep regional campaigns on track.


Featured Image: Roman Samborskyi/Shutterstock

Is The SEO Job Market Undergoing A Major Shift? via @sejournal, @martinibuster

Anecdotal reports and an SEO jobs study describe a search marketing industry undergoing profound changes, not only in the skills in demand but also in hiring practices that may be making it difficult for experienced SEOs to get the jobs they are well qualified for.

Short History Of SEO Jobs

Twenty-five years ago, getting into SEO and earning a living was relatively easy. Many top corporations across all industries were hiring freelancers and agencies for specialized SEO assistance. I suspect that marketing departments didn’t view SEO as a subset of marketing and that many didn’t have SEO staff. That gradually changed as more organizations hired dedicated SEO staff, with third-party SEOs providing specialized assistance.

What’s Going On With SEO Jobs?

A recent report from SEOJobs.com describes the state of SEO jobs in 2024.

The following insights show that the job of SEO continues to evolve:

  • SEO job openings declined in 2024
  • Median SEO salaries dropped
  • 65% of SEO jobs are in-house
  • Remote SEO jobs dropped
  • SEO job titles related to content strategy and writing dropped by 28%
  • SEO Analyst job titles dropped by 12%
  • Technical SEO and related titles dropped by a small percentage
  • Senior-level titles like manager, director, and VP had the strongest increases

The report says that job titles related to Technical SEO dropped:

“Positions in the Technical SEO and related title group represented 5.8 percent of all SEO jobs during the first quarter of 2024, falling slightly to 5.4 percent by the end of the fourth quarter – a decrease of seven percent.”

But the report also states that Technical SEO is still an in-demand skill:

“…demand for skill in technical SEO grew at the fastest rate of any skill during the fourth quarter, rising to 75 percent from 71 percent the previous quarter.”

Experienced SEOs Having Trouble Getting Hired… By AI?

Keith Goode read the above-referenced report and commented that he believes many highly experienced SEOs are failing to get hired because of a poor implementation of AI in the hiring process.

He shared his insights on a LinkedIn post:

“I have seen superior SEOs languish amongst thousands of candidates, immediately rejected for a lack of experience (??) or funneled through multiple rounds of interviews and work assignments, only to be rudely ghosted by the recruiters.

The cause? I guess you could blame AI if you wanted to shoot the messenger. But the reality is that companies have overinvested in an unproven technology to handle things that it’s not yet ready to handle. I get that recruitment teams are deluged with thousands of resumes for every opening, and I understand they need a way to streamline the screening process.

However, AI has proven to be more of an enemy within than a helper. Anecdotally, I’ve heard about a hiring manager who applied for their own job opening (presumably one they were more than qualified for) only to receive an immediate rejection from the AI-powered ATS. That person fired their hiring team.

(By the way, I’m not anti-AI. I’m anti-foolishness, and a lot of companies are acting like fools.)”

Experienced SEOs Are Getting Ghosted

It may be true that SEOs with decades of experience are being left behind by poor AI vetting. A glaring example is the one shared by Brian Harnish, an SEO with decades of hands-on experience.

Brian recently published the following on LinkedIn and Facebook:

“In this job market, for me it simply appears that nothing matters.

  • You can apply at 6:15 a.m. the day the job posting pops up and be one of the first.
  • You can change your resume 15 times like I have.
  • You can use ResumeWorded.com for an ATS version of your resume.
  • You can write your resume yourself until you’re blue in the face.
  • You can follow up on the interview with thank yous immediately after.
  • You can follow up on interview decisions later.
  • You can agree to their salary ranges exactly. Even when it’s a pay cut for you.
  • A/B testing long vs. short resumes yield the same results.
  • You can tie in all of your achievements with task > impact > website statements on your resume.
  • You write an entirely customized LinkedIn profile.
  • You can know all the right people.
  • You can network up the wazoo.
  • You can have the greatest interview that you feel you’ve ever put forth.

But companies don’t provide feedback. It’s always the same form letter: “while your qualifications are impressive, we went with another candidate.” Or you’re ghosted.

This market is brutal. I really want a job. Not a handout. But nobody appears to want to hire me. At all. Despite doing EVERYthing right. I used to get hired on the spot. Now it’s just crickets.”

What The Heck Is Going On?

I know of other SEOs, also with decades of experience across all areas of SEO, who should have bounced to a new job in a matter of days but took months to get hired. I’m talking about people with SEO director-level experience at top Fortune 500 companies.

How does this happen?

Are you experiencing something similar?

Featured Image by Shutterstock/Ollyy

How Google Protects Searchers From Scams: Updates Announced via @sejournal, @MattGSouthern

Google has announced improvements to its security systems, revealing that AI now plays a crucial role in protecting users from scams.

Additionally, Google has released a report detailing the effectiveness of AI in combating scams in search results.

Google’s AI-Powered Defense Strategy

Google’s report highlights its progress in spotting scams. Its AI systems block hundreds of millions of harmful search results daily.

Google claims it can now catch 20 times more scammy pages before they appear in search results compared to three years ago. This comes from investments in AI systems designed to spot fraud patterns.

Google explains in its report:

“Advancements in AI have bolstered our scam-fighting technologies — enabling us to analyze vast quantities of text on the web, identify coordinated scam campaigns and detect emerging threats — staying one step ahead to keep you safe on Search.”

How Google’s AI Identifies Sophisticated Scams

Google’s systems can now spot networks of fake websites that might look real when viewed alone. This broader view helps catch coordinated scam campaigns that used to slip through the cracks.

Google says its AI is most effective in two areas:

  1. Fake customer service: After spotting a rise in fake airline customer service scams, Google added protections that cut these scams by more than 80% in search results.
  2. Fake official sites: New protections launched in 2024 reduced scams pretending to be government services by over 70%.

Cross-Platform Protection Extends Beyond Search

Google is expanding its scam-fighting to Chrome and Android, too.

Chrome’s Enhanced Protection with Gemini Nano

Chrome’s Enhanced Protection mode now uses Gemini Nano, an AI model that works right on your device. It analyzes websites in real-time to spot dangers.

Jasika Bawa, Group Product Manager for Chrome, says:

“The on-device approach provides instant insight on risky websites and allows us to offer protection, even against scams that haven’t been seen before.”

Android’s Expanded Defenses

For mobile users, Google has added:

  • AI warnings in Chrome for Android that flag suspicious notifications
  • Scam detection in Google Messages and Phone by Google that spots call and text scams

Multilingual Protection Through Language Models

Google is improving its ability to fight scams across languages. Using large language models, Google can find a scam in one language and then protect users searching in other languages.

This matters for international SEO specialists and marketers with global audiences. It shows that Google is getting better at analyzing content in different languages.

What This Means

As Google enhances its ability to detect deceptive content, the standard for quality keeps rising for all websites.

Google now views security as an interconnected system across all its products, rather than as separate features.

Maintaining high transparency, accuracy, and user focus remains the best strategy for long-term search success.

10Web Releases API For Scaled White Label AI Website Building via @sejournal, @martinibuster

10Web has launched an AI Website Builder API that turns text prompts into fully functional WordPress websites hosted on 10Web’s infrastructure, enabling platforms to embed AI website creation into their product workflows. Designed for SaaS tools, resellers, developers, and agencies, the API delivers business-ready sites with ecommerce features, AI-driven customization, and full white-label support to help entrepreneurs launch quickly and at scale.

Developer And Platform Focused API

10Web’s AI Website Builder API was designed for developers and platforms who serve entrepreneurs, enabling them to embed website creation into their own tools so that non-technical users (entrepreneurs and small business owners) can launch websites with zero coding or technical knowledge.

10Web describes its product capabilities:

“Text-to-website AI: Generates structure, content, sections, and visuals

Plugin presets: Define default tools per client, project, or vertical

Drag-and-drop editing: Built-in Elementor-based editor for post-generation control

Managed WordPress infrastructure: Hosting, SSL, staging, backups, and DNS

Dashboards & sandbox: Analytics, developer tools, and real-time preview”
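
The capabilities above are quoted from 10Web. The sketch below, by contrast, is purely hypothetical: the endpoint path, parameters, and response fields are placeholders showing how a platform might wrap a text-to-website call behind its own onboarding flow, not 10Web’s published API. Consult 10Web’s documentation for the real interface.

    # Hypothetical wrapper around a text-to-website API (placeholder endpoint and fields).
    import requests

    API_BASE = "https://api.example-builder.com/v1"  # placeholder, not a real endpoint
    API_KEY = "YOUR_API_KEY"

    def create_site_from_prompt(business_description: str, subdomain: str) -> str:
        """Send a text prompt, get back the URL of the generated, hosted site."""
        response = requests.post(
            f"{API_BASE}/sites",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": business_description, "subdomain": subdomain},
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["site_url"]  # assumed response field

    # Example: a SaaS platform could call this when a customer finishes onboarding.
    # print(create_site_from_prompt("A bakery in Austin selling sourdough", "austin-bakery"))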

Learn more at 10Web:

Integrate the #1 AI Website Builder API into your platform

Featured Image by Shutterstock/Surf Ink

New AI Models Make More Mistakes, Creating Risk for Marketers via @sejournal, @MattGSouthern

The newest AI tools, built to be smarter, make more factual errors than older versions.

As The New York Times highlights, tests show error rates as high as 79% in advanced systems from companies like OpenAI.

This can create problems for marketers who rely on these tools for content and customer service.

Rising Error Rates in Advanced AI Systems

Recent tests reveal a trend: newer AI systems are less accurate than their predecessors.

OpenAI’s latest system, o3, got facts wrong 33% of the time when answering questions about people. That’s twice the error rate of its previous system.

Its o4-mini model performed even worse, with a 48% error rate on the same test.

For general questions, the results were:

  • OpenAI’s o3 made mistakes 51% of the time
  • The o4-mini model was wrong 79% of the time

Similar problems appear in systems from Google and DeepSeek.

Amr Awadallah, CEO of Vectara and former Google executive, tells The New York Times:

“Despite our best efforts, they will always hallucinate. That will never go away.”

Real-World Consequences For Businesses

These aren’t just abstract problems. Real businesses are facing backlash when AI gives wrong information.

Last month, Cursor (a tool for programmers) faced angry customers when its AI support bot falsely claimed users couldn’t use the software on multiple computers.

This wasn’t true. The mistake led to canceled accounts and public complaints.

Cursor’s CEO, Michael Truell, had to step in:

“We have no such policy. You’re of course free to use Cursor on multiple machines.”

Why Reliability Is Declining

Why are newer AI systems less accurate? According to a New York Times report, the answer lies in how they’re built.

Companies like OpenAI have used most of the available internet text for training. Now they’re using “reinforcement learning,” which involves teaching AI through trial and error. This approach helps with math and coding, but seems to hurt factual accuracy.

Researcher Laura Perez-Beltrachini explained:

“The way these systems are trained, they will start focusing on one task—and start forgetting about others.”

Another issue is that newer AI models “think” step-by-step before answering. Each step creates another chance for mistakes.

These findings are concerning for marketers using AI for content, customer service, and data analysis.

AI content with factual errors could hurt your search rankings and brand.

Pratik Verma, CEO of Okahu, tells the New York Times:

“You spend a lot of time trying to figure out which responses are factual and which aren’t. Not dealing with these errors properly basically eliminates the value of AI systems.”

Protecting Your Marketing Operations

Here’s how to safeguard your marketing:

  • Have humans review all customer-facing AI content
  • Create fact-checking processes for AI-generated material
  • Use AI for structure and ideas rather than facts
  • Consider AI tools that cite sources (called retrieval-augmented generation)
  • Create clear steps to follow when you spot questionable AI information

The Road Ahead

Researchers are working on these accuracy problems. OpenAI says it’s “actively working to reduce the higher rates of hallucination” in its newer models.

Marketing teams need their own safeguards while still using AI’s benefits. Companies with strong verification processes will better balance AI’s efficiency with the need for accuracy.

Finding this balance between speed and correctness will remain one of digital marketing’s biggest challenges as AI continues to evolve.


Featured Image: The KonG/Shutterstock

Google Disputes News That Search Engine Use Is Falling via @sejournal, @martinibuster

Google took the unusual step of issuing a response to news reports that AI search engines and chatbots were causing a decline in traditional search engine use, directly contradicting testimony given by an Apple executive in the ongoing U.S. government antitrust lawsuit against Google.

Apple Testimony That Triggered Stock Sell-Off

Google’s stock price took a steep dive on the news that people were turning away from traditional search engines, dropping by 7.51% on Wednesday. The sell-off was triggered by testimony from Eddy Cue, Apple’s senior vice president of services, who said that search engine use in Apple’s Safari browser declined for the first time last month and expressed his opinion that a technological shift is underway that is undercutting the use of traditional search engines.

Early AI Adopters Turning Away From Google?

There is a view in Silicon Valley that Google Search is legacy technology. A recent episode of the Y Combinator show featured the host sharing that his Google search traffic has dropped by 15%, which he attributes to AI use in both Google and chatbots. He explained that if you want to see the future, you look to the early adopters, commenting that everyone he knows in Silicon Valley uses ChatGPT to get answers and that Google Search is de facto legacy technology.

The host described how 25 years ago the early adopters were using Google, but that now Google Search feels weird to him.

He said:

“People are now switching their behavior to where your default action if you’re looking for information is, you know ChatGPT or perplexity, or one of these things, and even just, you know, observing my own behavior. I’ll use Google mostly for kind of navigational. Like, if I’m just looking for a specific website and I know it’s going to give the same thing, but it’s starting to have that weird kind of, like legacy website, like I’m using eBay or something.”

Google’s Statement

Google’s statement was short and to the point, with no accompanying images to dress it up as a blog post; it could even be seen as terse.

Here’s what Google published:

“Here’s our statement on this morning’s press reports about Search traffic.

We continue to see overall query growth in Search. That includes an increase in total queries coming from Apple’s devices and platforms. More generally, as we enhance Search with new features, people are seeing that Google Search is more useful for more of their queries — and they’re accessing it for new things and in new ways, whether from browsers or the Google app, using their voice or Google Lens. We’re excited to continue this innovation and look forward to sharing more at Google I/O.”

AI Revolution: What Nobody Else Is Seeing

Here’s the video of the Y Combinator show that offers a peek at how people in Silicon Valley relate to Google Search. The part I quoted is at about the 24 minute mark.

Featured Image by Shutterstock/Framalicious

Apple May Add AI Search Engines to Safari As Google Use Drops via @sejournal, @MattGSouthern

Apple is reportedly planning to redesign Safari to focus on AI search engines.

According to recent testimony in the Google antitrust case, this comes as the company prepares for possible changes to its profitable Google deal.

Apple Signals Shift In Search Strategy

Eddy Cue, Apple’s senior vice president of services, testified that Safari searches dropped for the first time last month.

He believes users are choosing AI tools over regular search engines. This change happens as courts decide what to do after Google lost its antitrust case in August.

Per a report from Bloomberg, Cue testified:

“You may not need an iPhone 10 years from now as crazy as it sounds. The only way you truly have true competition is when you have technology shifts. Technology shifts create these opportunities. AI is a new technology shift, and it’s creating new opportunities for new entrants.”

AI Search Providers May Replace Traditional Search

Cue believes AI search providers such as OpenAI, Perplexity AI, and Anthropic will eventually replace traditional search engines like Google.

“We will add them to the list — they probably won’t be the default,” Cue said, noting Apple has already talked with Perplexity.

Currently, Apple offers ChatGPT as an option in Siri and plans to add Google’s Gemini later this year.

Cue admitted that these AI search tools need to improve their search indexes. However, he said their other features are “so much better that people will switch.”

“There’s enough money now, enough large players, that I don’t see how it doesn’t happen,” he said about the shift from standard search to AI-powered options.

Context: Google’s Antitrust Battle Timeline

This testimony comes during a key moment in the case against Google:

  • August 2024: Judge Mehta ruled Google broke antitrust law through exclusive search deals
  • October 2024: DOJ proposed remedies targeting search distribution, data usage, search results, and advertising
  • December 2024: Google offered counter-proposals to loosen search deals
  • March 2025: DOJ filed revised proposals, including possibly forcing Google to sell Chrome

The $20 Billion Question

The core issue is Google’s deal with Apple, worth a reported $20 billion per year, that makes Google the default search engine on Safari.

While expecting changes to this deal, Cue admitted he has “lost sleep over the possibility of losing the revenue share from their agreement.”

We learned about this payment during the trial. In 2022, Google paid Apple $20 billion to be Safari’s default search engine.

Last year, they expanded their partnership to add Google Lens to the Visual Intelligence feature on new iPhones.

Proposed Remedies & Responses

The DOJ’s latest filing suggests several significant changes:

  • Making Google sell off Chrome
  • Limiting Google’s payments for default search placement
  • Stopping Google from favoring its products in search results
  • Making Google’s advertising practices more transparent

Google has criticized these proposals, calling them a “radical interventionist agenda” that would “break a range of Google products.”

Instead, Google suggests letting browser companies make deals with multiple search engines and giving device makers more freedom over which search options are preloaded.

What This Means

If Apple shifts Safari toward AI, prepare for significant changes in search.

It’s not a stretch to say the outcome could reshape search competition and digital marketing for years.


Featured Image: Bendix M/Shutterstock