Google Launches AI-Powered Virtual Try-On & Shopping Tools via @sejournal, @MattGSouthern

Google unveiled three new shopping features today that use AI to enhance the way people discover and buy products.

The updates include a virtual try-on tool for clothing, more flexible price tracking alerts, and an upcoming visual style inspiration feature powered by AI.

Virtual Try-On Now Available Nationwide

Following a limited launch in Search Labs, Google’s virtual try-on tool is now available to all U.S. searchers.

The feature lets you upload a full-length photo and use AI to see how clothing items might look on your body. It works across Google Search, Shopping, and even product results in Google Images.

Tap the “try it on” icon on an apparel listing, upload a photo, and you’ll receive a visualization of yourself wearing the item. You can also save favorite looks, revisit past try-ons, and share results with others.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

The tool draws from billions of apparel items in its Shopping Graph, giving shoppers a wide range of options to explore.

Smarter Price Alerts

Google is also rolling out an enhanced price tracking feature for U.S. shoppers.

You can now set alerts based on specific criteria like size, color, and target price. This update makes it easier to track deals that match your exact preferences.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

AI-Powered Style Inspiration Arrives This Fall

Later in 2025, Google plans to launch a new shopping experience within AI Mode, offering outfit and room design inspiration based on your query.

This feature uses Google’s vision match technology and taps into 50 billion products indexed in the Shopping Graph.

Screenshot from: blog.google/products/shopping/back-to-school-ai-updates-try-on-price-alerts, July 2025.

What This Means for E-Commerce Marketers

These updates carry a few implications for marketers and online retailers:

  • Improve Product Images: With virtual try-on now live, high-quality and standardized apparel images are more likely to be included in AI-driven displays.
  • Competitive Pricing Matters: The refined price alert system could influence purchase behavior, especially as consumers gain more control over how they track product deals.
  • Optimize for Visual Search: The upcoming inspiration features suggest a growing role for visual-first shopping. Retailers should ensure their product feeds contain rich attribute data that helps Google’s systems surface relevant items.

Looking Ahead

Google’s suite of AI-powered shopping features can help create more personalized and interactive retail experiences.

For search marketers, these tools offer new ways to engage, but also raise the bar in terms of presentation and data quality.

For e-commerce teams, staying competitive may require rethinking how products are priced, presented, and positioned within Google’s growing suite of AI-enhanced tools.


Featured Image: Roman Samborskyi/Shutterstock

Google Search Central APAC 2025: Everything From Day 2 via @sejournal, @TaylorDanRW

The second day of the Google Search Central Live APAC 2025 kicked off with a brief tie‑in to the previous day’s deep dive into crawling, before moving squarely into indexing.

Cherry Prommawin opened by walking us through how Google parses HTML and highlights the key stages in indexing:

  1. HTML parsing.
  2. Rendering and JavaScript execution.
  3. Deduplication.
  4. Feature extraction.
  5. Signal extraction.

This set the theme for the rest of the day.

Cherry noted that Google first normalizes the raw HTML into a DOM, then looks for header and navigation elements, and determines which section holds the main content. During this process, it also extracts elements such as rel=canonical, hreflang, links and anchors, and meta-robots tags.

“There is no preference between responsive websites versus dynamic/adaptive websites. Google doesn’t try to detect this and doesn’t have a preferential weighting.” – Cherry Prommawin

Links remain central to the web’s structure, both for discovery and for ranking:

“Links are still an important part of the internet and used to discover new pages, and to determine site structure, and we use them for ranking.” – Cherry Prommawin

Controlling Indexing With Robots Rules

Gary Illyes clarified where robots.txt and robots‑meta tags fit into the flow:

  • Robots.txt controls what crawlers can fetch.
  • Meta robot tags control how that fetched data is used downstream.

He highlighted several lesser‑known directives:

  • none: Equivalent to noindex, nofollow combined into a single rule. While functionally identical, using one directive instead of two may simplify tag management.
  • notranslate: If set, Chrome will no longer offer to translate the page.
  • noimageindex: Also applies to video assets.
  • unavailable_after: Despite being introduced by engineers who have since moved on, it still works. This could be useful for deprecating time‑sensitive blog posts, such as limited‑time deals and promotions, so they don’t persist in Google’s AI features and risk misleading users or harming brand perception.
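For illustration, here is roughly how these directives look in page markup. The page, date, and pairings are hypothetical, and the tags are shown together only for illustration (a real page would pick the one that fits); Google's documentation covers the exact accepted date formats:

```html
<!-- Hypothetical limited-time promo page: drop out of the index after the deal ends -->
<meta name="robots" content="unavailable_after: 2025-08-31">

<!-- "none" combines noindex and nofollow into a single rule -->
<meta name="robots" content="none">

<!-- Ask Chrome not to offer to translate the page -->
<meta name="google" content="notranslate">
```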

Understanding What’s On A Page

Gary Illyes emphasized that the main content, as defined by Google’s Quality Rater Guidelines, is the most critical element in crawling and indexing. It might be text, images, videos, or rich features like calculators.

He showed how shifting a topic into the main content area can boost rankings.

In one example, moving references to “Hugo 7” from a sidebar into the central (main) content led to a measurable increase in visibility.

“If you want to rank for certain things, put those words and topics in important places (on the page).” – Gary Illyes

Tokenization For Search

You can’t dump raw HTML into a searchable index at scale. Google breaks it into “tokens,” individual words or phrases, and stores those in its index.

The first HTML segmentation system dates back to Google’s 2001 Tokyo engineering office, and the same tokenization methods power its AI products, since “why reinvent the wheel.”
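As a rough illustration of the idea (not Google's actual segmentation system), tokenizing documents and storing the tokens in an inverted index might look like this:

```python
def tokenize(text: str) -> list[str]:
    # Lowercase and split on non-alphanumeric characters to get word tokens.
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return cleaned.split()

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    # Inverted index: token -> ids of the documents that contain it.
    # This is what makes the content searchable without scanning raw HTML.
    index: dict[str, set[str]] = {}
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index.setdefault(token, set()).add(doc_id)
    return index
```

Looking up a token then returns the set of documents that mention it, which is the basic shape of index retrieval at any scale.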

When the main content is thin or low value, what Google labels as a “soft 404,” it’s flagged with a centerpiece annotation to show that this deficiency is at the heart of the page, not just in a peripheral section.

Handling Web Duplication

Image from author, July 2025

Cherry Prommawin explained deduplication in three focus areas:

  1. Clustering: Using redirects, content similarity, and rel=canonical to group duplicate pages.
  2. Content checks: Checksums that ignore boilerplate and catch many soft‑error pages. Note that soft errors can bring down an entire cluster.
  3. Localization: When pages differ only by locale (for example via geo‑redirects), hreflang bridges them without penalty.
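A toy sketch of the checksum-based content check described above. Real systems also strip boilerplate before hashing; this simplified version only normalizes whitespace and case:

```python
import hashlib

def content_checksum(main_content: str) -> str:
    # Normalize whitespace and case so cosmetic differences
    # don't defeat the duplicate check.
    normalized = " ".join(main_content.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def cluster_duplicates(pages: dict[str, str]) -> list[list[str]]:
    # Group URLs whose main content produces the same checksum.
    clusters: dict[str, list[str]] = {}
    for url, content in pages.items():
        clusters.setdefault(content_checksum(content), []).append(url)
    return list(clusters.values())
```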

She contrasted permanent versus temporary redirects: Both play a role in crawling and clustering, but only permanent redirects influence which URL is chosen as the cluster’s canonical.

Google prioritizes hijacking risk first, user experience second, and site-owner signals (such as your rel=canonical) third when selecting the representative URL.

Geotargeting

Geotargeting allows you to signal to Google which country or region your content is most relevant for, and it works differently from simple language targeting.

Prommawin emphasized that you don’t need to hide duplicate content across two country‑specific sites; hreflang will handle those alternates for you.

Image from author, July 2025

If you serve the duplicate content on multiple regional URLs without localization, you risk confusing both crawlers and users.

To geotarget effectively, ensure that each version has unique, localized content tailored to its specific audience.

The primary geotargeting signals Google uses are:

  1. Country‑code top‑level domain (ccTLD): Domains like .sg or .au indicate the target country.
  2. Hreflang annotations: Use tags, HTTP headers, or sitemap entries to declare language and regional alternates.
  3. Server location: The IP address or hosting location of your server can act as a geographic hint.
  4. Additional local signals, such as language and currency on the page, links from other regional websites, and signals from your local Business Profile, all reinforce your target region.
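For example, hreflang alternates for two country versions might be declared like this (example.com and the regions here are placeholders, not a recommendation for any specific setup):

```html
<!-- On every version of the page, list all regional alternates plus a fallback -->
<link rel="alternate" hreflang="en-sg" href="https://example.com/sg/" />
<link rel="alternate" hreflang="en-au" href="https://example.com/au/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```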

By combining these signals with genuinely localized content, you help Google serve the right version of your site to the right users, and avoid the pitfalls of unintended duplicate‑content clusters.

Structured Data & Media

Gary Illyes introduced the feature extraction phase, which runs after deduplication and is computationally expensive. It starts with HTML, then kicks off separate, asynchronous media indexing for images and videos.

If your HTML is in the index but your media isn’t, it simply means the media pipeline is still working.

Sessions in this track included:

  • Structured Data with William Prabowo.
  • Using Images with Ian Huang.
  • Engaging Users with Video with William Prabowo.

Q&A Takeaway On Schema

Schema markup can help Google understand the relationships between entities and enable LLM-driven features.

But excessive or redundant schema only adds page bloat and provides no additional ranking benefit; schema is not used as part of the ranking process.

Calculating Signals

During signal extraction, also part of indexing, Google computes a mix of:

  • Indirect signals (links, mentions by other pages).
  • Direct signals (on‑page words and placements).
Image from author, July 2025

Illyes confirmed that Google still uses PageRank internally. It is not the exact algorithm from the 1996 white paper, but it bears the same name.

Handling Spam

Google’s systems identify around 40 billion spam pages each day, powered by their LLM‑based “SpamBrain.”

Image from author, July 2025

Additionally, Illyes emphasized that E-E-A-T is not an indexing or ranking signal. It’s an explanatory principle, not a computed metric.

Deciding What Gets Indexed

Index selection boils down to quality, defined as a combination of trustworthiness and utility for end users. Pages are dropped from the index for clear negative signals:

  • noindex directives.
  • Expired or time‑limited content.
  • Soft 404s and slipped‑through duplicates.
  • Pure spam or policy violations.

If a page has been crawled but not indexed, the remedy is to improve the content quality.

Internal linking can help, but only insofar as it makes the page genuinely more useful. Google’s goal is to reward user‑focused improvements, not signal manipulation.

Google Doesn’t Care If Your Images Are AI-Generated

AI-generated images have become common in marketing, education, and design workflows. These visuals are produced by deep learning models trained on massive picture collections.

During the session, Huang outlined that Google doesn’t care whether your images are generated by AI or humans, as long as they accurately and effectively convey the information or tell the story you intend.

As long as images are understandable, their AI origins are irrelevant. The primary goal is effective communication with your audience.

Huang highlighted an AI image that the Google team used during the first day of the conference. On close inspection, it has some visual errors, but as a “prop” its job was to represent a timeline; it was not the main content of the slide, so these errors do not matter.

Image from author, July 2025

We can adopt a similar approach to our use of AI-generated imagery. If the image conveys the message and isn’t the main content of the page, minor issues won’t lead to penalization, nor will using AI-generated imagery in general.

Images should undergo a quick human review to identify obvious mistakes, which can prevent production errors.

Ongoing oversight remains essential to maintain trust in your visuals and protect your brand’s integrity.

Google Trends API Announced

Finally, Daniel Waisberg and Hadas Jacobi unveiled the new Google Trends API (Alpha). Key features of the new API will include:

  • Consistently scaled search interest data that does not recalibrate when you change queries.
  • A five‑year rolling window, with data available up to 48 hours before the present, for seasonal and historical comparisons.
  • Flexible time aggregation (weekly, monthly, yearly).
  • Region and sub‑region breakdowns.

This opens up a world of programmatic trend analysis with reliable, comparable metrics over time.

That wraps up day two. Tomorrow, we have coverage of the final day three at Google Search Central Live, with more breaking news and insights.


Featured Image: Dan Taylor/SALT.agency

Beyond Fan-Out: Turning Question Maps Into Real AI Retrieval via @sejournal, @DuaneForrester

If you spend time in SEO circles lately, you’ve probably heard query fan-out used in the same breath as semantic SEO, AI content, and vector-based retrieval.

It sounds new, but it’s really an evolution of an old idea: a structured way to expand a root topic into the many angles your audience (and an AI) might explore.

If this all sounds familiar, it should. Marketers have been digging for this depth since “search intent” became a thing years ago. The concept isn’t new; it just has fresh buzz, thanks to GenAI.

Like many SEO concepts, fan-out has picked up hype along the way. Some people pitch it as a magic arrow for modern search (it’s not).

Others call it just another keyword clustering trick dressed up for the GenAI era.

The truth, as usual, sits in the middle: Query fan-out is genuinely useful when used wisely, but it doesn’t magically solve the deeper layers of today’s AI-driven retrieval stack.

This guide sharpens that line. We’ll break down what query fan-out actually does, when it works best, where its value runs out, and which extra steps (and tools) fill in the critical gaps.

If you want a full workflow from idea to real-world retrieval, this is your map.

What Query Fan-Out Really Is

Most marketers already do some version of this.

You start with a core question like “How do you train for a marathon?” and break it into logical follow-ups: “How long should a training plan be?”, “What gear do I need?”, “How do I taper?” and so on.

In its simplest form, that’s fan-out. A structured expansion from root to branches.

Where today’s fan-out tools step in is the scale and speed; they automate the mapping of related sub-questions, synonyms, adjacent angles, and related intents. Some visualize this as a tree or cluster. Others layer on search volumes or semantic relationships.
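In code terms, a fan-out map is just a root query with branches. The marathon example above might be represented, purely illustratively, as:

```python
# Illustrative fan-out map using the marathon example from the text.
fan_out = {
    "root": "How do you train for a marathon?",
    "branches": [
        "How long should a training plan be?",
        "What gear do I need?",
        "How do I taper?",
    ],
}

def list_angles(tree: dict) -> list[str]:
    # Flatten the map into the full list of angles a page should cover.
    return [tree["root"], *tree["branches"]]
```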

Think of it as the next step after the keyword list and the topic cluster. It helps you make sure you’re covering the terrain your audience, and the AI summarizing your content, expects to find.

Why Fan-Out Matters For GenAI SEO

This piece matters now because AI search and agent answers don’t pull entire pages the way a blue link used to work.

Instead, they break your page into chunks: small, context-rich passages that answer precise questions.

This is where fan-out earns its keep. Each branch on your fan-out map can be a stand-alone chunk. The more relevant branches you cover, the deeper your semantic density, which can help with:

1. Strengthening Semantic Density

A page that touches only the surface of a topic often gets ignored by an LLM.

If you cover multiple related angles clearly and tightly, your chunk looks stronger semantically. More signals tell the AI that this passage is likely to answer the prompt.

2. Improving Chunk Retrieval Frequency

The more distinct, relevant sections you write, the more chances you create for an AI to pull your work. Fan-out naturally structures your content for retrieval.

3. Boosting Retrieval Confidence

If your content aligns with more ways people phrase their queries, it gives an AI more reason to trust your chunk when summarizing. This doesn’t guarantee retrieval, but it helps with alignment.

4. Adding Depth For Trust Signals

Covering a topic well shows authority. That can help your site earn trust, which nudges retrieval and citation in your favor.
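A minimal sketch of the chunking idea: packing paragraphs so each passage stays small and self-contained. The word limit here is an arbitrary illustration, not a threshold any AI system documents:

```python
def chunk_paragraphs(paragraphs: list[str], max_words: int = 80) -> list[str]:
    # Greedily pack consecutive paragraphs into chunks of at most
    # max_words words, so each chunk remains a small, context-rich passage.
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in paragraphs:
        n = len(para.split())
        if current and count + n > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(para)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```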

Fan-Out Tools: Where To Start Your Expansion

Query fan-out is practical work, not just theory.

You need tools that take a root question and break it into every related sub-question, synonym, and niche angle your audience (or an AI) might care about.

A solid fan-out tool doesn’t just spit out keywords; it shows connections and context, so you know where to build depth.

Below are reliable, easy-to-access tools you can plug straight into your topic research workflow:

  • AnswerThePublic: The classic question cloud. Visualizes what, how, and why people ask around your seed topic.
  • AlsoAsked: Builds clean question trees from live Google People Also Ask data.
  • Frase: Topic research module clusters root queries into sub-questions and outlines.
  • Keyword Insights: Groups keywords and questions by semantic similarity, great for mapping searcher intent.
  • Semrush Topic Research: Big-picture tool for surfacing related subtopics, headlines, and question ideas.
  • Answer Socrates: Fast People Also Ask scraper, cleanly organized by question type.
  • LowFruits: Pinpoints long-tail, low-competition variations to expand your coverage deeper.
  • WriterZen: Topic discovery clusters keywords and builds related question sets in an easy-to-map layout.

If you’re short on time, start with AlsoAsked for quick trees or Keyword Insights for deeper clusters. Both deliver instant ways to spot missing angles.

Now, having a clear fan-out tree is only step one. Next comes the real test: proving that your chunks actually show up where AI agents look.

Where Fan-Out Stops Working Alone

So, fan-out is helpful. But it’s only the first step. Some people stop here, assuming a complete query tree means they’ve future-proofed their work for GenAI. That’s where the trouble starts.

Fan-out does not verify if your content is actually getting retrieved, indexed, or cited. It doesn’t run real tests with live models. It doesn’t check if a vector database knows your chunks exist. It doesn’t solve crawl or schema problems either.

Put plainly: Fan-out expands the map. But, a big map is worthless if you don’t check the roads, the traffic, or whether your destination is even open.

The Practical Next Steps: Closing The Gaps

Once you’ve built a great fan-out tree and created solid chunks, you still need to make sure they work. This is where modern GenAI SEO moves beyond traditional topic planning.

The key is to verify, test, and monitor how your chunks behave in real conditions.

Image Credit: Duane Forrester

Below is a practical list of the extra work that brings fan-out to life, with real tools you can try for each piece.

1. Chunk Testing & Simulation

You want to know: “Does an LLM actually pull my chunk when someone asks a question?” Prompt testing and retrieval simulation give you that window.

Tools you can try:

  • LlamaIndex: Popular open-source framework for building and testing RAG pipelines. Helps you see how your chunked content flows through embeddings, vector storage, and prompt retrieval.
  • Otterly: Practical, non-dev tool for running live prompt tests on your actual pages. Shows which sections get surfaced and how well they match the query.
  • Perplexity Pages: Not a testing tool in the strict sense, but useful for seeing how a real AI assistant surfaces or summarizes your live pages in response to user prompts.

2. Vector Index Presence

Your chunk must live somewhere an AI can access. In practice, that means storing it in a vector database.

Running your own vector index is how you test that your content can be cleanly chunked, embedded, and retrieved using the same similarity search methods that larger GenAI systems rely on behind the scenes.

You can’t see inside another company’s vector store, but you can confirm your pages are structured to work the same way.
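To make the flow concrete, here is a toy, dependency-free version of embed-and-retrieve. Real setups use dense model embeddings and a vector DB such as the tools below; the bag-of-words "embedding" here only illustrates the similarity-search mechanics:

```python
import math

def embed(text: str) -> dict[str, float]:
    # Toy bag-of-words "embedding": token -> count.
    vec: dict[str, float] = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    # Similarity search: return the stored chunk closest to the query.
    qv = embed(query)
    return max(chunks, key=lambda c: cosine(qv, embed(c)))
```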

Tools to help:

  • Weaviate: Open-source vector DB for experimenting with chunk storage and similarity search.
  • Pinecone: Fully managed vector storage for larger-scale indexing tests.
  • Qdrant: Good option for teams building custom retrieval flows.

3. Retrieval Confidence Checks

How likely is your chunk to win out against others?

This is where prompt-based testing and retrieval scoring frameworks come in.

They help you see whether your content is actually retrieved when an LLM runs a real-world query, and how confidently it matches the intent.

Tools worth looking at:

  • Ragas: Open-source framework for scoring retrieval quality. Helps test if your chunks return accurate answers and how well they align with the query.
  • Haystack: Developer-friendly RAG framework for building and testing chunk pipelines. Includes tools for prompt simulation and retrieval analysis.
  • Otterly: Non-dev tool for live prompt testing on your actual pages. Shows which chunks get surfaced and how well they match the prompt.

4. Technical & Schema Health

No matter how strong your chunks are, they’re worthless if search engines and LLMs can’t crawl, parse, and understand them.

Clean structure, accessible markup, and valid schema keep your pages visible and make chunk retrieval more reliable down the line.

Tools to help:

  • Ryte: Detailed crawl reports, structural audits, and deep schema validation; excellent for finding markup or rendering gaps.
  • Screaming Frog: Classic SEO crawler for checking headings, word counts, duplicate sections, and link structure: all cues that affect how chunks are parsed.
  • Sitebulb: Comprehensive technical SEO crawler with robust structured data validation, clear crawl maps, and helpful visuals for spotting page-level structure problems.

5. Authority & Trust Signals

Even if your chunk is technically solid, an LLM still needs a reason to trust it enough to cite or summarize it.

That trust comes from clear authorship, brand reputation, and external signals that prove your content is credible and well-cited. These trust cues must be easy for both search engines and AI agents to verify.

Tools to back this up:

  • Authory: Tracks your authorship, keeps a verified portfolio, and monitors where your articles appear.
  • SparkToro: Helps you find where your audience spends time and who influences them, so you can grow relevant citations and mentions.
  • Perplexity Pro: Lets you check whether your brand or site appears in AI answers, so you can spot gaps or new opportunities.

Query fan-out expands the plan. Retrieval testing proves it works.

Putting It All Together: A Smarter Workflow

When someone asks, “Does query fan-out really matter?” the answer is yes, but only as a first step.

Use it to design a strong content plan and to spot angles you might miss. But always connect it to chunk creation, vector storage, live retrieval testing, and trust-building.

Here’s how that looks in order:

  1. Expand: Use fan-out tools like AlsoAsked or AnswerThePublic.
  2. Draft: Turn each branch into a clear, stand-alone chunk.
  3. Check: Run crawls and fix schema issues.
  4. Store: Push your chunks to a vector DB.
  5. Test: Use prompt tests and RAG pipelines.
  6. Monitor: See if you get cited or retrieved in real AI answers.
  7. Refine: Adjust coverage or depth as gaps appear.

The Bottom Line

Query fan-out is a valuable input, but it’s never been the whole solution. It helps you figure out what to cover, but it does not prove what gets retrieved, read, or cited.

As GenAI-powered discovery keeps growing, smart marketers will build that bridge from idea to index to verified retrieval. They’ll map the road, pave it, watch the traffic, and adjust the route in real time.

So, next time you hear fan-out pitched as a silver bullet, you don’t have to argue. Just remind people of the bigger picture: The real win is moving from possible coverage to provable presence.

If you do that work (with the right checks, tests, and tools), your fan-out map actually leads somewhere useful.


This post was originally published on Duane Forrester Decodes.


Featured Image: Deemerwha studio/Shutterstock

Google Trends API (Alpha) Launching: Breaking News via @sejournal, @TaylorDanRW

Google has just unveiled an alpha version of its Trends API at Google Search Central Live, Deep Dive APAC 2025. This new offering brings explore-page data directly into applications.

The API will provide consistently scaled search interest figures. These figures align more predictably than the current website numbers.

Announced by Daniel Waisberg and Hadas Jacobi, the alpha opens up from today, and they are looking for testers who will use it throughout 2025.

The API will not include Trending Now.

Image from author, July 2025

Key Features

Consistently Scaled Search Interest

The standout feature in this Alpha release is consistent scaling.

Unlike the web interface, where search interest values shift depending on your query mix, the API returns values that remain stable across requests.

These won’t be absolute search volumes, but in the sample response shown, an indicative search volume appears alongside the scaled number, allowing comparison with the figures in the Google Trends website interface.

Five-Year Rolling Window

The API surfaces data across a five-year rolling window.

Data is available up to 48 hours before the present, and the window preserves temporal patterns such as annual events or weekly cycles.

This longer context helps you contrast today’s search spikes with those of previous years. It’s ideal for spotting trends tied to seasonal events and recurring news cycles.

Flexible Aggregations And Geographic Breakdown

You choose how to aggregate data: weekly, monthly, or annually.

This flexibility allows you to zoom in for fine-grained analysis or step back for long-term trends.

Regional and sub-regional breakdowns are also exposed via the API. You can pinpoint interest in countries, states, or even cities without extra work.

Sample API Request & Response

Hadas shared an example request prompt using Python, as well as a sample response.

The request:

Image from author, July 2025

The response:

Image from author, July 2025
print(time_series)
{
  "points": [
    {
      "time_range": {
        "start_time": "2024-01-01",
        "end_time": "2024-01-07"
      },
      "search_interest": 4400.0,
      "scaled_search_interest": 62
    },
    {
      "time_range": {
        "start_time": "2024-01-08",
        "end_time": "2024-01-14"
      },
      "search_interest": 7100.0,
      "scaled_search_interest": 100
    },
    …
  ]
}
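The relationship between the two numbers in the sample is simple to reproduce: values appear to be scaled so the peak interest in the window equals 100. This is an inference from the sample shown, not documented API behavior:

```python
def scale_interest(points: list[float]) -> list[int]:
    # Scale raw interest so the peak value in the window becomes 100,
    # matching the sample response (7100.0 -> 100, 4400.0 -> 62).
    peak = max(points)
    return [round(point / peak * 100) for point in points]
```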

Sign up now to get early access to the Google Trends API alpha.



Featured Image: Dan Taylor/SALT.agency

Google Says You Don’t Need AEO Or GEO To Rank In AI Overviews via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI Search does not require specialized optimization, saying that “AI SEO” is not necessary and that standard SEO is all that is needed for both AI Overviews and AI Mode.

AI Search Is Everywhere

Standard search, in the way it used to be with link algorithms playing a strong role, no longer exists. AI is embedded within every step of the organic search results, from crawling to indexing and ranking. AI has been a part of Google Search for ten years, beginning with RankBrain and expanding from there.

Google’s Gary Illyes made it clear that AI is embedded within every step of today’s search ranking process.

Kenichi Suzuki (LinkedIn Profile) posted a detailed summary of what Illyes discussed, noting that AI Search features use the same infrastructure as traditional search and covering four main points:

  1. AI Search Optimization = SEO.
  2. Google’s focus is on content quality and is agnostic as to how it was created.
  3. AI is deeply embedded into every stage of search.
  4. Generative AI has unique features to ensure reliability.

There’s No Need For AEO Or GEO

The SEO community has tried to wrap their minds around AI search, with some insisting that ranking in AI search requires an approach to optimization so distinct from SEO that it warrants its own acronym. Other SEOs, including an SEO rockstar, have insisted that optimizing for AI search is fundamentally the same as standard search. I’m not saying that one group of SEOs is right and another is wrong. The SEO community collectively discussing a topic and reaching different conclusions is one of the few things that doesn’t change in search marketing.

According to Google, ranking in AI Overviews and AI Mode requires only standard SEO practices.

Suzuki shared why AI search doesn’t require different optimization strategies:

“Their core message is that new AI-powered features like AI Overviews and AI Mode are built upon the same fundamental processes as traditional search. They utilize the same crawler (Googlebot), the same core index, and are influenced by the same ranking systems.

They repeatedly emphasized this with the phrase “same as above” to signal that a separate, distinct strategy for “AI SEO” is unnecessary. The foundation of creating high-quality, helpful content remains the primary focus.”

Content Quality Is Not About How It’s Created

The second point that Google made was that their systems are tuned to identify content quality and that identifying whether the content was created by a human or AI is not part of that quality assessment.

Gary Illyes is quoted as saying:

“We are not trying to differentiate based on origin.”

According to Kenichi, the objective is to:

“…identify and reward high-quality, helpful, and reliable content, regardless of whether it was created by a human or with the assistance of AI.”

AI Is Embedded Within Every Stage Of Search

The third point that Google emphasized is that AI plays a role at every stage of search: crawling, indexing, and ranking.

Regarding the ranking part, Suzuki wrote:

“RankBrain helps interpret novel queries, while the Multitask Unified Model (MUM) understands information across various formats (text, images, video) and 75 different languages.”

Unique Processes Of Generative AI Features

The fourth point that Google emphasized is to acknowledge that AI Overviews does two different things at the ranking stage:

  1. Query Fan-Out
    Generates multiple related queries in order to provide deeper answers than the original query alone would.
  2. Grounding
    Checks the generated answers against online sources in the search index to make sure they are factually accurate.

Suzuki explains:

“It then uses a process called “grounding” to check the generated text against the information in its search index, a crucial step designed to verify facts and reduce the risk of AI ‘hallucinations.’”

Takeaways:

AI SEO vs. Traditional SEO

  • Google explicitly states that specialized “AI SEO” is not necessary.
  • Standard SEO practices remain sufficient to rank in AI-driven search experiences.

Integration of AI in Google Search

  • AI technology is deeply embedded across every stage of Google’s organic search: crawling, indexing, and ranking.
  • Technologies like RankBrain and the Multitask Unified Model (MUM) are foundational to Google’s current search ranking system.

Google’s Emphasis on Content Quality

  • Content quality assessment by Google is neutral regarding whether humans or AI produce the content.
  • The primary goal remains identifying high-quality, helpful, and reliable content.

Generative AI-Specific Techniques

  • Google’s AI Overviews employ specialized processes like “query fan-out” to answer queries thoroughly.
  • A technique called “grounding” is used to ensure factual accuracy by cross-checking generated content against indexed information.

Google clarified that there’s no need for AEO/GEO for Google AI Overviews and AI Mode. Standard search engine optimization is all that’s needed to rank across both standard and AI-based search. Content quality remains an important part of Google’s algorithms, and they made a point to emphasize that they don’t check whether content is created by a human or AI.

Featured Image by Shutterstock/Luis Molinero

The Download: what’s next for AI agents, and how Trump protects US tech companies overseas

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Navigating the rise of AI agents

“AI agents” is a buzzy term that essentially refers to AI models and algorithms that can not only provide you with information but also take actions on your behalf. Companies like OpenAI and Anthropic have launched ‘agentic’ products that can do things for you like making bookings, filling in forms, and collaborating with you on coding projects.

During a LinkedIn Live event yesterday, our editor-in-chief Mat Honan, senior editor for AI Will Douglas Heaven, and senior AI reporter Grace Huckins discussed what’s exciting about agents and where the technology will go next, as well as its limitations and the risks that currently come with adopting it. Check out what they had to say!

And if you’re interested in learning more about AI agents, read our stories:

+ Are we ready to hand AI agents the keys? We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next. Read the full story.

+ Anthropic’s chief scientist on 4 ways agents will get even better. Read the full story.

+ Cyberattacks by AI agents are coming. Agents could make it easier and cheaper for criminals to hack systems at scale. We need to be ready.

+ When AIs bargain, a less advanced agent could cost you. In AI-to-AI price negotiations, weaker models often lose out—costing users real money and raising concerns about growing digital inequality. Read the full story.

+ There’s been huge hype about a new general AI agent from China called Manus. We put it to the test.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration is seeking to protect US tech firms abroad 
It’s using its global trade wars as a way to prevent other countries from imposing new taxes, regulations and tariffs on American tech companies. (WSJ $)
+ Tech firms are increasingly trying to shape US AI policy. (FT $)

2 UK border officials plan to use AI to assess child asylum seekers 
A pilot scheme will estimate the age of new arrivals to the country. (The Guardian)
+ US border patrol is arresting immigrants nowhere near the US-Mexico border. (WP $)
+ The US wants to use facial recognition to identify migrant children as they age. (MIT Technology Review)

3 AI is hitting web traffic hard
Google’s AI Overviews are causing a massive drop in clicks to actual websites. (Ars Technica)
+ It’s good news for Google, bad news for everyone else. (The Register)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

4 Dozens of Iranians’ iPhones have been targeted with government spyware 
But the actual total number of targets is likely to be far higher. (Bloomberg $)

5 Amazon is shutting down its AI lab in Shanghai
It’s the latest in a line of US tech giants to scale back their research in the country. (FT $)

6 Californian billionaires have set their sights on building an industrial park
After their plans to create a brand new city didn’t get off the ground. (Gizmodo)

7 Tesla’s robotaxi launch didn’t quite go to plan
Prospective customers appear to be a bit freaked out. (Wired $)
+ Ride-hailing companies aren’t meeting their EV adoption targets. (Rest of World)

8 Why AI slop could finally help us to log off
If AI garbage renders a lot of the web unusable, it could be our only option. (The Atlantic $)
+ How to fix the internet. (MIT Technology Review)

9 You may regrow your own teeth in the future 🦷
The age of dentures and implants could be nearly over. (New Scientist $)
+ Humanlike “teeth” have been grown in mini pigs. (MIT Technology Review)

10 Inside one man’s hunt for an elusive Chinese typewriter
It made it possible to type tens of thousands of characters using just 72 keys. (NYT $)
+ How the quest to type Chinese on a QWERTY keyboard created autocomplete. (MIT Technology Review)

Quote of the day

“The truth is, China’s really doing ‘007’ now—midnight to midnight, seven days a week.”

—Venture capitalist Harry Stebbings explains how Chinese startups have moved from ‘996’ work schedules (9am to 9pm, six days a week) to a routine that’s even more punishing, Wired reports.

One more thing

Inside a new quest to save the “doomsday glacier”

The Thwaites glacier is a fortress larger than Florida, a wall of ice that reaches nearly 4,000 feet above the bedrock of West Antarctica, guarding the low-lying ice sheet behind it.

But a strong, warm ocean current is weakening its foundations and accelerating its slide into the sea. Scientists fear the waters could topple the walls in the coming decades, kick-starting a runaway process that would crack up the West Antarctic Ice Sheet, marking the start of a global climate disaster. As a result, they are eager to understand just how likely such a collapse is, when it could happen, and if we have the power to stop it. 

Scientists at MIT and Dartmouth College founded the Arête Glacier Initiative last year in the hope of providing clearer answers to these questions. The nonprofit research organization will officially unveil itself, launch its website, and post requests for research proposals today, timed to coincide with the UN’s inaugural World Day for Glaciers, MIT Technology Review can report exclusively. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ A fun-looking major retrospective of David Bailey’s starry career is opening in Spain.
+ Creepy new horror flick Weapons is getting rave reviews.
+ This amazing website takes you through Apollo 11’s first landing on the moon in real time.
+ Rest in power Ozzy Osbourne, the first ever heavy metal frontman, and the undisputed Prince of Darkness.

Google DeepMind’s new AI can help historians understand ancient Latin inscriptions

Google DeepMind has unveiled new artificial-intelligence software that could help historians recover the meaning and context behind ancient Latin engravings. 

Aeneas can analyze words written in long-weathered stone to say when and where they were originally inscribed. It follows Google’s previous archaeological tool Ithaca, which also used deep learning to reconstruct and contextualize ancient text, in its case Greek. But while Ithaca and Aeneas use some similar systems, Aeneas also promises to give researchers jumping-off points for further analysis.

To do this, Aeneas takes in partial transcriptions of an inscription alongside a scanned image of it. Using these, it gives possible dates and places of origin for the engraving, along with potential fill-ins for any missing text. For example, a slab damaged at the start and continuing with … us populusque Romanus would likely prompt Aeneas to guess that Senat comes before us to create the phrase Senatus populusque Romanus, “The Senate and the people of Rome.” 

This is similar to how Ithaca works. But Aeneas also cross-references the text with a stored database of almost 150,000 inscriptions, which originated everywhere from modern-day Britain to modern-day Iraq, to give possible parallels—other catalogued Latin engravings that feature similar words, phrases, and analogies. 

This database, alongside a few thousand images of inscriptions, makes up the training set for Aeneas’s deep neural network. While it may seem like a good number of samples, it pales in comparison to the billions of documents used to train general-purpose large language models like Google’s Gemini. There simply aren’t enough high-quality scans of inscriptions to train a language model to learn this kind of task. That’s why specialized solutions like Aeneas are needed. 

The Aeneas team believes it could help researchers “connect the past,” said Yannis Assael, a researcher at Google DeepMind who worked on the project. Rather than seeking to automate epigraphy—the research field dealing with deciphering and understanding inscriptions—he and his colleagues are interested in “crafting a tool that will integrate with the workflow of a historian,” Assael said in a press briefing. 

Their goal is to give researchers trying to analyze a specific inscription many hypotheses to work from, saving them the effort of sifting through records by hand. To validate the system, the team presented 23 historians with inscriptions that had been previously dated and tested their workflows both with and without Aeneas. The findings, which were published today in Nature, showed that Aeneas helped spur research ideas among the historians for 90% of inscriptions and that it led to more accurate determinations of where and when the inscriptions originated.

In addition to this study, the researchers tested Aeneas on the Monumentum Ancyranum, a famous inscription carved into the walls of a temple in Ankara, Turkey. Here, Aeneas managed to give estimates and parallels that reflected existing historical analysis of the work, and in its attention to detail, the paper claims, it closely matched how a trained historian would approach the problem. “That was jaw-dropping,” Thea Sommerschield, an epigrapher at the University of Nottingham who also worked on Aeneas, said in the press briefing. 

However, much remains to be seen about Aeneas’s capabilities in the real world. It doesn’t guess the meaning of texts, so it can’t interpret newly found engravings on its own, and it’s not clear yet how useful it will be to historians’ workflows in the long term, according to Kathleen Coleman, a professor of classics at Harvard. The Monumentum Ancyranum is considered to be one of the best-known and most well-studied inscriptions in epigraphy, raising the question of how Aeneas will fare on more obscure samples. 

Google DeepMind has now made Aeneas open-source, and the interface for the system is freely available for teachers, students, museum workers, and academics. The group is working with schools in Belgium to integrate Aeneas into their secondary history education. 

“To have Aeneas at your side while you’re in the museum or at the archaeological site where a new inscription has just been found—that is our sort of dream scenario,” Sommerschield said.

New Ecommerce Tools: July 23, 2025

Every week we publish a handpicked list of new products and services from vendors to ecommerce merchants. This installment includes updates on affiliate marketing, AI agents, marketplace platforms, ad networks, WordPress features, and recurring payments.

Got an ecommerce product release? Email releases@practicalecommerce.com.

New Tools for Merchants

Amazon debuts AWS Marketplace category to simplify AI agent adoption. Amazon Web Services has unveiled “AI Agents and Tools” in AWS Marketplace, which allows customers to discover, buy, deploy, and manage AI tools from leading providers. According to Amazon, customers using AI Agents and Tools gain access to ready-to-integrate solutions for developing AI strategies with services that specialize in building, maintaining, and scaling agents.

Home page of AWS Marketplace

AWS Marketplace

Rokt acquires Canal to bolster ecommerce offering. Rokt, an ecommerce post-purchase platform, is acquiring the marketplace platform Canal. According to Rokt, the acquisition enables Rokt’s ecommerce partners to broaden their product range by tapping into a curated network of third-party inventory from direct-to-consumer brands, without taking on production, logistics, or inventory management. Rokt says its acquisition enables brands to acquire customers through a premium, performance-driven distribution channel — an extension of Rokt Ads — by presenting their products in shoppable, contextually relevant formats during the ecommerce transaction.

Mirakl and Criteo partner on retail media for third-party sellers. Criteo, an AI-powered advertising platform for brands, retailers, and media owners, has integrated with Mirakl Ads, a retail media solution. The partnership aims to serve third-party sellers and mid-to-long-tail advertisers who want to invest in retail media but fall outside the usual sales and media management channels. The integration provides advertisers with self-service tools and automated campaign management, enabling them to scale their retail media efforts across multiple marketplace platforms.

Cimulate launches CommerceGPT, an AI-native context engine. Cimulate has introduced CommerceGPT, an AI-native platform built for agentic commerce. CommerceGPT is a product discovery infrastructure, built for the shift from human to agentic shopping experiences. The launch also introduces the Cimulate MCP Server, which supports an emergent interface for agent-to-agent commerce. Merchants can build agents that “talk” to answer engine agents and can get products surfaced in ChatGPT, Perplexity, and Claude.

Home page of Cimulate

Cimulate

2Performant launches BusinessLeague app on Shopify for affiliate marketing. 2Performant, an affiliate marketing platform, has launched its BusinessLeague app on the Shopify App Store to enable affiliate program integration for Shopify merchants across Europe. According to 2Performant, store owners can launch an affiliate program through BusinessLeague with just a few clicks. The platform connects merchants with performance-based marketers through a gamified affiliate program. Merchants gain access to key performance indicators, including clicks, sales, and conversion rates, with user leaderboards helping them identify effective growth partners.

Payabl expands platform with SEPA direct debit for businesses. Payabl, a U.K.-based financial technology company, has launched SEPA direct debit capabilities within its payment services for businesses. According to Payabl, the launch will facilitate the automatic collection of recurring euro payments for companies operating across the 36 countries of the Single Euro Payment Area. The functionality allows merchants to accept recurring payments from customers through Payabl’s gateway. The SEPA direct debit service also offers the ability to automate outgoing payments, including real-time notifications and approval workflows, per Payabl.

Pietra launches AI Assistants for ecommerce businesses. Pietra, an ecommerce operations platform, has launched AI Assistants, an operating system for entrepreneurs. According to Pietra, AI Assistants (i) automate sourcing, fulfillment, marketing, analytics, and more, (ii) generate brand names, logos, packaging, product designs, ads, and social content, and (iii) provide tailored go-to-market strategies, ad campaigns, influencer outreach, marketing calendars, and an AI coach to guide growth across Amazon, Shopify, and TikTok Shop.

Home page of Pietra

Pietra

Amazon adds Fee Explainer tool. Amazon has launched a Fee Explainer tool to help merchants better understand charges to their selling accounts. For each fee type, the tool provides a definition, relevant attributes or variables, and the calculation to explain the amount. The tool covers the following fee types: subscription, referral, variable closing, fixed closing, refund administration, customer return, high-return rate processing, removal, and disposal. Amazon plans to add explainers for other fee types this year.

Edge Conversion releases custom AI systems for small businesses. Edge Conversion, an AI automation agency, has released a suite of AI systems to simplify operations for service-based and online businesses. The systems automate critical functions, such as lead generation, customer communication, and proposal creation. When a lead responds, AI-driven response systems instantly follow up to keep the conversation going. When a lead is ready to buy, dynamic proposal generators deliver client-ready quotes. Edge Conversion also offers systems for intake, onboarding, hiring, and customer-management optimization.

WP Engine launches AI toolkit for WordPress. WP Engine, a provider of tools for WordPress sites, has launched an AI toolkit. According to WP Engine, the toolkit makes advanced AI capabilities accessible to the WordPress community at scale, featuring smart search, recommendations, and a managed vector database with an open-source chatbot framework. Merchants can activate the toolkit’s features without coding, technical knowledge, or third-party tools.

Vivid and Adyen enable instant payments for SMBs in Europe. Adyen, a payment processing platform, and Vivid, a digital banking service, are launching a tool for small to medium-sized businesses in the E.U. to accept card payments (online and point of sale) and instantly access the funds. Vivid and Adyen aim to streamline the payment process for SMBs, catering to the demand for swift, flexible payment solutions.

Home page of Adyen

Adyen

Google: AI Overviews Drive 10% More Queries, Per Q2 Earnings via @sejournal, @MattGSouthern

New data from Google’s Q2 2025 earnings call suggests that AI features in Search are driving higher engagement.

Google reported that AI Overviews drive more than 10% additional queries for the types of searches where they appear.

With AI Overviews now reaching 2 billion monthly users, this is a notable shift from the early speculation that AI would reduce the need to search.

AI Features Linked to Higher Query Volume

Google reported $54.2 billion in Search revenue for Q2, marking a 12% increase year-over-year.

CEO Sundar Pichai noted that both overall and commercial query volumes are up compared to the same period last year.

Pichai said during the earnings call:

“We are also seeing that our AI features cause users to search more as they learn that Search can meet more of their needs. That’s especially true for younger users.”

He added:

“We see AI powering an expansion in how people are searching for and accessing information, unlocking completely new kinds of questions you can ask Google.”

This is the first quarter where Google has quantified how AI Overviews impact behavior, rather than just reporting usage growth.

More Visual, Conversational Search Activity

Google highlighted continued growth in visual and multi-modal search, especially among younger demographics. The company pointed to increased use of Lens and Circle to Search, often in combination with AI Overviews.

AI Mode, Google’s conversational interface, now has over 100 million monthly active users across the U.S. and India. The company plans to expand its capabilities with features like Deep Search and personalized results.

Language Model Activity Is Accelerating

In a stat that received little attention, Google disclosed it now processes more than 980 trillion tokens per month across its products. That figure has doubled since May.

Pichai stated:

“At I/O in May, we announced that we processed 480 trillion monthly tokens across our surfaces. Since then we have doubled that number.”

The rise in token volume shows how quickly AI is being used across Google products like Search, Workspace, and Cloud.

Enterprise AI Spending Continues to Climb

Google Cloud posted $13.6 billion in revenue for the quarter, up 32% year-over-year.

Adoption of AI tools is a major driver:

  • Over 85,000 enterprises are now building with Gemini
  • Deal volume is increasing, with as many billion-dollar contracts signed in the first half of 2025 as in all of last year
  • Gemini usage has grown 35 times compared to a year ago

To support growth across AI and Cloud, Alphabet raised its projected capital expenditures for 2025 to $85 billion.

What You Should Know as a Search Marketer

Google’s data challenges the idea that AI-generated answers are replacing search. Instead, features like AI Overviews appear to prompt follow-up queries and enable new types of searches.

Here are a few areas to watch:

  • Complex queries may become more common as users gain confidence in AI
  • Multi-modal search is growing, especially on mobile
  • Visibility in AI Overviews is increasingly important for content strategies
  • Traditional keyword targeting may need to adapt to conversational phrasing

Looking Ahead

With Google now attributing a 10% increase in queries to AI Overviews, the way people interact with search is shifting.

For marketers, that shift isn’t theoretical; it’s already in progress. Search behavior is leaning toward more complex, visual, and conversational inputs. If your strategy still assumes a static SERP, it may already be out of date.

Keep an eye on how these AI experiences roll out beyond the U.S., and watch how query patterns change in the months ahead.


Featured Image: bluestork/shutterstock

Google Makes It Easier To Talk To Your Analytics Data With AI via @sejournal, @MattGSouthern

Google has released an open-source Model Context Protocol (MCP) server that lets you analyze Google Analytics data using large language models like Gemini.

Announced by Matt Landers, Head of Developer Relations for Google Analytics, the tool serves as a bridge between LLMs and analytics data.

Instead of navigating traditional report interfaces, you can ask questions in plain English and receive responses instantly.

A Shift From Traditional Reports

The MCP server offers an alternative to digging through menus or configuring reports manually. You can type queries like “How many users did I have yesterday?” and get the answer you need.

Screenshot from: YouTube.com/GoogleAnalytics, July 2025.

In a demo, Landers used the Gemini CLI to retrieve analytics data. The CLI, or Command Line Interface, is a simple text-based tool you run in a terminal window.

Instead of clicking through menus or dashboards, you type out questions or commands, and the system responds in plain language. It’s like chatting with Gemini, but from your desktop or laptop terminal.

When asked about user counts from the previous day, the system returned the correct total. It also handled follow-up questions, showing how it can refine queries based on context without requiring additional technical setup.

You can watch the full demo in the video below:

What You Can Do With It

The server uses the Google Analytics Admin API and Data API to support a range of capabilities.

According to the project documentation, you can:

  • Retrieve account and property information
  • Run core and real-time reports
  • Access standard and custom dimensions and metrics
  • Get links to connected Google Ads accounts
  • Receive hints for setting date ranges and filters

To set it up, you’ll need Python, access to a Google Cloud project with specific APIs enabled, and Application Default Credentials that include read-only access to your Google Analytics account.

Real-World Use Cases

The server is especially helpful in more advanced scenarios.

In the demo, Landers asked for a report on top-selling products over the past month. The system returned results sorted by item revenue, then re-sorted them by units sold after a follow-up prompt.

Screenshot from: YouTube.com/GoogleAnalytics, July 2025.

Later, he entered a hypothetical scenario: a $5,000 monthly marketing budget and a goal to increase revenue.

The system generated multiple reports, which revealed that direct and organic search had driven over $419,000 in revenue. It then suggested a plan with specific budget allocations across Google Ads, paid social, and email marketing, each backed by performance data.

Screenshot from: YouTube.com/GoogleAnalytics, July 2025.

How To Set It Up

You can install the server from GitHub using a tool called pipx, which lets you run Python-based applications in isolated environments. Once installed, you’ll connect it to Gemini CLI by adding the server to your Gemini settings file.

Setup steps include:

  • Enabling the necessary Google APIs in your Cloud project
  • Configuring Application Default Credentials with read-only access to your Google Analytics account
  • (Optional) Setting environment variables to manage credentials more consistently across different environments

The server works with any MCP-compatible client, but Google highlights full support for Gemini CLI.

To help you get started, the documentation includes sample prompts for tasks like checking property stats, exploring user behavior, or analyzing performance trends.
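Concretely, a first-time setup might look like the sketch below. The project ID, scopes, and package name are illustrative placeholders rather than confirmed values; check the GitHub repository's README for the exact commands for your environment.

```shell
# Enable the Analytics Admin and Data APIs in your Cloud project
# (project ID is a placeholder).
gcloud services enable analyticsadmin.googleapis.com \
    analyticsdata.googleapis.com --project my-project-id

# Create Application Default Credentials with read-only Analytics access.
gcloud auth application-default login \
    --scopes https://www.googleapis.com/auth/analytics.readonly,https://www.googleapis.com/auth/cloud-platform

# Install the server in an isolated environment via pipx
# (package name is hypothetical; use the one from the repo's README).
pipx install google-analytics-mcp

# Finally, register the server in your Gemini CLI settings file
# (e.g. ~/.gemini/settings.json) under its "mcpServers" key.
```

Once the server is registered, Gemini CLI can route natural-language questions like "How many users did I have yesterday?" through it to your Analytics property.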

Looking Ahead

Google says it’s continuing to develop the project and is encouraging feedback through GitHub and Discord.

While it’s still experimental, the MCP server gives you a hands-on way to explore what natural language analytics might look like in the future.

If you’re on a marketing team, this could help you get answers faster, without requiring dashboards or custom reports. And if you’re a developer, you might find ways to build tools that automate parts of your workflow or make analytics more accessible to others.

The full setup guide, source code, and updates are available on the Google Analytics MCP GitHub repository.


Featured Image: Mijansk786/Shutterstock