Validity of Pew Research On Google AI Search Results Challenged via @sejournal, @martinibuster

Questions about the methodology used by the Pew Research Center suggest that its conclusions about Google’s AI summaries may be flawed. Facts about how AI summaries are created, the sample size, and statistical reliability challenge the validity of the results.

Google’s Official Statement

A spokesperson for Google reached out with an official statement and a discussion about why the Pew research findings do not reflect actual user interaction patterns related to AI summaries and standard search.

The main points of Google’s rebuttal are:

  • Users are increasingly seeking out AI features.
  • They’re asking more questions.
  • AI usage trends are increasing visibility for content creators.
  • The Pew research used flawed methodology.

Google shared:

“People are gravitating to AI-powered experiences, and AI features in Search enable people to ask even more questions, creating new opportunities for people to connect with websites.

This study uses a flawed methodology and skewed queryset that is not representative of Search traffic. We consistently direct billions of clicks to websites daily and have not observed significant drops in aggregate web traffic as is being suggested.”

Sample Size Is Too Low

I discussed the Pew research with Duane Forrester (formerly of Bing; LinkedIn profile), and he suggested that the sample size of the research was too low to be meaningful (900+ adults and 66,000 search queries). Duane shared the following opinion:

“Out of almost 500 billion queries per month on Google and they’re extracting insights based on 0.0000134% sample size (66,000+ queries), that’s a very small sample.

Not suggesting that 66,000 of something is inconsequential, but taken in the context of the volume of queries happening on any given month, day, hour or minute, it’s very technically not a rounding error and were it my study, I’d have to call out how exceedingly low the sample size is and that it may not realistically represent the real world.”

How Reliable Are Pew Center Statistics?

The methodology page for the statistics lists how reliable they are for the following age groups:

  • Ages 18–29 were reported at plus/minus 13.7 percentage points, which ranks as a low level of reliability.
  • Ages 30–49 were reported at plus/minus 7.9 percentage points, which is moderate: somewhat reliable, but still a fairly wide range.
  • Ages 50–64 were reported at plus/minus 8.9 percentage points, a moderate to low level of reliability.
  • Ages 65+ were reported at plus/minus 10.2 percentage points, which is firmly in the low range of reliability.

The margins of error above are from Pew Research’s methodology page. Overall, all of these results have a high margin of error, making them statistically unreliable. At best, they should be seen as rough estimates, although, as Duane says, the sample size is so low that it’s hard to justify it as reflecting real-world results.

Pew Research Results Compare Results In Different Months

After thinking about it overnight and reviewing the methodology, one aspect of the Pew research that stood out is that the researchers compared actual user search queries from the month of March with the same queries they ran themselves during one week in April.

That’s problematic because Google’s AI summaries change from month to month. For example, the kinds of queries that trigger an AI Overview change, with AIOs becoming more prominent for certain niches and less so for other topics. Additionally, user trends may affect what gets searched on, which could itself trigger a temporary freshness update in the search algorithms that prioritizes videos and news.

The takeaway is that comparing search results from different months is problematic for both standard search and AI summaries.

Pew Research Ignores That AI Search Results Are Dynamic

With respect to AI Overviews and summaries, these are even more dynamic, subject to change not just from user to user but for the same user.

Searching for a query in AI Overviews, then repeating the query in an entirely different browser, will result in a different AI summary and a completely different set of links.

The point is that the Pew Research Center’s methodology, which compares user queries with queries scraped a month later, is flawed because the two sets of queries and results cannot be compared; each is inherently different because of time, updates, and the dynamic nature of AI summaries.

The following screenshots show the links displayed for the query, “What is the RLHF training in OpenAI?”

Google AIO Via Vivaldi Browser

Screenshot shows links to Amazon Web Services, Medium, and Kili Technology

Google AIO Via Chrome Canary Browser

Screenshot shows links to OpenAI, Arize AI, and Hugging Face

Not only are the links on the right-hand side different, but the AI summary content and the links embedded within that content are also different.

Could This Be Why Publishers See Inconsistent Traffic?

Publishers and SEOs are used to static ranking positions in search results for a given search query. But Google’s AI Overviews and AI Mode show dynamic search results. The content in the search results and the links that are shown are dynamic, showing a wide range of sites in the top three positions for the exact same queries. SEOs and publishers have asked Google to show a broader range of websites and that, apparently, is what Google’s AI features are doing. Is this a case of be careful of what you wish for?

Featured Image by Shutterstock/Stokkete

Google Search Central APAC 2025: Everything From Day 2 via @sejournal, @TaylorDanRW

The second day of the Google Search Central Live APAC 2025 kicked off with a brief tie‑in to the previous day’s deep dive into crawling, before moving squarely into indexing.

Cherry Prommawin opened by walking us through how Google parses HTML, highlighting the key stages in indexing:

  1. HTML parsing.
  2. Rendering and JavaScript execution.
  3. Deduplication.
  4. Feature extraction.
  5. Signal extraction.

This set the theme for the rest of the day.

Cherry noted that Google first normalizes the raw HTML into a DOM, then looks for header and navigation elements, and determines which section holds the main content. During this process, it also extracts elements such as rel=canonical, hreflang, links and anchors, and meta-robots tags.

“There is no preference between responsive websites versus dynamic/adaptive websites. Google doesn’t try to detect this and doesn’t have a preferential weighting.” – Cherry Prommawin

Links remain central to the web’s structure, both for discovery and for ranking:

“Links are still an important part of the internet and used to discover new pages, and to determine site structure, and we use them for ranking.” – Cherry Prommawin

Controlling Indexing With Robots Rules

Gary Illyes clarified where robots.txt and robots‑meta tags fit into the flow:

  • Robots.txt controls what crawlers can fetch.
  • Meta robot tags control how that fetched data is used downstream.

He highlighted several lesser‑known directives:

  • none: Equivalent to noindex,nofollow combined into a single rule. Is there a benefit to this? While functionally identical, using one directive instead of two may simplify tag management.
  • notranslate: If set, Chrome will no longer offer to translate the page.
  • noimageindex: Also applies to video assets.
  • unavailable_after: Despite being introduced by engineers who have since moved on, it still works. This could be useful for deprecating time‑sensitive blog posts, such as limited‑time deals and promotions, so they don’t persist in Google’s AI features and risk misleading users or harming brand perception.
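
A minimal sketch of how these directives might appear in a page’s <head> (hypothetical values, shown for illustration only; consult Google’s robots meta tag documentation for the exact rules and date formats):

<!-- Hypothetical examples of the lesser-known robots meta directives discussed above -->
<meta name="robots" content="none">
<meta name="robots" content="notranslate">
<meta name="robots" content="noimageindex">
<meta name="robots" content="unavailable_after: 2025-08-31">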

Understanding What’s On A Page

Gary Illyes emphasized that the main content, as defined by Google’s Quality Rater Guidelines, is the most critical element in crawling and indexing. It might be text, images, videos, or rich features like calculators.

He showed how shifting a topic into the main content area can boost rankings.

In one example, moving references to “Hugo 7” from a sidebar into the central (main) content led to a measurable increase in visibility.

“If you want to rank for certain things, put those words and topics in important places (on the page).” – Gary Illyes

Tokenization For Search

You can’t dump raw HTML into a searchable index at scale. Google breaks it into “tokens,” individual words or phrases, and stores those in its index.
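
As a toy illustration of the idea (not Google’s actual tokenizer), stripping markup and splitting the remaining text into word tokens might look like this in Python:

import re

def tokenize(html: str) -> list[str]:
    # Toy sketch only: drop tags, lowercase, and split into simple word tokens.
    text = re.sub(r"<[^>]+>", " ", html)
    return re.findall(r"[a-z0-9]+", text.lower())

print(tokenize("<p>Google breaks pages into <b>tokens</b> for its index.</p>"))
# ['google', 'breaks', 'pages', 'into', 'tokens', 'for', 'its', 'index']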

The first HTML segmentation system dates back to Google’s 2001 Tokyo engineering office, and the same tokenization methods power its AI products, since “why reinvent the wheel.”

When the main content is thin or low value, what Google labels as a “soft 404,” it’s flagged with a centerpiece annotation to show that this deficiency is at the heart of the page, not just in a peripheral section.

Handling Web Duplication

Handling web duplication. Image from author, July 2025

Cherry Prommawin explained deduplication in three focus areas:

  1. Clustering: Using redirects, content similarity, and rel=canonical to group duplicate pages.
  2. Content checks: Checksums that ignore boilerplate and catch many soft‑error pages. Note that soft errors can bring down an entire cluster.
  3. Localization: When pages differ only by locale (for example via geo‑redirects), hreflang bridges them without penalty.
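
As a simplified sketch of the content-check idea (not Google’s actual system), a checksum computed after boilerplate is stripped lets two pages be compared by their main content alone:

import hashlib
import re

def content_checksum(html: str) -> str:
    # Toy example: crudely drop header/footer/nav blocks (boilerplate), strip the
    # remaining tags, collapse whitespace, then hash the main-content text.
    html = re.sub(r"<(header|footer|nav)[^>]*>.*?</\1>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

page_a = "<header>Menu</header><p>Same main content.</p>"
page_b = "<p>Same   main content.</p>"
# Identical main text yields identical checksums, so the two pages cluster as duplicates.
print(content_checksum(page_a) == content_checksum(page_b))  # True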

She contrasted permanent versus temporary redirects: Both play a role in crawling and clustering, but only permanent redirects influence which URL is chosen as the cluster’s canonical.

Google prioritizes hijacking risk first, user experience second, and site-owner signals (such as your rel=canonical) third when selecting the representative URL.

Geotargeting

Geotargeting allows you to signal to Google which country or region your content is most relevant for, and it works differently from simple language targeting.

Prommawin emphasized that you don’t need to hide duplicate content across two country‑specific sites; hreflang will handle those alternates for you.

Geotargeting. Image from author, July 2025

If you serve duplicate content on multiple regional URLs without localization, you risk confusing both crawlers and users.

To geotarget effectively, ensure that each version has unique, localized content tailored to its specific audience.

The primary geotargeting signals Google uses are:

  1. Country‑code top‑level domain (ccTLD): Domains like .sg or .au indicate the target country.
  2. Hreflang annotations: Use tags, HTTP headers, or sitemap entries to declare language and regional alternates.
  3. Server location: The IP address or hosting location of your server can act as a geographic hint.
  4. Additional local signals: Language and currency on the page, links from other regional websites, and signals from your local Business Profile all reinforce your target region.

By combining these signals with genuinely localized content, you help Google serve the right version of your site to the right users, and avoid the pitfalls of unintended duplicate‑content clusters.
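
For example, two country-specific versions of a page can declare each other as alternates with hreflang link elements like these (hypothetical URLs, shown for illustration):

<link rel="alternate" hreflang="en-sg" href="https://example.com/sg/pricing/" />
<link rel="alternate" hreflang="en-au" href="https://example.com/au/pricing/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/pricing/" />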

Structured Data & Media

Gary Illyes introduced the feature extraction phase, which runs after deduplication and is computationally expensive. It starts with HTML, then kicks off separate, asynchronous media indexing for images and videos.

If your HTML is in the index but your media isn’t, it simply means the media pipeline is still working.

Sessions in this track included:

  • Structured Data with William Prabowo.
  • Using Images with Ian Huang.
  • Engaging Users with Video with William Prabowo.

Q&A Takeaway On Schema

Schema markup can help Google understand the relationships between entities and enable LLM-driven features.

But excessive or redundant schema only adds page bloat and provides no additional ranking benefit; schema is not used as part of the ranking process.
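
As a rough sketch (hypothetical values), a lean JSON-LD block that names the key entities is usually enough; piling on every optional property mostly adds page weight:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "publisher": { "@type": "Organization", "name": "Example Publisher" }
}
</script>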

Calculating Signals

During signal extraction, also part of indexing, Google computes a mix of:

  • Indirect signals (links, mentions by other pages).
  • Direct signals (on‑page words and placements).

Calculating signals. Image from author, July 2025

Illyes confirmed that Google still uses PageRank internally. It is not the exact algorithm from the 1996 White Paper, but it bears the same name.

Handling Spam

Google’s systems identify around 40 billion spam pages each day, powered by their LLM‑based “SpamBrain.”

Handling spam. Image from author, July 2025

Additionally, Illyes emphasized that E-E-A-T is not an indexing or ranking signal. It’s an explanatory principle, not a computed metric.

Deciding What Gets Indexed

Index selection boils down to quality, defined as a combination of trustworthiness and utility for end users. Pages are dropped from the index for clear negative signals:

  • noindex directives.
  • Expired or time‑limited content.
  • Soft 404s and slipped‑through duplicates.
  • Pure spam or policy violations.

If a page has been crawled but not indexed, the remedy is to improve the content quality.

Internal linking can help, but only insofar as it makes the page genuinely more useful. Google’s goal is to reward user‑focused improvements, not signal manipulation.

Google Doesn’t Care If Your Images Are AI-Generated

AI-generated images have become common in marketing, education, and design workflows. These visuals are produced by deep learning models trained on massive picture collections.

During the session, Huang outlined that Google doesn’t care whether your images are generated by AI or humans, as long as they accurately and effectively convey the information or tell the story you intend.

As long as images are understandable, their AI origins are irrelevant. The primary goal is effective communication with your audience.

Huang highlighted an example of an AI image used by the Google team during the first day of the conference. On close inspection, it does have some visual errors, but as a “prop” its job was to represent a timeline; it was not the main content of the slide, so those errors do not matter.

Image from author, July 2025

We can adopt a similar approach to our use of AI-generated imagery. If the image conveys the message and isn’t the main content of the page, minor issues won’t lead to penalization, nor will using AI-generated imagery in general.

Images should undergo a quick human review to identify obvious mistakes, which can prevent production errors.

Ongoing oversight remains essential to maintain trust in your visuals and protect your brand’s integrity.

Google Trends API Announced

Finally, Daniel Waisberg and Hadas Jacobi unveiled the new Google Trends API (Alpha). Key features of the new API will include:

  • Consistently scaled search interest data that does not recalibrate when you change queries.
  • A five‑year rolling window, updated up to 48 hours ago, for seasonal and historical comparisons.
  • Flexible time aggregation (weekly, monthly, yearly).
  • Region and sub‑region breakdowns.

This opens up a world of programmatic trend analysis with reliable, comparable metrics over time.

That wraps up day two. Tomorrow, we have coverage of the final day three at Google Search Central Live, with more breaking news and insights.

Featured Image: Dan Taylor/SALT.agency

Beyond Fan-Out: Turning Question Maps Into Real AI Retrieval via @sejournal, @DuaneForrester

If you spend time in SEO circles lately, you’ve probably heard query fan-out used in the same breath as semantic SEO, AI content, and vector-based retrieval.

It sounds new, but it’s really an evolution of an old idea: a structured way to expand a root topic into the many angles your audience (and an AI) might explore.

If this all sounds familiar, it should. Marketers have been digging for this depth since “search intent” became a thing years ago. The concept isn’t new; it just has fresh buzz, thanks to GenAI.

Like many SEO concepts, fan-out has picked up hype along the way. Some people pitch it as a magic arrow for modern search (it’s not).

Others call it just another keyword clustering trick dressed up for the GenAI era.

The truth, as usual, sits in the middle: Query fan-out is genuinely useful when used wisely, but it doesn’t magically solve the deeper layers of today’s AI-driven retrieval stack.

This guide sharpens that line. We’ll break down what query fan-out actually does, when it works best, where its value runs out, and which extra steps (and tools) fill in the critical gaps.

If you want a full workflow from idea to real-world retrieval, this is your map.

What Query Fan-Out Really Is

Most marketers already do some version of this.

You start with a core question like “How do you train for a marathon?” and break it into logical follow-ups: “How long should a training plan be?”, “What gear do I need?”, “How do I taper?” and so on.

In its simplest form, that’s fan-out. A structured expansion from root to branches.
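
Represented as data, that expansion is just a root query mapped to its branches (a toy sketch using the marathon example above):

fan_out = {
    "How do you train for a marathon?": [
        "How long should a training plan be?",
        "What gear do I need?",
        "How do I taper?",
    ]
}

# Each branch can later become its own stand-alone section (or "chunk") of the page.
for branch in fan_out["How do you train for a marathon?"]:
    print(branch)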

Where today’s fan-out tools step in is the scale and speed; they automate the mapping of related sub-questions, synonyms, adjacent angles, and related intents. Some visualize this as a tree or cluster. Others layer on search volumes or semantic relationships.

Think of it as the next step after the keyword list and the topic cluster. It helps you make sure you’re covering the terrain your audience, and the AI summarizing your content, expects to find.

Why Fan-Out Matters For GenAI SEO

This piece matters now because AI search and agent answers don’t pull entire pages the way a blue link used to work.

Instead, they break your page into chunks: small, context-rich passages that answer precise questions.

This is where fan-out earns its keep. Each branch on your fan-out map can be a stand-alone chunk. The more relevant branches you cover, the deeper your semantic density, which can help with:

1. Strengthening Semantic Density

A page that touches only the surface of a topic often gets ignored by an LLM.

If you cover multiple related angles clearly and tightly, your chunk looks stronger semantically. More signals tell the AI that this passage is likely to answer the prompt.

2. Improving Chunk Retrieval Frequency

The more distinct, relevant sections you write, the more chances you create for an AI to pull your work. Fan-out naturally structures your content for retrieval.

3. Boosting Retrieval Confidence

If your content aligns with more ways people phrase their queries, it gives an AI more reason to trust your chunk when summarizing. This doesn’t guarantee retrieval, but it helps with alignment.

4. Adding Depth For Trust Signals

Covering a topic well shows authority. That can help your site earn trust, which nudges retrieval and citation in your favor.

Fan-Out Tools: Where To Start Your Expansion

Query fan-out is practical work, not just theory.

You need tools that take a root question and break it into every related sub-question, synonym, and niche angle your audience (or an AI) might care about.

A solid fan-out tool doesn’t just spit out keywords; it shows connections and context, so you know where to build depth.

Below are reliable, easy-to-access tools you can plug straight into your topic research workflow:

  • AnswerThePublic: The classic question cloud. Visualizes what, how, and why people ask around your seed topic.
  • AlsoAsked: Builds clean question trees from live Google People Also Ask data.
  • Frase: Topic research module clusters root queries into sub-questions and outlines.
  • Keyword Insights: Groups keywords and questions by semantic similarity, great for mapping searcher intent.
  • Semrush Topic Research: Big-picture tool for surfacing related subtopics, headlines, and question ideas.
  • Answer Socrates: Fast People Also Ask scraper, cleanly organized by question type.
  • LowFruits: Pinpoints long-tail, low-competition variations to expand your coverage deeper.
  • WriterZen: Topic discovery clusters keywords and builds related question sets in an easy-to-map layout.

If you’re short on time, start with AlsoAsked for quick trees or Keyword Insights for deeper clusters. Both deliver instant ways to spot missing angles.

Now, having a clear fan-out tree is only step one. Next comes the real test: proving that your chunks actually show up where AI agents look.

Where Fan-Out Stops Working Alone

So, fan-out is helpful. But it’s only the first step. Some people stop here, assuming a complete query tree means they’ve future-proofed their work for GenAI. That’s where the trouble starts.

Fan-out does not verify if your content is actually getting retrieved, indexed, or cited. It doesn’t run real tests with live models. It doesn’t check if a vector database knows your chunks exist. It doesn’t solve crawl or schema problems either.

Put plainly: Fan-out expands the map. But, a big map is worthless if you don’t check the roads, the traffic, or whether your destination is even open.

The Practical Next Steps: Closing The Gaps

Once you’ve built a great fan-out tree and created solid chunks, you still need to make sure they work. This is where modern GenAI SEO moves beyond traditional topic planning.

The key is to verify, test, and monitor how your chunks behave in real conditions.

Image Credit: Duane Forrester

Below is a practical list of the extra work that brings fan-out to life, with real tools you can try for each piece.

1. Chunk Testing & Simulation

You want to know: “Does an LLM actually pull my chunk when someone asks a question?” Prompt testing and retrieval simulation give you that window.

Tools you can try:

  • LlamaIndex: Popular open-source framework for building and testing RAG pipelines. Helps you see how your chunked content flows through embeddings, vector storage, and prompt retrieval.
  • Otterly: Practical, non-dev tool for running live prompt tests on your actual pages. Shows which sections get surfaced and how well they match the query.
  • Perplexity Pages: Not a testing tool in the strict sense, but useful for seeing how a real AI assistant surfaces or summarizes your live pages in response to user prompts.

2. Vector Index Presence

Your chunk must live somewhere an AI can access. In practice, that means storing it in a vector database.

Running your own vector index is how you test that your content can be cleanly chunked, embedded, and retrieved using the same similarity search methods that larger GenAI systems rely on behind the scenes.

You can’t see inside another company’s vector store, but you can confirm your pages are structured to work the same way.
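
A bare-bones sketch of that idea, using nothing but NumPy and a stand-in embed() function (in practice you would use a real embedding model and one of the vector databases listed below):

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: hashes characters into a small vector purely for illustration.
    # Swap in a real embedding model before drawing any conclusions about retrieval.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

chunks = [
    "A marathon training plan usually runs 16 to 20 weeks.",
    "Tapering means cutting mileage in the final weeks before race day.",
]
index = np.vstack([embed(c) for c in chunks])   # the "vector index"
query = embed("how long should marathon training take")
scores = index @ query                          # cosine similarity (unit-length vectors)
print(chunks[int(np.argmax(scores))])           # best-matching chunk under this toy embedding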

Tools to help:

  • Weaviate: Open-source vector DB for experimenting with chunk storage and similarity search.
  • Pinecone: Fully managed vector storage for larger-scale indexing tests.
  • Qdrant: Good option for teams building custom retrieval flows.

3. Retrieval Confidence Checks

How likely is your chunk to win out against others?

This is where prompt-based testing and retrieval scoring frameworks come in.

They help you see whether your content is actually retrieved when an LLM runs a real-world query, and how confidently it matches the intent.

Tools worth looking at:

  • Ragas: Open-source framework for scoring retrieval quality. Helps test if your chunks return accurate answers and how well they align with the query.
  • Haystack: Developer-friendly RAG framework for building and testing chunk pipelines. Includes tools for prompt simulation and retrieval analysis.
  • Otterly: Non-dev tool for live prompt testing on your actual pages. Shows which chunks get surfaced and how well they match the prompt.

4. Technical & Schema Health

No matter how strong your chunks are, they’re worthless if search engines and LLMs can’t crawl, parse, and understand them.

Clean structure, accessible markup, and valid schema keep your pages visible and make chunk retrieval more reliable down the line.

Tools to help:

  • Ryte: Detailed crawl reports, structural audits, and deep schema validation; excellent for finding markup or rendering gaps.
  • Screaming Frog: Classic SEO crawler for checking headings, word counts, duplicate sections, and link structure, all cues that affect how chunks are parsed.
  • Sitebulb: Comprehensive technical SEO crawler with robust structured data validation, clear crawl maps, and helpful visuals for spotting page-level structure problems.

5. Authority & Trust Signals

Even if your chunk is technically solid, an LLM still needs a reason to trust it enough to cite or summarize it.

That trust comes from clear authorship, brand reputation, and external signals that prove your content is credible and well-cited. These trust cues must be easy for both search engines and AI agents to verify.

Tools to back this up:

  • Authory: Tracks your authorship, keeps a verified portfolio, and monitors where your articles appear.
  • SparkToro: Helps you find where your audience spends time and who influences them, so you can grow relevant citations and mentions.
  • Perplexity Pro: Lets you check whether your brand or site appears in AI answers, so you can spot gaps or new opportunities.

Query fan-out expands the plan. Retrieval testing proves it works.

Putting It All Together: A Smarter Workflow

When someone asks, “Does query fan-out really matter?” the answer is yes, but only as a first step.

Use it to design a strong content plan and to spot angles you might miss. But always connect it to chunk creation, vector storage, live retrieval testing, and trust-building.

Here’s how that looks in order:

  1. Expand: Use fan-out tools like AlsoAsked or AnswerThePublic.
  2. Draft: Turn each branch into a clear, stand-alone chunk.
  3. Check: Run crawls and fix schema issues.
  4. Store: Push your chunks to a vector DB.
  5. Test: Use prompt tests and RAG pipelines.
  6. Monitor: See if you get cited or retrieved in real AI answers.
  7. Refine: Adjust coverage or depth as gaps appear.

The Bottom Line

Query fan-out is a valuable input, but it’s never been the whole solution. It helps you figure out what to cover, but it does not prove what gets retrieved, read, or cited.

As GenAI-powered discovery keeps growing, smart marketers will build that bridge from idea to index to verified retrieval. They’ll map the road, pave it, watch the traffic, and adjust the route in real time.

So, next time you hear fan-out pitched as a silver bullet, you don’t have to argue. Just remind people of the bigger picture: The real win is moving from possible coverage to provable presence.

If you do that work (with the right checks, tests, and tools), your fan-out map actually leads somewhere useful.

This post was originally published on Duane Forrester Decodes.


Featured Image: Deemerwha studio/Shutterstock

Google Trends API (Alpha) Launching: Breaking News via @sejournal, @TaylorDanRW

Google has just unveiled an alpha version of its Trends API at Google Search Central Live, Deep Dive APAC 2025. This new offering brings explore-page data directly into applications.

The API will provide consistently scaled search interest figures. These figures align more predictably than the current website numbers.

Announced by Daniel Waisberg and Hadas Jacobi, the alpha is opening up from today, and they are looking for testers who will use it throughout 2025.

The API will not include Trending Now.

Image from author, July 2025

Key Features

Consistently Scaled Search Interest

The standout feature in this Alpha release is consistent scaling.

Unlike the web interface, where search interest values shift depending on your query mix, the API returns values that remain stable across requests.

These won’t be absolute search volumes, but the sample response shows an indicative search volume presented alongside the scaled number, for comparison with what the Google Trends website interface displays.

Five-Year Rolling Window

The API surfaces data across a five-year rolling window.

Data is available up to 48 hours ago to preserve temporal patterns, such as annual events or weekly cycles.

This longer context helps you contrast today’s search spikes with those of previous years. It’s ideal for spotting trends tied to seasonal events and recurring news cycles.

Flexible Aggregations And Geographic Breakdown

You choose how to aggregate data: weekly, monthly, or annually.

This flexibility allows you to zoom in for fine-grained analysis or step back for long-term trends.

Regional and sub-regional breakdowns are also exposed via the API. You can pinpoint interest in countries, states, or even cities without extra work.

Sample API Request & Response

Hadas shared an example request prompt using Python, as well as a sample response.

The request:

Image from author, July 2025

The response:

Image from author, July 2025
print(time_series)
{
  "points": [
    {
      "time_range": {
        "start_time": "2024-01-01",
        "end_time": "2024-01-07",
      },
      "search_interest": 4400.0,
      "scaled_search_interest": 62,
    },
    {
      "time_range": {
        "start_time": "2024-01-08",
        "end_time": "2024-01-14",
      },
      "search_interest": 7100.0,
      "scaled_search_interest": 100,
    },
    …
  ]
}
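
Assuming a response shaped like the sample above (field names taken from the screenshot; the alpha API itself may differ), pulling out the scaled values in Python is straightforward:

# Field names follow the sample response above; the alpha API may differ.
time_series = {
    "points": [
        {"time_range": {"start_time": "2024-01-01", "end_time": "2024-01-07"},
         "search_interest": 4400.0, "scaled_search_interest": 62},
        {"time_range": {"start_time": "2024-01-08", "end_time": "2024-01-14"},
         "search_interest": 7100.0, "scaled_search_interest": 100},
    ]
}

for point in time_series["points"]:
    week = point["time_range"]["start_time"]
    print(week, point["scaled_search_interest"])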

Sign up now to get early access to the Google Trends API alpha.


Featured Image: Dan Taylor/SALT.agency

Google Says You Don’t Need AEO Or GEO To Rank In AI Overviews via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI Search does not require specialized optimization, saying that “AI SEO” is not necessary and that standard SEO is all that is needed for both AI Overviews and AI Mode.

AI Search Is Everywhere

Standard search, in the way it used to be with link algorithms playing a strong role, no longer exists. AI is embedded within every step of the organic search results, from crawling to indexing and ranking. AI has been a part of Google Search for ten years, beginning with RankBrain and expanding from there.

Google’s Gary Illyes made it clear that AI is embedded within every step of today’s search ranking process.

Kenichi Suzuki (LinkedIn Profile) posted a detailed summary of what Illyes discussed: AI Search features use the same infrastructure as traditional search. His summary covers four main points:

  1. AI Search Optimization = SEO
  2. Google’s focus is on content quality and is agnostic as to how it was created
  3. AI is deeply embedded into every stage of search
  4. Generative AI has unique features to ensure reliability

There’s No Need For AEO Or GEO

The SEO community has tried to wrap their minds around AI search, with some insisting that ranking in AI search requires an approach to optimization so distinct from SEO that it warrants its own acronym. Other SEOs, including an SEO rockstar, have insisted that optimizing for AI search is fundamentally the same as standard search. I’m not saying that one group of SEOs is right and another is wrong. The SEO community collectively discussing a topic and reaching different conclusions is one of the few things that doesn’t change in search marketing.

According to Google, ranking in AI Overviews and AI Mode requires only standard SEO practices.

Suzuki shared why AI search doesn’t require different optimization strategies:

“Their core message is that new AI-powered features like AI Overviews and AI Mode are built upon the same fundamental processes as traditional search. They utilize the same crawler (Googlebot), the same core index, and are influenced by the same ranking systems.

They repeatedly emphasized this with the phrase “same as above” to signal that a separate, distinct strategy for “AI SEO” is unnecessary. The foundation of creating high-quality, helpful content remains the primary focus.”

Content Quality Is Not About How It’s Created

The second point that Google made was that their systems are tuned to identify content quality and that identifying whether the content was created by a human or AI is not part of that quality assessment.

Gary Illyes is quoted as saying:

“We are not trying” to differentiate based on origin.

According to Kenichi, the objective is to:

“…identify and reward high-quality, helpful, and reliable content, regardless of whether it was created by a human or with the assistance of AI.”

AI Is Embedded Within Every Stage Of Search

The third point that Google emphasized is that AI plays a role at every stage of search: crawling, indexing, and ranking.

Regarding the ranking part, Suzuki wrote:

“RankBrain helps interpret novel queries, while the Multitask Unified Model (MUM) understands information across various formats (text, images, video) and 75 different languages.”

Unique Processes Of Generative AI Features

The fourth point that Google emphasized is that AI Overviews does two additional things at the ranking stage:

  1. Query Fan-Out
    Generates multiple queries in order to provide deeper answers to queries, using the query fan-out technique.
  2. Grounding
    AI Overviews checks the generated answers against online sources to make sure that they are factually accurate, a process called grounding.

Suzuki explains:

“It then uses a process called “grounding” to check the generated text against the information in its search index, a crucial step designed to verify facts and reduce the risk of AI ‘hallucinations.’”

Takeaways:

AI SEO vs. Traditional SEO

  • Google explicitly states that specialized “AI SEO” is not necessary.
  • Standard SEO practices remain sufficient to rank in AI-driven search experiences.

Integration of AI in Google Search

  • AI technology is deeply embedded across every stage of Google’s organic search: crawling, indexing, and ranking.
  • Technologies like RankBrain and the Multitask Unified Model (MUM) are foundational to Google’s current search ranking system.

Google’s Emphasis on Content Quality

  • Content quality assessment by Google is neutral regarding whether humans or AI produce the content.
  • The primary goal remains identifying high-quality, helpful, and reliable content.

Generative AI-Specific Techniques

  • Google’s AI Overviews employ specialized processes like “query fan-out” to answer queries thoroughly.
  • A technique called “grounding” is used to ensure factual accuracy by cross-checking generated content against indexed information.

Google clarified that there’s no need for AEO/GEO for Google AI Overviews and AI Mode. Standard search engine optimization is all that’s needed to rank across both standard and AI-based search. Content quality remains an important part of Google’s algorithms, and they made a point to emphasize that they don’t check whether content is created by a human or AI.

Featured Image by Shutterstock/Luis Molinero

Google: AI Overviews Drive 10% More Queries, Per Q2 Earnings via @sejournal, @MattGSouthern

New data from Google’s Q2 2025 earnings call suggests that AI features in Search are driving higher engagement.

Google reported that AI Overviews contribute to more than 10% additional queries for the types of searches where they appear.

With AI Overviews now reaching 2 billion monthly users, this is a notable shift from the early speculation that AI would reduce the need to search.

AI Features Linked to Higher Query Volume

Google reported $54.2 billion in Search revenue for Q2, marking a 12% increase year-over-year.

CEO Sundar Pichai noted that both overall and commercial query volumes are up compared to the same period last year.

Pichai said during the earnings call:

“We are also seeing that our AI features cause users to search more as they learn that Search can meet more of their needs. That’s especially true for younger users.”

He added:

“We see AI powering an expansion in how people are searching for and accessing information, unlocking completely new kinds of questions you can ask Google.”

This is the first quarter where Google has quantified how AI Overviews impact behavior, rather than just reporting usage growth.

More Visual, Conversational Search Activity

Google highlighted continued growth in visual and multi-modal search, especially among younger demographics. The company pointed to increased use of Lens and Circle to Search, often in combination with AI Overviews.

AI Mode, Google’s conversational interface, now has over 100 million monthly active users across the U.S. and India. The company plans to expand its capabilities with features like Deep Search and personalized results.

Language Model Activity Is Accelerating

In a stat that received little attention, Google disclosed it now processes more than 980 trillion tokens per month across its products. That figure has doubled since May.

Pichai stated:

“At I/O in May, we announced that we processed 480 trillion monthly tokens across our surfaces. Since then we have doubled that number.”

The rise in token volume shows how quickly AI is being used across Google products like Search, Workspace, and Cloud.

Enterprise AI Spending Continues to Climb

Google Cloud posted $13.6 billion in revenue for the quarter, up 32% year-over-year.

Adoption of AI tools is a major driver:

  • Over 85,000 enterprises are now building with Gemini
  • Deal volume is increasing, with as many billion-dollar contracts signed in the first half of 2025 as in all of last year
  • Gemini usage has grown 35 times compared to a year ago

To support growth across AI and Cloud, Alphabet raised its projected capital expenditures for 2025 to $85 billion.

What You Should Know as a Search Marketer

Google’s data challenges the idea that AI-generated answers are replacing search. Instead, features like AI Overviews appear to prompt follow-up queries and enable new types of searches.

Here are a few areas to watch:

  • Complex queries may become more common as users gain confidence in AI
  • Multi-modal search is growing, especially on mobile
  • Visibility in AI Overviews is increasingly important for content strategies
  • Traditional keyword targeting may need to adapt to conversational phrasing

Looking Ahead

With Google now attributing a 10% increase in queries to AI Overviews, the way people interact with search is shifting.

For marketers, that shift isn’t theoretical; it’s already in progress. Search behavior is leaning toward more complex, visual, and conversational inputs. If your strategy still assumes a static SERP, it may already be out of date.

Keep an eye on how these AI experiences roll out beyond the U.S., and watch how query patterns change in the months ahead.


Featured Image: bluestork/shutterstock

Google Search Central APAC 2025: Everything From Day 1 via @sejournal, @TaylorDanRW

Search Central Live Deep Dive Asia Pacific 2025 brings together SEOs from across the region for three days of insight, networking, and practical advice.

Held at the Carlton Hotel Bangkok Sukhumvit, the event features an impressive speaker lineup alongside structured networking breaks.

Attendees have the chance to meet familiar faces, connect with global SEO leaders, and share ideas on the latest trends shaping our industry.

The conference is split over three days, with each day covering a key part of Google’s processes: crawling, indexing, and serving.

Some of the practical tips that emerged from day one:

  1. Keep building human‑focused content. Google’s models favor natural, expert writing above all.
  2. Optimize for multiple modalities. Make sure images have descriptive alt text, videos have transcripts, and voice search is supported by conversational language.
  3. Monitor crawl budget. Fix 5XX errors promptly and streamline your site’s structure to guide Googlebot efficiently.
  4. Use Search Console recommendations. Non‑expert site owners can benefit from the guided suggestions feature to improve usability and performance.
  5. Stay flexible. Long‑held traffic trends may shift as AI features grow. Past success does not equal future success.

A Pivotal Moment For Search

Mike Jittivanich, director of marketing for South East Asia and South Asia Frontier, set the tone in his keynote by declaring that we’ve reached a pivotal moment in search. He identified three forces at work:

  1. AI innovation that rivals past major shifts such as mobile and social media.
  2. Evolving user consumption patterns, as people expect faster, more conversational ways to find information.
  3. Changing habits of younger generations, who interact with search differently from their parents.

This trio of drivers underlines that past success no longer guarantees future success in search.

As Liz Reid, VP of Search at Google, has put it, “Search is never a solved problem.”

Image from author, July 2025

New formats, from AI Overviews to multimodal queries, must be woven alongside traditional blue links in a way that keeps pace with user expectations.

Gen Z: The Fastest‑Growing Search Demographic

One of the most eye-opening statistics came from a session on generational trends: Gen Z (aged 18-24) is the fastest-growing group of searchers.

Image from author, July 2025

Lens usage alone grew 65% year‑on‑year, with over 100 billion Lens searches so far in 2025. Remarkably, 1 in 5 searches via Lens now carries commercial intent.

Younger users are also more likely to initiate searches in non-traditional ways.

Roughly 10% of their journeys begin with Circle to Search or other AI‑powered experiences, rather than typing into a search box. For SEOs, this means optimizing for image and voice queries is no longer optional.

Why Human‑Centered Content Wins

Across several talks, speakers emphasized that Google’s machine‑learning ranking algorithms learn from content created by humans for humans.

These models understand natural language patterns and reward authentic, informative writing.

In contrast, AI‑generated text occupies its own space in the index, and Google’s ranking systems are not trained on that portion. Gary Illyes explained:

“Our algorithms train on the highest‑quality content in the index, which is clearly human‑created.”

For your site, the takeaway is clear: Keep focusing on well‑researched, engaging content.

SEO fundamentals, like clear structure, relevant keywords, and solid internal linking, remain vital.

There is no separate checklist for AI features. If you’re doing traditional SEO well, you’ll naturally be eligible for AI Overviews and AI Mode features.

AI In Crawling And Indexing

Two sessions shed light on how AI is touching the crawling and indexing process:

  • AI Crawl Impact: Sites are seeing increased crawl rates as Googlebot adapts to new AI‑powered features. However, a higher crawl rate does not automatically boost ranking.
  • Status Codes and Crawl Budget: Only server errors (5XX) consume crawl budget; 1XX and 4XX codes do not affect it, though 4XX can influence scheduling and prioritization.

Cherry Prommawin explained that crawl budget is the product of crawl rate limit (how fast Googlebot can crawl) and crawl demand (how much it wants to crawl).

If your site has broken links or slow responses, it may slow down the overall crawling process.

Google Search Is Evolving In Two Ways

Google Search is evolving along two main focus points: the types of queries users can pose and the range of answers Google can deliver.

The Questions Users Can Ask

Queries are becoming longer and more conversational. Searches of five or more words are growing at 1.5X the rate of shorter queries.

Beyond text, users now routinely turn to voice, images, and Circle to Search: For Gen Z, about 10% of journeys start with these AI-powered entry points.

The Results Google Can Provide

AI Overviews can generate balanced summaries when there’s no single “right” answer, while AI Mode offers end‑to‑end generative experiences for shopping, meal planning, and multi‑modal queries.

Google is bringing DeepMind’s reasoning models into Search to power these richer, more nuanced results, blending text, images, and action‑oriented guidance in a single interface.

Image from author, July 2025

LLMs.txt & Robots.txt

Gary Illyes and Amir Taboul discussed Google’s stance on robots.txt and the IETF working group’s proposed LLMs.txt standard.

Much like the meta keywords tag of old, LLMs.txt is not a Google initiative, is not seen as beneficial, and is not something Google is looking to adopt.

Google’s view is that robots.txt remains the primary voluntary standard for controlling crawlers. If you choose to block AI‑specific bots, you can do so in robots.txt, but know that not all AI crawlers will obey it.
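
For example, a site that wants to opt out of having its content used by certain AI systems could add rules like these to robots.txt (tokens shown for illustration; check each crawler’s documentation, and remember that not every bot honors them):

# Opt out of content use for Gemini model training (separate from Googlebot and Search)
User-agent: Google-Extended
Disallow: /

# Block OpenAI’s crawler
User-agent: GPTBot
Disallow: /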

AI Features As Extensions Of Search

AI Mode and AI Overviews rely on the exact same crawling, indexing, and serving infrastructure as traditional Search.

Googlebot handles both blue‑link results and AI features, while other crawlers in the same system feed Gemini and large language models (LLMs).

Image from author, July 2025

Every page still goes through HTML parsing, rendering, and deduplication, and statistical models such as BERT are applied for understanding and spam detection when it’s time to serve results. The same query‑interpretation pipelines and ranking signals, such as RankBrain, MUM, and other ML models, order information for both classic blue links and AI‑powered answers.

AI Mode and AI Overviews are simply new front-end features built on the familiar Search foundations that SEOs have been optimizing for all along.

Making The Most Of Google Search Console

Finally, Daniel Waisberg led a session on effectively utilizing Search Console in this new era.

Waisberg described Search Console as the bridge between Google’s infrastructure (crawling, indexing, serving) and your site. Key points that came from these sessions included:

  • Data latency: Finalized data in Search Console is typically two days old, based on the Pacific time zone. Partial and near-final data sit behind the scenes and may differ by up to 1%.
  • Feature lifecycle: New enhancements progress from user need to available data, then through design and development, to testing and launch.
  • Recommendations feature: This tool is aimed at users who are not data experts, suggesting actionable improvements without overwhelming them.

By understanding how Search Console presents data, you can diagnose crawl issues, track performance, and identify opportunities for AI-driven features.

That’s it for day one. Watch tomorrow for our coverage of day two at Google Search Central Live, with more Google insights to come.

Featured Image: Dan Taylor/SALT.agency

Google Shares SEO Guidance For State-Specific Product Pricing via @sejournal, @MattGSouthern

In a recent SEO Office Hours video, Google addressed whether businesses can show different product prices to users in different U.S. states, and what that means for search visibility.

The key point: Google only indexes one version of a product page, even if users in different locations see different prices.

Google Search Advocate John Mueller stated in the video:

“Google will only see one version of your page. It won’t crawl the page from different locations within the U.S., so we wouldn’t necessarily recognize that there are different prices there.”

How Google Handles Location-Based Pricing

Google confirmed it doesn’t have a mechanism for indexing multiple prices for the same product based on a U.S. state.

However, you can reflect regional cost differences by using the shipping and tax fields in structured data.

Mueller continued:

“Usually the price difference is based on what it actually costs to ship this product to a different state. So with those two fields, maybe you could do that.”

For example, you might show a base price on the page, while adjusting the final cost through shipping or tax settings depending on the buyer’s location.
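
A rough sketch of that approach in product structured data (hypothetical values; schema.org’s OfferShippingDetails lets the shipping cost vary by destination region):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "99.00",
    "priceCurrency": "USD",
    "shippingDetails": {
      "@type": "OfferShippingDetails",
      "shippingRate": { "@type": "MonetaryAmount", "value": "12.00", "currency": "USD" },
      "shippingDestination": { "@type": "DefinedRegion", "addressCountry": "US", "addressRegion": "CA" }
    }
  }
}
</script>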

When Different Products Make More Sense

If you need Google to recognize distinct prices for the same item depending on state-specific factors, Google recommends treating them as separate products entirely.

Mueller added:

“You would essentially want to make different products in your structured data and on your website. For example, one product for California specifically, maybe it’s made with regards to specific regulations in California.”

In other words, rather than dynamically changing prices for one listing, consider listing two separate products with different pricing and unique product identifiers.

Key Takeaway

Google’s infrastructure currently doesn’t support state-specific price indexing for a single product listing.

Instead, businesses will need to adapt within the existing framework. That means using structured data fields for shipping and tax, or publishing distinct listings for state variants when necessary.

Hear Mueller’s full response in the video below:

Do We Need A Separate Framework For GEO/AEO? Google Says Probably Not via @sejournal, @TaylorDanRW

At Google Search Central Live Deep Dive Asia Pacific 2025, Cherry Prommawin and Gary Illyes led a session on how AI fits into Search.

They asked whether we need separate frameworks for Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO).

Their insights suggest that GEO and AEO do not require wholly new disciplines.

Photo taken by author, Search Central Live Deep Dive Asia Pacific, July 2025

AI Features Are Just Features

Cherry Prommawin explained that AI Mode, AI Overviews, Circle to Search, and Lens behave like featured snippets or knowledge panels.

These features draw on the same ranking signals and data sources as traditional Search.

They all run on Google’s core indexing and ranking engine without requiring a standalone platform. Adding an AI component is simply a matter of introducing extra interpretation layers.

Gary Illyes emphasized that both AI-driven tools and classic Search services share a single, unified infrastructure. This underlying infrastructure handles indexing, ranking, and serving for all result types.

AI Mode and AI Overviews are just features of Search, and built on the same Search infrastructure.

Deploying new AI capabilities means integrating additional models into the same system. Circle to Search and Lens simply add their query-understanding modules on top.

Crawling

All of the AI Overviews and AI Mode features rely on the same crawler that powers traditional Search: Googlebot. This crawler visits pages, follows links, and gathers fresh content.

Gemini is treated as a separate system and uses its own bots within Google’s crawler ecosystem to feed data into its models.

Indexing

In AI Search, the core indexing process mirrors the methods used for traditional search. Pages that have been crawled are analyzed and organized into the index, then statistical models and BERT are applied to refine that data.

These statistical models have been in use for more than 20 years and were first created to support the “did you mean” feature and help catch spam.

BERT adds a deeper understanding of natural language to the mix.

Photo taken by author, Search Central Live Deep Dive Asia Pacific, July 2025

Serving

Once the index is built, the system must interpret each user query. It looks for stop words, identifies key terms, and breaks the query into meaningful parts.

The ranking phase then orders hundreds of potential results based on various signals. Different formats, such as text, images, and video, carry different weightings.

RankBrain applies machine learning to adjust those signals while MUM brings a multimodal, multitask approach to understanding complex queries and matching them with the best possible answers.

What This Means: Use The Same Principles From SEO

Given the tight integration of AI features with standard Search, creating distinct GEO or AEO programs may duplicate existing efforts.

As SEOs, we should be able to apply existing optimization practices to both AI Search and “traditional” Search products. Focusing on how AI enhancements fit into current workflows lets teams leverage their expertise.

Spreading resources to build separate frameworks could pull attention away from higher-impact tasks.

Cherry Prommawin and Gary Illyes concluded their session by reinforcing that AI is another feature in the Search product.

SEO professionals can continue to refine their strategies using the same principles that guide traditional search engine optimization.

Featured Image taken by author

Pew Research Confirms Google AI Overviews Is Eroding Web Ecosystem via @sejournal, @martinibuster

Pew Research Center tracked real web browsing behavior and confirmed what many publishers and SEOs have claimed: AI Overviews does not send traffic back to websites. The results show that the damage caused by AI summaries to the web ecosystem is as bad as or worse than is commonly understood.

Methodology

The Pew Research study tracked over 900 adults who consented to installing an online browsing tracker to record their browsing behavior in the month of March 2025. The dataset contains 68,879 unique Google search queries, and a total of 12,593 queries triggered an AI summary.

Confirmed: Google AI Search Is Eroding Referral Traffic

The tracked user data confirms publisher complaints about a drop in referral traffic caused by AI search results. Google users who encounter an AI search result are less likely to click on a link and visit a website than users who see only a standard search result.

Only 8% of users who encountered an AI summary clicked a link (in the AI summary or the standard search results) to visit a website. Users who only saw a standard search result clicked through to a website 15% of the time, nearly twice as often as users who viewed an AI summary.

Users rarely click a link within an AI summary. Only 1% of users clicked an AI summary link and visited a website.

AI Summaries Cause Less Web Engagement

In a recent interview, Google’s CEO Sundar Pichai pushed back on the notion that AI summaries have a negative impact on the web ecosystem. He said that the fact that there is more content being created on the web than at any other time is proof that the web ecosystem is thriving. He said:

“So, generally there are more web pages… I think people are producing a lot of content, and I see consumers consuming a lot of content. We see it in our products.”

Pichai also insisted that people are consuming content across multiple formats (video, images, text) and that publishers today should be presenting their content in more than just one format.

However, contrary to what Google’s CEO said, AI is not encouraging users to consume more content; it’s having the opposite effect. The Pew research data shows that AI summaries cause users to engage less with web content.

According to the research findings:

Users End Their Browsing Session

“Google users are more likely to end their browsing session entirely after visiting a search page with an AI summary than on pages without a summary.

This happened on 26% of pages with an AI summary, compared with 16% of pages with only traditional search results.”

Users Refrain From Clicking On Traditional Search Links

It also says that users tended to not click on a traditional search result when faced with an AI summary:

“Users who encountered an AI summary clicked on a traditional search result link in 8% of all visits. Those who did not encounter an AI summary clicked on a search result nearly twice as often (15% of visits).”

Only 1% Click Citation Links In AI Summaries

Users who see an AI summary overwhelmingly do not click the citations to the websites that the AI summary links to.

The report shows:

“Google users who encountered an AI summary also rarely clicked on a link in the summary itself. This occurred in just 1% of all visits to pages with such a summary.”

This confirms what publishers and SEOs have been saying to Google over and over again: Google AI Overviews robs publishers of referral traffic. Rob is a strong word, but given that Google is using web content to “synthesize” an answer to a search query that does not result in a referral click, “rob” is the word that inevitably comes to mind for a publisher or SEO who worked hard to create the content.

Another startling fact shared in the research is that almost 66% of users either browsed somewhere else on Google or completely bailed on Google without clicking a link to visit a website. In other words, nearly 66% of Google’s users do not click a link to visit the web ecosystem.

The report explains:

“…the largest share of Google searches in our study resulted in the user either browsing elsewhere on Google or leaving the site entirely without clicking a link in the search results. Around two-thirds of all searches resulted in one of these actions.”

Wikipedia, YouTube And Reddit Dominate Google Searches

Google has been holding publisher events and Search Central Live events all around the world to listen to publisher feedback and to promise that Google will work harder to surface a greater variety of content. I know that the Googlers at these events are not lying, but those promises of surfacing more high-quality content are subverted by the grim facts presented in the Pew research of actual users.

One of the biggest complaints is that Reddit and Wikipedia dominate the search results. The research validates publisher and SEO concerns because it shows that not only are Reddit and Wikipedia the most commonly cited websites, but Google’s own YouTube ranks among the top three most cited web destinations.

The report explains:

“The most frequently cited sources in both Google AI summaries and standard search results are Wikipedia, YouTube and Reddit. These three sites are the most commonly linked sources in AI summaries and standard search results alike.

Collectively, they accounted for 15% of the sources that were listed in the AI summaries we examined. They made up a similar share (17%) of the sources listed in standard search results.”

The report also shows:

  • “Wikipedia links are somewhat more common in AI summaries than in standard search pages”
  • “YouTube links are somewhat more common in standard search results than in AI summaries.”

These Are The Facts

Pew Research’s study of over 68,000 search queries from the browsing habits of over 900 adults reveals that Google’s AI summaries sharply reduce clicks to websites, with just 8% of users clicking any link and only 1% engaging with citations in AI answers.

Users encountering AI summaries are more likely to end their sessions or stay within Google’s ecosystem rather than visiting independent websites. This confirms publisher and SEO concerns that AI-driven search erodes web traffic and concentrates attention on a few dominant platforms like Wikipedia, Reddit, and YouTube.

These are the facts. They show that SEOs and publishers are right that AI Overviews is siphoning traffic out of the web ecosystem.

Featured Image by Shutterstock/Asier Romero