Controversial Proposal To Label Sections Of AI Generated Content via @sejournal, @martinibuster

A new proposal was published for an HTML attribute that would notify crawlers which parts of a web page are AI generated. The proposal is quickly becoming relevant because of new rules coming into effect in Europe this summer, but some question whether this is the right solution to that problem.

AI Disclosure

The proposal was created by David E. Weekly (LinkedIn profile), who noted that existing proposals provide a general signal that an entire web page is AI generated, but nothing labels an individual section of an otherwise human-authored page.

Weekly’s proposal acknowledges the reality that many web pages are partially AI generated. One example is the AI generated summaries of news content. The proposal specifically mentions news sites that contain a sidebar with AI generated summaries.

The proposal suggests creating an HTML attribute that can be applied at the section level rather than only to the whole page.
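The proposal's exact syntax isn't reproduced here, but purely as an illustration, a section-level disclosure might look something like the following. The attribute name `data-ai-generated` is hypothetical, not the proposal's actual name:

```html
<article>
  <h1>Quarterly Earnings Report</h1>
  <p>Human-written analysis of the results…</p>

  <!-- Hypothetical attribute marking only this section as AI generated -->
  <aside data-ai-generated="true">
    <h2>AI Summary</h2>
    <p>Machine-generated recap of the article above.</p>
  </aside>
</article>
```

The point is granularity: a crawler could treat the `<aside>` differently from the surrounding human-authored content.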

Google Shows How To Get More Traffic From Top Stories Feature via @sejournal, @martinibuster

Google added new documentation to Search Central covering their Preferred Sources program that helps news websites get into the Top Stories feature. The documentation explains what publishers can do to make it more likely to be ranked in Top Stories and get more traffic.

Top Stories

Given that Top Stories is about breaking news, freshness may be a factor for ranking. Top Stories surfaces local news as well as breaking news. Structured data is not necessary to rank in Top Stories, but adding Schema.org Article structured data helps Google better understand what the page is about. While the Top Stories display resembles Google’s carousel feature, ItemList structured data for Carousel displays has no effect.
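Google's documentation expresses Article markup as JSON-LD. A minimal sketch for a news page might look like this; all URLs, names, and dates below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example Headline For A Breaking Story",
  "image": ["https://example.com/lead-photo.jpg"],
  "datePublished": "2026-01-26T08:00:00+00:00",
  "author": [{
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe"
  }]
}
</script>
```

`NewsArticle` is a subtype of `Article`; the same properties apply to either type.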

Source Preferences Tool

The Preferred Sources program is available globally, but only for English-language web pages. Google also states that sites already in the Preferred Sources tool are eligible to use a deep link that encourages users to add them as a preferred source: https://www.google.com/preferences/source

According to Google:

If your site appears in the source preferences tool, you can use the following methods to guide your readers to select your site as a preferred source:

Add the deeplink to your social posts or promotions. Use the following URL format, which takes users directly to your site in the source preferences tool:

https://google.com/preferences/source?q=Your_Website's_URL

For example, if your site is https://example.com, use the following URL:

https://google.com/preferences/source?q=example.com
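On a publisher's own pages, that deeplink can be embedded as an ordinary anchor (example.com is a placeholder for your domain):

```html
<a href="https://google.com/preferences/source?q=example.com">
  Add us as a preferred source on Google
</a>
```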

Do What You Can For More Traffic From Top Stories

Getting traffic out of Google appears to be getting increasingly difficult. So it’s useful to take advantage of every available opportunity.

Featured Image by Shutterstock/RealPeopleStudio

WordPress Announces AI Agent Skill For Speeding Up Development via @sejournal, @martinibuster

WordPress announced wp-playground, a new AI agent skill designed to be used with the Playground CLI so AI agents can run WordPress for testing and check their work as they write code. The skill helps agents test code quickly while they work.

Playground CLI

Playground is a WordPress sandbox that enables users to run a full WordPress site without setting it all up on a traditional server. It is used for testing plugins, creating and adjusting themes, and experimenting safely without affecting a live site.

The new AI agent skill is for use with Playground CLI, which runs locally and requires knowledge of terminal commands, Node.js, and npm to manage local WordPress environments.

The wp-playground skill starts WordPress automatically and determines where generated code should exist inside the installation. The skill then mounts the code into the correct directory, which allows the agent to move directly from generated code to a running WordPress site without manual setup.

Once WordPress is running, the agent can test behavior and verify results using common tools. In testing, agents interacted with WordPress through tools like curl and Playwright, checked outcomes, applied fixes, and then re-tested using the same environment. This process creates a repeatable loop where the agent can confirm whether a change works before making further changes.

The skill also includes helper scripts that manage startup and shutdown. These scripts reduce the time it takes for WordPress to become ready for testing from about a minute to only a few seconds. The Playground CLI can also log into WP-Admin automatically, which removes another manual step during testing.
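The announcement doesn't include commands, but the Playground CLI ships as an npm package, so a session looks roughly like the sketch below. The mount path, flags, and port here are illustrative — verify them against the Playground CLI documentation before relying on them:

```shell
# Start a throwaway WordPress instance with a local plugin mounted into it.
# (--mount=hostPath:playgroundPath; flags illustrative — check the CLI docs.)
npx @wp-playground/cli server \
  --mount=./my-plugin:/wordpress/wp-content/plugins/my-plugin

# In another terminal, an agent (or you) can verify behavior over HTTP.
# The port below is illustrative; use whatever the CLI reports on startup.
curl -s http://127.0.0.1:9400/ | grep -i my-plugin
```

This mount-then-probe pattern is the feedback loop the skill automates: change code locally, and the running site reflects it immediately for the agent to test.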

The creator of the AI agent skill, Brandon Payton, is quoted explaining how it works:

“AI agents work better when they have a clear feedback loop. That’s why I made the wp-playground skill. It gives agents an easy way to test WordPress code and makes building and experimenting with WordPress a lot more accessible.”

The WordPress AI agent skill release also introduces a new GitHub repository dedicated to hosting WordPress agent skills. Planned ideas include persistent Playground sites tied to a project directory, running commands against existing Playground instances, and Blueprint generation.

Featured Image by Shutterstock/Here

AI Recommendations Change With Nearly Every Query: Sparktoro via @sejournal, @MattGSouthern

AI tools produce different brand recommendation lists nearly every time they answer the same question, according to a new report from SparkToro.

The data showed a less than 1-in-100 chance that ChatGPT or Google would return the same recommendations twice for an identical prompt.

Rand Fishkin, SparkToro co-founder, conducted the research with Patrick O’Donnell from Gumshoe.ai, an AI tracking startup. The team ran 2,961 prompts across ChatGPT, Claude, and Google Search AI Overviews (with AI Mode used when Overviews didn’t appear) using hundreds of volunteers over November and December.

What The Data Found

The authors tested 12 prompts requesting brand recommendations across categories, including chef’s knives, headphones, cancer care hospitals, digital marketing consultants, and science fiction novels.

Each prompt was run 60-100 times per platform. Nearly every response was unique in three ways: the list of brands presented, the order of recommendations, and the number of items returned.

Fishkin summarized the core finding:

“If you ask an AI tool for brand/product recommendations a hundred times nearly every response will be unique.”

Claude showed slightly higher consistency in producing the same list twice, but was less likely to produce the same ordering. None of the platforms came close to the authors’ definition of reliable repeatability.

The Prompt Variability Problem

The authors also examined how real users write prompts. When 142 participants were asked to write their own prompts about headphones for a traveling family member, almost no two prompts looked similar.

The semantic similarity score across those human-written prompts was 0.081. Fishkin compared the relationship to:

“Kung Pao Chicken and Peanut Butter.”

The prompts shared a core intent but little else.

Despite the prompt diversity, the AI tools returned brands from a relatively consistent consideration set. Bose, Sony, Sennheiser, and Apple appeared in 55-77% of the 994 responses to those varied headphone prompts.

What This Means For AI Visibility Tracking

The findings question the value of “AI ranking position” as a metric. Fishkin wrote: “any tool that gives a ‘ranking position in AI’ is full of baloney.”

However, the data suggests that how often a brand appears across many runs of similar prompts is more consistent. In tight categories like cloud computing providers, top brands appeared in most responses. In broader categories like science fiction novels, the results were more scattered.
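The difference between unstable rankings and more stable appearance frequency is easy to see in code. The sketch below uses invented sample data (not SparkToro's) to compute how often each brand shows up across repeated runs of a prompt versus how often an exact ordered list repeats:

```python
from collections import Counter

def appearance_rates(runs):
    """Fraction of runs in which each brand appears at least once."""
    counts = Counter(brand for run in runs for brand in set(run))
    return {brand: n / len(runs) for brand, n in counts.items()}

def unique_response_fraction(runs):
    """Fraction of responses whose exact ordered brand list appears only once."""
    counts = Counter(tuple(run) for run in runs)
    return sum(1 for run in runs if counts[tuple(run)] == 1) / len(runs)

# Invented sample: four runs of the same prompt
runs = [
    ["Bose", "Sony", "Apple"],
    ["Sony", "Bose", "Sennheiser"],
    ["Bose", "Sony", "Apple"],
    ["Apple", "Bose"],
]
```

On this toy data, Bose appears in every run even though half the ordered lists are one-offs — the same shape as the report's finding that the consideration set is steadier than any single ranking.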

This aligns with other reports we’ve covered. In December, Ahrefs published data showing that Google’s AI Mode and AI Overviews cite different sources 87% of the time for the same query. That report focused on a different question: the same platform but with different features. This SparkToro data examines the same platform and prompt, but with different runs.

The pattern across these studies points in the same direction. AI recommendations appear to vary at every level, whether you’re comparing across platforms, across features within a platform, or across repeated queries to the same feature.

Methodology Notes

The research was conducted in partnership with Gumshoe.ai, which sells AI tracking tools. Fishkin disclosed this and noted that his starting hypothesis was that AI tracking would prove “pointless.”

The team published the full methodology and raw data on a public mini-site. Survey respondents used their normal AI tool settings without standardization, which the authors said was intentional to capture real-world variation.

The report is not peer-reviewed academic research. Fishkin acknowledged methodological limitations and called for larger-scale follow-up work.

Looking Ahead

The authors left open questions about how many prompt runs are needed to obtain reliable visibility data and whether API calls yield the same variation as manual prompts.

When assessing AI tracking tools, the findings suggest you should ask providers to demonstrate their methodology. Fishkin wrote:

“Before you spend a dime tracking AI visibility, make sure your provider answers the questions we’ve surfaced here and shows their math.”


Featured Image: NOMONARTS/Shutterstock

Google Analytics To Become A Growth Engine For Business via @sejournal, @brookeosmundson

On the first episode of the Google Ads Decoded podcast, host Ginny Marvin sat down with Eleanor Stribling, Group Product Manager for Google Analytics.

In the episode, Stribling noted an ambitious two-phase vision for the GA4 platform.

After acknowledging GA4’s rough transition from Universal Analytics, especially for marketers, she shared where the platform is headed over the next few years.

What Stribling Shared on Google Ads Decoded

After discussing the foundational importance of data strength, Stribling broke the vision for GA4 into two timelines.

Over the next year or two, GA4 will focus on becoming a cross-channel, full-funnel measurement platform. She described the goal:

“To be that one place where you can really understand the impact of your media with data that makes sense and resonates and that you can take and make a business decision with.”

This means moving beyond outdated siloed channel reporting to understand how all your media works together across the complete customer journey.

The longer-term vision she shared looks 3+ years beyond what GA4 is capable of today.

Stribling says GA4 will become a decision-making platform for businesses, essentially a growth engine that translates data into business outcomes.

“Making a world-class analyst available to every single person,” is how Stribling described this vision. AI will be the layer that makes this shift possible.

It will be interesting to see how Google builds out this vision over the next few years. Considering Google already has a reporting visualization tool, Looker Studio, my prediction is that there will be tighter or easier integration with it.

Beyond better Looker Studio integration, aiming to become a growth engine or decision-making platform sounds like an attempt to set GA4 apart from competing reporting platforms like Funnel or Power BI.

What’s Coming in the Advertising Workspace

Stribling pointed to the Advertising Workspace in GA4 as an area where marketers will see significant changes over the next year.

Expect improvements to reporting that better illustrate the user journey. Google is also building out budgeting and planning tools that let you upload cost data from other media buys and create spend plans based on your goals.

The platform will also suggest optimizations for in-flight campaigns, offering AI-powered recommendations to help you get closer to your campaign objectives.

Personally, I’m excited to see if they make the Explorer report building any more intuitive for marketers. I think it’s highly under-utilized right now because you’re essentially starting from a blank slate. It takes time, effort, and the right type of mindset to really sit down and try to re-learn an Analytics platform.

Why This Matters & Looking Ahead

GA4’s reputation amongst marketers hasn’t been stellar since it replaced Universal Analytics. In the podcast episode, Marvin, a long-time marketer herself, reiterated that the platform felt designed for developers rather than marketers, and that the transition left many advertisers frustrated.

Stribling’s comments signal that Google has been listening. Google seems to be heavily investing in making GA4 more accessible, while simultaneously building towards a future where the platform goes beyond its traditional reporting.

The two-phase vision shared is ambitious, particularly the long-term vision of GA4 as a business decision engine. Whether Google will move full steam ahead on this remains up in the air, but it seems that the direction GA4 is going is beyond just a measurement tool.

For now, the practical move for marketers is to keep working on your data strength. This includes auditing your tagging setup, testing the AI features that exist today, and reviewing key conversion and event data.

Google’s SAGE Agentic AI Research: What It Means For SEO via @sejournal, @martinibuster

Google published a research paper about creating a challenging dataset for training AI agents for deep research. The paper offers insights into how agentic AI deep research works, which in turn suggests ways to optimize content.

The acronym SAGE stands for Steerable Agentic Data Generation for Deep Search with Execution Feedback.

Synthetic Question And Answer Pairs

The researchers noted that previous state-of-the-art AI training datasets (like Musique and HotpotQA) required no more than four reasoning steps to answer their questions. Musique averaged 2.7 searches per question and HotpotQA averaged 2.1. Another commonly used dataset, Natural Questions (NQ), required an average of only 1.3 searches per question.

These datasets left a training gap for deep search tasks that require more reasoning steps and a greater number of searches. How can you train an AI agent for complex real-world deep search tasks if it has never been trained to tackle genuinely difficult questions?

The researchers created a system called SAGE that automatically generates high-quality, complex question-answer pairs for training AI search agents. SAGE is a “dual-agent” system where one AI writes a question and a second “search agent” AI tries to solve it, providing feedback on the complexity of the question.

  • The goal of the first AI is to write a question that is challenging to answer, requiring many reasoning steps and multiple searches to solve.
  • The goal of the second AI is to measure whether the question is answerable and to calculate how difficult it is (the minimum number of search steps required).

The key to SAGE is that if the second AI solves the question too easily or gets it wrong, the specific steps and documents it found (the execution trace) are fed back to the first AI. This feedback enables the first AI to identify which of four shortcuts allowed the second AI to solve the question in fewer steps.
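To make the dual-agent loop concrete, here is a deliberately simplified toy in Python. This is not the paper's implementation; the corpus, fact names, and revision rule are all invented for illustration. One function plays the search agent, the other plays the generator that revises its question based on the solver's execution trace:

```python
def solve(question, corpus):
    """Toy 'search agent': greedily collects the facts a question needs,
    counting one search hop per document it has to visit."""
    needed = set(question["required_facts"])
    trace = []
    for doc_id, facts in corpus.items():
        found = needed & set(facts)
        if found:
            trace.append(doc_id)  # one search "hop"
            needed -= found
        if not needed:
            break
    return {"solved": not needed, "hops": len(trace), "trace": trace}

def generate_hard_question(corpus, target_hops, max_rounds=10):
    """Toy 'generator': revises the question until the solver needs at
    least target_hops searches — the execution-feedback loop."""
    pool = [fact for facts in corpus.values() for fact in facts]
    question = {"required_facts": [pool[0]]}
    for _ in range(max_rounds):
        result = solve(question, corpus)
        if result["solved"] and result["hops"] >= target_hops:
            return question, result
        # Feedback step: require one more fact. Note that a fact co-located
        # with an already-required one (f2 alongside f1 below) does NOT
        # raise the hop count — the "information co-location" shortcut.
        for fact in pool:
            if fact not in question["required_facts"]:
                question["required_facts"].append(fact)
                break
    return question, solve(question, corpus)

# Tiny invented corpus: doc_a co-locates two facts, the others hold one each.
corpus = {"doc_a": ["f1", "f2"], "doc_b": ["f3"], "doc_c": ["f4"]}
```

Asking this toy generator for a 3-hop question forces it to keep adding required facts until the solver must visit three separate documents — the same pressure SAGE applies at scale.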

It’s these shortcuts that provide insights into how to rank better for deep research tasks.

Four Ways That Deep Research Was Avoided

The goal of the paper was to create question-and-answer pairs so difficult that the AI agent needed multiple steps to solve them. The feedback revealed four shortcuts that made additional searches unnecessary.

Four Reasons Deep Research Was Unnecessary

  1. Information Co-Location
    This is the most common shortcut, accounting for 35% of the times when deep research was not necessary. This happens when two or more pieces of information needed to answer a question are located in the same document. Instead of searching twice, the AI finds both answers in one “hop”.
  2. Multi-query Collapse
    This happened in 21% of cases. It occurs when a single, clever search query retrieves enough information from different documents to solve multiple parts of the problem at once. This “collapses” what should have been a multi-step process into a single step.
  3. Superficial Complexity
    This accounts for 13% of times when deep research was not necessary. The question looks long and complicated to a human, but a search engine (that an AI agent is using) can jump straight to the answer without needing to reason through the intermediate steps.
  4. Overly Specific Questions
    31% of the failures are questions that contain so much detail that the answer becomes obvious in the very first search, removing the need for any “deep” investigation.

The researchers found that some questions look hard but are actually relatively easy because the information is “co-located” in one document. If an agent can answer a 4-hop question in one hop because a single website was comprehensive enough to contain all the answers, that data point is considered a failure for training the agent to reason. But the same shortcut can happen in real life, and an agent will take advantage of finding all the information on one page.

SEO Takeaways

It’s possible to gain some insight into what kinds of content satisfy deep research queries. While these aren’t necessarily tactics for ranking better in agentic AI deep search, they do show what kinds of scenarios caused AI agents to find all or most of the answers on one web page.

“Information Co-location” Could Be An SEO Win
The researchers found that when multiple pieces of information required to answer a question occur in the same document, it reduces the number of search steps needed. For a publisher, this means consolidating “scattered” facts into one page prevents an AI agent from having to “hop” to a competitor’s site to find the rest of the answer.

Triggering “Multi-query Collapse”
The authors identified a phenomenon where information from different documents can be retrieved using a single query. By structuring content to answer several sub-questions at once, you enable the agent to find the full solution on your page faster, effectively “short-circuiting” the long reasoning chain the agent was prepared to undertake.

Eliminating “Shortcuts” (The Reasoning Gap)
The research paper notes that the data generator fails when it accidentally creates a “shortcut” to the answer. As an SEO, your goal is to be that shortcut—providing the specific data points like calculations, dates, or names that allow the agent to reach the final answer without further exploration.

The Goal Is Still To Rank In Classic Search

For an SEO and a publisher, these shortcuts underline the value of creating a comprehensive document, because it removes the trigger for an AI agent to hop somewhere else. This doesn’t mean cramming all the information onto one page is always helpful. If it makes sense for a user, it may be useful to link out from one page to another for related information.

The reason I say that is because the AI agent is conducting classic search looking for answers, so the goal remains to optimize a web page for classic search. Furthermore, in this research, the AI agent is pulling from the top three ranked web pages for each query that it’s executing. I don’t know if this is how agentic AI search works in a live environment, but this is something to consider.

In fact, one of the tests that the researchers did was conducted using the Serper API to extract search results from Google.

So when it comes to ranking in agentic AI search, consider these takeaways:

  • It may be useful to consider the importance of ranking in the top three.
  • Do optimize web pages for classic search.
  • Do not optimize web pages for AI search.
  • If it’s possible to be comprehensive, remain on-topic, and rank in the top three, then do that.
  • Interlink to relevant pages to help those rank in classic search, preferably in the top three (to be safe).

It could be that agentic AI search will consider pulling from more than the top three in classic search. But it may be helpful to set the goal of ranking for the top 3 in classic search and to focus on ranking other pages that may be a part of the multi-hop deep research.

The research paper was published by Google on January 26, 2026. It’s available in PDF form: SAGE: Steerable Agentic Data Generation for Deep Search with Execution Feedback.

Featured Image by Shutterstock/Shutterstock AI Generator

Chrome Updated With 3 AI Features Including Nano Banana via @sejournal, @martinibuster

Gemini in Chrome has just been refreshed with three new features that integrate more Gemini capabilities into Chrome for Windows, macOS, and Chromebook Plus. The update adds an AI side panel, agentic AI Auto Browse, and Nano Banana editing of whatever image is in the browser window.

AI Side Panel For Multitasking

Chrome adds a new side panel that lets users open a session with Gemini without jumping across browser tabs. The feature is described as a time-saver that makes multitasking easier.

Google explains:

“Our testers have been using it for all sorts of things: comparing options across too-many-tabs, summarizing product reviews across different sites, and helping find time for events in even the most chaotic of calendars.”

Opt-In Requirement For AI Chat

Before enabling the side panel AI chat feature, a user must first consent to sending their URLs and browser data back to Google.

Screenshot Of Opt-In Form

Nano Banana In Chrome

Using the AI side panel, users can tell it to update and change an image in the browser window without any copying, downloading, or uploading. Nano Banana changes it right there in the open browser window.

Chrome Autobrowse (Agentic AI)

This feature is for subscribers to Google’s AI Pro and Ultra tiers. Autobrowse enables an agentic AI to take action on behalf of the user. It’s described as being able to research hotels and flights and compare costs across a given range of dates, obtain quotes for work, and check whether bills are paid.

Autobrowse is multimodal, which means it can identify items in a photo, then find where they can be purchased and add them to a cart, including applying any relevant discount codes. If given permission, the AI agent can also access passwords and log in to online stores and services.

Adds More Features To Existing Ones

Google announced on January 12, 2026 that Chrome’s AI was upgraded with app connections, able to connect to Calendar, Gmail, Google Shopping, Google Flights, Maps, and YouTube. This is part of Google’s Personal Intelligence initiative, which it said is Google’s first step toward a more personalized AI assistant.

Personalization And User Intent Extraction For AI Chat And Agents

On a related note, Google recently published a research paper that shows how an on-device and in-browser AI can extract a user’s intent so as to provide better personalized and proactive responses, pointing to how on-device AI may be used in the near future. Read Google’s New User Intent Extraction Method.

Featured Image by Shutterstock/f11photo

Google May Let Sites Opt Out Of AI Search Features via @sejournal, @MattGSouthern

In a blog post, Google says it’s exploring updates that could let websites opt out of AI-powered search features specifically.

The blog post came the same day the UK’s Competition and Markets Authority opened a consultation on potential new requirements for Google Search, including controls for websites to manage their content in Search AI features.

Ron Eden, Principal, Product Management at Google, wrote:

“Building on this framework, and working with the web ecosystem, we’re now exploring updates to our controls to let sites specifically opt out of Search generative AI features.”

Google provided no timeline, technical specifications, or firm commitment. The post frames this as exploration, not a product roadmap.

What’s New

Google currently offers several controls for how content appears in Search, but none cleanly separate AI features from traditional results.

Google-Extended lets publishers block their content from training Gemini and Vertex AI models. But Google’s documentation states Google-Extended doesn’t impact inclusion in Google Search and isn’t a ranking signal. It controls AI training, not AI Overviews appearance.

The nosnippet and max-snippet directives do apply to AI Overviews and AI Mode. But they also affect traditional snippets in regular search results. Publishers wanting to limit AI feature exposure currently lose snippet visibility everywhere.
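The two controls Google currently documents look like this in practice. The first is a robots.txt rule, the second a robots meta directive; the max-snippet value of 50 is just an example:

```
# robots.txt — blocks use of content for Gemini/Vertex AI training;
# per Google's docs, this does not affect inclusion in Google Search
User-agent: Google-Extended
Disallow: /
```

```html
<!-- Applies to AI Overviews and AI Mode — but also trims snippets
     in regular search results, which is the trade-off described above -->
<meta name="robots" content="max-snippet:50">
```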

Google’s post acknowledges this gap exists. Eden wrote:

“Any new controls need to avoid breaking Search in a way that leads to a fragmented or confusing experience for people.”

Why This Matters

I wrote in SEJ’s SEO Trends 2026 ebook that people would have more influence on the direction of search than platforms do. Google’s post suggests that dynamic is playing out.

Publishers and regulators have spent the past year pushing back on AI Overviews. The UK’s Independent Publishers Alliance, Foxglove, and Movement for an Open Web filed a complaint with the CMA last July, asking for the ability to opt out of AI summaries without being removed from search entirely. The US Department of Justice and South African Competition Commission have proposed similar measures.

The BuzzStream study we covered earlier this month found 79% of top news publishers block at least one AI training bot, and 71% block retrieval bots that affect AI citations. Publishers are already voting with their robots.txt files.

Google’s post suggests it’s responding to pressure from the ecosystem by exploring controls it previously didn’t offer.

Looking Ahead

Google’s language is cautious. “Exploring” and “working with the web ecosystem” are not product commitments.

The CMA consultation will gather input on potential requirements. Regulatory processes move slowly, but they do produce outcomes. The EU’s Digital Markets Act investigations have already pushed Google to make changes in Europe.

For now, publishers wanting to limit AI feature exposure can use nosnippet or max-snippet directives, but note that these affect traditional snippets as well. Google’s robots meta tag documentation covers the current options.

If Google follows through on specific opt-out controls, the technical implementation will matter. Whether it’s a new robots directive, a Search Console setting, or something else will determine how practical it is for publishers to use.


Featured Image: ANDRANIK HAKOBYAN/Shutterstock

New Yahoo Scout AI Search Delivers The Classic Search Flavor People Miss via @sejournal, @martinibuster

Yahoo has announced Yahoo Scout, a new AI-powered answer engine now available in beta to users in the United States, providing a clean Classic Search experience with the power of personalized AI. The launch also includes the Yahoo Scout Intelligence Platform, which brings AI features across Yahoo’s core products, including Mail, News, Finance, and Sports.

Screenshot Of Yahoo Scout

Yahoo’s Existing Products and User Reach

Yahoo’s announcement states that it operates some of the most popular websites and services in the United States, reaching what it says is 90% of all US internet users (based on Comscore data) through its email, news, finance, and sports properties. The company says Yahoo Scout builds on a foundation of decades of search behavior and user interaction data.

How Yahoo Scout Generates Answers

Yahoo has partnered with Anthropic to use the Claude model as the primary AI system behind Yahoo Scout. Yahoo’s announcement said it selected Claude for speed, clarity, judgment, and safety, which it described as essential qualities for a consumer-facing answer engine. Yahoo also continues its partnership with Microsoft by using Microsoft Bing’s grounding API, which connects AI-generated answers to information from across the open web. Yahoo said this approach ensures that answers are informed by authoritative sources rather than unsupported text generation.

According to Yahoo, Scout relies on a combination of traditional web search and generative AI to produce answers that are grounded using Microsoft Bing’s grounding API and informed by sources from across the open web.

According to Yahoo:

“It’s informed by 500 million user profiles, a knowledge graph spanning more than 1 billion entities, and 18 trillion consumer events that occur annually across Yahoo, which allow Yahoo Scout to provide effective and personalized answers and suggested actions.”

Yahoo’s announcement says that this data, its use of Claude, and its reliance on Bing for grounding work together to provide answers that are personalized and helpful for researching and making decisions in the “moments that matter” to people.

They explain:

“Yahoo Scout continues Yahoo’s focus on the moments that matter to people’s daily lives, such as understanding upcoming weather patterns before a vacation, getting details about an important game, tracking stock price movements after earnings, comparing products before buying, or fact-checking a news story.”

Where Yahoo Scout Appears Inside Yahoo Products

The Yahoo Scout Intelligence Platform embeds these AI capabilities directly into Yahoo’s existing services.

For example:

  • In Yahoo Mail, Scout supports AI-generated message summaries.
  • In Yahoo Sports, it produces game breakdowns.
  • In Yahoo News, it surfaces key takeaways.
  • In Yahoo Finance, Scout adds interactive tools for analysis that allow readers to explore market news and stock performance context through AI-powered questions.

According to Eric Feng, Senior Vice President and General Manager of Yahoo Research Group:

“Yahoo’s deep knowledge base, 30 years in the making, allows us to deliver guidance that our users can trust and easily understand, and will become even more personalized over the coming months. Yahoo Scout now powers a new generation of intelligence experiences across Yahoo, seamlessly integrated into the products people use every day.”

What Yahoo Says Comes Next

Yahoo said Scout will continue to develop over the coming months. Planned updates include deeper personalization, expanded capabilities within specific verticals, and new formats for search advertising designed to work in generative AI search. The company did not provide a timeline for when the beta period will end or when additional features will move beyond testing.

Yahoo explained:

“Yahoo Scout will continue to evolve in the months ahead, expanding to power new products across Yahoo. In particular, the new answer engine will become more personalized, will add new capabilities focused on deeper experiences within key verticals, and will introduce new, improved opportunities for search advertisers to effectively cross the chasm to generative AI search advertising.”

Yahoo’s Search Experience

Something that’s notable about Yahoo’s AI answer engine experience is how clean and straightforward it is. It’s like a throwback to classic search but with the sophistication of AI answers.

For example, I asked it for information on where to buy an esoteric version of a Levi’s trucker jacket in a specific color (Midnight Harvest), and it presented a clean summary of where to get it, along with a table of retailers ordered by lowest price.

Screenshot Of Yahoo Scout

Notice that there are no product images? It’s just giving me the prices. I don’t know if that’s because they don’t have a product feed, but I already know what the jacket looks like in the color I specified, so images aren’t really necessary. This is what I mean when I say that Yahoo Scout offers that Classic Search flavor without the busy, overly fussy search experience that Google has been providing lately.

With Yahoo Scout, the company is applying AI systems to tasks its users perform when they search for, read, or compare information online. Rather than positioning AI as a replacement for search or content platforms, Yahoo is using it as a tool that organizes, summarizes, and explains information in a clean and easy to read format.

Yahoo Scout is easy to like because it delivers the clean and uncluttered search experience that many people miss.

Check out Yahoo Scout at scout.yahoo.com.

The Yahoo Scout app is available for Android and Apple devices.

Google AI Overviews Now Powered By Gemini 3 via @sejournal, @MattGSouthern

Google is making Gemini 3 the default model for AI Overviews in markets where the feature is available and adding a direct path into AI Mode conversations.

The updates, shared in a Google blog post, bring Gemini 3’s reasoning capabilities to AI Overviews. Google says the feature now reaches over one billion users.

What’s New

Gemini 3 For AI Overviews

The Gemini 3 upgrade brings the same reasoning capabilities to AI Overviews that previously powered AI Mode.

Robby Stein, VP of Product for Google Search, wrote:

“We’re rolling out Gemini 3 as the default model for AI Overviews globally, so even more people will be able to access best-in-class AI responses, directly in the results page for questions where it’s helpful.”

Gemini 3 launched in November, and Google shipped it to AI Mode on release day. This expands Gemini 3 from AI Mode into AI Overviews as the default.

AI Overview To AI Mode Transition

You can now ask a follow-up question right from an AI Overview and continue into AI Mode. The context from the original response carries into the conversation, so you don’t start over.

Stein described the thinking behind the change:

“People come to Search for an incredibly wide range of questions – sometimes to find information quickly, like a sports score or the weather, where a simple result is all you need. But for complex questions or tasks where you need to explore a topic deeply, you should be able to seamlessly tap into a powerful conversational AI experience.”

He called the result “one fluid experience with prominent links to continue exploring.”

An earlier test of this flow ran globally on mobile back in December.

In testing, Google found people prefer this kind of natural flow into conversation. The company also found that keeping AI Overview context in follow-ups makes Search more helpful.

Why This Matters

The pattern has held since AI Overviews launched. Each update makes it easier to stay within AI-powered responses.

When Gemini 3 arrived in AI Mode, it brought deeper query fan-out and dynamic response layouts. AI Overviews running on the same model could produce different citation patterns.

That makes today’s update an important one to monitor. Model changes can affect which pages get cited and how responses are structured.

Looking Ahead

Google says the updates are rolling out starting today, though availability may vary by market.

Google previously indicated plans to add automatic model selection that routes complex questions to Gemini 3 while using faster models for simpler tasks. Whether that affects AI Overviews beyond today’s default model change isn’t specified.


Featured Image: Darshika Maduranga/Shutterstock