Google Rolls Out ‘Preferred Sources’ For Top Stories In Search via @sejournal, @MattGSouthern

Google is rolling out a new setting that lets you pick which news outlets you want to see more often in Top Stories.

The feature, called Preferred Sources, is launching today in English in the United States and India, with broader availability in those markets over the next few days.

What’s Changing

Preferred Sources lets you choose one or more outlets that should appear more frequently when they have fresh, relevant coverage for your query.

Google will also show a dedicated “From your sources” section on the results page. You will still see reporting from other publications, so Top Stories remains a mix of outlets.

Google Product Manager Duncan Osborn says the goal is to help you “stay up to date on the latest content from the sites you follow and subscribe to.”

How To Turn It On

Image Credit: Google
  1. Search for a topic that is in the news.
  2. Tap the icon to the right of the Top stories header.
  3. Search for and select the outlets you want to prioritize.
  4. Refresh the results to see the updated mix.

You can update your selections at any time. If you previously opted in to the experiment through Labs, your saved sources will carry over.

In early testing through Labs, more than half of participants selected four or more sources. That suggests people value seeing a range of outlets while still leaning toward publications they trust.

Why It Matters

For publishers, Preferred Sources creates a direct way to encourage loyal readers to see more of your coverage in Search.

Loyal audiences are more likely to add your site as a preferred source, which can increase the likelihood of showing up for them when you have fresh, relevant reporting.

You can point your audience to the new setting and explain how to add your site to their list. Google has also published help resources for publishers that want to promote the feature to followers and subscribers.

This adds another personalization layer on top of the usual ranking factors. Google says you will still see a diversity of sources, and that outlets only appear more often when they have new, relevant content.

Looking Ahead

Preferred Sources fits into Google’s push to let you customize Search while keeping a variety of perspectives in Top Stories.

If you have a loyal readership, this feature is another reason to invest in retention and newsletters, and to make it easy for readers to follow your coverage on and off Search.

Google Says AI-Generated Content Should Be Human Reviewed via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI content is fine as long as the quality is high. He said that “human created” isn’t precisely the right way to describe their AI content policy, and that a more accurate description would be “human curated.”

The questions were asked by Kenichi Suzuki in the context of an exclusive interview with Illyes.

AI Overviews and AI Mode Models

Suzuki asked about the AI models used for AI Overviews and AI Mode, and Illyes confirmed that they are custom Gemini models.

Illyes answered:

“So as you noted, the model that we use for AIO (for AI Overviews) and for AI Mode is a custom Gemini model, and that might mean that it was trained differently. I don’t know the exact details of how it was trained, but it’s definitely a custom model.”

Suzuki then asked if AI Overviews (AIO) and AI Mode use separate indexes for grounding.

Grounding is when an LLM connects its answers to a database or a search index so that responses are more reliable, truthful, and based on verifiable facts, which helps cut down on hallucinations. In the context of AIO and AI Mode, grounding generally happens with web-based data from Google’s index.
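
To make that concrete, here is a minimal sketch of search-grounded generation. The helper functions, prompt format, and query strategy are illustrative assumptions, not Google’s implementation.

```python
# Minimal sketch of search-grounded generation; illustrative only, not Google's implementation.
# `web_search` and `generate` are hypothetical stand-ins for a real search API and LLM client.

def web_search(query: str, num_results: int = 5) -> list[dict]:
    # Stand-in: a real implementation would call a search API here.
    return [{"title": "Example result", "url": "https://example.com", "snippet": "…"}]

def generate(prompt: str) -> str:
    # Stand-in: a real implementation would call an LLM here.
    return "Answer grounded in the retrieved sources."

def grounded_answer(question: str) -> str:
    # 1. Issue one or more search queries (AIO and AI Mode reportedly issue several).
    results = web_search(question)

    # 2. Build a prompt that restricts the model to the retrieved snippets.
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the sources below. "
        "Cite sources by number, and say 'not found' if they don't cover it.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

    # 3. Generate the grounded answer.
    return generate(prompt)

print(grounded_answer("What models power AI Overviews?"))
```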

Suzuki asked:

“So, does that mean that AI Overviews and AI Mode use separate indexes for grounding?”

Google’s Illyes answered:

“As far as I know, Gemini, AI Overviews, and AI Mode all use Google Search for grounding. So basically they issue multiple queries to Google Search, and then Google Search returns results for those particular queries.”

Suzuki was trying to get an answer regarding the Google-Extended crawler, and Illyes responded by explaining when Google-Extended comes into play. Suzuki asked:

“So does that mean that the training data used by AIO and AI Mode are collected by regular Google and not Google-Extended?”

And Illyes answered:

“You have to remember that when grounding happens, there’s no AI involved. So basically it’s the generation that is affected by Google-Extended. But also, if you disallow Google-Extended, then Gemini is not going to ground for your site.”
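
Google-Extended is controlled through robots.txt like any other user agent token. As a rough illustration of the mechanics (this is not a Google tool, and the domain is a placeholder), the standard-library snippet below checks whether a site’s robots.txt would allow the Google-Extended token:

```python
# Rough illustration: check whether a site's robots.txt allows the Google-Extended token.
# This only inspects robots.txt rules; it is not a Google tool, and example.com is a placeholder.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

page = "https://example.com/article.html"
print("Google-Extended allowed:", rp.can_fetch("Google-Extended", page))
print("Googlebot allowed:", rp.can_fetch("Googlebot", page))
```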

AI Content In LLMs And Search Index

The next question that Illyes answered was about whether AI content published online is polluting LLMs. Illyes said that this is not a problem with the search index, but it may be an issue for LLMs.

Suzuki’s question:

“As more content is created by AI and LLMs learn from that content, what are your thoughts on this trend, and what are its potential drawbacks?”

Illyes answered:

“I’m not worried about the search index, but model training definitely needs to figure out how to exclude content that was generated by AI. Otherwise you end up in a training loop, which is really not great for training. I’m not sure how much of a problem this is right now, or maybe it’s because of how we select the documents that we train on.”

Content Quality And AI-Generated Content

Suzuki then followed up with a question about content quality and AI.

He asked:

“So you don’t care how the content is created… so as long as the quality is high?”

Illyes confirmed that a leading consideration for LLM training data is content quality, regardless of how it was generated. He specifically cited the factual accuracy of the content as an important factor. Another factor he mentioned is that content similarity is problematic, saying that “extremely” similar content shouldn’t be in the search index.

He also said that Google essentially doesn’t care how the content is created, but with some caveats:

“Sure, but if you can maintain the quality of the content and the accuracy of the content and ensure that it’s of high quality, then technically it doesn’t really matter.

The problem starts to arise when the content is either extremely similar to something that was already created, which hopefully we are not going to have in our index to train on anyway.

And then the second problem is when you are training on inaccurate data and that is probably the riskier one because then you start introducing biases and they start introducing counterfactual data in your models.

As long as the content quality is high, which typically nowadays requires that the human reviews the generated content, it is fine for model training.”

Human Reviewed AI-Generated Content

Illyes continued his answer, this time focusing on AI-generated content that is reviewed by a human. He emphasized human review not as something that publishers need to signal in their content, but as something that publishers should do before publishing the content.

Again, “human reviewed” does not mean adding wording on a web page that the content is human reviewed; that is not a trustworthy signal, and it is not what he suggested.

Here’s what Illyes said:

“I don’t think that we are going to change our guidance any time soon about whether you need to review it or not.

So basically when we say that it’s human, I think the word ‘human created’ is wrong. Basically, it should be ‘human curated.’ So basically someone had some editorial oversight over their content and validated that it’s actually correct and accurate.”

Takeaways

Google’s policy, as loosely summarized by Gary Illyes, is that AI-generated content is fine for search and model training if it is factually accurate, original, and reviewed by humans. This means that publishers should apply editorial oversight to validate the factual accuracy of content and to ensure that it is not “extremely” similar to existing content.

Watch the interview:

Featured Image by Shutterstock/SuPatMaN

Google Says AI-Generated Content Will Not Cause Ranking Penalty via @sejournal, @martinibuster

Google’s Gary Illyes recently answered the question of whether AI-generated images used together with “legit” content can impact rankings. Illyes said they have no direct impact on SEO, and called attention to a possible technical side effect involving server resources.

Does Google Penalize for AI-Generated Content?

How does Google react to AI image content when it’s encountered in the context of a web page? Google’s Gary Illyes answered that question within the context of a Q&A and offered some follow-up observations about how it could lead to extra traffic from Google Image Search. The question was asked at about the ten-minute mark of the interview conducted by Kenichi Suzuki and published on YouTube.

This is the question that was asked:

“Say there’s content where the content itself is legit, the sentences are legit, and there are also a lot of images that are relevant to the content, but all of them, let’s say all of them, are generated by AI. Will that content, or the overall site, be penalized or not?”

This is an important and reasonable question because Google ran an update about a year ago that appeared to de-rank low-quality AI-generated content.

Gary Illyes’ answer was clear that AI-generated images will not result in penalization and have no direct impact on SEO.

He answered:

“No, no. So AI generated image doesn’t impact the SEO. Not direct.

So obviously when you put images on your site, you will have to sacrifice some resources to those images… But otherwise you are not going to, I don’t think that you’re going to see any negative impact from that.

If anything, you might get some traffic out of image search or video search or whatever, but otherwise it should just be fine.”

AI-Generated Content

Gary Illyes did not discuss authenticity; however, it is a good thing to consider in the context of using AI-generated images. Authenticity is an important quality for users, especially where there is an expectation that an illustration is a faithful depiction of an actual outcome or product. For example, users expect product illustrations to accurately reflect the products they are purchasing, and images of food to reasonably represent the completed dish after following the recipe instructions.

Google often says that content should be created for users and that many questions about SEO are adequately answered by the context of how users will react to it. Illyes did not reflect on any of that, but it is something that publishers should consider if they care about how content resonates with users.

Gary’s answer makes it clear that AI-generated images, on their own, will not have a negative impact on SEO.

Featured Image by Shutterstock/Besjunior

Brave Announces AI Grounding API With Plans Starting At Free via @sejournal, @martinibuster

Brave Search announced the release of AI Grounding with the Brave Search API, a way to ground AI systems in web search results to reduce hallucinations and improve answers. The API is available in Free, Base AI, and Pro AI plans.

The Brave Search API is for developers and organizations that want to ground their AI applications in authoritative web information. The API supports agentic search, foundation model training, and building search-enabled applications.

State-Of-The-Art (SOTA) Performance

Brave’s announcement says its AI Grounding API achieves state-of-the-art performance in both single-search and multi-search configurations, outperforming competitors on accuracy, and claims it can answer more than half of all benchmark questions with a single search.

According to Brave:

“Brave can answer more than half of the questions in the benchmark using a single search, with a median response time of 24.2 seconds. On average (arithmetic mean), answering these questions involves issuing 7 search queries, analyzing 210 unique pages (containing 6,257 statements or paragraphs), and takes 74 seconds to complete. The fact that most questions can be resolved with just a single query underscores the high quality of results returned by Brave Search.”
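
For developers evaluating the API, a grounding-style request looks roughly like the sketch below. The endpoint, header, and response fields follow Brave’s public Search API documentation as of this writing, but treat them as assumptions and confirm against the current API reference.

```python
# Sketch of calling the Brave Search API from Python to fetch grounding context.
# Endpoint, header, and field names are based on Brave's public docs at the time of
# writing; verify against the current API reference before relying on them.
import os
import requests

API_KEY = os.environ["BRAVE_API_KEY"]  # key issued with a Free, Base AI, or Pro AI plan

resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    headers={"Accept": "application/json", "X-Subscription-Token": API_KEY},
    params={"q": "What did Brave announce about AI grounding?", "count": 5},
    timeout=30,
)
resp.raise_for_status()

# Pass titles, URLs, and snippets to your LLM as grounding context.
for result in resp.json().get("web", {}).get("results", []):
    print(result.get("title"), "-", result.get("url"))
```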

Pricing

There are three pricing tiers:

  • Free AI
    1 query/second and a limit of 5,000 queries/month
  • Base AI
    $5.00 per 1,000 requests
    A limit of up to 20 queries/second
    20M queries/month
    Rights to use in AI apps
  • Pro AI
    $9.00 per 1,000 requests
    A limit of up to 50 queries/second
    Unlimited queries/month
    Rights to use in AI apps

Brave’s AI Grounding API offers a reliable way to supply AI systems and apps with trustworthy information from across the web. Its independence and privacy practices make it a viable choice for developers building search-enabled AI applications.

Read Brave’s announcement:

Introducing AI Grounding with Brave Search API, providing enhanced search performance in AI applications

Featured Image by Shutterstock/Mamun_Sheikh

Google Is Testing An AI-Powered Finance Page via @sejournal, @martinibuster

Google announced that they’re testing a new AI-powered Google Finance tool. The new tool enables users to ask natural language questions about finance and stocks, get real-time information about financial and cryptocurrency topics, and access new charting tools that visualize the data.

Three Ways To Access Data

Google’s AI finance page offers three ways to explore financial data:

  1. Research
  2. Charting Tools
  3. Real-Time Data And News

Screenshot Of Google Finance

The screenshot above shows a watchlist panel on the left, a chart in the middle, a “latest updates” section beneath that, and a “research” section in the right-hand panel.

Research

The new finance page enables users to ask natural language questions about finance, including the stock market, and the AI will return comprehensive answers, plus links to the websites where the relevant answers can be found.

Closeup Screenshot Of Research Section

Charting Tools

Google’s finance page also features charting tools that enable users to visualize financial data.

According to Google:

“New, powerful charting tools will help you visualize financial data beyond simple asset performance. You can view technical indicators, like moving average envelopes, or adjust the display to see candlestick charts and more.”
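
For readers unfamiliar with the indicator Google mentions, a moving average envelope is simply a moving average with upper and lower bands offset by a fixed percentage. Here is a minimal pandas sketch with made-up prices; it illustrates the math, not Google’s implementation.

```python
# Minimal sketch of a moving average envelope: a simple moving average plus
# upper/lower bands offset by a fixed percentage. Prices below are dummy data.
import pandas as pd

def moving_average_envelope(close: pd.Series, window: int = 20, pct: float = 0.025) -> pd.DataFrame:
    ma = close.rolling(window).mean()  # simple moving average
    return pd.DataFrame({
        "ma": ma,
        "upper": ma * (1 + pct),  # band pct above the average
        "lower": ma * (1 - pct),  # band pct below the average
    })

closes = pd.Series([100, 101, 102, 103, 104, 105, 106, 107, 108, 109] * 3)
print(moving_average_envelope(closes, window=5).tail())
```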

Real-Time Data

The new finance page also provides real-time data and tools, enabling users to explore finance news, including cryptocurrency information. This part features a live news feed.

The AI-powered page will roll out over the next few weeks on Google.com/finance/.

Read more at Google:

We’re testing a new, AI-powered Google Finance.

Featured Image by Shutterstock/robert_s

Google Cautions Businesses Against Generic Keyword Domains via @sejournal, @MattGSouthern

Google’s John Mueller says small businesses may be hurting their search visibility by choosing generic keyword domains instead of building distinctive brand names.

Speaking on a recent episode of Search Off the Record, Mueller and fellow Search Advocate Martin Splitt discussed common challenges for photography websites.

During the conversation, Mueller noted that many small business owners fall into a “generic domain” trap that can make it harder to connect the business name with its work.

Why Keyword Domains Can Be a Problem

The topic came up when Splitt mentioned that his photography site uses a German term for “underwater photo” as its domain. Mueller responded:

“I see a lot of small businesses make the mistake of taking a generic term and calling it their brand.”

He explained that businesses choosing keyword-rich domains often end up competing with directories, aggregators, and other established sites targeting the same phrases.

Even if the domain name exactly matches a service, there’s little room to stand out in search.

The Advantage Of A Distinct Brand

Mueller contrasted this with using a unique business name:

“If your brand were Martin Splitt Photos then people would be able to find you immediately.”

When customers search for a brand they remember, competition drops. Mentions and links from other websites also become clearer signals to search engines, reducing the chance of confusion with similarly named businesses.

Lost Opportunities For Word-of-Mouth

Relying on a generic keyword domain can also make offline marketing less effective.

If a potential client hears about a business at an event but can’t remember its exact generic name, finding it later becomes more difficult.

Mueller noted:

“If you’ve built up a reputation as being kind of this underwater photography guy and they remember your name, it’s a lot easier to find you with a clear brand name.”

Why This Matters

For service providers like photographers, event planners, or contractors, including the service and location in a domain name can feel like a shortcut to local rankings.

Mueller’s advice suggests otherwise: location targeting can be achieved through content, structured data, and Google Business Profile optimization, without giving up a distinctive brand.
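
As an illustration, a photographer with a branded domain can still signal location and service area with LocalBusiness structured data rather than a keyword domain. The sketch below builds minimal JSON-LD; every business detail is a placeholder.

```python
# Minimal sketch: LocalBusiness structured data lets a branded domain signal its
# location without a keyword-stuffed name. All business details are placeholders.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Martin Splitt Photos",       # brand name, not a generic keyword
    "url": "https://example.com/",        # placeholder domain
    "description": "Underwater photography services",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Zurich",      # placeholder city
        "addressCountry": "CH",
    },
    "areaServed": "Zurich",
}

# Embed the output in a <script type="application/ld+json"> tag on the site.
print(json.dumps(local_business, indent=2))
```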

Looking Ahead

While Mueller didn’t recommend immediate rebrands for existing sites, he made it clear that unique, brandable domains give small businesses a defensible advantage in search and marketing.

For those still choosing a domain, the long-term benefits of memorability and differentiation can outweigh any short-term keyword gains.

Listen to the full podcast episode below:


Featured Image: Roman Samborskyi/Shutterstock

Google: Unique Image Landing Pages Can Help Boost Search Visibility via @sejournal, @MattGSouthern

Google’s John Mueller says giving each image its own landing page can help it appear in image search, while gallery setups may limit visibility.

  • Google recommends unique landing pages for important images instead of JavaScript-only galleries or URL fragments.
  • Responsive images and modern formats improve user experience but aren’t direct ranking factors.
  • Auditing your site’s image URLs could reveal search visibility gains you’re currently missing.

OpenAI Launches GPT-5 In ChatGPT To All Users via @sejournal, @MattGSouthern

OpenAI has released GPT-5, now the default model in ChatGPT for all users, including those on the free tier.

The new model is positioned as OpenAI’s most capable and reliable system to date. OpenAI emphasizes a stronger focus on accuracy, instruction following, and long-form reasoning.

Available Now To All ChatGPT Users

For the first time, OpenAI is making its latest flagship model available to free users.

GPT-5 is rolling out now to Free, Plus, Pro, and Team accounts, with support for Enterprise and Education expected next week.

Free-tier access includes basic usage of GPT-5, with requests routed to a smaller “GPT-5 mini” variant once limits are reached.

Paid subscribers receive higher usage limits, and Pro users gain access to GPT-5 Pro, a version designed for more complex, resource-intensive tasks.

Accuracy, Reasoning, and Transparency Take Priority

According to OpenAI, GPT-5 significantly reduces hallucinated facts and is more likely to admit when it lacks the context to provide a reliable answer.

Evaluations show GPT-5 produces 45% fewer factual errors than GPT-4o, with up to 80% fewer errors when deeper reasoning is enabled.

The model also performs better on benchmarks tied to real-world problem-solving, such as coding, legal analysis, and health-related queries.

What’s Changed in the System

Rather than a single model, GPT-5 acts as a dynamic system that automatically decides whether to respond quickly or think more deeply, depending on prompt complexity.

Users can also request explicit reasoning with natural language prompts like “think hard about this.”
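
For teams that work with the API rather than the ChatGPT interface, the same nudge can be expressed in a request. The sketch below uses the OpenAI Python SDK; the “gpt-5” model identifier and its API availability are assumptions based on this announcement, so check OpenAI’s documentation before relying on them.

```python
# Sketch: nudging the model toward deeper reasoning with a natural-language instruction.
# The "gpt-5" model identifier and its API availability are assumptions based on this
# announcement; check OpenAI's documentation for the actual model names.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # assumed identifier
    messages=[
        {"role": "user",
         "content": "Think hard about this: outline a QA checklist for AI-assisted articles."},
    ],
)
print(response.choices[0].message.content)
```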

Other updates include:

  • A redesigned safety system that favors partially helpful answers over blanket refusals
  • Reduced sycophantic responses and more honest communication about limitations
  • Improvements in coding performance, including front-end UI generation and debugging
  • Support for multimodal input, including charts and images

Looking Ahead

GPT-5 is positioned less as a revolutionary jump and more as an effort to build trust through accuracy, reliability, and broader access.

For SEOs and digital marketers who rely on AI tools for drafting, analysis, or ideation, GPT-5’s improvements may help reduce the time spent verifying or correcting outputs.


Featured Image: JarTee/Shutterstock

Google Expands Performance Max Controls and Reporting via @sejournal, @brookeosmundson

Google Ads just dropped another wave of updates to Performance Max today.

For those who’ve been asking for better audience targeting, clearer reporting on new customer acquisition, and more transparency around auto-generated assets, these updates will feel like long-overdue upgrades.

Let’s break down what’s new, why it matters, and how advertisers should respond.

What’s New in Performance Max

Google has announced three core areas of updates for Performance Max campaigns:

  1. Expanded audience and campaign controls
  2. Improved new customer acquisition reporting and diagnostics
  3. More granular creative reporting and AI-powered asset recommendations

Most are either rolling out now or available broadly, with some elements in beta. Let’s walk through the details.

Expanded Controls Over Audience Targeting and Search Inventory

Performance Max has long leaned on automation, sometimes at the expense of control. Google is slowly changing that, and this release continues that shift.

Campaign-Level Negative Keyword Lists

Advertisers can now apply negative keyword lists across Performance Max campaigns. Previously, campaign-level negatives had to be managed individually, which created friction for accounts with dozens of asset groups.

With this update, advertisers can centralize keyword exclusions. For example, a brand could exclude terms like “cheap” or “free” across multiple luxury or premium product campaigns.

Campaign-level negative keyword lists in Performance Max. Image credit: Google, August 2025

You still have the option to apply unique negative keywords to individual campaigns, but this rollout makes managing brand suitability far more scalable.

More Search Themes per Asset Group

Google has doubled the search theme limit from 25 to 50 per asset group. This matters for brands that want to influence where their Performance Max ads show up in Search, without leaning on historical keyword builds.

By expanding your search theme input, you’re giving Google more information to better match your ads to queries. It also helps widen your eligible inventory while staying relevant.

Device and Demographic Targeting Updates

You can now fully control which device types your Performance Max campaigns appear on, something that was previously only partially available.

For example, a gaming company can restrict campaigns to mobile devices, or a B2B advertiser can exclude tablets entirely.

Age targeting is now also available, allowing advertisers to exclude or target specific age ranges.

Google is also testing gender-based demographic targeting in beta. These controls bring Performance Max closer in line with what’s long been possible in Search, Display, and YouTube campaigns.

New Customer Acquisition Reporting Gets Smarter

One of the most frustrating parts of new customer acquisition bidding has been the vague “Unknown” label in reporting. That’s changing with today’s updates in Performance Max reporting.

No More “Unknown” Conversions

In lifecycle reporting for new vs. returning customers, Google previously bucketed a portion of conversions as “Unknown”. This left advertisers with limited visibility into actual performance.

Google has now improved the backend logic that determines if a user is new or existing, meaning those “Unknown” labels should be gone moving forward.

This matters for two key reasons:

  • You can now get a more accurate read on how many new customers you’re acquiring.
  • Bidding strategies that rely on new customer signals will become more effective as the data improves.

For even more precision, Google encourages advertisers to update their conversion tracking tags to include the new customer acquisition parameter. This signals to Google whether a conversion is from a new or returning customer, based on first-party data.

New Goal Diagnostics and Recommendations

Alongside the reporting improvements, Google has added new diagnostics that surface goal-related issues in Performance Max.

These include broken or missing conversion tags, goal misconfigurations, or other tracking issues that could be holding back performance.

The diagnostics come with actionable recommendations to help advertisers resolve the problem. While this might not be the most glamorous update, it will save time and frustration during campaign setup and troubleshooting.

Creative Reporting and Asset Control Get a Boost

Asset transparency in Performance Max has been a long-standing pain point. While things have improved in the last year, these new changes go further.

Final URL Expansion Asset Reporting

Advertisers can now view reporting for assets generated through Final URL Expansion (FUE). This is Google’s feature that dynamically creates assets based on landing page content.

You’ll be able to see what text and visuals were created through FUE and how they performed.

Expanded Final URL reporting in Google Ads. Image credit: Google, August 2025

More importantly, if you don’t like what Google created, you now have the ability to remove those assets from your campaign.

This is a big win for brands concerned about creative consistency, especially when it comes to legal language or brand tone. While FUE can be useful for scale, it hasn’t always produced on-brand results. So, this added visibility is a welcome change.

AI-Powered Creative Recommendations

Performance Max will now generate image-specific recommendations to help you improve performance. These suggestions will include both what types of visuals to add and how to optimize existing ones for better performance on various channels (like YouTube vs. Discover).

New creative asset recommendations in Google Ads. Image credit: Google, August 2025

Best of all, these recommendations link directly into the built-in AI-powered image editor in Google Ads, so you can make changes right inside the platform without needing to re-upload or redesign assets elsewhere.

It’s clear Google wants advertisers to take a more active role in creative strategy, even inside an automated campaign structure.

Wrapping Up

Google is clearly listening to advertisers’ calls for more transparency and control. These updates to Performance Max mark another step toward striking a better balance between scale and strategy.

While not every advertiser will need to use every new feature, the option to do so means there’s more room to tailor Performance Max campaigns to your business goals, creative preferences, and customer insights.

Whether you’re looking to fine-tune audience reach, fix tracking issues, or clean up your creative assets, there’s something in this update that’s worth your attention.

Ecosia & Qwant Launch European Search Infrastructure via @sejournal, @MattGSouthern

Ecosia has begun delivering its own search results for the first time in its 16-year history, starting with users in France who will receive a portion of results from a new European search index developed jointly with Qwant.

The rollout marks the first implementation of the European Search Perspective (EUSP) joint venture, which has created Staan (Search Trusted API Access Network), a privacy-focused search infrastructure designed for Europe.

Current Implementation & Timeline

French users are now receiving search results directly from EUSP’s independent European index. Ecosia aims to serve 30% of French search queries through the new infrastructure by the end of 2025.

In a statement to Tech.eu, Christian Kroll, CEO of Ecosia, said:

“Having our own search infrastructure is a critical step for digital plurality and for building a sovereign European alternative. With more control over our offering, we can better serve users, develop ethical AI, and double down on our mission to build tech that benefits people and the planet.”

Technical Independence

Ecosia and Qwant have historically relied on syndication platforms from major US tech companies. The new infrastructure allows both companies to deliver results independently and make backend improvements without relying on external providers.

The broader goal is to reduce reliance on digital infrastructure controlled by foreign companies.

Open Index, Structured For Growth

EUSP isn’t limited to Ecosia and Qwant. The index is open to other companies building search or generative AI tools.

It is also structured to allow outside investment, unlike Ecosia’s steward-owned model, where 99.99% of shares belong to a foundation.

Kroll said the goal is to create an infrastructure that supports competition and innovation in Europe while maintaining strong privacy protections:

“This isn’t just about better search. It’s about the freedom to build and shape the future of tech in Europe.”

Looking Ahead

Ecosia’s partnership with Qwant could lead to more diversity in how European users access and interact with search.

While the initial rollout is limited to France, the infrastructure is designed to scale and support other companies and markets over time.


Featured Image: George Khelashvili/Shutterstock