Google: Invalid Ad Traffic From Deceptive Serving Down 40% via @sejournal, @MattGSouthern

Google cites a 40% drop in invalid ad traffic from deceptive serving, helping protect budgets and keep billing clean for advertisers.

  • Google reports a 40% reduction in invalid traffic from deceptive or disruptive serving.
  • Google now reviews content, placements, and interactions more precisely.
  • Advertisers are not charged for invalid traffic, with credits applied after detection.
Critical Vulnerability Affects Tutor LMS Pro WordPress Plugin via @sejournal, @martinibuster

An advisory was issued about a critical vulnerability in the popular Tutor LMS Pro WordPress plugin. The vulnerability, rated 8.8 out of 10, allows an authenticated attacker to extract sensitive information from the WordPress database. It affects all versions up to and including 3.7.0.

Tutor LMS Pro Vulnerability

The vulnerability results from improper handling of user-supplied data, enabling attackers to inject SQL code into a database query. The Wordfence advisory explains:

“The Tutor LMS Pro – eLearning and online course solution plugin for WordPress is vulnerable to time-based SQL Injection via the ‘order’ parameter used in the get_submitted_assignments() function in all versions up to, and including, 3.7.0 due to insufficient escaping on the user supplied parameter and lack of sufficient preparation on the existing SQL query.”

Time-Based SQL Injection

A time-based SQL injection attack is one in which an attacker determines whether a query is valid by measuring how long the database takes to respond. An attacker could use the vulnerable order parameter to insert SQL code that delays the database’s response. By timing these delays, the attacker can deduce information stored in the database.
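To make the mechanism concrete, here is a minimal Python sketch of the general bug class, not the plugin’s actual PHP code: an ordering parameter concatenated into SQL versus one restricted to a whitelist. The table, columns, and function names are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assignments (id INTEGER, title TEXT)")

def get_assignments_unsafe(order: str):
    # VULNERABLE: the caller-supplied value lands directly in the SQL text.
    # ORDER BY clauses cannot be bound as parameters, so an attacker can
    # append a time-delay expression and infer database contents by timing
    # how long each response takes.
    query = f"SELECT id, title FROM assignments ORDER BY {order}"
    return conn.execute(query).fetchall()

ALLOWED_COLUMNS = {"id", "title"}

def get_assignments_safe(order: str):
    # SAFER: only values from a fixed whitelist ever reach the SQL text.
    column = order if order in ALLOWED_COLUMNS else "id"
    return conn.execute(f"SELECT id, title FROM assignments ORDER BY {column}").fetchall()
```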

Why This Vulnerability Is Dangerous

While exploitation requires authenticated access, a successful exploit could expose sensitive information stored in the database. Updating to the latest version, 3.7.1 or higher, is recommended.

Featured Image by Shutterstock/Ollyy

Vulnerability In 3 WordPress File Plugins Affects 1.3 Million Sites via @sejournal, @martinibuster

An advisory was issued for three WordPress file management plugins affected by a vulnerability that allows unauthenticated attackers to delete arbitrary files. The three plugins are installed on over 1.3 million websites.

Outdated Version Of elFinder

The vulnerability is caused by outdated versions of the elFinder file manager, specifically versions 2.1.64 and earlier. These versions contain a Directory Traversal vulnerability that allows attackers to manipulate file paths to reach outside the intended directory. By sending requests with sequences such as example.com/../../../../, an attacker could make the file manager access and delete arbitrary files.
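As a general illustration of the standard defense, not elFinder’s actual patch, the Python sketch below resolves a user-supplied path and rejects anything that escapes the intended base directory. The directory name is a placeholder.

```python
from pathlib import Path

BASE_DIR = Path("/var/www/uploads").resolve()  # placeholder upload root

def resolve_safely(user_supplied: str) -> Path:
    """Reject paths that escape BASE_DIR via ../ sequences or symlinks."""
    candidate = (BASE_DIR / user_supplied).resolve()
    # A traversal payload such as "../../../../etc/passwd" resolves outside
    # BASE_DIR and is rejected before any read or delete operation runs.
    if candidate != BASE_DIR and BASE_DIR not in candidate.parents:
        raise PermissionError(f"Path escapes the allowed directory: {user_supplied}")
    return candidate
```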

Affected Plugins

Wordfence named the following three plugins as affected by this vulnerability:

1. File Manager WordPress Plugin
Installations: 1 Million

2. Advanced File Manager – Ultimate WP File Manager And Document Library Solution
Installations: 200,000+

3. File Manager Pro – Filester
Installations: 100,000+

According to the Wordfence advisory, the vulnerability can be exploited without authentication, but only if a site owner has made the file manager publicly accessible, which limits the likelihood of exploitation. That said, two of the plugins indicated in their changelogs that an attacker needs at least subscriber-level authentication, the lowest level of WordPress credentials.

Once exploited, the flaw allows deletion of arbitrary files. Users of the named WordPress plugins should update to the latest versions.

Featured Image by Shutterstock/Lili1992

WordPress Contact Form Entries Plugin Vulnerability Affects 70K Websites via @sejournal, @martinibuster

A vulnerability advisory was issued for a WordPress plugin that saves contact form submissions. The flaw enables unauthenticated attackers to delete files, launch a denial-of-service attack, or perform remote code execution. The vulnerability was given a severity rating of 9.8 out of 10, indicating the seriousness of the issue.

Database for Contact Form 7, WPForms, Elementor Forms Plugin

The Database for Contact Form 7, WPForms, Elementor Forms plugin, also known as the Contact Form Entries plugin, saves contact form entries to the WordPress database. It enables users to view contact form submissions, search them, mark them as read or unread, export them, and perform other functions. The plugin has over 70,000 installations.

The plugin is vulnerable to PHP Object Injection by an unauthenticated attacker, which means that an attacker does not need to log in to the website to launch the attack.

A PHP object is a data structure in PHP. PHP objects can be turned into a sequence of characters (serialized) so they can be stored, and later deserialized (turned back into objects). The flaw behind this vulnerability is that the plugin deserializes attacker-controlled data, allowing an unauthenticated attacker to inject an untrusted PHP object.

If the WordPress site also has the Contact Form 7 plugin installed, the injected object can trigger a POP (Property Oriented Programming) chain during deserialization.

According to the Wordfence advisory:

“This makes it possible for unauthenticated attackers to inject a PHP Object. The additional presence of a POP chain in the Contact Form 7 plugin, which is likely to be used alongside, allows attackers to delete arbitrary files, leading to a denial of service or remote code execution when the wp-config.php file is deleted.”
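The PHP specifics aside, the underlying risk is common to any language that deserializes untrusted input. As a loose Python analogy, not the plugin’s code: unpickling attacker-controlled bytes can execute arbitrary logic, which is why untrusted submissions should be parsed with a data-only format such as JSON.

```python
import json
import pickle

def load_entry_unsafe(raw: bytes):
    # UNSAFE: analogous to calling PHP's unserialize() on untrusted input.
    # Deserialization can instantiate arbitrary objects, and methods invoked
    # during or after deserialization (a POP chain in PHP terms) can be
    # abused for file deletion or code execution.
    return pickle.loads(raw)

def load_entry_safe(raw: bytes):
    # SAFER: treat form submissions as plain data, never as objects.
    return json.loads(raw.decode("utf-8"))
```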

All versions of the plugin up to and including 1.4.3 are vulnerable. Users are advised to update to the latest version, which as of this writing is 1.4.5.

Featured Image by Shutterstock/tavizta

Google Rolls Out ‘Preferred Sources’ For Top Stories In Search via @sejournal, @MattGSouthern

Google is rolling out a new setting that lets you pick which news outlets you want to see more often in Top Stories.

The feature, called Preferred Sources, is launching today in English in the United States and India, with broader availability in those markets over the next few days.

What’s Changing

Preferred Sources lets you choose one or more outlets that should appear more frequently when they have fresh, relevant coverage for your query.

Google will also show a dedicated “From your sources” section on the results page. You will still see reporting from other publications, so Top Stories remains a mix of outlets.

Google Product Manager Duncan Osborn says the goal is to help you “stay up to date on the latest content from the sites you follow and subscribe to.”

How To Turn It On

Image Credit: Google
  1. Search for a topic that is in the news.
  2. Tap the icon to the right of the Top stories header.
  3. Search for and select the outlets you want to prioritize.
  4. Refresh the results to see the updated mix.

You can update your selections at any time. If you previously opted in to the experiment through Labs, your saved sources will carry over.

In early testing through Labs, more than half of participants selected four or more sources. That suggests people value seeing a range of outlets while still leaning toward publications they trust.

Why It Matters

For publishers, Preferred Sources creates a direct way to encourage loyal readers to see more of your coverage in Search.

Loyal audiences are more likely to add your site as a preferred source, which can increase the likelihood of showing up for them when you have fresh, relevant reporting.

You can point your audience to the new setting and explain how to add your site to their list. Google has also published help resources for publishers that want to promote the feature to followers and subscribers.

This adds another personalization layer on top of the usual ranking factors. Google says you will still see a diversity of sources, and that outlets only appear more often when they have new, relevant content.

Looking Ahead

Preferred Sources fits into Google’s push to let you customize Search while keeping a variety of perspectives in Top Stories.

If you have a loyal readership, this feature is another reason to invest in retention and newsletters, and to make it easy for readers to follow your coverage on and off Search.

Google Says AI-Generated Content Should Be Human Reviewed via @sejournal, @martinibuster

Google’s Gary Illyes confirmed that AI content is fine as long as the quality is high. He said that “human created” isn’t precisely the right way to describe their AI content policy, and that a more accurate description would be “human curated.”

The questions were asked by Kenichi Suzuki in the context of an exclusive interview with Illyes.

AI Overviews and AI Mode Models

Kenichi asked about the AI models used for AI Overviews and AI Mode, and Illyes said that they are custom Gemini models.

Illyes answered:

“So as you noted, the the model that we use for AIO (for AI Overviews) and for AI mode is a custom Gemini model and that might mean that it was trained differently. I don’t know the exact details, how it was trained, but it’s definitely a custom model.”

Kenichi then asked if AI Overviews (AIO) and AI Mode use separate indexes for grounding.

Grounding is where an LLM will connect answers to a database or a search index so that answers are more reliable, truthful, and based on verifiable facts, helping to cut down on hallucinations. In the context of AIO and AI Mode, grounding generally happens with web-based data from Google’s index.
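As a rough sketch of what grounding looks like in code (the function names below are placeholders, not Google’s implementation): the system issues one or more search queries, collects the returned documents, and feeds them into the prompt so the generated answer is tied to retrieved sources.

```python
def answer_with_grounding(question: str, search_fn, generate_fn) -> str:
    """Hypothetical grounding loop: retrieve first, then generate from the results.

    search_fn and generate_fn stand in for a real search backend and a real
    LLM call; neither reflects how Google's systems are actually built.
    """
    results = search_fn(question)  # issue search queries for the question
    context = "\n\n".join(r["snippet"] for r in results)  # assumes snippet-style results
    prompt = (
        "Answer the question using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate_fn(prompt)
```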

Suzuki asked:

“So, does that mean that AI Overviews and AI Mode use separate indexes for grounding?”

Google’s Illyes answered:

“As far as I know, Gemini, AI Overview and AI Mode all use Google search for grounding. So basically they issue multiple queries to Google Search and then Google Search returns results for that those particular queries.”

Kenichi was trying to get an answer regarding the Google-Extended crawler, and Illyes’s response explained when Google-Extended comes into play. Suzuki asked:

“So does that mean that the training data are used by AIO and AI Mode collected by regular Google and not Google Extended?”

And Illyes answered:

“You have to remember that when grounding happens, there’s no AI involved. So basically it’s the generation that is affected by the Google extended. But also if you disallow Google Extended then Gemini is not going to ground for your site.”
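For context, Google-Extended is a robots.txt product token rather than a separate crawler that fetches pages; a site that does not want its content used for Gemini training or grounding disallows it with a rule like the following:

```
User-agent: Google-Extended
Disallow: /
```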

AI Content In LLMs And Search Index

The next question that Illyes answered was about whether AI content published online is polluting LLMs. Illyes said that this is not a problem with the search index, but it may be an issue for LLMs.

Kenichi’s question:

“As more content is created by AI, and LLMs learn from that content. What are your thoughts on this trend and what are its potential drawbacks?”

Illyes answered:

“I’m not worried about the search index, but model training definitely needs to figure out how to exclude content that was generated by AI. Otherwise you end up in a training loop which is really not great for for training. I’m not sure how much of a problem this is right now, or maybe because how we select the documents that we train on.”

Content Quality And AI-Generated Content

Suzuki then followed up with a question about content quality and AI.

He asked:

“So you don’t care how the content is created… so as long as the quality is high?”

Illyes confirmed that a leading consideration for LLM training data is content quality, regardless of how it was generated. He specifically cited the factual accuracy of the content as an important factor. Another factor he mentioned is that content similarity is problematic, saying that “extremely” similar content shouldn’t be in the search index.

He also said that Google essentially doesn’t care how the content is created, but with some caveats:

“Sure, but if you can maintain the quality of the content and the accuracy of the content and ensure that it’s of high quality, then technically it doesn’t really matter.

The problem starts to arise when the content is either extremely similar to something that was already created, which hopefully we are not going to have in our index to train on anyway.

And then the second problem is when you are training on inaccurate data and that is probably the riskier one because then you start introducing biases and they start introducing counterfactual data in your models.

As long as the content quality is high, which typically nowadays requires that the human reviews the generated content, it is fine for model training.”

Human Reviewed AI-Generated Content

Illyes continued his answer, this time focusing on AI-generated content that is reviewed by a human. He emphasized human review not as something publishers need to signal in their content, but as something they should do before publishing.

Again, “human reviewed” does not mean adding wording on a web page that the content is human reviewed; that is not a trustworthy signal, and it is not what he suggested.

Here’s what Illyes said:

“I don’t think that we are going to change our guidance any time soon about whether you need to review it or not.

So basically when we say that it’s human, I think the word human created is wrong. Basically, it should be human curated. So basically someone had some editorial oversight over their content and validated that it’s actually correct and accurate.”

Takeaways

Google’s policy, as loosely summarized by Gary Illyes, is that AI-generated content is fine for search and model training if it is factually accurate, original, and reviewed by humans. This means that publishers should apply editorial oversight to validate the factual accuracy of content and to ensure that it is not “extremely” similar to existing content.

Watch the interview:

Featured Image by Shutterstock/SuPatMaN

Google Says AI-Generated Content Will Not Cause Ranking Penalty via @sejournal, @martinibuster

Google’s Gary Illyes recently answered the question of whether AI-generated images used together with “legit” content can impact rankings. He discussed whether they have an impact on SEO and called attention to a technical consideration involving server resources.

Does Google Penalize for AI-Generated Content?

How does Google react to AI image content when it’s encountered in the context of a web page? Google’s Gary Illyes answered that question within the context of a Q&A and offered some follow-up observations about how it could lead to extra traffic from Google Image Search. The question was asked at about the ten-minute mark of the interview conducted by Kenichi Suzuki and published on YouTube.

This is the question that was asked:

“Say if there’s a content that the content itself is legit, the sentences are legit but and also there are a lot of images which are relevant to the content itself, but all of them, let’s say all of them are generated by AI. Will that content or the overall site, is it going to be penalized or not?”

This is an important and reasonable question because Google ran an update about a year ago that appeared to de-rank low quality AI-generated content.

Google’s Gary Illyes’ answer was clear that AI-generated images will not result in penalization and that they have no direct impact on SEO.

He answered:

“No, no. So AI generated image doesn’t impact the SEO. Not direct.

So obviously when you put images on your site, you will have to sacrifice some resources to those images… But otherwise you are not going to, I don’t think that you’re going to see any negative impact from that.

If anything, you might get some traffic out of image search or video search or whatever, but otherwise it should just be fine.”

AI-Generated Content

Gary Illyes did not discuss authenticity; however, it is worth considering in the context of using AI-generated content. Authenticity is an important quality for users, especially where there is an expectation that an illustration is a faithful depiction of an actual outcome or product. For example, users expect product illustrations to accurately reflect the products they are purchasing and images of food to reasonably represent the completed dishes after following the recipe instructions.

Google often says that content should be created for users and that many questions about SEO are adequately answered by the context of how users will react to it. Illyes did not reflect on any of that, but it is something that publishers should consider if they care about how content resonates with users.

Gary’s answer makes it clear that AI-generated images will not have a negative impact on SEO.

Featured Image by Shutterstock/Besjunior

Brave Announces AI Grounding API With Plans Starting At Free via @sejournal, @martinibuster

Brave Search announced the release of AI Grounding with the Brave Search API, a way to ground AI systems in web search results in order to reduce hallucinations and improve answers. The API is available in Free, Base AI, and Pro AI plans.

The Brave Search API is for developers and organizations that want to add AI grounding from authoritative web information to their AI applications. The Brave API supports agentic search, foundation model training, and creating search-enabled applications.
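As a hedged sketch of how a developer might call a web search API for grounding: the endpoint path and header name below follow Brave’s publicly documented Search API as best understood, but should be verified against the current documentation before use, and the API key is a placeholder.

```python
import requests

API_KEY = "YOUR_BRAVE_API_KEY"  # placeholder subscription token

def brave_web_search(query: str) -> list[dict]:
    # Endpoint and header as documented for the Brave Search API; confirm
    # against the current docs, since details may change.
    response = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={"X-Subscription-Token": API_KEY, "Accept": "application/json"},
        params={"q": query},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("web", {}).get("results", [])

# The titles, URLs, and snippets returned here can then be passed to an LLM
# as grounding context.
```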

State-Of-The-Art (SOTA) Performance

Brave’s announcement says that its AI Grounding API enables state-of-the-art performance in both single-search and multi-search configurations, outperforming competitors in accuracy, and claims it can answer more than half of all benchmark questions with a single search.

According to Brave:

“Brave can answer more than half of the questions in the benchmark using a single search, with a median response time of 24.2 seconds. On average (arithmetic mean), answering these questions involves issuing 7 search queries, analyzing 210 unique pages (containing 6,257 statements or paragraphs), and takes 74 seconds to complete. The fact that most questions can be resolved with just a single query underscores the high quality of results returned by Brave Search.”

Pricing

There are three pricing tiers:

  • Free AI
    1 query/second and a limit of 5,000 queries/month
  • Base AI
    $5.00 per 1,000 requests
    A limit of up to 20 queries/second
    20M queries/month
    Rights to use in AI apps
  • Pro AI
    $9.00 per 1,000 requests
    A limit of up to 50 queries/second
    Unlimited queries/month
    Rights to use in AI apps

Brave’s AI Grounding API offers a reliable way to supply AI systems and apps with trustworthy information from across the web. Its independence and privacy practices make it a viable choice for developers building search-enabled AI applications.

Read Brave’s announcement:

Introducing AI Grounding with Brave Search API, providing enhanced search performance in AI applications

Featured Image by Shutterstock/Mamun_Sheikh

Google Is Testing An AI-Powered Finance Page via @sejournal, @martinibuster

Google announced that they’re testing a new AI-powered Google Finance tool. The new tool enables users to ask natural language questions about finance and stocks, get real-time information about financial and cryptocurrency topics, and access new charting tools that visualize the data.

Three Ways To Access Data

Google’s AI finance page offers three ways to explore financial data:

  1. Research
  2. Charting Tools
  3. Real-Time Data And News

Screenshot Of Google Finance

The screenshot above shows a watchlist panel on the left, a chart in the middle, a “latest updates” section beneath that, and a “research” section in the right-hand panel.

Research

The new finance page enables users to ask natural language questions about finance, including the stock market, and the AI will return comprehensive answers, plus links to the websites where the relevant answers can be found.

Closeup Screenshot Of Research Section

Charting Tools

Google’s finance page also features charting tools that enable users to visualize financial data.

According to Google:

“New, powerful charting tools will help you visualize financial data beyond simple asset performance. You can view technical indicators, like moving average envelopes, or adjust the display to see candlestick charts and more.”
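For readers unfamiliar with the indicator Google mentions, a moving average envelope is a moving average with bands set a fixed percentage above and below it. A minimal pandas sketch, with the 20-period window and 5% band chosen purely for illustration:

```python
import pandas as pd

def moving_average_envelope(close: pd.Series, window: int = 20, pct: float = 0.05) -> pd.DataFrame:
    """Simple moving average plus upper and lower envelope bands."""
    sma = close.rolling(window).mean()
    return pd.DataFrame({
        "sma": sma,
        "upper": sma * (1 + pct),  # band above the moving average
        "lower": sma * (1 - pct),  # band below the moving average
    })

# Usage with a hypothetical series of closing prices:
# envelope = moving_average_envelope(pd.Series(closing_prices))
```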

Real-Time Data

The new finance page also provides real-time data and tools, enabling users to explore finance news, including cryptocurrency information. This part features a live news feed.

The AI-powered page will roll out over the next few weeks on Google.com/finance/.

Read more at Google:

We’re testing a new, AI-powered Google Finance.

Featured Image by Shutterstock/robert_s

Google Cautions Businesses Against Generic Keyword Domains via @sejournal, @MattGSouthern

Google’s John Mueller says small businesses may be hurting their search visibility by choosing generic keyword domains instead of building distinctive brand names.

Speaking on a recent episode of Search Off the Record, Mueller and fellow Search Advocate Martin Splitt discussed common challenges for photography websites.

During the conversation, Mueller noted that many small business owners fall into a “generic domain” trap that can make it harder to connect the business name with its work.

Why Keyword Domains Can Be a Problem

The topic came up when Splitt mentioned that his photography site uses a German term for “underwater photo” as its domain. Mueller responded:

“I see a lot of small businesses make the mistake of taking a generic term and calling it their brand.”

He explained that businesses choosing keyword-rich domains often end up competing with directories, aggregators, and other established sites targeting the same phrases.

Even if the domain name exactly matches a service, there’s little room to stand out in search.

The Advantage Of A Distinct Brand

Mueller contrasted this with using a unique business name:

“If your brand were Martin Splitt Photos then people would be able to find you immediately.”

When customers search for a brand they remember, competition drops. Mentions and links from other websites also become clearer signals to search engines, reducing the chance of confusion with similarly named businesses.

Lost Opportunities For Word-of-Mouth

Relying on a generic keyword domain can also make offline marketing less effective.

If a potential client hears about a business at an event but can’t remember its exact generic name, finding it later becomes more difficult.

Mueller noted:

“If you’ve built up a reputation as being kind of this underwater photography guy and they remember your name, it’s a lot easier to find you with a clear brand name.”

Why This Matters

For service providers like photographers, event planners, or contractors, including the service and location in a domain name can feel like a shortcut to local rankings.

Mueller’s advice suggests otherwise: location targeting can be achieved through content, structured data, and Google Business Profile optimization, without giving up a distinctive brand.
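As a hedged illustration of the structured-data part of that advice, the sketch below builds a schema.org LocalBusiness block in Python; the business name is borrowed from Mueller’s example, and every other value is a placeholder rather than a recommendation.

```python
import json

# Hypothetical LocalBusiness markup for a photography studio. Field values
# are placeholders; schema.org/LocalBusiness documents the full vocabulary.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Martin Splitt Photos",           # distinctive brand, per Mueller's example
    "description": "Underwater photography",  # the service lives in content, not the domain
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Example City",
    },
    "url": "https://example.com",
}

# Emit JSON-LD for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```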

Looking Ahead

While Mueller didn’t recommend immediate rebrands for existing sites, he made it clear that unique, brandable domains give small businesses a defensible advantage in search and marketing.

For those still choosing a domain, the long-term benefits of memorability and differentiation can outweigh any short-term keyword gains.

Listen to the full podcast episode below:


Featured Image: Roman Samborskyi/Shutterstock