Report Links Original Research to Higher B2B ROI via @sejournal, @MattGSouthern

TopRank Marketing and Ascend2 released a survey of 797 B2B leaders, finding that 97% consider thought leadership essential to success across the entire marketing funnel.

The report frames the findings as building an “Answer Engine” for how buyers discover information across SEO and GenAI answer platforms (ChatGPT, Perplexity, AI search).

The authors analyzed differences between high-ROI and lower-performing marketers. Here’s what stands out.

What The Research Says

Respondents report strong results from original, data-driven content. 93% of teams that use original research say it effectively drives engagement and leads, and 48% call it “very effective.”

When asked to compare formats, 35% rated original research significantly more valuable than AI-generated content for building trust, and another 32% said it’s more impactful overall.

The study positions trusted experts and partners as part of a broader “trust system” that validates research-based content. The takeaway is quality over quantity. Partnerships work best when they add credibility and insight, not just reach.

Formats & Distribution

Marketers point to video, live or virtual events, and interactive experiences as the most effective vehicles for thought leadership.

Topic selection is guided primarily by customer signals. Direct customer feedback leads at 53%, followed by CRM/customer data and market-trend analysis at 44% each. Seasonal moments and industry events also influence planning.

High-performing teams run integrated multi-channel programs that weave SEO, advertising, experts and partners, media, email, and social into a cohesive plan.

Barriers To Success

One persistent barrier is channel concentration. About one-third cite over-reliance on a few channels or tactics as a top reason programs underperform.

Measurement is a key friction point: 41% cite difficulty proving ROI as a cause of underperforming content. High-performing programs use full-funnel analytics linking brand metrics to demand and revenue.

Why This Matters

The survey data questions two common strategies: relying on AI-generated content to foster trust and viewing SEO solely as a top-of-funnel tactic. Original research tends to be more trusted, which aligns with longer B2B sales cycles.

Additionally, successful programs link SEO to multi-channel activation and pipeline development. If your analytics don’t connect search performance with closed-won deals, that disconnect likely accounts for inconsistent ROI.

Looking Ahead

For 2026, center your plan on original research and treat search and GenAI answer platforms as connected discovery surfaces. Pair the research with credible experts, then extend it through video, events, and interactive pieces where it fits.


Featured Image: Roman Samborskyi/Shutterstock

Google Is Not Diminishing The Use Of Structured Data In 2026 via @sejournal, @martinibuster

A recent announcement on the Google Search Central blog gave a Redditor the impression that Google was significantly reducing the use of structured data, causing them to ask if it’s worthwhile to use it anymore.

The person on Reddit posted:

“Google just posted a new update — they’re removing support for some structured data types starting in January 2026. Dataset already works only in Dataset Search, and rich results are getting more selective.

So… is schema still worth it? Or are we moving past it entirely?”

Matt Southern covered the blog post (Google Deprecates Practice Problem Structured Data In Search), focusing on the specific structured data Google was deprecating. Google’s blog post, authored by John Mueller, could, if skimmed, be read as more alarming than it was intended to be.

Google’s announcement explained:

“We’re constantly working to simplify the search results page, so that it’s quick and easy to find the information and websites you’re looking for. As part of this effort, we regularly evaluate all of our existing features to make sure they’re still useful, both for people searching on Google and for website owners.

Through this process, we’ve identified some features that aren’t being used very often and aren’t adding significant value to users. In these cases, we’ve found that other advancements on the search results page are able to get people what they’re looking for more seamlessly. So we’re beginning to phase these lesser-used features out.

For most searches, you likely won’t notice a major difference — most of these features didn’t trigger often and weren’t interacted with much by users. But overall, this update will simplify the page and improve the speed of search results.”

The post ends with the following sentence:

“Starting in January 2026, we’ll remove support for the structured data types in Search Console and its API.”

Google’s Search Features Are Always Changing

Someone responded to the initial post to reassure them that Google’s search features, and the structured data that triggers them, are always changing. That’s true. Google Search has consistently been in a state of change, and never more visibly on the front end than it is today with AI search.

Google’s John Mueller replied to the Redditor who noted that Google is constantly changing, affirming that markup types (which include Schema.org structured data) are always changing.

He responded:

“Exactly. Understand that markup types come and go, but a precious few you should hold on to (like title, and meta robots).”

Structured Data Curation Is Automatic

Keeping up with Schema.org structured data is easy with any modern content management system, either through plugins or as part of native functionality, because these tools track Google’s structured data guidance. So in general, it’s not something that a publisher or SEO needs to think about. Publishers on WordPress just need to keep their plugins updated.

Featured Image by Shutterstock/pathdoc

How To Cultivate Brand Mentions For Higher AI Search Rankings via @sejournal, @martinibuster

Building brand awareness has long been an important but widely overlooked part of SEO. AI search has brought this activity to the forefront. The following ideas should help in forming a strategy for achieving brand name mentions at scale, with the goal of achieving similar ubiquity in AI search results.

Tell People About The Site

SEOs and businesses can become overly concerned with getting links and forget that the more important thing to do is to get the word out about a website. A website must have unique qualities that will positively impress people and make them enthusiastic about the brand. If the site you’re trying to build traffic to lacks those unique qualities, then building links or brand awareness can become a futile activity.

User behavior signals have been a part of Google’s algorithms since Navboost began influencing rankings around 2004, and the recent Google antitrust lawsuit shows that user behavior signals have continued to play a role. What has changed is that SEOs have noticed that AI search results tend to recommend sites that are recommended by other sites, i.e., brand mentions.

The key to all of this has been to tell other sites about your site and make it clear to potential consumers or website visitors what makes your site special.

  • So the first task is always to make a site special in every possible way.
  • The second task is to tell others about the site in order to build word of mouth and top-of-mind brand presence.

Optimizing a website for users and cultivating awareness of that site are the building blocks of the external signals of authoritativeness, expertise, and popularity that Google always talks about.

Downside of Backlink Searches

Everyone knows how to do a backlink search with third-party tools, but a lot of the data consists of garbage-y sites; that’s not the tool’s fault, it’s just the state of the Internet. In any case, a backlink search is limited: it doesn’t surface the conversations real people are having about a website.

In my experience, a better way to do it is to identify all instances of where a site is linked from another site or discussed by another site.

Brand And Link Mentions

Some websites have bookmark and resource pages. These are low hanging fruit.

Search for a competitor’s links:

example.com site:.com “bookmarks” -site:example.com

example.com site:.com “resources” -site:example.com

The “-site:example.com” operator removes the competitor’s site from the search results, showing you just the sites that mention the full URL, which may or may not be linked.

The TLD segmented variants are:

example.com site:.net "resources" 
example.com site:.org "resources" 
example.com site:.edu "resources" 
example.com site:.ai "resources" 
example.com site:.net "links" 
example.com site:.org "links" 
example.com site:.edu "links" 
example.com site:.ai "links" 
Etc.
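These variant queries follow one simple template, so they can be generated in bulk rather than typed by hand. Here is a minimal Python sketch (the TLD and page-type lists are illustrative, and example.com stands in for the competitor’s domain):

```python
# Generate TLD-segmented Google queries for finding bookmark/resource/links pages
# that mention a competitor's domain. Lists below are illustrative, not exhaustive.
TLDS = [".com", ".net", ".org", ".edu", ".ai"]
PAGE_TYPES = ["bookmarks", "resources", "links"]

def segmented_queries(domain: str) -> list[str]:
    queries = []
    for tld in TLDS:
        for page in PAGE_TYPES:
            # -site: excludes the competitor's own site from the results.
            queries.append(f'{domain} site:{tld} "{page}" -site:{domain}')
    return queries

for q in segmented_queries("example.com"):
    print(q)
```

Paste each generated line into Google as-is; the quotation marks around the page-type keyword keep the match exact.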

The goal is not necessarily to get links. It’s to build awareness of the site and build popularity.

Brand Mentions By Company Name

One way to identify brand mentions is to search by company name using the TLD segmentation technique. Making a broad search for a company’s name will only get you some of the brand mentions. Segmenting the search by TLD will reveal a wider range of sites.

Segmented Brand Mention Search

The following assumes that the competitor’s site is on the .com domain and you’re limiting the search to .com websites.

Competitor's Brand Name site:.com -site:example.com

Segmented Variants:

Competitor's Brand Name site:.org
Competitor's Brand Name site:.edu
Competitor's Brand Name site:reddit.com
Competitor's Brand Name site:.io
etc.

Sponsored Articles

Sponsored articles are indexed by search engines and ranked in AI search surfaces like AI Mode and ChatGPT. These can present opportunities to purchase a sponsored post that lets you deliver your message, with nofollowed links and a prominent “sponsored post” disclaimer at the top of the page – all in compliance with Google and FTC guidelines.

Brand Mentions: Authoritativeness Is Key

The thing that some SEOs never learned is that authoritativeness is important, and quite likely millions of dollars have been wasted paying for links from low-quality blogs instead of mentions on higher-quality sites.

ChatGPT and AI Mode have been found to recommend sites that are mentioned on high-quality, authoritative sites. Don’t waste time or money paying for mentions on low-quality sites.

Some Ways To Search

Product/Service/Solution Search

Name Of Product Or Service Or Problem Needing Solving site:.com “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.net “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.org “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.edu “sponsored article”
Name Of Product Or Service Or Problem Needing Solving site:.io “sponsored article”
etc.

Sponsored Post Variant

Name Of Product Or Service Or Problem Needing Solving site:.com “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.net “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.org “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.edu “sponsored post”
Name Of Product Or Service Or Problem Needing Solving site:.io “sponsored post”
etc.

Key insight: Test whether “sponsored post” or “sponsored article” provides better results or just more results. Using quotation marks, or if necessary the Verbatim search tool, stops Google from stemming the query and prevents it from showing a mix of both “post” and “article” results. By forcing Google to be specific, you surface results that a broader, stemmed search would bury.

Competitor Search

Competitor’s Brand Name site:.com “sponsored post”
Competitor’s Brand Name site:.net “sponsored post”
Competitor’s Brand Name site:.org “sponsored post”
Competitor’s Brand Name site:.edu “sponsored post”
Competitor’s Brand Name site:.io “sponsored post”
etc.

Pure Awareness Building With Zero Internet Presence

This method of getting the word out is pure gold, especially for B2B but also for professional businesses such as those in the legal niche. There are organizations and associations that print magazines or send out newsletters to thousands, sometimes tens of thousands, of people who are an exact match for the audience you want to build top-of-mind brand name recognition with.

Emails and magazines do not have links, and that’s okay. The goal is to build brand name recognition with positive associations. What better way than getting interviewed in a newsletter or magazine? What better way than submitting an article to a newsletter or magazine?

Don’t Forget PDF Magazines

Not all magazines are print; many are distributed as PDFs or in proprietary web formats. For example, I subscribe to a surf fishing magazine that is published entirely in a proprietary web format that can only be viewed by subscribers. If I were a fishing company, I would make an effort to meet some of the article authors, in addition to the publishers, at fishing industry conferences where they appear as presenters and in product booths.

This kind of outreach is in-person; it’s called relationship building.

Getting back to the industry organizations and associations, this is an entire topic in itself and I’ll follow up with another article, but many of the techniques covered in this guide will work with this kind of brand building.

Using the filetype search operator in combination with the TLD segmentation will yield some of these kinds of brand building opportunities.

[product/service/keyword/niche] filetype:pdf site:.com newsletter
[product/service/keyword/niche] filetype:pdf site:.org newsletter

1. Segment The Opportunity Search By TLD
Segmenting by TLD (.net/.com/.org/.us/.edu, etc.) will help you discover different kinds of brand building opportunities. Websites on a Dot Org domain often link to a site for different reasons than a Dot Com website. Dot Org domains often represent article-writing projects, free links on a links page, newsletter article opportunities, and charity link opportunities, just to name a few.

2. Consider Segmenting Dot Com Searches
The Dot Com TLD will yield an overabundance of search results, not all of them useful. This makes it imperative to segment the results to find all available opportunities.

Ways to segment the Dot Com are by:

  • A. Kinds of sites (blog/shopping related keywords/product or service keywords/forum/etc.)
    This is pretty straightforward. If you’re looking for brand mentions, be sure to add keywords to the searches that are directly relevant to what your business is about. If your site is about car injuries, then sites about cars, as well as specific makes, models, and kinds of automobiles, are how you would segment a .com search.
  • B. Context – Audience Relevance Not Keyword Match
    Context of a sponsored article is important. This is not about whether the website content matches what your site, business, product, or service is about. What’s important is to identify whether the audience reach is an exact match to the audience that will be interested in your product, business, or service.
  • C. Quality And Authoritativeness
    This is not about third-party metrics related to links. This is just about making a common sense judgment about whether a site where you want a mention is well-regarded by those who are likely to be interested in your brand. That’s it.

Takeaway

The thing I want you to walk away with is that it’s useful to just tell people about a site and to get as many people as possible aware of it. Identify opportunities and ways to get them to tell a friend. There is no better recommendation than one from a friend or a trusted organization. This is the true source of authoritativeness and popularity.

Featured Image by Shutterstock/Bird stocker TH

Google AI Overviews Appear On 21% Of Searches: New Data via @sejournal, @MattGSouthern

Ahrefs analyzed 146 million search results to determine which query types trigger AI Overviews. The research tracked AIO appearance across 86 keyword characteristics.

Here’s a concise look at the patterns and how they may affect your strategy.

What The Analysis Found

AI Overviews appear on 20.5% of all keywords. Specific query types show notable variance, with some categories hitting 60% trigger rates while others stay below 2%.

Patterns Observed Across Query Types

Single-word queries activate AIOs only 9.5% of the time, whereas queries with seven or more words trigger them 46.4% of the time. This correlation indicates that Google primarily uses AIOs for complex informational searches rather than simple lookups.

The question format also shows a similar trend: question-based queries result in AIOs 57.9% of the time, while non-question queries have a much lower rate of 15.5%.

The most significant distinctions are seen by intent. Informational queries make up 99.9% of all AIO appearances, while navigational queries trigger AIOs just 0.09% of the time. Commercial queries account for 4.3%, and transactional queries for 2.1%.

Patterns Observed Across Industry Categories

Science queries have an AIO rate of 43.6%, while health queries are at 43.0%, and pets & animals reach 36.8%. People & society questions result in AIOs 35.3% of the time.

In contrast, commerce categories exhibit opposite trends. Shopping queries are associated with AIOs only 3.2% of the time, the lowest in the dataset. Real estate remains at 5.8%, sports at 14.8%, and news at 15.1%.

YMYL queries display unexpectedly high trigger rates. Medical YMYL searches trigger AI Overviews 44.1% of the time, financial YMYL hits 22.9%, and safety YMYL reaches 31.0%.

These findings contradict Google’s focus on expert content for topics that could impact health, financial security, or safety.

Queries With Low Presence Of AI Overviews

6.3% of “very newsy” keywords trigger AI Overviews, while 20.7% of non-news queries display AIOs.

The pattern indicates that Google deliberately limits AIOs for time-sensitive content where accuracy and freshness are essential.

Local searches demonstrate a similar trend, with only 7.9% of local queries showing AI Overviews compared to 22.8% for non-local queries.

NSFW content consistently avoids AIOs across categories: adult queries trigger AIOs 1.5% of the time, gambling 1.4%, and violence 7.7%. Drug-related queries have the highest NSFW trigger rate at 12.6%, yet this remains well below the baseline.

Brand vs. Non-Brand

Branded keywords show marked differences compared to non-branded ones. Non-branded queries trigger AIOs 24.9% of the time, whereas branded queries do so 13.1% of the time.

The data indicates that AIOs occur 1.9 times more frequently for generic searches than for brand-specific lookups.
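That multiplier is simply the ratio of the two trigger rates reported above; a quick sanity check:

```python
# Reported trigger rates from the Ahrefs dataset (percentages).
non_branded_rate = 24.9  # non-branded queries triggering AIOs
branded_rate = 13.1      # branded queries triggering AIOs

# Ratio of generic to brand-specific AIO frequency.
ratio = non_branded_rate / branded_rate
print(round(ratio, 1))  # 1.9
```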

No Correlation With CPC

CPC shows no meaningful correlation with AIO appearance. Keyword cost-per-click values don’t affect trigger rates across any price range tested, with rates hovering between 12.4% and 27.6% regardless of commercial value.

Why This Matters

Publishers focused on informational content encounter the greatest AIO exposure. Question-based and how-to guides align closely with Google’s trigger criteria, putting educational content publishers at the highest risk of traffic loss.

Medical content has the highest category-specific AIO rate, despite concerns about AI accuracy in health advice.

Ecommerce and news publishers are relatively less affected by AIOs. The low trigger rates for shopping and news queries indicate these sectors experience less AI-driven traffic disruption compared to informational sites.

Looking Ahead

Using this data, publishers can review their current keyword portfolios to identify AIO exposure patterns. The most reliable indicators are query intent and length, with industry category and question format also playing significant roles.

AIO exposure varies considerably across different industry categories, with differences exceeding 40 percentage points between the highest and lowest. Content strategies need to consider this variation at the category level instead of assuming consistent baseline risk across all topics.

For a more in-depth examination of this data, see the full analysis.


Featured Image: Zorion Art Production/Shutterstock

Meta Projected $16B From Scam Ads, Internal Docs Show via @sejournal, @MattGSouthern

Advertisers on Meta may be unknowingly competing against suspected scam ads that stay in auctions at higher “penalty bid” prices.

Internal documents obtained by Reuters estimate that around 10% of Meta’s 2024 ad revenue, approximately $16 billion, would come from scam ads and banned goods.

Although Meta disagrees with these estimates, the real impact for advertisers includes potential increases in CPM, brand safety concerns, and uneven enforcement risks.

What Advertisers Should Know

Meta reportedly displays an estimated 15 billion “higher-risk” scam advertisements daily across Facebook, Instagram, and WhatsApp.

Meta earns about $7 billion annually just from these higher-risk scam ads that show clear signs of fraud, a late 2024 document states.

The company bans advertisers only when its automated systems are at least 95% certain they are committing fraud. Advertisers below that threshold face higher ad rates as a penalty but can continue running campaigns.

Internal Review: Easier To Run Scams On Meta Than Google

An internal Meta review concluded it’s easier to advertise scams on its platforms than on Google. The document doesn’t explain why.

Meta restricted anti-scam enforcement in the first half of the year to actions costing no more than 0.15% of total revenue, or approximately $135 million. A manager overseeing the effort wrote: “Let’s be cautious. We have specific revenue guardrails.”

Company spokesman Andy Stone said the internal estimates were “rough and overly-inclusive” and included many legitimate ads. He declined to provide an updated figure.

Meta reduced user reports of scam ads globally by 58% over the past 18 months and removed more than 134 million pieces of scam ad content in 2025, Stone said.

Why This Matters

On Meta’s platforms, internal documents projected about one in ten ad dollars in 2024 came from ads for scams and banned goods.

Meta’s penalty bid system charges suspected scammers higher rates but keeps them in ad auctions. You don’t know when you’re bidding against these inflated rates.

The revenue guardrails mean Meta caps how much fraud enforcement it will do if it impacts financial projections. Small advertisers must be flagged eight times for financial fraud before getting banned. Some large “High Value Accounts” accrued more than 500 strikes without Meta shutting them down.

A Meta presentation estimated the company’s platforms were involved in one-third of all successful scams in the United States.

The SEC is investigating Meta for running ads for financial scams, according to internal documents reviewed by Reuters. The UK Payment Systems Regulator said Meta’s products were linked to 54% of payment-related scam incidents in 2023.

What Meta Says

Stone clarified that the idea Meta should only take action when regulators demand it isn’t how the company operates.

He explained that the 0.15% figure mentioned in strategy documents was based on a revenue forecast and isn’t a strict cutoff. Additionally, testing the penalty bid program revealed a decrease in scam reports and a small dip in total ad revenue.

The main goal was to cut down on scam advertising by making suspicious advertisers less competitive in auctions.

Meta also outlines recent enforcement actions against scam centers in a Newsroom update.

Looking Ahead

Meta plans to lower the share of revenue from scams, illegal gambling, and prohibited goods from an estimated 10.1% in 2024 to 7.3% by the end of 2025. The target is to reach 6% by the end of 2026 and 5.8% in 2027, as outlined in strategy documents.


Featured Image: JarTee/Shutterstock

Automattic Disputes Use Of Word “Automatic” For WordPress Product via @sejournal, @martinibuster

Lawyers representing Automattic, the for-profit company founded by WordPress co-founder Matt Mullenweg, sent a trademark complaint letter to WordPress developer Kevin Geary, asking him to rebrand his WordPress CSS framework, currently named Automatic.css. The letter claims that the similarity to Mullenweg’s Automattic could lead to consumer confusion.

The letter caught some in the WordPress industry by surprise, since Geary had months ago shown good-faith compliance after Mullenweg tweeted a request for Geary to place a disclaimer in the footer of Automatic.css.

Screenshot Of Mullenweg’s July 2025 Tweet To Geary

Kevin Geary

Kevin Geary has been a well-liked and popular member of the WordPress developer community since 2005. He’s currently developing a WordPress page builder called EtchWP (currently in alpha) and is behind the well-received CSS framework Automatic CSS (ACSS). ACSS simplifies design consistency within a website, easily integrating with page builders like Bricks, Gutenberg, and Oxygen, which are popular within the web design community.

A YouTube video and accompanying article from a year ago caused a stir because he documented himself trying to use WordPress’s native Block Editor and coming away from the experience with a large list of issues that need fixing.

He wrote about the Gutenberg workflow:

“Is this the “for everyone” experience? Is this the true vision of the WordPress block editor? …it’s wildly inefficient and impractical.”

Elsewhere he noted that most people are confused about what Gutenberg is supposed to be, citing results of an informal poll of his Twitter followers showing disagreement whether it’s supposed to be a page builder or not.

He concluded:

“It’s NOT for:

Beginner web developers who want to learn how to build websites.

Intermediate web developers who want to build custom websites.

Advanced web developers who want to build custom websites.

Most agencies & freelancers (unless they’re committed to building custom blocks).

I want to like it, I really do. As it stands now, though, the only viable way to use the block editor to build a custom site is with third-party tools. Native ain’t cutting it.”

All of this is to say that Geary is a passionate supporter of WordPress, even when he criticizes the block editor or the “tragedy of the commons” support model underlying WordPress.

Automattic’s Letter To Geary

Geary tweeted a copy of the letter sent to him in which Mullenweg’s lawyers asked him to rebrand his WordPress CSS framework.

Part of the letter stated:

“We represent Automattic Inc. in intellectual property matters. As you know, our client owns and operates a wide range of software brands and services, including the very popular web building and hosting platform WordPress.com. Automattic is also well-known for its longtime and extensive contributions to the WordPress system.

Our client owns many trademark registrations for its Automattic mark covering those types of services and software. As a result of our client’s extensive marketing efforts and support of the WordPress system, consumers have come to closely associate Automattic with WordPress and its related offerings.

We are writing about your use of the name and mark Automatic (sometimes with a CSS or .CSS suffix) to provide a CSS framework specifically designed for WordPress page builders. As we hope you can appreciate, our client is concerned about your use of a nearly identical name and trademark to provide closely related WordPress services. Automattic and Automatic differ by only one letter, are phonetically identical, and are marketed to many of the same people. This all enhances the potential for consumer confusion and dilution of our client’s Automattic mark.

We assume you share Automattic’s interest in ensuring that consumers are not confused or misled by the use of nearly identical names and trademarks to provide related services in the WordPress ecosystem. To protect against any such confusion or dilution, Automattic requests that you rebrand away from using Automatic or anything similar to Automattic. I suggest that we schedule a time to discuss the logistics and a mutually agreeable transition timeline for the change. Please let me know some days and times when you are available.”

Matt Mullenweg responded to Kevin Geary’s tweet by noting that he “owns” the automatic.com domain. But that’s actually a misstatement. Nobody “owns” a domain name. A domain name can only be registered.

Mullenweg’s tweet:

“We also own http://automatic.com. You had to know this was a fraught naming area.”

To which Geary responded:

“AutomaticCSS is called “automatic” because it’s the only CSS framework that does a lot of things automatically.

Congratulations on owning the domain name for a generic term. Let me know when that fact becomes relevant.”

Social Response To Automattic’s Letter

Most of the responses to Geary’s tweet were supportive, although one person questioned Geary’s use of the word Automatic, tweeting:

“Why go with “AutomaticCSS” as the name though?

Options like “AutoCSS” or even “AutomatedCSS” would have been even more suitable IMHO.

It could indeed raise the question of whether there was some other motive at play. Just sharing my thoughts!”

That tweet was the outlier; most of the responses were supportive.

Simon Zeimke tweeted:

“A letter from hell. How could a generic Term be IP?”

Lee Milroy responded:

“This is absurd, a product that has been around for 4 years is all of a sudden going to create “confusion”?

Really Matt needs to do some work… like the terrible WP Dashboard experience”

WordPress Drama

Geary hasn’t tweeted about his next move, and it’s been over a week now. Many in the WordPress community would probably prefer to see the drama fade so everyone can get back to making WordPress better.

Featured Image by Shutterstock/IgorZh

Google’s Preferred Sources Tool Is Jammed With Spam via @sejournal, @martinibuster

Google’s Preferred Sources tool is meant to let fans of certain websites tell Google they want to see more of their favorite sites in the Top News feature. However, Google is surfacing copycat spam sites, random sites, and parked domains. Some of the sites appearing in the tool are so low quality that only their home pages are indexed. Shouldn’t this tool just show legitimate websites and not spam?

Google Preferred Sources

Google’s Preferred Sources feature gives users control over which news outlets appear more often in Google’s Top Stories feature. Rather than relying on Google’s ranking system alone, users can make their preferred news sources appear more frequently. This doesn’t block other sites from appearing; it only personalizes what a user sees to reflect their chosen sources.

Similar Domains In Preferred Sources

What appears to be happening is that people are registering domains similar to those of well-known websites. One way they’re doing it is by squatting on an exact-match domain name under a different TLD. For example, when a popular domain is registered with a .com or .net, the squatters will register the same name under .com.in or .net.in.

Screenshot Of A Random Subdomain Ranking For Automattic

Preferred Sources Errors

It’s unclear whether people are registering domain names and adding them to the Preferred Sources tool or whether they are being added in some other manner. A search for a popular SEO tool surfaces the correct domain but also a parked domain under the Indian .com.in namespace:

Screenshot Of An Indian Parked Domain

What is known is that people are registering copycat domains; how those domains are getting into Google’s Preferred Sources tool remains unclear. Preferred Sources is currently available in the U.S. and India, which may explain the Indian domains showing up in the tool.

Screenshot Of Indian NYTimes Parked Domain

For example, a search within the Preferred Sources tool for HuffPost surfaces a copycat site on an Indian country-code domain.

Screenshot Of HuffPost In Source Preferences

That Indian HuffPost copycat features articles (and links) on topics like payday loans, personal injury lawyers, and luxury watches. Not surprisingly, Google doesn’t appear to be indexing anything beyond that site’s home page.

Screenshot Of A Site Search

There’s also an Indian site squatting on Search Engine Journal’s domain name.

Screenshot Of SEJ In Source Preferences Tool

What Is Going On?

It’s possible that SEOs are registering copycat domains and then submitting their domains to the Preferred Sources tool. Or it could be that Google picks them up automatically and is just listing whatever is out there.

Google Warns Against Relying On SEO Audit Tool Scores via @sejournal, @MattGSouthern

Google warned against relying on tool-generated scores for technical SEO audits.

Search Relations team member Martin Splitt outlined a three-step framework in a Search Central Lightning Talk that emphasizes site-specific context over standardized metrics.

The Three-Step Framework

Splitt outlined the core objective in the video:

“A technical audit, in my opinion, should make sure no technical issues prevent or interfere with crawling or indexing. It can use checklists and guidelines to do so, but it needs experience and expertise to adapt these guidelines and checklists to the site you audit.”

His recommended framework has three phases.

First, use tools and guidelines to identify potential issues. Second, create a report tailored to the specific site. Third, make recommendations based on actual site needs.

Understanding the site’s technology comes before running diagnostic tools. Findings should be grouped by the effort required to fix them and their potential impact, Splitt said.
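Splitt’s grouping advice can be sketched as a small prioritization pass. The findings, scores, and sort order below are hypothetical, not from the talk:

```python
# Hypothetical audit findings, scored 1-3 for impact and effort.
findings = [
    {"issue": "robots.txt blocks /blog/", "impact": 3, "effort": 1},
    {"issue": "missing alt text sitewide", "impact": 1, "effort": 3},
    {"issue": "soft 404s on category pages", "impact": 3, "effort": 2},
]

# Prioritize: highest impact first; among equal impact, lowest effort first.
prioritized = sorted(findings, key=lambda f: (-f["impact"], f["effort"]))
```

The point is the ordering principle, not the scores: a report grouped this way tells the site owner what to fix first.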

When 404s Are Normal

High 404 counts don’t always mean problems.

The red flag is unexplained rises without corresponding website changes.

Splitt explained:

“A high number of 404s, for instance, is expected if you removed a lot of content recently. That’s not a problem. It’s a normal consequence of that. But if you have an unexplained rise in 404 responses, though, that’s something you want to point out and investigate…”

Google Search Console’s Crawl Stats report shows whether 404 patterns match normal site maintenance or indicate technical issues.
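The distinction Splitt draws, expected 404s after content removal versus an unexplained rise, can be sketched as a simple baseline check. The counts, threshold, and function name are hypothetical; in practice the numbers could come from server logs or a Crawl Stats export:

```python
# Hypothetical check: flag the latest daily 404 count if it far exceeds
# the prior average and no recent content removal explains it.
def unexplained_404_rise(daily_404_counts: list[int],
                         content_removed: bool,
                         spike_factor: float = 2.0) -> bool:
    """True when the latest count is more than spike_factor times the
    baseline average and there was no content removal to explain it."""
    *baseline, latest = daily_404_counts
    avg = sum(baseline) / len(baseline)
    return not content_removed and latest > spike_factor * avg

# A jump from roughly 20/day to 90 with no removals warrants a look.
flagged = unexplained_404_rise([18, 22, 20, 19, 90], content_removed=False)
```

The same spike after a deliberate content purge would return False, matching Splitt’s point that the numbers alone don’t decide.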

Context Over Scores

Tools generate numerical scores that lack site-specific context.

Not everything tools flag carries equal weight. An international site needs hreflang auditing, while a single-language site doesn’t.

Splitt emphasized human judgment over automation:

“Please, please don’t follow your tools blindly. Make sure your findings are meaningful for the website in question and take the time to prioritize them for maximum impact.”

Talk to people who know the site and its technology. They’ll tell you if findings make sense.

Why This Matters

Generic checklists waste time on low-impact fixes while missing critical issues.

Tool scores may flag normal site behavior as problems. They assign priority to issues that don’t affect how search engines crawl your content.

Understanding when metrics reflect normal operations helps you focus audit resources where they matter. This applies whether you’re running internal audits or evaluating agency reports.

Looking Ahead

Audit platforms continue adding automated checks and scoring systems. This widens the gap between generic findings and actionable recommendations.

Google’s guidance reinforces that technical SEO requires expertise beyond tool automation.

Sites with international setups, large content archives, or frequent publishing benefit most from context-driven audits.

Hear Splitt’s full talk in the video below:

Google Finance Gets AI Deep Search & Prediction Market Data via @sejournal, @MattGSouthern

Google Finance is rolling out Deep Search capabilities, prediction markets data, and enhanced earnings tracking features across its AI-powered platform.

The updates expand Google Finance beyond basic market data into multi-step research workflows and crowd-sourced probability forecasting. Google announced the changes today, with features rolling out over the coming weeks, starting with Labs users.

Deep Search For Financial Research

Deep Search handles complex financial queries by issuing up to hundreds of simultaneous searches and synthesizing information across multiple sources.

You can ask detailed questions and select the Deep Search option. Gemini models then generate fully cited, comprehensive responses within minutes, displaying the research plan during generation.

Image Credit: Google

Robert Dunnette, Director of Product Management for Google Search, wrote:

“From there, our advanced Gemini models will get to work, issuing up to hundreds of simultaneous searches and reasoning across disparate pieces of information to produce a fully cited, comprehensive response in just a few minutes.”

Deep Search offers higher usage limits for Google AI Pro and AI Ultra subscribers. Users can access it through the Google Finance experiment in Labs.
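The fan-out pattern Dunnette describes, issuing many searches at once and gathering the results for synthesis, can be sketched conceptually. Google’s actual pipeline is not public; mock_search below is a stand-in:

```python
import asyncio

async def mock_search(query: str) -> str:
    # Stand-in for a real search call; sleep(0) mimics awaiting the network.
    await asyncio.sleep(0)
    return f"result for: {query}"

async def deep_search(queries: list[str]) -> list[str]:
    # Launch every search concurrently and collect all results in order.
    return await asyncio.gather(*(mock_search(q) for q in queries))

# A hundred simultaneous queries, gathered for a downstream synthesis step.
results = asyncio.run(deep_search([f"query {i}" for i in range(100)]))
```

The synthesis step (reasoning across the gathered results) is where the model work happens; the sketch only shows the concurrent fan-out.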

Prediction Markets Integration

Google Finance is adding support for prediction markets data from Kalshi and Polymarket.

You can query future market events directly from the search box to see current probabilities and historical trends.

An example query: “What will GDP growth be for 2025?”

The feature rolls out this week to Labs users first.

Enhanced Earnings Tracking

Google launched earnings tracking features that provide live audio streams, real-time transcripts, and AI-generated insights during corporate earnings calls.

The Earnings tab shows scheduled calls, streams live audio during calls, and maintains transcripts for later reference. AI-powered insights under “At a glance” update before, during, and after calls with information from news reports and analyst reactions.

You can compare financial data against historical results, view performance versus expectations, and access earnings documents and SEC forms.

India Expansion

Google Finance begins rolling out in India this week with support for English and Hindi.

The India launch initially offers the core Google Finance experience. Deep Search, prediction markets, and earnings features launch first in the U.S. and will expand internationally over time.

Why This Matters

Deep Search reduces the time needed to gather financial data from multiple sources, potentially resulting in fewer webpage visits.

Prediction markets offer crowd-sourced probability estimates that complement analyst forecasts. Live earnings tracking integrates call audio, transcripts, and analyst reactions into a single interface during reporting season.

Looking Ahead

Deep Search and prediction markets roll out over the coming weeks, with Labs users getting early access. Google AI Pro and AI Ultra subscribers receive higher usage limits for Deep Search queries.

The India expansion marks Google Finance’s first international launch beyond the U.S. Access the beta at google.com/finance/beta while signed into a Google account.


Featured Image: Juan Alejandro Bernal/Shutterstock