Google On Generic Top Level Domains For SEO via @sejournal, @martinibuster

Google’s John Mueller answered a question about whether a generic Top Level Domain (gTLD) with a keyword in it offered any SEO advantage. His answer was in the context of a specific keyword TLD, but the topic involves broader questions about how Google evaluates TLDs in general.

Generic Top Level Domains (gTLDs)

gTLDs are domains that have a theme that relates to a topic or a purpose. The most commonly known ones are .com (generally used for commercial purposes) and .org (typically used for non-profit organizations).

The availability of unique keyword-based gTLDs exploded in 2013. Now there are hundreds of gTLDs with which a website can brand itself and stand out.

Is There SEO Value In gTLDs?

The person asking the question on Reddit wanted to know if there’s an SEO value to registering a .music gTLD. The regular .com version of the domain name they wanted was not available but the .music version was.

The question they asked was:

“Noticed .music domains available and curious if it is relevant, growing etc or does the industry not care about it whatsoever? Is it worth reserving yours anyways just so someone else can’t have it, in case it becomes a thing?”

Are gTLDs Useful For SEO Purposes?

Google’s John Mueller limited his response to whether gTLDs offer SEO value, and his answer was no.

He answered:

“There’s absolutely no SEO advantage from using a .music domain.”

The funny thing about SEO is that Google’s standard of relevance is based on humans while SEOs think of relevance in terms of what Google thinks is relevant.

This sets up a huge disconnect: SEOs on one side are creating websites that are keyword optimized for Google, while Google itself is analyzing billions of user behavior signals because it’s optimizing search results for humans.

Optimizing For Humans With gTLDs

The thing about SEO is that it’s search engine optimization. When venturing out on the web, it’s easy to forget that every website must be optimized for humans, too. Aside from spammy TLDs, which can be problematic for SEO, the choice of a TLD isn’t important for SEO, but it could be important for Human Optimization.

Optimizing for humans is a good idea because human interactions with search engines and websites generate signals that Google uses at scale to better understand what users mean by their queries and what kinds of sites they expect to see for those queries. Some user-generated signals, like searching by brand name, can tell Google that a particular brand is popular and is associated with a particular service, product, or keyword phrase (read about Google’s patent on branded search).

Circling back to optimizing for humans, if a particular gTLD is something that humans may associate with a brand, product, or service then there is something there that can be useful for making a site attractive to users.

I have experimented in the past with various gTLDs and found that I was able to build links more easily to .org domains than to the .com or .net versions. That’s an example of how a gTLD can be optimized for humans and lead to success.

I discovered that overtly commercial affiliate sites on .org domains ranked and converted well. They didn’t rank because they were .org, though. The sites were top-ranked because humans responded well to the sites I created with that gTLD. It was easier to build links to them, for example. I have no doubt that people trusted my affiliate sites a little more because they were created on .org domains.

Optimizing for humans is conversion optimization. It’s super important.

Optimizing For Humans With Keyword-Based gTLDs

I haven’t played around with keyword gTLDs but I suspect that what I experienced with .org domains could happen with a keyword-based gTLD because a meaningful gTLD may communicate positive feelings or relevance to humans. You can call it branding but I think that the word “branding” is too abstract. I prefer the phrase optimizing for humans because in the end that’s what branding is really about.

So maybe it’s time we ditched the bla-bla-bla about branding and started talking about optimizing for humans. If the person on Reddit had considered the question from the perspective of human optimization, they may have been able to answer it themselves.

When SEOs talk about relevance it seems like they’re generally referring to how relevant something is to Google. Relevance to Google is what was top of mind to the person asking the question about the .music gTLD and it might be why you’re reading this article.

Heck, relevance to search engines is what all that “entity” optimization hand waving is all about, right? Focusing on being relevant to search engines is a limited way to chase after success. For example, I cracked the code with the .org domains by focusing on humans.

At a certain point, if you’re trying to be successful online, it may be useful to take a step back and start thinking more about how relevant the content, colors, and gTLDs are to humans. You might discover that being relevant to humans makes it easier to be relevant to search engines.

Featured Image by Shutterstock/Kues

Google Search Console Adds Custom Annotations To Reports via @sejournal, @MattGSouthern

Google launched custom annotations in Search Console performance reports, giving you a way to add contextual notes directly to traffic data charts.

The feature lets you mark specific dates with notes explaining site changes or external events that affected search performance.

What The Feature Does

Custom annotations appear as markers on Search Console charts. Google’s announcement highlights several common use cases, including infrastructure changes, SEO work, content strategy shifts, and external events that affect business performance such as holidays.

All annotations are visible to everyone with access to a Search Console property. Google recommends avoiding sensitive personal information in notes due to the shared visibility.

Why This Matters

Connecting traffic changes with specific actions taken weeks or months earlier usually means maintaining separate documentation outside Search Console.

Annotations create a change log inside the performance reports you already use.

If you manage multiple properties or work with a larger team, annotations can give everyone a shared record of releases, migrations, and campaigns without relying on external spreadsheets or project tools.

How To Use It

You can add an annotation by right-clicking on a performance chart, selecting “Add annotation,” choosing a date, and entering up to 120 characters of text. The note then appears directly on the chart as a visual reference point alongside clicks, impressions, or other metrics.

Custom annotations are now part of Search Console performance reports and available through the chart context menu.

Google Extends AI Travel Planning And Agentic Booking In Search via @sejournal, @MattGSouthern

Google announced three AI-powered updates to Search that extend how users plan and book travel within AI Mode.

The company is launching Canvas for travel planning on desktop, expanding Flight Deals globally, and rolling out agentic booking capabilities that connect users directly to reservation partners.

The announcement continues Google’s push to handle complete user journeys inside Search rather than directing traffic to publisher sites and booking platforms.

What’s New

Canvas Travel Planning

Canvas creates travel itineraries inside AI Mode’s side panel interface. You describe your trip requirements, select “Create with Canvas,” and receive plans combining flight and hotel data, Google Maps information, and web content.

Canvas travel planning is available on desktop in the US for users opted into the AI Mode experiment in Google Labs.

Flight Deals Global Expansion

Flight Deals uses AI to match flexible travelers with affordable destinations based on natural language descriptions of travel preferences.

The tool launched previously in the US, Canada, and India. The feature has started rolling out to more than 200 countries and territories.

Agentic Booking Expansion

AI Mode now searches across multiple reservation platforms to find real-time availability for restaurants, events, and local appointments. The system presents curated options with direct booking links to partner sites.

Restaurant booking launches this week in the US without requiring Labs access. Event tickets and local appointment booking remain available to US Labs users.

Why This Matters

Canvas and agentic booking capabilities represent Google handling trip research, planning, and reservations inside its own interface.

People who would previously visit multiple publisher sites to research destinations and compare options can now complete those tasks in AI Mode.

The updates fit Google’s established pattern of verticalizing high-value query types. Rather than presenting traditional search results that send users to external sites, AI Mode guides users through multi-step processes from research to transaction completion.

Looking Ahead

Google provided no timeline for direct flight and hotel booking in AI Mode beyond confirming active development with industry partners.

Watch for whether Google provides analytics or attribution tools that let businesses track bookings initiated through AI Mode. Without visibility into these flows, measuring the impact of AI Mode on travel and local business traffic will be difficult.

Server Security Scanner Vulnerability Affects Up To 56M Sites via @sejournal, @martinibuster

A critical vulnerability was recently discovered in Imunify360 AV, a security scanner used by web hosting companies to protect over 56 million websites. An advisory by cybersecurity company Patchstack warns that the vulnerability can allow attackers to take full control of the server and every website on it.

Imunify360 AV

Imunify360 AV is a malware scanning system used by multiple hosting companies. The vulnerability was discovered within its AI-Bolit file-scanning engine and within the separate database-scanning module. Because both the file and database scanners are affected, attackers can compromise the server through two paths, which can allow full server takeover and potentially put millions of websites at risk.

Patchstack shared details of the potential impact:

“Remote attackers can embed specifically crafted obfuscated PHP that matches imunify360AV (AI-bolit) deobfuscation signatures. The deobfuscator will execute extracted functions on attacker-controlled data, allowing execution of arbitrary system commands or arbitrary PHP code. Impact ranges from website compromise to full server takeover depending on hosting configuration and privileges.

Detection is non-trivial because the malicious payloads are obfuscated (hex escapes, packed payloads, base64/gzinflate chains, custom delta/ord transformations) and are intended to be deobfuscated by the tool itself.

imunify360AV (Ai-Bolit) is a malware scanner specialized in website-related files like php/js/html. By default, the scanner is installed as a service and works with a root privileges

Shared hosting escalation: On shared hosting, successful exploitation can lead to privilege escalation and root access depending on how the scanner is deployed and its privileges. if imunify360AV or its wrapper runs with elevated privileges an attacker could leverage RCE to move from a single compromised site to complete host control.”

Patchstack shows that the scanner’s own design gives attackers both the method of entry and the mechanism for execution. The tool is built to deobfuscate complex payloads, and that capability becomes the reason the exploit works. Once the scanner decodes attacker-supplied functions, it can run them with the same privileges it already has.

In environments where the scanner operates with elevated access, a single malicious payload can move from a website-level compromise to control of the entire hosting server. This connection between deobfuscation, privilege level, and execution explains why Patchstack classifies the impact as ranging up to full server takeover.
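To make the mechanism concrete, here is a benign sketch, in Python rather than Imunify360’s actual PHP code, of the difference between statically decoding an obfuscation chain for inspection and executing what was decoded. The payload, function names, and encoding layers are illustrative assumptions, not the real exploit:

```python
import base64
import zlib

# A benign payload wrapped the way Patchstack describes malware being
# layered: compression plus base64 (analogous to PHP base64/gzinflate chains).
payload = base64.b64encode(zlib.compress(b"echo 'hello';"))

def decode_for_inspection(blob):
    """Safe pattern: unwrap the layers so the content can be matched
    against signatures, but never run it."""
    return zlib.decompress(base64.b64decode(blob))

def decode_and_execute(blob):
    """The dangerous pattern: executing the deobfuscated output. If the
    scanner runs as root, attacker-controlled code runs as root too."""
    code = zlib.decompress(base64.b64decode(blob))
    exec(code.decode())  # never do this with untrusted input

print(decode_for_inspection(payload))  # b"echo 'hello';"
```

The safe version yields the decoded bytes for signature matching; the dangerous version hands whatever the attacker embedded to the interpreter at the scanner’s own privilege level.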

Two Vulnerable Paths: File Scanner and Database Scanner

Security researchers initially discovered a flaw in the file scanner, but the database-scanning module was later found to be vulnerable in the same way. According to the announcement: “the database scanner (imunify_dbscan.php) was also vulnerable, and vulnerable in the exact same way.” Both of the malware scanning components (file and database scanners) pass malicious code into Imunify360’s internal routines that then execute the untrusted code, giving attackers two different ways to trigger the vulnerability.

Why The Vulnerability Is Easy To Exploit

The file-scanner part of the vulnerability required attackers to place a harmful file onto the server in a location that Imunify360 would eventually scan. But the database-scanner part of the vulnerability needs only the ability to write to the database, which is common on shared hosting platforms.

Because comment forms, contact forms, profile fields, and search logs can write data to the database, injecting malicious content becomes easy for an attacker, even without authentication. This makes the vulnerability broader than a typical malware-execution flaw because it turns common user input into a vector for remote code execution.

Vendor Silence And Disclosure Timeline

According to Patchstack, a patch has been issued by Imunify360 AV but no public statement has been made about the vulnerability and no CVE has been issued for it. A CVE (Common Vulnerabilities and Exposures) is a unique identifier assigned to a specific vulnerability in software. It serves as a public record and provides a standardized way to catalog a vulnerability so that interested parties are made aware of the flaw, particularly for risk management. If no CVE is issued then users and potential users may not learn about the vulnerability, even though the issue is already publicly listed on Imunify360’s Zendesk.

Patchstack explains:

“This vulnerability has been known since late October, and customers began receiving notifications shortly thereafter, and we advise affected hosting providers to reach out to the vendor for additional information on possible exploitation in the wild or any internal investigation results.

Unfortunately there has been no statement released about the issue by Imunify360’s team, and no CVE has yet been assigned. At the same time, the issue has been publicly available on their Zendesk since November 4, 2025.

“Based on our review of this vulnerability, we consider the CVSS score to be: 9.9”

Recommended Actions for Administrators

Patchstack recommends that server administrators immediately apply vendor security updates if running Imunify360 AV (AI-bolit) prior to version 32.7.4.0, or remove the tool if patching is not possible. If an immediate patch cannot be applied, the tool’s execution environment should be restricted, such as running it in an isolated container with minimal privileges. All administrators are also urged to contact CloudLinux / Imunify360 support to report potential exposure, confirm if their environment was affected, and to collaborate on post-incident guidance.

Featured Image by Shutterstock/DC Studio

ChatGPT Outage Affects APIs And File Uploads via @sejournal, @martinibuster

OpenAI is experiencing a widespread outage affecting two systems, its APIs and ChatGPT. The outage has been ongoing for at least half an hour as of publication.

ChatGPT API Jobs Stuck Outage

The first issue is that batch API jobs get stuck in the finalizing state. Twelve API components are monitored for uptime, and it’s the Batch component that’s experiencing “degraded” performance. The issue has been ongoing since 3:54 PM.

According to OpenAI:

“Subset of Batch API jobs stuck in finalizing state”

ChatGPT Uploads Outage

The other issue is that ChatGPT file uploads are failing. This is described as a partial outage.

OpenAI’s official explanation:

“File uploads to ChatGPT conversations are failing for some users, giving an error message indicating the file has expired.”

This issue has been ongoing since 3:53 PM.

Screenshot of OpenAI Uploads Outage

Google Sharpens Suspension Accuracy and Speeds Up Appeals for Advertisers via @sejournal, @brookeosmundson

Google account suspensions have long been one of the most stressful issues advertisers face. A single notification can pause revenue, disrupt campaigns, and leave teams scrambling to understand what went wrong, often through no fault of their own.

Over the past several months, Google has heard that feedback and is now rolling out measurable improvements aimed at reducing the burden on legitimate advertisers.

These updates should bring meaningful relief. Misapplied suspensions are down, appeals are moving faster, and Google is promising more transparency into why enforcement actions happen at all.

What’s Changed in Google’s Process

Google announced several updates aimed at preventing unnecessary enforcement actions and speeding up resolutions when mistakes happen.

Google Ads Liaison Ginny Marvin shared additional context in a LinkedIn video. She explained that advertisers often faced long, unclear appeal processes. Many of those advertisers were compliant, but still got caught in broad enforcement filters designed to protect users. The new improvements are meant to address that gap and create a smoother experience for legitimate businesses.

Screenshot taken by author, November 2025

According to Google’s data:

  • Incorrect account suspensions are down more than 80%
  • Appeals are being resolved 70% faster
  • 99% of appeals are reviewed within 24 hours

These numbers reflect improvements in Google’s automated systems, better internal checks, and more precise policy evaluation. The goal is to reduce the number of trusted advertisers who get suspended by mistake and to shorten the time it takes to recover when an account needs review.

Google also mentioned ongoing work to make enforcement decisions easier to understand. While full visibility into every signal is unlikely, these updates indicate an effort to give advertisers clearer direction when issues occur.

How This Helps Advertisers

These changes bring meaningful stability to daily operations. When incorrect suspensions drop by such a large margin, advertisers experience fewer unexpected pauses in performance.

That consistency matters for both in-house teams and agencies managing multiple accounts.

The faster appeal timeline also reduces the fallout from any suspension that does occur. Getting nearly all appeals reviewed within a day helps advertisers avoid extended downtime and protects campaign momentum.

Clarity matters as well. Advertisers have long asked for more detail when suspensions happen.

Even small improvements in transparency can save hours of troubleshooting and prevent repeated appeals that contribute to delays.

These updates should also improve confidence in Google’s enforcement systems. When advertisers trust the process, they can focus on optimization instead of worrying that a routine change will trigger a policy issue.

How This Shapes Future Enforcement

Google’s changes reflect a broader effort to balance user protection with a better advertiser experience. Automated enforcement will always play a significant role in preventing harmful behavior, but legitimate businesses need a system that treats them fairly and resolves issues quickly.

The latest results show encouraging progress. There is still room for improvement, especially in policy clarity and long-term consistency, but the direction is positive.

Google has stated that this work will continue and that advertiser feedback remains central to future updates. For marketers, this signals a more stable and predictable enforcement environment, which supports healthier performance and stronger planning across campaigns.

Google Reminds Websites To Use One Review Target via @sejournal, @MattGSouthern

Google updated its review snippet documentation to clarify that each review or rating in structured data should point to one clear target, reducing ambiguity.

  • Google updated its review snippet docs to clarify how review targets should be specified.
  • You should avoid attaching the same review or rating to multiple different entities.
  • A quick audit of templates and plugins can catch confusing nesting.
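The “quick audit” idea can be sketched as a small script. This is a hypothetical helper, not a Google tool: it walks a JSON-LD blob and flags any Review node whose itemReviewed points at more than one entity, which is the kind of ambiguity the updated docs warn against:

```python
import json

def audit_review_targets(jsonld_text):
    """Walk a JSON-LD document and flag Review nodes whose itemReviewed
    points at more than one entity (an ambiguous review target)."""
    issues = []

    def walk(node, path="$"):
        if isinstance(node, dict):
            target = node.get("itemReviewed")
            if node.get("@type") == "Review" and isinstance(target, list) and len(target) > 1:
                issues.append(f"{path}: Review points at {len(target)} entities")
            for key, value in node.items():
                walk(value, f"{path}.{key}")
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")

    walk(json.loads(jsonld_text))
    return issues

# Hypothetical markup: one review attached to two different restaurants,
# so a crawler can't tell which entity the rating describes.
ambiguous = """{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": [
    {"@type": "Restaurant", "name": "Cafe A"},
    {"@type": "Restaurant", "name": "Cafe B"}
  ],
  "reviewRating": {"@type": "Rating", "ratingValue": "4"}
}"""
print(audit_review_targets(ambiguous))  # ['$: Review points at 2 entities']
```

A review nested inside the entity it describes needs no itemReviewed at all; the problem case is a single review or rating attached to multiple targets.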

Lazy Link Building Strategies That Work via @sejournal, @martinibuster

I like coming up with novel approaches to link building. One way to brainstorm an approach is to reverse a common method. I created a couple of approaches to link building, several are passive and two others are a little more active but have very little to do with email outreach. I wrote about these tips back around 2013, but I’ve polished them up and updated them for today.

Passive Link Building

Someone asked that I put together some tips for those who are too lazy to do link building. So here it goes!

Guilt Trip Copyright Infringers

Check who’s stealing your content. Be hard on scrapers. But if it’s an otherwise legit site, you might want to hold off asking them to take down your content. Check whether they’re linking to a competitor or similar sites, such as from a links page.

You can ask them nicely to take down the content, and after they email you back to confirm it’s down, email them back to thank them. But then say something like, “I see you are linking to Site-X.com. If my content was good enough to show on your site, then I would be grateful and much obliged if you considered it good enough to list from your links page.”

I heard a keynote speaker at an SEO conference once encouraging people to come down hard on those who steal your content. I strongly disagree with that approach. Some people who steal your content are under the impression that if it’s on the Internet then it’s free and they can use it on their own site. Some think it’s free to use as long as they link back to your site.

If they are linking to your site, tell them that you prefer they don’t infringe on your copyright but that you would be happy to write them a different article they can use as long as they link back to your site. You can be nice to people and still get a link.

Reverse Guest Posting

Instead of publishing articles on someone else’s site, solicit people to publish on your site. Many people tweet, promote, and link from their sites to sites that they are interviewed on. An interesting thing about doing this is that interviewing people who have a certain amount of celebrity helps to bring more people to your site, especially if people are searching for that person.

Relationship Building

Authors of books are great for this kind of outreach. People are interested in what authors and experts say. Sometimes you can find the most popular authors and influencers at industry conferences. I’ve met some really famous and influential people at conferences, gotten their email addresses, and scored interviews just by going up and talking to them.

This is called relationship building. SEOs and digital marketers are so overly focused on sending out emails and doing everything online that they forget that people actually get together in person at industry events, meetups, and other kinds of social events.

Giveaways

This is an oldie, and I get that many SEOs have talked about it. But it’s something I used successfully from way back around 2005. I did an annual giveaway for my readers and website members.

The way I did it was to contact manufacturers of products popular with my readers, ask for a discount for buying in bulk, and tell them I’d be promoting their products to my subscribers, readers, and members. I’ve been responsible for making several companies popular by bringing attention to their products, elevating them from a regional business to a nationwide business.

Leverage Niche Audience For Links

The way to do this is to identify an underserved subtopic of your niche, then create a useful section that addresses a need for that niche. The idea is to create a compelling reason to link to the site.

Here is an example of how to do this for a travel destination site.

Research gluten free, dairy free, nut-free, raw food dining destinations. Then make a point to visit, interview, and build a resource for those.

Conduct interviews with lodging and restaurant owners that offer gluten free options. You’ll be surprised by how many restaurants and lodgings might decide on their own to link to your site or maybe just hint at it.

Summary

Reach out to sites about the niche topic, not just businesses but also organizations and associations related to that niche that have links and resources pages. Just tell them about the site, quickly explain what it offers, and ask for a link. This method is flexible and can be adapted to a wide range of niche topics. And if they have an email newsletter or publish articles, suggest contributing to those, but don’t ask for a link, just a mention.

Don’t underestimate the power of building positive awareness of your site. Focus on creating positive feelings for your site (goodwill) and generating positive word of mouth, otherwise known as external signals of quality. The rankings will generally follow.

Featured Image by Shutterstock/pathdoc

The Quid Pro No Method Of Link Building via @sejournal, @martinibuster

Expressly paying for links has been out for a while. Quid Pro No is in. These are some things you can do when a website asks for money in exchange for a link. During the course of building links, whether it’s free links, publishing an article, or getting a brand mention, it’s not unusual to get solicited for money. It’s tempting to take the bait and get a project done. But I’m going to suggest some considerations prior to making a decision, as well as a way to turn it around using an approach I call Quid Pro No.

Link building, digital PR, and brand mention building can often lead to solicitations for a paid link. There are many good reasons not to engage in paid links, and in my experience it’s possible to get a link without doing it their way when someone asks you for money in return for one.

Red Light Means Stop

The first consideration is that someone who has their hand out for money is a red light because it’s highly likely they have done this before and are linking to low-quality websites in really bad neighborhoods, putting the publisher’s site and any sites associated with it into the outlier part of the web graph, where sites are identified as spam and tend not to get indexed. In this case, consider it a favor that they outed their site for the crap neighborhood it resides in, and walk away. Quid pro… no.

Getting solicited for money can be a frequent occurrence. Site publishers, some of them apparently legit, publish guest post submission guidelines for the purpose of attracting paying submissions. It’s an industry, and it’s overly normalized in certain circles. Beware.

Spook The Fish

A less frequent occurrence is the newbie who’s trying to extract something. If the site checks out, then there may be room for some kind of concession. If they’re asking for money, in this case, Quid Pro No means to FUD them away from this kind of activity, THEN turn them around to doing the project on your terms.

When angling on a river, a fish that’s on the hook might make a run downstream, away from you, which makes it tough to land because you’re fighting both the fish and the current. Sometimes a tap on the rod will spook it into changing position. Sometimes a sharp pull can make it turn around. For this character, I have found it efficacious to spook them with all the bad things that can happen and turn them around to where I want them to be.

Very briefly, and in the most polite terms, explain you’d love to do business, but that there are other considerations. Here’s what you can trot out:

  • FTC Guidelines
    FTC guidelines prohibit a web publisher from accepting money for an unlabeled advertisement.
  • Google Guidelines
    Google prohibits paid links that pass ranking credit.

Land The Link

“What’s in it for me” is a useful concept for convincing someone that it’s in their interest to do things your way. The other party wants something, so it’s worthwhile to make them feel as if they’re getting something out of the deal.

The approach I take for closing a deal, whether it’s a free link or an article project, is to circle back to the ask while focusing on communicating why my site is high quality and ways we can cross-promote. It’s essentially relationship building. The message is that your site is authoritative and well promoted, and that there are ways both sites can benefit without doing a straight link buy.

But at this point I want to emphasize again that any site asking for money in exchange for a link is not necessarily in a good neighborhood. You might not actually want a link from them if they’re linking out to low-quality sites.

Or Go For A Labeled Sponsored Post

However, another way to turn this around is to just go ahead and pay them, as long as it’s labeled as a sponsored post and contains multiple nofollow links and/or brand mentions. Sponsored posts get indexed by search engines and AI platforms, which will use those as validation for how great your site is and recommend it.

What’s beautiful about a labeled sponsored post is that it gives you full control over the messaging, which can be more valuable than a tossed-off link in a random paragraph. And because everything is disclosed and compliant, you reduce the long-term risk while still capturing visibility in AI Mode, ChatGPT, and Perplexity through the citation signals.
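For reference, a disclosed paid link is marked up with rel="sponsored" (Google also accepts rel="nofollow"). A minimal, hypothetical spot-check for a sponsored post’s markup, using only Python’s standard library, might look like this:

```python
from html.parser import HTMLParser

class PaidLinkChecker(HTMLParser):
    """Collect <a> tags that lack a rel value qualifying them as paid
    (rel="sponsored" per Google's guidance; "nofollow" is also accepted)."""
    def __init__(self):
        super().__init__()
        self.unqualified = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").split()
        if not ({"sponsored", "nofollow"} & set(rel)):
            self.unqualified.append(attrs.get("href"))

# Hypothetical sponsored-post snippet: the first link is qualified,
# the second would pass ranking credit and should be flagged.
snippet = """
<p>Sponsored post</p>
<a href="https://example.com/product" rel="sponsored">Brand name</a>
<a href="https://example.com/other">Undisclosed paid link</a>
"""
checker = PaidLinkChecker()
checker.feed(snippet)
print(checker.unqualified)  # ['https://example.com/other']
```

Running something like this against a draft before it goes live is a cheap way to confirm the post stays on the compliant side of the arrangement.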

Quid Pro No

Quid Pro No is about negatively responding to a solicitation and turning it around and getting something you want without actually saying the word no.

Featured Image by Shutterstock/Studio Romantic

Google Defends Parasite SEO Crackdown As EU Opens Investigation via @sejournal, @MattGSouthern

Google has defended its enforcement of site reputation abuse policies after the European Commission announced an investigation into whether the company unfairly demotes news publishers in search results.

The company published a blog post stating the investigation “is misguided and risks harming millions of European users” and that it “risks rewarding bad actors and degrading the quality of search results.”

Google’s Chief Scientist for Search, Pandu Nayak, wrote the response.

Background

The European Commission announced an investigation under the Digital Markets Act examining whether Google’s anti-spam policies unfairly penalize legitimate publisher revenue models.

Publishers complained that Google demotes news sites running sponsored content and third-party promotional material. EU antitrust chief Teresa Ribera said:

“We are concerned that Google’s policies do not allow news publishers to be treated in a fair, reasonable and non-discriminatory manner in its search results.”

Google updated its site reputation abuse policy last year to combat parasite SEO. The practice involves spammers paying publishers to host content on established domains to manipulate search rankings.

The policy targets content like payday loan reviews on educational sites, casino content on medical sites, or third-party coupon pages on news publishers. Google provided specific examples in its announcement including weight-loss pill spam and payday loan promotions.

Manual enforcement began shortly after. Google issued penalties to major publishers including Forbes, The Wall Street Journal, Time, and CNN in November 2024.

Google later updated the policy to clarify that first-party oversight doesn’t exempt content primarily designed to exploit ranking signals.

Google’s Defense

Google’s response emphasized three points.

First, Google stated that a German court dismissed a similar claim, ruling the anti-spam policy was “valid, reasonable, and applied consistently.”

Second, Google says its policy protects users from scams and low-quality content. Allowing pay-to-play ranking manipulation would “enable bad actors to displace sites that don’t use those spammy tactics.”

Third, Google says smaller creators support the crackdown. The company claims its policy “helps level the playing field” so legitimate sites competing on content quality aren’t outranked by sites using deceptive tactics.

Nayak argues the Digital Markets Act is already making Search “less helpful for European businesses and users,” and says the new probe risks rewarding bad actors.

The company has relied exclusively on manual enforcement so far. Google confirmed in May 2024 that it hadn’t launched algorithmic actions for site reputation abuse, only manual reviews by human evaluators.

Google added site reputation abuse to its Search Quality Rater Guidelines in January 2025, defining it as content published on host sites “mainly because of that host site’s already-established ranking signals.”

Why This Matters

The investigation creates a conflict between spam enforcement and publisher business models.

Google maintains parasite SEO degrades search results regardless of who profits. Publishers argue sponsored content with editorial oversight provides legitimate value and revenue during challenging times for media.

The distinction matters. If Google’s policy captures legitimate publisher-advertiser partnerships, it restricts how news organizations monetize content. If the policy only targets manipulative tactics, it protects search quality.

The EU’s position suggests regulators view Google’s enforcement as potentially discriminatory. The Digital Markets Act prohibits gatekeepers from unfairly penalizing others, with fines up to 10% of global revenue for violations.

Google addressed concerns about the policy in December 2024, confirming that affiliate content properly marked isn’t affected and that publishers must submit reconsideration requests through Search Console to remove penalties.

The updated policy documentation clarified that simply having third-party content isn’t a violation unless explicitly published to exploit a site’s rankings.

The policy has sparked debate in the SEO community about whether Google should penalize sites based on business arrangements rather than content quality.

Looking Ahead

The European Commission has opened the investigation under the Digital Markets Act and will now gather evidence and define the specific DMA provisions under examination.

Google will receive formal statements of objections outlining alleged violations. The company can respond with arguments defending its policies.

DMA investigations move faster than traditional antitrust cases. Publishers may submit formal complaints providing evidence of traffic losses and revenue impacts.

The outcome could force changes to how Google enforces spam policies in Europe or validate its current approach to protecting search quality.


Featured Image: daily_creativity/Shutterstock