Duda Announces ActiveCampaign Marketing Automation Integration via @sejournal, @martinibuster

Duda announced a partnership with ActiveCampaign, a marketing automation and CRM company. The partnership allows agencies to bring marketing automation to their client sites to help boost sales and customer engagement.

ActiveCampaign

ActiveCampaign is a marketing automation platform that makes it easy to cultivate post-sales engagement and repeat sales. It also functions as a CRM and enables sales automation, specifically for SMBs.

The integration within the Duda editor makes it easy for agencies that build sites on the Duda platform to bring advanced marketing automation to their client sites.

This is a major feature upgrade for Duda’s users because it allows agencies to add advanced capabilities like post-sale emails and personalized follow-up to cultivate repeat sales and increase engagement.

Integration With Duda

The partnership enables integration of Duda’s native contact form, eCommerce, and membership features with ActiveCampaign’s advanced marketing automation.

According to Shay Howe, chief marketing officer at ActiveCampaign:

“Partnering with Duda amplifies the value we bring to agencies and their clients by seamlessly integrating marketing automation into the web development process.

This collaboration empowers agencies and SaaS companies to deliver personalized, data-driven customer experiences without the need for complex technical setups, allowing them to focus on what truly matters — growing their customers’ businesses.”

Read more at:

Duda Partners With ActiveCampaign, Expanding Email Marketing and Automation Capabilities for Agencies Building Websites

Featured Image by Shutterstock/ViDI Studio

Google Offers Solutions for Inaccurate Product Pricing In Search via @sejournal, @MattGSouthern

In the latest edition of Google’s SEO office-hours Q&A video, Senior Search Analyst John Mueller addressed concerns about inaccurate prices in search results.

His advice may be helpful if you’re having similar problems with Google displaying the wrong prices for your products.

Ensuring Accurate Product Prices in Organic Search Results

One of the questions focused on how to ensure that accurate prices are displayed in organic text results for a site’s products.

In his response, Mueller drew attention to Google Merchant Center feeds.

“I’d recommend using the Merchant Center feeds if you can,” Mueller advised.

He pointed out that Merchant Center offers ways to submit pricing data directly.

He recommended retailers look into this option, implying it’s a low-effort way to ensure accurate prices across search results.

Mueller continued:

“There are ways to submit pricing information in Merchant Center that don’t require a lot of work, so check that out. If you can’t find ways to resolve this, then please drop us a note in the help forums with the details needed to reproduce the issue.”

For those unfamiliar, Google Merchant Center is a tool that allows businesses to upload their product data to Google, making it available for Shopping ads, free listings, and other Google services.

This gives retailers more control over how their product information, including prices, appears across Google’s ecosystem.
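To make that concrete, here is a minimal, hedged sketch of a tab-delimited product feed of the kind Merchant Center accepts. The SKU, product, and URL are placeholders, and the attribute names should be checked against Google’s current product data specification before use:

```python
import csv

# Minimal sketch of a tab-delimited Merchant Center product feed.
# The attribute names (id, title, link, price, availability) follow
# Google's product data specification; the values are placeholders.
# Note that the price field carries the currency code explicitly.
products = [
    {
        "id": "SKU-1001",
        "title": "Example Running Shoe",
        "link": "https://www.example.com/products/sku-1001",
        "price": "89.99 USD",
        "availability": "in stock",
    },
]

with open("feed.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(products[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(products)
```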

Addressing Currency Discrepancies in Rich Results

Another question during the session concerned wrong currencies showing up in rich results.

This can impact how shoppers interact with search results, leading to confused customers and lost sales.

Mueller said this problem stems from Google’s systems potentially viewing pages as duplicates, especially when content is nearly identical across different regional site versions.

He explained,

“Often this is a side effect of Google systems seeing the page as being mostly duplicate. For example, if you have almost exactly the same content on pages for Germany and Switzerland, our systems might see the pages as duplicates, even if there’s a different price shown.”

To resolve this issue, Mueller suggests:

  1. Differentiate content: Pages for different regions or currencies should have sufficiently different content to avoid being flagged as duplicates.
  2. Use Merchant Center: As with the previous question, Mueller recommended using Merchant Center feeds for pricing information instead of relying solely on structured data (a sketch of that structured data follows this list).
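For context on point 2, “structured data” here generally means schema.org Product markup with an explicit priceCurrency. The sketch below, with placeholder values, shows how such JSON-LD might be generated; Merchant Center remains Mueller’s preferred route for pricing:

```python
import json

# Minimal sketch of Product markup with an explicit priceCurrency,
# the property that declares which currency a price is in.
# All values are placeholders; the property names are schema.org's.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "CHF",  # e.g. the Swiss version of the page
        "availability": "https://schema.org/InStock",
    },
}

# This JSON would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(product_markup, indent=2))
```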

Key Takeaways

This Q&A highlights a common challenge for online stores: Getting Google to show the correct prices in search results.

To help Google get it right, retailers should:

  • Think about using Google Merchant Center to feed in more accurate prices.
  • Ensure different country product page versions aren’t too similar to avoid duplicate content problems.
  • Monitor how prices and currencies look in search results and rich snippets.
  • Use Google’s help forums if there are problems you can’t fix.

Listen to the full Q&A session below:


Featured Image: Tada Images/Shutterstock

Google Casts Doubt On Popular SEO Audit Advice via @sejournal, @martinibuster

Google’s Martin Splitt questioned the usefulness of specific suggestions made by SEO auditing tools, noting that while some advice may be valid, much of it has little to no impact on SEO. He acknowledged that these audits can be valuable for other purposes, but their direct influence on SEO is limited.

Automated SEO Audits

This month’s Google SEO Office Hours had two hosts, John Mueller and Martin Splitt. It sounded like Martin Splitt answered this question, and the technical level of the answer seems to confirm it.

The person asking the question wanted to know whether they should proceed with suggestions made by automated SEO tools when those suggestions don’t match anything in Google’s documentation.

The person asked:

“I run several free website audits, some of them suggested me things that were never mentioned in the search central documentation. Do these things matter for SEO?”

Martin Splitt On Automated SEO Audits

Martin’s answer acknowledged that some of the suggestions made by SEO audit tools aren’t relevant to SEO.

He answered:

“A lot of these audits don’t specifically focus on SEO and those that don’t still mention a bunch of outdated or downright irrelevant things. unfortunately.

I’ll give you some examples. The text to code ratio, for instance, is not a thing. Google search doesn’t care about it.”

Text-to-code ratio is an analysis of how much code there is on a page compared to how much text. I believe there was a Microsoft research paper in the early 2000s on the statistical analysis of spam sites, and one of the noted qualities of spammy pages was that they contained more text than code. That might be where the idea came from.
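For readers unfamiliar with the metric, here is a rough sketch, purely illustrative and not anything Google uses, of how an audit tool might calculate a text-to-code ratio:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def text_to_code_ratio(html: str) -> float:
    """Visible text length divided by total page size."""
    extractor = TextExtractor()
    extractor.feed(html)
    text_length = len("".join(extractor.parts).strip())
    return text_length / max(len(html), 1)

page = "<html><head><style>body{margin:0}</style></head><body><p>Hello world</p></body></html>"
print(f"text-to-code ratio: {text_to_code_ratio(page):.0%}")
```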

But back in the day (before WordPress) I used to create PHP templates that weighed mere kilobytes, a fraction of what a typical featured image weighs, and it never stopped my pages from ranking, so I knew first-hand that text-to-code ratio was not a thing.

Next, he mentioned minification of CSS and JavaScript. Minification condenses code by removing unnecessary spaces and line breaks, resulting in a smaller file.

He continued his answer:

“CSS, JavaScript, not minified that you got apparently as well is suboptimal for your users because you’re shipping more data over the wire, but it doesn’t have direct implications on your SEO. It is a good practice though.”
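To make the distinction concrete, here is a deliberately naive sketch of what minification does to a stylesheet; real build tools such as cssnano or terser go much further, but the file-size idea is the same:

```python
import re

def naive_minify(css: str) -> str:
    """Crude illustration only: strip comments, line breaks, and
    redundant whitespace. Real minifiers do far more."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # trim around punctuation
    return css.strip()

stylesheet = """
/* page header */
.header {
    color: #333;
    margin: 0 auto;
}
"""
print(naive_minify(stylesheet))  # .header{color:#333;margin:0 auto;}
```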

SEO Is Subjective

Some people believe that SEO practices are an objective set of clearly defined, black-and-white rules about how to “properly” SEO a site. The reality is that, except for what Google has published in official documentation, SEO is largely a matter of opinion.

The word “canonical” means a known standard that is accepted and recognized as authoritative. Google’s Search Central documentation sets a useful baseline for what can be considered canonical SEO: the practices that can be agreed upon as verified to be true.

The word “orthodox” refers to beliefs and practices that are considered traditional and conventional. A large part of what SEOs consider best practices is orthodox in that it is based on beliefs and traditions; it’s what everyone says is the right way to do it.

The problem with orthodox SEO is that it doesn’t evolve. People do it a certain way because it’s always been done that way. A great example is keyword research, an SEO practice that’s literally older than Google but practiced largely the same way it’s always been done.

Other examples of decades-old SEO orthodoxy are:

  • Meta descriptions should be under 164 characters
  • Belief that keywords are mandatory in titles, headings, meta descriptions, and alt tags
  • Belief that titles should be “compelling” and “click-worthy”
  • Belief that H1 is a strong SEO signal

Those are the things that were important twenty years ago and became part of the orthodox SEO belief system, but they no longer impact how Google ranks websites (and some of those never did) because Google has long moved beyond those signals.

Limitations Of Google’s Documentation

Martin Splitt encouraged cross-referencing official Google documentation with advice given by SEO auditing tools to be certain that the recommendations align with Google’s best practices, which is a good suggestion that I agree with 100%.

However, Google’s official documentation is purposely limited in scope because they don’t tell SEOs how to impact ranking algorithms. They only show best practices for optimizing a site so that search engines can understand the pages, the pages are easily indexed, and the content is useful for site visitors.

Google has never shown how to manipulate their algorithms, which is why relatively noob SEOs who analyzed Google’s Search Quality Raters guidelines fell short and eventually had to retract their recommendations for creating “authorship signals,” “expertise signals” and so on.

SEJ Has Your Back On SEO

I’ve been in this business long enough to have experienced firsthand that Google is scrupulous about not divulging algorithm signals, not in their raters guidelines, not in their search operators, not in their official documentation. To this day, despite the so-called leaks, nobody knows what “helpfulness signals” are.  Google only shares the general outlines of what they expect and it’s up to SEOs to figure out what’s canonical, what’s outdated orthodoxy and what’s flat out making things up out of thin air.

One of the things I like about Search Engine Journal’s SEO advice is that the editors make an effort to put out the best information, even if it conflicts with what many might assume. It’s SEJ’s opinion but it’s an informed opinion.

Listen to the question and answer at the 11:56 minute mark:

Featured Image by Shutterstock/Ljupco Smokovski

Google: Search Console Data Survives After Domain Expiration via @sejournal, @MattGSouthern

In the August edition of Google’s SEO office-hours Q&A video, John Mueller, Senior Search Analyst at Google, tackled a unique question about domain expiration and Search Console data.

The question highlights the potential risks to Search Console data when domains change hands.

A site owner facing the expiration of their domain and loss of hosting raised concerns about future domain owners’ potential misuse of their Search Console data.

They asked how to remove all URLs associated with the domain to prevent misuse after expiration.

Mueller’s Response

Mueller explained several important points about Search Console data:

  • Search Console info sticks with the website, not the user. New owners who prove they own the site can see all the old data.
  • There’s no “delete all” button for Search Console data.
  • To control the data, you need to keep owning the domain name.
  • If you keep the domain, you can regain ownership in Search Console without losing any old data.
  • If you’ve already taken down the website, you can use domain verification in Search Console to ask Google to temporarily hide it from search results. This doesn’t erase it from Google’s records; it just keeps it out of sight for a while.
  • Mueller suggests telling the buyer if you have any active removal requests when selling a domain. This way, they can undo it if they want.

His full response:

“This is an interesting question that I don’t think we’ve run across yet. The data in search console is not tied to users, so anyone who verifies a site later on will see that data. There’s no way to reset the data shown there, so you’d have to prevent the domain name from expiring. The advantage of this process is that you can reverify in search console without any data loss.

To remove all content from search for a site that’s already removed from the server you can use the domain verification for search console and submit a temporary site removal request. This doesn’t remove the site from the index, but it will prevent it from being shown for a period of time.

If you’re selling the domain name it would be nice to tell the new owner of this removal request so that they can cancel it.”

Why This Matters

This topic is relevant for all website owners, especially those who might sell or lose their domain. It shows how Search Console data is retained from owner to owner.

It also reminds us to be careful with domain names and search data when ownership changes hands.

What To Do With This Info

  • If you plan to let your domain name expire, remember that whoever buys it next can see your old Search Console data.
  • Even if you’re not using your website anymore, it might be worth keeping the domain name to control who sees your Search Console info.
  • If you can’t access your website anymore, you can use Search Console to ask Google to hide it from search results for a while.
  • If you’re selling your domain, tell the buyer about any requests you’ve made to hide the site from search and about the old data in Search Console.

Understanding these points can help you protect your data and manage how content appears in search, even when domain ownership changes.

Hear the full question and answer below:

Google’s Mueller On August Core Update vs. Ranking Bug Effects via @sejournal, @MattGSouthern

Google recently rolled out two changes that affected website rankings: the August 2024 core update and a fix for a separate ranking problem. These happened around the same time, making it tricky for SEO experts to figure out which change caused what effect on their sites’ search rankings.

John Mueller, Senior Search Analyst at Google, commented on LinkedIn to help clarify things.

The conversation started when Rüdiger Dalchow asked how to tell apart the effects of the core update from those of the ranking issue fix since they happened so close together.

Mueller: Wait Before Analyzing Changes

Mueller’s main advice was to be patient.

He suggests not trying to separate the effects of the ranking fix from those of the core update while everything is still settling.

He pointed out that it’s normal to see rankings fluctuate during this time.

He stated:

“You’d really need to wait until the core update finishes rolling out to make any call about its effect. That’s not to say you should wait with working on your website, it’s just if you want to compare before vs after, waiting for it to be finished is important. From looking at social posts from SEOs tracking these things, there are often fluctuations during the rollout, I don’t think it’s worth trying to separate out the effects from the ranking issue.”

Mueller: Don’t Wait To Improve Your Website

Mueller said not to put off improving your website, but he stressed that it’s important to let the core update finish before jumping to conclusions about how it affected your site.

This is consistent with Google’s usual advice about core updates: wait until they’re fully rolled out before you examine what changed.

In the same conversation, Hans Petter Blindheim suggested adding a special notice in Google Search Console during core updates or when issues occur.

Mueller responded carefully, saying:

“Most websites don’t see big changes, so I’m hesitant to make it seem like an even bigger deal for them.”

He mentioned that Chrome add-ons are available for those who want to monitor these events more closely.

Why This Matters

This conversation shows how complicated Google’s search updates can be and how challenging it is to pinpoint exactly what caused changes in rankings.

While the August 2024 core update is rolling out, Mueller advises focusing on the big picture of site quality instead of figuring out which specific change caused what.

More Information

For those wanting to learn more, Google has recently updated its advice on core updates. This new guide gives more detailed tips for websites affected by these changes.

It includes a step-by-step walkthrough on using Search Console to check if your traffic has dropped and stresses making user-focused improvements to your content.

As always, we’ll monitor this core update closely over the next few weeks. Once it’s fully rolled out, we should have a clearer idea of how it’s affected websites.


Featured Image: Daniel Pawer/Shutterstock

Google Quietly Launches New AI Crawler via @sejournal, @martinibuster

Google quietly added a new bot to their crawler documentation that crawls on behalf of commercial clients of their Vertex AI product. It appears that the new crawler may only crawl sites controlled by the site owners, but the documentation isn’t entirely clear on that point.

Vertex AI Agents

Google-CloudVertexBot, the new crawler, ingests website content for Vertex AI clients, unlike other bots listed in the Search Central documentation that are tied to Google Search or advertising.

The official Google Cloud documentation offers the following information:

“In Vertex AI Agent Builder, there are various kinds of data stores. A data store can contain only one type of data.”

It goes on to list six types of data, one of which is public website data. Regarding crawling, the documentation says there are two kinds of website indexing, each with its own limitations:

  1. Basic website indexing
  2. Advanced website indexing

Documentation Is Confusing

The documentation explains website data:

“A data store with website data uses data indexed from public websites. You can provide a set of domains and set up search or recommendations over data crawled from the domains. This data includes text and images tagged with metadata.”

The above description doesn’t say anything about verifying domains. The description of Basic website indexing doesn’t say anything about site owner verification either.

But the documentation for Advanced website indexing does say that domain verification is required and also imposes indexing quotas.

However, the documentation for the crawler itself says that the new crawler crawls on the “site owners’ request” so it may be that it won’t come crawling public sites.

Now here’s the confusing part: the changelog notation for this new crawler indicates that it could come to scrape your site.

Here’s what the changelog says:

“The new crawler was introduced to help site owners identify the new crawler traffic.”

New Google Crawler

The new crawler is called Google-CloudVertexBot.

This is the new information on it:

“Google-CloudVertexBot crawls sites on the site owners’ request when building Vertex AI Agents.

User agent tokens

  • Google-CloudVertexBot
  • Googlebot”

User agent substring: Google-CloudVertexBot

Unclear Documentation

The documentation seems to indicate that the new crawler doesn’t index public sites, but the changelog indicates that it was added so that site owners can identify traffic from the new crawler. Should you block the new crawler with robots.txt just in case? It’s not unreasonable to consider, given that the documentation is fairly unclear on whether it only crawls domains that are verified to be under the control of the entity initiating the crawl.
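If you do decide to block it, a standard robots.txt group keyed to the Google-CloudVertexBot user agent token is all it takes. The sketch below, with a placeholder domain, uses Python’s urllib.robotparser to sanity-check the rule:

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks only the new Vertex AI crawler
# while leaving everything else open.
robots_txt = """\
User-agent: Google-CloudVertexBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("Google-CloudVertexBot", "https://www.example.com/anything"))  # False
print(parser.can_fetch("Googlebot", "https://www.example.com/anything"))              # True
```

Keep in mind that Google’s documentation also lists Googlebot as a user agent token for this crawler, so verify the behavior against your live robots.txt rather than taking this sketch at face value.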

Read Google’s new documentation:

Google-CloudVertexBot

Featured Image by Shutterstock/ShotPrime Studio

Meta’s New Ad Tools Promise More Precise Customer Targeting via @sejournal, @MattGSouthern

Meta is rolling out ad platform upgrades for Facebook and Instagram.

The updates, coming in the next few months, focus on boosting performance and customization through AI-powered campaign optimization.

New Features For Precise Value Definition

Meta is rolling out a new “Conversion Value Rules” tool to give advertisers more flexibility.

This feature lets you adjust the value of different customer actions or groups to your business within a single campaign.

Let’s say you know some customers tend to spend way more over time. Now, you can tell the system to bid higher for those folks without setting up a separate campaign.

Incremental Attribution Model

Meta plans to introduce a new optional attribution setting later this year. This feature will focus on what it terms “incremental conversions.”

Instead of maximizing the total number of attributed conversions, this new model aims to optimize ad delivery for conversions likely to occur only because of ad exposure.

In other words, the model identifies and targets potential customers who wouldn’t have converted without seeing the advertisement.

Initial tests of this feature have yielded positive results. Advertisers participating in these trials have observed an average increase of over 20% in incremental conversions.

Enhanced Analytics Integration

Meta is launching direct connections with external analytics platforms, starting now and continuing through 2025. They’re kicking off with Google Analytics and Northbeam and plan to add Triple Whale and Adobe later.

These connections let businesses share combined campaign data from different channels with Meta’s ad system. The goal is to give advertisers a complete picture of how their campaigns perform across various platforms.

By getting this broader data set, Meta expects to fine-tune its AI models and help advertisers run more effective campaigns.

Cross-Publisher Journey Optimization

Meta is using what it’s learned from its early connections with analytics tools to update its ad system. These changes consider how customers interact with ads across different platforms before purchasing.

Early tests of this update have been positive. On average, third-party analytics tools show a 30% increase in conversions attributed to Meta ads. However, advertisers might see higher costs per thousand impressions (CPMs).

Right now, this update is being applied to campaigns that aim to increase the number or value of conversions under the sales objective. Meta plans to extend this to other campaign objectives soon.

Google Analytics Integration: What It Means

The Google Analytics connection is big news for industry folks, as it could offer the following benefits:

  • Unified view of Meta ads and overall site performance
  • Better multi-touch attribution
  • Insights to refine SEO strategy based on paid social impact
  • Smarter budget decisions between paid social and SEO
  • Easier reporting
  • Cross-channel optimization opportunities

This integration blurs the lines between paid social, organic social, and SEO, offering a more holistic view of digital marketing efforts.

Why This Matters

As privacy changes shake up digital advertising, Meta’s updates address the need for more accurate, valuable insights.

The move towards AI-driven features and cross-channel integration marks a new era in ad sophistication.

To make the most of these updates, review your Meta ad strategy and clearly define your customer journey and value metrics.

Stay tuned for the rollout, and be ready to test these new features as they become available.


Featured Image: Cristian Valderas/Shutterstock

Google’s Gary Illyes Continues To Warn About URL Parameter Issues via @sejournal, @MattGSouthern

Google’s Gary Illyes recently highlighted a recurring SEO problem on LinkedIn, echoing concerns he’d previously voiced on a Google podcast.

The issue? URL parameters cause search engines difficulties when they’re crawling websites.

This problem is especially challenging for big sites and online stores. When different parameters are added to a URL, it can result in numerous unique web addresses that all lead to the same content.

This can impede search engines, reducing their efficiency in crawling and indexing sites properly.

The URL Parameter Conundrum

In both the podcast and LinkedIn post, Illyes explains that URLs can accommodate infinite parameters, each creating a distinct URL even if they all point to the same content.

He writes:

“An interesting quirk of URLs is that you can add an infinite (I call BS) number of URL parameters to the URL path, and by that essentially forming new resources. The new URLs don’t have to map to different content on the server even, each new URL might just serve the same content as the parameter-less URL, yet they’re all distinct URLs. A good example for this is the cache busting URL parameter on JavaScript references: it doesn’t change the content, but it will force caches to refresh.”

He provided an example of how a simple URL like “/path/file” can expand to “/path/file?param1=a” and “/path/file?param1=a&param2=b”, all potentially serving identical content.

“Each [is] a different URL, all the same content,” Illyes noted.
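A minimal sketch with Python’s urllib shows both the problem and a common mitigation: the URLs are distinct strings, but collapsing the query string down to the parameters that actually change content yields one canonical form (the parameter allowlist here is a placeholder):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Parameters that actually change the content get kept; anything else
# (tracking codes, cache busters, and so on) is dropped. This allowlist
# is a placeholder -- every site has its own.
MEANINGFUL_PARAMS = {"id", "page"}

def normalize(url: str) -> str:
    """Collapse parameter variants of a URL into one canonical form."""
    parts = urlsplit(url)
    kept = sorted((k, v) for k, v in parse_qsl(parts.query) if k in MEANINGFUL_PARAMS)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

urls = [
    "https://www.example.com/path/file",
    "https://www.example.com/path/file?param1=a",
    "https://www.example.com/path/file?param1=a&param2=b",
]
print(len(set(urls)))                     # 3 -- three distinct URLs
print(len({normalize(u) for u in urls}))  # 1 -- one canonical form, same content
```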

Accidental URL Expansion & Its Consequences

Search engines can sometimes find and try to crawl non-existent pages on your site, which Illyes calls “fake URLs.”

These can pop up due to things like poorly coded relative links. What starts as a normal-sized site with around 1,000 pages could balloon to a million phantom URLs.

This explosion of fake pages can cause serious problems. Search engine crawlers might hit your servers hard, trying to crawl all these non-existent pages.

This can overwhelm your server resources and potentially crash your site. Plus, it wastes the search engine’s crawl budget on useless pages instead of your content.

In the end, your pages might not get crawled and indexed properly, which could hurt your search rankings.

Illyes states:

“Sometimes you might create these new fake URLs accidentally, exploding your URL space from a balmy 1000 URLs to a scorching 1 million, exciting crawlers that in turn hammer your servers unexpectedly, melting pipes and whistles left and right. Bad relative links are one relatively common cause. But robotstxt is your friend in this case.”
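To illustrate the bad-relative-link failure mode Illyes mentions, here is a small sketch using urllib.parse.urljoin with a hypothetical site; if the server answers every depth with the same page, a crawler can keep discovering “new” URLs indefinitely:

```python
from urllib.parse import urljoin

# A page under /shop/ links to "shop/shoes" instead of "/shop/shoes".
# Each time the link is followed, it resolves against a deeper base URL,
# minting a brand-new "fake" URL. If the server returns the same page
# (with the same broken link) at every depth, the crawl never runs out.
bad_relative_link = "shop/shoes"  # hypothetical broken link
url = "https://www.example.com/shop/"

for _ in range(4):
    url = urljoin(url, bad_relative_link)
    print(url)

# https://www.example.com/shop/shop/shoes
# https://www.example.com/shop/shop/shop/shoes
# ... and so on, one level deeper every hop
```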

E-commerce Sites Most Affected

The LinkedIn post didn’t specifically call out online stores, but the podcast discussion clarified that this issue is a big deal for ecommerce platforms.

These websites typically use URL parameters to handle product tracking, filtering, and sorting.

As a result, you might see several different URLs pointing to the same product page, with each URL variant representing color choices, size options, or where the customer came from.

Mitigating The Issue

Illyes consistently recommends using robots.txt to tackle this issue.

On the podcast, Illyes highlighted possible fixes, such as:

  • Creating systems to spot duplicate URLs
  • Better ways for site owners to tell search engines about their URL structure
  • Using robots.txt in smarter ways to guide search engine bots

The Deprecated URL Parameters Tool

In the podcast discussion, Illyes touched on Google’s past attempts to address this issue, including the now-deprecated URL Parameters tool in Search Console.

This tool allowed websites to indicate which parameters were important and which could be ignored.

When asked on LinkedIn about potentially bringing back this tool, Illyes was skeptical about its practical effectiveness.

He stated, “In theory yes. in practice no,” explaining that the tool suffered from the same issues as robots.txt, namely that “people couldn’t for their dear life figure out how to manage their own parameters.”

Implications for SEO and Web Development

This ongoing discussion from Google has several implications for SEO and web development:

  1. Crawl Budget: For large sites, managing URL parameters can help conserve crawl budget, ensuring that important pages are crawled and indexed.
  2. Site Architecture: Developers may need to reconsider how they structure URLs, particularly for large e-commerce sites with numerous product variations.
  3. Faceted Navigation: E-commerce sites using faceted navigation should be mindful of how this impacts URL structure and crawlability.
  4. Canonical Tags: Canonical tags help Google understand which URL version should be considered primary.

Why This Matters

Google is discussing URL parameter issues across multiple channels, which indicates a genuine concern for search quality.

For industry experts, staying informed on these technical aspects is essential for maintaining search visibility.

While Google works on solutions, proactive URL management and effective crawler guidance are recommended.

Google’s John Mueller On Removing Unwanted Content From Search via @sejournal, @MattGSouthern

Google’s John Mueller explained on Reddit how to remove unwanted content from search results.

This came up when someone asked about getting rid of an old article about their arrest that kept showing up in Google searches.

The person was arrested for a minor offense in 2018, but a news article appears in Google searches years later.

Even though the case was settled, the article is still on the first page of results, and the person wants it removed.

What can they do? Here’s what Mueller advised.

Mueller’s Guidance On Getting Content Removed

Mueller explained that even though the news outlet said they “de-indexed” the article, this process isn’t always quick or simple.

He suggested a few ways to tackle the issue:

  1. Complete Takedown: The news outlet declined to remove the article, but removal is the most effective option: the page is taken down entirely and returns a 404 error.
  2. Noindex Tag: This is probably what the news outlet did. It keeps the article on its site but tells search engines to ignore it. Mueller advised checking the page’s code for this tag (a sketch of how to do that follows this list).
  3. Name Swap: Mueller suggested asking the news outlet to replace the person’s name with something generic like “John Doe” as a workaround. This could make the article harder to find in name searches.
  4. Right to be Forgotten: For folks in some areas, especially Europe, this legal option might help.
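For anyone who wants to verify point 2 themselves, here is a rough, standard-library-only sketch of checking a page for a robots noindex directive; the URL is a placeholder, and noindex can also arrive via an X-Robots-Tag response header, which the sketch checks as well:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())

url = "https://news.example.com/2018/arrest-story"  # placeholder URL

with urlopen(url) as response:
    header_directives = response.headers.get("X-Robots-Tag", "").lower()
    page = response.read().decode("utf-8", errors="replace")

finder = RobotsMetaFinder()
finder.feed(page)

is_noindexed = "noindex" in header_directives or any(
    "noindex" in directive for directive in finder.directives
)
print(is_noindexed)  # True if the page carries a noindex directive
```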

Regarding the article still showing up in searches, Mueller said that even after de-indexing, it can take up to six months for a page to disappear from results:

“Regarding how long it takes to “see” a noindex, there’s no specific time, but it’s usually less than a few months. I think I’ve seen it take up to 6 months. They’re not kept in the index forever without being refreshed. If you use the public removal tool (for non-site-owners), Google will check the page fairly quickly (within a few days) and use that to confirm that the page has a noindex.”

He assured that pages don’t stay indexed forever without being rechecked.

Mueller mentioned that while some “hidden” de-indexing methods exist, they’re not common.

He recommended using Google’s public removal tool, which allows Google to recheck the page within days. This might speed things up if the news outlet has properly de-indexed the article.

Mueller stated:

“It’s really rare (I can’t think of any case in the last year or so) that someone would use a kind of “hidden” noindex; it’s complicated to set up & maintain. Most sites just use the visible robots meta tag for switching things to noindex, which you would be able to see yourself fairly quickly. If you use the removal tool, Google will also see any “more hidden” noindex settings.”

This advice gave the person a better grasp of their situation and possible next moves to deal with their lingering online content problem.

Tools for Content Removal

Mueller listed two main ways to get rid of content from search results:

  • For website owners: The Removals and SafeSearch reports tool
  • For everyone else: The Refresh Outdated Content tool

If you own the site, Google removes the content on request.

For non-owners, Google does a few checks before taking anything down.

Mueller said using these tools won’t accidentally make your page show up more in searches.

He stated:

“The removal tool for site-owners has a help page titled “Removals and SafeSearch reports Tool”, subtitle “Temporarily block search results from your site, or manage SafeSearch filtering”. (Site-owner = the person running the website, in their Search Console account)

The public removal tool for non-site-owners is titled “Refresh Outdated Content tool” / subtitle: “Request an update to outdated content in Google Search results” (non-site-owner would be someone who doesn’t work on the website themselves, like you).

The site-owner tool will process a removal very quickly, without checking if the page is actually noindex or not. The assumption is that as the site-owner, you can block whatever you want. If they’re willing to do this for you, that’s the fastest way.

For non-site-owners, the tool will check multiple times to confirm that the page is removed, noindex, or appropriately changed. It won’t do anything until it has confirmed that, so there’s no harm in trying it. Neither of these tools will make a page more visible (SEOs would love that). The tools are also labeled as “temporary” removals – because if the page becomes indexable again, it can show up again in search.”

Why This Matters

This shows how difficult it can be to manage what people see about you online.

While Google offers ways to remove old or unwanted articles, it can take a while, and sometimes, the publisher must cooperate.

Featured Image: tomeqs/Shutterstock