Google: It’s Easy To Mess Up A Site Revamp via @sejournal, @martinibuster

Google’s John Mueller answered a question about the SEO effects of making user interface (UI) and user experience (UX) changes to a website, advising that it’s better to plan in advance because it takes far longer to fix problems if the changes break the SEO.

User Interface (UI) And User Experience (UX)

The UI and UX are major elements of a website that determine how easy it is for site visitors to accomplish what they came for, ultimately affecting user satisfaction.

User interface elements govern how site visitors interact with a website, such as navigation menus, input forms, and informational icons.

User experience elements cover a wider range of considerations, including accessibility, design consistency, mobile responsiveness, readability, site speed, and usability, all of which, in a well-considered design, have a positive impact on user satisfaction.

These elements can affect SEO in ways ranging from technical issues, such as how Google crawls the page, to on-page ranking considerations, such as how content is displayed and consequently understood on a webpage.

Making sitewide UI and UX changes, in addition to adding new webpages, is a major undertaking and should not be approached lightly.

Google Office Hours Site Changes

Google’s John Mueller read the submitted question:

“Anjaney asks: We’re preparing to launch a new website design for my company, involving UI/UX improvements and new pages.

Is it better to change the page design one at a time?”

Mueller answered:

“One complexity is that a relaunch can mean so many different things, from just shifting a website to a new server and leaving everything else the same, to changing domain names and all of the content as well.

First, you need to be absolutely clear what’s changing on the website.

Ideally map out all changes in a document, and annotate which ones might have SEO implications.

If you’re changing URLs, we have some great guidance on handling site migrations in our documentation.

If you’re changing the content or the UI, of course that will affect SEO too.

If you’re unsure about the effects, I’d strongly recommend getting help from someone more experienced – it’s easy to mess up a bigger revamp or migration in terms of SEO, if it’s done without proper preparation.

Even with everything done properly, I get nervous when I see them being done.

Fixing a broken migration will take much more time and effort than preparing well. In any case, good luck!”

Plan Before Making Changes

As Mueller recommends, it’s wise to create a plan for how the changes will roll out. In particular, document the state of the website before any changes are made, create a backup, and use a staging environment.

1 – Crawl The Website

An important step prior to making major changes to a website is to crawl it with an app like Screaming Frog. Crawl the website in its original form, then crawl the updated version (preferably before it goes live).

The crawl data can be used to identify a range of possible issues that can be fixed before the site goes live.

Things that can be checked:

  • Spot missing pages
  • Catch misconfigured links
  • Find missing title and meta description elements
  • Review changes to internal linking patterns
  • Catch 404 errors
  • Check that 301 redirects are in place and functioning properly (see the sketch after this list)
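
As an illustration of the redirect check, here is a minimal Python sketch (the URL mapping is hypothetical) that confirms each old URL returns a 301 and ends up at the expected new URL:

import requests

# Hypothetical mapping of old URLs to the new URLs they should 301 to.
redirect_map = {
    "https://example.com/old-page": "https://example.com/new-page",
    "https://example.com/old-category/": "https://example.com/new-category/",
}

for old_url, expected_url in redirect_map.items():
    # Fetch without following redirects so the initial status code is visible.
    response = requests.get(old_url, allow_redirects=False, timeout=10)
    if response.status_code != 301:
        print(f"{old_url}: expected 301, got {response.status_code}")
        continue
    # Follow the full redirect chain to confirm the final destination.
    final_url = requests.get(old_url, timeout=10).url
    if final_url != expected_url:
        print(f"{old_url}: ends at {final_url}, expected {expected_url}")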

2 – Create A Backup Of The Website

Always have multiple backups of a website. There are so many things that can go wrong with a website backup. I’ve learned the hard way to have multiple redundant backups.

For example, a consultant who was working on my server downloaded the images using the wrong transfer type, corrupting hundreds of thousands of images. Fortunately, I had backups on my desktop and duplicate backups on the server itself, so the images could be recovered.

I have over 20 years of experience managing my own websites and dealing with client websites. The value of making multiple backups, including backups of backups stored on separate media, cannot be overstated.

3 – Stage The Website

It’s a best practice to stage the new website so that any changes can be reviewed before they make it to the live server.

The live version of a site is referred to as the production environment, and the non-live duplicate used for development and testing is called the staging environment.

Ideally, the staging environment is on a separate server, or at least in a separate location on the same server, so that there is no chance of changes on the staging environment accidentally making it to the live (production) environment.

The staging environment is used for development and quality assurance testing before changes are implemented on the live website. The main goal of staging is to identify and fix issues, technical bugs, or errors before they are carried over to the live version of the website (the production environment).

More Website Migration Resources

Managing Successful SEO Migrations

Enterprise-level Site Migrations

Essential Steps For A Seamless Website Migration

Mistakes To Avoid With Site Migrations

Featured Image by Shutterstock/Roman Samborskyi

WordPress Google Fonts Plugin Vulnerability Affects Up To +300,000 Users via @sejournal, @martinibuster

A vulnerability rated as High was recently patched in a Google Fonts optimization plugin for WordPress, allowing attackers to delete entire directories and upload malicious scripts.

OMGF | GDPR/DSGVO Compliant WordPress Plugin

The plugin, OMGF | GDPR/DSGVO Compliant, Faster Google Fonts. Easy., optimizes the use of Google Fonts to reduce page speed impact and is also GDPR compliant, making it valuable for users in the European Union looking to implement Google Fonts.

Screenshot of Wordfence Vulnerability Rating

Vulnerability

The vulnerability is particularly concerning because it can be exploited by unauthenticated attackers. “Unauthenticated” means that an attacker doesn’t need to be registered on the website or have any level of credentials.

The vulnerability is described as enabling unauthenticated directory deletion and allowing the upload of Cross-Site Scripting (XSS) payloads.

Cross-Site Scripting (XSS) is a type of attack where a malicious script is uploaded to a website server, which can then be used to remotely attack the browsers of any visitors. This can result in accessing a user’s cookies or session information, enabling the attacker to assume the privilege level of that user visiting the site.

The cause of the vulnerability, as identified by Wordfence researchers, is a lack of a capability check – a security feature that checks whether a user has access to a specific feature of a plugin, in this case, an admin-level feature.

An official WordPress developer page for plugin makers says this about capability checking:

“User capabilities are the specific permissions that you assign to each user or to a User role.

For example, Administrators have the “manage_options” capability which allows them to view, edit and save options for the website. Editors on the other hand lack this capability which will prevent them from interacting with options.

These capabilities are then checked at various points within the Admin. Depending on the capabilities assigned to a role; menus, functionality, and other aspects of the WordPress experience may be added or removed.

As you build a plugin, make sure to run your code only when the current user has the necessary capabilities.”
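
WordPress plugins are written in PHP, but the pattern behind a capability check is language-agnostic. As a rough, hypothetical sketch in Python (none of these names come from the plugin), the fix for this class of bug amounts to verifying the caller’s capability before any privileged action runs:

# Hypothetical sketch of a capability check guarding a settings update.
def update_settings(current_user, new_settings, settings_store):
    # Without this check, any visitor who can reach the hook could
    # modify the settings - the class of flaw Wordfence describes.
    if "manage_options" not in current_user.capabilities:
        raise PermissionError("user lacks the manage_options capability")
    settings_store.update(new_settings)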

Wordfence describes the cause of the vulnerability:

“…is vulnerable to unauthorized modification of data and Stored Cross-Site Scripting due to a missing capability check on the update_settings() function hooked via admin_init in all versions up to, and including, 5.7.9.”

Wordfence also states that previous updates attempted to close the security gap but considers version 5.7.10 to be the most secure version of the plugin.

Read the Wordfence vulnerability warning:

OMGF | GDPR/DSGVO Compliant, Faster Google Fonts. Easy. <= 5.7.9 – Missing Authorization to Unauthenticated Directory Deletion and Cross-Site Scripting

Are Homepages The Most Important To Google? via @sejournal, @martinibuster

A recent statement by a Googler appears to strongly emphasize that the home page is the most important page of a website to Google. That differs somewhat from how some in the search community think about which page matters most.

It’s commonly understood that links play an important role in telling Google which pages of a website are important.

But multiple statements from Googlers indicate that the homepage may be the most important page of a website.

Home Page Importance

In the early days of SEO, the home page was considered the most important page of a website because links were obtained mainly from directories and reciprocal linking, which overwhelmingly resulted in home page links. That in turn made the home page the most powerful page.

But nowadays the most important pages for many (but not all) sites are generally the inner pages because people link to content (or build links to it). It’s a longtime link building trend to build links to important inner pages so that pages about XYZ have a better chance to rank for XYZ and so on.

How Important Is the Home Page To Google?

So it was a little surprising to see Gary Illyes emphatically assert that the homepage of a site is the most important page to Google.

The context is in a Search Off the Record podcast (linked at the end of this article) about debugging technical SEO issues with a website. This specific part was about how to figure out if dropped traffic is a technical or a quality issue.

It has happened on other sites that people quote a few sentences a Googler spoke, or two sentences from a 31-page patent, and make unfounded claims based on those out-of-context statements.

That’s why I’m showing the context, so that the statement makes sense in the way it was spoken.

Illyes says:

“First, you want to figure out whether the page is in Search or not. Because if the page is not in Search, then you already narrowed it down to two very specific things.”

Within that context of checking if Google can crawl the site he continues:

“Usually I start with the home page, because I can’t speak for other search engines, obviously, but from Google’s perspective, the homepage is the most important page on the site, and homepage is a little vague here, because it can be the page wherever users are landing on when they enter your domain name or host name.

Like if www.example.com redirects to www.example.com/foo/bar, then that will be your homepage.

Check that because we, as in Google, will try very hard to index that or crawl and index that.

If that’s not indexed, then you probably have some problems.”

That’s a fairly emphatic statement about how important the homepage is to Google.
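
Illyes’s aside about redirects is easy to check for your own site: request the bare domain and see where it ends up. A minimal Python sketch (example.com is a placeholder):

import requests

# Follow redirects from the domain root to find the effective homepage,
# i.e., the URL users (and Google) actually land on.
response = requests.get("https://www.example.com/", timeout=10)
print("Effective homepage:", response.url)
print("Status code:", response.status_code)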

Different Ways Homepage Is Important To Google

John Mueller has made statements about how Google uses the homepage as a starting point for crawling a website to find new pages and also for understanding how important pages are by observing how many clicks away a page is from the homepage.

In 2020, John Mueller said something similar in the context of answering a question about the importance of pages being linked from the home page.

Mueller explained:

“…on a lot of websites the home page is the most important part of the website. So we re-crawl that fairly often and from there we try to find new and updated pages or other important pages.

So what will happen is, we’ll see the home page is really important, things linked from the home page are generally pretty important as well.

And then… as it moves away from the home page we’ll think probably this is less critical.

That pages linked directly from the home page are important is fairly well known but it’s worth repeating. In a well organized website the major category pages and any other important pages are going to be linked from the home page.”

Mueller has been consistent about this point of view about the homepage.

We can even go as far back as 2018 where Mueller said something similar.

In a response to a question about whether it matters how many forward slashes there are in a URL, John Mueller mentioned that the homepage was “the strongest” page on a website.

That could be a reference to the homepage being the most linked to and therefore the “strongest” page of the site. But he didn’t elaborate on that point so it can’t be said for certain.

Mueller said:

“What does matter for us a little bit is how easy it is to actually find the content.

So especially if your homepage is generally the strongest page on your website, and from the homepage it takes multiple clicks to actually get to one of these stores, then that makes it a lot harder for us to understand that these stores are actually pretty important.

On the other hand, if it’s one click from the home page to one of these stores then that tells us that these stores are probably pretty relevant, and that probably we should be giving them a little bit of weight in the search results as well.

So it’s more a matter of how many links you have to click through to actually get to that content rather than what the URL structure itself looks like.”

How Important Is The Homepage To Google?

I’m not saying that the homepage is the most important page of a website to Google regardless of whether it’s the most linked-to page or not. It could be that this is a generality rather than an across-the-board fact.

But…

It is interesting that there are many statements from Googlers about how Google uses links from the homepage as the starting point for understanding how important inner pages are. Plus, there’s the recent statement from Gary Illyes in which he emphasizes that the homepage is the most important page of a site to Google.

Listen to the Search Off The Record podcast at the 5:50 minute mark and make up your own mind:

Featured Image by Shutterstock/fizkes

Tree Of Thoughts Prompting For Better Generative AI Results via @sejournal, @martinibuster

Many are aware of the popular Chain of Thoughts (CoT) method of prompting generative AI to obtain better and more sophisticated responses. Researchers from Google DeepMind and Princeton University developed an improved prompting strategy called Tree of Thoughts (ToT) that takes prompting to a higher level, unlocking more sophisticated reasoning methods and better outputs.

The researchers explain:

“We show how deliberate search in trees of thoughts (ToT) produces better results, and more importantly, interesting and promising new ways to use language models to solve problems requiring search or planning.”

Researchers Compare Against Three Kinds Of Prompting

The research paper compares ToT against three other prompting strategies.

1. Input-output (IO) Prompting
This is basically giving the language model a problem to solve and getting the answer.

An example based on text summarization is:

Input prompt: Summarize the following article.
Output: A summary based on the article that was provided.

2. Chain Of Thought Prompting

This form of prompting guides a language model to generate coherent and connected responses by encouraging it to follow a logical sequence of thoughts. Chain-of-Thought (CoT) prompting walks a language model through the intermediate reasoning steps needed to solve a problem.

Chain Of Thought Prompting Example:

Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Reasoning: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer: 11

Question: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?

(The second question is left for the model to answer, following the reasoning pattern demonstrated in the first example.)

3. Self-consistency with CoT

In simple terms, this is a strategy of prompting the language model multiple times and then choosing the most commonly arrived-at answer.

The research paper on Self-Consistency with CoT from March 2023 explains it:

“It first samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths. Self-consistency leverages the intuition that a complex reasoning problem typically admits multiple different ways of thinking leading to its unique correct answer.”
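
In code, the idea reduces to sampling several reasoning paths and taking a majority vote over the final answers. A minimal Python sketch, assuming a hypothetical sample_answer() helper that sends a Chain-of-Thought prompt to a language model with sampling enabled:

from collections import Counter

def self_consistent_answer(prompt, sample_answer, n_samples=10):
    # Sample several independent reasoning paths for the same prompt.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    # The most common final answer across the sampled paths wins.
    return Counter(answers).most_common(1)[0][0]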

Dual Process Models in Human Cognition

The researchers take inspiration from a theory of human decision-making called dual process models in human cognition, or dual process theory.

Dual process theory proposes that humans engage in two kinds of decision-making processes: one that is intuitive and fast, and another that is more deliberative and slower.

  • Fast, Automatic, Unconscious
    This mode involves fast, automatic, and unconscious thinking that’s often said to be based on intuition.
  • Slow, Deliberate, Conscious
    This mode of decision-making is a slow, deliberate, and conscious thinking process that involves careful consideration, analysis, and step by step reasoning before settling on a final decision.

The Tree of Thoughts (ToT) prompting framework arranges the reasoning process as a tree, allowing the language model to evaluate each reasoning step and decide whether that step is viable and likely to lead to an answer. If the language model decides that a reasoning path will not lead to an answer, the prompting strategy requires it to abandon that path (or branch) and continue along another branch until it reaches the final result. A simplified sketch of this search loop follows.
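
This is a highly simplified Python sketch of the breadth-first variant of that search, assuming hypothetical propose_thoughts() and score_thought() helpers that call a language model:

def tree_of_thoughts(problem, propose_thoughts, score_thought,
                     max_depth=3, beam_width=5):
    # Each candidate is a partial chain of "thoughts" (intermediate steps).
    frontier = [[]]
    for _ in range(max_depth):
        candidates = []
        for path in frontier:
            # propose_thoughts() asks the language model for next-step thoughts.
            for thought in propose_thoughts(problem, path):
                candidates.append(path + [thought])
        if not candidates:
            break
        # score_thought() asks the model to judge whether a path looks viable;
        # low-scoring branches are abandoned (the pruning/backtracking in ToT).
        candidates.sort(key=lambda p: score_thought(problem, p), reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0] if frontier else []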

Tree Of Thoughts (ToT) Versus Chain of Thoughts (CoT)

The difference between ToT and CoT is that ToT uses a tree-and-branch framework for the reasoning process, whereas CoT takes a more linear path.

In simple terms, CoT tells the language model to follow a series of steps in order to accomplish a task, which resembles the system 1 cognitive model that is fast and automatic.

ToT resembles the system 2 cognitive model, which is more deliberative. It tells the language model to follow a series of steps, but also to have an evaluator review each step: if it’s a good step, keep going; if not, stop and follow another path.

Illustrations Of Prompting Strategies

The research paper includes schematic illustrations of each prompting strategy, with rectangular boxes representing a “thought” – each step toward completing the task or solving the problem.

The following is a screenshot of what the reasoning process for ToT looks like:

[Screenshot from the research paper: schematic of the Tree of Thoughts reasoning process]

Illustration of Chain of Thought Prompting

This is the schematic illustration for CoT, showing how the thought process is more of a straight path (linear):

[Screenshot from the research paper: schematic of the Chain of Thought reasoning process]

The research paper explains:

“Research on human problem-solving suggests that people search through a combinatorial problem space – a tree where the nodes represent partial solutions, and the branches correspond to operators that modify them. Which branch to take is determined by heuristics that help to navigate the problem-space and guide the problem-solver towards a solution.

This perspective highlights two key shortcomings of existing approaches that use LMs to solve general problems:

1) Locally, they do not explore different continuations within a thought process – the branches of the tree.

2) Globally, they do not incorporate any type of planning, lookahead, or backtracking to help evaluate these different options – the kind of heuristic-guided search that seems characteristic of human problem-solving.

To address these shortcomings, we introduce Tree of Thoughts (ToT), a paradigm that allows LMs to explore multiple reasoning paths over thoughts…”

Tested With A Mathematical Game

The researchers tested the method using the Game of 24, a mathematical card game in which players combine four numbers (each used exactly once) with basic arithmetic (addition, subtraction, multiplication, and division) to reach a result of 24. For example, given the numbers 4, 9, 10, and 13, one solution is (13 − 9) × (10 − 4) = 24.
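
To make the game concrete, here is a minimal brute-force Python sketch. It only tries left-to-right chains of operations (a simplification of the full game, which allows any grouping), and the numbers are just an example:

from itertools import permutations, product
from operator import add, sub, mul, truediv

def solves_24(numbers, target=24):
    # Try every ordering of the four numbers and every combination of the
    # four basic operations, applied left to right.
    ops = [add, sub, mul, truediv]
    for nums in permutations(numbers):
        for op1, op2, op3 in product(ops, repeat=3):
            try:
                result = op3(op2(op1(nums[0], nums[1]), nums[2]), nums[3])
            except ZeroDivisionError:
                continue
            if abs(result - target) < 1e-9:
                return True
    return False

print(solves_24([4, 9, 10, 13]))  # True: e.g. (9 + 10 - 13) * 4 = 24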

Results and Conclusions

The researchers tested the ToT prompting strategy against the three other approaches and found that it produced consistently better results.

However, they also note that ToT may not be necessary for tasks that GPT-4 already performs well.

They conclude:

“The associative “System 1” of LMs can be beneficially augmented by a “System 2” based on searching a tree of possible paths to the solution to a problem.

The Tree of Thoughts framework provides a way to translate classical insights about problem-solving into actionable methods for contemporary LMs.

At the same time, LMs address a weakness of these classical methods, providing a way to solve complex problems that are not easily formalized, such as creative writing.

We see this intersection of LMs with classical approaches to AI as an exciting direction.”

Read the original research paper:

Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Beyond SEO: John Mueller On AI-Generated Images & Stock Photography via @sejournal, @kristileilani

John Mueller, Google Search Advocate, recently shared his “shower” thoughts on the use of AI-generated images on websites versus stock photography.

His discussion opened up an intriguing debate on how users perceive images created with generative AI tools like DALL·E, especially in contexts that are not primarily focused on art or AI.

The post included a disclaimer that it is not meant to serve as SEO advice or as foreshadowing of an upcoming Google Search update.

AI-Generated Images Vs. Stock Photography

Mueller begins by distinguishing between situations where a specific photograph is necessary and those where imagery serves as mere decoration.

He argues that in some circumstances, such as showing a suitcase a website aims to sell, authentic photographs are essential.

While real photos might undergo digital enhancements or editing, the foundation of product photography must be rooted in reality to provide consumers with an accurate representation of a future investment.

On the other hand, Mueller points out that for general content embellishment, there is little difference between using stock photography and AI-generated images.

Both types of imagery can enhance the aesthetic appeal of a website, making the content more engaging and enjoyable for the reader.

This distinction underlines that the decision to use real photos versus AI-generated images largely depends on the specific needs and goals of the website content.

The Value Of Images For User Experience

Mueller also touches upon the relevance of the subject matter of the website.

He suggests that for certain topics, audiences expect real images, while for others, the distinction between real and AI-generated images may be negligible.

This expectation ties into search engine optimization (SEO), as Mueller hypothesizes that users are more inclined to search visually for topics where real images are valued.

Further, Mueller offers practical advice for website owners considering the use of AI-generated images.

He encourages them to reflect on whether they would typically use stock photography in the same context. This approach can help in making an informed decision about the appropriateness of AI-generated images for their website.

Quality Standards Of AI-Generated Images

Mueller also cautions about the ease and temptation of using AI-generated images as a time and cost-saving measure.

He notes that taking a quick photo with a phone could be considered as creating ‘stock photography,’ but this might not meet the professional standards expected on a business website.

He emphasizes that quality and professionalism often require time and experience.

AI Images, AR Models, And Consumer Trust

Throughout the comments, Mueller answered questions about images, AI, and SEO. Here are some of the best responses.

Should you add rel=nofollow for an image credit link?

“Links are fine. No need to use rel=nofollow if they’re normal links.”

AR For 3D Modeling

Mueller expressed a desire for augmented reality (AR) support in online product displays, emphasizing the value of using 3D models.

“Seeing a photo is a good start, trying it out in my own space is so much better.”

He also differentiated between 3D-rendered images based on actual building plans and fully AI-generated images, likening the latter to decorative blog post imagery.

Decorative Images & Real Product Photography

Regarding conceptual illustrations, Mueller noted that decorative images indicated the level of effort put into the content, enhancing user trust.

However, he criticizes the use of AI images for product photos, comparing it to low-quality imported product sites where photoshopped images often lead to unrealistic representations.

“…if you have the product, why not get real photos, and if you don’t have the product, you wouldn’t be able to confirm that the image is ok.”

AI-Generated Images As ‘Low-Effort’ Content

Given that creative visualizations and real product photos are seen as indicators of high-quality content, it’s no surprise that some uses of AI-generated images could be viewed as the opposite.

Mueller also offered another perspective: if real images represent an original source of content, AI images could represent scraped content.

“If I noticed a recipe site were using AI-generated images, I’d assume all of the content is scraped spam and just go somewhere else.”

AI Content Decreases Consumer Trust

When visitors discover content has been “faked,” it could harm their trust in anything else on the website. Mueller suggested that even an obvious “team” stock photo was less deceptive than one created by AI.

He acknowledged the value of good stock photography over a unique smartphone photo and how the latter did not equal professional-quality content.

But he is also aware that the lines are blurred more now that companies like Getty and Shutterstock have launched AI tools trained on licensed stock photography.

Conclusion

The discussion on Mueller’s LinkedIn post is particularly relevant, highlighting the evolving role of AI tools in content creation and its impact on user experience and SEO.

As marketers continually adapt to new technologies, understanding these nuances is crucial for effective digital marketing strategies. It prompts us to consider the authenticity of our visual content and its alignment with our audience’s expectations.

It’s essential to strike a balance between authenticity, professionalism, and the practical benefits of AI-generated images, keeping in mind the nature of the content and audience expectations.

Featured image: Thongden Studio/Shutterstock

John Mueller On Keyword Domain Names And Branding For SEO via @sejournal, @kristileilani

In a recent discussion on Reddit, a user inquired about the advantages of using keyword domain names for SEO to improve visibility for a specific phrase.

The user shared an example of utilizing a domain like “swimsuits.com” for selling swimsuits, a keyword with over 10,000 monthly searches in their country.

John Mueller, Google’s Search Advocate, offered a response that should help marketers decide on the best approach for SEO.

Are Keywords In Domain Names A Ranking Factor?

The big question: do relevant keywords in a domain name offer additional advantages in Google Search?

Mueller’s response was direct and insightful. He advised against relying on keywords in domain names for long-term SEO strategy.

“A keyword domain name is not going to give you any recognizable SEO advantage on Google.”

Screenshot from r/SEO, December 2023

The introductory qualifier in Mueller’s response – “If you’re planning on using this [domain] for the long run” – suggests the advice does not apply to all website owners and that there could be short-term value in keyword-based domain names.

Omnichannel Brand Recognition & Versatility

For those with long-term marketing goals, domain names focused on a specific keyword or phrase could become a hindrance.

According to Mueller, such domain names can also be restrictive for targeting other keywords, making it difficult to diversify product or service offerings on the same website.

Mueller’s advice sheds light on the evolving nature of omnichannel marketing. It highlights the importance of building a strong, recognizable brand across search and other platforms to connect with customers.

Conclusion

Mueller’s insights are valuable for marketing and SEO professionals polishing up their strategy for 2024. They encourage a shift in focus from relying on keywords to developing a comprehensive, brand-focused online presence.

This strategy aligns with the current SEO trends, where brand recognition and quality content play a significant role in achieving SEO success.

Featured image: Linaimages/Shutterstock

Google On Knowing Best SEO Was Applied via @sejournal, @martinibuster

In a recent Google office hours recording, Google’s John Mueller answered a question about how one knows if their SEO coverage is complete and if there are any tools that can spot something that they might have missed.

The question, the way it was originally asked, sounds somewhat strange, and that may be because of a language translation issue. The person asking the question has an Indian name, so it’s quite possible that the oddness of the question (how do I know my SEO is perfect?) was due to language.

The question makes more sense if it’s understood as asking how one knows that their SEO is optimal and that nothing was left undone. That’s a legitimate question, and it fits the idea of SEO that is free from flaws, given that the definition of “perfect” is being as free as possible from flaws.

Here’s the question and answer as read by John Mueller:

“Charan asks: How to know if my SEO is perfect? Are there any tools, apps or websites available for it?”

What the person asking the question appears to be asking is if there are any standards or best practices to measure their SEO efforts against.

That’s why they also asked if there are any tools or websites that could offer support in that direction.

Mueller answered the question as asked, without taking into account that the person asking the question may have had a different intent apart from the literal meaning of the question.

Mueller Offers “Disappointing” Answer

John Mueller offered what he self-described as a disappointing answer.

Given that there may have been a language issue, answering the literal question of “what is perfect SEO” perhaps wasn’t the most nuanced approach.

Here’s Mueller’s response:

“Sorry to disappoint, Charan, but your SEO is not perfect. In fact, no SEO is perfect.

The Internet, search engines, and how users search is always changing, so SEO will evolve over time as well.

This includes both technical elements, like structured data, as well as considerations around quality. Just because you can’t do perfect SEO shouldn’t discourage you though!”

A Less Disappointing Answer

Mueller’s right that search engines and user queries are constantly evolving.

SEOs are focused on keeping up with Google but keeping up with users is not a bad idea, given that’s what Google is doing.

Nevertheless, there are standard best practices and it’s good to know what they are.

So, another way to answer that question is to think in terms of what Google recommends as a best practice.

Google makes that easy because it offers an SEO starter guide that breaks down the process of optimizing SEO into a series of steps that can become the basis of an SEO checklist to measure your SEO against.

Here is a selection of the SEO topics covered:

  • Create a sitemap
  • Can Google access appropriate JS and CSS to render the website?
  • Are titles unique and precise?
  • Are meta descriptions relevant to the content?
  • Do heading elements accurately summarize blocks of content on the webpage?
  • Is there structured data and is it used appropriately?
  • Does the site use a hierarchical site structure?
  • Are URLs simple, and do they convey useful, user-friendly information?
  • Is the content useful, unique and interesting?

There’s a lot on that one page that can form the basis of an SEO plan to check whether your efforts have covered the fundamentals of SEO.
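
A few of those checklist items can be spot-checked programmatically. Here is a minimal sketch using Python with the requests and BeautifulSoup libraries (the URL is a placeholder, and this only covers titles, meta descriptions, and headings):

import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-page"  # placeholder
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Is there a precise, unique title element?
title = soup.find("title")
print("Title:", title.get_text(strip=True) if title else "MISSING")

# Is there a meta description relevant to the content?
description = soup.find("meta", attrs={"name": "description"})
print("Meta description:", description.get("content", "") if description else "MISSING")

# Do the heading elements summarize the blocks of content?
for heading in soup.find_all(["h1", "h2", "h3"]):
    print(heading.name.upper(), "-", heading.get_text(strip=True))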

Suggested reading:

Why Google Recommends Hierarchical Site Structure For SEO

Six SEO Concepts You Need To Know

A Complete Local SEO Checklist

A 10-Point Ecommerce SEO Checklist for Marketers

How to Do an SEO Audit: The Ultimate Checklist

The Complete Guide to On-Page SEO

What E-E-A-T Really Means For SEO

Listen to the Google Office Hours recording at the 19:12 minute mark:

Featured Image by Shutterstock/Vectorium

Google Answers How It Handles A Specific Non-Standard Meta Tag via @sejournal, @martinibuster

Google’s Martin Splitt answered a question about how Googlebot responds to a prerender meta tag that has a value of 404, page not found. It’s a good question because this kind of meta tag, a non-standard meta element, isn’t often encountered, so it’s good to know what to do when something like it comes up.

The person asking the question wanted to know how Google might respond to a meta tag in the head section that has the name “prerender-status-code” and a value of “404,” which means that the requested page was not found.

The question was asked by a person named Martin and Martin Splitt of Google is the one who answered it.

This is the question:

“Martin is asking: What does Googlebot do when it finds <meta name="prerender-status-code" content="404">?”

Martin Splitt answered:

“Well Martin, that’s easy to say, Googlebot currently ignores that status code.

“I guess this is coming from a single page application that is client-side rendered and you want to avoid soft-404s, in that case consider adding <meta name="robots" content="noindex"> or redirect to a page where the server responds with the 404 status code.

For more information on that see our documentation at developers.google.com/search.”

What is Prerender-Status-Code?

The prerender-status-code meta element (sometimes referred to as a meta tag) is not an official meta tag, and there is no documentation on it at the World Wide Web Consortium (W3C.org), where the official HTML standards are created.

This is more of a proprietary or non-standard meta element. Non-standard meta elements are not part of the official W3C HTML specifications. Some are browser-specific or created for specific purposes. Consequently, they may not be supported by search engines, and their behavior may not be consistent across different browsers.

The prerender-status-code meta element is an example of a non-standard meta element that also happens to not be supported by Google.

Another non-standard meta element that is not supported by Google is the meta keywords element. There is no reference to it at W3C.org, and it was never part of the official HTML standards. It was a meta element invented by search engines in the 1990s.

The X-UA-Compatible meta element is an example of a browser-specific, non-standard meta element. It is outdated and was specific to the old Internet Explorer web browser.

This is an example of the X-UA-Compatible meta element:

<meta http-equiv="X-UA-Compatible" content="IE=edge">

The takeaway from Martin’s answer about the prerender-status-code meta element is that many non-standard meta elements are not supported by Google.

Another takeaway is that not every meta tag is a part of the official HTML standards which can be found at the World Wide Web Consortium website (W3C.org). Those non-official meta elements are called non-standard meta elements.

More information can be found at Google’s support page about supported meta tags, which was last updated on December 1, 2023.

Meta tags and attributes that Google supports

Listen to the Google Office hours video at the 3:46 minute mark:

Featured Image by Shutterstock/Jaaak

Google On How Subdomain Caused Perception Of Indexing Issue via @sejournal, @martinibuster

In a Google Office Hours Q&A, Google’s John Mueller answered a question about a site not getting indexed. His answer brought to light a technical SEO fact about domains and subdomains that not everyone may be aware of, and how it can create the perception that a site is having trouble getting indexed.

The person asking the question created a website using a framework (Core MVC) and then published it under the secure HTTPS protocol.

However, they soon discovered that they were experiencing problems getting the content indexed.

Given that the known variables of the scenario were the framework used to build the site and the HTTPS protocol, the person asking the question mentioned those two factors in their question.

Could it be that the web application framework used is somehow interfering with the indexing?

Or is there something about the HTTPS configuration that’s getting in the way of indexing?

The question asked:

“I built a new site on Core MVC and moved it to HTTPS and I have problems with indexing the new pages.”

Google’s John Mueller took a look at the site and discovered that the MVC framework and HTTPS had nothing to do with the perceived indexing issues.

Mueller answered:

“…I took a look at your site and how it’s indexed.

It looks like your site is indexed without the www subdomain, so if you explicitly look for the www version of your site, you won’t find much.

If you look for the domain alone, such as site:domain.com, then you’ll find it indexed.”

There’s Always More Than What Is Seen

Sometimes the answer to a problem is hidden by the most obvious possible causes of the issue, which limits the ability to identify the true solution. The best path for solving an SEO problem is to never stop the search for an answer at any of the obvious issues.

John Mueller essentially redefined the problem from “the site is not indexed” to “the www version of the site is not indexed, so let’s check the non-www version.”

WWW Is A Subdomain

The solution is also a reminder of an important technical fact about www and non-www versions of a website: the www version is a subdomain of the non-www (root) domain.
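
A quick way to see which hostname a site actually consolidates on is to request both versions and compare where they end up. A minimal Python sketch (example.com is a placeholder):

import requests

# Compare where the non-www and www hostnames end up after redirects.
for start_url in ("https://example.com/", "https://www.example.com/"):
    response = requests.get(start_url, timeout=10)
    print(f"{start_url} -> {response.url} ({response.status_code})")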

Listen to the Google Office Hours at the 2:51 minute mark of the recording:

Why Is A Site Not Indexing?

Featured Image by Shutterstock/Luis Molinero

How To Increase Localized Traffic To EU Domain Hosted In US via @sejournal, @kristileilani

In a post on the r/TechSEO subreddit, Google’s Search Advocate, John Mueller, responded to a Reddit user asking how to increase localized traffic to a European Union (EU) domain hosted in the United States (US).

The user’s client, who owns a .com and a .eu subdomain, hopes to increase targeted traffic to the latter. However, the user is concerned that the site’s server location could reduce the domain’s visibility in international search results.

Here are the five things Mueller suggested the user should focus on – or safely ignore – to increase localized traffic from the EU.

1. Utilize Hreflang Tags

Mueller’s first recommendation is the use of hreflang tags. These tags are instrumental in directing users from various European countries to the EU subdomain, making the site more accessible and relevant to the European audience.

This approach is crucial for a site targeting multiple regions with potentially overlapping or similar content. He emphasizes that hreflang should connect major European countries to the EU domain, and the rest of the visitors would default to the .com.
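
As an illustration of the hreflang setup Mueller describes, here is a minimal Python sketch (the hostnames and locale mapping are hypothetical) that generates the alternate link elements each page version would carry:

# Hypothetical mapping of locales to the page version that should serve them.
alternates = {
    "en-US": "https://www.example.com/widgets/",
    "en-GB": "https://www.example.eu/widgets/",
    "de-DE": "https://www.example.eu/widgets/",
    "fr-FR": "https://www.example.eu/widgets/",
    "x-default": "https://www.example.com/widgets/",
}

# Every version of the page should carry the full set of hreflang annotations.
for locale, url in alternates.items():
    print(f'<link rel="alternate" hreflang="{locale}" href="{url}" />')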

2. Server Location Isn’t A Factor

Secondly, Mueller downplays the importance of server location.

Contrary to the common belief that server proximity to the target audience enhances performance, he suggests that server location is less pivotal, thus offering more flexibility in server hosting decisions.

3. Canonical Tags Can Prevent Content Duplication

The third point addresses the issue of content duplication, particularly when the same language is used across multiple domains.

Mueller advises using canonical tags carefully in such scenarios to avoid Google interpreting the content as duplicate. Alternatively, slight variations in content across these domains can help distinguish them.

4. Support Local Currency With Google Shopping Feeds

Fourthly, Mueller recommends leveraging Google Shopping feeds.

This approach involves optimizing product listings for visibility in Google’s Shopping search results, an effective tool for reaching a broader European audience and enhancing e-commerce performance.

5. Focus On The Homepage And High-Level Pages

Lastly, Mueller suggests the best results can be achieved by focusing on the homepage and other high-level pages.

This strategy implies that a comprehensive site-wide overhaul may not be necessary; instead, prioritizing key pages can lead to substantial improvements in traffic with efficient resource allocation.

Conclusion

This insight is crucial for marketing and SEO professionals aiming to expand their reach within the EU market.

By implementing Mueller’s strategies, businesses can enhance their website’s visibility and relevance in European search results, thereby driving targeted traffic and potentially boosting conversions.

These tactics align with the latest SEO best practices and offer practical solutions for multinational digital marketing.

Featured image: WDnet Creation/Shutterstock