Google’s Now Translating SERPs Into More Languages via @sejournal, @martinibuster

Google updated its documentation to reflect that eight new languages have been added to the translated results feature, broadening publishers’ reach with automatic translations into a site visitor’s native language.

Google Translated Results

Translated Results is a Google Search feature that automatically translates the title link and meta description into the local language of a user, making a website published in one language available to searchers in another language. If the searcher clicks on the link of a translated result, the web page itself will also be automatically translated.

According to Google’s documentation for this feature:

“Google doesn’t host any translated pages. Opening a page through a translated result is no different than opening the original search result through Google Translate or using Chrome in-browser translation. This means that JavaScript on the page is usually supported, as well as embedded images and other page features.”

This feature benefits publishers because it makes their website available to a larger audience.

Search Feature Available In More Languages

Google’s documentation for this feature was updated to reflect that it is now available in eight more languages.

Users who speak the following languages will now have automatic access to a broader range of websites.

List Of Added Languages

  • Arabic
  • Gujarati
  • Korean
  • Persian
  • Thai
  • Turkish
  • Urdu
  • Vietnamese

Why Did It Take So Long?

It seems odd that Google didn’t already translate results into major languages like Turkish, Arabic, or Korean. So I asked international SEO expert Christopher Shin (LinkedIn profile) why it might have taken so long for Google to do this for the Korean language.

Christopher shared:

“Google was always facing difficulties in the South Korean market as a search engine, and that has to do mainly with Naver and Kakao, formerly known as Daum.

But the whole paradigm shift to Google began when more and more students that went abroad to where Google is the dominant search engine came back to South Korea. When more and more students, travelers abroad etc., returned to Korea, they started to realize the strengths and weaknesses of the local search portals and the information capabilities these local portals provided. Laterally, more and more businesses in South Korea like Samsung, Hyundai etc., started to also shift marketing and sales to global markets, so the importance of Google as a tool for companies was also becoming more important with the domestic population.

Naver is still the dominant search portal, but not to retrieve answers to specific queries, rather for the purpose of shopping, reviews etc.

So I believe that market prioritization may be a big part as to the delayed introduction of Translated Google Search Results. And in terms of numbers, Korea is smaller with only roughly 52M nationwide and continues to decline due to poor birth rates.

Another big factor as I see it, has to do with the complexity of the Korean language which would make it more challenging to build out a translation tool that only replicates a simple English version. We use the modern Korean Hangeul but also the country uses Hanja, which are words from the Chinese origin. I used to have my team use Google Translate until all of them complained that Naver’s Papago does a better job, but with the introduction of ChatGPT, the competitiveness offered by Google was slim.”

Takeaway

It’s no exaggeration to say that 2024 has not been a good year for publishers. From the introduction of AI Overviews to the 2024 core algorithm update and missing image thumbnails on recipe blogger sites, there hasn’t been much good news coming out of Google. This news is different, though, because it creates the opportunity for publisher content to be shown in more languages than ever.

Read the updated documentation here:

Translated results in Google Search

Featured Image by Shutterstock/baranq

Google’s Response To Experts Outranked By Redditors via @sejournal, @martinibuster

An SEO asked on LinkedIn why an anonymous user on Reddit could outrank a credible website with a named author. Google’s answer gives a peek at what’s going on with search rankings and why Reddit can outrank expert articles.

Why Do Anonymous Redditors Outrank Experts?

The person asking the question wanted to know why an anonymous author on Reddit can outrank an actual author who has “credibility,” such as on a brand-name site like PCMag.

The person wrote:

“I was referring to how important the credibility of the writer is now. If we search for ‘best product under X amount,’ we see, let’s say, PCMag and Reddit both on the first page.

PCMag is a reliable source for that product, while Reddit has UGC and surely doesn’t guarantee authenticity. Where do you see this in terms of credibility?

In my opinion, Google must be focusing more on this, especially after the AI boom, where misinformation can be easily and massively spread.

Do you think this is an important factor in rankings anymore?”

This is the part of the question that points out what the SEO feels is wrong with Google’s search results:

“As we can see, Reddit, popular for anonymous use, ranks much higher than many other websites.

This means that content from anonymous users is acceptable.

Can I conclude that a blog without any ‘about’ page or ‘author profile’ can also perform as well?”

Relevance And Usefulness Versus Credibility

Google’s John Mueller answered the question by pointing out that there are multiple kinds of websites, not just sites perceived to be credible on one side and everyone else on the other. Credibility is one dimension of what a site can be, one quality among many. Mueller’s answer was a reminder that search (and SEO) is multidimensional.

Google’s John Mueller answered:

“Both are websites, but also, they’re quite different, right? Finding the right tools for your needs, the right platforms for your audience and for your messages – it’s worth looking at more than just a simplification like that. Google aims to provide search results that are relevant & useful for users, and there’s a lot involved in that.

I feel this might fit, perhaps you have seen it before -“

Does Reddit Lack Credibility?

In my opinion, Reddit users lack credibility in some contexts. When it comes to recipes, I’ll take the opinion of a recipe blogger or Serious Eats over what a random Redditor “thinks” a recipe should be.

The person asking the question cited product reviews as a topic where Reddit lacks credibility, but ironically that’s a topic where Reddit shines. A person on Reddit sharing their hands-on experience with a brand of air fryer or mobile phone is the epitome of what Google is trying to rank for reviews, because it’s the opinion of someone with days, weeks, months, or years of actual experience with the product.

Saying that UGC product reviews are useful doesn’t invalidate professional product reviews. It’s possible that both UGC and professional reviews have value, right? And I think that’s the point John Mueller was trying to get across about not simplifying search to a single ranking criterion, one dimension.

This is a dimension of search that the person asking the question overlooked: the hands-on experience of the reviewer. It illustrates what Mueller means when he says “it’s worth looking at more than just a simplification” of what’s ranking in the search results.

OTOH… Feels Like A Slap In The Face

There are many high-quality sites with original photos, actual reviews, and content based on real experience that are no longer ranking in the search results. I know because I have seen many sites that, in my opinion, should be ranking but are not. Googlers have acknowledged the possibility that a future update will help more quality sites bounce back, and many expert publishers are counting on that.

Nevertheless, it surely feels like a slap in the face for an expert author to see an anonymous Redditor outranking them in Google’s search results.

Multidimensional Approach To SEO

A common issue I see in how some digital marketers and bloggers debug the search engine results pages (SERPs) is that they view them through only one, two, or three dimensions, such as:

  • Keywords
  • Expertise
  • Credibility
  • Links

Reviewing the SERPs to understand why Google is ranking something is a good idea. But reviewing them with just a handful of dimensions, a limited set of “signals,” can be frustrating and counterproductive.

It was only a few years ago that SEOs convinced themselves that “author signals” were a critical part of ranking, and now almost everyone (finally) understands that this was a misinterpretation of what Google and Googlers said (despite Googlers consistently denying that authorship was a ranking signal).

The “authorship” SEO trend is an example of a one-dimensional approach to SEO that overlooked the multidimensional nature of how Google ranks web pages.

There are thousands of contexts that contribute to what is ranked, like solving a problem from the user perspective, interpreting user needs, adapting to cultural and language nuances, nationwide trends, local trends, and so on. There are also ranking contexts (dimensions) that are related to Google’s Core Topicality Systems which are used to understand search queries and web pages.

Ranking web pages, from Google’s perspective, is a multidimensional problem. What that means is that reducing a search ranking problem to one dimension, like the anonymity of User Generated Content, inevitably leads to frustration. Broadening the perspective leads to better SEO.

Read the discussion on LinkedIn:

Can I conclude that a blog without any ‘about’ page or ‘author profile’ can also perform as well?

Featured Image by Shutterstock/Master1305

Google Says These Are Not Good Signals via @sejournal, @martinibuster

Google’s Gary Illyes’ answer to a question about authorship shared insights into why Google has less trust in signals that are under the direct control of site owners and SEOs, and it provides a better understanding of what site owners and SEOs should focus on when optimizing a website.

The question that Illyes answered was in the context of a live interview at a search conference in May 2024. The interview went largely unnoticed but it’s full of great information related to digital marketing and how Google ranks web pages.

Authorship Signals

Someone asked whether Google would bring back authorship signals. Authorship has been a fixation for some SEOs, based on Google’s encouragement that SEOs and site owners review the Search Quality Raters Guidelines to understand what Google aspires to rank. Some SEOs, however, took that encouragement too literally and began parsing the document for ranking signal ideas instead.

Digital marketers came to see the concept of EEAT (Experience, Expertise, Authoritativeness, and Trustworthiness) as actual signals that Google’s algorithms look for, and from there came the idea that authorship signals were important for ranking.

The idea of authorship signals is not far-fetched, because Google at one time created a way for site owners and SEOs to pass along metadata about webpage authorship, but Google eventually abandoned that idea.

SEO-Controlled Markup Is Untrustworthy

Google’s Gary Illyes answered the question about authorship signals and, within the same sentence, shared that in Google’s experience, markup on the web page that SEOs control tends to become spammy (implying that it’s untrustworthy).

This is the question as relayed by the interviewer:

“Are Google planning to release some authorship sooner or later, something that goes back to that old authorship?”

Google’s Gary Illyes answered:

“Uhm… I don’t know of such plans and honestly I’m not very excited about anything along those lines, especially not one that is similar to what we had back in 2011 to 2013 because pretty much any markup that SEOs and site owners have access to will be in some form spam.”

Gary next went into greater detail, saying that SEO- and author-controlled markup are not good signals.

Here is how he explained it:

“And generally they are not good signals. That’s why rel-canonical, for example is not a directive but a hint. And that’s why Meta description is not a directive, but something that we might consider and so on.

Having something similar for authorship, I think would be a mistake.”

The concept of SEO-controlled data not being a good signal is important to understand because many in search marketing believe that they can manipulate Google by spoofing authorship signals with fake author profiles, with reviews that pretend to be hands-on, and with metadata (like titles and meta descriptions) that is specifically crafted to rank for keywords.

What About Algorithmically Determined Authorship?

Gary then turned to the idea of algorithmically determined authorship signals, and it may surprise some that he described those signals as lacking in value. This may come as a blow to SEOs and site owners who have spent significant amounts of time updating their web pages to improve their authorship data.

The concept of “authorship signals” being important for ranking is something some SEOs created all by themselves; it’s not an idea Google encouraged. In fact, Googlers like John Mueller and SearchLiaison have consistently downplayed the necessity of author profiles for years.

Gary explained about algorithmically determined authorship signals:

“Having something similar for authorship, I think would be a mistake. If it’s algorithmically determined, then perhaps it would be more accurate or could be higher accuracy, but honestly I don’t necessarily see the value in it.”

The interviewer commented about rel-canonicals sometimes being a poor source of information:

“I’ve seen canonical done badly a lot of times myself, so I’m glad to hear that it is only a suggestion rather than a rule.”

Gary’s response to the observation about poor canonicals is interesting because he doesn’t downplay the importance of “suggestions” but implies that some of them are stronger although still falling short of a directive. A directive is something that Google is obligated to obey, like a noindex meta tag.

Gary explained about rel-canonicals being a strong suggestion:

“I mean it’s it’s a strong suggestion, but still it’s a suggestion.”

Gary affirmed that even though rel=canonical is a suggestion, it’s a strong suggestion. That implies a relative scale of how much Google trusts certain inputs from publishers. In the case of a canonical, Google’s stronger trust in rel=canonical is probably a reflection of the fact that it’s in a publisher’s best interest to get it right, whereas other data, like authorship, could be prone to exaggeration or outright deception and is therefore less trustworthy.

What Does It All Mean?

Gary’s comments should provide a foundation for setting the right course on what to focus on when optimizing a web page. Gary (and other Googlers) have said multiple times that authorship is not really something Google is looking for. That’s something SEOs invented, not something Google encouraged.

This also provides guidance on not overestimating the importance of metadata that is controlled by a site owner or SEO.

Watch the interview starting at about the two minute mark:

Featured Image by Shutterstock/Asier Romero

Google Search Now Supports Labeling AI Generated Or Manipulated Images via @sejournal, @martinibuster

Google Search Central updated its documentation to reflect support for labeling images that were extended or manipulated with AI. Google also quietly removed the Beta status from the “AI generated” metadata, indicating that the “AI Generated” label is now fully supported in search.

IPTC Photo Metadata

The International Press Telecommunications Council (IPTC) is a standards-making body that, among other things, creates standards for photo metadata. Photo metadata enables a photograph to be labeled with information such as copyright, licensing, and image descriptions.

Although the standard is curated by an international press organization, the metadata standards it maintains are used by Google Images in contexts outside of Google News. The metadata allows Google Images to show additional information about an image.

Google’s documentation explains the use case and benefit of the metadata:

“When you specify image metadata, Google Images can show more details about the image, such as who the creator is, how people can use an image, and credit information. For example, providing licensing information can make the image eligible for the Licensable badge, which provides a link to the license and more detail on how someone can use the image.”

AI Image Manipulation Metadata

Google quietly adopted the metadata standard for images manipulated with the kinds of AI models typically used for image editing, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs).

There are two forms of AI image manipulation that are covered by the new metadata:

  • Inpainting
  • Outpainting

Inpainting

Inpainting is generally understood as enhancing an image in order to restore or reconstruct it, filling in missing parts. But inpainting also covers any algorithmic manipulation that adds content to an image.

Outpainting

Outpainting is the algorithmic process of extending an image beyond the borders of the original photograph, adding content that wasn’t in the original image.

Google now supports labeling images manipulated in both of those ways with a new Digital Source Type metadata value called compositeWithTrainedAlgorithmicMedia.

compositeWithTrainedAlgorithmicMedia

While the new property looks like structured data, it’s not Schema structured data. It’s metadata that’s embedded in a digital image.
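
For illustration, here’s a minimal sketch of how that metadata might be embedded, using the exiftool command-line utility called from Python. The tag group (XMP-iptcExt:DigitalSourceType), the IPTC NewsCodes URI value, and the file name are assumptions drawn from the IPTC Photo Metadata Standard rather than from Google’s documentation, so verify them against the current IPTC and ExifTool references before relying on them.

import subprocess

# IPTC NewsCodes URI for the "composite with trained algorithmic media"
# digital source type (assumed value; check the IPTC controlled vocabulary).
SOURCE_TYPE_URI = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/"
    "compositeWithTrainedAlgorithmicMedia"
)

def label_composite_image(path: str) -> None:
    """Embed the digital source type in an image's XMP metadata via exiftool."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={SOURCE_TYPE_URI}",
            "-overwrite_original",
            path,
        ],
        check=True,
    )

label_composite_image("hero-image.jpg")  # hypothetical file name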

This is what was added to Google’s documentation:

“Digital Source Type

compositeWithTrainedAlgorithmicMedia: The image is a composite of trained algorithmic media with some other media, such as with inpainting or outpainting operations.”

Label For “AI Generated” – algorithmicMedia Metadata

Google also lifted the Beta status of the algorithmicMedia metadata specification, which means images created with AI can now be shown with the AI-generated label if the algorithmicMedia metadata is embedded in the image.

This is the documentation before the change:

“algorithmicMedia: The image was created purely by an algorithm not based on any sampled training data (for example, an image created by software using a mathematical formula).

Beta: Currently, this property is in beta and only available for IPTC photo metadata. Adding this property makes your image eligible for display with an AI-generated label, but you may not see the label in Google Images right away, as we’re still actively developing it.”

The change to the documentation removed the second paragraph in its entirety, eliminating any mention of Beta status. Curiously, this change is not reflected in Google’s changelog.

Google’s Search Central documentation changelog noted:

“Supporting a new IPTC digital source type
What: Added compositeWithTrainedAlgorithmicMedia to the IPTC photo metadata documentation.

Why: Google can now extract the compositeWithTrainedAlgorithmicMedia IPTC NewsCode.”

Read Google’s updated documentation:

Image metadata in Google Images

Featured Image by Shutterstock/Roman Samborskyi

Google Struggles To Boost Search Traffic On Its iPhone Apps via @sejournal, @MattGSouthern

According to a report by The Information, Google is working to reduce its reliance on Apple’s Safari browser, but progress has been slower than anticipated.

As Google awaits a ruling on the U.S. Department of Justice’s antitrust lawsuit, its arrangement with Apple is threatened.

The current agreement, which makes Google the default search engine on Safari for iPhones, could be in jeopardy if the judge rules against Google.

To mitigate this risk, Google encourages iPhone users to switch to its Google Search or Chrome apps for browsing. However, these efforts have yielded limited success.

Modest Gains In App Adoption

Over the past five years, Google has increased the percentage of iPhone searches conducted through its apps from 25% to the low 30s.

While this represents progress, it falls short of Google’s internal target of 50% by 2030.

The company has employed various marketing strategies, including campaigns showcasing features like Lens image search and improvements to the Discover feed.

Despite these efforts, Safari’s preinstalled status on iPhones remains an obstacle.

Financial Stakes & Market Dynamics

The financial implications of this struggle are considerable for both Google and Apple.

In 2023, Google reportedly paid over $20 billion to Apple to maintain its status as the default search engine on Safari.

By shifting more users to its apps, Google aims to reduce these payments and gain leverage in future negotiations.

Antitrust Lawsuit & Potential Consequences

The ongoing antitrust lawsuit threatens Google’s business model.

If Google loses the case, it could potentially lose access to approximately 70% of searches conducted on iPhones, which account for about half of the smartphones in the U.S.

This outcome could impact Google’s mobile search advertising revenue, which exceeded $207 billion in 2023.

New Initiatives & Leadership

To address these challenges, Google has brought in new talent, including former Instagram and Yahoo product executive Robby Stein.

Stein is now tasked with leading efforts to shift iPhone users to Google’s mobile apps, exploring ways to make the apps more compelling, including the potential use of generative AI.

Looking Ahead

With the antitrust ruling on the horizon, Google’s ability to attract users to its apps will determine whether it maintains its search market share.

We’ll be watching closely to see how Google navigates these challenges and if it can reduce its reliance on Safari.


Featured Image: photosince/shutterstock

WordPress Nested Pages Plugin High Severity Vulnerability via @sejournal, @martinibuster

The U.S. National Vulnerability Database (NVD) and Wordfence published advisories about a high-severity Cross-Site Request Forgery (CSRF) vulnerability in the Nested Pages WordPress plugin, which is installed on more than 100,000 sites. The vulnerability received a Common Vulnerability Scoring System (CVSS) rating of 8.8 out of 10, with 10 representing the highest severity.

Cross Site Request Forgery (CSRF)

Cross-Site Request Forgery (CSRF) is a type of attack that, in this case, takes advantage of a security flaw in the Nested Pages plugin allowing unauthenticated attackers to call (execute) PHP files, the code-level files of WordPress.

The first flaw is missing or incorrect nonce validation; nonces are a common security feature used in WordPress plugins to secure forms and URLs. The second flaw is missing sanitization, a method of securing data on input or output that is also common in WordPress plugins but absent here.

According to Wordfence:

“This is due to missing or incorrect nonce validation on the ‘settingsPage’ function and missing santization of the ‘tab’ parameter.”

The CSRF attack relies on getting a signed-in WordPress user (such as an administrator) to click a link, which in turn allows the attacker to complete the attack. This vulnerability is rated 8.8, making it a high-severity threat. To put that into perspective, a score of 9.0 or higher is rated critical, an even higher level, so at 8.8 it falls just short of a critical-level threat.
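
To make the two missing protections concrete, here is a generic sketch in Python of how a settings handler typically pairs a nonce (CSRF token) check with input sanitization. It illustrates the general pattern only; it is not the plugin’s actual PHP code, and the parameter and tab names are hypothetical.

import hmac

ALLOWED_TABS = {"general", "display", "advanced"}  # hypothetical tab names

def handle_settings_page(request: dict, expected_nonce: str) -> str:
    # Nonce validation: reject any request whose token doesn't match the one
    # issued to the logged-in user, so a forged cross-site request fails.
    submitted = request.get("nonce", "")
    if not hmac.compare_digest(submitted, expected_nonce):
        raise PermissionError("Invalid or missing nonce")

    # Sanitization: never pass raw request input (here, the 'tab' parameter)
    # into file includes or output; restrict it to a known-good allowlist.
    tab = request.get("tab", "general")
    if tab not in ALLOWED_TABS:
        tab = "general"
    return f"render settings tab: {tab}"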

This vulnerability affects all versions of the Nested Pages plugin up to and including version 3.2.7. The developers of the plugin released a security fix in version 3.2.8 and responsibly published the details of the security update in their changelog.

The official changelog documents the security fix:

“Security update addressing CSRF issue in plugin settings”

Read the advisory at Wordfence:

Nested Pages <= 3.2.7 – Cross-Site Request Forgery to Local File Inclusion

Read the advisory at the NVD:

CVE-2024-5943 Detail

Featured Image by Shutterstock/Dean Drobot

Google Gives Exact Reason Why Negative SEO Doesn’t Work via @sejournal, @martinibuster

Google’s Gary Illyes answered a question about negative SEO, providing useful insights into the technical details of how Google prevents low-quality spam links from affecting normal websites.

The answer about negative SEO was given in an interview in May and has gone unnoticed until now.

Negative SEO

Negative SEO is the practice of sabotaging a competitor with an avalanche of low quality links. The idea is that Google will assume that the competitor is spamming and knock them out of the search engine results pages (SERPs).

The practice of negative SEO originated in the online gambling space where the rewards for top ranking are high and the competition is fierce. I first heard of it around the mid-2000s (probably before 2010) when someone involved in the gambling space told me about it.

Virtually all websites that rank for meaningful search queries attract low-quality links, and there is nothing unusual about it; it’s always been this way. The concept of negative SEO became more prominent after the Penguin link spam update caused site owners to become more aware of the state of their inbound links.

Does Negative SEO Cause Harm?

The person interviewing Gary Illyes was taking questions from the audience.

She asked:

“Does negative SEO via spammy link building, a competitor throwing tens of thousands of links at another competitor, does that kind of thing still harm people or has Google kind of pushed that off to the side?”

Google’s Gary Illyes answered the question by first asking the interviewer if she remembered the Penguin update to which she answered yes.

He then explained his experience reviewing examples of negative SEO that site owners and SEOs had sent him. He said that out of hundreds of cases he reviewed there was only one case that might have actually been negative SEO but that the web spam team wasn’t 100% sure.

Gary explained:

“Around the time we released Penguin, there was tons and tons of tons of complaints about negative SEO, specifically link based negative SEO and then very un-smartly, I requested examples like show me examples, like show me how it works and show me that it worked.

And then I got hundreds, literally hundreds of examples of alleged negative SEO and all of them were not negative SEO. It was always something that was so far away from negative SEO that I didn’t even bother looking further, except one that I sent to the web spam team for double checking and that we haven’t made up our mind about it, but it could have been negative SEO.

With this, I want to say that the fear about negative SEO is much bigger than or much larger than it needs to be, we disable insane numbers of links…”

The above is Gary’s experience of negative SEO. Next he explains the exact reason why “negative SEO links” have no effect.

Links From Irrelevant Topics Are Not Counted

At about the 30-minute mark of the interview, Gary confirmed something important to understand about how links are evaluated. Google has, for a very long time, examined the context of the site that’s linking out and matched it against the site being linked to; if they don’t match up, Google doesn’t pass the PageRank signal.

Gary continued his answer:

“If you see links from completely irrelevant sites, be that p–n sites or or pure spam sites or whatever, you can safely assume that we disabled the links from those sites because, one of the things is that we try to match the the topic of the target page plus whoever is linking out, and if they don’t match then why on Earth would we use those links?

Like for example if someone is linking to your flower page from a Canadian casino that sells Viagra without prescription, then why would we trust that link?

I would say that I would not worry about it. Like, find something else to worry about.”

Google Matches Topics From Page To Page

There was a time, in the early days of SEO, when thousands of links from non-matching topics could boost a site to the top of Google’s search results. Some link builders used to offer “free” traffic-counter widgets to universities that, when placed in the footer, contained a link back to their client sites, and those links used to work. But Google tightened up on those kinds of links.

What Gary said about links having to be relevant matches up with what link builders have known for at least twenty years. The concept of off-topic links not being counted by Google was understood way back in the days when people did reciprocal linking.
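
Google hasn’t published how its systems compare topics, but the general idea can be illustrated with a toy Python sketch that compares topic vectors for the linking page and the target page and ignores the link when they’re too dissimilar. The vectors and threshold below are invented purely for illustration.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def link_counts(linking_page_topics: list[float],
                target_page_topics: list[float],
                threshold: float = 0.5) -> bool:
    """Return True if the linking page's topic is close enough to the target's."""
    return cosine_similarity(linking_page_topics, target_page_topics) >= threshold

# A casino page linking to a flower page: near-orthogonal topic vectors,
# so under this toy model the link would simply be ignored.
print(link_counts([0.9, 0.1, 0.0], [0.0, 0.1, 0.95]))  # False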

Although I can’t remember everything every Googler has ever said about negative SEO, this seems to be one of the rare occasions that a Googler offered a detailed reason why negative SEO doesn’t work.

Watch Gary Illyes answer the question at the 26 minute mark:

Featured Image by Shutterstock/MDV Edwards

Instagram Algorithm Shift: Why ‘Sends’ Matter More Than Ever via @sejournal, @MattGSouthern

In a recent Instagram Reel, Adam Mosseri, the head of Instagram, revealed a top signal the platform uses to rank content: sends per reach.

This metric measures the number of people who share a post with friends through direct messages (DMs) relative to the total number of viewers.
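
As a quick illustration of the arithmetic (with made-up numbers, since Instagram hasn’t published any thresholds), sends per reach is simply unique sends divided by accounts reached:

def sends_per_reach(sends: int, reach: int) -> float:
    """Share-to-DM rate: sends divided by the number of accounts reached."""
    return sends / reach if reach else 0.0

# Example: a reel reached 20,000 accounts and was sent to friends 900 times.
print(f"{sends_per_reach(900, 20_000):.1%}")  # 4.5%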

Mosseri advises creating content people want to share directly with close friends and family, saying it can improve your reach over time.

This insight helps demystify Instagram’s ranking algorithms and can assist your efforts to improve visibility on the platform.

Instagram’s ‘Sends Per Reach’ Ranking Signal

Mosseri describes the sends per reach ranking signal and the reasoning behind it:

“Some advice: One of the most important signals we use in ranking is sends per reach. So out of all the people who saw your video or photo, how many of them sent it to a friend in a DM? At Instagram we’re trying to be a place where people can be creative, but in a way that brings people together.

We want to not only be a place where you passively consume content, but where you discover things you want to tell your friends about.

A reel that made you laugh so hard you want to send it to your brother or sister. Or a soccer highlight that blew your mind and you want to send it to another fan. That kind of thing.

So, don’t force it as a creator. But if you can, think about making content that people would want to send to a friend, or to someone they care about.”

The emphasis on sends as a ranking factor aligns with Instagram’s desire to become a platform where users discover and share content that resonates with them personally.

Advice For Creators

While encouraging creators to produce shareworthy content, Mosseri cautioned against forced attempts to game the system.

However, content that prompts people to share it with friends via DM is said to boost reach.

What Does This Mean For You?

Getting people to share posts and reels with friends can improve reach, resulting in more engagement and leads.

Content creators and businesses can use this information to refine their Instagram strategies.

Rather than seeing Instagram’s focus on shareable content as an obstacle, consider it an opportunity to experiment with new approaches.

If your reach has been declining lately, and you can’t figure out why, this may be the factor that brings it back up.


Featured Image: soma sekhar/Shutterstock