Google’s Stance On AI Translations & Content Drafting Tools via @sejournal, @MattGSouthern

In a recording of Google’s June SEO office-hours Q&A session, John Mueller, a member of Google’s Search Relations team, discussed the impact of AI-generated content on SEO.

The discussion focused on two key areas: the indexing of AI-translated content and using AI tools for initial content drafting.

As the use of AI in content creation grows, Mueller’s advice can help you decide what’s best for your website and audience.

AI-Generated Translations

One of the questions posed to Mueller was: “How can one be transparent in the use of AI translations without being punished for AI-heavy content?”

In response, Mueller clarified that there’s no specific markup or labeling for automatically translated pages.

Instead, website owners should evaluate whether the translated content meets their quality standards and resonates with their target audience.

Mueller advised:

“If the pages are well-translated, if it uses the right wording for your audience, in short, if you think they’re good for your users, then making them indexable is fine.”

However, if the translated content falls short of expectations, website owners can exclude those pages from search engines’ indexing using the “noindex” robots meta tag.
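
For reference, here’s a minimal sketch of that exclusion (the markup below is a generic example, not something shared in the office-hours session); the tag goes in the head of each machine-translated page you don’t want indexed:

  <meta name="robots" content="noindex">

The same directive can also be delivered as an X-Robots-Tag HTTP header if editing the page markup isn’t practical.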

Mueller encouraged website owners to go beyond the bare minimum of word-for-word translation, stating:

“Ultimately, a good localization is much more than just a translation of words and sentences, so I would definitely encourage you to go beyond the minimal bar if you want users in other regions to cherish your site.”

AI-Assisted Content Creation

Another question addressed using AI tools to generate initial content drafts, with human editors reviewing and refining the content.

Mueller’s response focused on the overall quality of the published content, regardless of the tools or processes used in its creation.

Mueller explained:

“What matters for us is the overall quality that you end up publishing on your website.”

He acknowledged that using tools to assist with spelling, formulations, and initial drafting is not inherently problematic.

However, he cautioned that AI-generated content isn’t always considered high-quality.

Mueller recommended referring to Google’s guidance on AI-generated content and the company’s “helpful content” page, which provides a framework for evaluating content quality.

He also encouraged seeking input from independent third-party reviewers, stating:

“I realize it’s more work, but I find getting input from independent third-party folks on these kinds of questions extremely insightful.”

Analyzing Google’s Advice

On the surface, Mueller’s guidance is straightforward: evaluate the quality of AI-translated or AI-assisted content and ensure it meets quality standards.

However, his repetition of Google’s oft-cited “focus on quality” mantra offered little in the way of specific, actionable advice.

While Mueller acknowledged AI tools can assist with drafting, formatting, and other content creation tasks, his warning that AI output isn’t automatically “high-quality” hints at Google’s underlying skepticism toward the technology.

Reading between the lines, one could interpret Google’s stance as an attempt to discourage reliance on AI, at least for now.

Until more transparent and practical guidelines emerge, websites will be left to take their own calculated risks with AI-assisted content creation.

How This Can Help You

Whether using AI for translations or initial drafting, the key takeaway is prioritizing overall content quality, audience relevance, and adherence to Google’s guidelines.

Additionally, seeking third-party feedback can help ensure that AI-assisted content meets the highest standards for user experience and SEO.

Listen to the full episode of Google’s June SEO office-hours below:


Featured Image: Bakhtiar Zein/Shutterstock

Is Google Broken Or Are Googlers Right That It’s Working Fine? via @sejournal, @martinibuster

Recent statements by Googlers indicate that the algorithm is working the way it’s supposed to and that site owners should just focus more on their users and less on trying to give the algorithm what it’s looking for. But the same Googlers also say that the search team is working on a way to show more good content.

That can seem confusing because if the algorithm isn’t broken then why are they also working on it as if it’s broken in some way? The answer to the question is a bit surprising.

Google’s Point Of View

It’s important to try to understand what search looks like from Google’s point of view. Google makes that easier to do with their Search Off The Record (SOTR) podcast because it’s often just Googlers talking about search from their side of the search box.

And in a recent SOTR podcast Googlers Gary Illyes and John Mueller talked about how something inside Google might break but from their side of the search box it’s a minor thing, not worth making an announcement. But then people outside of Google notice that something’s broken.

It’s in that context that Gary Illyes made the following statement about deciding whether to “externalize” (communicate) that something is broken.

He shared:

“There’s also the flip side where we are like, “Well, we don’t actually know if this is going to be noticed,” and then two minutes later there’s a blog that puts up something about “Google is not indexing new articles anymore. What up?” And I say, “Okay, let’s externalize it.””

John Mueller then asks:

“Okay, so if there’s more pressure on us externally, we would externalize it?”

And Gary answered:

“Yeah. For sure. Yeah.”

John follows up with:

“So the louder people are externally, the more likely Google will say something?”

Gary then answered yes and no because sometimes nothing is broken and there’s nothing to announce, even though people are complaining that something is broken.

He explained:

“I mean, in certain cases, yes, but it doesn’t work all the time, because some of the things that people perceive externally as a failure on our end is actually working as intended.”

So okay, sometimes things are working as they should, but what’s broken is on the site owner’s side and maybe they can’t see it for whatever reason. You can tell because people sometimes tweet about getting caught in an update that didn’t happen. For example, some people thought their sites were mistakenly caught in the Site Reputation Abuse crackdown because their sites lost rankings at the same time that the manual actions went out.

The Non-Existent Algorithms

Then there are the people who continue to insist that their sites are suffering from the HCU (the helpful content update) even though there is no HCU system anymore.

SearchLiaison recently tweeted about the topic of people who say they were caught in the HCU.

“I know people keep referring to the helpful content system (or update), and I understand that — but we don’t have a separate system like that now. It’s all part of our core ranking systems: https://developers.google.com/search/help/helpful-content-faq”

It’s a fact: all the signals of the HCU are now part of the core algorithm, which consists of a lot of parts, and there is no longer that one thing that used to be the HCU. So the algorithm is still looking for helpfulness, but there are other signals as well, because a core update changes a lot of things.

So it may be the case that people should focus less on helpfulness-related signals and be more open to a wider range of possible issues, instead of fixating on one thing (helpfulness) that might not even be the reason a site lost rankings.

Mixed Signals

But then there are the mixed signals where Googlers say that things are working the way they should but that the search team is working on showing more sites, which kind of implies the algorithm isn’t working the way it should be working.

On June 3rd, SearchLiaison discussed how some people who claim they have algorithmic actions against them actually don’t. The statement came in answer to a June 3rd tweet by someone who said they were hit by an algorithm update on May 6th and that they don’t know what to fix because they didn’t receive a manual action. Please note that the tweet has a typo: they wrote June 6th when they meant May 6th.

The original June 3rd tweet refers to the site reputation abuse manual actions:

“I know @searchliaison says that there was no algorithmic change on June 6, but the hits we’ve taken since then have been swift and brutal.

Something changed, and we didn’t get the luxury of manual actions to tell us what we did wrong, nor did anyone else in games media.”

Before we get into what SearchLiaison said, the above tweet could be seen as an example of focusing on the wrong “signal” or thing, when it might be more productive to be open to a wider range of possible reasons why the site lost rankings.

SearchLiaison responded:

“I totally understand that thinking, and I won’t go back over what I covered in my long post above other than to reiterate that 1) some people think they have an algorithmic spam action but they don’t and 2) you really don’t want a manual action.”

In the same response, SearchLiaison left the door open to the possibility that search could do better and said that they’re researching how to do that.

He said:

“And I’ll also reiterate what both John and I have said. We’ve heard the concerns such as you’ve expressed; the search team that we’re both part of has heard that. We are looking at ways to improve.”

And it’s not just SearchLiaison leaving the door open to the possibility of something changing at Google so that more sites are shown; John Mueller also said something similar last month.

John tweeted:

“I can’t make any promises, but the team working on this is explicitly evaluating how sites can / will improve in Search for the next update. It would be great to show more users the content that folks have worked hard on, and where sites have taken helpfulness to heart.”

SearchLiaison said that they’re looking at ways to improve and Mueller said they’re evaluating how sites “can/will improve in Search for the next update.” So, how does one reconcile that something is working the way it’s supposed to and yet there’s room for improvement?

Well, one way to consider it is that the algorithm is functional and satisfactory but not perfect. And because nothing is perfect, there is always room for refinement and opportunities to improve, which is the case with everything, right?

Takeaways:

1. It may be helpful to consider that something that can be refined and made better is not necessarily broken, because nothing is perfect.

2. It may also be productive to consider that helpfulness is just one signal out of many, and what might look like an HCU issue might not be that at all, in which case a wider range of possibilities should be considered.

Featured Image by Shutterstock/ViDI Studio

Vulnerabilities In WooCommerce And Dokan Pro Plugins via @sejournal, @martinibuster

WooCommerce published an advisory about an XSS vulnerability while Wordfence simultaneously advised about a critical vulnerability in a WooCommerce plugin named Dokan Pro. The advisory about Dokan Pro warned that a SQL Injection vulnerability allows unauthenticated attackers to extract sensitive information from a website database.

Dokan Pro WordPress Plugin

The Dokan Pro plugin allows users to transform their WooCommerce website into a multi-vendor marketplace similar to sites like Amazon and Etsy. It currently has over 50,000 installations. Plugin versions up to and including 3.10.3 are vulnerable.

According to Wordfence, version 3.11.0 represents the fully patched and safest version.

WordPress.org lists the current number of plugin installations of the lite version at over 50,000 and a total all-time number of installations of over 3 million. As of this moment, only 30.6% of installations are using the most up-to-date version, 3.11, which may mean that 69.4% of all Dokan Pro installations are vulnerable.

Screenshot Of Dokan Plugin Download Statistics

Changelog Doesn’t Show Vulnerability Patch

The changelog is what tells users of a plugin what’s contained in an update. Most plugin and theme makers will publish a clear notice that an update contains a vulnerability patch. According to Wordfence, the vulnerability affects versions up to and including version 3.10.3. But the changelog entry for version 3.10.4, released April 25, 2024 (which is supposed to contain the patch), does not show that there’s a patch. It’s possible that the publisher of Dokan Pro and Dokan Lite didn’t want to alert hackers to the critical vulnerability.

Screenshot Of Dokan Pro Changelog

CVSS Score 10

The Common Vulnerability Scoring System (CVSS) is an open standard for assigning a score that represents the severity of a vulnerability. The severity score is based on how exploitable the vulnerability is and what its impact would be, plus supplemental metrics such as safety and urgency, which together add up to a total score ranging from least severe to the highest severity (10).

The Dokan Pro plugin received a CVSS score of 10, the highest level severity, which means that any users of the plugin are recommended to take immediate action.

Screenshot Of Dokan Pro Vulnerability Severity Score

Description Of Vulnerability

Dokan Pro was found to contain an Unauthenticated SQL Injection vulnerability. There are authenticated and unauthenticated vulnerabilities. Unauthenticated means that an attacker does not need to acquire user credentials in order to launch an attack. Between the two kinds of vulnerabilities, unauthenticated is the worst case scenario.

A WordPress SQL injection vulnerability is one in which a plugin or theme allows an attacker to manipulate the database. The database is the heart of every WordPress website, where every password, login name, post, theme, and piece of plugin data is stored. A vulnerability that allows anyone to manipulate the database is considerably severe – this is really bad.

This is how Wordfence describes it:

“The Dokan Pro plugin for WordPress is vulnerable to SQL Injection via the ‘code’ parameter in all versions up to, and including, 3.10.3 due to insufficient escaping on the user supplied parameter and lack of sufficient preparation on the existing SQL query. This makes it possible for unauthenticated attackers to append additional SQL queries into already existing queries that can be used to extract sensitive information from the database.”
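
As a generic illustration of that pattern (a hypothetical sketch; the actual Dokan Pro code, query, and table names haven’t been published):

  -- Query the plugin intends to run, with the user-supplied 'code' value dropped in without escaping or preparation
  SELECT id, discount FROM coupons WHERE coupon_code = 'SUMMER10';

  -- A crafted 'code' value appends the attacker's own query to the existing one
  SELECT id, discount FROM coupons WHERE coupon_code = '' UNION SELECT user_login, user_pass FROM wp_users -- ';

Prepared statements (in WordPress, $wpdb->prepare()) prevent this by treating the user-supplied value strictly as data rather than as part of the SQL.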

Recommended Action For Dokan Pro Users

Users of the Dokan Pro plugin are recommended to consider updating their sites as soon as possible. It’s always prudent to test updates before they’re pushed live to a website. But due to the severity of this vulnerability, users should consider expediting this update.

WooCommerce published an advisory about a vulnerability that affects versions 8.8.0 and higher. The vulnerability is rated 5.4, which is a medium-level threat, and only affects users who have the Order Attribute feature activated. Nevertheless, WooCommerce “strongly” recommends users update as soon as possible to the most current version (as of this writing), WooCommerce 8.9.3.

WooCommerce Cross Site Scripting (XSS) Vulnerability

The type of vulnerability that affects WooCommerce is called cross-site scripting (XSS), a type of vulnerability that depends on getting a user (like a WooCommerce store admin) to click a link.

According to WooCommerce:

“This vulnerability could allow for cross-site scripting, a type of attack in which a bad actor manipulates a link to include malicious content (via code such as JavaScript) on a page. This could affect anyone who clicks on the link, including a customer, the merchant, or a store admin.

…We are not aware of any exploits of this vulnerability. The issue was originally found through Automattic’s proactive security research program with HackerOne. Our support teams have received no reports of it being exploited and our engineering team analyses did not reveal it had been exploited.”

Should Web Hosts Be More Proactive?

Web developer and search marketing expert Adam J. Humphreys, of Making 8, Inc. (LinkedIn profile), feels that web hosts should be more proactive about patching critical vulnerabilities, even though that may cause some sites to lose functionality if there’s a conflict with some other plugin or theme in use.

Adam observed:

“The deeper issue is the fact that WordPress remains without auto updates and a constant vulnerability which is the illusion their sites are safe. Most core updates are not performed by hosts and almost every single host doesn’t perform any plugin updates even if they do them until a core update is performed. Then there is the fact most premium plugin updates will often not perform automatically. Many of which contain critical security patches.”

I asked if he meant a push update, where an update is forced onto a website.

“Correct, many hosts will not perform updates until a WordPress core update. Softaculous engineers confirmed this for me. WPEngine which claims fully managed updates doesn’t do it on the frequency to patch in a timely fashion for said plugins. WordPress without ongoing management is a vulnerability and yet half of all websites are made with it. This is an oversight by WordPress that should be addressed, in my opinion.”

Read more at Wordfence:

Dokan Pro <= 3.10.3 – Unauthenticated SQL Injection

Read the official WooCommerce vulnerability documentation:

WooCommerce Updated to Address Cross-site Scripting Vulnerability

Featured Image by Shutterstock/New Africa

Google Warns Of Quirk In Some Hreflang Implementations via @sejournal, @martinibuster

Google updated their hreflang documentation to note a quirk in how some websites are using it which (presumably) can lead to unintended consequences with how Google processes it.

hreflang Link Tag Attributes

The link tag is an HTML element that can be used to communicate data to the browser and search engines about linked resources relevant to the webpage. There are multiple kinds of data that can be linked to, such as CSS, JS, favicons, and hreflang data.

In the case of the hreflang attribute (an attribute of the link element), the purpose is to specify the language (and optionally the region) of alternate versions of a page. All of these link elements belong in the head section of the document.

Quirk In hreflang

Google noticed that there is an unintended behavior that happens when publishers combine multiple alternate-page annotations in one link element, so they updated the hreflang documentation to make this more broadly known.

The changelog explains:

“Clarifying link tag attributes
What: Clarified in our hreflang documentation that link tags for denoting alternate versions of a page must not be combined in a single link tag.

Why: While debugging a report from a site owner we noticed we don’t have this quirk documented.”

What Changed In The Documentation

There was one change to the documentation that warns publishers and SEOs to watch out for this issue. Those who audit websites should take notice of this.

This is the old version of the documentation:

“Put your <link> tags near the top of the <head> element. At minimum, the <link> tags must be inside a well-formed <head> section, or before any items that might cause the <head> to be closed prematurely, such as <body> or a tracking pixel. If in doubt, paste code from your rendered page into an HTML validator to ensure that the links are inside the <head> element.”

This is the newly updated version:

“The <link> tags must be inside a well-formed <head> section of the HTML. If in doubt, paste code from your rendered page into an HTML validator to ensure that the links are inside the <head> element. Additionally, don’t combine link tags for alternate representations of the document; for example don’t combine hreflang annotations with other attributes such as media in a single <link> tag.”
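
As a hypothetical illustration of the quirk (the URLs and the media query are invented for this example):

  <!-- Don't: hreflang combined with another alternate annotation (media) in a single link tag -->
  <link rel="alternate" hreflang="de" media="only screen and (max-width: 640px)" href="https://m.example.com/de/">

  <!-- Do: a separate link tag for each alternate annotation -->
  <link rel="alternate" hreflang="de" href="https://example.com/de/">
  <link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/">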

Google’s documentation didn’t say what the consequence of the quirk is but if Google was debugging it then that means it did cause some kind of issue. It’s a seemingly minor thing that could have an outsized impact.

Read the newly updated documentation here:

Tell Google about localized versions of your page

Featured Image by Shutterstock/Mix and Match Studio

Want More Clicks? Use Simple Headlines, Study Advises via @sejournal, @MattGSouthern

A new study shows that readers prefer simple, straightforward headlines over complex ones.

The researchers, Hillary C. Shulman, David M. Markowitz, and Todd Rogers, did over 30,000 experiments with The Washington Post and Upworthy.

They found that readers are likelier to click on and read headlines with common, easy-to-understand words.

The study, published in Science Advances, suggests that people are naturally drawn to simpler writing.

In the crowded online world, plain headline language can help grab more readers’ attention.

Field Experiments and Findings

Between March 2021 and December 2022, researchers conducted experiments analyzing nearly 9,000 tests involving over 24,000 headlines.

Data from The Washington Post showed that simpler headlines had higher click-through rates.

The study found that using more common words, a simpler writing style, and more readable text led to more clicks.

In the screenshot below, you can see examples of headline tests conducted at The Washington Post.

Screenshot from: science.org, June 2024.

A follow-up experiment looked more closely at how people process news headlines.

This experiment used a signal detection task (SDT) and found that readers read simpler headlines more closely when presented with a set of headlines of varied complexity.

The finding that readers engage less deeply with complex writing suggests that simple writing can help publishers increase audience engagement even for complicated stories.

Professional Writers vs. General Readers

The study revealed a difference between professional writers and general readers.

A separate survey showed that journalists didn’t prefer simpler headlines.

This finding is important because it suggests that journalists may not intuitively know how their audiences will react to and engage with the headlines they write.

Implications For Publishers

As publishers compete for readers’ attention, simpler headline language could create an advantage.

Simplified writing makes content more accessible and engaging, even for complex articles.

To show how important this is, look at The Washington Post’s audience data from March 2021 to December 2022. They averaged around 70 million unique digital visitors per month.

If each visitor reads three articles, a 0.1 percentage point increase in click-through rate (from 2.0% to 2.1%) means roughly 200,000 more readers engaging with stories due to the simpler language.
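
As a rough check of that math (assuming the 2.0% baseline applies to all of those article views): 70 million visitors × 3 articles is roughly 210 million headline impressions per month, and an extra 0.1% of 210 million is about 210,000 additional clicks, which is in the ballpark of the 200,000 figure.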

See also: Title Tag Optimization: A Complete How-to Guide

Why SEJ Cares

Google’s recurring message to websites is to create the best content for your readers. This study helps demonstrate what readers want from websites.

While writers and journalists may prefer more complex language, readers are more drawn to simpler, more straightforward headlines.

How This Can Help You

Using simpler headlines can increase the number of people who click on and read your stories.

The study shows that even a tiny increase in click-through rates means more readers.

Writing simple headlines also makes your content accessible to more people, including those who may not understand complex terminology or jargon.

To implement this, test different headline styles and analyze the data on what works best for your audience.


Featured Image: marekuliasz/Shutterstock

Google Launches Custom Event Data Import For GA4 via @sejournal, @MattGSouthern

Google announced a new feature for Google Analytics 4 (GA4), rolling out support for custom event data import.

This allows you to combine external data sources with existing GA4 data for more comprehensive reporting and analysis.

Google’s announcement reads:

“With this feature, you can use a combination of standard fields and event-scoped custom dimensions to join and analyze imported event metadata with your existing Analytics data.

You can then create custom reports for a more complete view of your Analytics data and imported event metadata.”

Custom Event Data Import: How It Works

Google’s help documentation describes the new capability:

“Custom event data import allows you to import and join data in ways that make sense to you. You have more flexibility in the choice of key and import dimensions.”

You begin the process by defining reporting goals and identifying any relevant external data sources not collected in Google Analytics.

You can then set up custom, event-scoped dimensions to use as “join keys” to link the imported data with Analytics data.
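
As a hypothetical example (the dimension and field names below are invented for illustration, not taken from Google’s documentation): if your site sends an event-scoped custom dimension such as article_id with each page_view event, you could upload a file keyed on that same value:

  article_id,author,content_category
  A-1001,Jane Smith,Product News
  A-1002,Raj Patel,Tutorials

Analytics would then join each row to the events that reported a matching article_id, making author and content_category available as dimensions in custom reports.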

Mapping Fields & Uploading Data

Once the custom dimensions are configured, Google provides a detailed mapping interface for associating the external data fields with the corresponding Analytics fields and parameters.

This allows seamless integration of the two data sources.

Google’s help documentation reads:

“In the Key fields table, you’ll add the Analytics fields to join your imported data. In the Import fields table, you’ll select the external fields to include via the join key across both standard Analytics fields/dimensions and custom typed-in event parameters.”

After the data is uploaded through the import interface, Google notes it can take up to 24 hours for the integrated data set to become available in Analytics reports, audiences, and explorations.

Why SEJ Cares

GA4’s custom event data import feature creates opportunities for augmenting Google Analytics data with a business’s proprietary sources.

This allows you to leverage all available data, extract actionable insights, and optimize strategies.

How This Can Help You

Combining your data with Google’s analytics data can help in several ways:

  1. You can create a centralized data repository containing information from multiple sources for deeper insights.
  2. You can analyze user behavior through additional lenses by layering your internal data, such as customer details, product usage, marketing campaigns, etc., on top of Google’s engagement metrics.
  3. Combining analytics data with supplementary data allows you to define audience segments more granularly for targeted strategies.
  4. Using the new data fields and dimensions, you can build custom reports and dashboards tailored to your specific business.

For businesses using GA4, these expanded reporting possibilities can level up your data-driven decision-making.


Featured Image: Muhammad Alimaki/Shutterstock

Google Quietly Ends Covid-Era Rich Results via @sejournal, @martinibuster

Google removed the Covid-era structured data associated with the Home Activities rich results, which had allowed online events to be surfaced in Search since August 2020, publishing a mention of the removal in the Search documentation changelog.

Home Activities Rich Results

The structured data for the Home Activities rich results allowed providers of online livestreams, pre-recorded events and online events to be findable in Google Search.

The original documentation has been completely removed from the Google Search Central webpages and now redirects to a changelog notation that explains that the Home Activities rich result is no longer available for display.

The original purpose was to allow people to discover things to do from home while in quarantine, particularly online classes and events. Google’s rich results surfaced details of how to watch, description of the activities and registration information.

Providers of online events were required to use Event or Video structured data. Publishers and businesses who have this kind of structured data should be aware that this kind of rich result is no longer surfaced, but it’s not necessary to remove the structured data if doing so is a burden; publishing structured data that isn’t used for rich results won’t hurt anything.

The changelog for Google’s official documentation explains:

“Removing home activity documentation
What: Removed documentation on home activity structured data.

Why: The home activity feature no longer appears in Google Search results.”

Read more about Google’s Home Activities rich results:

Google Announces Home Activities Rich Results

Read the Wayback Machine’s archive of Google’s original announcement from 2020:

Home activities

Featured Image by Shutterstock/Olga Strel

YouTube Rolls Out Thumbnail A/B Testing To All Channels via @sejournal, @MattGSouthern

YouTube will launch a new “Thumbnail Test & Compare” feature for all channels over the next few weeks.

This tool allows you to upload and test up to three different thumbnails for each video to see which performs best.

How Thumbnail Testing Works

The ‘Thumbnail Test & Compare‘ feature lets you upload multiple thumbnail options when publishing a new YouTube video.

During the testing period, YouTube will randomly display the different thumbnails to some of the video’s viewers.

After collecting enough data, which takes around two weeks, YouTube analyzes which thumbnail generated the highest “watch time share” from viewers.

It will then designate one of three potential outcomes:

  • Winner: A clear winner outperforming the other options based on watch time. The winning thumbnail is automatically applied.
  • Preferred: One thumbnail likely performed better than others, but the results are less certain statistically.
  • None: No thumbnail emerged as a clear winner. The original uploaded thumbnail is kept.

You can manually select your preferred video thumbnail even if it isn’t the winning option.

For a full demonstration, see the video below:

YouTube Thumbnail Best Practices

As part of the demonstration, YouTube outlined best practices for designing and testing thumbnails.

YouTube suggests creators start by testing thumbnails on a limited number of older videos to get initial guidance. Then, apply any learnings to testing thumbnails for more recent videos.

For thumbnail design itself, YouTube offers these tips:

Balance & Focal Point

“Ensure your images are balanced with a focal point to direct viewers’ attention towards.”

High Contrast

“Utilize high contrast allowing your subject to stand out against the background in both Light and Dark mode.”

Facial Expressions

“If there’s a face in your thumbnail, consider emotion. Be expressive and consider how you want viewers to feel when seeing your thumbnail.”

Concise Text

“With text, remember that fewer words can be impactful while too much text can be difficult to process while scrolling.”

Depth & Blank Space

“When it comes to depth of field keep your background in mind and play with blank space.”

Rollout To All Eligible Channels

All channels can access the ‘Thumbnail Test & Compare’ feature in the YouTube Studio desktop application. To do so, the “Advanced features” setting must be enabled.

YouTube is gradually rolling this out over the next few weeks to all channels that have opted in and meet those requirements.

The company says it will provide updates on expanding availability, such as potential mobile app support, in the future.

Optimizing For Watch Time

In an FAQ addressing common questions, YouTube explains that thumbnails are judged solely based on their ability to drive watch time, not other metrics like click-through rates.

YouTube states:

“We want to make sure that your thumbnail and content gets you the highest amount of viewer engagement, so we are optimizing for overall watch time share over other metrics.

We believe that this metric is the best way to guide your content strategy decisions & support your chances of success on the platform.”

Why SEJ Cares

The Thumbnail Test & Compare tool addresses a pain point by allowing true A/B testing. Previously, creators had to rely on best guesses or small-sample polls when selecting thumbnails for new videos.

By optimizing for watch time as the key success metric, YouTube is putting an emphasis on long-term viewer engagement over short-term clicks.

However, it’s understandable that some channels may also want data on how thumbnails impact initial impressions and click-through rates.

How This Can Help You

Smarter, higher-performing thumbnails could boost your content in YouTube’s recommendations and keep viewers watching more videos.

Video openers and thumbnails are the first make-or-break moments on YouTube, so having data-backed tools to perfect those first impressions could be a difference-maker.


Featured Image: Chayjitti Hongmanee/Shutterstock

Google’s Structured Data Update May Boost Merchant Sales via @sejournal, @martinibuster

Google updated their structured data guidelines to reflect support for a sitewide return policy within the Organization structured data. This eliminates the need to add redundant return policy information for every product listing structured data and can result in more traffic and sales to online merchants.

This doesn’t mean that merchants are required to change their current structured data, the old method remains unchanged. This simply adds an alternative way that is more streamlined and reduces the size of product structured data.

Improvement To Brand Knowledge Panel

Google’s change to the organization structured data will be reflected in the brand panel that Google shows when someone searches on a brand name. The updated brand panel will feature a new entry that reflects the company’s return policy.

Screenshot Of Brand Knowledge Panel Example

Benefits Of Organization-Level Return Policy

As part of this change, Google is adding search features in Knowledge Panels and Brand Panels that can show a merchant’s return policies. This means a merchant’s panel will be eligible to show a return policy, which in turn can encourage a higher clickthrough rate from the search engine results pages (SERPs) and a higher conversion rate.

Research conducted by the International Council of Shopping Centers (ICSC) in 2024 shows that online shoppers are strongly influenced by a merchant’s returns policy.

They discovered:

“82% of respondents said that when shopping online, return policies influence whether they decide to purchase from a retailer.

… If retailers charged a fee to ship back purchases made online, nearly three-fourths (71%) of respondents said they’d likely stop shopping online from that company altogether, while 6 in 10 said they’d likely stop shopping online with retailers that shortened the free return window.”

Clearly, a return policy can be a way to generate more online sales, and Google’s new support for sitewide return policy structured data helps communicate that information to online shoppers directly from search.

Google’s announcement explained:

“A return policy is a major factor considered by shoppers when buying products online, and so last year we enabled the extraction of structured data return policies for individual products. Today we’re adding support for return policies at the organization level as well, which means you’ll be able to specify a general return policy for your business instead of having to define one for each individual product you sell.

Adding a return policy to your organization structured data is especially important if you don’t have a Merchant Center account and want the ability to provide a return policy for your business. Merchant Center already lets you provide a return policy for your business, so if you have a Merchant Center account we recommend defining your return policy there instead.

…If your site is an online or local business, we recommend using one of the OnlineStore, or LocalBusiness subtypes of Organization.

We hope this addition makes it easier for you to add return policies for your business, and enable them to be shown across Google shopping experiences.”

Google Updates Organization Structured Data Documentation

Google added a new section to their Organization structured data documentation to reflect support for this new way to show return policies in the search results.

The new documentation states:

“MerchantReturnPolicy
Use the following properties to describe general return policies for your entire Organization, if applicable to your business. If you have specific policies for individual products, use merchant listing markup instead.”
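
As a rough sketch of what organization-level markup along these lines could look like (the business details and policy values below are purely illustrative; check Google’s documentation for the full list of supported properties):

  {
    "@context": "https://schema.org",
    "@type": "OnlineStore",
    "name": "Example Store",
    "url": "https://www.example.com",
    "hasMerchantReturnPolicy": {
      "@type": "MerchantReturnPolicy",
      "applicableCountry": "US",
      "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
      "merchantReturnDays": 30,
      "returnFees": "https://schema.org/FreeReturn"
    }
  }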

Read Google’s announcement:

Adding markup support for organization-level return policies

Read the new MerchantReturnPolicy documentation on Google’s official Organization structured data page:

Organization (Organization) structured data – MerchantReturnPolicy

Google’s Gary Illyes: Lastmod Signal Is Binary via @sejournal, @MattGSouthern

In a recent LinkedIn discussion, Gary Illyes, Analyst at Google, revealed that the search engine takes a binary approach when assessing a website’s lastmod signal from sitemaps.

The revelation came as Illyes encouraged website owners to upgrade to WordPress 6.5, which now natively supports the lastmod element in sitemaps.

When Mark Williams-Cook asked if Google has a “reputation system” to gauge how much to trust a site’s reported lastmod dates, Illyes stated, “It’s binary: we either trust it or we don’t.”

No Shades Of Gray For Lastmod

The lastmod tag indicates the date of the most recent significant update to a webpage, helping search engines prioritize crawling and indexing.

Illyes’ response suggests Google doesn’t factor in a website’s history or gradually build trust in the lastmod values being reported.

Google either accepts the lastmod dates provided in a site’s sitemap as accurate, or it disregards them.

This binary approach reinforces the need to implement the lastmod tag correctly and only specify dates when making meaningful changes.
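
For reference, lastmod is the per-URL date element in an XML sitemap; a minimal entry looks something like this (the URL and date are placeholders):

  <url>
    <loc>https://www.example.com/blog/post-title/</loc>
    <lastmod>2024-06-20</lastmod>
  </url>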

Illyes commended the WordPress developer community for their work on version 6.5, which automatically populates the lastmod field without extra configuration.

Accurate Lastmod Essential For Crawl Prioritization

While convenient for WordPress users, the native lastmod support is only beneficial if Google trusts you’re using it correctly.

Inaccurate lastmod tags could lead to Google ignoring the signal when scheduling crawls.

With Illyes confirming Google’s stance, it shows there’s no room for error when using this tag.

Why SEJ Cares

Understanding how Google acts on lastmod can help ensure Google displays new publish dates in search results when you update your content.

It’s an all-or-nothing situation – if the dates are deemed untrustworthy, the signal could be disregarded sitewide.

With the information revealed by Illyes, you can ensure your implementation follows best practices to the letter.


Featured Image: Danishch/Shutterstock