Ecommerce in Brazil: Growth Despite Hurdles

Retail ecommerce in Brazil more than doubled to 185 billion reais ($34.5 billion) in 2023 from 70 billion reais in 2018, while the average order value rose over the same period from 435 reais to 470, according to the Brazilian Electronic Commerce Association.

By comparison, U.S. retail ecommerce sales in 2023 were $1.14 trillion, per eMarketer.

In Brazil, perfumery and cosmetics had the most online orders in 2023, followed by home and decor, health and food, and beverages.

Electronics in 2023 represented 31% of total ecommerce revenue, according to ECBD, a Brazil-based analysis firm, followed by fashion at 27%, hobby and leisure at 14%, and furniture and homeware at 11%.

Mercado Livre holds a dominant ecommerce position in Latin America. It was Brazil’s most trafficked retail website in March, with over 216 million visits, followed by Amazon, Shopee, OLX, and AliExpress. All are marketplaces. Amazon.com in the U.S. received 3.15 billion visits in March.

In Q1 2024, about 16% of total retail sales in Brazil came from digital channels — apps, sites, email. That’s comparable to the U.S. for the same period. In China, ecommerce in Q1 was 23% of total retail sales.

International Sellers

A 2024 study commissioned by Alibaba showed that cross-border ecommerce represented a mere 0.5% of total retail sales in Brazil, likely due to the difficulty of doing business there.

Despite consumer demand for phones, brand-name clothing, and baby gear, among other goods, it’s expensive and difficult to get things into the country.

“Doing business in Brazil requires in-depth knowledge of the local environment, including the high direct and indirect costs of doing business,” according to the U.S. International Trade Administration. Lawmakers have for years attempted to enact reforms, but businesses continue to face complex tax schemes, restrictive labor laws, and vexing import barriers.

Those hurdles have collectively restricted access to international goods, prompting many Brazilians to shop abroad.

Last year Brazilian lawmakers created a tax exemption for online purchases of $50 or less from international sellers, but pushback from domestic merchants may result in its revocation and replacement with a 20% fee. Purchases above $50 are already subject to a 60% tax.

Brazilian logistics are another ecommerce barrier, with inadequate infrastructure in the world’s fifth-largest country, much of which is rainforest. There aren’t enough roads, maintenance is poor, and ports have limited capacity. Cargo theft is a problem.

Inflation compounds those hurdles; it reached 12% in April 2022, a five-year high.

Payments

Despite the challenges, the country has excelled in modernizing payments. In 2020 the Brazilian Central Bank introduced Pix, a real-time payments system in which a user’s key is simply an email address, phone number, or local ID.

By 2023 Pix represented 41% of all retail transactions — online and in-store — followed by credit cards at 15% and debit cards at 13%. Buy-now pay-later services are also popular.

Brazil is the largest economy in Latin America and accounts for 57% of the region’s ecommerce sales, with projected growth of about 14% annually through 2026, according to Payments and Commerce Market Intelligence, a global research firm.

Growth was bolstered by the pandemic, which forced Brazilians who didn’t fully trust the web to shop online anyway. But Brazil remains among the world’s most unequal countries: the bottom 40% of families earned less in 2021 than in 2016, per the World Bank. Fewer jobs, persistent inflation, and a drop in government support could limit ecommerce growth, at least over the medium term.

LinkedIn Rolls Out New Newsletter Tools via @sejournal, @MattGSouthern

LinkedIn is launching several new features for people who publish newsletters on its platform.

The professional networking site wants to make it easier for creators to grow their newsletter audiences and engage readers.

More People Publishing Newsletters On LinkedIn

The company says the number of LinkedIn members publishing newsletter articles has increased by 59% over the past year.

Engagement on these creator-hosted newsletters is also up 47%.

With this growing interest, LinkedIn is updating its newsletter tools.

A New Way To View & Comment

One of the main changes is an updated reading experience that displays comments alongside the newsletter articles.

This allows readers to view and participate in discussions more easily while consuming the content.

See an example of the new interface below.

Screenshot from: linkedin.com, June 2024.

Design Your Own Cover Images

You can now use Microsoft’s AI-powered Designer tool to create custom cover images for your newsletters.

The integration provides templates, size options, and suggestions to help design visually appealing covers.

More Subscriber Notifications

LinkedIn is improving the notifications sent to newsletter subscribers to drive more readership.

When a new issue is published, subscribers will receive email alerts and in-app messages. LinkedIn will also prompt your followers to subscribe.

Mention Other Profiles In Articles

You can now embed links to other LinkedIn profiles and pages directly into your newsletter articles.

This lets readers click through and learn more about the individuals or companies mentioned.

In the example below, you can see it’s as easy as adding a link.

Screenshot from: linkedin.com, June 2024.

Preview Links Before Publishing

Lastly, LinkedIn allows you to access a staging link that previews the newsletter URL before hitting publish.

This can help you share and distribute your content more effectively.

Why SEJ Cares

As LinkedIn continues to lean into being a publishing platform for creators and thought leaders, updates that enhance the newsletter experience are noteworthy for digital marketers and industry professionals looking to build an audience.

The new tools are part of LinkedIn’s broader effort to court creators publishing original content on its platform amid rising demand for newsletters and knowledge-sharing.

How This Can Help You

If you publish a newsletter on LinkedIn, these new tools can help you design more visually appealing content, grow your subscriber base, interact with your audience through comments, and preview your content before going live.


Featured Image: Tada Images/Shutterstock

When Is Duplicate Content Acceptable For Local SEO? Google Explains via @sejournal, @MattGSouthern

Google’s John Mueller clarified that localized duplicate content across regional websites is acceptable. Unique content is still recommended for specific page types.

  • Google doesn’t penalize duplicate content on localized websites.
  • Translating or customizing core content for local markets is acceptable (see the hreflang sketch below the list).
  • However, unique content is still needed for certain pages.
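
Mueller’s answer doesn’t prescribe a mechanism, but the standard way to tell Google that regional pages are localized alternates of one another, rather than duplicates, is hreflang annotation. A minimal sketch, with every URL a placeholder:

  <!-- Hypothetical regional variants of the same page. -->
  <link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
  <link rel="alternate" hreflang="en-gb" href="https://example.com/uk/" />
  <link rel="alternate" hreflang="pt-br" href="https://example.com/br/" />
  <link rel="alternate" hreflang="x-default" href="https://example.com/" />

These tags go in the <head> of each variant, and each page should list all variants, including itself.
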
Google’s Response to Affiliate Link Heavy Content via @sejournal, @martinibuster

Google’s John Mueller responded to a question about whether affiliate links have a negative impact on rankings, touching on factors that affiliate sites should keep in mind.

Hypothesis: Google Targets Affiliate Sites

There is a decades-long hypothesis that Google targets affiliate sites. SEOs were talking about it as far back as Pubcon Orlando 2004 and for longer than that on SEO forums.

In hindsight it’s easy to see that Google wasn’t targeting affiliate sites; it was targeting the quality level of sites that relied on tactics like keyword stuffing, organized link rings, scaled automated content, and so on.


The idea that Google targets affiliate sites persists, probably because so many affiliate sites lose rankings with every update. But it’s also true that those same affiliate sites have shortcomings that their marketers may or may not be aware of.

It’s those shortcomings, John Mueller’s answer implies, that affiliates should focus on.

Do Many Affiliate Links Hurt Rankings?

This is the question:

“…do many affiliate links hurt the ranking of a page?”

Google’s John Mueller answered:

“We have a blog post from about 10 years ago about this, and it’s just as relevant now. The short version is that having affiliate links on a page does not automatically make your pages unhelpful or bad, and also, it doesn’t automatically make the pages helpful.

You need to make sure that your pages can stand on their own, that they’re really useful and helpful in the context of the web, and for your users.”

Pages That Can Stand On Their Own

The thing about some affiliate marketers who encounter ranking issues is that even though they “did everything perfect,” many of their ideas of perfection come from blogs that recommend outdated tactics.

Consider that today, in 2024, some SEOs still insist that Google uses simple clickthrough rates as a ranking factor, as if AI hadn’t been part of Google’s algorithm for the past 10+ years, and as if machine learning couldn’t use clicks to create classifiers that predict which content is most likely to satisfy users.

What Are Common Outdated Tactics?

These are in my opinion the kind of tactics that can lead to unhelpful content:

  • Targeting Keywords Not People
    Keywords, in my opinion, are the starting point for identifying topics that people are interested in. Google doesn’t rank keywords, they rank content that’s about the topics and concepts associated with those keywords. An affiliate, or anyone else, who begins and ends their content by targeting keywords is unintentionally creating content for search engines not people and lacks the elements of usefulness and helpfulness that Google’s signals are looking for.
  • Copying Competitors
    Another tactic that’s more harmful than helpful is the advice to copy what ranking competitors are doing and then do it ten times better. That approach basically gives Google what it already has in the search results; it’s the kind of content Google will not find unique or original, and it risks being discovered but not indexed at worst and ranking on page two or three at best.

The essence of outcompeting a competitor isn’t copying them; it’s doing something users appreciate that competitors aren’t doing.

Takeaways:

The following are my takeaways, my opinion on three ways to do better in search.

  • Don’t just target keywords.
    Focus on the people who are searching for those keywords and what their needs are.
  • Don’t research your competitors to copy what they’re doing.
    Research your competitors to identify what they’re not doing (or doing poorly) and make that your competitive strength.
  • Don’t just build links to promote your site to other sites.
    Promote your sites to actual people. Identify where your typical site visitors spend time and find ways of making your website known to them there. Promotion does not begin and end with links.

What Does Google Say About Affiliate Sites?

Mueller mentioned that he wrote something ten years ago but he didn’t link to it. Good luck finding it.

But Google has published content about the topic and here are a few things to keep in mind.

1. Use the rel=sponsored link attribute (a markup sketch follows these five points). The following is from 2021:

“Affiliate links on pages such as product reviews or shopping guides are a common way for blogs and publishers to monetize their traffic. In general, using affiliate links to monetize a website is fine. We ask sites participating in affiliate programs to qualify these links with rel=”sponsored”, regardless of whether these links were created manually or dynamically.

As a part of our ongoing effort to improve ranking for product-related searches and better reward high-quality content, when we find sites failing to qualify affiliate links appropriately, we may issue manual actions to prevent these links from affecting Search, and our systems might also take algorithmic actions. Both manual and algorithmic actions may affect how we see a site in Search, so it’s good to avoid things that may cause actions, where possible.”

2. Google’s ten year old advice about affiliate programs and added value:

“If your site syndicates content that’s available elsewhere, a good question to ask is: “Does this site provide significant added benefits that would make a user want to visit this site in search results instead of the original source of the content?” If the answer is “No,” the site may frustrate searchers and violate our quality guidelines. As with any violation of our quality guidelines, we may take action, including removal from our index, in order to maintain the quality of our users’ search results. “

3. Site reputation abuse

“Affiliate content on a site previously used by a government agency”

Not site reputation abuse:

“Embedding third-party ad units throughout a page or using affiliate links throughout a page, with links treated appropriately”

4. Thin affiliate pages:

“Thin affiliate pages are pages with product affiliate links on which the product descriptions and reviews are copied directly from the original merchant without any original content or added value.”

5. Google has an entire webpage that documents how to write high quality reviews:

Write high quality reviews
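
To make point 1 concrete, here is what a qualified affiliate link looks like in HTML; the merchant URL and tracking parameter are hypothetical placeholders:

  <!-- Hypothetical affiliate link qualified per Google's guidance. -->
  <a href="https://merchant.example.com/widget?aff=12345" rel="sponsored">Check the current price</a>

rel="nofollow" also remains supported for this purpose, but rel="sponsored" is the attribute Google specifically recommends for affiliate and other paid links.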

Affiliate Sites Rank Highly All The Time

It’s a fact that affiliate sites routinely rank at the top of the search results. It’s also true that Google doesn’t target affiliate sites, Google generally targets spammy tactics and low quality content.

Yes there are false positives and Google’s algorithms have room for improvement. But in general, it’s best to keep an open mind about why a site might not be ranking.

Listen to the Office Hours podcast at the 4:55 minute mark:

Featured Image by Shutterstock/Dilen

The Google Leak: insights and implications for SEO best practices

In the SEO universe, paradigm-shifting insights can suddenly appear out of nowhere, like a decloaking warbird a little too close to the neutral zone, catching even the most seasoned captains by surprise. The recent Google API leak has sent such a shockwave through the SEO quadrant, igniting intense discussions and debates within the community. We’ve laid in a course to navigate the nebula of this new information, examining the implications of the leak and how it correlates with Yoast’s established SEO protocols, providing clear guidance amidst these cosmic shifts.

Yoast’s perspective on understanding The Leak:

The contents of the leaked documents do not contain the secret recipe for the algorithm. They are lists of API calls that can retrieve specific bits of information from the data that Google is tracking and storing. When reading through the API documentation, remember that presence is not confirmation of use: do not mistake any of the data points included in the leak as “proof” that “Google’s been lying to us!” The only thing the leak confirms is that the data is collected and stored for potential use; its existence does not prove those data points are actively used as ranking factors in the algorithm.

Personally, I like to think of the information collected as ingredients in the pantry. I can assemble them in a number of different ways and in different amounts to make a variety of recipes. They’re useful to have handy; however, the fact that they’re in the pantry doesn’t mean I am definitely using them in the recipe I am cooking right now. However, it is possible I will use some of those ingredients to make dinner tomorrow — or maybe next week. The point is, I keep them on hand so I have options, and that is why Google is collecting the data, to have options when testing or modifying the algorithm.

At Yoast, we believe in equipping our users with tools and knowledge that stand the test of time and tide. The recent leak does not change our core philosophy; rather, it reinforces our commitment to adhering to SEO best practices.

Clarifying the Leak’s contents:

  1. API documentation vs. algorithmic use: The leaked documents reveal API calls to a data warehouse, not direct insights into the algorithm’s operational mechanics. The fact that the data is being tracked and stored does not mean it is actively used for ranking, but it also doesn’t mean it isn’t.
  2. Long-term data storage: The data points revealed confirm that Google maintains a kind of ‘permanent record’ of a site’s performance and usage data (if you went to public school in the US, you’ll know what a permanent record is). This underscores the importance of maintaining consistent, quality SEO practices over the life of a domain.
  3. Potential data utilization:
    • Usage data and performance metrics: Data from sources like Chrome, tracking speed, and user interaction, highlight the breadth of Google’s data collection, which could influence or be used in future algorithm updates.
    • Non-link mentions and click data: The storage of click data and non-link mentions suggests a broader scope of interest in metrics that many believed were not being tracked or used in the algorithm. Any or all of these could be tested or integrated into ranking factors at any point, or may already be in use.

Key takeaways and recommendations:

  • Transparency and behavior: Everything is potentially recorded, so engage in SEO practices with the awareness that any ‘cheaty’ behavior not only risks penalties but may also impact your site’s long-term reputation and performance.
  • Comprehensive optimization: Optimize all aspects of your site’s performance, from speed to user engagement, not just for current benefits but for future-proofing against potential algorithm updates that might use some of the other data being recorded and stored.

Conclusion

The Google Leak, while certainly not a nothing-burger, isn’t quite the massive disturbance in the subspace continuum we originally thought. It is absolutely interesting and the potential uses of some of the data points are fascinating to contemplate, but ultimately, this is a reminder of the complex, evolving/mutating nature of SEO. It’s also a reminder that it is not enough to just adapt to immediate changes, but to consistently practice a holistic, ethical approach to SEO. At Yoast, we continue to support our community by providing tools that guide you through these uncharted galaxies with integrity and foresight.


Google’s Stance On AI Translations & Content Drafting Tools via @sejournal, @MattGSouthern

In a recording of Google’s June SEO office-hours Q&A session, John Mueller, a member of Google’s Search Relations team, discussed the impact of AI-generated content on SEO.

The discussion focused on two key areas: the indexing of AI-translated content and using AI tools for initial content drafting.

As the use of AI in content creation grows, Mueller’s advice can help you decide what’s best for your website and audience.

AI-Generated Translations

One of the questions posed to Mueller was: “How can one be transparent in the use of AI translations without being punished for AI-heavy content?”

In response, Mueller clarified that there’s no specific markup or labeling for automatically translated pages.

Instead, website owners should evaluate whether the translated content meets their quality standards and resonates with their target audience.

Mueller advised:

“If the pages are well-translated, if it uses the right wording for your audience, in short, if you think they’re good for your users, then making them indexable is fine.”

However, if the translated content falls short of expectations, website owners can exclude those pages from search engines’ indexing using the “noindex” robots meta tag.
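
The exclusion Mueller describes is a single directive in the page’s <head>; a minimal sketch:

  <!-- Keeps a poorly translated page out of search engines' indexes. -->
  <meta name="robots" content="noindex">

The same directive can also be sent as an X-Robots-Tag HTTP header, which is useful for non-HTML resources.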

Mueller encouraged website owners to go beyond the bare minimum of word-for-word translation, stating:

“Ultimately, a good localization is much more than just a translation of words and sentences, so I would definitely encourage you to go beyond the minimal bar if you want users in other regions to cherish your site.”

AI-Assisted Content Creation

Another question addressed using AI tools to generate initial content drafts, with human editors reviewing and refining the content.

Mueller’s response focused on the overall quality of the published content, regardless of the tools or processes used in its creation.

Mueller explained:

“What matters for us is the overall quality that you end up publishing on your website.”

He acknowledged that using tools to assist with spelling, formulations, and initial drafting is not inherently problematic.

However, he cautioned that AI-generated content isn’t always considered high-quality.

Mueller recommended referring to Google’s guidance on AI-generated content and the company’s “helpful content” page, which provides a framework for evaluating content quality.

He also encouraged seeking input from independent third-party reviewers, stating:

“I realize it’s more work, but I find getting input from independent third-party folks on these kinds of questions extremely insightful.”

Analyzing Google’s Advice

On the surface, Mueller’s guidance is straightforward: evaluate the quality of AI-translated or AI-assisted content and ensure it meets quality standards.

However, his repetition of Google’s oft-cited “focus on quality” mantra offered little in the way of specific, actionable advice.

While Mueller acknowledged AI tools can assist with drafting, formatting, and other content creation tasks, his warning that AI output isn’t automatically “high-quality” hints at Google’s underlying skepticism toward the technology.

Reading between the lines, one could interpret Google’s stance as an attempt to discourage reliance on AI, at least for now.

Until more transparent and practical guidelines emerge, websites will be left to take their own calculated risks with AI-assisted content creation.

How This Can Help You

Whether using AI for translations or initial drafting, the key takeaway is prioritizing overall content quality, audience relevance, and adherence to Google’s guidelines.

Additionally, seeking third-party feedback can help ensure that AI-assisted content meets the highest standards for user experience and SEO.

Listen to the full episode of Google’s June SEO office-hours below:


Featured Image: Bakhtiar Zein/Shutterstock

Ask A PPC: Why Have My Google Ads Not Got Any Impressions? via @sejournal, @navahf

This month’s Ask A PPC comes from Vijay, who asks:

“Why are my Google Ads approved but have no impressions? How do you fix it?”

We’re going to go into the timely question of why a Google Ads entity (keyword, ad, ad group, or campaign) might not have impressions.

We will tackle the main and solvable ones, but there will always be edge cases.

If you have questions beyond these, don’t hesitate to reach out!

Why Doesn’t A Google Entity Have Impressions?

The biggest reason is low search volume.

If you’re targeting a long-tail (five or more words) exact match keyword or a keyword in a hyper-niche industry, that keyword concept may get zero impressions. Also, if the keyword is in a brand-new ad account, it will have a really hard time ramping up.

This is why Google tends to suggest using looser ideas in the beginning. You need data to get the ad account up and running, though it’s important to put protections in place.

A common way to do this is to put in bid caps (either through bidding strategies, portfolio bidding, or manual bidding).

Dynamic Search Ads (DSA) can help you get ideas of how people search. When paired with max clicks with a bid cap, DSA can give you a reasonable sense of how much your industry will cost as well as search volume.

You may also decide that you want to use a limited broad match with lots of negatives. If you go this route, be careful about which conversion actions you set, as broad match does factor in conversions when considering matching.

There is a reality that some ideas will have lower search volume. If you’re creating a new offering, you may benefit from running visual content (Performance Max should only be used if you have at least 30 conversions in a 30-day period).

Another reason a keyword might have zero impressions is that the ad hasn’t been approved yet. Google can take up to two days to approve ads (especially in new accounts), so it’s important to factor those timelines in.

Additionally, a previously running ad might have been flagged for editorial review (very common when discussing a trademarked term or anything relating to credit).

You also may have accidental duplicate keywords, which can cause serving issues. If you have more than one keyword that can capture the same traffic, there will be inevitable winners and losers. However, sometimes, they can cancel each other out, and neither will serve.

Another reason for low impressions or zero impressions is that your bids and budgets don’t align with the keyword concepts you’re targeting.

We know that Google has instituted a price floor for the auction. If your bids can’t clear it, or you ask one budget to support too many things, you will inevitably end up with zero impressions.

A great way to check for this is to use the Keyword Planner to get a rough sense of what the auction prices will be.

You’ll also want to leverage Google Trends to see how people in different areas are searching and what is trending in different parts of the country that you’re trying to target.

How Can You Solve Low Impressions?

If your low impressions are tied to budgets or bids, and there is no way to invest more, you will need to look for traffic and leads on other channels or other types of Google properties. This may include using display or video.

You may also want to look at Microsoft or social plays like Meta/Instagram. Part of why auction prices can be cheaper on those channels than on Google Search is search’s inherent transactional bias: people searching are often ready to buy.

If the issue is structure, you likely have too many entities in an ad group or campaign. The answer is to move a little bit of budget and set up a different campaign to cover those ideas or to pause ideas that are hogging the budget that aren’t worth as much.

A very common problem, particularly in campaigns that are running smart bidding, is that there will be initial winners and losers. If you include too many keyword concepts, valuable ones may get lost.

This is part of why Google will be pausing keywords that have had zero impressions over the past 13 months, so that your account isn’t penalized for carrying too many zero-impression keywords.

If the issue is creative, then the best advice is just to use responsive search and display ads, as well as Performance Max, and keep cycling through the creative and ways to talk about it.

Consider layering in Google’s AI for creative, since you’ll be working with assets Google has outright said meet its requirements. Granted, you want to make sure the creative meets your brand standards.

Final Takeaways

It’s very frustrating when a keyword or ad has zero impressions, and you’re not sure why.

As we’ve discussed, it could be a low search volume issue – you may need to widen what you’re willing to accept.

It could be a bid and budget issue, and you’re just not entering the auction at all (or at least not enough for the spend to matter).

Have a question about PPC? Submit via this form or tweet me @navahf with the #AskPPC hashtag. See you next month!



Featured Image: Paulo Bobita/Search Engine Journal

Is Google Broken Or Are Googlers Right That It’s Working Fine? via @sejournal, @martinibuster

Recent statements by Googlers indicate that the algorithm is working the way it’s supposed to and that site owners should just focus more on their users and less on trying to give the algorithm what it’s looking for. But the same Googlers also say that the search team is working on a way to show more good content.

That can seem confusing because if the algorithm isn’t broken then why are they also working on it as if it’s broken in some way? The answer to the question is a bit surprising.

Google’s Point Of View

It’s important to try to understand what search looks like from Google’s point of view. Google makes that easier with its Search Off The Record (SOTR) podcast, because it’s often just Googlers talking about search from their side of the search box.

And in a recent SOTR podcast, Googlers Gary Illyes and John Mueller talked about how something inside Google might break, but from their side of the search box it’s a minor thing, not worth an announcement. Then people outside of Google notice that something’s broken.

It’s in that context that Gary Illyes made the following statement about deciding whether to “externalize” (communicate) that something is broken.

He shared:

“There’s also the flip side where we are like, “Well, we don’t actually know if this is going to be noticed,” and then two minutes later there’s a blog that puts up something about “Google is not indexing new articles anymore. What up?” And I say, “Okay, let’s externalize it.””

John Mueller then asks:

“Okay, so if there’s more pressure on us externally, we would externalize it?”

And Gary answered:

“Yeah. For sure. Yeah.”

John follows up with:

“So the louder people are externally, the more likely Google will say something?”

Gary then answered yes and no because sometimes nothing is broken and there’s nothing to announce, even though people are complaining that something is broken.

He explained:

“I mean, in certain cases, yes, but it doesn’t work all the time, because some of the things that people perceive externally as a failure on our end is actually working as intended.”

So okay, sometimes things are working as they should, and what’s broken is on the site owner’s side, where perhaps they can’t see it. You can tell, because people sometimes tweet about getting caught in an update that didn’t happen; some thought their sites were mistakenly caught in the site reputation abuse crackdown because their rankings dropped at the same time the manual actions went out.

The Non-Existent Algorithms

Then there are the people who continue to insist that their sites are suffering from the HCU (the helpful content update) even though there is no HCU system anymore.

SearchLiaison recently tweeted about the topic of people who say they were caught in the HCU.

“I know people keep referring to the helpful content system (or update), and I understand that — but we don’t have a separate system like that now. It’s all part of our core ranking systems: https://developers.google.com/search/help/helpful-content-faq”

It’s a fact: all the signals of the HCU are now part of the core algorithm, which consists of many parts, and there is no longer the one system that used to be the HCU. The algorithm still looks for helpfulness, but there are other signals as well, because a lot of things change in a core update.

So it may be that people should focus less on helpfulness-related signals and be more open to a wider range of issues, instead of fixating on the one thing (helpfulness) that might not even be why a site lost rankings.

Mixed Signals

But then there are the mixed signals where Googlers say that things are working the way they should but that the search team is working on showing more sites, which kind of implies the algorithm isn’t working the way it should be working.

On June 3rd, SearchLiaison discussed how some people who believe they have algorithmic actions against their sites actually don’t. The context of the statement was an answer to a June 3rd tweet by someone who said they were hit by an algorithm update on May 6th and didn’t know what to fix because they hadn’t received a manual action. Note that the tweet has a typo: they wrote June 6th when they meant May 6th.

The original June 3rd tweet refers to the site reputation abuse manual actions:

“I know @searchliaison says that there was no algorithmic change on June 6, but the hits we’ve taken since then have been swift and brutal.

Something changed, and we didn’t get the luxury of manual actions to tell us what we did wrong, nor did anyone else in games media.”

Before we get into what SearchLiaison said, the above tweet could be seen as an example of focusing on the wrong “signal”; it might be more productive to stay open to a wider range of possible reasons the site lost rankings.

SearchLiaison responded:

“I totally understand that thinking, and I won’t go back over what I covered in my long post above other than to reiterate that 1) some people think they have an algorithmic spam action but they don’t and 2) you really don’t want a manual action.”

In the same response, SearchLiaison left the door open to the possibility that search could do better, and said the team is researching how to do that.

He said:

“And I’ll also reiterate what both John and I have said. We’ve heard the concerns such as you’ve expressed; the search team that we’re both part of has heard that. We are looking at ways to improve.”

And it’s not just SearchLiaison leaving the door open to the possibility of something changing at Google so that more sites are shown, John Mueller also said something similar last month.

John tweeted:

“I can’t make any promises, but the team working on this is explicitly evaluating how sites can / will improve in Search for the next update. It would be great to show more users the content that folks have worked hard on, and where sites have taken helpfulness to heart.”

SearchLiaison said that they’re looking at ways to improve, and Mueller said they’re evaluating how sites “can/will improve in Search for the next update.” So, how does one reconcile that something is working the way it’s supposed to and yet has room for improvement?

Well, one way to consider it is that the algorithm is functional and satisfactory but that it’s not perfect. And because nothing is perfect that means there is room for refinement and opportunities to improve, which is the case about everything, right?

Takeaways:

1. It may be helpful to consider that something that can be refined and made better is not necessarily broken, because nothing is perfect.

2. It may also be productive to consider that helpfulness is just one signal among many; what looks like an HCU issue might not be that at all, in which case a wider range of possibilities should be considered.

Featured Image by Shutterstock/ViDI Studio

Why Now’s The Time To Adopt Schema Markup via @sejournal, @marthavanberkel

There is no better time for organizations to prioritize Schema Markup.

Why is that so, you might ask?

First of all, Schema Markup (aka structured data) is not new.

Google has been awarding sites that implement structured data with rich results. If you haven’t taken advantage of rich results in search, it’s time to gain a higher click-through rate from these visual features in search.

Secondly, now that search is primarily driven by AI, helping search engines understand your content is more important than ever.

Schema Markup allows your organization to clearly articulate what your content means and how it relates to other things on your website.

The final reason to adopt Schema Markup is that, when done correctly, you can build a content knowledge graph, which is a critical enabler in the age of generative AI. Let’s dig in.

Schema Markup For Rich Results

Schema.org has been around since 2011. Back then, Google, Bing, Yahoo, and Yandex worked together to create the standardized Schema.org vocabulary to enable website owners to translate their content to be understood by search engines.

Since then, Google has incentivized websites to implement Schema Markup by awarding rich results to websites with certain types of markup and eligible content.

Websites that achieve these rich results tend to see higher click-through rates from the search engine results page.

In fact, Schema Markup is one of the most well-documented SEO tactics that Google tells you to do. With so many things in SEO that are reverse-engineered, this one is straightforward and highly recommended.

You might have delayed implementing Schema Markup due to the lack of applicable rich results for your website. That might have been true at one point, but I’ve been doing Schema Markup since 2013, and the number of rich results available is growing.

Even though Google deprecated how-to rich results and changed the eligibility of FAQ rich results in August 2023, it introduced several new rich results in the months following – the most new rich results introduced in a single year!

These rich results include vehicle listing, course info, profile page, discussion forum, organization, vacation rental, and product variants.

There are now 35 rich results that you can use to stand out in search, and they apply to a wide range of industries such as healthcare, finance, and tech.

Here are some widely applicable rich results you should consider utilizing:

  • Breadcrumb.
  • Product.
  • Reviews.
  • JobPosting.
  • Video.
  • Profile Page.
  • Organization.

With so many opportunities to take control of how you appear in search, it’s surprising that more websites haven’t adopted it.

A statistic from Web Data Commons’ October 2023 Extractions Report showed that only 50% of pages had structured data.

Of the pages with JSON-LD markup, these were the top types of entities found.

  • http://schema.org/ListItem (2,341,592,788 Entities)
  • http://schema.org/ImageObject (1,429,942,067 Entities)
  • http://schema.org/Organization (907,701,098 Entities)
  • http://schema.org/BreadcrumbList (817,464,472 Entities)
  • http://schema.org/WebSite (712,198,821 Entities)
  • http://schema.org/WebPage (691,208,528 Entities)
  • http://schema.org/Offer (623,956,111 Entities)
  • http://schema.org/SearchAction (614,892,152 Entities)
  • http://schema.org/Person (582,460,344 Entities)
  • http://schema.org/EntryPoint (502,883,892 Entities)

(Source: October 2023 Web Data Commons Report)

Most of the types on the list are related to the rich results mentioned above.

For example, ListItem and BreadcrumbList are required for the Breadcrumb Rich Result, SearchAction is required for Sitelink Search Box, and Offer is required for the Product Rich Result.

This tells us that most websites are using Schema Markup for rich results.
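
For example, here is a minimal JSON-LD sketch of the ListItem/BreadcrumbList pairing behind the Breadcrumb rich result; all names and URLs are placeholders:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
      { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
      { "@type": "ListItem", "position": 2, "name": "Guides", "item": "https://example.com/guides/" }
    ]
  }
  </script>

Each ListItem supplies a position, a name, and an item URL, which is the shape Google’s Breadcrumb documentation asks for.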

Even though these Schema.org types can help your site achieve rich results and stand out in search, they don’t necessarily tell search engines what each page is about in detail and help your site be more semantic.

Help AI Search Engines Understand Your Content

Have you ever seen competitors’ sites using specific Schema.org types that are not found in Google’s structured data documentation (e.g., MedicalClinic, IndividualPhysician, Service)?

The Schema.org vocabulary has over 800 types and properties to help websites explain what the page is about. However, Google’s structured data features only require a small subset of these properties for websites to be eligible for a rich result.

Many websites that solely implement Schema Markup to get rich results tend to be less descriptive with their Schema Markup.

AI search engines now look at the meaning and intent behind your content to provide users with more relevant search results.

Therefore, organizations that want to stay ahead should use more specific Schema.org types and leverage appropriate properties to help search engines better understand and contextualize their content. You can be descriptive with your content while still achieving rich results.

For example, each type (e.g. Article, Person, etc.) in the Schema.org vocabulary has 40 or more properties to describe the entity.

The properties are there to help you fully describe what the page is about and how it relates to other things on your website and the web. In essence, it’s asking you to describe the entity or topic of the page semantically.
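
For instance, the following hypothetical Article markup goes beyond the rich-result minimum by using properties such as about, mentions, and sameAs to describe the page’s entity semantically; every name and URL here is invented for illustration:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "A Beginner's Guide to Knee Arthroscopy",
    "about": { "@type": "MedicalProcedure", "name": "Knee arthroscopy" },
    "mentions": [ { "@type": "MedicalClinic", "name": "Example Clinic" } ],
    "author": { "@type": "Person", "name": "Dr. Jane Doe", "sameAs": "https://www.linkedin.com/in/janedoe" }
  }
  </script>

None of the extra properties is required for a rich result, but each one tells a search engine more precisely what the page is about and how it connects to other entities.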

The word ‘semantic’ is about understanding the meaning of language.

Note that the word “understanding” is part of the definition. Funny enough, in October 2023, John Mueller at Google released a Search Update video. In this six-minute video, he leads with an update on Schema Markup.

For the first time, Mueller described Schema Markup as “a code you can add to your web pages, which search engines can use to better understand the content. ”

While Mueller has historically spoken a lot about Schema Markup, he typically talked about it in the context of rich result eligibility. So, why the change?

This shift in thinking about Schema Markup for enhanced search engine understanding makes sense. With AI’s growing role and influence in search, we need to make it easy for search engines to consume and understand the content.

Take Control Of AI By Shaping Your Data With Schema Markup

Now, if being understood and standing out in search is not a good enough reason to get started, then doing it to help your enterprise take control of your content and prepare it for artificial intelligence is.

In February 2024, Gartner published a report on “30 Emerging Technologies That Will Guide Your Business Decisions,”  highlighting generative AI and knowledge graphs as critical emerging technologies companies should invest in within the next 0-1 years.

Knowledge graphs are collections of relationships between entities defined using a standardized vocabulary that enables new knowledge to be gained by way of inferencing.

Good news! When you implement Schema Markup to define and connect the entities on your site, you are creating a content knowledge graph for your organization.

Thus, your organization gains a critical enabler for generative AI adoption while reaping its SEO benefits.
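
Concretely, JSON-LD lets you give each entity an @id and reference it from other entities, which is what turns isolated snippets into a connected graph. A minimal hypothetical sketch (all names and URLs are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@graph": [
      { "@type": "Organization",
        "@id": "https://example.com/#organization",
        "name": "Example Health Group",
        "url": "https://example.com/" },
      { "@type": "MedicalClinic",
        "@id": "https://example.com/locations/downtown/#clinic",
        "name": "Example Health Group Downtown",
        "parentOrganization": { "@id": "https://example.com/#organization" } }
    ]
  }
  </script>

Crawlers, and your own graph tooling, can resolve those @id references into a single network of entities spanning your site.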

Learn more about building content knowledge graphs in my article, Extending Your Schema Markup From Rich Results to Knowledge Graphs.

We can also look at other experts in the knowledge graph field to understand the urgency of implementing Schema Markup.

In his LinkedIn post, Tony Seale, Knowledge Graph Architect at UBS in the UK, said,

“AI does not need to happen to you; organizations can shape AI by shaping their data.

It is a choice: We can allow all data to be absorbed into huge ‘data gravity wells’ or we can create a network of networks, each of us connecting and consolidating our data.”

The “network of networks” Seale refers to is the concept of knowledge graphs – the same kind of knowledge graph that can be built from your web data using semantic Schema Markup.

The AI revolution has only just begun, and there is no better time than now to shape your data, starting with your web content through the implementation of Schema Markup.

Use Schema Markup As The Catalyst For AI

In today’s digital landscape, organizations must invest in new technology to keep pace with the evolution of AI and search.

Whether your goal is to stand out on the SERP or ensure your content is understood as intended by Google and other search engines, the time to implement Schema Markup is now.

With Schema Markup, SEO pros can become heroes, enabling generative AI adoption through content knowledge graphs while delivering tangible benefits, such as increased click-through rates and improved search visibility.



Featured Image by author

Charts: Global M&A Trends Q2 2024

Worldwide mergers and acquisitions are expected to increase through 2024, with CEOs viewing acquisitions and divestitures as crucial for their immediate priorities. That’s according to the quarterly “CEO Outlook Pulse” survey from EY, the accounting and consulting firm.

EY surveyed 1,200 global executives and 300 institutional investors in March and April 2024 about their plans for capital allocation, investment, and business transformation.

According to EY’s data, M&A deals in Q1 2024 totaled $796 billion, a 36% increase from the same period in 2023. The purpose of most deals was to acquire technology, enhance production, or integrate startups.

Per the EY survey, divestitures, spinoffs, and IPOs will be the top M&A initiatives this year.


In addition, the primary M&A goals of CEOs are to acquire technology or product capabilities and benefit from innovative startups.

Accounting and consulting firm KPMG surveyed (PDF) managers of U.S. private equity firms in early 2024. According to the survey, healthcare, infrastructure, and life sciences deals will be their top targets this year.