Google Rolling Out Changes To Shopping Searches In Europe via @sejournal, @MattGSouthern

Google has begun implementing changes to its search results in the European Economic Area (EEA) to comply with the European Union’s recently enacted Digital Markets Act (DMA).

The DMA, which took effect on March 6, aims to promote fair competition in the European digital market by imposing new rules on large technology companies designated as “gatekeepers.”

Independent SEO consultant Brodie Clark shared recent examples of these changes on Twitter, noting the quiet introduction of default ‘Products’ and ‘Product Sites’ tabs, which feature organic merchant listing results and product-focused web pages, respectively.

Previously Announced Search Features

In a February announcement, Google provided an early look at these new search experiences for Europeans.

According to the announcement, these updates are designed to present users with rich and relevant information while improving the visibility of aggregators, suppliers, and businesses.

The new carousel rich result may appear for travel, local, and shopping queries.

Accompanying these changes are dedicated ‘aggregator units’ that include links to aggregator sites across the web.

Compliance Reports Reveal Extent Of Changes

As part of the DMA’s transparency requirements, Google and other “gatekeepers” had to submit detailed compliance reports to the European Commission by March 7, outlining the measures they’ve taken to follow the new rules.

In its report, Google revealed it’s making the following changes to search results in Europe:

  • Removing an unspecified series of features
  • Introducing new features, such as opportunities for third-party vertical search services and direct suppliers
  • Implementing new controls for cross-service exchanges of personal data
  • Developing new Google-wide policy and compliance training
  • Enhancing existing data portability opportunities

Looking Ahead

As Google and other “gatekeepers” continue to adapt their products and services to comply with the DMA, users in the EEA can anticipate further changes in their digital experiences.

The DMA is part of a broader effort by the European Commission to reform the digital landscape and rein in the power of large technology companies.

The coming months will reveal how effective the DMA is in achieving its stated goals.


FAQ

What new features has Google introduced in Europe to conform to the DMA regulations?

Google has introduced several new features in its search results within the European Economic Area (EEA) as a response to the DMA regulations:

  • A ‘Products’ tab and ‘Product Sites’ tab in the search results to promote fair competition and enhance organic merchant visibility.
  • The rollout of carousel rich results, which are likely to appear for searches related to travel, local offerings, and shopping.
  • Dedicated ‘aggregator units’ that direct users to various aggregator websites.

These features are part of a broader set of changes aimed at modifying Google’s services to offer more opportunities for third-party vertical search services and direct suppliers, enhance data protection, and provide greater data portability.

What measures has Google taken to ensure compliance with the DMA’s transparency requirements?

Google has instituted various measures to align with the transparency requirements put forward by the Digital Markets Act:

  • Google has submitted a detailed compliance report to the European Commission, as mandated by the DMA, outlining its actions.
  • It has removed certain unspecified features from its platform to comply with the new rules.
  • New controls for cross-service personal data exchanges have been implemented to strengthen user privacy.
  • Google has developed company-wide policy and compliance training.
  • The company has also worked on enhancing existing data portability opportunities for users.

These actions showcase Google’s efforts to adjust its operational and product practices to meet the DMA’s regulations.


Featured Image: Michael Derrer Fuchs/Shutterstock

Google Completes March Spam Update, Core Update Continues via @sejournal, @MattGSouthern

Google has announced the completion of its spam update rollout, which began on March 5, alongside a core update.

The spam update, which took approximately two weeks to deploy fully, targeted scaled content abuse, expired domain abuse, and site reputation abuse.

The core update, which incorporates the “Helpful Content” evaluation into the core algorithm, is still ongoing and is expected to take around four total weeks to complete.

Spam Update Rollout Completed

On March 20 at 6:09 PDT, Google confirmed that the spam update rollout had concluded.

The announcement came via an update to Google’s Search Status Dashboard.

With this update, Google has implemented three new spam policies targeting scaled content abuse, expired domain abuse, and site reputation abuse.

While the first two policies were enforced immediately, the site reputation abuse policy will take effect in May.

Impact On Websites & Search Results

The completion of the spam update rollout brings new clarity to the search landscape, as publishers and SEO professionals can now get an idea of the changes’ impact.

Early data from SISTRIX revealed significant gains and losses across various domains, with some websites experiencing a complete removal from Google’s search index.

Following the introduction of the new spam policies, Google issued numerous manual actions to address pure spam issues.

A study by Originality.ai found that 100% of the deindexed websites showed signs of AI-generated content, with half of the sites having 90-100% of their posts created by AI.

However, these manual actions are separate from algorithmic demotions.

The main difference is that manual actions are carried out by Google’s human reviewers and are accompanied by a notification in Search Console.

In contrast, algorithmic ranking updates, such as the March spam update, happen automatically and do not result in notifications from Google.

A website can be impacted simultaneously by both a manual action and an algorithmic update.

Core Update Still In Progress

While the spam update rollout has concluded, the core update, which began simultaneously, is still in progress.

Google has indicated that the core update will take a few more weeks to complete as it integrates the “Helpful Content” evaluation into the core algorithm.

As the core update continues, SEO professionals and content creators should focus on creating original, high-value content that resonates with their target audience.

Google’s Search Liaison, Danny Sullivan, has urged patience and caution, emphasizing the importance of waiting for the update to be fully completed before making significant changes in response to ranking fluctuations.

Adapting To The New Search Landscape

With the spam update rollout now complete, website owners and content creators can begin to adapt to the new search landscape. This may involve auditing existing content, reworking AI-generated material, and prioritizing human creativity and editorial oversight.

As the SEO community continues to monitor the impact of both updates, it’s clear that Google remains committed to promoting original, high-value content and combating low-quality, spammy material.

The full impact of these updates will become more evident in the coming weeks as the core update completes its rollout and the search results settle into their new patterns.


Featured Image: Bayu Eka Y/Shutterstock

Google Migrates SafeSearch Tool To Search Console via @sejournal, @MattGSouthern

Google integrates SafeSearch tool into Search Console, enabling website owners to manage explicit content settings alongside SEO tools.

  • Google has moved its SafeSearch tool functionality to the Search Console platform.
  • The SafeSearch tool allows website owners to control the level of explicit content in their search results.
  • The integration aims to provide a more unified experience while maintaining the same features and functionality.

Google On Spammy Backlinks & Negative Impact On Rankings via @sejournal, @martinibuster

Google’s John Mueller answered a question on Reddit about what to do about an increase in spammy backlinks that are perceived as having a negative impact on rankings. Mueller’s answer showed what publishers should focus on.

Noticing Spammy Backlinks

The person asking the question said that they had noticed an increase in spammy backlinks and that they associated it with a negative impact on their rankings. They also said that it was affecting their “overall credibility.”

They didn’t elaborate on what they meant by “overall credibility,” but perhaps they were referring to a third-party site metric like Domain Authority.

This is what the person asked:

“I’ve noticed a significant increase in spammy backlinks pointing to my website, and it’s negatively impacting my site’s search engine rankings and overall credibility. Despite my efforts, I’m struggling to effectively remove these spammy backlinks.

Can anyone provide guidance or suggestions on the best practices and tools for removing spammy backlinks and restoring the integrity of my website’s link profile? Any tips or suggestions will be helpful.”

John Mueller Answers Question About Spammy Backlinks

Mueller answered that it’s not necessary to do anything about “spammy backlinks” because Google ignores them. He didn’t even suggest using the Disavow Tool, a tool that tells Google to ignore specific links that a publisher is responsible for.

Mueller responded:

“I’d strongly recommend focusing on other things – Google’s systems are really good at dealing with random spammy links, but – like users – they do get hung up on websites that aren’t awesome. Make your site awesome instead of chasing those links.”

About “Overall Credibility”

Third party metrics don’t offer insights into how Google sees a website. They’re just the opinion of a third party that can be used to measure one site against another.

My background in SEO goes back 25 years, to a time when Google used to show a representation of PageRank on its toolbar. It was an authoritative source of information that related data about the quantity of links and whether or not a site was indexed. Yet even Google’s own PageRank tool didn’t accurately reflect a site’s ability to rank well.

Majestic’s Topical Trust Flow scores are useful because they communicate the kinds of links flowing to a website and give an idea of what the backlinks say about a site.

But other than that, a third-party “authority” metric is not something I have ever used or will ever use. Many SEOs with longtime experience don’t use those metrics.

Read the Reddit discussion:

Can anyone help me on how to remove spammy backlinks?

Featured Image by Shutterstock/Krakenimages.com

Google Had Discussed Allowing Noindex In Robots.txt via @sejournal, @martinibuster

Google’s John Mueller responded to a question on LinkedIn to discuss the use of an unsupported noindex directive on the robots.txt of his own personal website. He explained the pros and cons of search engine support for the directive and offered insights into Google’s internal discussions about supporting it.

John Mueller’s Robots.txt

Mueller’s robots.txt has been a topic of conversation for the past week because of the odd and non-standard directives he used within it.

It was almost inevitable that Mueller’s robots.txt was scrutinized and went viral in the search marketing community.

Noindex Directive

Everything in a robots.txt file is called a directive. A directive is an instruction to a web crawler, which well-behaved crawlers are expected to follow (compliance with robots.txt is voluntary).

There are standards for how to write a robots.txt directive, and anything that doesn’t conform to those standards is likely to be ignored. A non-standard directive in Mueller’s robots.txt caught the eye of someone who decided to ask John Mueller about it on LinkedIn, to find out whether Google supported the non-standard directive.

It’s a good question because it’s easy to assume that if a Googler is using it then maybe Google supports it.

The non-standard directive was noindex. Noindex is part of the meta robots standard but not the robots.txt standard. Mueller didn’t have just one instance of the noindex directive; he had 5,506 of them.

The SEO specialist who asked the question, Mahek Giri, wrote:

“In John Mueller’s robots.txt file,

there’s an unusual command:

“noindex:”

This command isn’t part of the standard robots.txt format,

So do you think it will have any impact on how search engine indexes his pages?

John Mueller curious to know about noindex: in robots.txt”

Why Noindex Directive In Robots.txt Is Unsupported By Google

Google’s John Mueller answered that it was unsupported.

Mueller answered:

“This is an unsupported directive, it doesn’t do anything.”

Mueller then went on to explain that Google had at one time considered supporting the noindex directive from within the robots.txt because it would provide a way for publishers to block Google from both crawling and indexing content at the same time.

Right now it’s possible to block crawling in robots.txt or to block indexing with the meta robots noindex directive. But you can’t block indexing with the meta robots directive and block crawling in the robots.txt at the same time because a block on the crawl will prevent the crawler from “seeing” the meta robots directive.
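
To make the distinction concrete, here is a minimal sketch using Python’s standard urllib.robotparser. The robots.txt content and URLs are hypothetical: a Disallow rule blocks crawling, while an unsupported line such as noindex: is simply ignored, and indexing can only be blocked by a meta robots tag or X-Robots-Tag header on a page that remains crawlable.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, for illustration only.
robots_txt = """
User-agent: *
Disallow: /private/
Noindex: /old-page/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The Disallow rule blocks crawling of /private/ ...
print(parser.can_fetch("*", "https://example.com/private/page"))  # False

# ... but the unsupported "Noindex:" line is ignored entirely:
# the page stays crawlable, and nothing here prevents indexing.
print(parser.can_fetch("*", "https://example.com/old-page/"))  # True

# To block indexing, the page itself must remain crawlable and serve either
# <meta name="robots" content="noindex"> or an "X-Robots-Tag: noindex" header.
```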

Mueller explained why Google decided to not move ahead with the idea of honoring the noindex directive within the robots.txt.

He wrote:

“There were many discussions about whether it should be supported as part of the robots.txt standard. The thought behind it was that it would be nice to block both crawling and indexing at the same time. With robots.txt, you can block crawling, or you can block indexing (with a robots meta tag, if you allow crawling). The idea was that you could have a “noindex” in robots.txt too, and block both.

Unfortunately, because many people copy & paste robots.txt files without looking at them in detail (few people look as far as you did!), it would be very, very easy for someone to remove critical parts of a website accidentally. And so, it was decided that this should not be a supported director, or a part of the robots.txt standard… probably over 10 years ago at this point.”

Why Was That Noindex In Mueller’s Robots.txt

Mueller made clear that it’s unlikely Google will ever support that directive, a decision that was made about ten years ago. The revelation about those internal discussions is interesting, but it also deepens the sense of weirdness about Mueller’s robots.txt.

See also: 8 Common Robots.txt Issues And How To Fix Them

Featured Image by Shutterstock/Kues

YouTube Introduces Mandatory Disclosure For AI-Generated Content via @sejournal, @MattGSouthern

YouTube has announced the introduction of a new tool in Creator Studio that requires creators to disclose when their videos contain realistic content generated or altered by AI.

As stated in YouTube’s announcement:

“Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.”

To address this growing demand, YouTube will now mandate that creators inform their audience when a video features content that could easily be mistaken for a real person, place, or event but has been created or modified using generative AI or other synthetic media.

Disclosure Labels & Requirements

Disclosures will appear as labels in the expanded description or directly on the video player.

Screenshot from blog.youtube/news-and-events/disclosing-ai-generated-content/, March 2024.

YouTube believes this new label will “strengthen transparency with viewers and build trust between creators and their audience.”

Examples of content that will require disclosure include:

  • Digitally altering a video to replace one person’s face with another’s.
  • Synthetically generating a person’s voice for narration.
  • Altering footage of actual events or places to make them appear different from reality.
  • Generating realistic scenes depicting fictional significant events.

Exceptions To Disclosure

YouTube acknowledges that creators use generative AI in various ways throughout the creative process and will not require disclosure when AI is used for productivity purposes, such as generating scripts, content ideas, or automatic captions.

Additionally, disclosure will not be necessary for unrealistic or inconsequential changes, such as color adjustments, lighting filters, special effects, or beauty filters.

Rollout & Enforcement

The labels will be rolled out across all YouTube surfaces and formats in the coming weeks, starting with the YouTube app on mobile devices and eventually expanding to desktop and TV.

While creators will be given time to adjust to the new process, YouTube may consider enforcement measures in the future for those who consistently fail to disclose the use of AI-generated content.

In some cases, particularly when the altered or synthetic content has the potential to confuse or mislead people, YouTube may add a label even if the creator has not disclosed it themselves.

YouTube is working on updating its privacy process to allow individuals to request the removal of AI-generated or synthetic content that simulates an identifiable person, including their face or voice.

In Summary

As viewers demand more transparency, marketers must be upfront about using AI-generated content to remain in good standing on YouTube.

While AI can be a powerful tool for content creation, marketers should strive to balance leveraging AI’s capabilities and maintaining a human touch.


FAQ

How does YouTube’s new policy on AI-generated content disclosure impact content creators?

  • The policy requires content creators to disclose when their videos include AI-generated or significantly altered content that mimics real-life scenarios.
  • Disclosure labels will be added in video descriptions or directly on the player.
  • Failure to comply with disclosure requirements could result in YouTube taking enforcement measures.

What are the exceptions to YouTube’s mandatory AI content disclosure requirements?

  • AI used to produce unrealistic or animated content, including special effects and production assistance, is exempt from disclosure.
  • This also applies to minor adjustments like color correction, lighting filters, or beautification enhancements.

What actions will YouTube take if creators do not disclose AI-generated content?

  • In cases where AI-generated content could mislead viewers, YouTube may add a disclosure label if the creator hasn’t.
  • Creators will have a grace period to adjust to the new policy, but persistent non-compliance may lead to enforcement measures.
  • YouTube is refining its privacy guidelines to allow individuals to request the removal of AI-generated content that uses their likeness without consent.


Featured Image: Muhammad Alimaki/Shutterstock

OpenAI’s Sam Altman On Challenging Google With AI Search via @sejournal, @martinibuster

OpenAI’s Sam Altman answered questions about challenging Google’s search monopoly, revealing that he’d rather change the paradigm of how people get information entirely than copy what Google has been doing for the past twenty-plus years. His observations were made in the context of a podcast interview with Lex Fridman.

What Altman proposed is that the best way to challenge Google is to completely replace its entire business category, including the advertising.

1. Is OpenAI Building A Challenge To Google Search?

The discussion began with a question from Fridman asking if it’s true that OpenAI is going to challenge Google.

Lex Fridman asked:

“So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get-“

Sam Altman responded that the whole idea of building a better search engine limits what the future of information retrieval can be, calling the current conception of search boring.

Altman answered:

“I find that boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should go, people should use the better product, but I think that would so understate what this can be. Google shows you 10 blue links, well, 13 ads and then 10 blue links, and that’s one way to find information.

But the thing that’s exciting to me is not that we can go build a better copy of Google search, but that maybe there’s just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we’ll make it be like that for a lot more use cases.”

2. The World Doesn’t Need Another Google

Altman expanded on his thoughts by saying that the idea of creating another Google in order to challenge Google is not interesting. He said the more interesting path is to completely change not just how people get information, but to do it in a way that fits how people are using information.

Altman continued:

“But I don’t think it’s that interesting to say, “How do we go do a better job of giving you 10 ranked webpages to look at than what Google does?”

Maybe it’s really interesting to go say, “How do we help you get the answer or the information you need? How do we help create that in some cases, synthesize that in others, or point you to it in yet others?’

But a lot of people have tried to just make a better search engine than Google and it is a hard technical problem, it is a hard branding problem, it is a hard ecosystem problem. I don’t think the world needs another copy of Google.”

3. AI Search Hasn’t Been Cracked

The part where the conversation seemed to fall off the rails is when Fridman steered the discussion toward integrating a chatbot with a search engine, an approach that has already been done to death. Bing created the chat-on-top-of-search experience over a year ago, and there are now at least six AI search engines that integrate a chatbot on top of a traditional search engine.

Fridman’s direction of the discussion threw cold water on what Altman was talking about.

Altman said that nobody has “cracked the code yet,” which implied that repeating what Bing did was not what Sam Altman had in mind. He called it an “example of a cool thing.”

Fridman and Altman continued:

“And integrating a chat client, like a ChatGPT, with a search engine-

Sam Altman
As you might guess, we are interested in how to do that well. That would be an example of a cool thing.

…The intersection of LLMs plus search, I don’t think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.”

4. Advertisement Supported AI Search Is Dystopian

Altman used the word “dystopic” to characterize a world in which AI search is based on an advertising model. Dystopic is a synonym of dystopian, describing a dehumanizing existence that lacks justice and is characterized by distrust.

He noted that ChatGPT’s subscription-based model can be perceived as more trustworthy than an advertising-based search engine. He raised the idea of an AI suggesting that users try a specific product, and the question of whether that recommendation was influenced by advertising or by what was best for the user.

That makes sense because there’s a high level of trust involved with AI that doesn’t exist with traditional search. Many consumers don’t trust Google search because, rightly or wrongly, it’s perceived as influenced by advertising and spammy SEO.

Fridman steered the conversation to advertising:

“Lex Fridman
…What about the ad side? Have you ever considered monetization of-

Sam Altman
I kind of hate ads just as an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons, to get it going, but it’s a momentary industry. The world is richer now.

I like that people pay for ChatGPT and know that the answers they’re getting are not influenced by advertisers.

I’m sure there’s an ad unit that makes sense for LLMs, and I’m sure there’s a way to participate in the transaction stream in an unbiased way that is okay to do, but it’s also easy to think about the dystopic visions of the future where you ask ChatGPT something and it says, “Oh, you should think about buying this product,” or, “You should think about going here for your vacation,” or whatever.”

5. A Search Experience Where The Consumer Is Not The Product

Altman next commented that he doesn’t like how consumers become the product when they use social media or search engines. What he means is that user interactions are sold to advertisers, who then turn around and target those users based on their interests.

Altman continued:

“And I don’t know, we have a very simple business model and I like it, and I know that I’m not the product. I know I’m paying and that’s how the business model works.

And when I go use Twitter or Facebook or Google or any other great product but ad-supported great product, I don’t love that, and I think it gets worse, not better, in a world with AI.”

6. Altman Is Biased Against Advertising

Sam Altman explicitly said that he was biased against advertising and expressed confidence that there is a path toward an AI-based information retrieval system that is profitable without having to serve ads. His statement that he was biased against advertising was made in the context of the interviewer raising the idea of “completely” throwing out ads, which Altman declined to confirm.

“Lex Fridman
…I could imagine AI would be better at showing the best kind of version of ads, not in a dystopic future, but where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that’s shown?

….I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So you’re saying the current thing with OpenAI is sustainable, from a business perspective?

Sam Altman
Well, we have to figure out how to grow, but looks like we’re going to figure that out.

If the question is do I think we can have a great business that pays for our compute needs without ads, …I think the answer is yes.

Lex Fridman
Hm. Well, that’s promising. I also just don’t want to completely throw out ads as a…

Sam Altman
I’m not saying that. I guess I’m saying I have a bias against them.”

Is OpenAI Building A Challenge To Google?

Sam Altman did not directly say that OpenAI was building a challenge to Google. He did imply that a proper challenge to Google that uses AI doesn’t yet exist, saying that nobody has “cracked the code” on that yet.

What Altman offered was a general vision of an AI search that doesn’t commoditize users and sell them to advertisers, and that would thereby be more trustworthy and useful. He said that a proper challenge to Google would be something completely different from what Google has been doing.

Watch the podcast at the 01:17:27 minute mark:

Featured Image by Shutterstock/photosince

TikTok’s New Creator Rewards Program Prioritizes Search Value via @sejournal, @MattGSouthern

TikTok has launched its new Creator Rewards Program and introduced a Search Insights tool.

The program, which was previously in beta, rewards creators based on four key metrics:

  • Originality
  • Play duration
  • Search value
  • Audience engagement

The new Creator Search Insights tool provides data on trending search topics, helping you to create content that meets TikTok’s requirements for “search value.”

Here are the details about the new monetization program and keyword research tool.

Emphasis On Search Value

One of the most notable aspects of the Creator Rewards Program is its emphasis on search value.

In an announcement, TikTok defines search value as:

“Search value is a metric assigned to content based on popular search terms. Content that aligns with in-demand search topics increases its value for searchers.”

To help people create more content with search value, TikTok launched the Creator Search Insights tool, which identifies trending search topics on the platform.

Creator Search Insights Tool

The Creator Search Insights tool, available in select regions, allows you to explore topics that are frequently searched for on TikTok.

By typing “Creator Search Insights” in the search bar, you can access a wealth of information to inspire your content creation process.

The tool surfaces popular search topics sorted by category or personalized based on your content niche. Additionally, you can filter for content gap topics, which are popular terms with limited video content available on the platform.

TikTok stated in its announcement:

“With these insights, creators can source inspiration for their content, tailor their creative strategies to meet audience interests and create content that people want to see more of. Our hope is that Creator Search Insights empowers creators to make content that’s relevant to searchers and positioned to perform well on TikTok.”

Additional Program Metrics & Eligibility

The Creator Rewards Program also prioritizes originality, encouraging creators to showcase their unique perspectives and creative processes in a way that resonates with their audience.

Additionally, the program considers play duration, which accounts for watch time and finish rate, and audience engagement, including likes, comments, and shares.

To be eligible for the program, creators must be at least 18 years old, have a minimum of 10,000 followers, and have generated at least 100,000 views in the past 30 days. Their accounts must also be in good standing and located in a region where the program is available.

Enhanced Dashboard & Creator Academy

Alongside the Creator Rewards Program launch and the Creator Search Insights tool, TikTok has introduced an enhanced dashboard that provides creators with detailed analytics, insights, and customization options to help them analyze their content performance and estimated rewards.

The dashboard also includes direct access to the Creator Academy, a comprehensive education hub offering resources, courses, and insights to support creators at all levels.

TikTok has also improved its Monetization module within the in-app Creator tools. It provides a centralized hub for creators to access all monetization-related operations, including revenue analysis and personalized recommendations.

In Summary

The Creator Rewards Program and Search Insights tool are the latest examples of how TikTok enables creators to make money on its platform.

As the platform continues to evolve and prioritize search, marketers who adapt their strategies accordingly may find new opportunities to earn revenue.


FAQ

How do the new TikTok Creator Rewards Program and Search Insights tool influence content strategy?

Marketers focusing on TikTok must adjust their content strategies in light of the platform’s recent updates. The Creator Rewards Program incentivizes content that aligns with specific criteria, emphasizing:

  • Originality – Offering a fresh and unique perspective in the content.
  • Play duration – Creating content that encourages viewers to watch until the end.
  • Search value – Crafting content that corresponds with in-demand search terms and topics.
  • Audience engagement – Generating content that fosters likes, comments, and shares.

Content creators and marketers should leverage the Creator Search Insights tool to capitalize on these metrics. By focusing on trending searches and identifying content gaps, they can tailor their content to meet user interests and improve its potential for visibility and monetization on the platform.

To be eligible for TikTok’s Creator Rewards Program, what criteria must creators meet?

For creators to take advantage of TikTok’s monetization opportunities through the Creator Rewards Program, they must meet the following eligibility criteria:

  • Age – Must be at least 18 years old.
  • Followers – Must have a minimum of 10,000 followers.
  • View count – Must have garnered at least 100,000 views in the past 30 days.
  • Account standing – Must have an account in good standing and based in a region where the program is available.

These prerequisites ensure that active, influential, and engaged creators are rewarded, incentivizing high-quality and engaging content creation on TikTok.

What resources does TikTok provide to creators to help them succeed on the platform beyond the Creator Rewards Program?

TikTok offers resources to support creators in honing their craft and optimizing their content. These resources include:

  • An enhanced dashboard that delivers in-depth analytics, insights, and customization options, thus enabling creators to analyze content performance and projected rewards.
  • Direct access to the Creator Academy, which presents an extensive array of educational material, courses, and insights aimed at aiding creators of varying experience levels.
  • Improvements to the in-app Monetization module, featuring a centralized hub where creators can manage all monetization aspects, including revenue analysis and tailored recommendations.

These tools empower creators with the knowledge and data to create compelling content and maximize their earning potential on TikTok.


Featured Image: newsroom.tiktok.com, March 2024. 

Why Prediction Of 25% Search Volume Drop Due to Chatbots Fails Scrutiny via @sejournal, @martinibuster

Gartner’s prediction that AI chatbots are the future and will account for a 25% drop in search market share got a lot of attention. What didn’t get attention is that the claim fails to account for seven facts that call the accuracy of the prediction into question and demonstrate that it simply does not hold up to scrutiny.

1. AI Search Engines Don’t Actually Exist

The problem with AI technology is that it’s currently impossible to use AI infrastructure to create a constantly updated search index of web content, in addition to the billions of pages of news and social media that are generated in real time. Attempts to create a real-time AI search index fail because the nature of the technology requires retraining the entire language model to update it with new information. That’s why language models like GPT-4 don’t have access to current information.

Current so-called AI search engines are not actually AI search engines. When a user asks a question, a traditional search engine finds the answers, and the AI chatbot selects the most relevant results and summarizes them. In other words, they are traditional search engines with a chatbot interface. When you use a so-called AI search engine, what’s really happening is that you’re asking a chatbot to “Google this for me.”
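
To illustrate the pattern described above, here is a toy sketch. The web_search and llm_summarize functions, the in-memory “index,” and the example documents are invented placeholders, not any product’s real API; the point is simply that the conventional index does the finding, while the chatbot layer only condenses what it is handed.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    url: str
    text: str

# Toy "index" standing in for a traditional search engine's crawled corpus.
INDEX = [
    Doc("Espresso basics", "https://example.com/espresso",
        "Espresso is brewed under roughly 9 bars of pressure."),
    Doc("Pour-over guide", "https://example.com/pourover",
        "Pour-over coffee uses a slow, manual brewing method."),
]

def web_search(query: str, k: int = 2) -> list[Doc]:
    """Keyword-overlap retrieval: the 'search engine' does the finding."""
    terms = set(query.lower().split())
    ranked = sorted(INDEX, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]

def llm_summarize(question: str, docs: list[Doc]) -> str:
    """Placeholder for the chatbot layer, which only rephrases retrieved text."""
    lines = [f"Q: {question}", "Summary of what the index returned:"]
    lines += [f"- {d.text} (source: {d.url})" for d in docs]
    return "\n".join(lines)

def ai_search(question: str) -> str:
    # Retrieval first, summarization second: the model never searches the web itself.
    return llm_summarize(question, web_search(question))

print(ai_search("How is espresso brewed?"))
```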

2. Generative AI Is Not Ready For Widescale Use

The recent fiasco with Gemini’s image generation underscores the fact that generative AI as a technology is still in its infancy. Microsoft Copilot went completely off the rails in March 2024 by assuming a godlike persona, calling itself “SupremacyAGI,” and demanding to be worshipped under the threat of imprisoning users of the service.

This is the technology that Gartner predicts will take away 25% of market share? Really?

Generative AI is unsafe, and despite attempts to add guardrails, the technology still manages to jump off the cliffs with harmful responses. The technology is still immature. To assert that it will be ready for widescale use within two years is excessively optimistic about the progress of the technology.

3. True AI Search Engines Are Economically Unviable

AI search engines are exponentially more expensive to run than traditional search engines. It currently costs $20/month to subscribe to a generative AI chatbot, and that subscription comes with limits such as 40 queries every three hours, because generating AI answers is vastly more expensive than generating traditional search engine results.

The economics of AI search currently rule out using AI as a wholesale replacement for traditional search engines.

4. Gartner’s Prediction Of 25% Decrease Assumes Search Engines Will Remain Unchanged

Gartner predicts a 25% decrease in traditional search query volume by 2026 but that prediction assumes that traditional search engines will remain the same. The Gartner analysis fails to account for the fact that search engines evolve not just on a yearly basis but on a month to month basis.

Search engines already integrate AI technologies that increase search relevance in ways that innovate on the entire search paradigm, with applications such as making images tappable as a way to launch an image-based search for text answers about the subject within an image.

That’s called multimodal search, a way to search using sound and vision in addition to traditional text-based searching. The Gartner report makes absolutely no mention of multimodality in traditional search, a technology that shows how traditional search engines evolve to meet users’ needs.

So-called AI chatbot search engines are in their infancy and offer zero multimodality. How can a technology so comparatively primitive even be considered competitive to traditional search?

5. Why The Claim That AI Chatbots Will Steal Market Share Is Unrealistic

The Gartner report assumes that AI chatbots and virtual agents will become more popular but that fails to consider that Gartner’s own research from June 2023 shows that users distrust AI Chatbots.

Gartner’s own report states:

“Only 8% of customers used a chatbot during their most recent customer service experience, according to a survey by Gartner, Inc. Of those, just 25% said they would use that chatbot again in the future.”

Customers’ lack of trust is especially noticeable in Your Money Or Your Life (YMYL) tasks that involve money.

Gartner reported:

“Just 17% of billing disputes are resolved by customers who used a chatbot at some stage in their journey…”

Gartner’s enthusiastic assumption that users will trust AI chatbots may be unfounded because it may not have considered that users do not trust chatbots for important YMYL search queries, according to Gartner’s own research data.

While AI chatbots and virtual agents are expected to become more popular, this does not necessarily mean they will diminish the value of search marketing. Search engines may incorporate AI technologies to enhance user experiences, keeping them as a central part of digital marketing strategies.

6. Gartner Advice Is To Rethink What?

Gartner’s advice to search marketers is to incorporate more experience, expertise, authoritativeness, and trustworthiness into their content, which betrays a misunderstanding of what EEAT actually is. For example, trustworthiness is not something added to content like a feature; it is the sum of the experience, expertise, and authoritativeness that the author of the content brings to an article.

Secondly, EEAT describes what Google aspires to rank in search results, but its components are not actual ranking factors; they’re concepts.

Third, marketers are already furiously incorporating the concept of EEAT into their search marketing strategy. So the advice to incorporate EEAT as part of the future marketing strategy is itself too late and a bit bereft of unique insight.

The advice also fails to acknowledge that user interactions and user engagement not only play a role in search engine success today but will likely increase in importance as search engines incorporate AI to improve their relevance and meaningfulness to users.

That means traditional search marketing will remain effective and in demand for creating awareness and demand.

7. Why Watermarking May Not Have An Impact

Gartner suggests that watermarking and authentication will increasingly become common due to government regulation. But that prediction fails to understand the supporting role that AI can play in content creation.

For example, there are workflows where a human reviews a product, scores it, provides a sentiment score and insights about which users may enjoy the product and then submits the review data to an AI to write the article based on the human insights. Should that be watermarked?

Another way that content creators use AI is to dictate their thoughts into a recording, then hand it over to the AI with the instruction to polish it up and turn it into a professional article. Should that be watermarked as AI-generated?

The ability of AI to analyze vast amounts of data complements the content production workflow; it can pick out key qualities of the data, such as key concepts and conclusions, which humans can then use to create a document filled with their insights, bringing their expertise to bear on interpreting the data. Now, what if that human then uses an AI to polish up the document and make it professional? Should that be watermarked?

Gartner’s prediction about watermarking AI content fails to take into account how AI is actually used by many publishers to create well-written content with human-first insights, which complicates the use of watermarking and calls into question its adoption in the long term, let alone by 2026.

Gartner Predictions Don’t Hold Up To Scrutiny

The Gartner predictions cite actual facts from the real world, but they fail to consider factors that make AI technology an impotent threat to traditional search engines. For example, there is no consideration of the inability of AI to create a fresh search index, or of the fact that AI chatbot search engines aren’t even actual AI search engines.

It is incredible that the analysis failed to cite the fact that Bing Chat experienced no significant increase in users and has failed to peel away search volume from Google. These failures cast serious doubt on the accuracy of the prediction that search volume will decrease by 25%.

Read Gartner’s press release here:

Gartner Predicts Search Engine Volume Will Drop 25% by 2026, Due to AI Chatbots and Other Virtual Agents

Featured Image by Shutterstock/Renovacio

Google Confirms: High-Quality Content Is Crawled More Often via @sejournal, @MattGSouthern

SEO professionals have long discussed the concept of a “crawl budget,” which refers to the limited number of pages search engines can crawl daily.

The assumption is that sites must stay within this allotted budget to get pages indexed. In a recent podcast, Google search engineers debunked some misconceptions about crawl budget and shed light on how Google prioritizes crawling.

How Googlebot Prioritizes Crawling

“I think there’s a lot of myths out there about crawling, about what it is and what it isn’t. And things like crawl budgets and phrases you hear thrown around that may be quite confusing to people,” said Dave Smart, an SEO consultant and Google Product Expert, during the podcast.

So, how does Google decide what to crawl?

“You need to do it by looking at what’s known, finding somewhere to start, a starting point. And from that, you get the links and stuff, and then you would try and determine what’s important to go and fetch now, and maybe what can wait until later and maybe what’s not important at all,” explained Smart.

Gary Illyes from Google’s search relations team agreed with this framework.

“If search demand goes down, that also correlates to the crawl limit going down. So if you want to increase how much we crawl, you somehow have to convince search that your stuff is worth fetching,” he said.

The key, then, is to produce content that Google recognizes as valuable based on user interaction.

Focus On Quality & User Experience

“Scheduling is very dynamic. As soon as we get the signals back from search indexing that the quality of the content has increased across this many URLs, we would just start turning up demand,” said Illyes.

This means there is no fixed “budget” that sites must adhere to. Improving page quality and proving usefulness to searchers can overcome any assumed limitations.
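
Google hasn’t published how its scheduler works, so the snippet below is only a hypothetical sketch of the dynamic behavior Illyes describes: each URL’s position in the crawl queue is derived from quality and search-demand signals, so improving those signals moves a URL forward. The URLs, scores, and scoring formula are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class CrawlTask:
    priority: float                 # lower value = crawled sooner
    url: str = field(compare=False)

def priority_score(quality: float, demand: float) -> float:
    """Hypothetical scoring: higher quality and search demand mean a sooner crawl."""
    return 1.0 / max(quality * demand, 1e-6)

def build_schedule(signals: dict[str, tuple[float, float]]) -> list[str]:
    # Build a priority queue of crawl tasks and pop them in order.
    heap = [CrawlTask(priority_score(q, d), url) for url, (q, d) in signals.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap).url for _ in range(len(heap))]

# Toy (quality, demand) signals in [0, 1]; not real Google data.
signals = {
    "https://example.com/thin-page": (0.2, 0.3),
    "https://example.com/popular-guide": (0.9, 0.8),
    "https://example.com/new-post": (0.7, 0.4),
}
print(build_schedule(signals))
# ['https://example.com/popular-guide', 'https://example.com/new-post', 'https://example.com/thin-page']
```

Raising a page’s quality and demand signals in this toy model moves it up the queue, which mirrors the idea that there is no fixed quota to manage, only signals to improve.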

No One-Size-Fits-All Approach

“We don’t have an answer for every site,” Illyes admitted regarding crawl prioritization. “If you improved that section, then probably it’s going to help a lot.”

According to Google, the bottom line is to focus on producing high-quality content rather than trying to reverse-engineer a non-existent crawl quota. Earning links naturally and better serving users will take care of the rest.

Hear the full discussion in the podcast episode linked below:


FAQ

How does the concept of a crawl budget affect SEO strategies?

SEO professionals have discussed the concept of a crawl budget, believing that staying within a certain limit of pages crawled daily is essential. However, Google’s search engineers have clarified that there is no set crawl budget that websites must adhere to.

Instead, Google prioritizes crawling based on content quality and user interaction signals. Therefore, SEO strategies should shift focus from managing a crawl budget to optimizing for high-quality, user-centric content to increase the chances of being crawled and indexed effectively.

What factors influence Googlebot’s prioritization for crawling web pages?

A dynamic set of factors influences Googlebot’s prioritization for crawling web pages, predominantly content quality and user engagement. According to Google search engineers, the more valuable the content appears based on user interactions, the more likely the site will be crawled more frequently.

Factors such as earning organic links and improving user experience can enhance content quality signals, thus implying that enhancing overall page quality can increase a site’s crawl rate.

In what ways can marketers enhance the crawlability of their website’s content?

Marketers looking to improve their website’s crawlability should concentrate on the following:

  • Producing high-quality content that is informative, relevant, and engaging to the target audience.
  • Ensuring the website offers a superior user experience with fast loading times, mobile-friendliness, and navigational ease.
  • Gaining natural backlinks from reputable sources to increase credibility and visibility to search engines.
  • Regularly updating content to reflect the latest information, trends, and user needs.


Featured Image: BestForBest/Shutterstock