YouTube Launches ‘Data Stories’ For First-Day Video Metrics via @sejournal, @MattGSouthern

YouTube’s latest update introduces experimental features to enhance creator analytics and community engagement.

Additionally, YouTube is rolling out improvements to brand collaboration.

Key Updates

Data Stories

YouTube is testing a “data story” feature in its Studio mobile app that helps creators understand their video performance.

This tool provides a visual breakdown of the first 24 hours’ metrics, making it easier to identify key performance drivers.

Screenshot from: YouTube.com/CreatorInsider, November 2024.

Data stories transform YouTube’s analytics for creators by offering a narrative format instead of the traditional multi-tab dashboard.

“Most Relevant” Filter

YouTube recently rebranded its Comments tab to “Community” and is now introducing a “most relevant” comment filter.

This new feature, currently being tested with a select group, analyzes a channel’s comments to highlight potential engagement opportunities, such as viewer questions.

Screenshot from: YouTube.com/CreatorInsider, November 2024.

The introduction of the “most relevant” filter in the Community tab is timely, as YouTube aims to reinforce its position against competitors like TikTok and Instagram.

This feature:

  • Utilizes AI to pinpoint valuable comments throughout a creator’s entire channel
  • Prioritizes viewer questions and fosters meaningful interactions
  • Assists creators in managing engagement more efficiently on both mobile and desktop platforms

Video Linking

YouTube is allowing eligible creators to initiate video linking requests with advertisers.

This option is available to YouTube Partner Program members with more than 4,000 subscribers and focuses specifically on Shorts content.

Screenshot from: YouTube.com/CreatorInsider, November 2024.

Previously, only advertisers could initiate video linking requests, which creators received via email and YouTube notifications. The new system empowers creators to proactively connect with brands, as YouTube suggests tagged content to potential advertisers.

Once approved, these links enable advertisers to access organic video performance data through Google Ads and establish clear content reuse rights between creators and brands.

Looking Ahead

As YouTube continues to roll out these features, here’s what you need to know:

  • Data stories are in beta – most creators won’t have access yet
  • The new comment filter should help spot meaningful viewer interactions faster
  • To use the brand linking tool, you’ll need 4K+ subs and YPP status

YouTube’s collecting feedback on all these features, so expect tweaks and updates. Keep an eye on your Studio dashboard for when these roll out to your channel.

Featured Image: JarTee/Shutterstock

Google Search Snippets Show Contradictory Information, Study Finds via @sejournal, @MattGSouthern

A recent investigation finds that Google’s Featured Snippets may display conflicting information from the same source material, depending on how users phrase their search queries.

This raises concerns about the search engine’s ability to interpret content accurately.

Sarah Presch, director at Dragon Metrics, discovered that Google’s Featured Snippets pull opposing statements from the same articles when users frame questions differently.

For example, searching “link between coffee and hypertension” generates a Featured Snippet highlighting caffeine’s potential to cause blood pressure spikes.

Searching “no link between coffee and hypertension” produces a contradictory snippet from the same Mayo Clinic article stating caffeine has no long-term effects.

Similar contradictions appeared across health topics, political issues, and current events.

The investigation found that asking whether a political candidate is “good” versus “bad” yields dramatically different results despite the fundamental question remaining the same.

Impact On Search Quality

“It’s one big bias machine,” Presch notes, explaining how Google’s algorithms appear to prioritize content that matches user intent rather than providing comprehensive, balanced information.

The findings align with internal Google documents from 2016, where engineers admitted, “We do not understand documents – we fake it.”

While Google maintains these documents are outdated, SEO experts suggest the underlying technical limitations persist.

Presch adds:

“What Google has done is they’ve pulled bits out of the text based on what people are searching for and fed them what they want to read.”

Mark Williams-Cook, founder of AlsoAsked, commented on the findings, stating:

“Google builds models to try and predict what people like, but the problem is this creates a kind of feedback loop. If confirmation bias pushes people to click on links that reinforce their beliefs, it teaches Google to show people links that lead to confirmation bias.”

Implications

These findings have implications for content creators and SEO professionals:

  • Featured Snippets may not accurately represent comprehensive content
  • User intent heavily influences how content is interpreted and displayed
  • Content strategy may need adjustment to maintain accuracy across various query formats

Google’s spokesperson defended the system, stating that users can find diverse viewpoints if they scroll beyond initial results.

The company also highlighted features like “About this result” that help users evaluate information sources.

Recommendations

Based on these findings, publishers should take the following actions:

  • Develop comprehensive content that remains accurate regardless of how queries are phrased.
  • Recognize the impact of search intent on the selection of Featured Snippets.
  • Track how your content is displayed in Featured Snippets for different search phrases.

As Google moves toward becoming an “answer engine” with AI-generated responses, digital marketers and content creators need to understand these limitations.


Featured Image: Song_about_summer/Shutterstock

ChatGPT Search Indexing: Essential Steps For Websites via @sejournal, @MattGSouthern

As the availability of ChatGPT Search expands, understanding its indexing mechanics will be vital for digital visibility.

While Bing’s index plays a key role, OpenAI’s system surfaces content using its own crawlers and attribution methods.

Here is a breakdown of the technical requirements for ensuring your website is indexed correctly.

Technical Framework

ChatGPT Search combines Bing’s search index with OpenAI’s proprietary technology.

According to OpenAI’s technical documentation, the platform utilizes a fine-tuned version of GPT-4o, enhanced with synthetic data generation techniques and integration with their o1-preview system.

The platform employs three distinct crawlers, each serving different purposes.

The OAI-SearchBot serves as the primary crawler for search functionality, while ChatGPT-User handles real-time user requests and enables direct interaction with external applications.

The third crawler, GPTBot, manages AI model training and can be blocked without affecting search visibility.

Implementation

Proper indexing begins with robots.txt configuration.

Your website’s robots.txt should specifically allow OAI-SearchBot while maintaining separate permissions for different OpenAI crawlers.
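
A minimal robots.txt sketch along those lines is shown below. The crawler names come from OpenAI’s documentation; the specific allow and block choices are only an example, not a requirement:

    # Allow OpenAI's search crawler so pages can appear in ChatGPT Search
    User-agent: OAI-SearchBot
    Allow: /

    # Optionally block the training crawler; per OpenAI, this does not
    # affect search visibility
    User-agent: GPTBot
    Disallow: /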

In addition to this basic configuration, websites must ensure proper indexing by Bing and maintain a clear site architecture.

It’s worth noting that allowing OAI-SearchBot doesn’t automatically mean the content will be used for AI training.

It can take approximately 24 hours for OpenAI’s systems to adjust to new crawling directives after a site’s robots.txt update.

Content Attribution

ChatGPT Search includes several key features for content publishers:

  • Source Attribution: All referenced content includes proper citation
  • Source Sidebar: Provides reference links for verification
  • Multiple Citation Opportunities: A single query can generate multiple source citations
  • Locations: Searches for specific locations will return an interactive map in results (Image Credit: OpenAI)

Additional Considerations

Recent testing has revealed several important factors:

  • Content freshness affects visibility
  • Pages behind paywalls can still be cited
  • URLs returning 404 errors may still appear in citations
  • Multiple pages from the same domain can be referenced in a single response

Recommendations

Indexing in ChatGPT requires ongoing attention to technical health, including regular verification of the robots.txt file and crawler access.
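
One way to spot-check crawler access is with Python’s built-in robots.txt parser. This is a rough sketch; the domain and page URL are placeholders:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (example.com is a placeholder)
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # Confirm each OpenAI crawler's access to a representative page
    for agent in ("OAI-SearchBot", "ChatGPT-User", "GPTBot"):
        allowed = rp.can_fetch(agent, "https://example.com/sample-page/")
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")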

Publishers should prioritize maintaining factual accuracy and up-to-date information while implementing a clear content structure.

This ensures that pages remain accessible across traditional search engines and AI-powered platforms, helping websites achieve broader visibility.


Featured Image: designkida/Shutterstock

OpenAI Reddit AMA And SEO For ChatGPT Search via @sejournal, @martinibuster

CEO Sam Altman and OpenAI executives held a Reddit AMA to answer questions, including those about ChatGPT Search, providing an inside look at how it works. Their answers offer insights into what SEO may look like in the immediate future.

The people from OpenAI answering the questions:

  • Sam Altman, CEO
  • Kevin Weil, Chief Product Officer
  • Mark Chen, SVP of Research
  • Srinivas Narayanan, VP of Engineering

Why ChatGPT Search Is Important

ChatGPT Search is not a standalone search engine; it’s an AI chatbot with search built in. Rather than competing with Google head-on as a search engine, it replaces the search engine with a tool people already use for work and play, one that now has additional utility as an assistant for daily life and search.

Another advantage of ChatGPT Search is that it doesn’t show advertising or follow users around the Internet. Users already trust ChatGPT with personal and business information, so it starts with goodwill.

What makes ChatGPT Search a threat to Google is that users are already familiar with ChatGPT and feel positively about it. Because it’s already part of their routine, there is no Google habit to break before switching.

Sam Altman On Why ChatGPT Search Is Better

In the Ask Me Anything (AMA) session on Reddit, a Redditor asked OpenAI’s CEO about the value of ChatGPT Search compared with other search engines.

The person asked:

“My question is about the value ChatGPT Search offers compared to popular search engines. What are the unique advantages or key differentiators of ChatGPT Search that would make it worthwhile for a typical search engine user to choose it?”

Sam Altman answered:

“For many queries, I find it to be a way faster/easier way to get the information I’m looking for. I think we’ll see this especially for queries that require more complex research. I also look forward to a future where a search query can dynamically render a custom web page in response!”

That bit about a “custom web page” is something to look out for because it hints at personalization based on what a user is searching for.

Complex Queries Are ChatGPT’s Advantage

Altman’s response about ChatGPT Search’s handling of complex queries calls attention to an advantage over Google. ChatGPT users are accustomed to using natural language, whereas Google users habitually use keyword searches. Keyword searches put Google at a disadvantage because they’re harder to interpret, which is why Google displays features like People Also Ask in search results.

Natural language queries are how users already interact with ChatGPT, and that is an advantage for ChatGPT Search.

Grounding For Better Answers

The next question was about OpenAI’s progress on preventing ChatGPT from making things up (aka hallucinations), and about how it will incorporate fresh data into the index.

Both problems are generally approached with a technique called Retrieval-Augmented Generation (RAG), which retrieves data from an up-to-date source such as a search index or a knowledge graph and provides it to the LLM-based chatbot to summarize and use as the basis for its answer.
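
As a rough illustration of that retrieve-then-generate flow, here is a minimal Python sketch. The search_index and llm objects are hypothetical stand-ins, not OpenAI’s actual components:

    def answer_with_grounding(question, search_index, llm, top_k=3):
        """Toy RAG flow: retrieve fresh documents, then have the model
        answer from those sources rather than from training data alone."""
        documents = search_index.query(question, limit=top_k)  # hypothetical index API
        context = "\n\n".join(doc.text for doc in documents)
        prompt = (
            "Answer the question using only the sources below and cite them.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return llm.generate(prompt)  # hypothetical LLM API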

This is the question:

“Are hallucinations going to be a permanent feature? Why is it that even o1-preview, when approaching the end of a “thought” hallucinates more and more?

How will you handle old data (even 2-years old) that is now no longer “true”? Continuously train models or some sort of garbage collection? It’s a big issue in the truthfulness aspect.”

The answer was given by Mark Chen, SVP of Research:

“We’re putting a lot of focus on decreasing hallucinations, but it’s a fundamentally hard problem – our models learn from human-written text, and humans sometimes confidently declare things they aren’t sure about.”

Mark Chen continued his answer by saying that responses are getting better through the use of “grounding,” which is what Retrieval-Augmented Generation (RAG) provides to large language models. Chen also revealed that OpenAI believes Reinforcement Learning (RL) may help models stop hallucinating.

Reinforcement Learning (RL) is a way to teach a machine with experience, rewarding it when it’s correct and withholding the reward when it’s not, thus reinforcing good answers. The machine “learns” by making choices that maximize rewards. In the context of hallucinations, a reward could be a score or signal indicating that the answer is factual (and it could also be provided by human feedback scores).

Mark Chen continued his response:

“Our models are improving at citing, which grounds their answers in trusted sources, and we also believe that RL will help with hallucinations as well – when we can programmatically check whether models hallucinate, we can reward it for not doing so.”
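
As a toy illustration of the “programmatically check, then reward” idea, the sketch below scores an answer by how many of its claims pass a factual check. The check_claim function is hypothetical, and real RL training is far more involved than a single reward score:

    def grounding_reward(claims, check_claim):
        """Reward signal: +1 for each claim that passes a programmatic
        check, -1 for each that fails, so training can favor grounded answers."""
        return sum(1 if check_claim(claim) else -1 for claim in claims)

    # Illustrative only: a tiny "fact base" standing in for a real checker
    known_facts = {"Water boils at 100C at sea level"}
    score = grounding_reward(
        ["Water boils at 100C at sea level", "The moon is made of cheese"],
        check_claim=lambda claim: claim in known_facts,
    )
    print(score)  # 0: one verified claim, one unverified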

Does ChatGPT Search Use Bing?

The next question is about what search data does ChatGPT Search use.

The question asked:

“Is ChatGPT Search still using Bing as the search engine behind scenes?”

The answer was provided by Srinivas Narayanan, VP of Engineering at OpenAI:

“We use a set of services and Bing is an important one.”

That’s an interesting answer because it’s commonly assumed that Bing is the only search index behind ChatGPT Search. The answer indicates that ChatGPT Search uses multiple “services” and that Bing is an important one. What other services might ChatGPT use? That’s an open question.

What Does OpenAI Say About SEO For ChatGPT Search?

Someone asked the important question of how to optimize content for ChatGPT Search in order to improve rankings. The question was answered by Kevin Weil, who said that they were still figuring it out, which could mean they don’t know or that they’re still deciding what to say about optimization.

Kevin Weil, Chief Product Officer, responded:

“This is a great question—the product just launched today so there’s a lot to figure out still about where search will be similar and where it will be different in an AI world. Would love any feedback you have!”

Takeaways – SEO For ChatGPT Search

Chief Product Officer Kevin Weil is right: these are still the early days of ChatGPT Search, and much can still change. The OpenAI Reddit AMA offers the first hints of what SEO is growing into.

Other insights:

  • Bing is the main service ChatGPT Search uses, but there are other services as well. That makes Bing an important search engine to rank in.
  • ChatGPT users are accustomed to natural language interactions and may use ChatGPT Search during the course of their workday.
  • OpenAI may use Reinforcement Learning at some point to get a better handle on hallucinations.
  • Personalization may be arriving at some point in the future in the form of a dynamically rendered web page.

Beyond those takeaways is the consideration that OpenAI is not directly competing against Google with a standalone search engine, it has created a completely different experience for searching the web.

Featured Image by Shutterstock/Vitor Miranda

Google Updates Crawl Budget Best Practices via @sejournal, @MattGSouthern

Google has updated its crawl budget guidelines, stressing the need to maintain consistent link structures between mobile and desktop websites.

  • Large websites must ensure mobile versions contain all desktop links or risk slower page discovery.
  • The update mainly impacts sites with over 10,000 pages or those experiencing indexing issues.
  • Link structure consistency across mobile and desktop is now a Google-recommended best practice for crawl budget optimization.
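
As a rough way to spot-check a single URL, the Python sketch below fetches a page with desktop and mobile user-agent strings and diffs the links found in the server-rendered HTML. The URL and user-agent strings are placeholders, and links injected by JavaScript won’t be detected:

    from html.parser import HTMLParser
    from urllib.request import Request, urlopen

    class LinkCollector(HTMLParser):
        """Collects href values from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = set()

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.add(value)

    def get_links(url, user_agent):
        req = Request(url, headers={"User-Agent": user_agent})
        html = urlopen(req).read().decode("utf-8", errors="ignore")
        collector = LinkCollector()
        collector.feed(html)
        return collector.links

    # Placeholder URL and user-agent strings for illustration only
    desktop_links = get_links("https://example.com/", "Mozilla/5.0 (Windows NT 10.0)")
    mobile_links = get_links("https://example.com/", "Mozilla/5.0 (Linux; Android 14)")
    print("Links missing from the mobile version:", desktop_links - mobile_links)
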
Automattic’s Response To WP Engine Lawsuit Reframes Narrative via @sejournal, @martinibuster

Lawyers for Matt Mullenweg and Automattic filed a motion to dismiss the lawsuit from WP Engine, offering a different perspective on the dispute’s underlying causes.

The motion to dismiss claims that the party causing harm isn’t Mullenweg or Automattic but WP Engine, asserting that WP Engine is seeking to compel the defendants to provide resources and support free of charge and to restrict Mullenweg’s ability to express his opinions about WP Engine’s practices.

The motion to dismiss begins by accusing WP Engine of selectively choosing recent events as the basis for its complaint. It then fills in the parts it says were left out, beginning with the founding of WordPress over two decades ago, when Mullenweg co-founded a way to create websites that democratized Internet publishing. The motion outlines how his organization devoted thousands of person-years to growing the platform, eventually getting it to a point where it now generates an estimated $10 billion per year for thousands of companies and freelancers.

The point of the first part of the motion is to establish that Mullenweg and Automattic support the open source WordPress project, and that the project depends on a “symbiotic” relationship with those who are part of the WordPress community, including web hosts like WP Engine.

“But the success and vitality of WordPress depends on a supportive and symbiotic relationship with those in the WordPress community.”

After establishing what the community is, how it was founded, and Mullenweg’s and Automattic’s roles as strong supporters of it, the motion paints a picture of WP Engine as a company that reaps huge benefits from volunteer work and donated time without adequately giving back. This is the part that Mullenweg and Automattic feel is left out of WP Engine’s complaint: that Mullenweg was expressing his opinion that WP Engine should provide more support to the community, and that he was responding to the threat posed by the plaintiff’s behavior.

The motion explains:

“Plaintiff WP Engine’s conduct poses a threat to that community. WP Engine is a website hosting service built on the back of WordPress software and controlled by the private equity firm Silver Lake, which claims over $100B of assets under management.

…In addition to WordPress software, WP Engine also uses various of the free resources on the Website, and its Complaint alleges that access to the Website is now, apparently, critical for its business.”

Lastly, this opening part of the motion, which lays out the defendants’ side of the dispute, asserts that their behavior was entirely within their legal rights because no agreement exists between WordPress and WP Engine that guarantees access to WordPress resources, and because WP Engine at no time tried to secure that access.

The document continues:

“But the Complaint does not (and cannot) allege that WP Engine has any agreement with Matt (or anyone else for that matter) that gives WP Engine the right to use the Website’s resources. The Complaint does not (and cannot) allege that WP Engine at any time has attempted to secure that right from Matt or elsewhere.

Instead, WP Engine has exploited the free resources provided by the Website to make hundreds of millions of dollars annually. WP Engine has done so while refusing to meaningfully give back to the WordPress community, and while unfairly trading off the goodwill associated with the WordPress and WooCommerce trademarks.”

Accusation Of Trademark Infringement

The motion to dismiss filed by Mullenweg and Automattic accuses WP Engine of trademark infringement, a claim that has been at the heart of Mullenweg’s dispute and one the legal response says he attempted to resolve amicably in private.

The legal document asserts:

“In 2021, for the first time, WP Engine incorporated the WordPress trademark into the name of its own product offering which it called “Headless WordPress,” infringing that trademark and violating the express terms of the WordPress Foundation Trademark Policy, which prohibits the use of the WordPress trademarks in product names. And, over time, WP Engine has progressively increased its use and prominence of the WordPress trademark throughout its marketing materials, ultimately using that mark well beyond the recognized limits of nominative fair use.”

What Triggered The Dispute

The defendants claim that WP Engine benefited from the open source community but declined to become an active partner in it, and that they tried to bring WP Engine into that symbiotic relationship but WP Engine refused.

The motion to dismiss is interesting because it first argues that WP Engine had no agreement with Automattic for use of the WordPress trademark, nor any agreement granting it rights to access WordPress resources. It then describes how the defendants tried to reach an agreement and asserts that WP Engine’s refusal to “meaningfully give back to the WordPress community” and come to terms with Automattic is what triggered the dispute.

The document explains:

“Matt has attempted to raise these concerns with WP Engine and to reach an amicable resolution for the good of the community. In private, Matt also has encouraged WP Engine to give back to the ecosystem from which it has taken so much. Preserving and maintaining the resources made available on the Website requires considerable effort and investment—an effort and investment that Matt makes to benefit those with a shared sense of mission. WP Engine does not embrace that mission.

WP Engine and Silver Lake cannot expect to profit off the back of others without carrying some of the weight—and that is all Matt has asked of them. For example, Matt suggested that WP Engine either execute a license for the Foundation’s WordPress trademarks or dedicate eight percent of its revenue to the further development of the open source WordPress software.”

Mullenweg Had Two Choices

The above is what Mullenweg and Automattic claim is at the heart of the dispute: WP Engine’s unwillingness to reach an agreement with Automattic and become a stronger partner to the community. The motion to dismiss says that refusal left Mullenweg with few choices about what to do next, as it explains:

“When it became abundantly clear to Matt that WP Engine had no interest in giving back, Matt was left with two choices: (i) continue to allow WP Engine to unfairly exploit the free resources of the Website, use the WordPress and WooCommerce trademarks without authorization, which would also threaten the very existence of those trademarks, and remain silent on the negative impact of its behavior or (ii) refuse to allow WP Engine to do that and demand publicly that WP Engine do more to support the community.”

Disputes Look Different From Each Side

Matt Mullenweg and Automattic have been portrayed in an unflattering light since the dispute with WP Engine burst into public view. The motion to dismiss frames Mullenweg’s motivations as a defense of the WordPress community, showing that every dispute looks different depending on who is telling the story. Now it’s up to the judge to decide.

Featured Image by Shutterstock/santypan

Google Chrome DevTools Adds Advanced CLS Debugging Tool via @sejournal, @MattGSouthern

Chrome has introduced a new debugging tool in its Canary build, helping developers identify and fix website layout stability issues.

  • Chrome Canary has added a new “Layout Shift Culprits” feature that visually identifies page layout problems.
  • Developers can now see and replay layout shifts in real-time to pinpoint specific issues.
  • The tool will move from Chrome Canary to regular Chrome in a future release, though no date has been announced.

6 SEO Practices You Need To Stop Right Now via @sejournal, @martinibuster

Some SEO practices haven’t kept pace with changes in search engines and may now be self-defeating, leading to content that fails to rank. Here are six SEO practices that hinder ranking and suggestions for more effective approaches.

1. Redundant SEO Practices

The word redundant means no longer effective, not necessary, superfluous. The following are three redundant SEO practices.

A. Expired Domains

Some SEOs think that buying expired domains is a relatively new tactic, but it’s actually well over twenty years old. Old school SEOs stopped buying expired domains in 2003, when Google figured out how to reset the PageRank on them. Everyone holding expired domains at that time experienced it when the domains stopped working.

This is the announcement in 2003 about Google’s handling of expired domains:

“Hey, the index is going to be coming out real soon, so I wanted to give people some idea of what to expect for this index. Of course it’s bigger and deeper (yay!), but we’ve also put more of a focus on algorithmic improvements for spam issues. One resulting improvement with this index is better handling of expired domains–the authority for a domain will be reset when a domain expires, even though dangling links to the expired domain are still out on the web. We’ll be rolling this change in over the next few months starting with this index.”

In 2005, Google became domain name registrar #895 in order to gain access to domain registration information and “increase the quality” of the search results. Becoming a registrar gave Google real-time access to when domain names were registered, who registered them, and which web hosting addresses they pointed to.

It surprises newer SEOs when I say that Google has a handle on expired domains, but it’s not news to those of us who were among the very first SEOs to buy them. Buying expired domains for ranking purposes is an example of a redundant SEO practice.

B. Google And Paid Links

Another example is paid links. I know for a fact that some paid links will push a site to rank better; that has been the case for many years and still is. But those rankings are temporary. Most sites don’t get a manual action, they just stop ranking.

A likely reason is that Google’s infrastructure and algorithms can neutralize the PageRank flowing from paid links, allowing the site to rank where it’s supposed to rank without disrupting the site’s business with a penalty. That wasn’t always the case.

The recent HCU updates are a bloodbath. But the 2012 Google Penguin algorithm update was cataclysmic on a scale several orders of magnitude larger than what many are experiencing today. It affected big brand sites, affiliate sites, and everything in between. Thousands and thousands of websites lost their rankings; nobody was spared.

The paid link business never returned to the mainstream status it formerly enjoyed, back when so-called white hats endorsed paid links on the rationalization that they weren’t bad because they’re “advertising.” Wishful thinking.

Insiders at paid link sellers informed me that a significant number of paid links didn’t work because Google was able to unravel the link networks. As early as 2005, Google was using statistical analysis to identify unnatural link patterns. In 2006, Google applied for a patent on a process that used a Reduced Link Graph to map out the link relationships of websites, including identifying link spam networks.

If you understand the risk, have at it. Anyone who isn’t interested in burning a domain and building another one should avoid it. Paid links are another form of redundant SEO.

C. Robots Index, Follow

The epitome of redundant SEO is the use of “follow, index” in the meta robots tag.

This is why index, follow is redundant:

  • Indexing pages and following links is Googlebot’s default behavior. Telling it to do that is redundant, like telling yourself to breathe.
  • Meta robots tags are directives. Googlebot can’t be forced to index content and follow links.
  • Google’s Robots Meta documentation only lists nofollow and noindex as valid directives.
  • “index” and “follow” are ignored because you can’t use a directive to force a search engine to follow or index a page.
  • Leaving those values there is a bad look in terms of competence.

Validation:

Google’s Special Tags documentation specifically says that those tags aren’t needed because crawling and indexing are the default behavior.

“The default values are index, follow and don’t need to be specified.”

Here’s the part that’s a head scratcher. Some WordPress SEO plugins add the “index, follow” robots meta tag by default. So if you use one of these SEO plugins, it’s not your fault if “index, follow” is on your web page. SEO plugin makers should know better.
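
For reference, here is what the redundant tag looks like next to directives that actually change crawler behavior:

    <!-- Redundant: index and follow are already Googlebot's defaults -->
    <meta name="robots" content="index, follow">

    <!-- Meaningful: these directives change the default behavior -->
    <meta name="robots" content="noindex">
    <meta name="robots" content="nofollow">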

2. Scraping Google’s Search Features

I’m not saying to avoid using Google’s search features for research; that’s fine. What this is about is using that data verbatim “because it’s what Google likes.” I’ve audited many sites hit by Google’s recent updates that exact-match these keywords across their entire website. While that’s not the only thing wrong with the content, I feel it generates a signal that the site was made for search engines, something Google warns about.

Scraping Google’s search features like People Also Ask and People Also Search For can be a way to get related topics to write about. But in my opinion it’s probably not a good idea to exact match those keywords across the entire website or in an entire web page.

It feels like keyword spamming and building web pages for search engines, two negative signals that Google says it uses.

3. Questionable Keyword Use

Many SEO strategies begin with keyword research and end with adding keywords to content. That’s an old school way of content planning that ignores the fact that Google is a natural language search engine.

If the content is about the keyword, then yes, put your keywords in there. Use headings to describe what the content is about and titles to say what the page is about. Because Google is a natural language search engine, it should recognize your phrasing as matching what a reader is asking about. That’s what BERT is about: understanding what a user means.

The decades old practice of regarding headings and titles as a dumping ground for keywords is deeply ingrained. It’s something I encourage you to take some time to think about because a hard focus on keywords can become an example of SEO that gets in the way of SEO.

4. Copy Your Competitors But Do It Better?

A commonly accepted SEO tactic is to analyze competitors’ top-ranked content, then use insights about that content to create the exact same content, only better. On the surface it sounds reasonable, but it doesn’t take much thinking to recognize the absurdity of a strategy predicated on copying someone else’s content and “doing it better.” And then people ask why Google discovers their content but declines to index it.

Don’t overthink it. Overthinking leads to unnecessary things like the whole author bio E-E-A-T exercise the industry recently cycled through. Just use your expertise, your experience, and your knowledge to create content that you know will satisfy readers and make them want to buy more stuff.

5. Adding More Content Because Google

When a publisher acts on the belief that ‘this is what Google likes,’ they’re almost certainly headed in the wrong direction. One example is a misinterpretation of Google’s Information Gain patent, which some believe means Google ranks sites that contain more content on related topics than what’s already in the search results.

That’s a poor understanding of the patent. More to the point, doing what’s in a patent is generally naïve because ranking is a multi-system process; focusing on one thing will generally not be enough to get a site to the top.

The context of the Information Gain patent is ranking web pages in AI chatbots. The invention of the patent, what makes it new, is anticipating what the next natural language question will be and having those answers ready to show in the AI search results, or showing those additional results after the original answers.

The key point about the patent is that it’s about anticipating the next question in a series of questions. So if you ask an AI chatbot how to build a birdhouse, the next question the AI search can anticipate is what kind of wood to use. That’s what information gain is about: identifying what the next question may be and then ranking another page that answers that additional question.

The patent is not about ranking web pages in the regular organic search results. That’s a misinterpretation caused by cherry picking sentences out of context.

Publishing content that’s aligned with your knowledge, experience and your understanding of what users need is a best practice. That’s what expertise and experience is all about.

6. Basing Decisions On Research Of Millions Of Google Search Results

One of the longtime bad practices in SEO, going back decades, is when someone does a study of millions of search results and then draws conclusions about factors in isolation. Drawing conclusions about links, word counts, structured data, and third-party domain rating metrics ignores the fact that multiple systems work together to rank web pages, including some systems that completely re-rank the search results.

Here’s why SEO “research studies” should be ignored:

A. Isolating one factor in a “study” of millions of search results ignores the reality that pages are ranked due to many signals and systems working together.

B. Examining millions of search results overlooks the ranking influence of natural language-based analysis by systems like BERT and the influence they have on the interpretation of queries and web documents.

C. Search results studies present their conclusions as if Google still ranks ten blue links. Search features with images, videos, featured snippets, and shopping results are generally ignored by these correlation studies, making them more obsolete than at any other time in SEO history.

It’s time the SEO industry considered sticking a fork in search results correlation studies and then snapping the handle off.

SEO Is Subjective

SEO is subjective. Everyone has an opinion. It’s up to you to decide what is reasonable for you.

Featured Image by Shutterstock/Roman Samborskyi

YouTube Expands AI-Generated Video Summaries, Adds New Tools via @sejournal, @MattGSouthern

YouTube announced an expansion of its AI-generated video summaries feature alongside several platform updates.

AI Summary Expansion

AI-generated summaries, previously tested on select English-language videos, will now reach a broader global audience.

According to YouTube’s official announcement:

“These video summaries use generative AI to create a short basic summary of a YouTube video which provides viewers with a quick glimpse of what to expect.”

The company emphasized that these AI summaries “do not replace or impact a Creator’s ability to write their own video descriptions” but serve as complementary content to help viewers find relevant information more efficiently.

Studio Mobile

YouTube announced a restructured content management system for creators.

The revamped Studio mobile interface organizes content by format-specific shelves, including videos, Shorts, livestreams, and playlists.

Notable changes include:

  • A new list view option for each content format
  • Simplified visibility of monetization status
  • Scheduled content filter that appears only when relevant

Community Engagement Updates

YouTube is rolling out changes to its community engagement tools.

The former “comments” tab is being rebranded as “Community” and will feature enhanced audience metrics and moderation capabilities.

Notable additions include a community spotlight feature highlighting engaged viewers and AI-powered comment reply suggestions.

YouTube notes this feature “will be limited to a small number of creators while we test the feature.”

Creator Support Chatbot

YouTube is testing an AI-powered support chatbot on Studio desktop.

The feature appears as a clickable icon next to the search field, though it’s currently limited to eligible creators during the testing phase.

Availability

According to the announcement, these features will be rolled out gradually “over the coming weeks and months.”

YouTube requests feedback from creators and viewers as the new features become available, particularly regarding the AI-generated summaries.

Featured Image: Rokas Tenys/Shutterstock

Meta Takes Step To Replace Google Index In AI Search via @sejournal, @martinibuster

Meta is reportedly developing a search engine index for its AI chatbot to reduce reliance on Google for AI-generated summaries of current events. Meta AI appears to be evolving to the next stage of becoming a fully independent AI search engine.

Meta-ExternalAgent

Meta has been crawling the Internet since at least this past summer with a user agent called Meta-ExternalAgent. There have been multiple reports in various forums about excessive crawling, with one person on Hacker News reporting 50,000 hits from the bot. A post in the WebmasterWorld bot crawling forum notes that although the documentation for Meta-ExternalAgent says it respects robots.txt, it wouldn’t have made a difference because the bot never visited the file.

It may be that the bot wasn’t fully ready earlier this year and that its poor behavior has settled down.

The purpose of the bot is to gather content for AI-generated search summaries and, according to reports, to reduce reliance on Google and Bing for search results.

Is This A Challenge To Google?

It may be that this is indeed the prelude to a challenge to Google (and other search engines) in AI search. The information available at this time suggests this is about creating a search index to complement Meta AI. As reported in The Verge, Meta is crawling sites for search summaries to be used within the Meta AI chatbot:

“The search engine would reportedly provide AI-generated search summaries of current events within the Meta AI chatbot.”

The Meta AI chatbot looks like a search engine, and it’s clear that it’s still using Google’s search index.

For example, a search on Meta AI about the recent game four of the World Series showed a summary with an accurate answer that included a link to Google.

Screenshot Of Meta AI With Link To Google Search

Here’s a close up showing the link to Google search results and a link to the sources:

Screenshot Of Close-Up Of Meta AI Results

Clicking on the View Sources button spawns a popup with links to Google Search.

Screenshot Of Meta AI View Sources Pop-Up

Read the original reports:

A report was posted on The Verge, based on another report published by The Information.

Featured Image by Shutterstock/Skorzewiak