Meta’s New Ad Tools Promise More Precise Customer Targeting via @sejournal, @MattGSouthern

Meta is rolling out ad platform upgrades for Facebook and Instagram.

The updates, coming in the next few months, focus on boosting performance and customization through AI-powered campaign optimization.

New Features For Precise Value Definition

Meta is rolling out a new “Conversion Value Rules” tool to give advertisers more flexibility.

This feature lets you adjust the value of different customer actions or groups to your business within a single campaign.

Let’s say you know some customers tend to spend way more over time. Now, you can tell the system to bid higher for those folks without setting up a separate campaign.

Incremental Attribution Model

Meta plans to introduce a new optional attribution setting later this year. This feature will focus on what it terms “incremental conversions.”

Instead of maximizing the total number of attributed conversions, this new model aims to optimize ad delivery for conversions likely to occur only because of ad exposure.

In other words, the model identifies and targets potential customers who wouldn’t have converted without seeing the advertisement.

Initial tests of this feature have yielded positive results. Advertisers participating in these trials have observed an average increase of over 20% in incremental conversions.

Enhanced Analytics Integration

Meta is launching direct connections with external analytics platforms, starting now and continuing through 2025. They’re kicking off with Google Analytics and Northbeam and plan to add Triple Whale and Adobe later.

These connections let businesses share combined campaign data from different channels with Meta’s ad system. The goal is to give advertisers a complete picture of how their campaigns perform across various platforms.

By getting this broader data set, Meta expects to fine-tune its AI models and help advertisers run more effective campaigns.

Cross-Publisher Journey Optimization

Meta is using what it’s learned from its early connections with analytics tools to update its ad system. These changes consider how customers interact with ads across different platforms before purchasing.

Early tests of this update have been positive. On average, third-party analytics tools show a 30% increase in conversions attributed to Meta ads. However, advertisers might see higher costs per thousand impressions (CPMs).

Right now, this update is being applied to campaigns that aim to increase the number or value of conversions under the sales objective. Meta plans to extend this to other campaign objectives soon.

Google Analytics Integration: What It Means

The Google Analytics connection is big news for industry folks, as it could offer the following benefits:

  • Unified view of Meta ads and overall site performance
  • Better multi-touch attribution
  • Insights to refine SEO strategy based on paid social impact
  • Smarter budget decisions between paid social and SEO
  • Easier reporting
  • Cross-channel optimization opportunities

This integration blurs the lines between paid social, organic social, and SEO, offering a more holistic view of digital marketing efforts.

Why This Matters

As privacy changes shake up digital advertising, Meta’s updates address the need for more accurate, valuable insights.

The move towards AI-driven features and cross-channel integration marks a new era in ad sophistication.

To make the most of these updates, review your Meta ad strategy and clearly define your customer journey and value metrics.

Stay tuned for the rollout, and be ready to test these new features as they become available.


Featured Image: Cristian Valderas/Shutterstock

Google’s Gary Illyes Continues To Warn About URL Parameter Issues via @sejournal, @MattGSouthern

Google’s Gary Illyes recently highlighted a recurring SEO problem on LinkedIn, echoing concerns he’d previously voiced on a Google podcast.

The issue? URL parameters create difficulties for search engines when they’re crawling websites.

This problem is especially challenging for big sites and online stores. When different parameters are added to a URL, it can result in numerous unique web addresses that all lead to the same content.

This can impede search engines, reducing their efficiency in crawling and indexing sites properly.

The URL Parameter Conundrum

In both the podcast and LinkedIn post, Illyes explains that URLs can accommodate infinite parameters, each creating a distinct URL even if they all point to the same content.

He writes:

“An interesting quirk of URLs is that you can add an infinite (I call BS) number of URL parameters to the URL path, and by that essentially forming new resources. The new URLs don’t have to map to different content on the server even, each new URL might just serve the same content as the parameter-less URL, yet they’re all distinct URLs. A good example for this is the cache busting URL parameter on JavaScript references: it doesn’t change the content, but it will force caches to refresh.”

He provided an example of how a simple URL like “/path/file” can expand to “/path/file?param1=a” and “/path/file?param1=a&param2=b”, all potentially serving identical content.

“Each [is] a different URL, all the same content,” Illyes noted.
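The duplicate-URL problem Illyes describes can be sketched in a few lines of Python: stripping content-irrelevant parameters and sorting the rest collapses many URL variants into one canonical key. The parameter names below are hypothetical examples, not a definitive list.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical tracking/session parameters that don't change page content.
IGNORED_PARAMS = {"utm_source", "utm_medium", "sessionid", "ref"}

def canonicalize(url: str) -> str:
    """Strip content-irrelevant parameters and sort the rest, so URL
    variants that serve the same content collapse to a single key."""
    parts = urlparse(url)
    kept = sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS
    )
    return urlunparse(parts._replace(query=urlencode(kept)))

variants = [
    "https://example.com/path/file?utm_source=news&param1=a",
    "https://example.com/path/file?param1=a&sessionid=123",
    "https://example.com/path/file?param1=a",
]
# All three variants collapse to one canonical URL.
print({canonicalize(u) for u in variants})
```

This is the same normalization idea that crawlers and deduplication systems apply internally, shown here only to illustrate why distinct URLs can map to identical content.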

Accidental URL Expansion & Its Consequences

Search engines can sometimes find and try to crawl non-existent pages on your site, which Illyes calls “fake URLs.”

These can pop up due to things like poorly coded relative links. What starts as a normal-sized site with around 1,000 pages could balloon to a million phantom URLs.

This explosion of fake pages can cause serious problems. Search engine crawlers might hit your servers hard, trying to crawl all these non-existent pages.

This can overwhelm your server resources and potentially crash your site. Plus, it wastes the search engine’s crawl budget on useless pages instead of your content.

In the end, your pages might not get crawled and indexed properly, which could hurt your search rankings.

Illyes states:

“Sometimes you might create these new fake URLs accidentally, exploding your URL space from a balmy 1000 URLs to a scorching 1 million, exciting crawlers that in turn hammer your servers unexpectedly, melting pipes and whistles left and right. Bad relative links are one relatively common cause. But robotstxt is your friend in this case.”

E-commerce Sites Most Affected

The LinkedIn post didn’t specifically call out online stores, but the podcast discussion clarified that this issue is a big deal for e-commerce platforms.

These websites typically use URL parameters to handle product tracking, filtering, and sorting.

As a result, you might see several different URLs pointing to the same product page, with each URL variant representing color choices, size options, or where the customer came from.

Mitigating The Issue

Illyes consistently recommends using robots.txt to tackle this issue.

On the podcast, Illyes highlighted possible fixes, such as:

  • Creating systems to spot duplicate URLs
  • Better ways for site owners to tell search engines about their URL structure
  • Using robots.txt in smarter ways to guide search engine bots
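Illyes doesn’t prescribe exact rules, but a minimal robots.txt sketch along these lines shows how parameterized variants can be kept away from crawlers (the parameter names here are hypothetical examples):

```
# robots.txt — block crawling of parameterized URL variants
User-agent: *
# Block any URL carrying a session or sort parameter
Disallow: /*?*sessionid=
Disallow: /*?*sort=
# Parameter-less pages remain crawlable
Allow: /products/
```

Google supports the `*` wildcard in robots.txt patterns; the most specific matching rule wins, so test rules carefully before deploying them site-wide.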

The Deprecated URL Parameters Tool

In the podcast discussion, Illyes touched on Google’s past attempts to address this issue, including the now-deprecated URL Parameters tool in Search Console.

This tool allowed websites to indicate which parameters were important and which could be ignored.

When asked on LinkedIn about potentially bringing back this tool, Illyes was skeptical about its practical effectiveness.

He stated, “In theory yes. in practice no,” explaining that the tool suffered from the same issues as robots.txt, namely that “people couldn’t for their dear life figure out how to manage their own parameters.”

Implications for SEO and Web Development

This ongoing discussion from Google has several implications for SEO and web development:

  1. Crawl Budget: For large sites, managing URL parameters can help conserve crawl budget, ensuring that important pages are crawled and indexed.
  2. Site Architecture: Developers may need to reconsider how they structure URLs, particularly for large e-commerce sites with numerous product variations.
  3. Faceted Navigation: E-commerce sites using faceted navigation should be mindful of how this impacts URL structure and crawlability.
  4. Canonical Tags: Canonical tags help Google understand which URL version should be considered primary.
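As an illustration of point 4, a parameterized product URL can declare its parameter-less version as the primary one (the URL and parameters below are hypothetical):

```html
<!-- Placed in the <head> of a variant URL such as /shirt?color=blue&size=m,
     this tells Google which version to treat as primary -->
<link rel="canonical" href="https://example.com/shirt" />
```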

Why This Matters

Google is discussing URL parameter issues across multiple channels, which indicates a genuine concern for search quality.

For industry experts, staying informed on these technical aspects is essential for maintaining search visibility.

While Google works on solutions, proactive URL management and effective crawler guidance are recommended.

Google’s John Mueller On Removing Unwanted Content From Search via @sejournal, @MattGSouthern

Google’s John Mueller explained on Reddit how to remove unwanted content from search results.

This came up when someone asked about getting rid of an old article about their arrest that kept showing up in Google searches.

The person was arrested for a minor offense in 2018, but a news article about it still appears in Google searches years later.

Even though the case was settled, the article is still on the first page of results, and the person wants it removed.

What can they do? Here’s what Mueller advised.

Mueller’s Guidance On Getting Content Removed

Mueller explained that even though the news outlet said they “de-indexed” the article, this process isn’t always quick or simple.

He suggested a few ways to tackle the issue:

  1. Complete Takedown: Removing the article entirely, so the page returns a 404 error, is the most effective option; however, the news outlet declined to do this.
  2. Noindex Tag: This is probably what the news outlet did. It keeps the article on its site but tells search engines to ignore it. Mueller advised checking the page’s code for this tag.
  3. Name Swap: Mueller suggested asking the news outlet to replace the person’s name with something generic like “John Doe” as a workaround. This could make the article harder to find in name searches.
  4. Right to be Forgotten: For folks in some areas, especially Europe, this legal option might help.
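For reference, checking the page’s code for option 2 means looking for a robots meta tag like this in the page’s `<head>` (or an equivalent `X-Robots-Tag` HTTP header):

```html
<!-- The visible robots meta tag Mueller refers to: keeps the page live
     for visitors but tells search engines to drop it from results -->
<meta name="robots" content="noindex" />
```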

As for the article still showing up in searches, Mueller said that even after de-indexing, it can take up to six months for a page to disappear from results:

“Regarding how long it takes to “see” a noindex, there’s no specific time, but it’s usually less than a few months. I think I’ve seen it take up to 6 months. They’re not kept in the index forever without being refreshed. If you use the public removal tool (for non-site-owners), Google will check the page fairly quickly (within a few days) and use that to confirm that the page has a noindex.”

He assured that pages don’t stay indexed forever without being rechecked.

Mueller mentioned that while some “hidden” de-indexing methods exist, they’re not common.

He recommended using Google’s public removal tool, which allows Google to recheck the page within days. This might speed things up if the news outlet has properly de-indexed the article.

Mueller stated:

“It’s really rare (I can’t think of any case in the last year or so) that someone would use a kind of “hidden” noindex; it’s complicated to set up & maintain. Most sites just use the visible robots meta tag for switching things to noindex, which you would be able to see yourself fairly quickly. If you use the removal tool, Google will also see any “more hidden” noindex settings.”

This advice gave the person a better grasp of their situation and possible next moves to deal with their lingering online content problem.

Tools for Content Removal

Mueller listed two main ways to get rid of content from search results:

  • For website owners: The Removals and SafeSearch reports tool
  • For everyone else: The Refresh Outdated Content tool

If you own the site, Google removes the content on request.

For non-owners, Google does a few checks before taking anything down.

Mueller said using these tools won’t accidentally make your page show up more in searches.

He stated:

“The removal tool for site-owners has a help page titled “Removals and SafeSearch reports Tool”, subtitle “Temporarily block search results from your site, or manage SafeSearch filtering”. (Site-owner = the person running the website, in their Search Console account)

The public removal tool for non-site-owners is titled “Refresh Outdated Content tool” / subtitle: “Request an update to outdated content in Google Search results” (non-site-owner would be someone who doesn’t work on the website themselves, like you).

The site-owner tool will process a removal very quickly, without checking if the page is actually noindex or not. The assumption is that as the site-owner, you can block whatever you want. If they’re willing to do this for you, that’s the fastest way.

For non-site-owners, the tool will check multiple times to confirm that the page is removed, noindex, or appropriately changed. It won’t do anything until it has confirmed that, so there’s no harm in trying it. Neither of these tools will make a page more visible (SEOs would love that). The tools are also labeled as “temporary” removals – because if the page becomes indexable again, it can show up again in search.”

Why This Matters

This shows how difficult it can be to manage what people see about you online.

While Google offers ways to remove old or unwanted articles, it can take a while, and sometimes, the publisher must cooperate.

Featured Image: tomeqs/Shutterstock

Google Ranking Glitch: Live Updates (Unrelated to August Core Update) via @sejournal, @theshelleywalsh

Google is currently addressing a separate issue affecting search rankings, unrelated to the August 2024 core update.

Google Revises Core Update Guidance: What’s Changed? via @sejournal, @MattGSouthern

Google has updated its guidance on core algorithm updates, providing more detailed recommendations for impacted websites.

The revised document, published alongside the August core update rollout, includes several additions and removals.

New Sections Added

The most significant change includes two new sections: “Check if there’s a traffic drop in Search Console” and “Assessing a large drop in position.”

The “Check if there’s a traffic drop in Search Console” section provides step-by-step instructions for using Search Console to determine if a core update has affected a website.

The process involves:

  1. Confirming the completion of the core update by checking the Search Status Dashboard
  2. Waiting at least a week after the update finishes before analyzing Search Console data
  3. Comparing search performance from before and after the update to identify ranking changes
  4. Analyzing different search types (web, image, video, news) separately

The “Assessing a large drop in position” section offers guidance for websites that have experienced a significant ranking decline following a core update.

It recommends thoroughly evaluating the site’s content against Google’s quality guidelines, focusing on the pages most impacted by the update.

Other Additions

The updated document also includes a “Things to keep in mind when making changes” section, encouraging website owners to prioritize substantive, user-centric improvements rather than quick fixes.

It suggests that content deletion should be a last resort, indicating that removing content suggests it was created for search engines rather than users.

Another new section, “How long does it take to see an effect in Search results,” sets expectations for the time required to see ranking changes after making content improvements.

Google states that it may take several months for the full impact to be reflected, possibly requiring waiting until a future core update.

The document adds a closing paragraph noting that rankings can change even without website updates as new content emerges on the web.

Removed Content

Several sections from the previous version of the document have been removed or replaced in the update.

The paragraph stating that pages impacted by a core update “haven’t violated our spam policies” and comparing core updates to refreshing a movie list has been removed.

The “Assessing your own content” section has been replaced by the new “Assessing a large drop in position” section.

The “How long does it take to recover from a core update?” section no longer contains specific details about the timing and cadence of core updates and the factors influencing recovery time.

Shift In Tone & Focus

There’s a noticeable shift in tone and focus with this update.

While the previous guide explained the nature and purpose of core updates, the revised edition has more actionable guidance.

For example, the new sections related to Search Console provide clearer direction for identifying and addressing ranking drops.

In Summary

Here’s a list of added and removed items in Google’s updated Core Algorithm Update Guidance.

Added:

  • “Check if there’s a traffic drop in Search Console” section:
    • Step-by-step instructions for using Search Console to identify ranking changes.
  • “Assessing a large drop in position” section:
    • Guidance for websites experiencing significant ranking declines after a core update.
  • “Things to keep in mind when making changes” section:
    • Encourages substantive improvements over quick fixes.
    • Suggests content deletion as a last resort.
  • “How long does it take to see an effect in Search results” section:
    • Sets expectations for the time to see ranking changes after content improvements.
    • States that full impact may take several months and require a future core update.
  • Closing paragraph:
    • Notes that rankings can change even without website updates as new content emerges.

Removed:

  • A paragraph stating pages impacted by a core update “haven’t violated our spam policies.”
  • Comparing core updates to refreshing a list of best movies.
  • The “Assessing your own content” section from the previous version was replaced by the new “Assessing a large drop in position” section.
  • Specific details about the timing of core updates and factors influencing recovery time.

An archived version of Google’s previous core update guidance can be accessed via the Wayback Machine.


Featured Image: salarko/Shutterstock

Google’s “Information Gain” Patent For Ranking Web Pages via @sejournal, @martinibuster

Google was recently granted a patent on ranking web pages, which may offer insights into how AI Overviews ranks content. The patent describes a method for ranking pages based on what a user might be interested in next.

Contextual Estimation Of Link Information Gain

The patent, titled Contextual Estimation Of Link Information Gain, was filed in 2018 and granted in June 2024. It’s about calculating a ranking score called Information Gain that is used to rank a second set of web pages that are likely to be of interest to a user as a slightly different follow-up topic related to a previous question.

The patent starts with general descriptions, then adds layers of specifics over the course of paragraphs. An analogy is that it’s like a pizza. It starts out as a mozzarella pizza, then they add mushrooms, so now it’s a mushroom pizza. Then they add onions, so now it’s a mushroom and onion pizza. There are layers of specifics that build up to the entire context.

So if you read just one section of it, it’s easy to say, “It’s clearly a mushroom pizza” and be completely mistaken about what it really is.

There are layers of context but what it’s building up to is:

  • Ranking a web page that is relevant for what a user might be interested in next.
  • The context of the invention is an automated assistant or chatbot
  • A search engine plays a role in a way that seems similar to Google’s AI Overviews

Information Gain And SEO: What’s Really Going On?

A couple of months ago I read a comment on social media asserting that “Information Gain” was a significant factor in a recent Google core algorithm update.  That mention surprised me because I’d never heard of information gain before. I asked some SEO friends about it and they’d never heard of it either.

What the person on social media had asserted was something like Google was using an “Information Gain” score to boost the ranking of web pages that had more information than other web pages. So the idea was that it was important to create pages that have more information than other pages, something along those lines.

So I read the patent and discovered that “Information Gain” is not about ranking pages with more information than other pages. It’s really about something that is more profound for SEO because it might help to understand one dimension of how AI Overviews might rank web pages.

TL/DR Of The Information Gain Patent

What the information gain patent is really about is even more interesting because it may give an indication of how AI Overviews (AIO) ranks web pages that a user might be interested next.  It’s sort of like introducing personalization by anticipating what a user will be interested in next.

The patent describes a scenario where a user makes a search query and the automated assistant or chatbot provides an answer that’s relevant to the question. The information gain scoring system works in the background to rank a second set of web pages that are relevant to what the user might be interested in next. It’s a new dimension in how web pages are ranked.

The Patent’s Emphasis on Automated Assistants

There are multiple versions of the Information Gain patent dating from 2018 to 2024. The first version is similar to the last version with the most significant difference being the addition of chatbots as a context for where the information gain invention is used.

The patent uses the phrase “automated assistant” 69 times and uses the phrase “search engine” only 25 times.  Like with AI Overviews, search engines do play a role in this patent but it’s generally in the context of automated assistants.

As will become evident, there is nothing to suggest that a web page containing more information than the competition is likelier to be ranked higher in the organic search results. That’s not what this patent talks about.

General Description Of Context

All versions of the patent describe the presentation of search results within the context of an automated assistant and natural language question answering. The patent starts with a general description and progressively becomes more specific. This is typical of patents: they claim protection for the widest contexts in which the invention can be used, then become progressively specific.

The entire first section (the Abstract) doesn’t even mention web pages or links. It’s just about the information gain score within a very general context:

“An information gain score for a given document is indicative of additional information that is included in the document beyond information contained in documents that were previously viewed by the user.”

That is a nutshell description of the patent, with the key insight being that the information gain scoring happens on pages after the user has seen the first search results.

More Specific Context: Automated Assistants

The second paragraph in the section titled “Background” is slightly more specific and adds an additional layer of context for the invention because it mentions  links. Specifically, it’s about a user that makes a search query and receives links to search results – no information gain score calculated yet.

The Background section says:

“For example, a user may submit a search request and be provided with a set of documents and/or links to documents that are responsive to the submitted search request.”

The next part builds on top of a user having made a search query:

“Also, for example, a user may be provided with a document based on identified interests of the user, previously viewed documents of the user, and/or other criteria that may be utilized to identify and provide a document of interest. Information from the documents may be provided via, for example, an automated assistant and/or as results to a search engine. Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

That last sentence is poorly worded.

Here’s the original sentence:

“Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

Here’s how it makes more sense:

“Further, information from the documents may be provided to the user… based on continued searching after the user has ended a search session.”

The information provided to the user is “in response to a search request and/or may be automatically served to the user”

It’s a little clearer if you put parentheses around it:

Further, information from the documents may be provided to the user (in response to a search request and/or may be automatically served to the user) based on continued searching after the user has ended a search session.

Takeaways:

  • The patent describes identifying documents that are relevant to the “interests of the user” based on “previously viewed documents” “and/or other criteria.”
  • It sets a general context of an automated assistant “and/or” a search engine
  • Information from the documents that are based on “previously viewed documents” “and/or other criteria” may be shown after the user continues searching.

More Specific Context: Chatbot

The patent next adds an additional layer of context and specificity by mentioning how chatbots can “extract” an answer from a web page (“document”) and show that as an answer. This is about showing a summary that contains the answer, kind of like featured snippets, but within the context of a chatbot.

The patent explains:

“In some cases, a subset of information may be extracted from the document for presentation to the user. For example, when a user engages in a spoken human-to-computer dialog with an automated assistant software process (also referred to as “chatbots,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” “conversational agents,” “virtual assistants,” etc.), the automated assistant may perform various types of processing to extract salient information from a document, so that the automated assistant can present the information in an abbreviated form.

As another example, some search engines will provide summary information from one or more responsive and/or relevant documents, in addition to or instead of links to responsive and/or relevant documents, in response to a user’s search query.”

The last sentence sounds like it’s describing something that’s like a featured snippet or like AI Overviews where it provides a summary. The sentence is very general and ambiguous because it uses “and/or” and “in addition to or instead of” and isn’t as specific as the preceding sentences. It’s an example of a patent being general for legal reasons.

Ranking The Next Set Of Search Results

The next section is called the Summary and it goes into more details about how the Information Gain score represents how likely the user will be interested in the next set of documents. It’s not about ranking search results, it’s about ranking the next set of search results (based on a related topic).

It states:

“An information gain score for a given document is indicative of additional information that is included in the given document beyond information contained in other documents that were already presented to the user.”

Ranking Based On Topic Of Web Pages

It then talks about presenting the web page in a browser, audibly reading the relevant part of the document or audibly/visually presenting a summary of the document (“audibly/visually presenting salient information extracted from the document to the user, etc.”)

But the part that’s really interesting is when it next explains using a topic of the web page as a representation of the content, which is used to calculate the information gain score.

It describes many different ways of extracting a representation of what the page is about. But what’s important is that it describes calculating the Information Gain score based on a representation of what the content is about, like the topic.

“In some implementations, information gain scores may be determined for one or more documents by applying data indicative of the documents, such as their entire contents, salient extracted information, a semantic representation (e.g., an embedding, a feature vector, a bag-of-words representation, a histogram generated from words/phrases in the document, etc.) across a machine learning model to generate an information gain score.”
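The patent doesn’t publish a formula, but the idea of scoring a document by how much it adds beyond already-viewed documents can be illustrated with a crude sketch. The bag-of-words representation below is one of the options the patent lists (alongside embeddings and feature vectors); the scoring function itself is purely illustrative, not Google’s implementation.

```python
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    """A crude bag-of-words representation of a document."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def information_gain_score(candidate: str, already_seen: list) -> float:
    """Illustrative only: score a candidate document by how *dissimilar*
    it is to documents the user has already viewed, so novel documents
    rank higher in the second result set."""
    if not already_seen:
        return 1.0
    max_overlap = max(
        cosine(bag_of_words(candidate), bag_of_words(d)) for d in already_seen
    )
    return 1.0 - max_overlap

seen = ["how to change a flat bicycle tire"]
candidates = [
    "how to change a flat bicycle tire",   # redundant: near-zero gain
    "choosing the right inner tube size",  # novel follow-up: high gain
]
ranked = sorted(candidates, key=lambda c: information_gain_score(c, seen), reverse=True)
print(ranked[0])
```

The key point the sketch captures is that the score is computed relative to what the user has already seen, not in isolation, which is why it applies to the second set of results rather than the first.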

The patent goes on to describe ranking a first set of documents and using the Information Gain scores to rank additional sets of documents that anticipate follow up questions or a progression within a dialog of what the user is interested in.

The automated assistant can in some implementations query a search engine and then apply the Information Gain rankings to the multiple sets of search results (that are relevant to related search queries).

There are multiple variations of doing the same thing but in general terms this is what it describes:

“Based on the information gain scores, information contained in one or more of the new documents may be selectively provided to the user in a manner that reflects the likely information gain that can be attained by the user if the user were to be presented information from the selected documents.”

What All Versions Of The Patent Have In Common

All versions of the patent share general similarities over which more specifics are layered in over time (like adding onions to a mushroom pizza). The following are the baseline of what all the versions have in common.

Application Of Information Gain Score

All versions of the patent describe applying the information gain score to a second set of documents that have additional information beyond the first set of documents. Obviously, there are no criteria or information to guess what the user is going to search for when they start a search session. So information gain scores are not applied to the first search results.

Examples of passages that are the same for all versions:

  • A second set of documents is identified that is also related to the topic of the first set of documents but that have not yet been viewed by the user.
  • For each new document in the second set of documents, an information gain score is determined that is indicative of, for the new document, whether the new document includes information that was not contained in the documents of the first set of documents…

Automated Assistants

All four versions of the patent refer to automated assistants that show search results in response to natural language queries.

The 2018 and 2023 versions of the patent both mention search engines 25 times. The 2018 version mentions “automated assistant” 74 times and the latest version mentions it 69 times.

They all make references to “conversational agents,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” and “virtual assistants.”

It’s clear that the emphasis of the patent is on automated assistants, not the organic search results.

Dialog Turns

Note: In everyday language we use the word dialogue. In computing it’s spelled dialog.

All versions of the patents refer to a way of interacting with the system in the form of a dialog, specifically a dialog turn. A dialog turn is the back and forth that happens when a user asks a question using natural language, receives an answer and then asks a follow up question or another question altogether. This can be natural language in text, text to speech (TTS), or audible.

This back and forth, the “dialog turn,” is the main aspect the versions have in common; all of them use it as context.

Here’s an example of how the dialog turn works:

“Automated assistant client 106 and remote automated assistant 115 can process natural language input of a user and provide responses in the form of a dialog that includes one or more dialog turns. A dialog turn may include, for instance, user-provided natural language input and a response to natural language input by the automated assistant.

Thus, a dialog between the user and the automated assistant can be generated that allows the user to interact with the automated assistant …in a conversational manner.”
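As a concrete illustration of that structure (not code from the patent; the class and field names are my own), a dialog can be modeled as a sequence of turns, each pairing the user’s natural-language input with the assistant’s response:

```python
from dataclasses import dataclass

@dataclass
class DialogTurn:
    """One back-and-forth exchange: a user query plus the assistant's answer."""
    user_input: str           # natural-language input from the user
    assistant_response: str   # the automated assistant's response

# A dialog is simply an ordered list of turns.
dialog: list[DialogTurn] = [
    DialogTurn("What's a good tomato variety for containers?",
               "Bush varieties like Roma do well in containers."),
    DialogTurn("How often should I water them?",
               "Container tomatoes usually need daily watering."),
]
```

Each follow-up question starts a new turn, which is the moment where the patent's second set of documents (and their information gain scores) comes into play.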

Problems That Information Gain Scores Solve

The main feature of the patent is to improve the user experience by understanding the additional value that a new document provides compared to documents that a user has already seen. This additional value is what is meant by the phrase Information Gain.

There are multiple ways that information gain is useful. One that all versions of the patent describe is in the context of an audio response, where a long-winded answer is a poor experience, including in a TTS (text to speech) context.

The patent explains the problem of a long-winded response:

“…and so the user may wait for substantially all of the response to be output before proceeding. In comparison with reading, the user is able to receive the audio information passively, however, the time taken to output is longer and there is a reduced ability to scan or scroll/skip through the information.”

The patent then explains how information gain can speed up answers by eliminating redundant (repetitive) answers and by avoiding answers that are incomplete and force the user into another dialog turn.

This part of the patent refers to the information density of a section in a web page: a section that answers the question with the least amount of words. Information density is about how “accurate,” “concise,” and “relevant” the answer is, both for relevance and for avoiding repetitiveness. Information density is especially important for audio/spoken answers.

This is what the patent says:

“As such, it is important in the context of an audio output that the output information is relevant, accurate and concise, in order to avoid an unnecessarily long output, a redundant output, or an extra dialog turn.

The information density of the output information becomes particularly important in improving the efficiency of a dialog session. Techniques described herein address these issues by reducing and/or eliminating presentation of information a user has already been provided, including in the audio human-to-computer dialog context.”

The idea of “information density” is important in a general sense because dense content communicates better for users, but it’s probably extra important in the context of being shown in chatbot search results, whether spoken or not. Google AI Overviews shows snippets from a web page, but perhaps more importantly, communicating concisely is the best way to stay on topic and make it easy for a search engine to understand content.

Search Results Interface

All versions of the Information Gain patent are clear that the invention is not in the context of organic search results. It’s explicitly within the context of ranking web pages within the natural language interface of an automated assistant or an AI chatbot.

However, there is a part of the patent that describes a way of showing users the second set of results within a “search results interface.” The scenario is that the user sees an answer and then becomes interested in a related topic. The second set of ranked web pages is shown in a “search results interface.”

The patent explains:

“In some implementations, one or more of the new documents of the second set may be presented in a manner that is selected based on the information gain scores. For example, one or more of the new documents can be rendered as part of a search results interface that is presented to the user in response to a query that includes the topic of the documents, such as references to one or more documents. In some implementations, these search results may be ranked at least in part based on their respective information gain scores.”

“…The user can then select one of the references and information contained in the particular document can be presented to the user. Subsequently, the user may return to the search results and the references to the document may again be provided to the user but updated based on new information gain scores for the documents that are referenced.

In some implementations, the references may be reranked and/or one or more documents may be excluded (or significantly demoted) from the search results based on the new information gain scores that were determined based on the document that was already viewed by the user.”

What is a search results interface? I think it’s just an interface that shows search results.

Let’s pause here to underline that it should be clear at this point that the patent is not about ranking web pages that are comprehensive about a topic. The overall context of the invention is showing documents within an automated assistant.

A search results interface is just an interface; it’s never described as being organic search results.

There’s more that is the same across all versions of the patent but the above are the important general outlines and context of it.

Claims Of The Patent

The claims section is where the scope of the actual invention is described and for which legal protection is sought. It is mainly focused on the invention and less so on the context. Thus, there is no mention of search engines, automated assistants, audible responses, or TTS (text to speech) within the Claims section. What remains is the context of the search results interface, which presumably covers all of the contexts.

Context: First Set Of Documents

It starts out by outlining the context of the invention. This context is receiving a query, identifying the topic, and ranking a first group of relevant web pages (documents) and selecting at least one of them as being relevant and either showing the document or communicating the information from the document (like a summary).

“1. A method implemented using one or more processors, comprising: receiving a query from a user, wherein the query includes a topic; identifying a first set of documents that are responsive to the query, wherein the documents of the set of documents are ranked, and wherein a ranking of a given document of the first set of documents is indicative of relevancy of information included in the given document to the topic; selecting, based on the rankings and from the documents of the first set of documents, a most relevant document providing at least a portion of the information from the most relevant document to the user;”

Context: Second Set Of Documents

Then what immediately follows is the part about ranking a second set of documents that contain additional information. This second set of documents is ranked using the information gain scores to show more information after a relevant document from the first group has been shown.

This is how it explains it:

“…in response to providing the most relevant document to the user, receiving a request from the user for additional information related to the topic; identifying a second set of documents, wherein the second set of documents includes at one or more of the documents of the first set of documents and does not include the most relevant document; determining, for each document of the second set, an information gain score, wherein the information gain score for a respective document of the second set is based on a quantity of new information included in the respective document of the second set that differs from information included in the most relevant document; ranking the second set of documents based on the information gain scores; and causing at least a portion of the information from one or more of the documents of the second set of documents to be presented to the user, wherein the information is presented based on the information gain scores.”
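Putting the two claim steps together, here is a hedged sketch of the overall flow. Both “relevance” and “information gain” are stand-in term-overlap heuristics of my own; the claim does not specify how either score is actually computed.

```python
import re

def terms(text: str) -> set[str]:
    """Lowercase word terms in a text (toy tokenizer)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer_follow_up(query: str, documents: list[str]) -> list[str]:
    # Step 1 (first set): rank documents by toy relevance to the query
    # and pick the most relevant one to show first.
    ranked = sorted(documents,
                    key=lambda d: len(terms(d) & terms(query)),
                    reverse=True)
    most_relevant = ranked[0]

    # Step 2 (second set): when the user asks for more, score the
    # remaining documents by the share of their terms that are new
    # relative to the document already shown, then re-rank by that gain.
    def gain(doc: str) -> float:
        new = terms(doc)
        return len(new - terms(most_relevant)) / max(len(new), 1)

    return [most_relevant] + sorted(ranked[1:], key=gain, reverse=True)

query = "grow tomatoes indoors"
docs = [
    "grow tomatoes indoors from seed",
    "grow tomatoes indoors with lights",
    "transplant tomato seedlings outdoors",
]
result = answer_follow_up(query, docs)
```

The document the user has already seen stays first, and among the rest, the one contributing the most new information jumps ahead of the near-duplicate, mirroring the re-ranking the claim describes.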

Granular Details

The rest of the claims section contains granular details about the concept of Information Gain: a ranking of documents based on what the user has already seen, representing a related topic the user may be interested in. The purpose of these details is to lock them in for legal protection as part of the invention.

Here’s an example:

The method of claim 1, wherein identifying the first set comprises:
causing to be rendered, as part of a search results interface that is presented to the user in response to a previous query that includes the topic, references to one or more documents of the first set;
receiving user input that that indicates selection of one of the references to a particular document of the first set from the search results interface, wherein at least part of the particular document is provided to the user in response to the selection;

To make an analogy, it’s describing how to make the pizza dough, clean and cut the mushrooms, etc. It’s not important for our purposes to understand it as much as the general view of what the patent is about.

Information Gain Patent

An opinion was shared on social media that this patent has something to do with ranking web pages in the organic search results. I saw it, read the patent, and discovered that’s not how the patent works. It’s a good patent and it’s important to correctly understand it. I analyzed multiple versions of the patent to see what they had in common and what was different.

A careful reading of the patent shows that it is clearly focused on anticipating what the user may want to see based on what they have already seen. To accomplish this the patent describes the use of an Information Gain score for ranking web pages that are on topics that are related to the first search query but not specifically relevant to that first query.

The context of the invention is generally automated assistants, including chatbots. A search engine could be used as part of finding relevant documents but the context is not solely an organic search engine.

This patent could be applicable to the context of AI Overviews. I would not limit the context to AI Overviews as there are additional contexts such as spoken language in which Information Gain scoring could apply. Could it apply in additional contexts like Featured Snippets? The patent itself is not explicit about that.

Read the latest version of the Information Gain patent:

Contextual estimation of link information gain

Featured Image by Shutterstock/Khosro