Google Revises Core Update Guidance: What’s Changed? via @sejournal, @MattGSouthern

Google has updated its guidance on core algorithm updates, providing more detailed recommendations for impacted websites.

The revised document, published alongside the August core update rollout, includes several additions and removals.

New Sections Added

The most significant change is the addition of two new sections: “Check if there’s a traffic drop in Search Console” and “Assessing a large drop in position.”

The “Check if there’s a traffic drop in Search Console” section provides step-by-step instructions for using Search Console to determine if a core update has affected a website.

The process involves:

  1. Confirming the completion of the core update by checking the Search Status Dashboard
  2. Waiting at least a week after the update finishes before analyzing Search Console data
  3. Comparing search performance before and after the update to identify ranking changes (a scripted approach to this comparison is sketched after this list)
  4. Analyzing different search types (web, image, video, news) separately
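For those who prefer to script the comparison rather than click through the UI, the same performance data is available through the Search Console API. The sketch below is a minimal, hypothetical example: the site URL, date windows, and service-account file are placeholders, not values from Google’s documentation. It pulls clicks by query for equal-length windows before and after the update and prints the biggest movers; changing the type parameter covers the separate search types mentioned in step 4.

```python
# Hypothetical sketch: compare Search Console clicks before vs. after a core update.
# The property URL, dates, and service-account key file are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "sc-domain:example.com"  # placeholder property
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

def clicks_by_query(start_date, end_date, search_type="web"):
    """Return {query: clicks} for the given window and search type (web, image, video, news)."""
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query"],
        "type": search_type,
        "rowLimit": 1000,
    }
    rows = gsc.searchanalytics().query(siteUrl=SITE, body=body).execute().get("rows", [])
    return {row["keys"][0]: int(row["clicks"]) for row in rows}

# Equal-length windows: a few weeks before the update began, and a few weeks
# starting at least a week after it finished (placeholder dates).
before = clicks_by_query("2024-07-15", "2024-08-04")
after = clicks_by_query("2024-08-26", "2024-09-15")

# Print the queries that drove the most clicks before the update and how they moved.
for query in sorted(before, key=before.get, reverse=True)[:25]:
    delta = after.get(query, 0) - before[query]
    print(f"{query}: {before[query]} -> {after.get(query, 0)} ({delta:+d} clicks)")
```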

The “Assessing a large drop in position” section offers guidance for websites that have experienced a significant ranking decline following a core update.

It recommends thoroughly evaluating the site’s content against Google’s quality guidelines, focusing on the pages most impacted by the update.

Other Additions

The updated document also includes a “Things to keep in mind when making changes” section, encouraging website owners to prioritize substantive, user-centric improvements rather than quick fixes.

It advises treating content deletion as a last resort, noting that removing content can signal it was created for search engines rather than for users.

Another new section, “How long does it take to see an effect in Search results,” sets expectations for the time required to see ranking changes after making content improvements.

Google states that it may take several months for the full impact to be reflected, possibly requiring waiting until a future core update.

The document adds a closing paragraph noting that rankings can change even without website updates as new content emerges on the web.

Removed Content

Several sections from the previous version of the document have been removed or replaced in the update.

The paragraph stating that pages impacted by a core update “haven’t violated our spam policies” and comparing core updates to refreshing a movie list has been removed.

The “Assessing your own content” section has been replaced by the new “Assessing a large drop in position” section.

The “How long does it take to recover from a core update?” section no longer contains specific details about the timing and cadence of core updates and the factors influencing recovery time.

Shift In Tone & Focus

There’s a noticeable shift in tone and focus with this update.

While the previous guide explained the nature and purpose of core updates, the revised edition has more actionable guidance.

For example, the new sections related to Search Console provide clearer direction for identifying and addressing ranking drops.

In Summary

Here’s a list of added and removed items in Google’s updated Core Algorithm Update Guidance.

Added:

  • “Check if there’s a traffic drop in Search Console” section:
    • Step-by-step instructions for using Search Console to identify ranking changes.
  • “Assessing a large drop in position” section:
    • Guidance for websites experiencing significant ranking declines after a core update.
  • “Things to keep in mind when making changes” section:
    • Encourages substantive improvements over quick fixes.
    • Suggests content deletion as a last resort.
  • “How long does it take to see an effect in Search results” section:
    • Sets expectations for the time to see ranking changes after content improvements.
    • States that full impact may take several months and require a future core update.
  • Closing paragraph:
    • Notes that rankings can change even without website updates as new content emerges.

Removed:

  • A paragraph stating pages impacted by a core update “haven’t violated our spam policies.”
  • The comparison of core updates to refreshing a list of best movies.
  • The “Assessing your own content” section from the previous version was replaced by the new “Assessing a large drop in position” section.
  • Specific details about the timing of core updates and factors influencing recovery time.

An archived version of Google’s previous core update guidance can be accessed via the Wayback Machine.


Featured Image: salarko/Shutterstock

Google’s “Information Gain” Patent For Ranking Web Pages via @sejournal, @martinibuster

Google was recently granted a patent on ranking web pages, which may offer insights into how AI Overviews ranks content. The patent describes a method for ranking pages based on what a user might be interested in next.

Contextual Estimation Of Link Information Gain

The patent is named Contextual Estimation Of Link Information Gain; it was filed in 2018 and granted in June 2024. It’s about calculating a ranking score called Information Gain that is used to rank a second set of web pages likely to interest a user as a slightly different follow-up topic related to a previous question.

The patent starts with general descriptions, then adds layers of specifics over the course of many paragraphs. An analogy: it’s like a pizza. It starts out as a mozzarella pizza, then they add mushrooms, so now it’s a mushroom pizza. Then they add onions, so now it’s a mushroom and onion pizza. There are layers of specifics that build up to the entire context.

So if you read just one section of it, it’s easy to say, “It’s clearly a mushroom pizza” and be completely mistaken about what it really is.

There are layers of context but what it’s building up to is:

  • Ranking a web page that is relevant for what a user might be interested in next.
  • The context of the invention is an automated assistant or chatbot
  • A search engine plays a role in a way that seems similar to Google’s AI Overviews

Information Gain And SEO: What’s Really Going On?

A couple of months ago I read a comment on social media asserting that “Information Gain” was a significant factor in a recent Google core algorithm update.  That mention surprised me because I’d never heard of information gain before. I asked some SEO friends about it and they’d never heard of it either.

What the person on social media asserted was something like this: Google was using an “Information Gain” score to boost the ranking of web pages that had more information than other web pages. So the idea was that it was important to create pages with more information than other pages, something along those lines.

So I read the patent and discovered that “Information Gain” is not about ranking pages with more information than other pages. It’s really about something that is more profound for SEO because it might help to understand one dimension of how AI Overviews might rank web pages.

TL/DR Of The Information Gain Patent

What the information gain patent is really about is even more interesting because it may give an indication of how AI Overviews (AIO) ranks web pages that a user might be interested in next. It’s sort of like introducing personalization by anticipating what a user will be interested in next.

The patent describes a scenario where a user makes a search query and the automated assistant or chatbot provides an answer that’s relevant to the question. The information gain scoring system works in the background to rank a second set of web pages that are relevant to what the user might be interested in next. It’s a new dimension in how web pages are ranked.

The Patent’s Emphasis on Automated Assistants

There are multiple versions of the Information Gain patent dating from 2018 to 2024. The first version is similar to the last version with the most significant difference being the addition of chatbots as a context for where the information gain invention is used.

The patent uses the phrase “automated assistant” 69 times and the phrase “search engine” only 25 times. As with AI Overviews, search engines do play a role in this patent, but it’s generally in the context of automated assistants.

As will become evident, there is nothing to suggest that a web page containing more information than the competition is likelier to be ranked higher in the organic search results. That’s not what this patent talks about.

General Description Of Context

All versions of the patent describe the presentation of search results within the context of an automated assistant and natural language question answering. The patent starts with a general description and progressively becomes more specific. This is typical of patents: they seek protection for the widest contexts in which the invention can be used and then become progressively more specific.

The entire first section (the Abstract) doesn’t even mention web pages or links. It’s just about the information gain score within a very general context:

“An information gain score for a given document is indicative of additional information that is included in the document beyond information contained in documents that were previously viewed by the user.”

That is a nutshell description of the patent, with the key insight being that the information gain scoring happens on pages after the user has seen the first search results.

More Specific Context: Automated Assistants

The second paragraph in the section titled “Background” is slightly more specific and adds another layer of context for the invention because it mentions links. Specifically, it’s about a user who makes a search query and receives links to search results – no information gain score is calculated yet.

The Background section says:

“For example, a user may submit a search request and be provided with a set of documents and/or links to documents that are responsive to the submitted search request.”

The next part builds on top of a user having made a search query:

“Also, for example, a user may be provided with a document based on identified interests of the user, previously viewed documents of the user, and/or other criteria that may be utilized to identify and provide a document of interest. Information from the documents may be provided via, for example, an automated assistant and/or as results to a search engine. Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

That last sentence is poorly worded.

Here’s the original sentence:

“Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

Here’s how it makes more sense:

“Further, information from the documents may be provided to the user… based on continued searching after the user has ended a search session.”

The information provided to the user is “in response to a search request and/or may be automatically served to the user”

It’s a little clearer if you put parentheses around it:

Further, information from the documents may be provided to the user (in response to a search request and/or may be automatically served to the user) based on continued searching after the user has ended a search session.

Takeaways:

  • The patent describes identifying documents that are relevant to the “interests of the user” based on “previously viewed documents” “and/or other criteria.”
  • It sets a general context of an automated assistant “and/or” a search engine
  • Information from the documents that are based on “previously viewed documents” “and/or other criteria” may be shown after the user continues searching.

More Specific Context: Chatbot

The patent next adds an additional layer of context and specificity by mentioning how chatbots can “extract” an answer from a web page (“document”) and show that as an answer. This is about showing a summary that contains the answer, kind of like featured snippets, but within the context of a chatbot.

The patent explains:

“In some cases, a subset of information may be extracted from the document for presentation to the user. For example, when a user engages in a spoken human-to-computer dialog with an automated assistant software process (also referred to as “chatbots,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” “conversational agents,” “virtual assistants,” etc.), the automated assistant may perform various types of processing to extract salient information from a document, so that the automated assistant can present the information in an abbreviated form.

As another example, some search engines will provide summary information from one or more responsive and/or relevant documents, in addition to or instead of links to responsive and/or relevant documents, in response to a user’s search query.”

The last sentence sounds like it’s describing something that’s like a featured snippet or like AI Overviews where it provides a summary. The sentence is very general and ambiguous because it uses “and/or” and “in addition to or instead of” and isn’t as specific as the preceding sentences. It’s an example of a patent being general for legal reasons.

Ranking The Next Set Of Search Results

The next section is called the Summary, and it goes into more detail about how the Information Gain score represents how likely the user will be interested in the next set of documents. It’s not about ranking the initial search results; it’s about ranking the next set of search results (based on a related topic).

It states:

“An information gain score for a given document is indicative of additional information that is included in the given document beyond information contained in other documents that were already presented to the user.”

Ranking Based On Topic Of Web Pages

It then talks about presenting the web page in a browser, audibly reading the relevant part of the document or audibly/visually presenting a summary of the document (“audibly/visually presenting salient information extracted from the document to the user, etc.”)

But the part that’s really interesting is when it next explains using a topic of the web page as a representation of the content, which is used to calculate the information gain score.

It describes many different ways of extracting a representation of what the page is about. But what’s important is that it describes calculating the Information Gain score based on a representation of what the content is about, like the topic.

“In some implementations, information gain scores may be determined for one or more documents by applying data indicative of the documents, such as their entire contents, salient extracted information, a semantic representation (e.g., an embedding, a feature vector, a bag-of-words representation, a histogram generated from words/phrases in the document, etc.) across a machine learning model to generate an information gain score.”
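To make that concrete, here is a toy illustration (my own sketch, not the patent’s implementation) of the kind of computation the passage describes: each document is reduced to a vector representation, and a candidate’s “information gain” is scored by how dissimilar it is from the documents the user has already seen.

```python
import numpy as np

def information_gain_scores(seen_vecs, candidate_vecs):
    """Toy 'information gain' score: how dissimilar each candidate document's
    representation is from everything the user has already been shown.
    The vectors stand in for any of the representations the patent lists
    (embeddings, bag-of-words, histograms, etc.)."""
    seen = np.vstack(seen_vecs)
    scores = []
    for cand in candidate_vecs:
        # cosine similarity of the candidate to each previously seen document
        sims = seen @ cand / (np.linalg.norm(seen, axis=1) * np.linalg.norm(cand) + 1e-9)
        # novelty = 1 minus similarity to the *closest* seen document
        scores.append(1.0 - float(np.max(sims)))
    return scores

# Hypothetical usage: rank a second set of documents by how much new
# information they add beyond the page the user has already viewed.
seen = [np.array([0.9, 0.1, 0.0])]                 # document already shown
candidates = {
    "page_a": np.array([0.88, 0.12, 0.0]),         # near-duplicate of what was shown
    "page_b": np.array([0.2, 0.7, 0.1]),           # related, but adds new information
}
gain = information_gain_scores(seen, candidates.values())
ranked = sorted(zip(candidates, gain), key=lambda x: x[1], reverse=True)
print(ranked)  # page_b scores higher because it adds more new information
```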

The patent goes on to describe ranking a first set of documents and using the Information Gain scores to rank additional sets of documents that anticipate follow up questions or a progression within a dialog of what the user is interested in.

The automated assistant can in some implementations query a search engine and then apply the Information Gain rankings to the multiple sets of search results (that are relevant to related search queries).

There are multiple variations of doing the same thing but in general terms this is what it describes:

“Based on the information gain scores, information contained in one or more of the new documents may be selectively provided to the user in a manner that reflects the likely information gain that can be attained by the user if the user were to be presented information from the selected documents.”

What All Versions Of The Patent Have In Common

All versions of the patent share general similarities over which more specifics are layered in over time (like adding onions to a mushroom pizza). The following is the baseline of what all the versions have in common.

Application Of Information Gain Score

All versions of the patent describe applying the information gain score to a second set of documents that have additional information beyond the first set of documents. Obviously, there are no criteria or information for guessing what the user is going to search for when they start a search session, so information gain scores are not applied to the first set of search results.

Examples of passages that are the same for all versions:

  • A second set of documents is identified that is also related to the topic of the first set of documents but that have not yet been viewed by the user.
  • For each new document in the second set of documents, an information gain score is determined that is indicative of, for the new document, whether the new document includes information that was not contained in the documents of the first set of documents…

Automated Assistants

All four versions of the patent refer to automated assistants that show search results in response to natural language queries.

The 2018 and 2023 versions of the patent both mention search engines 25 times. The 2018 version mentions “automated assistant” 74 times, and the latest version mentions it 69 times.

They all make references to “conversational agents,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” and “virtual assistants.”

It’s clear that the emphasis of the patent is on automated assistants, not the organic search results.

Dialog Turns

Note: In everyday language we use the word dialogue. In computing, it’s spelled dialog.

All versions of the patent refer to a way of interacting with the system in the form of a dialog, specifically a dialog turn. A dialog turn is the back and forth that happens when a user asks a question using natural language, receives an answer, and then asks a follow-up question or another question altogether. This can happen as typed natural language, text to speech (TTS), or spoken audio.

The main aspect the patents have in common is the back and forth in what is called a “dialog turn.” All versions of the patent have this as a context.

Here’s an example of how the dialog turn works:

“Automated assistant client 106 and remote automated assistant 115 can process natural language input of a user and provide responses in the form of a dialog that includes one or more dialog turns. A dialog turn may include, for instance, user-provided natural language input and a response to natural language input by the automated assistant.

Thus, a dialog between the user and the automated assistant can be generated that allows the user to interact with the automated assistant …in a conversational manner.”

Problems That Information Gain Scores Solve

The main feature of the patent is to improve the user experience by understanding the additional value that a new document provides compared to documents that a user has already seen. This additional value is what is meant by the phrase Information Gain.

There are multiple ways that information gain is useful. One that all versions of the patent describe is in the context of an audio response, where a long-winded answer is a poor experience, including in a TTS (text to speech) context.

The patent explains the problem of a long-winded response:

“…and so the user may wait for substantially all of the response to be output before proceeding. In comparison with reading, the user is able to receive the audio information passively, however, the time taken to output is longer and there is a reduced ability to scan or scroll/skip through the information.”

The patent then explains how information gain can speed up answers by eliminating redundant (repetitive) answers and answers that are insufficient and force the user into another dialog turn.

This part of the patent refers to the information density of a section of a web page, a section that answers the question with the fewest words. Information density is about how “accurate,” “concise,” and “relevant” the answer is while avoiding repetitiveness. Information density is especially important for audio/spoken answers.

This is what the patent says:

“As such, it is important in the context of an audio output that the output information is relevant, accurate and concise, in order to avoid an unnecessarily long output, a redundant output, or an extra dialog turn.

The information density of the output information becomes particularly important in improving the efficiency of a dialog session. Techniques described herein address these issues by reducing and/or eliminating presentation of information a user has already been provided, including in the audio human-to-computer dialog context.”

The idea of “information density” is important in a general sense because dense, concise content communicates better for users, but it’s probably even more important in the context of chatbot search results, whether spoken or not. Google AI Overviews shows snippets from a web page but, maybe more importantly, communicating in a concise manner is the best way to stay on topic and make it easy for a search engine to understand content.

Search Results Interface

All versions of the Information Gain patent are clear that the invention is not in the context of organic search results. It’s explicitly within the context of ranking web pages within a natural language interface of an automated assistant and an AI chatbot.

However, there is a part of the patent that describes a way of showing users the second set of results within a “search results interface.” The scenario is that the user sees an answer and then is interested in a related topic. The second set of ranked web pages is shown in a “search results interface.”

The patent explains:

“In some implementations, one or more of the new documents of the second set may be presented in a manner that is selected based on the information gain stores. For example, one or more of the new documents can be rendered as part of a search results interface that is presented to the user in response to a query that includes the topic of the documents, such as references to one or more documents. In some implementations, these search results may be ranked at least in part based on their respective information gain scores.”

…The user can then select one of the references and information contained in the particular document can be presented to the user. Subsequently, the user may return to the search results and the references to the document may again be provided to the user but updated based on new information gain scores for the documents that are referenced.

In some implementations, the references may be reranked and/or one or more documents may be excluded (or significantly demoted) from the search results based on the new information gain scores that were determined based on the document that was already viewed by the user.”

What is a search results interface? I think it’s just an interface that shows search results.

Let’s pause here to underline that the patent is not about ranking web pages that are comprehensive about a topic. The overall context of the invention is showing documents within an automated assistant.

A search results interface is just an interface that shows search results; it’s never described as being the organic search results.

There’s more that is the same across all versions of the patent, but the above covers the important general outline and context.

Claims Of The Patent

The claims section is where the scope of the actual invention is described and for which legal protection is sought. It is mainly focused on the invention and less so on the context. Thus, there is no mention of search engines, automated assistants, audible responses, or TTS (text to speech) within the Claims section. What remains is the context of a search results interface, which presumably covers all of those contexts.

Context: First Set Of Documents

It starts out by outlining the context of the invention. This context is receiving a query, identifying the topic, ranking a first group of relevant web pages (documents), selecting at least one of them as most relevant, and either showing that document or communicating the information from it (like a summary).

“1. A method implemented using one or more processors, comprising: receiving a query from a user, wherein the query includes a topic; identifying a first set of documents that are responsive to the query, wherein the documents of the set of documents are ranked, and wherein a ranking of a given document of the first set of documents is indicative of relevancy of information included in the given document to the topic; selecting, based on the rankings and from the documents of the first set of documents, a most relevant document providing at least a portion of the information from the most relevant document to the user;”

Context: Second Set Of Documents

Then what immediately follows is the part about ranking a second set of documents that contain additional information. This second set of documents is ranked using the information gain scores to show more information after showing a relevant document from the first group.

This is how it explains it:

“…in response to providing the most relevant document to the user, receiving a request from the user for additional information related to the topic; identifying a second set of documents, wherein the second set of documents includes at one or more of the documents of the first set of documents and does not include the most relevant document; determining, for each document of the second set, an information gain score, wherein the information gain score for a respective document of the second set is based on a quantity of new information included in the respective document of the second set that differs from information included in the most relevant document; ranking the second set of documents based on the information gain scores; and causing at least a portion of the information from one or more of the documents of the second set of documents to be presented to the user, wherein the information is presented based on the information gain scores.”
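Under the same toy vector setup as the earlier sketch, the flow of this claim can be pictured as follows. This is a hypothetical illustration, not the patented method: rank the first set by relevance to the query, present the most relevant document, then rank the remaining documents by how much new information they add beyond it.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def answer_and_follow_up(query_vec, docs):
    """Hypothetical two-step flow mirroring the claim: rank the first set by
    relevance to the query, present the top document, then rank the remaining
    documents by how much new information they add beyond that document."""
    # First set: rank by relevance to the query topic.
    first_ranking = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    most_relevant_name, most_relevant_vec = first_ranking[0]

    # Second set: everything not yet shown, ranked by a toy information gain
    # score (dissimilarity to the document the user has already seen).
    second_set = first_ranking[1:]
    second_ranking = sorted(second_set, key=lambda kv: 1.0 - cosine(most_relevant_vec, kv[1]), reverse=True)
    return most_relevant_name, [name for name, _ in second_ranking]

# Placeholder documents and query vector for illustration only.
docs = {
    "best_hotels": np.array([0.9, 0.1, 0.0]),
    "hotel_deals": np.array([0.85, 0.15, 0.0]),   # mostly repeats the first answer
    "things_to_do": np.array([0.3, 0.8, 0.1]),    # related, but adds new information
}
answer, follow_ups = answer_and_follow_up(np.array([1.0, 0.0, 0.0]), docs)
print(answer, follow_ups)  # best_hotels first, then things_to_do ahead of hotel_deals
```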

Granular Details

The rest of the claims section contains granular details about the concept of Information Gain, which is a ranking of documents based on what the user already has seen and represents a related topic that the user may be interested in. The purpose of these details is to lock them in for legal protection as part of the invention.

Here’s an example:

The method of claim 1, wherein identifying the first set comprises:
causing to be rendered, as part of a search results interface that is presented to the user in response to a previous query that includes the topic, references to one or more documents of the first set;
receiving user input that that indicates selection of one of the references to a particular document of the first set from the search results interface, wherein at least part of the particular document is provided to the user in response to the selection;

To make an analogy, it’s describing how to make the pizza dough, clean and cut the mushrooms, etc. It’s not important for our purposes to understand it as much as the general view of what the patent is about.

Information Gain Patent

An opinion was shared on social media that this patent has something to do with ranking web pages in the organic search results. I read the patent and discovered that’s not how it works. It’s a good patent, and it’s important to understand it correctly. I analyzed multiple versions of the patent to see what they had in common and what was different.

A careful reading of the patent shows that it is clearly focused on anticipating what the user may want to see based on what they have already seen. To accomplish this the patent describes the use of an Information Gain score for ranking web pages that are on topics that are related to the first search query but not specifically relevant to that first query.

The context of the invention is generally automated assistants, including chatbots. A search engine could be used as part of finding relevant documents but the context is not solely an organic search engine.

This patent could be applicable to the context of AI Overviews. I would not limit the context to AI Overviews as there are additional contexts such as spoken language in which Information Gain scoring could apply. Could it apply in additional contexts like Featured Snippets? The patent itself is not explicit about that.

Read the latest version of the Information Gain patent:

Contextual estimation of link information gain

Featured Image by Shutterstock/Khosro

The Long-Term Strategy of Building A Personal Brand Through Content And Value With Eric Enge via @sejournal, @theshelleywalsh

Building a brand and focusing on brand awareness have become a current topic of discussion across SEO social media, but this is not a new concept; it’s just surfacing again.

After the infamous Panda update in 2011, the rise of the “brand” entered into the conversation as Google began to put its emphasis on surfacing trusted brands to push out lesser-known exact match domains.

SEO professionals have always understood how important it is to develop their own “personal brand.” Forums, coveted conference speaking slots, blogging, and writing books are proven and successful ways to build trust and authority in the industry. Or any industry.

Over the last few years, I have been speaking to Pioneers of the industry about the early days of SEO and their experience navigating twenty-five years of Google.

One of the recurrent themes across these interviews is that most of the Pioneers built a personal brand, either by design or by accident, and their hard-earned reputation has helped to grow their SEO agency or business.

Building a brand is a long-term strategy that is not easy and cannot be gamed, which is why it’s such a strong signal of trust and reputation.

Eric Enge leveraged data studies and writing books to gain recognition, which culminated in him being named Search Personality of the Year in 2016. Eric is well-respected in the industry for his ethical approach to SEO and business and his use of quality content to build reputation.

Although he had to learn this the hard way through trial and error.

In the early days, Eric started out with considerable success from ranking lead generation sites through link schemes and buying links, until a manual penalty turned off his hugely profitable income overnight. To get the penalty overturned took over a year of investment and diligence with a commitment to follow a “white hat” approach.

Eric was so grateful for his second chance that he has been an advocate for an ethical approach to SEO ever since. He has proven that building quality content that gives back to an industry is a better long-term option than buying links.

What stands out in Eric’s story is how each investment of his time led to the next opportunity and the next. His consistent application of effort and hard work was what led him to be invited to co-author “The Art of SEO.”

This same story of consistent effort is replicated across most successful people. You don’t get “overnight success.”

For example, I have known Aleyda Solis for 14 years, and during this time, I have never known anyone to work as hard at speaking, producing content, and continually giving to the industry. She is deservedly one of the most influential people in SEO today, if not the most influential.

I talked to Eric about a wide range of topics, and this article focuses on a part of our conversation about investing in good content. It’s just a small excerpt of our wide-ranging discussion and you can watch the full video here.

Eric Enge Talks About His Journey Of Creating Content And Value

Shelley Walsh: “Eric, you started writing for Search Engine Watch in 2008, and from your experience, you said that you were given a second chance and that really changed the way you looked at everything. Did that have an impact on the direction of making you want to start giving back to the community in any way?”

Eric Enge: “It’s all part of the same sequence of events, as it were. As I mentioned, after we got back in the index in December 2005, Matt Cutts – since I had a certain amount of contact with him – followed some of the content I published. Then, prior to 2008, he had actually awarded me runner-up for Best White Hat Blog. That’s when Rebecca Lieb and Elizabeth Osmeloski of Search Engine Watch decided to give me a shot at writing for them. That was all search news and I was writing a post a day.

That exposure put me in a position where I had much more of an audience than I did on my site. But I was also getting to a point where I had begun to develop some visibility from the positions I was taking in my approach to Google, and I had a certain amount of visibility developing inside of Google.

I managed to persuade Shashi Seth who was working at Google to let me interview him and publish it on the Stone Temple blog. That was my first interview of an industry luminary, if you will. He later had a VP role at Yahoo, so he had quite a notable career.

I’d also been following this guy called Rand Fishkin. One day, Rand wrote a post that said, Surely someone would want to get the 10,000 links and 40,000 social shares that would result from doing a study comparing how analytics programs measure traffic and the differences between them.

I was the first to respond to this, and I said something like, ‘I got you covered, I’m going to do this.’ The study turned into hundreds of hours of work, because getting sites lined up to agree to run multiple different analytics programs at the same time and the tracking of all the data and doing all that was very intense.

But I did it, and I published the Web Analytics Shootout. It didn’t get 10,000 links or 40,000 social shares, by the way, but it did attract Rand’s attention that I had followed through on what was obviously a massive effort. From this, I got my first speaking engagement at Search Engine Strategies in San Jose.

All the while, the continued messaging was received well by people at Google, and they understood that this was more than a tactic for me. I think that resonated a lot. As I said earlier, when you give to someone or give back to someone, strangely enough, sometimes it comes back your way.

After fixing the early problems, I never again bought a link, never spammed a thing, and did extremely well. I proved, even back in the heyday of link buying, that you didn’t need to do it to build a successful business.”

Walsh: “You’ve just rewritten the fourth edition of “The Art Of SEO” (probably the best-known book in the industry). But how did that come about in 2009, and how did you become the lead author alongside Rand Fishkin, Stephan Spencer, and Jessie Stricchiola?”

Enge: “Between starting to speak, writing at Search Engine Watch, the interviews I was publishing on my blog, and also doing data-driven research studies, I was attracting a lot of attention to Stone Temple and I developed some recognition.

Back in 2008, Rand and Stephan had decided to collaborate on a book and persuaded O’Reilly to let them publish through them. Separately, Jessie Stricchiola had signed an agreement with O’Reilly for “The Art of SEO” title, but they were having trouble progressing. Then O’Reilly put Jessie together with Stephan and Rand, and they tried to do something, but it was going too slowly.

At another SES, Stephan told me about this project involving him and Rand and Jessie, and that they needed someone to drive the process because they were having trouble, and didn’t have enough time, etc.

The understanding was that I would be the last named author. Thirteen weeks later, I had written the first draft of all 13 chapters. I heavily leveraged stuff that each of them had previously published and mashed it into a single book. Then the review process started, which is what you underestimate with a book and how grueling that will be.

After all the work I put in, I think it was Rand who brought up that he didn’t think it was appropriate that I’d be the last-named author. After doing the majority of everything, I didn’t think it was appropriate either.

Stephan wanted to be the first named author, so we had a very mature discussion about the whole thing and we needed a way to break that deadlock. Jessie’s novel idea was to look at the New York Times headline from the following day, and from the first letter of the first headline, the name that was closest would be the lead.

I woke up at three in the morning to look at the New York Times online. Every other headline on the entire edition that day began with the letter S. The only one that didn’t was the main headline which was an ‘F’!

The whole book and I’m doing a content marketing course – they’re all about just wanting to give people tools to help them in their careers. I know Stephan has said this to me many times too, so many people have benefited from the various editions of “The Art of SEO.” The industry has been so good to me and this is my way of giving back.”

How To Build Trust And Credibility

What you can take from this article is that applied effort and hard work are consistent across all successful people I know in this industry. There are no shortcuts to recognition.

At SEJ, we work with some of the best contributors in the industry and they have all proven themselves through the value they give to the industry through their efforts.

As we learn to adapt to the introduction of AI and how that might change the industry and ways of working, content production is the one area that stands to lose the most.

Any type of content that is difficult to produce and takes effort is most likely to resist the proliferation of AI content.

Thought leadership, interviews, data studies and experiments are where you can build credibility. And also, kind of ironically, where you stand the most chance of being cited by generative AI.

Thank you to Eric Enge for being my guest on SEO Pioneers.



Featured Image from author

Google vs Microsoft Bing: A Detailed Comparison Of Two Search Engines via @sejournal, @wburton27

Between Google and Bing, which search engine should you focus on? Should you focus on both or prioritize one over the other?

Google is still the world’s most popular search engine and the dominant app store player, but things are changing quickly in an AI-driven world.

With the rise of artificial intelligence and both Bing and Google incorporating AI – i.e., Microsoft Copilot powered by OpenAI’s GPT-4, Bing Chat, and Google Gemini – into their algorithms and into the search engine results pages (SERPs), things are changing fast.

Let’s explore.

Google Vs. Microsoft Bing Market Share

One of the first distinctions between Microsoft Bing and Google is market share. According to Statcounter, in the US:

  • Google fell to 86.58%, down from 86.94% in March and 88.88% YoY.
  • Microsoft Bing grew to 8.24%, up from 8.04% in March and up from 6.43% YoY.
  • Yahoo grew to 2.59%, up from 2.48% in March and up from 2.33% YoY.

That’s a pretty big deal: Bing is growing while Google’s share is shrinking.

Globally

Google had a 91.05% search market share in June 2024, according to Statcounter’s revised data, which is down from 91.38% in March and 92.82% YoY. Google’s highest search market share during the past 12 months, globally, was 93.11% last May.

While that may make it tempting to focus on Google alone, Microsoft Bing provides good conversions and has a user base that shouldn’t be ignored. Bing’s usage has grown because of the AI-powered feature Bing Chat, which has attracted new users.

Bing is also used by digital assistants such as Alexa and Cortana.

Bing has around 100 million daily active users, which is a number you can’t ignore. It’s particularly important to optimize for Bing if you’re targeting an American audience. In fact, 28.3% of online queries in the U.S. are powered by Microsoft properties when you factor in Yahoo and voice searches.

Some have wondered over the years whether Bing is an acronym for “Because It’s Not Google.” I’m not sure how true that is, but the name was introduced in 2009, when Microsoft rebranded its predecessor, Live Search.

Another fun tidbit is that Ahrefs recently did a study on the Top 100 Bing searches globally, and the #1 query searched was [Google].

Comparing Google Vs. Microsoft Bing’s Functionality

From a search functionality perspective, the two search engines are similar, but Google offers more core features:

Feature Google  Microsoft Bing
Text Search Yes Yes
Video Search Yes Yes
Image Search Yes Yes
Maps Yes Yes
News Yes Yes
Shopping Yes Yes
Books Yes No
Flights Yes No
Finance Yes No
Scholarly Literature Yes No

Comparing AI Functionality

  • AI Accuracy – Google: Prone to errors. Bing: More accurate, since it is based on OpenAI GPT-4.
  • Integration – Google: Google Workspace. Bing: Microsoft 365 apps (Word, PowerPoint, Excel, etc.).
  • Image Generation – Google: Handles complex image prompts better than Gemini. Bing: Allows users to use existing images as prompts for modifications, a feature not in Copilot.
  • Knowledge Base – Google: Accesses up-to-date info and has access to the web. Bing: Copilot may lag due to potentially outdated databases.
  • Summarization – Google: Provides concise summaries for content within Google’s ecosystem (i.e., YouTube videos or emails). Bing: Good at summarizing meetings, writing emails, etc.
  • Context Window – Google: Gemini has a significantly larger context window of 2 million tokens (or up to 10 million for researchers), allowing it to process much more information at once. Bing: Microsoft Copilot (using GPT-4) has a context window of up to 100,000 tokens.
  • AI in Results – Google: Yes (AI Overviews). Bing: Yes.
  • Focus – Google: Research. Bing: Business and customer service applications.
  • Pricing – Similar for both.

How Google & Microsoft Bing Differ In Size Of Index And Crawling

Google says:

“The Google Search index contains hundreds of billions of webpages and is well over 100,000,000 gigabytes in size.”

Even so, not even Google can crawl the entire web. That is just not going to happen.

This is why using structured data is so important, especially now with AI overviews. It provides a data feed about your content so Google can understand it better, which can help you qualify for rich results and get more clicks and impressions.
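For reference, structured data is typically added to a page as JSON-LD markup in the <head>. The snippet below is a generic, hypothetical Article example (the headline, author, and dates are placeholders, not values from this article), generated with Python just to show the shape of the markup:

```python
import json

# Hypothetical Article markup; swap in your own headline, author, and dates.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2024-07-01",
    "dateModified": "2024-07-15",
}

# Emit the <script> tag you would place in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```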

Microsoft Bing hasn’t released similar figures. However, one search engine index size estimate website puts the Microsoft Bing index at somewhere between 8 and 14 billion web pages.

The two engines have shared a little about their approaches to web indexing.

Microsoft Bing says:

“Bingbot uses an algorithm to determine which sites to crawl, how often, and how many pages to fetch from each site. The goal is to minimize bingbot crawl footprint on your web sites while ensuring that the freshest content is available.”

Around the same time the above statement was made, John Mueller from Google said:

“I think the hard part here is that we don’t crawl URLs with the same frequency all the time. So, some URLs we will crawl daily. Some URLs maybe weekly.

Other URLs every couple of months, maybe even every once half year or so. So, this is something that we try to find the right balance for so that we don’t overload your server.”

Google has a mobile-first index, while Microsoft Bing takes a different stance and does not have plans to apply a mobile-first indexing policy.

Instead, Microsoft Bing maintains a single index that is optimized for both desktop and mobile, so it is important to make sure your site experience is optimized, loads quickly, and gives users what they need.

Google has evolved into more than just a search engine with products like Gmail, Maps, Chrome OS, Android OS, YouTube, and more.

Microsoft also offers email via Outlook, as well as other services like Office Online and OneDrive.

Unlike Google, however, Microsoft does not have its own mobile operating system; instead, Bing is available on Windows devices and through apps on Android and iOS.

Now, let’s take a look at where Bing is on par with Google – or superior.

Differences In User Interface & Tools

Google has a clean, simple interface that many people find easy to use, but for some queries, AI overviews are shown.

Screenshot from search for [bitcoin], Google, July 2024

So does Microsoft Bing, though Bing is a little bit more visual.

Screenshot from search for [bitcoin], Microsoft Bing, July 2024

Both search engines display useful information about related searches, images, companies, and news and do a great job of informing users of everything they need to know about a given topic.

SEO professionals love their tools and data.

Thankfully, both Google and Microsoft Bing have decent keyword research tools that offer insights into performance:

Keyword research tools: Screenshot from author, July 2024

One area where I think Google falls behind is the data it provides in Google Search Console. If you want to learn how to use it, check out How to Use Google Search Console for SEO: A Complete Guide.

One of the cool feature sets in Microsoft Bing is the ability to import data from Google Search Console.

Another Microsoft Bing feature that I think beats Google is the fact that it provides SEO Reports.

Screenshot from Bing Webmaster Tools, July 2024

According to Bing, these reports contain common page-level recommendations based on SEO best practices to improve your rankings.

The reports are automatically generated biweekly and provide tips as to what to work on or investigate.

See A Complete Guide to Bing Webmaster Tools to learn more.

Microsoft Bing May Excel In Image Search Over Google

When it comes to image search, Microsoft Bing may have a leg up on Google by providing higher-quality images.

Screenshot from search for [donuts], Microsoft Bing, July 2024

I like the filtering features in its image search, too, because you can turn titles off and search by image size, color, or type.

Test out Bing Visual Image Search, which allows you to do more with images. Check out its library of specialized skills to help you shop, identify landmarks and animals, or just have fun.

Then, see How Bing’s Image & Video Algorithm Works to learn more.

Screenshot from search for [donuts], Google, July 2024

Google has more images available for viewing than Microsoft Bing. Make the most of it with the tips in A Guide to Google’s Advanced Image Search.

However, Microsoft Bing provides more detailed information about the image users are searching for.

How Microsoft Bing & Google Handle Video Search

Microsoft Bing provides a much more visual video search results page, including a grid view of large thumbnails.

Google’s video results are more standard, featuring a vertical list of small thumbnails.

As you can see from the screenshot of a Bitcoin search below, Bing includes different filters like length, price, etc., which is a great user experience.

I did not get this experience with Google video search.

This is one area where Microsoft Bing outperforms Google.

Screenshot from video search for [bitcoin], Microsoft Bing, July 2024
Screenshot from video search for [bitcoin], Google, July 2024

Map Listings On Both Search Engines Matter For Local SEO

Both engines have similar functionality for maps, including map listings and local listings in the search engine results pages (SERPs).

Make sure you claim all your listings in both Microsoft Bing and Google and optimize your profile with business information, photos, proper categories, social information, etc.

Accurate name, address, and phone number (NAP) information is key. Google focuses on a user’s immediate vicinity by default, providing highly localized search results, while Bing offers a broader view of the wider area in local searches, which can be beneficial for some businesses.

See A Complete Guide to Google Maps Marketing.

Optimizing For Google Search Vs. Microsoft Bing

Google is primarily concerned with E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. Providing users with high-quality, useful, and helpful content that is factual, original, and valuable, along with a site that delivers a good user experience, will help you rank.

Backlinks are also still important.

Microsoft Bing has always been focused on on-page optimization. It emphasizes exact-match keywords in domain names and URLs, gives weight to social signals and official business listings, and favors older and established domains.

Unlike Google, Microsoft Bing states in its webmaster guidelines that it incorporates social signals into its algorithm. That means you should also focus on Twitter and Facebook – including building good quality content on your site and social platforms – if you want to rank highly in Microsoft Bing.

Content is extremely important for both search engines. Always focus on high-quality content that satisfies the user’s intent and informational needs. By creating useful and relevant content, users will naturally love it and link to it.

Speed, mobile-friendliness, and proper technical infrastructure matter for both engines.


Google Is Pushing Organic Results Further And Further Down The Page

As time goes on, Google continues to push organic results down the page, resulting in more revenue from paid search ads and fewer clicks from organic search. That is why a blended strategy is important to win in today’s SERPs.

Here is a comparison between a search in Google and a search in Bing. As you can see, Bing does not have as many ads as Google, and organic listings are more prominent on the page than on Google.

Screenshot from search for [project management software], Microsoft Bing, July 2024
Screenshot from search for [project management software], Google, July 2024

Google Search Vs. Microsoft Bing: The Verdict

Both Microsoft Bing and Google satisfy the informational needs of millions of people every day.

While Google remains the dominant player in the battle between Bing and Google, they both offer opportunities for your brand to reach new users and put you in front of millions of qualified customers who are looking for information, products, and services.

Bing offers unique advantages and opportunities, particularly in visual search, social signals, and certain niche markets.

Bing holds a smaller market share but has a growing user base.

Since optimizing for both Bing and Google is similar, with some key differences, I recommend optimizing for both. This can enhance overall visibility and reach, especially in a world where Google is pushing organic listings further and further down the page.



Featured Image: Overearth/Shutterstock

DHS plans to collect biometric data from migrant children “down to the infant”

The US Department of Homeland Security (DHS) plans to collect and analyze photos of the faces of migrant children at the border in a bid to improve facial recognition technology, MIT Technology Review can reveal. This includes children “down to the infant,” according to John Boyd, assistant director of the department’s Office of Biometric Identity Management (OBIM), where a key part of his role is to research and develop future biometric identity services for the government. 

As Boyd explained at a conference in June, the key question for OBIM is, “If we pick up someone from Panama at the southern border at age four, say, and then pick them up at age six, are we going to recognize them?”

Facial recognition technology (FRT) has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, and consist of either low-quality images drawn from the internet or small sample sizes with little diversity. Such limitations reflect the significant sensitivities regarding privacy and consent when it comes to minors. 

In practice, the new DHS plan could effectively solve that problem. According to Syracuse University’s Transactional Records Access Clearinghouse (TRAC), 339,234 children arrived at the US-Mexico border in 2022, the last year for which numbers are currently available. Of those children, 150,000 were unaccompanied—the highest annual number on record. If the face prints of even 1% of those children had been enrolled in OBIM’s craniofacial structural progression program, the resulting data set would dwarf nearly all existing data sets of real children’s faces used for aging research.

It’s unclear to what extent the plan has already been implemented; Boyd tells MIT Technology Review that to the best of his knowledge, the agency has not yet started collecting data under the program, but he adds that as “the senior executive,” he would “have to get with [his] staff to see.” He could only confirm that his office is “funding” it. Despite repeated requests, Boyd did not provide any additional information. 

Boyd says OBIM’s plan to collect facial images from children under 14 is possible due to recent “rulemaking” at “some DHS components,” or sub-offices, that have removed age restrictions on the collection of biometric data. US Customs and Border Protection (CBP), the US Transportation Security Administration, and US Immigration and Customs Enforcement declined to comment before publication. US Citizenship and Immigration Services (USCIS) did not respond to multiple requests for comment. OBIM referred MIT Technology Review back to DHS’s main press office. 

DHS did not comment on the program prior to publication but sent an emailed statement afterward: “The Department of Homeland Security uses various forms of technology to execute its mission, including some biometric capabilities. DHS ensures all technologies, regardless of type, are operated under the established authorities and within the scope of the law. We are committed to protecting the privacy, civil rights, and civil liberties of all individuals who may be subject to the technology we use to keep the nation safe and secure.”

Boyd spoke publicly about the plan in June at the Federal Identity Forum and Exposition, an annual identity management conference for federal employees and contractors. But close observers of DHS that we spoke with—including a former official, representatives of two influential lawmakers who have spoken out about the federal government’s use of surveillance technologies, and immigrants’ rights organizations that closely track policies affecting migrants—were unaware of any new policies allowing biometric data collection of children under 14. 

That is not to say that all of them are surprised. “That tracks,” says one former CBP official who has visited several migrant processing centers on the US-Mexico border and requested anonymity to speak freely. He says “every center” he visited “had biometric identity collection, and everybody was going through it,” though he was unaware of a specific policy mandating the practice. “I don’t recall them separating out children,” he adds.

“The reports of CBP, as well as DHS more broadly, expanding the use of facial recognition technology to track migrant children is another stride toward a surveillance state and should be a concern to everyone who values privacy,” Justin Krakoff, deputy communications director for Senator Jeff Merkley of Oregon, said in a statement to MIT Technology Review. Merkley has been an outspoken critic of both DHS’s immigration policies and of government use of facial recognition technologies.

Beyond concerns about privacy, transparency, and accountability, some experts also worry about testing and developing new technologies using data from a population that has little recourse to provide—or withhold—consent. 

Could consent “actually take into account the vast power differentials that are inherent in the way that this is tested out on people?” asks Petra Molnar, author of The Walls Have Eyes: Surviving Migration in the Age of AI. “And if you arrive at a border … and you are faced with the impossible choice of either: get into a country if you give us your biometrics, or you don’t.”

“That completely vitiates informed consent,” she adds.

This question becomes even more challenging when it comes to children, says Ashley Gorski, a senior staff attorney with the American Civil Liberties Union. DHS “should have to meet an extremely high bar to show that these kids and their legal guardians have meaningfully consented to serve as test subjects,” she says. “There’s a significant intimidation factor, and children aren’t as equipped to consider long-term risks.”

Murky new rules

The Office of Biometric Identity Management, previously known as the US Visitor and Immigrant Status Indicator Technology Program (US-VISIT), was created after 9/11 with the specific mandate of collecting biometric data—initially only fingerprints and photographs—from all non-US citizens who sought to enter the country. 

Since then, DHS has begun collecting face prints, iris and retina scans, and even DNA, among other modalities. It is also testing new ways of gathering this data—including through contactless fingerprint collection, which is currently deployed at five sites on the border, as Boyd shared in his conference presentation. 

Since 2023, CBP has been using a mobile app, CBP One, for asylum seekers to submit biometric data even before they enter the United States; users are required to take selfies periodically to verify their identity. The app has been riddled with problems, including technical glitches and facial recognition algorithms that are unable to recognize darker-skinned people. This is compounded by the fact that not every asylum seeker has a smartphone. 

Then, just after crossing into the United States, migrants must submit to collection of biometric data, including DNA. For a sense of scale, a recent report from Georgetown Law School’s Center on Privacy and Technology found that CBP has added 1.5 million DNA profiles, primarily from migrants crossing the border, to law enforcement databases since it began collecting DNA “from any person in CBP custody subject to fingerprinting” in January 2020. The researchers noted that an overrepresentation of immigrants—the majority of whom are people of color—in a DNA database used by law enforcement could subject them to over-policing and lead to other forms of bias. 

Generally, these programs only require information from individuals aged 14 to 79. DHS attempted to change this back in 2020, with proposed rules for USCIS and CBP that would have expanded biometric data collection dramatically, including by age. (USCIS’s proposed rule would have doubled the number of people from whom biometric data would be required, including any US citizen who sponsors an immigrant.) But the USCIS rule was withdrawn in the wake of the Biden administration’s new “priorities to reduce barriers and undue burdens in the immigration system.” Meanwhile, for reasons that remain unclear, the proposed CBP rule was never enacted. 

This would make it appear “contradictory” if DHS were now collecting the biometric data of children under 14, says Dinesh McCoy, a staff attorney with Just Futures Law, an immigrant rights group that tracks surveillance technologies. 

Neither Boyd nor DHS’s media office would confirm which specific policy changes he was referring to in his presentation, though MIT Technology Review has identified a 2017 memo, issued by then-Secretary of Homeland Security John F. Kelly, that encouraged DHS components to remove “age as a basis for determining when to collect biometrics.” 

DHS’s Office of Inspector General (OIG) referred to this memo as the “overarching policy for biometrics at DHS” in a September 2023 report, though none of the press offices MIT Technology Review contacted, including the main DHS press office, OIG, and OBIM, among others, would confirm whether this was still the relevant policy; we have not been able to confirm any related policy changes since then.

The OIG audit also found a number of fundamental issues related to DHS’s oversight of biometric data collection and use—including that its 10-year strategic framework for biometrics, covering 2015 to 2025, “did not accurately reflect the current state of biometrics across the Department, such as the use of facial recognition verification and identification.” Nor did it provide clear guidance for the consistent collection and use of biometrics across DHS, including age requirements. 

But there is also another potential explanation for the new OBIM program: Boyd says it is being conducted under the auspices of DHS’s undersecretary of science and technology, the office that leads much of the agency’s research efforts. Because it is for research, rather than to be used “in DHS operations to inform processes or decision making,” many of the standard restrictions on DHS use of face recognition and face capture technologies do not apply, according to a DHS directive.

Do you have any additional information on DHS’s craniofacial structural progression initiative? Please reach out with a non-work email to tips@technologyreview.com or securely on Signal at 626.765.5489. 

Some lawyers argue that it is problematic to change the age limit for data collection through department policy rather than through a federal rule, which would require a public comment period. McCoy, for instance, says any lack of transparency here amplifies the already “extremely challenging” task of “finding [out] in a systematic way how these technologies are deployed,” even though that transparency is key for accountability.

Advancing the field

At the identity forum and in a subsequent conversation, Boyd explained that this data collection is meant to advance the development of effective FRT algorithms. Boyd leads OBIM’s Future Identity team, whose mission is to “research, review, assess, and develop technology, policy, and human factors that enable rapid, accurate, and secure identity services” and to make OBIM “the preferred provider for identity services within DHS.” 

High-profile cases of missing children have long driven interest in understanding how children’s faces age. At the same time, doing so has posed technical challenges, both before facial recognition technology existed and since its adoption.

At its core, facial recognition identifies individuals by comparing the geometry of various facial features in an original face print with subsequent images. Based on this comparison, a facial recognition algorithm assigns a percentage likelihood that there is a match. 
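
As a rough illustration of that matching step, here is a minimal Python sketch. It assumes a face image has already been reduced to a numeric embedding vector by a separate model; the vectors, function names, and the simple similarity-to-percentage mapping are hypothetical and are not drawn from any DHS or vendor system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_likelihood(enrolled: np.ndarray, probe: np.ndarray) -> float:
    """Map similarity onto a rough 0-100% 'match likelihood'.
    Deployed systems calibrate this mapping and its decision threshold empirically."""
    similarity = cosine_similarity(enrolled, probe)
    return round(max(similarity, 0.0) * 100, 1)

# Illustrative only: real embeddings come from a face recognition model,
# not hand-written numbers like these.
enrolled_face_print = np.array([0.12, 0.87, -0.33, 0.45])
later_image = np.array([0.10, 0.80, -0.30, 0.50])

print(f"Match likelihood: {match_likelihood(enrolled_face_print, later_image)}%")
```

The hard part in practice is not this arithmetic but calibrating it against large, well-labeled data sets, which is exactly what the next section describes as lacking for children’s faces.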

But as children grow and develop, their bone structure changes significantly, making it difficult for facial recognition algorithms to identify them over time. (These changes tend to be even more pronounced in children under 14. In contrast, as adults age, the changes tend to be in the skin and muscle, and have less variation overall.) More data would help solve this problem, but there is a dearth of high-quality data sets of children’s faces with verifiable ages.

“What we’re trying to do is to get large data sets of known individuals,” Boyd tells MIT Technology Review. That means taking high-quality face prints “under controlled conditions where we know we’ve got the person with the right name [and] the correct birth date”—or, in other words, where they can be certain about the “provenance of the data.” 

For example, one data set used for aging research consists of 305 celebrities’ faces as they aged from five to 32. But these photos, scraped from the internet, contain too many other variables—such as differing image qualities, lighting conditions, and distances at which they were taken—to be truly useful. Plus, speaking to the provenance issue that Boyd highlights, their actual ages in each photo can only be estimated. 

Another tactic is to use data sets of adult faces that have been synthetically de-aged. Synthetic data is considered more privacy-preserving, but it too has limitations, says Stephanie Schuckers, director of the Center for Identification Technology Research (CITeR). “You can test things with only the generated data,” Schuckers explains, but the question remains: “Would you get similar results to the real data?”

(Hosted at Clarkson University in New York, CITeR brings together a network of academic and government affiliates working on identity technologies. OBIM is a member of the research consortium.) 

Schuckers’s team at CITeR has taken another approach: an ongoing longitudinal study of a cohort of 231 elementary and middle school students from the area around Clarkson University. Since 2016, the team has captured biometric data every six months (save for two years of the covid-19 pandemic), including facial images. They have found that the open-source face recognition models they tested can in fact successfully recognize children three to four years after they were initially enrolled. 

But the conditions of this study aren’t easily replicable at scale. The study images are taken in a controlled environment, all the participants are volunteers, the researchers sought consent from parents and the subjects themselves, and the research was approved by the university’s Institutional Review Board. Schuckers’s research also promises to protect privacy by requiring other researchers to request access, and by providing facial datasets separately from other data that have been collected. 

What’s more, this research still has technical limitations: the sample is small and overwhelmingly Caucasian, which means the findings may be less accurate when applied to other races.

Schuckers says she was unaware of DHS’s craniofacial structural progression initiative. 

Far-reaching implications

Boyd says OBIM takes privacy considerations seriously, and that “we don’t share … data with commercial industries.” Still, OBIM has 144 government partners with which it does share information, and it has been criticized by the Government Accountability Office for poorly documenting whom it shares information with and what privacy-protecting measures are in place.

Even if the data does stay within the federal government, OBIM’s findings regarding the accuracy of FRT for children over time could nevertheless influence how and when the rest of the government collects biometric data, as well as whether the broader facial recognition industry may also market its services for children. (Indeed, Boyd says sharing “results,” or findings about how accurate FRT algorithms are, is different from sharing the data itself.)

That this technology is being tested on people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies. 

In fact, Boyd previously helped advance the Department of Defense’s biometric systems in Iraq and Afghanistan, where he acknowledged that individuals lacked the privacy protections that would have been granted in many other contexts, despite the incredibly high stakes. Biometric data collected in those war zones—in some areas, from every fighting-age male—was used to identify and target insurgents, and being misidentified could mean death. 

These projects subsequently played a substantial role in influencing the expansion of biometric data collection by the Department of Defense, which now happens globally. And architects of the program, like Boyd, have taken important roles in expanding the use of biometrics at other agencies. 

“It’s not an accident” that this testing happens in the context of border zones, says Molnar. Borders are “the perfect laboratory for tech experimentation, because oversight is weak, discretion is baked into the decisions that get made … it allows the state to experiment in ways that it wouldn’t be allowed to in other spaces.” 

But, she notes, “just because it happens at the border doesn’t mean that that’s where it’s going to stay.”

Update: This story was updated to include comment from DHS.

Charts: Global Economic Outlook Q3 2024

Global growth will remain broadly stable, at 3.2% in 2024 and 3.3% in 2025, according to the International Monetary Fund’s July 2024 “World Economic Outlook” report, subtitled “The Global Economy in a Sticky Spot.”

The IMF updates its economic outlook twice yearly using a “bottom-up” approach, starting with individual countries and then aggregating into overall global projections.
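
As a simplified sketch of that bottom-up aggregation, the snippet below combines country-level growth projections into a single global figure using weights meant to stand in for each economy’s share of world output. The economies, weights, and growth rates are invented for illustration and are not the IMF’s actual inputs or methodology details.

```python
# Hypothetical country-level projections: growth in %, weight = assumed share of world output.
# Numbers are illustrative only, chosen so the toy total lands near a plausible headline figure.
projections = {
    "Economy A": {"growth": 2.6, "weight": 0.15},
    "Economy B": {"growth": 0.9, "weight": 0.12},
    "Economy C": {"growth": 4.8, "weight": 0.18},
    "Rest of world": {"growth": 3.4, "weight": 0.55},
}

def aggregate_global_growth(data: dict) -> float:
    """Weighted average of country growth rates, normalized by total weight."""
    total_weight = sum(c["weight"] for c in data.values())
    weighted_sum = sum(c["growth"] * c["weight"] for c in data.values())
    return round(weighted_sum / total_weight, 1)

print(f"Aggregated global growth: {aggregate_global_growth(projections)}%")  # 3.2% for these toy inputs
```

In practice, the IMF’s aggregation involves far more detail, including choices about market exchange-rate versus purchasing-power-parity weights, layered on top of this basic weighted-average idea.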

Per the IMF, growth in the United States will fall from 2.6% in 2024 to 1.9% in 2025, reflecting a slower-than-expected start to the year. The euro area will pick up from 0.5% in 2023 to 0.9% in 2024 and rise further to 1.5% in 2025.

The IMF projects growth in advanced economies to remain unchanged at 1.7% in 2024 and rise slightly to 1.8% in 2025.

Meanwhile, according to IMF estimates, growth in emerging markets and developing economies will decline marginally from 4.4% in 2023 to 4.3% in 2024 and 2025.

Per the IMF, the global consumer inflation rate, including food and energy, will fall from 6.7% in 2023 to 5.9% in 2024 and 4.4% in 2025.