OpenAI Expected to Integrate Real-Time Data In ChatGPT via @sejournal, @martinibuster

Sam Altman, CEO of OpenAI, dispelled rumors that a new search engine would be announced on Monday, May 13. Recent deals have raised the expectation that OpenAI will announce the integration of real-time content from English, Spanish, and French publications into ChatGPT, complete with links to the original sources.

OpenAI Search Is Not Happening

Many competing search engines have tried and failed to challenge Google as the leading search engine. A new wave of hybrid generative AI search engines is currently trying to knock Google from the top spot with arguably very little success.

Sam Altman is on record saying that creating a search engine to compete against Google is not a viable approach. He suggested that technological disruption was the way to replace Google by changing the search paradigm altogether. The speculation that Altman is going to announce a me-too search engine on Monday never made sense given his recent history of dismissing the concept as a non-starter.

So perhaps it’s not a surprise that he recently ended the speculation by explicitly saying that he will not be announcing a search engine on Monday.

He tweeted:

“not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”

“New Stuff” May Be Iterative Improvement

It’s quite likely that what’s going to be announced is iterative, meaning it improves ChatGPT rather than replacing it. This fits with how Altman recently described his approach to ChatGPT.

He remarked:

“And it does kind of suck to ship a product that you’re embarrassed about, but it’s much better than the alternative. And in this case in particular, where I think we really owe it to society to deploy iteratively.

There could totally be things in the future that would change where we think iterative deployment isn’t such a good strategy, but it does feel like the current best approach that we have and I think we’ve gained a lot from doing this and… hopefully the larger world has gained something too.”

Improving ChatGPT iteratively is Sam Altman’s preference and recent clues point to what those changes may be.

Recent Deals Contain Clues

OpenAI has been making deals with news media and user-generated content publishers since December 2023. Mainstream media has reported these deals as being about licensing content for training large language models. But they overlooked a key detail that we reported on last month: these deals give OpenAI access to real-time information, which OpenAI has stated will be surfaced with attribution in the form of links.

That means that ChatGPT users will gain the ability to access real-time news and to use that information creatively within ChatGPT.

Dotdash Meredith Deal

Dotdash Meredith (DDM) is the publisher of big brand publications such as Better Homes & Gardens, FOOD & WINE, InStyle, Investopedia, and People magazine. The deal that was announced goes way beyond using the content as training data. The deal is explicitly about surfacing the Dotdash Meredith content itself in ChatGPT.

The announcement stated:

“As part of the agreement, OpenAI will display content and links attributed to DDM in relevant ChatGPT responses. …This deal is a testament to the great work OpenAI is doing on both fronts to partner with creators and publishers and ensure a healthy Internet for the future.

Over 200 million Americans each month trust our content to help them make decisions, solve problems, find inspiration, and live fuller lives. This partnership delivers the best, most relevant content right to the heart of ChatGPT.”

A statement from OpenAI gives credibility to the speculation that OpenAI intends to directly show licensed third-party content as part of ChatGPT answers.

OpenAI explained:

“We’re thrilled to partner with Dotdash Meredith to bring its trusted brands to ChatGPT and to explore new approaches in advancing the publishing and marketing industries.”

Something that DDM also gets out of this deal is that OpenAI will enhance DDM’s in-house ad targeting in order to show more tightly focused contextual advertising.

Le Monde And Prisa Media Deals

In March 2024, OpenAI announced deals with two global media companies, Le Monde and Prisa Media. Le Monde is a French news publication and Prisa Media is a Spanish-language multimedia company. The interesting aspect of these two deals is that they give OpenAI access to real-time data in French and Spanish.

Prisa Media is a global Spanish-language media company based in Madrid, Spain, comprising magazines, newspapers, podcasts, radio stations, and television networks. Its reach extends from Spain to the Americas, with publications in the United States, Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Mexico, and Panama. That is a massive amount of real-time information, in addition to an audience of millions.

OpenAI explicitly announced that the purpose of this deal was to bring this content directly to ChatGPT users.

The announcement explained:

“We are continually making improvements to ChatGPT and are supporting the essential role of the news industry in delivering real-time, authoritative information to users. …Our partnerships will enable ChatGPT users to engage with Le Monde and Prisa Media’s high-quality content on recent events in ChatGPT, and their content will also contribute to the training of our models.”

That deal is not just about training data. It’s about bringing current events data to ChatGPT users.

The announcement elaborated in more detail:

“…our goal is to enable ChatGPT users around the world to connect with the news in new ways that are interactive and insightful.”

As noted in our April 30th article revealing that OpenAI will show links in ChatGPT, OpenAI intends to show third-party content with links back to the original sources.

OpenAI commented on the purpose of the Le Monde and Prisa Media partnership:

“Over the coming months, ChatGPT users will be able to interact with relevant news content from these publishers through select summaries with attribution and enhanced links to the original articles, giving users the ability to access additional information or related articles from their news sites.”

There are additional deals with other publishers, such as The Financial Times, which also stress that they will result in a new ChatGPT feature allowing users to interact with real-time news and current events.

OpenAI’s Monday May 13 Announcement

Many clues suggest that Monday’s announcement will give ChatGPT users the ability to interact with content about current events, which fits the terms of recent deals with news media organizations. Other features may be announced as well, but this is the one the evidence points to most strongly.

Watch Altman’s interview at Stanford University

Featured Image by Shutterstock/photosince

Google Defends Lack Of Communication Around Search Updates via @sejournal, @MattGSouthern

While Google informs the public about broad core algorithm updates, it doesn’t announce every minor change or tweak, according to Google’s Search Liaison Danny Sullivan.

The comments were in response to Glenn Gabe’s question about why Google doesn’t provide information about volatility following the March core update.

Gabe wrote:

“… when site owners think a major update is done, they are not expecting crazy volatility that sometimes completely reverses what happened with the major update.
The impact from whatever rolled out on 5/3 and now 5/8 into 5/9 has been strong.”

Sullivan explained that Google continuously updates its search ranking systems, with around 5,000 updates per year across different algorithms and components.

Many of these are minor adjustments that would go unnoticed, Sullivan says:

“If we were giving notice about all the ranking system updates we do, it would be like this:

Hi. It’s 1:14pm — we just did an update to system 112!
Hi. It’s 2:26pm — we just did an update to system 34!

That’s because we do around 5,000 updates per year.”

While Google may consider these changes minor, thousands of such tweaks combined can lead to significant shifts in rankings and traffic that sites need help understanding.

More open communication from Google could go a long way.

Ongoing Shifts From Web Changes

Beyond algorithm adjustments, Sullivan noted that search results can fluctuate due to the nature of web content.

Google’s ranking systems continually process new information, Sullivan explains:

“… already launched and existing systems aren’t themselves being updated in how they operate, but the information they’re processing isn’t static but instead is constantly changing.”

Google focuses communications on major updates versus a never-ending stream of notifications about minor changes.

Sullivan continues:

“This type of constant ‘hey, we did an update’ notification stuff probably isn’t really that useful to creators. There’s nothing to ‘do’ with those types of updates.”

Why SEJ Cares

Understanding that Google Search is an ever-evolving platform is vital for businesses and publishers that rely on search traffic.

It reiterates the need for a long-term SEO strategy focused on delivering high-quality, relevant content rather than reacting to individual algorithm updates.

However, we realize Google’s approach to announcing updates can leave businesses scrambling to keep up with ranking movements.

More insight into these changes would be valuable for many.

How This Can Help You

Knowing that Google continuously processes new web information in addition to making algorithm changes, you can set more realistic expectations after core updates.

Instead of assuming stability after a major update, anticipate fluctuations as Google’s systems adapt to new web data.


Featured Image: Aerial Film Studio/Shutterstock

Google’s Strategies For Dealing With Content Decay via @sejournal, @MattGSouthern

In the latest episode of the Search Off The Record podcast, Google Search Relations team members John Mueller and Lizzi Sassman did a deep dive into dealing with “content decay” on websites.

Outdated content is a natural issue all sites face over time, and Google has outlined strategies beyond just deleting old pages.

While removing stale content is sometimes necessary, Google recommends taking an intentional, format-specific approach to tackling content decay.

Archiving vs. Transitional Guides

Google advises against immediately removing content that becomes obsolete, like materials referencing discontinued products or services.

Removing content too soon could confuse readers and lead to a poor experience, Sassman explains:

“So, if I’m trying to find out like what happened, I almost need that first thing to know. Like, “What happened to you?” And, otherwise, it feels almost like an error. Like, “Did I click a wrong link or they redirect to the wrong thing?””

Sassman says you can avoid confusion by providing transitional “explainer” pages during deprecation periods.

A temporary transition guide informs readers of the outdated content while steering them toward updated resources.

Sassman continues:

“That could be like an intermediary step where maybe you don’t do that forever, but you do it during the transition period where, for like six months, you have them go funnel them to the explanation, and then after that, all right, call it a day. Like enough people know about it. Enough time has passed. We can just redirect right to the thing and people aren’t as confused anymore.”
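
Google doesn’t prescribe a specific implementation for that transition period, but the pattern is easy to sketch. The example below is a minimal illustration only, assuming a Flask-served site and hypothetical URLs: during the deprecation window, visitors to the retired page get a temporary (302) redirect to the explainer, and once enough time has passed, the route switches to a permanent (301) redirect straight to the replacement content.

```python
# pip install flask
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical URLs for a discontinued product line.
EXPLAINER_URL = "/products/old-widget-discontinued"   # transitional explainer page
REPLACEMENT_URL = "/products/new-widget"              # updated resource

# Flip to False once "enough people know about it" and the explainer
# is no longer needed.
IN_TRANSITION_PERIOD = True


@app.route("/products/old-widget")
def old_widget():
    if IN_TRANSITION_PERIOD:
        # Temporary redirect: signals this is not the final destination.
        return redirect(EXPLAINER_URL, code=302)
    # After the transition window, point permanently at the replacement.
    return redirect(REPLACEMENT_URL, code=301)
```

The 302 versus 301 distinction matters here: a temporary redirect tells browsers and crawlers the explainer is an interim stop, while the eventual permanent redirect consolidates visitors on the replacement page.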

When To Update Vs. When To Write New Content

For reference guides and content that provide authoritative overviews, Google suggests updating information to maintain accuracy and relevance.

However, for archival purposes, major updates may warrant creating a new piece instead of editing the original.

Sassman explains:

“I still want to retain the original piece of content as it was, in case we need to look back or refer to it, and to change it or rehabilitate it into a new thing would almost be worth republishing as a new blog post if we had that much additional things to say about it.”

Remove Potentially Harmful Content

Google recommends removing pages in cases where the outdated information is potentially harmful.

Sassman says she arrived at this conclusion when deciding what to do with a guide involving obsolete structured data:

“I think something that we deleted recently was the “How to Structure Data” documentation page, which I thought we should just get rid of it… it almost felt like that’s going to be more confusing to leave it up for a period of time.

And actually it would be negative if people are still adding markup, thinking they’re going to get something. So what we ended up doing was just delete the page and redirect to the changelog entry so that, if people clicked “How To Structure Data” still, if there was a link somewhere, they could still find out what happened to that feature.”

Internal Auditing Processes

To keep your content current, Google advises implementing a system for auditing aging content and flagging it for review.

Sassman says she sets automated alerts for pages that haven’t been checked in set periods:

“Oh, so we have a little robot to come and remind us, “Hey, you should come investigate this documentation page. It’s been x amount of time. Please come and look at it again to make sure that all of your links are still up to date, that it’s still fresh.””
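
Google hasn’t shared the tooling behind that reminder “robot,” but the idea is simple to approximate. Below is a minimal sketch under assumed conditions: a hypothetical content inventory recording when each page was last reviewed, checked against an assumed 180-day review interval.

```python
from datetime import date, timedelta

# Hypothetical inventory: page URL -> date the page was last reviewed.
LAST_REVIEWED = {
    "/docs/structured-data-intro": date(2023, 11, 2),
    "/docs/sitemaps": date(2024, 4, 18),
    "/blog/search-console-launch": date(2022, 7, 9),
}

REVIEW_INTERVAL = timedelta(days=180)  # assumed review cadence


def pages_due_for_review(inventory, today=None):
    """Return URLs whose last review is older than the review interval."""
    today = today or date.today()
    return [
        url for url, reviewed in inventory.items()
        if today - reviewed > REVIEW_INTERVAL
    ]


if __name__ == "__main__":
    for url in pages_due_for_review(LAST_REVIEWED):
        print(f"Reminder: re-check {url} (links, freshness, accuracy)")
```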

Context Is Key

Google’s tips for dealing with content decay center around understanding the context of outdated materials.

You want to prevent visitors from stumbling across obsolete pages without clarity.

Additional Google-recommended tactics include:

  • Prominent banners or notices clarifying a page’s dated nature
  • Listing original publish dates
  • Providing inline annotations explaining how older references or screenshots may be obsolete

How This Can Help You

Following Google’s recommendations for tackling content decay can benefit you in several ways:

  • Improved user experience: By providing clear explanations, transition guides, and redirects, you can ensure that visitors don’t encounter confusing or broken pages.
  • Maintained trust and credibility: Removing potentially harmful or inaccurate content and keeping your information up-to-date demonstrates your commitment to providing reliable and trustworthy resources.
  • Better SEO: Regularly auditing and updating your pages can benefit your website’s search rankings and visibility.
  • Archival purposes: By creating new content instead of editing older pieces, you can maintain a historical record of your website’s evolution.
  • Streamlined content management: Implementing internal auditing processes makes it easier to identify and address outdated or problematic pages.

By proactively tackling content decay, you can keep your website a valuable resource, improve SEO, and maintain an organized content library.

Listen to the full episode of Google’s podcast below:


Featured Image: Stokkete/Shutterstock

Google Defines “Content Decay” In New Podcast Episode via @sejournal, @MattGSouthern

In the latest episode of Google’s Search Off The Record podcast, hosts John Mueller and Lizzi Sassman discussed “content decay”—the natural process by which online content becomes outdated or loses relevance over time.

While not a widely used term among SEO professionals, the concept raises questions about how websites should handle aging content that may contain obsolete information, broken links, or outdated references.

What Is Content Decay?

Mueller, a Search Advocate at Google, defines content decay as:

“[Content decay is] something where, when you look at reference material, it’s kind of by definition old. People wrote about it because they’ve studied it for a really long time. So it’s an old thing. But that doesn’t mean it’s no longer true or no longer useful.”

It’s worth noting Mueller was initially unfamiliar with the term:

“When I looked at it, it sounded like this was a known term, and I felt inadequate when I realized I had no idea what it actually meant, and I had to interpret what it probably means from the name.”

Sassman, who oversees the Search Central website’s content, admitted she was also unfamiliar with content decay.

She stated:

“… it sounded a little bit negative … Like something’s probably wrong with the content. Probably it’s rotting or something has happened to it over time.”

After defining the term, the two dissected various approaches to handling content decay, using Google’s help documents as a case study.

Content Decay Not Necessarily A Bad Thing

Content decay isn’t, by definition, a bad thing.

Blog posts announcing past events or product changes may seem like sources of content decay.

However, Sassman advises keeping that content for historical accuracy.

Sassman gives an example, citing Google’s decision to keep pages containing the outdated term “Webmaster Tools.”

“If we went back and replaced everything where we said ‘Google Webmasters’ with ‘Search Console,’ it would be factually incorrect. Search Console didn’t exist at that point. It was Webmaster Tools.”

Avoiding User Confusion

According to Mueller, the challenge in dealing with content decay is “avoiding confusing people.”

Indicating when content is outdated, providing context around obsolete references, and sensible use of redirects can help mitigate potential confusion.

Mueller stated:

“People come to our site for whatever reason, then we should make sure that they find information that’s helpful for them and that they understand the context. If something is old and they search for it, they should be able to recognize, ‘Oh, maybe I have to rethink what I wanted to do because what I was searching for doesn’t exist anymore or is completely different now.’”

No One-Size-Fits-All Solution

There are no easy solutions to content decay. You must thoughtfully evaluate aging content, understanding that some pieces warrant archiving while others remain valuable historical references despite age.

Listen to the full episode of Google’s podcast below:

Why SEJ Cares

The concept of “content decay” addresses a challenge all website owners face – how to manage and maintain content as it ages.

Dealing with outdated website content is essential to creating a positive user experience and building brand trust.

How This Can Help You

By examining Google’s approaches, this podcast episode offers the following takeaways:

  • There’s value in preserving old content for historical accuracy.
  • Consider updating old pages to indicate outdated advice or deprecated features.
  • Establish an auditing process for routinely evaluating aging content.

FAQ

What does “content decay” mean in the context of SEO?

Online content tends to become outdated or irrelevant over time. This can happen due to industry changes, shifts in user interests, or simply the passing of time.

In the context of SEO, outdated content impacts how useful and accurate the information is for users, which can negatively affect website traffic and search rankings.

To maintain a website’s credibility and performance in search results, SEO professionals need to identify and update or repurpose content that has become outdated.

Should all outdated content be removed from a website?

Not all old content needs to be deleted. It depends on what kind of content it is and why it was created. Content that shows past events, product changes, or uses outdated terms can be kept for historical accuracy.

Old content provides context and shows how a brand or industry has evolved over time. It’s important to consider value before removing, updating, or keeping old content.

What are the best practices to avoid user confusion with outdated content?

Website owners and SEO professionals should take the following steps to avoid confusing users with outdated content:

  • Show when content was published or note if the information has changed since it was created.
  • Add explanations around outdated references to explain why they may no longer be relevant.
  • Set up redirects to guide users to the most current information if the content has moved or been updated.

These strategies help people understand a page’s relevance and assist them in getting the most accurate information for their needs.


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, May 2024. 

Ex-Google CEO Implies AI Search Will Have No Links via @sejournal, @martinibuster

Eric Schmidt, former CEO of Google, said in an interview that Google’s mission is to organize the world’s information, not to provide blue links. Schmidt’s pragmatic statements seem to describe a future where websites are unnecessary and advertising is increasingly effective.

Answers Without Links Is A Good User Experience?

The ex-CEO’s prediction of Google’s future seems to contradict statements by Google’s current CEO asserting that search and the web will continue to coexist, as well as statements by Danny Sullivan, who has said many times that a healthy web ecosystem is important to Google.

Yet many actions Google has taken in the past indicate that Eric Schmidt’s prediction fits perfectly with how Google has long ranked sites.

The early days of the web were navigated not just by search engines but by curated web directories that served as starting places for Internet users to find information, hopping from link to link across a hyperlinked Internet. The idea was that hyperlinks were how users found information.

Google Search not only ranked webpages from web directories; Google itself hosted a version of DMOZ, an open web directory curated by thousands of volunteers, much like Wikipedia is maintained by volunteer editors today.

But a day came when Google stopped ranking directories, and the reason given was that it was a better user experience to show answers rather than links to pages with more links (this event is likely archived somewhere on the WebmasterWorld forum; it happened a long time ago).

Then there are Google’s answers for flight tracking, package tracking, stock quotes, the time, and weather, all of which contain zero links.

Example Of An Answer Without Links

Eric Schmidt’s assertion that Google will take advantage of AI to show answers fits into Google’s design principle that showing answers is a good user experience if it fully satisfies the query.

The only difference between the old days and now is that AI has (mostly) unlocked the ability to show answers without linking to any websites.

So it’s not far-fetched that Google may decide that showing answers without links is a good user experience; there is precedent for that approach.

AI Is Underhyped

Schmidt put forward the idea that AI is not overhyped but in fact is underhyped.

He observed:

“I hate to tell you but I think this stuff is underhyped not overhyped. Because the arrival of intelligence of a non-human form is really a big deal for the world.

It’s coming. It’s here. It’s about to happen. It happens in stages. …the reason I’m saying it’s underhyped is you’re seeing the future of reasoning, the future of human interaction, the future of research, the future of planning is being invented right now.

There’s something called infinite context windows, which means that you can — it’s like having an infinite short-term memory, which I certainly don’t have, where you can basically keep feeding it information and it keeps learning and changing.”

Eric Schmidt On The Future Of Search

The interviewer asked Schmidt about a future where AI answers questions without links to sources on the web.

The interviewer asked this question:

“In a world where the AI provides the answer, and doesn’t necessarily need to send you to 12 places where you’re going to go find it yourself… what happens to all of that?”

Eric Schmidt answered:

“It’s pretty important to understand that Google is not about blue links, it’s about organizing the world’s information. What better tool than the arrival of AI to do that better.

Do you think you can monetize that? You betcha.”

Will Answers Without Links Happen?

It has to be reiterated that Eric Schmidt (LinkedIn profile) is no longer the CEO at Google or Executive Chairman & Technical Advisor at Alphabet (for four years now). His opinions may not reflect the current thinking within Google.

However it’s not unreasonable to speculate that maybe he is saying out loud what those within Google cannot officially discuss.

The most solid information we have now is that Google Search will continue to have links but that Google (and others like Apple) are moving ahead with AI assistants on mobile devices that can answer questions and perform tasks.

Watch the Eric Schmidt interview here:

Google Launches New ‘Saved Comparisons’ Feature For Analytics via @sejournal, @MattGSouthern

Google announced a new tool for Analytics to streamline data comparisons.

The ‘saved comparisons’ feature allows you to save filtered user data segments for rapid side-by-side analysis.

Google states in an announcement:

“We’re launching saved comparisons to help you save time when comparing the user bases you care about.

Learn how you can do that without recreating the comparison every time!”

Google links to a help page that lists several benefits and use cases:

“Comparisons let you evaluate subsets of your data side by side. For example, you could compare data generated by Android devices to data generated by iOS devices.”

“In Google Analytics 4, comparisons take the place of segments in Universal Analytics.”

Saved Comparisons: How They Work

The new comparisons tool allows you to create customized filtered views of Google Analytics data based on dimensions like platform, country, traffic source, and custom audiences.

These dimensions can incorporate multiple conditions using logic operators.

For example, you could generate a comparison separating “Android OR iOS” traffic from web traffic. Or you could combine location data like “Country = Argentina OR Japan” with platform filters.
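
Saved comparisons themselves are configured in the Analytics interface, but the same segmentation logic can be expressed programmatically through the GA4 Data API, which helps clarify how the conditions combine. The sketch below is for illustration only, assuming the google-analytics-data Python client, Application Default Credentials, and a hypothetical property ID; it builds the “Country = Argentina OR Japan” condition and combines it with a platform filter.

```python
# pip install google-analytics-data
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression,
    FilterExpressionList, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # uses Application Default Credentials

# (country = Argentina OR country = Japan) AND platform = Android
dimension_filter = FilterExpression(and_group=FilterExpressionList(expressions=[
    FilterExpression(or_group=FilterExpressionList(expressions=[
        FilterExpression(filter=Filter(
            field_name="country",
            string_filter=Filter.StringFilter(value="Argentina"))),
        FilterExpression(filter=Filter(
            field_name="country",
            string_filter=Filter.StringFilter(value="Japan"))),
    ])),
    FilterExpression(filter=Filter(
        field_name="platform",
        string_filter=Filter.StringFilter(value="Android"))),
]))

report = client.run_report(RunReportRequest(
    property="properties/123456789",  # hypothetical GA4 property ID
    dimensions=[Dimension(name="country"), Dimension(name="platform")],
    metrics=[Metric(name="activeUsers")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    dimension_filter=dimension_filter,
))

for row in report.rows:
    print([d.value for d in row.dimension_values],
          [m.value for m in row.metric_values])
```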

These customized comparison views can then be saved to the property level in Analytics.

Users with access can quickly apply saved comparisons to any report for efficient analysis without rebuilding filters.

Google’s documentation states:

“As an administrator or editor…you can save comparisons to your Google Analytics 4 property. Saved comparisons enable you and others with access to compare the user bases you care about without needing to recreate the comparisons each time.”

Rollout & Limitations

The saved comparisons feature is rolling out gradually. There’s a limit of 200 saved comparisons per property.

For more advanced filtering needs, such as sequences of user events, Google recommends creating a custom audience first and saving a comparison based on that audience definition.

Some reports may be incompatible if they don’t include the filtered dimensions used in a saved comparison. In that case, the documentation suggests choosing different dimensions or conditions for that report type.

Why SEJ Cares

The ability to create and apply saved comparisons addresses a time-consuming aspect of analytics work.

Analysts must view data through different lenses, segmenting by device, location, traffic source, etc. Manually recreating these filtered comparisons for each report can slow down production.

Any innovation streamlining common tasks is welcome in an arena where data teams are strapped for time.

How This Can Help You

Saved comparisons mean less time getting bogged down in filter recreation and more time for impactful analysis.

Here are a few key ways this could benefit your work:

  • Save time by avoiding constant recreation of filters for common comparisons (e.g. mobile vs desktop, traffic sources, geo locations).
  • Share saved comparisons with colleagues for consistent analysis views.
  • Switch between comprehensive views and isolated comparisons with a single click.
  • Break down conversions, engagement, audience origins, and more by your saved user segments.
  • Use thoughtfully combined conditions to surface targeted segments (e.g. paid traffic for a certain product/location).

The new saved comparisons in Google Analytics may seem like an incremental change. However, simplifying workflows and reducing time spent on mundane tasks can boost productivity in a big way.


Featured Image: wan wei/Shutterstock

LinkedIn Report: AI Skills Now Must-Have For Marketers via @sejournal, @MattGSouthern

A new report by Microsoft and LinkedIn reveals the rapid adoption of AI tools and skills in the marketing industry.

According to the 2024 Work Trend Index Annual Report, which surveyed over 31,000 people across 31 countries, marketing professionals who leverage AI enjoy a competitive advantage.

Employers recognize the efficiency gains AI capabilities provide in marketing roles and increasingly seek applicants with those skills.

Karim R. Lakhani, Chair of the Digital Data Design Institute at Harvard, states in the report:

“Marketers are harnessing the power of AI to work smarter, not just faster. It’s enabling them to focus on higher-value, creative work while automating more routine tasks.”

Here are some highlights from the report illustrating the need to develop an AI skill set to remain competitive.

AI Aptitude: The New Must-Have Skill for Marketers

The survey data reveals a strong preference among business leaders for candidates and employees with AI skills.

A majority, 66%, stated they wouldn’t consider hiring candidates lacking AI proficiency.

Further, 71% expressed a preference for less experienced job seekers with AI skills over more seasoned professionals without that expertise.

This inclination was pronounced in creative fields like marketing and design.

Michael Platt, a neuroscience professor at the Wharton School, states in the report:

“AI is redefining what it means to be a competitive marketer in today’s digital landscape. Professionals who can effectively integrate AI into their work are positioning themselves as invaluable assets to their organizations.”

The report indicates that early-career marketers who develop AI skills could benefit significantly.

77% of leaders reported that employees adept at leveraging AI would be trusted with greater responsibilities earlier in their careers than their peers without AI skills.

The AI Arms Race For Top Marketing Talent

Data from LinkedIn shows that job postings highlighting AI tools and applications have seen a 17% increase in application growth compared to those that don’t mention AI.

Additionally, 54% of early-career employees cited access to AI technologies as a key factor influencing their choice of employer.

Organizations that provide AI training and support for their marketing teams likely have an advantage in attracting top talent.

Why SEJ Cares

The widespread adoption of AI in marketing signifies a shift in the skills and capabilities necessary for succeeding in this rapidly evolving industry.

As AI transforms marketing approaches, professionals who fail to adapt risk being left behind.

The 2024 Work Trend Index Annual Report’s findings are relevant to marketing professionals at all levels. They demonstrate that AI proficiency is necessary for career advancement and job market competitiveness.

Additionally, the report highlights businesses’ role in fostering an AI-driven culture.

Companies investing in AI tools, training, and employee support will be better positioned to attract and retain top talent, drive innovation, and achieve better results.

Read the full report.

How This Can Help You

For marketing professionals to succeed in the AI era, the report suggests:

  • Prioritize developing AI skills through courses, workshops, training programs, and collaborating with AI practitioners to gain hands-on experience.
  • Embrace experimenting with new AI tools and techniques, integrating them into daily workflows to improve efficiency.
  • Share AI knowledge actively with colleagues to foster a culture of knowledge sharing and drive organizational AI adoption.
  • Highlight AI capabilities during job searches by demonstrating the successful use of AI to drive results in previous roles.
  • Choose employers committed to AI adoption that provide access to cutting-edge AI tools and support ongoing learning.

These recommendations can help you future-proof your career and advance in an increasingly competitive field.


Featured Image: eamesBot/Shutterstock

Big Brands Receive Site Abuse Manual Actions via @sejournal, @martinibuster

Google indicated that manual actions were coming for websites that host third-party content, and according to some, the effects of those manual actions may already be showing up in the search results.

Site Reputation Abuse Manual Actions

Google’s SearchLiaison tweeted late on May 6th that Google was enforcing the new site reputation abuse policy with manual actions. Manual actions are when someone at Google inspects a webpage to determine if the page is in violation of a spam policy.

The Site Reputation Abuse policy affects sites that host third-party content published with little to no oversight from the hosting website. The purpose of the arrangement is for the third party to take advantage of the host site’s reputation so that both receive a share of affiliate sales. An example could be a news website hosting coupon code content that’s entirely created by a third party.

What Are Manual Actions?

A manual action is when a human at Google visually inspects a website to determine if they engaged in violations of Google’s spam policies. The result of a manual action is typically but not always a removal from Google’s search index. Sometimes the offending webpages are completely removed and sometimes they are only prevented from ranking.

Sites With Manual Actions

Google communicates to the site publisher when a site has been issued a manual action. Only the site publisher and those with access to the website’s Search Console account are able to know. Google generally doesn’t announce which sites have received a manual action. So unless a site has completely disappeared from Google Search, it’s not possible to say with any degree of certainty whether a site has received one.

The fact that a webpage has disappeared from Google’s search results is not confirmation that it has received a manual action, especially if other pages from the site can still be found.

It’s important, then, to understand that unless a website or Google publicly acknowledges a manual action, anyone on the outside can only speculate whether a site has received one. The only exception is when a site is completely removed from the search index, in which case there’s a high probability that the site has indeed been penalized.

Big Brands Dropped From Search Results

It can’t be said with certainty that a site received a manual action if the page is still in the search index. But Aleyda Solis noticed that some big brand websites have recently stopped ranking for coupon related search queries.

Aleyda shared screenshots of coupon-related search results from before and after the Site Reputation Abuse policy was enforced. Her tweets showed screenshots of sites that were no longer ranking. Some of the sites appear to have removed their coupon webpages (highlighted in red), while sites that still hosted coupon pages but were no longer ranking in the search results were highlighted in orange.

It should be noted that Aleyda does not accuse any site of having received a manual action. She only shows that some sites are no longer ranking for coupon code search queries.

Aleyda tweeted:

“Google has already started taking action for the new site reputation abuse policy 👀👇 See the before/after for many of the most popular “promo code(s)” queries:

* carhartt promo code
* postmates promo code
* samsung promo code
* godaddy promo code

Sites that were ranking before and not anymore:

* In Orange (with still existing coupon sections): Cnet, Glamour, Reuters, USA Today, CNN, Business Insider
* In Red (with removed coupon sections): LA Times, Time Magazine, Wired, Washington Post”

Did Reuters Receive A Manual Action?

The global news agency Reuters formerly took the number one ranking spot for the keyword phrase “GoDaddy promo code” (as seen in the “before” screenshot posted by Aleyda to Twitter).

But Reuters is completely removed from the search results today.

Did the Reuters GoDaddy page receive a manual action? Manual actions typically result in a webpage’s complete removal from Google’s search index, but that’s not the case with the Reuters GoDaddy coupon page. A site search shows that the GoDaddy coupon page is still in Google’s index; it’s just not ranking anymore.

Reuters Coupon Page Remains In Search Index

Screenshot of search results

It’s hard to say with certainty if the Reuters page received a manual action but what is clear is that the page is no longer ranking, as Aleyda correctly points out.

Did Reuters GoDaddy Page Violate Google’s Spam Policy?

Google’s Site Reputation Abuse policy says that a characteristic of site reputation abuse is the lack of oversight of the third party content.

“Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement…”

Reuters’ current GoDaddy page contains a disclaimer that asserts oversight over the third-party content.

This is the current disclaimer:

“The Reuters newsroom staff have no role in the production of this content. It was checked and verified by the coupon team of Reuters Plus, the brand marketing studio of Reuters, in collaboration with Upfeat.”

Reuters’ disclaimer shows that there is first-party oversight which indicates that Reuters is in full compliance with Google’s spam policy.

But there’s a problem. There was a completely different disclaimer prior to Google’s Site Reputation Abuse policy announcement.  This raises the question as to whether Reuters changed their disclaimer in order to give the appearance that there was oversight.

Fact: Reuters Changed The Disclaimer

The current disclaimer on the Reuters coupon page asserts that there was some oversight of the third party content.  If that’s true then Reuters complies with Google’s spam policy.

But from March 11, 2024 and prior, Reuters published a disclaimer that clearly disavowed involvement with the third-party content.

This is what Google’s site reputation abuse policy says:

“Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement…”

And this is the March 11, 2024 disclaimer on the Reuters coupon page:

“Reuters was not involved in the creation of this content.”

Reuters Previously Denied Oversight Of 3rd Party Content

Screenshot of Reuters' previous disclaimer that disavows involvement in third party coupon content

Reuters changed its disclaimer about a week after Google’s core update was announced. Prior to Google’s spam policy announcement, the disclaimer had always distanced Reuters from involvement.

This is their 2023 disclaimer on the same GoDaddy Coupon page:

“This service is operated under license by Upfeat Media Inc. Retailers listed on this page are curated by Upfeat. Reuters editorial staff is not involved.”

Why did that disclaimer change after Google’s Site Reputation Abuse announcement? If Reuters is in violation, did it receive a manual action but was spared from having those pages removed from Google’s search index?

Manual Actions

Manual actions can result in the complete removal of the offending webpage from Google’s search index. That’s not what happened to Reuters and the other big brand coupon pages highlighted by Aleyda, so it’s possible that those pages only received a ranking demotion rather than the full de-indexing that is common for regular sites. Or the demotion of those pages in the rankings could be complete coincidence.

Featured Image by Shutterstock/Mix and Match Studio

Hit By The Core Algorithm? 5 Factors To Be Aware Of via @sejournal, @martinibuster

Many factors can affect rankings after a core algorithm update. It’s not always about the helpfulness of content; other factors can play a role in why the algorithm changed and negatively affected your website’s rankings.

If you find yourself saying, “It used to rank before, why doesn’t it rank now?” then some of these factors may be something to consider.

1. Algorithmic Losses Are Not Necessarily Persistent

Sites hit by the core algorithm update (which includes the Helpful Content component) do not have a permanent strike against them. Over the past ten years, Google has rolled out complicated algorithms and systems that can take months between update cycles, leaving affected sites unable to find a quick path back to the search results. While that’s not a permanent mark, it can feel as if a site has acquired a curse that brands it as no good and keeps it excluded.

Google’s John Mueller answered a question where he confirmed that getting caught in a Core Algorithm Update is not persistent and with work a site can recover from being hit by an update.

Someone asked on X (formerly Twitter):

“Can a site hit by HCU grow again in terms of traffic if it improves in quality? Many fear that no matter the amount of improvements we make a HCU hit site will forever have a classifier assigned to it that keeps it from growing again.”

John Mueller responded:

“Yes, sites can grow again after being affected by the “HCU” (well, core update now). This isn’t permanent. It can take a lot of work, time, and perhaps update cycles, and/but a different – updated – site will be different in search too.”

2. Recovering Is Not The Right Word

A lot of people think of recovering from an update as resetting the rankings so that websites regain their previous positions. John Mueller’s answer on X suggests that publishers should instead understand algorithmic effects as something that requires adjusting a website to fit an evolving web, including user expectations.

Mueller tweeted:

“Permanent changes are not very useful in a dynamic world, so yes. However, “recover” implies going back to just-as-before, and IMO that is always unrealistic, since the world, user-expectations, and the rest of the web continues to change. It’s never “just-as-before”.”

This statement seems to imply that, to a certain degree, algorithmic updates reflect what users expect to see in the search results. A way to understand this is with the example of Google’s Medic Update from a few years back. That update reflected a realignment of the search results with what users expect to see when making certain queries. After the Medic Update, search queries for medical topics required search results with a scientific approach. Sites that reflected folk remedies and unscientific approaches did not fit that updated definition of relevance.

There are subtle variations to this realignment of search results that go directly to answering the question: what do users mean when they ask a search query? Sometimes relevance means informational sites, while for other queries it may mean review sites are what users expect to see.

So if your site is hit by a core algorithm update, revisit the SERPs and try to determine what the new SERPs mean in terms of relevance and self-assess whether your site meets this new definition of relevance.

Circling back to Mueller’s response, there is no “going back to just-as-before,” and that may be because there has been a subtle shift in relevance. Sometimes the fix is subtle. Sometimes getting back into the search engine results pages (SERPs) requires a major change to the website so that it meets user expectations.

3. Thresholds And Ranking Formulas

Another interesting point that Mueller discussed is the difference between an ongoing algorithmic evaluation and the more persistent effects from a ranking system that requires an update cycle before a site can recover.

Someone asked:

“The simple question is whether you need to wait for a new core update to recover from the HCU. A simple “yes” or “no you can recover anytime” would suffice.”

John Mueller answered:

“It’s because not all changes require another update cycle. In practice, I’d assume that stronger effects will require another update. Core updates can include many things.”

Then continued with these interesting comments:

“For example, a ranking formula + some thresholds could be updated. The effects from the updated formula are mostly ongoing, the changes to thresholds often require another update to adjust.

…(“thresholds” is a simplification for any numbers that need a lot of work and data to be recalculated, reevaluated, reviewed)”

The above means there are two kinds of effects that can hit a site. One is part of a continually updated ranking formula that can quickly reflect changes made to a site. These used to be called rolling updates, where the core algorithm makes relatively instant evaluations about a site and boosts or demotes its rankings.

The other kind of algorithmic issue requires a massive recalculation. This is what the HCU and even the Penguin algorithm used to be like before they were folded into the core algorithm: large-scale calculations that seemed to assign scores which were only updated on the following cycle.

4. The Web & Users Change

In another recent exchange on X, John Mueller affirmed that a key to success is keeping track of what users expect.

He tweeted:

“…there is no one-shot secret to long-lasting online success. Even if you find something that works now, the web, user desires, and how they engage with websites changes. It’s really hard to make good, popular, persistent things.”

That statement offers these concepts to keep in mind for online success:

  • The Internet
  • User desires
  • How users engage with websites
  • Popularity is not persistent

Those are not algorithm factors. But they could be things that Google picks up on in terms of understanding what users expect to see when they make a search query.

What users expect to see is my preferred definition of relevance. That has practically nothing to do with “semantic relevance” and everything to do with what users themselves expect. This is something that some SEOs and publishers trip over. They focus hard on what words and phrases mean and forget that what really matters is what those words mean to users.

Mueller posted something similar in an answer about why a website ranks #1 in one country but doesn’t perform as well in another. He said that what users expect to see in response to a query can differ from country to country. The point is that search ranking relevance is often less about semantics, entities, and other technical aspects and more about the users themselves.

He tweeted:

“It’s normal for the search results in countries to vary. Users are different, expectations may vary, and the web is also very different.”

That insight may be helpful for some publishers who have lost rankings in a core algorithm update. It could be that user expectations have changed and the algorithm is reflecting those expectations.

5. Page-Level Signal

Google’s SearchLiaison affirmed that the Helpful Content component of the core algorithm is generally a page-level signal but that there are sitewide ones as well. His tweet quoted the Helpful Content Update FAQ which says:

“Do Google’s core ranking systems assess the helpfulness of content on a page-level or site-wide basis?

Our core ranking systems are primarily designed to work on the page level, using a variety of signals and systems to understand the helpfulness of individual pages. We do have some site-wide signals that are also considered.”

Keep An Open Mind

It’s frustrating to lose rankings in a core algorithm update. I’ve been working in SEO for about 25 years and auditing websites since 2004. Helping site owners identify why their sites no longer rank has taught me that it’s useful to keep an open mind about what is affecting the rankings.

The core algorithm has a lot of signals; some pertain to helpfulness, while others relate to relevance to users, relevance to search queries, and plain site quality. So it may be helpful not to get stuck thinking that a site lost rankings because of one thing, because it could be something else or even multiple factors.

Featured Image by Shutterstock/Benny Marty

Google’s Mueller Outlines Path To Recovery For Sites Hit By Core Update via @sejournal, @MattGSouthern

Google’s Search Advocate John Mueller recently addressed the SEO community’s concerns about site recovery after being impacted by algorithm updates.

The conversation arose as people questioned whether sites hit by the September helpful content update could regain lost traffic and rankings after future core updates.

The exchange began on X when an SEO professional, Thomas Jepsen, asked Mueller if Google’s previous stance still held true – that the search engine “doesn’t hold a grudge” and sites will recover once issues are resolved.

Mueller confirmed, “That’s still the case,” but cautioned that “some things take much longer to be reassessed (sometimes months, at the moment), and some bigger effects require another update cycle.”

Addressing Lingering Confusion

Following Mueller’s statements, confusion persisted around whether sites hit by the helpful content update require a new core update to recover lost rankings.

Mueller clarified:

“… not all changes require another update cycle. In practice, I’d assume that stronger effects will require another update. Core updates can include many things.”

He likened core updates to adjustments in ranking formulas and thresholds, with the latter often necessitating another update cycle.

Dismissing Permanence Concerns

There’s concern that sites affected by the September helpful content update will be permanently classified, obstructing future growth.

Mueller addressed those concerns and affirmed that affected sites could regain traffic by improving quality.

However, Mueller says full recovery to pre-update levels is unrealistic.

He states:

“Permanent changes are not very useful in a dynamic world… However, ‘recover’ implies going back to just-as-before, and IMO that is always unrealistic, since the world, user-expectations, and the rest of the web continues to change. It’s never ‘just-as-before’.”

When asked directly if a site affected by the helpful content update can grow in traffic if it improves in quality, Mueller stated:

“Yes, sites can grow again after being affected by the ‘HCU’ (well, core update now). This isn’t permanent. It can take a lot of work, time, and perhaps update cycles, and/but a different – updated – site will be different in search too.”

The Long Road Ahead

Continuing the conversation on LinkedIn, Mueller stressed that the recovery process isn’t specific to helpful content updates or core updates but applies to all kinds of systems and updates within Google.

Mueller states:

“… to be clear, it’s not that “helpful content update” “recoveries” take longer than other updates. It’s just that some kinds of changes take a long time to build up, and that applies to all kinds of systems & updates in Google & in any other larger computer system. Saying that this is specific to the helpful content system, or to core updates would be wrong & misleading.”

Mueller acknowledged that the recovery process doesn’t have a single, straightforward solution and may require deep analysis and significant work to understand how to make a website relevant again.

“There is, however, the additional aspect of the “core update” being about how our systems assess content overall, how we consider it to be helpful, reliable, relevant to users’ queries. This does not map back to a single change that you can make on a website, so – in my experience – it’s not something that a website can just tweak overnight and be done with it. It can require deep analysis to understand how to make a website relevant in a modern world, and significant work to implement those changes — assuming that it’s something that aligns with what the website even wants.”

Lastly, he adds that a recovery will take more than fixing technical issues. It may require a realignment of business priorities.

“These are not “recoveries” in the sense that someone fixes a technical issue and they’re back on track – they are essentially changes in a business’s priorities (and, a business might choose not to do that).”

Why SEJ Cares

Google’s core algorithm updates can dramatically impact a website’s search visibility and traffic.

For sites negatively affected, clear guidance on recovery is critical – both for setting realistic expectations and charting a practical path forward.

Mueller’s insights reassure that improvement remains possible through strategic realignment with Google’s current quality standards.

How This Can Help You

Mueller’s insights allow impacted sites to set realistic expectations for recovery.

Regaining visibility remains possible with patience, thorough analysis, and persistent effort.

Mueller’s statements offer the following takeaways for sites impacted by Google’s updates:

  • Recovery isn’t out of the question but will require significant effort over multiple update cycles.
  • Simply restoring previous tactics is insufficient; sites must evolve to meet changing user needs and internet best practices.
  • Deep analysis is necessary to identify areas for improvement and realign content strategy with modern relevance signals.
  • Returning to previous ranking positions is unrealistic due to evolving user needs.

Featured Image: rudall30/Shutterstock