Google AI Overviews = Theft? Court Ruling Sets Precedent via @sejournal, @MattGSouthern

Google’s bold new vision for the future of online search, powered by AI technology, is fueling an industrywide backlash over fears it could damage the internet’s open ecosystem.

At the center of the controversy are Google’s newly launched “AI Overviews,” which are generated summaries that aim to directly answer search queries by pulling in information from across the web.

AI overviews appear prominently at the top of results pages, potentially limiting users’ need to click through to publishers’ websites.

The move sparked legal action in France, where publishers filed cases accusing Google of violating intellectual property rights by ingesting their content to train AI models without permission.

A group of French publishers won an early court battle in April 2024. A judge ordered Google to negotiate fair compensation for repurposing snippets of their content.

Publishers in the US are raising similar objections as Google’s new AI search overviews threaten to siphon traffic away from sources. They argue that Google unfairly profits from others’ content.

The debate highlights the need for updated frameworks governing the use of online data in the age of AI.

Concerns From Publishers

According to industry watchers, the implications of AI overviews could impact millions of independent creators who depend on Google Search referral traffic.

Frank Pine, executive editor at MediaNews Group, tells The Washington Post:

“If journalists did that to each other, we’d call that plagiarism.”

Pine’s company, which publishes the Denver Post and Boston Herald, is among those suing OpenAI for allegedly scraping copyrighted articles to train their language models.

Google’s revenue model has long been predicated on driving traffic to other websites and monetizing that flow through paid advertising channels.

AI overviews threaten to shift that revenue model.

Kimber Matherne, who runs a food blog, is quoted in the Post article stating:

“[Google’s] goal is to make it as easy as possible for people to find the information they want. But if you cut out the people who are the lifeblood of creating that information, then that’s a disservice to the world.”

According to the Post’s report, Raptive, an ad services firm, estimates the changes could result in $2 billion in lost revenue for online creators.

The firm also believes some websites could lose two-thirds of their search traffic.

Raptive CEO Michael Sanchez tells The Post:

“What was already not a level playing field could tip its way to where the open internet starts to become in danger of surviving.”

Concerns From Industry Professionals

Google’s AI overviews are understandably raising concerns among industry professionals, as expressed through numerous tweets criticizing the move.

Matt Gibbs questioned how Google developed the knowledge base for its AI, bluntly stating, “They ripped it off publishers who did the actual work to create the knowledge. Google are a bunch of thieves.”

In her tweet, Kristine Schachinger echoed similar sentiments, referring to Google’s AI answers as “a complete digital theft engine which will prevent sites getting clicks at all.”

Gareth Boyd retweeted a quote from the Washington Post article highlighting the struggles of blogger Jake Boly, whose site recently saw a 96% drop in Google traffic.

Boyd said, “The precedent being set by OpenAI and Google is scary…” and that “more people should be equally angry” at both companies for the “open theft of content.”

In his tweet, Avram Piltch directly accused Google of theft, stating, “the data used to train their AI came from the very publishers that allowed Google to crawl them and are now going to be harmed. This is theft, plain and simple. And it’s a threat to the future of the web.”

Lily Ray made a similar claim about Google: “Using all the content they took from the sites that made Google. With little to no attribution or traffic.”

Legal Gray Area

The controversy taps into broader debates around intellectual property and fair use, as AI systems are trained on unprecedented scales of data scraped across the internet.

Google argues its models only ingest publicly available web data and that publishers previously benefited from search traffic.

Publishers implicitly consent to their content being indexed by search engines unless they opt out.

However, laws weren’t conceived with training AI models in mind.

What’s The Path Forward?

This debate highlights the need for new rules around how AI uses online data.

The way forward is unclear, but the stakes are high.

Some suggest revenue sharing or licensing fees when publisher content is used to train AI models. Others propose an opt-in system that gives website owners more control over how their content is used for AI training.

The French rulings suggest that the courts may step in without explicit guidelines and good-faith negotiations.

The web has always relied on a balance between search engines and content creators. If that balance is disrupted without new safeguards, it could undermine the exchange of information that makes the internet so valuable.


Featured Image: Veroniksha/Shutterstock

New Google AI Overviews Documentation & SEO via @sejournal, @martinibuster

Google published new documentation about its AI Overviews search feature, which summarizes an answer to a search query and links to webpages where more information can be found. The new documentation offers important information about how the new feature works and what publishers and SEOs should consider.

What Triggers AI Overviews

AI Overviews shows when the user intent is to quickly understand information, especially when that information need is tied to a task.

“AI Overviews appear in Google Search results when our systems determine …when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

In another part of the documentation it ties the trigger to task-based information needs:

“…and use the information they find to advance their tasks.”

What Kinds Of Sites Does AI Overviews Link To?

An important fact to consider is that just because AI Overviews is triggered by a user’s need to quickly understand something doesn’t mean that only queries with an informational need will trigger the new search feature. Google’s documentation makes it clear that the kinds of websites that will benefit from AI Overviews links include “creators” (which implies video creators), ecommerce stores, and other businesses. This means that far more than just informational websites will benefit from AI Overviews.

The new documentation lists the kinds of sites that can receive a link from the AI overviews:

“This allows people to dig deeper and discover a diverse range of content from publishers, creators, retailers, businesses, and more, and use the information they find to advance their tasks.”

Where AI Overviews Sources Information

AI Overviews shows information from the web and the Knowledge Graph. Large language models currently need to be entirely retrained from the ground up to add significant amounts of new data. That means the websites chosen for display in the Overviews feature are selected from Google’s standard search index, which in turn means that Google may be using retrieval-augmented generation (RAG).

RAG is a system that sits between a large language model and a database of information that’s external to the LLM. This external database can be anything from a specific knowledge base, such as the entire content of an organization’s HR policies, to a search index. It’s a supplemental source of information that can be used to double-check the information provided by an LLM or to show where to read more about the question being answered.
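To make the pattern concrete, here is a toy sketch of the RAG flow: retrieve relevant documents from an external index, then assemble them into a grounded prompt for the LLM. The corpus, scoring function, and prompt format below are all invented for illustration; Google’s actual retrieval and ranking systems are far more sophisticated.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# A real system would query a search index and call an actual LLM;
# both are stand-ins here to show the shape of the pattern.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt: retrieved sources first, then the question."""
    sources = "\n".join(f"[{d['url']}] {d['text']}" for d in docs)
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

corpus = [
    {"url": "example.com/espresso", "text": "Espresso is brewed under high pressure."},
    {"url": "example.com/tea", "text": "Green tea is steeped at low temperature."},
]

docs = retrieve("how is espresso brewed", corpus, k=1)
print(build_prompt("how is espresso brewed", docs))
```

The prompt that comes out is what gets handed to the language model, which is also why the retrieved pages can then be cited as “read more” links alongside the generated answer.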

The section quoted at the beginning of the article notes that AI Overviews cites sources from the web and the Knowledge Graph:

“AI Overviews appear in Google Search results when our systems determine …when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

What Automatic Inclusion Means For SEO

Inclusion in AI Overviews is automatic and there’s nothing specific to AI Overviews that publishers or SEOs need to do. Google’s documentation says that following their guidelines for ranking in the regular search is all you have to do for ranking in AI Overviews. Google’s “systems” determine what sites are picked to show up for the topics surfaced in AI Overviews.

All the statements seem to confirm that the new Overviews feature sources data from the regular Search Index. It’s possible that Google filters the search index specially for AI Overviews but offhand I can’t think of any reason Google would do that.

All the statements that indicate automatic inclusions point to the likely possibility that Google uses the regular search index:

“No action is needed for publishers to benefit from AI Overviews.”

“AI Overviews show links to resources that support the information in the snapshot, and explore the topic further.”

“…diverse range of content from publishers, creators, retailers, businesses, and more…”

“To rank in AI Overviews, publishers only need to follow the Google Search Essentials guide.”

“Google’s systems automatically determine which links appear. There is nothing special for creators to do to be considered other than to follow our regular guidance for appearing in search, as covered in Google Search Essentials.”

Think In Terms Of Topics

Obviously, keywords and synonyms in queries and documents play a role. But in my opinion they play an oversized role in SEO. There are many ways a search engine can annotate a document in order to match a webpage to a topic, such as what Googler Martin Splitt referred to as a centerpiece annotation. A centerpiece annotation is used by Google to label a webpage with what that webpage is about.

Semantic Annotation

This kind of annotation links webpage content to concepts, which in turn gives structure to an unstructured document. Every webpage is unstructured data, so search engines have to make sense of it. Semantic annotation is one way to do that.
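As a toy illustration of the idea only: the sketch below tags a page with concepts from a hand-built vocabulary. The concept labels and word lists are invented, and Google’s systems learn these associations (via neural matching, discussed below) rather than matching against word lists.

```python
import re

# Toy semantic annotator: label an unstructured page with the concepts
# it is about. Real systems learn query/page-to-concept relationships;
# this hand-built vocabulary just illustrates the annotation step.

CONCEPTS = {
    "espresso brewing": {"espresso", "crema", "portafilter", "tamp"},
    "seo": {"ranking", "index", "crawler", "backlink"},
}

def annotate(page_text):
    """Return the concepts whose vocabulary overlaps the page's words."""
    words = set(re.findall(r"[a-z]+", page_text.lower()))
    return [concept for concept, vocab in CONCEPTS.items() if words & vocab]

page = "Tamp the grounds evenly so the espresso pulls with thick crema."
print(annotate(page))  # a centerpiece-style topic label for the page
```

The output label is the kind of structure a search engine can then use to match the page to topically related queries, regardless of exact keyword overlap.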

Google has been matching webpages to concepts since at least 2015. A Google webpage about its cloud products talks about how it integrated neural matching into its search engine for the purpose of annotating webpage content with its inherent topics.

This is what Google says about how it matches webpages to concepts:

“Google Search started incorporating semantic search in 2015, with the introduction of noteworthy AI search innovations like deep learning ranking system RankBrain. This innovation was quickly followed with neural matching to improve the accuracy of document retrieval in Search. Neural matching allows a retrieval engine to learn the relationships between a query’s intentions and highly relevant documents, allowing Search to recognize the context of a query instead of the simple similarity search.

Neural matching helps us understand fuzzier representations of concepts in queries and pages, and match them to one another. It looks at an entire query or page rather than just keywords, developing a better understanding of the underlying concepts represented in them.”

Google’s been doing this, matching webpages to concepts, for almost ten years. Google’s documentation about AI Overviews also mentions that showing links to webpages based on topics is a part of determining what sites are ranked in AI Overviews.

Here’s how Google explains it:

“AI Overviews show links to resources that support the information in the snapshot, and explore the topic further.

…AI Overviews offer a preview of a topic or query based on a variety of sources, including web sources.”

Google’s focus on topics has been a thing for a long time, and it’s well past time SEOs lessened their grip on keyword targeting and gave topic targeting a chance to enrich their ability to surface content in Google Search, including in AI Overviews.

Google says that the same optimizations described in its Search Essentials documentation for ranking in Google Search are the same optimizations to apply to rank in AI Overviews.

This is exactly what the new documentation says:

“There is nothing special for creators to do to be considered other than to follow our regular guidance for appearing in search, as covered in Google Search Essentials.”

Read Google’s New SEO Related Documentation On AI Overviews

AI Overviews and your website

Featured Image by Shutterstock/Piotr Swat

Google Rolls Out New ‘Web’ Filter For Search Results via @sejournal, @MattGSouthern

Google is introducing a filter that allows you to view only text-based webpages in search results.

The “Web” filter, rolling out globally over the next two days, addresses demand from searchers who prefer a stripped-down, simplified view of search results.

Danny Sullivan, Google’s Search Liaison, states in an announcement:

“We’ve added this after hearing from some that there are times when they’d prefer to just see links to web pages in their search results, such as if they’re looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features.”

The new functionality is a throwback to when search results were more straightforward. Now, they often combine rich media like images, videos, and shopping ads alongside the traditional list of web links.

How It Works

On mobile devices, the “Web” filter will be displayed alongside other filter options like “Images” and “News.”

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

If Google’s systems don’t automatically surface it based on the search query, desktop users may need to select “More” to access it.

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

More About Google Search Filters

Google’s search filters allow you to narrow results by type. The options displayed are dynamically generated based on your search query and what Google’s systems determine could be most relevant.

The “All Filters” option provides access to filters that are not shown automatically.

Alongside filters, Google also displays “Topics” – suggested related terms that can further refine or expand a user’s original query into new areas of exploration.

For more about Google’s search filters, see its official help page.


Featured Image: egaranugrah/Shutterstock

Was OpenAI GPT-4o Hype A Troll On Google? via @sejournal, @martinibuster

OpenAI managed to steal attention away from Google in the weeks leading up to Google’s biggest event of the year (Google I/O). When the big announcement arrived, all OpenAI had to show was a language model that was slightly better than the previous one, with the “magic” part not even at the alpha testing stage.

OpenAI may have left users feeling like a mom receiving a vacuum cleaner for Mother’s Day, but it surely succeeded in minimizing press attention for Google’s important event.

The Letter O

The first hint that there’s at least a little trolling going on is the name of the new GPT model, GPT-4o, with the letter “o” as in the name of Google’s event, I/O.

OpenAI says that the letter “o” stands for Omni, meaning “all,” but it sure seems like there’s a subtext to that choice.

GPT-4o Oversold As Magic

Sam Altman in a tweet the Friday before the announcement promised “new stuff” that felt like “magic” to him:

“not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”

OpenAI co-founder Greg Brockman tweeted:

“Introducing GPT-4o, our new model which can reason across text, audio, and video in real time.

It’s extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction):”

The announcement itself explained that previous versions of ChatGPT used three models to process audio input: one model to turn audio input into text, another to complete the task and produce a text response, and a third to turn that text output into audio. The breakthrough for GPT-4o is that it can now process audio input and output within a single model, responding in about the same amount of time it takes a human to listen and respond to a question.

But the problem is that the audio part isn’t online yet. They’re still working on getting the guardrails in place, and it will take weeks before an alpha version is released to a few users for testing. Alpha versions are expected to have bugs, while beta versions are generally closer to the final product.

This is how OpenAI explained the disappointing delay:

“We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities.”

In other words, the most important part of GPT-4o, the audio input and output, is finished, but the safety work needed for public release is not.

Some Users Disappointed

It’s inevitable that an incomplete and oversold product would generate some negative sentiment on social media.

AI engineer Maziyar Panahi (LinkedIn profile) tweeted his disappointment:

“I’ve been testing the new GPT-4o (Omni) in ChatGPT. I am not impressed! Not even a little! Faster, cheaper, multimodal, these are not for me.
Code interpreter, that’s all I care and it’s as lazy as it was before!”

He followed up with:

“I understand for startups and businesses the cheaper, faster, audio, etc. are very attractive. But I only use the Chat, and in there it feels pretty much the same. At least for Data Analytics assistant.

Also, I don’t believe I get anything more for my $20. Not today!”

There are others across Facebook and X that expressed similar sentiments although many others were happy with what they felt was an improvement in speed and cost for the API usage.

Did OpenAI Oversell GPT-4o?

Given that GPT-4o is in an unfinished state, it’s hard to escape the impression that the release was timed to coincide with, and detract from, Google I/O. Releasing a half-finished product on the eve of Google’s big day may have inadvertently created the impression that GPT-4o, in its current state, is a minor iterative improvement.

In its current state, it’s not a revolutionary step forward. Once the audio portion of the model exits the alpha testing stage and makes it through beta testing, then we can start talking about revolutions in large language models. But by the time that happens, Google and Anthropic may already have planted a flag on that mountain.

OpenAI’s announcement paints a lackluster image of the new model, promoting its performance as on the same level as GPT-4 Turbo. The only bright spots are the significant improvements in languages other than English and for API users.

OpenAI explains:

  • “It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API.”

Here are the ratings across six benchmarks, which show GPT-4o barely squeaking past GPT-4T in most tests but falling behind it in an important benchmark for reading comprehension.

Here are the scores:

  • MMLU (Massive Multitask Language Understanding)
    This is a benchmark for multitasking accuracy and problem solving in over fifty topics like math, science, history, and law. GPT-4o (scoring 88.7) is slightly ahead of GPT-4 Turbo (86.9).
  • GPQA (Graduate-Level Google-Proof Q&A Benchmark)
    This is 448 multiple-choice questions written by human experts in various fields like biology, chemistry, and physics. GPT-4o scored 53.6, slightly outscoring GPT-4T (48.0).
  • Math
    GPT-4o (76.6) outscores GPT-4T by four points (72.6).
  • HumanEval
    This is the coding benchmark. GPT-4o (90.2) slightly outperforms GPT-4T (87.1) by about three points.
  • MGSM (Multilingual Grade School Math Benchmark)
    This tests LLM grade-school level math skills across ten different languages. GPT-4o scores 90.5 versus 88.5 for GPT-4T.
  • DROP (Discrete Reasoning Over Paragraphs)
    This is a benchmark comprised of 96k questions that tests language model comprehension over the contents of paragraphs. GPT-4o (83.4) scores nearly three points lower than GPT-4T (86.0).

Did OpenAI Troll Google With GPT-4o?

Given the provocatively named model, with its letter “o,” it’s hard not to suspect that OpenAI was trying to steal media attention in the lead-up to Google’s important I/O conference. Whether that was the intention or not, OpenAI wildly succeeded in minimizing the attention given to Google’s conference.

Is a language model that barely outperforms its predecessor worth all the hype and media attention it received? The pending announcement dominated news coverage ahead of Google’s big event, so for OpenAI the answer is clearly yes, it was worth the hype.

Featured Image by Shutterstock/BeataGFX

SGE Is Here. Google Rolls Out AI-Powered Overviews To US Search Results via @sejournal, @MattGSouthern

At its annual I/O developer conference, Google unveiled plans to incorporate generative AI directly into Google Search.

Additionally, Google announced an expansion to Search Generative Experience (SGE), designed to reinvent how people discover and consume information.

Upcoming upgrades include:

  • Adjustable overviews to simplify language or provide more detail
  • Multi-step reasoning to handle complex queries with nuances
  • Built-in planning capabilities for tasks like meal prep and vacations
  • AI-organized search result pages to explore ideas and inspiration
  • Visual search querying through uploaded videos and images

Liz Reid, Head of Google Search, states in an announcement:

“Now, with generative AI, Search can do more than you ever imagined. So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

What’s New In Google Search & SGE

New Gemini Model

A customized Gemini language model is central to Google’s AI-powered Search revamp.

Google’s announcement states:

“This is all made possible by a new Gemini model customized for Google Search. It brings together Gemini’s advanced capabilities — including multi-step reasoning, planning and multimodality — with our best-in-class Search systems.”

AI overviews generate quick answers to users’ queries, piecing together information from multiple sources.

Google reports that people have already used AI Overviews billions of times through Search Labs.

AI Overviews In US Search Results

Google is bringing AI overviews from Search Labs into its general search results pages.

That means hundreds of millions of US searchers will gain access to AI overviews this week and over 1 billion by year’s end.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

Searchers will soon be able to adjust the language and level of detail in AI overviews to suit their needs and understanding of the topic.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

Complex Questions & Planning Capabilities

SGE’s multi-step reasoning capabilities will allow you to ask complex questions and receive detailed answers.

For example, you could ask, “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill,” and receive a comprehensive response.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

In addition to answering complex queries, SGE will offer planning assistance for various aspects of life, such as meal planning and vacations.

You can request a customized meal plan by searching for something like “create a 3-day meal plan for a group that’s easy to prepare.” You will receive a tailored plan with recipes from across the web.

AI-Organized Results & Visual Search

Google is introducing AI-organized results pages that categorize helpful results under unique, AI-generated headlines, presenting diverse perspectives and content types.

This feature will initially be available for dining and recipes, with plans to expand to movies, music, books, hotels, shopping, and more.

SGE will also enable users to ask questions using video content. This visual search capability can save you time describing issues or typing queries, as you can record a video instead.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

What Does This Mean For Businesses?

While Google touts SGE as a way to enhance search quality, the prominence of AI-generated content could impact businesses and publishers who rely on Google Search traffic.

AI overviews occupy extensive screen real estate and could bury traditional “blue link” web results, significantly limiting clickthrough rates.

Data from ZipTie and Search Engine Journal contributor Bart Goralewicz indicates that SGE results appear for over 80% of search queries across most verticals.

Additionally, under SGE’s unique ranking system, only 47% of the top 10 traditional web results appear as sources powering AI overview generation.

Bart Goralewicz, Founder of Onely, states:

“SGE operates on a completely different level compared to traditional search. If you aim to be featured in Google SGE, you’ll need to develop a distinct strategy tailored to this new environment. It’s a whole new game.”

Tomasz Rudzki of ZipTie cautions:

“Google SGE is the most controversial and anxiety-provoking change in search,” Rudzki commented. With so much changing week by week, businesses relying on organic search must carefully monitor SGE’s evolution.

How To Optimize Your Site for SGE

As AI search accelerates, SEO professionals and content creators face new challenges in optimizing for discoverability.

Consider implementing these tactics for a potential increase in visibility in search results.

Structure content explicitly as questions and direct answers.
With AI overviews answering queries directly, optimizing content in a question-and-answer format may increase the likelihood of having it surfaced by Google’s AI models.

Create topic overview pages spanning initial research to final decisions.
Google’s AI search can handle complex, multi-step queries. Creating comprehensive overview content that covers the entire journey—from initial research to final purchasing decisions—could position those pages as prime sources for Google’s AI.

Pursue featured status on high-authority Q&A and information sites.
Studies found sites like Quora and Reddit are frequently cited in Google’s AI overviews. Having authoritative, industry-expert-level content featured prominently on these high-profile Q&A platforms could increase visibility within AI search results.

Maximize technical SEO for improved crawling of on-page content.
Like traditional search, Google’s AI models still rely on web crawlers to access a site’s content. Ensuring optimal technical SEO so crawlers can access and adequately render all on-page content is crucial for that content to surface in AI overviews.

Track search volume for queries exhibiting AI overviews.
Identifying queries that currently trigger AI overviews can reveal content gaps and optimization opportunities. Tracking search volume for these queries enables prioritizing efforts around high-value terms and topics Google already enhances with AI results.
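For the first tactic above, structuring content as explicit questions and direct answers, one standard way to make that structure machine-readable is schema.org FAQPage markup. Whether AI Overviews consumes this markup specifically is an assumption; FAQPage itself is a real schema.org type, and the question/answer text below is invented placeholder content. A minimal generator sketch:

```python
import json

# Build schema.org FAQPage JSON-LD from question-and-answer pairs.
# FAQPage is a standard schema.org type; treating it as an input to
# AI Overviews specifically is an assumption, not documented behavior.

def faq_jsonld(qa_pairs):
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

pairs = [("What triggers AI Overviews?",
          "Queries where users want to quickly understand a topic.")]
print(json.dumps(faq_jsonld(pairs), indent=2))
```

The resulting JSON-LD would be embedded in the page inside a `script type="application/ld+json"` tag, keeping the on-page Q&A prose and its structured representation in sync.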

Looking Ahead

As Google moves forward with its AI-centric search vision, disruptions could reshape digital economies and information ecosystems.

Companies must adapt their strategies for an AI-powered search landscape.

We will be following these developments closely at Search Engine Journal with an aim to provide strategies to help make your content discoverable in SGE.

Why Google Can’t Tell You About Every Ranking Drop via @sejournal, @MattGSouthern

In a recent Twitter exchange, Google’s Search Liaison, Danny Sullivan, provided insight into how the search engine handles algorithmic spam actions and ranking drops.

The discussion was sparked by a website owner’s complaint about a significant traffic loss and the inability to request a manual review.

Sullivan clarified that a site could be affected by an algorithmic spam action or simply not ranking well due to other factors.

He emphasized that many sites experiencing ranking drops mistakenly attribute it to an algorithmic spam action when that may not be the case.

“I’ve looked at many sites where people have complained about losing rankings and decide they have a algorithmic spam action against them, but they don’t.”

Sullivan’s full statement will help you understand Google’s transparency challenges.

Additionally, he explains why the desire for manual review to override automated rankings may be misguided.

Challenges In Transparency & Manual Intervention

Sullivan acknowledged the idea of providing more transparency in Search Console, potentially notifying site owners of algorithmic actions similar to manual actions.

However, he highlighted two key challenges:

  1. Revealing algorithmic spam indicators could allow bad actors to game the system.
  2. Algorithmic actions are not site-specific and cannot be manually lifted.

Sullivan expressed sympathy for the frustration of not knowing the cause of a traffic drop and the inability to communicate with someone about it.

However, he cautioned against the desire for a manual intervention to override the automated systems’ rankings.

Sullivan states:

“…you don’t really want to think “Oh, I just wish I had a manual action, that would be so much easier.” You really don’t want your individual site coming the attention of our spam analysts. First, it’s not like manual actions are somehow instantly processed. Second, it’s just something we know about a site going forward, especially if it says it has change but hasn’t really.”

Determining Content Helpfulness & Reliability

Moving beyond spam, Sullivan discussed various systems that assess the helpfulness, usefulness, and reliability of individual content and sites.

He acknowledged that these systems are imperfect and some high-quality sites may not be recognized as well as they should be.

“Some of them ranking really well. But they’ve moved down a bit in small positions enough that the traffic drop is notable. They assume they have fundamental issues but don’t, really — which is why we added a whole section about this to our debugging traffic drops page.”

Sullivan revealed ongoing discussions about providing more indicators in Search Console to help creators understand their content’s performance.

“Another thing I’ve been discussing, and I’m not alone in this, is could we do more in Search Console to show some of these indicators. This is all challenging similar to all the stuff I said about spam, about how not wanting to let the systems get gamed, and also how there’s then no button we would push that’s like “actually more useful than our automated systems think — rank it better!” But maybe there’s a way we can find to share more, in a way that helps everyone and coupled with better guidance, would help creators.”

Advocacy For Small Publishers & Positive Progress

In response to a suggestion from Brandon Saltalamacchia, founder of RetroDodo, about manually reviewing “good” sites and providing guidance, Sullivan shared his thoughts on potential solutions.

He mentioned exploring ideas such as self-declaration through structured data for small publishers and learning from that information to make positive changes.

“I have some thoughts I’ve been exploring and proposing on what we might do with small publishers and self-declaring with structured data and how we might learn from that and use that in various ways. Which is getting way ahead of myself and the usual no promises but yes, I think and hope for ways to move ahead more positively.”

Sullivan said he can’t make promises or implement changes overnight, but he expressed hope for finding ways to move forward positively.
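Sullivan didn’t describe what that self-declaration would actually look like. As a purely hypothetical illustration, schema.org Organization markup — which already exists — is one way a small publisher could self-declare basic identity in structured data today. Every value below is made up:

```python
# Hypothetical sketch: a small publisher self-declaring identity
# via schema.org Organization markup, emitted as JSON-LD.
import json

publisher = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Retro Gaming Blog",      # made-up publisher name
    "url": "https://example.com",             # made-up site URL
    "numberOfEmployees": {"@type": "QuantitativeValue", "value": 3},
}

# Embed the declaration as a JSON-LD script tag in the site's <head>.
json_ld = f'<script type="application/ld+json">{json.dumps(publisher)}</script>'
print(json_ld)
```

Again, this is only a sketch of the general idea; Google has announced no format or program for small-publisher self-declaration.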


Featured Image: Tero Vesalainen/Shutterstock

Content Decay: A Rotten Name For A Real SEO Issue via @sejournal, @martinibuster

Google’s Lizzi Sassman and John Mueller answered a question about Content Decay, expressing confusion over the phrase because they’d never heard of it. Turns out there’s a good reason: Content Decay is just a new name created to make an old problem look like a new one.

Googlers Never Heard Of Content Decay

Google tech writer Lizzi Sassman began a Search Off The Record podcast episode by explaining that they were discussing Content Decay because someone had submitted the topic, then remarked that she had never heard of it.

She said:

“…I saw this come up, I think, in your feedback form for topics for Search Off the Record podcast that someone thought that we should talk about content decay, and I did not know what that was, and so I thought I should look into it, and then maybe we could talk about it.”

Google’s John Mueller responded:

“Well, it’s good that someone knows what it is. …When I looked at it, it sounded like this was a known term, and I felt inadequate when I realized I had no idea what it actually meant, and I had to interpret what it probably means from the name.”

Then Lizzi pointed out that the name Content Decay sounds like it’s referring to something that’s wrong with the content:

“Like it sounds a little bit negative. A bit negative, yeah. Like, yeah. Like something’s probably wrong with the content. Probably it’s rotting or something has happened to it over time.”

It’s not just Googlers who were unfamiliar with the term. SEOs with over 25 years of experience, including myself, had never heard of it either; I reached out to several and none of them had encountered the phrase Content Decay.

Like Lizzi, anyone who hears the term Content Decay will reasonably assume that this name refers to something that’s wrong with the content. But that is incorrect. As Lizzi and John Mueller figured out, content decay is not really about content, it’s just a name that someone gave to a natural phenomenon that’s been happening for thousands of years.

If you feel out of the loop because you too have never heard of Content Decay, don’t. Content Decay is one of those inept labels someone coined to put a fresh name on a problem that is so old it predates not just the Internet but the invention of writing itself.

What Is Content Decay?

What people mean when they talk about Content Decay is a slow drop in search traffic. But a slow drop in traffic is not a definition; it’s just a symptom of the actual problem, which is declining user interest. Declining user interest in a topic, product, service, or virtually any entity is normal and expected, and it can sneak up on publishers and affect organic search trends, even for evergreen topics. Content Decay is an inept name for a real SEO issue to deal with. Just don’t call it Content Decay.

How Does User Interest Dwindle?

Dwindling interest is a longstanding phenomenon that is older than the Internet. Fashion, musical styles, and topics come and go both offline and online.

A classic example of dwindling interest is how search queries for digital cameras collapsed after the introduction of the iPhone because most people no longer needed a separate camera device.

Similarly, the problem with dwindling traffic is not necessarily the content; it’s search trends. If search trends are behind the decline, the underlying cause is probably declining user interest, and the problem to solve is figuring out why interest in the topic is changing.

Typical reasons for declining user interest:

  • Perceptions of the topic changed
  • Seasonality
  • A technological disruption
  • The way words are used has changed
  • Popularity of the topic has waned

When diagnosing a drop in traffic, always keep an open mind to all possibilities, because sometimes there’s nothing wrong with the content or the SEO. The problem is with user interest, trends, and other factors that have nothing to do with the content itself.

There Are Many Reasons For A Drop In Traffic

The problem with inept SEO catch-all phrases is that, because they don’t describe anything specific, their meaning tends to morph until they describe things well beyond what they initially (and ineptly) covered.

Here are other reasons why traffic could decline (whether slowly or precipitously):

  1. The decay is happening to user interest in a topic (declining user interest is a better description).
  2. Traffic slows down because Google introduces a new navigational feature (like People Also Ask).
  3. Traffic slows because Google introduces a new rich result (video results, shopping results, featured snippets).
  4. A slow decline in search traffic can be a side effect of personalized search, which causes the site to rank less often and only for specific people or areas (Personalized Search).
  5. The drop in search traffic is because relevance changed (Algorithm Relevance Change)
  6. A drop in organic search traffic is due to improved competition (Competition)

Catch-All Phrases Are Not Useful

Content Decay is one of many SEO labels put on problems or strategies to make old problems and methods appear new. Too often those labels are inept and cause confusion because they don’t describe the problem.

Putting a name to the cause of a problem is good practice. So rather than use vague labels like Content Decay, make a conscious effort to call the problem or solution what it actually is. In the case of Content Decay, that means identifying the underlying problem (declining user interest) and referring to it by that name.

Featured Image by Shutterstock/Blueastro

Google: Proximity Not A Factor For Local Service Ads Rankings via @sejournal, @MattGSouthern

Google has clarified that a business’s proximity to a searcher isn’t a primary factor in how Local Services Ads are ranked.

This change reflects Google’s evolving understanding of what’s relevant to users searching for local service providers.

Chris Barnard, a Local SEO Analyst at Sterling Sky, started the discussion by pointing out an update to a Google Help Center article.

In a screenshot, he highlights that Google removed the section stating proximity is a factor in local search ad rankings.

Ginny Marvin, Google’s Ads Liaison, responded to clarify the change.

In a statement, Marvin said:

“LSA ranking has evolved over time as we have learned what works best for consumers and advertisers. We’ve seen that proximity of a business’ location is often not a key indicator of relevancy.

For example, the physical location of a home cleaning business matters less to potential customers than whether their home is located within the business’ service area.”

Marvin confirmed this wasn’t a sudden change but an update to “more accurately reflect these ranking considerations” based on Google’s learnings.

The updated article now states that location relevance factors include:

“…the context of a customer’s search… the service or job a customer is searching for, time of the search, location, and other characteristics.”

Proximity Still A Factor For Service Areas

Google maintains policies requiring service providers to limit their ad targeting to areas they can service from their business locations.

As Marvin cites, Google’s Local Services platform policies state:

“Local Services strives to connect consumers with local service providers. Targeting your ads to areas that are far from your business location and/or that you can’t reasonably serve creates a negative and potentially confusing experience for consumers.”

Why SEJ Cares

By de-emphasizing proximity, Google is giving its ad-serving algorithms the flexibility to surface the most relevant and capable providers.

This allows the results to match user intent better and connect searchers with companies that can realistically service their location.


FAQ

What should businesses do in response to the change in Local Services Ads ranking factors?

With the recent changes to how Google ranks Local Services Ads, businesses should update the service areas listed for their ads to reflect the regions where they can realistically provide services. You’ll want to match the service areas to what’s listed on your Google Business Profile.

Companies should also ensure their service offerings and availability information are up to date, as these are other key factors that impact how well their Local Services Ads rank and show up for relevant local searches.

Why is it important for marketers to understand changes to Local Services Ads ranking?

These changes affect how businesses get matched with potential customers. Google no longer heavily prioritizes closeness when ranking local service ads. Instead, it focuses more on other relevant factors.

Understanding this shift allows businesses to update their local service ad strategies. By optimizing for Google’s new priorities, companies can get their ads in front of the right audience.

Can a business still target areas far from their location with Local Services Ads?

No, Google doesn’t allow businesses to target areas they can’t realistically service.

This is to prevent customers from being matched with providers who are too far away to help them. Businesses can only advertise in areas close to their location or service areas.


Featured Image: Mamun sheikh K/Shutterstock

OpenAI Announces ChatGPT 4o Omni via @sejournal, @martinibuster

OpenAI announced a new version of ChatGPT that can accept audio, image, and text inputs and also generate outputs in audio, image, and text. OpenAI is calling the new version GPT-4o, with the “o” standing for “omni,” a combining form meaning “all.”

ChatGPT 4o (Omni)

OpenAI described the new version of ChatGPT as a progression toward more natural human and machine interactions, one that responds to user inputs at the same speed as a human-to-human conversation. The new version matches GPT-4 Turbo in English and significantly outperforms Turbo in other languages. There is also a significant improvement in API performance: it is faster and 50% less expensive to operate.

The announcement explains:

“As measured on traditional benchmarks, GPT-4o achieves GPT-4 Turbo-level performance on text, reasoning, and coding intelligence, while setting new high watermarks on multilingual, audio, and vision capabilities.”

Advanced Voice Processing

The previous method for communicating with voice chained together three different models: one to transcribe voice input to text, a second (GPT-3.5 or GPT-4) to process the text and output a text response, and a third to convert that text back into audio. That method is said to lose nuance at each step.

OpenAI described the downsides of the previous approach that are (presumably) overcome by the new approach:

“This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.”

The new version doesn’t need three different models because all of the inputs and outputs are handled together in one model for end-to-end audio input and output. Interestingly, OpenAI states that it hasn’t yet explored the full capabilities of the new model or fully mapped its limitations.

New Guardrails And An Iterative Release

GPT-4o features new guardrails and filters to keep it safe and avoid unintended voice outputs. However, today’s announcement says OpenAI is initially rolling out only text and image inputs with text outputs, plus limited audio at launch. GPT-4o is available for both free and paid tiers, with Plus users receiving five times higher message limits.

Audio capabilities are due for a limited alpha-phase release for ChatGPT Plus and API users within weeks.

The announcement explained:

“We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies.”

Read the announcement:

Hello GPT-4o

Featured Image by Shutterstock/Photo For Everything

Google Warns Against “Sneaky Redirects” When Updating Content via @sejournal, @MattGSouthern

When dealing with outdated website content, Google has warned against using certain redirects that could be perceived as misleading to users.

The advice came up during a recent episode of Google’s Search Off The Record podcast.

In the episode, Search Relations team members John Mueller and Lizzi Sassman discussed strategies for managing “content decay” – the gradual process of website content becoming obsolete over time.

During the conversation, the two Googlers addressed the practice of using redirects when older content is replaced or updated.

However, they cautioned against specific redirect methods that could be seen as “sneaky.”

When Rel=canonical Becomes “Sneaky”

The redirect method that raised red flags is the incorrect use of rel=canonical tags.

This was brought up during a discussion about linking similar, but not equivalent, content.

Sassman stated:

“… for that case, I wish that there was something where I could tie those things together, because it almost feels like that would be better to just redirect it.

For example, Daniel Weisberg on our team blogged about debugging traffic drops with Search Console in a blog post. And then we worked on that to turn that into documentation and we added content to it. We want people to go look at the new thing, and I would want people to find that new thing in search results as well.

So, to me, like that one, I don’t know why people would need to find the older version for that, because it’s not like an announcement. It was best practice kind of information.

So, for that, would it be better to do like a rel=canonical situation?”

Mueller immediately raised concerns with Sassman’s proposed use of the rel=canonical tag.

Mueller replied:

“The rel=canonical would be kind of sneaky there because it’s not really the same thing… it’s not equivalent.

I always see rel=canonical as something where you tell search engines ‘these are actually equivalent, and you can pick whichever one you want.’

We’re kind of seeing it as like, ‘Well, these are equivalent, but treat this as a redirect,’ which is tricky because they’re like, ‘Ah, they say rel=canonical, but they actually mean something different.’”

What To Do Instead

If you find yourself having to make a similar decision as Sassman, Mueller says this is the correct approach:

“I think either redirecting or not redirecting. It’s like really saying that it’s replaced or keeping both.”

The best way to link a page to a newer, more comprehensive page is with a redirect, not a rel=canonical.

Or you can keep them both up if you feel there’s still value in the older page.
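Mueller’s distinction comes down to what rel=canonical actually is: an in-page hint, not an HTTP redirect. As a minimal sketch (the page and URL are hypothetical), here’s how the tag sits in a page’s head section and how a crawler-like script could read it with Python’s standard library:

```python
# A rel=canonical tag is just a <link> element in the page's <head>;
# it hints that another URL is the equivalent, preferred version.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of a <link rel="canonical"> tag, if present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

# Hypothetical page declaring a preferred equivalent URL.
page = '<head><link rel="canonical" href="https://example.com/docs/guide"></head>'
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # https://example.com/docs/guide
```

Because it is only a hint, search engines may keep either URL in results, which is exactly why Mueller calls it “sneaky” when the two pages aren’t truly equivalent.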

Why SEJ Cares

Using redirects or canonical tags incorrectly can be seen as an attempt to manipulate search rankings, which violates Google’s guidelines and can result in penalties or a decrease in visibility.

Following Google’s recommendations can ensure your site remains in good standing and visitors access the most relevant content.

Listen to the full podcast episode below:


FAQ

What are the issues with using rel=canonical tags for updated content?

Using rel=canonical tags can be misleading if the old and new pages aren’t equivalent.

Google’s John Mueller suggests that rel=canonical implies the pages are identical and a search engine can choose either. Using it to signal a redirect when the content isn’t equivalent is seen as “sneaky” and potentially manipulative.

Rel=canonical should only be used when content is truly equivalent; otherwise, a 301 redirect or maintaining both pages is recommended.

Is it acceptable to keep outdated content accessible to users?

Yes, it’s acceptable to keep outdated content accessible if it still holds value. Google’s John Mueller suggests that you can either redirect outdated content to the updated page or keep both versions of the content live.

If the older content offers valuable information or historical context, it’s worthwhile to keep it accessible along with the updated version.

How should redirects be handled when updating website content?

The correct approach to handling redirects is to use a 301 redirect if the old content has been replaced or is considered obsolete.

A 301 redirect tells search engines—and visitors—that the old page has moved permanently to a new location. Additionally, it allows the transfer of link equity and minimizes negative impact on search rankings.
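Unlike rel=canonical, a 301 lives at the HTTP level. This standard-library sketch spins up a tiny local server that permanently redirects a made-up old path, then shows that a client transparently lands on the replacement (all paths are hypothetical):

```python
# Sketch of 301 behavior using only the standard library.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLD_PATH = "/blog/old-guide"   # hypothetical outdated URL
NEW_PATH = "/docs/new-guide"   # hypothetical replacement URL

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == OLD_PATH:
            # 301 tells clients and crawlers the page moved permanently.
            self.send_response(301)
            self.send_header("Location", NEW_PATH)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)  # port 0 = auto-pick
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 301 automatically; the final URL is the new path.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}{OLD_PATH}")
print(resp.geturl().endswith(NEW_PATH))  # True
server.shutdown()
```

The client never sees the old page again, which is the behavior Mueller recommends when content has truly been replaced.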


Featured Image: Khosro/Shutterstock