Google Hints At Improving Site Rankings In Next Update via @sejournal, @MattGSouthern

Google’s John Mueller says the Search team is “explicitly evaluating” how to reward sites that produce helpful, high-quality content when the next core update rolls out.

The comments came in response to a discussion on X about the impact of March’s core update and September’s helpful content update.

In a series of tweets, Mueller acknowledged the concerns, stating:

“I imagine for most sites strongly affected, the effects will be site-wide for the time being, and it will take until the next update to see similar strong effects (assuming the new state of the site is significantly better than before).”

He added:

“I can’t make any promises, but the team working on this is explicitly evaluating how sites can / will improve in Search for the next update. It would be great to show more users the content that folks have worked hard on, and where sites have taken helpfulness to heart.”

What Does This Mean For SEO Professionals & Site Owners?

Mueller’s comments confirm Google is aware of critiques about the March core update and is refining its ability to identify high-quality sites and reward them appropriately in the next core update.

For websites, clearly demonstrating an authentic commitment to producing helpful and high-quality content remains the best strategy for improving search performance under Google’s evolving systems.

The Aftermath Of Google’s Core Updates

Google’s algorithm updates, including the September “Helpful Content Update” and the March 2024 update, have far-reaching impacts on rankings across industries.

While some sites experienced surges in traffic, others faced substantial declines, with some reporting visibility losses of up to 90%.

As website owners implement changes to align with Google’s guidelines, many question whether their efforts will be rewarded.

There’s genuine concern about the potential for long-term or permanent demotions for affected sites.

Recovery Pathway Outlined, But Challenges Remain

In a previous statement, Mueller acknowledged the complexity of the recovery process, stating that:

“some things take much longer to be reassessed (sometimes months, at the moment), and some bigger effects require another update cycle.”

Mueller clarified that not all changes would require a new update cycle but cautioned that “stronger effects will require another update.”

While affirming that permanent changes are “not very useful in a dynamic world,” Mueller adds that “recovery” implies a return to previous levels, which may be unrealistic given evolving user expectations.

“It’s never ‘just-as-before’,” Mueller stated.

Improved Rankings On The Horizon?

Despite the challenges, Mueller has offered glimmers of hope for impacted sites, stating:

“Yes, sites can grow again after being affected by the ‘HCU’ (well, core update now). This isn’t permanent. It can take a lot of work, time, and perhaps update cycles, and/but a different – updated – site will be different in search too.”

He says the process may require “deep analysis to understand how to make a website relevant in a modern world, and significant work to implement those changes — assuming that it’s something that aligns with what the website even wants.”

Looking Ahead

Google’s search team is actively working on improving site rankings and addressing concerns with the next core update.

However, recovery requires patience, thorough analysis, and persistent effort.

The best way to spend your time until the next update is to remain consistent and produce the most exceptional content in your niche.


FAQ

How long does it generally take for a website to recover from the impact of a core update?

Recovery timelines can vary and depend on the extent and type of updates made to align with Google’s guidelines.

Google’s John Mueller noted that some changes might be reassessed quickly, while more substantial effects could take months and require additional update cycles.

Google acknowledges the complexity of the recovery process, indicating that significant improvements aligned with Google’s quality signals might be necessary for a more pronounced recovery.

What impact did the March and September updates have on websites, and what steps should site owners take?

The March and September updates had widespread effects on website rankings, with some sites experiencing traffic surges while others faced up to 90% visibility losses.

Publishing genuinely useful, high-quality content is key for website owners who want to bounce back from a ranking drop or maintain strong rankings. Stick to Google’s recommendations and adapt as they keep updating their systems.

To minimize future disruptions from algorithm changes, it’s a good idea to review your whole site thoroughly and build a content plan centered on what your users want and need.

Is it possible for sites affected by core updates to regain their previous ranking positions?

Sites can recover from the impact of core updates, but it requires significant effort and time.

Mueller suggested that recovery might happen over multiple update cycles and involves a deep analysis to align the site with current user expectations and modern search criteria.

While a return to previous levels isn’t guaranteed, sites can improve and grow by continually enhancing the quality and relevance of their content.


Featured Image: eamesBot/Shutterstock

Google Reveals Two New Web Crawlers via @sejournal, @martinibuster

Google revealed details of two new crawlers that are optimized for scraping image and video content for “research and development” purposes. Although the documentation doesn’t explicitly say so, it’s presumed that there is no impact on rankings should publishers decide to block the new crawlers.

It should be noted that the data scraped by these crawlers is not explicitly for AI training; that’s what the Google-Extended crawler is for.

GoogleOther Crawlers

The two new crawlers are versions of Google’s GoogleOther crawler, which launched in April 2023. The original GoogleOther crawler was also designated for use by Google product teams for research and development in what are described as one-off crawls, a description that offers clues about what the new GoogleOther variants will be used for.

The purpose of the original GoogleOther crawler is officially described as:

“GoogleOther is the generic crawler that may be used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.”

Two GoogleOther Variants

There are two new GoogleOther crawlers:

  • GoogleOther-Image
  • GoogleOther-Video

The new variants are for crawling binary data, which is data that’s not text. HTML pages are generally text files (ASCII or Unicode): if a file can be viewed in a text editor, it’s a text file. Binary files are files that can’t be opened in a text editor, such as image, audio, and video files.

The new GoogleOther variants are for image and video content. Google lists user agent tokens for both of the new crawlers, which can be used in a robots.txt file to block them.

1. GoogleOther-Image

User agent tokens:

  • GoogleOther-Image
  • GoogleOther

Full user agent string:

GoogleOther-Image/1.0

2. GoogleOther-Video

User agent tokens:

  • GoogleOther-Video
  • GoogleOther

Full user agent string:

GoogleOther-Video/1.0
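As a sketch of how the tokens above could be used: the rules below are an illustrative robots.txt fragment that blocks only the two new variants, verified with Python’s standard `urllib.robotparser` (the URLs are made up for the example).

```python
from urllib import robotparser

# Illustrative robots.txt rules that block only the two new variants,
# using the user agent tokens from Google's documentation. Blocking the
# broader "GoogleOther" token instead would cover all three crawlers.
RULES = """\
User-agent: GoogleOther-Image
Disallow: /

User-agent: GoogleOther-Video
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

print(parser.can_fetch("GoogleOther-Image", "https://example.com/photo.jpg"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/photo.jpg"))          # True
```

Because `robotparser` matches a crawler against the most specific applicable group, Googlebot and regular search crawling are unaffected by these rules.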

Newly Updated GoogleOther User Agent Strings

Google also updated the user agent strings for the regular GoogleOther crawler. For blocking purposes, you can continue using the same user agent token as before (GoogleOther). The new user agent strings are simply the data sent to servers to identify the full description of the crawler, in particular the technology used. In this case the technology is Chrome, with the version number periodically updated to reflect the version in use (W.X.Y.Z is a placeholder for a Chrome version number in the examples listed below).

The full list of GoogleOther user agent strings:

  • Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; GoogleOther)
  • Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GoogleOther) Chrome/W.X.Y.Z Safari/537.36

GoogleOther Family Of Bots

These new bots may show up in your server logs from time to time. The information above will help identify them as genuine Google crawlers, and it will help publishers who want to opt out of having their images and videos scraped for research and development purposes.
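As an illustration, here is a minimal Python sketch for flagging GoogleOther-family user agents in server log lines. The regex and sample log line are assumptions; also note that user agent strings can be spoofed, so truly confirming a genuine Google crawler additionally requires a reverse DNS lookup of the requesting IP.

```python
import re

# Matches any GoogleOther-family user agent token: GoogleOther,
# GoogleOther-Image, or GoogleOther-Video. The log format assumed
# here is the common "combined" format; adjust for your server.
GOOGLEOTHER = re.compile(r"GoogleOther(?:-Image|-Video)?", re.IGNORECASE)

def is_googleother(log_line: str) -> bool:
    """Return True if the line's user agent matches a GoogleOther crawler."""
    return GOOGLEOTHER.search(log_line) is not None

sample = '66.249.66.1 - - [15/May/2024] "GET /img/a.png HTTP/1.1" 200 1234 "-" "GoogleOther-Image/1.0"'
print(is_googleother(sample))  # True
```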

Featured Image by Shutterstock/ColorMaker

YouTube Unveils New Content And Ad Offerings At Brandcast via @sejournal, @gregjarboe

YouTube unveiled four new content and ad offerings at its 13th annual Brandcast at David Geffen Hall, Lincoln Center.

Key announcements include:

  • WNBA Friday night games: Google and Scripps Sports announced an agreement for YouTube TV to show the locally televised WNBA Friday night games on ION in both the home and away markets of the teams playing. This season, YouTube TV will be the only digital multichannel video programming distributor (dMVPD) carrying local and national WNBA games. The games will be part of the YouTube TV Base Plan starting May 31 and continuing through the regular season.
  • Creator Takeovers via YouTube Select: YouTube announced the expansion of this takeover option, initially piloted at the end of 2023, to more creators. With the formalization of this program, brands will be able to collaborate with top YouTube creators to own a 100% share of voice on their channel, leveraging the strong connections between creators and their fans.
  • Non-Skips for Video Reach Campaigns: YouTube announced a new AI-powered format optimized for connected TV (CTV), using non-skippable assets across in-stream inventory.
  • Branded QR Codes: YouTube launched branded QR codes, enabling advertisers to drive more interactivity by putting their brand front and center in a more modern QR code.

In addition to these news announcements, YouTube’s executive bench also took the stage to talk about their vision, the importance of content, and innovation in advertising.

Neal Mohan, the CEO of YouTube, said, “Creators are drawing audiences on the big screen because they’re the new Hollywood. They have business strategies, writers’ rooms, and production teams. They’re reimagining classic TV genres, from morning shows to sports commentary. And they’re inventing entirely new ones!”

He added, “Along the way, creators are redefining what we think of as ‘TV.’ And they deserve the same acclaim as other creative professionals. I believe it’s time a creator won an Emmy.”

YouTube CEO Neal Mohan speaks onstage during YouTube Brandcast 2024 at David Geffen Hall on May 15, 2024, in New York City. (Photo by Kevin Mazur/Getty Images for YouTube)

Mary Ellen Coe, the Chief Business Officer of YouTube, said:

“No one is more engaged than loyal YouTube fans. They excitedly count down to new videos, rewatch old ones, and create their own in response. And they rush to their favorite creator’s channel in the 24 hours after new videos are released. Which presents an ideal moment for brands to engage with these fans.”

Sean Downey, the President of Americas and Global Partners at Google, said:

“Google AI has been at the core of our ads solutions for years. As we advance, our ability to help brands drive ROI keeps improving.”

The night featured musical performances by Billie Eilish featuring FINNEAS, Benson Boone, and Stray Kids, as well as various YouTube creators, including Haley Kalil, Kinigra Deon, Ryan Trahan, Shannon Sharpe, and Zach King.

This underscored the findings of a Kantar survey: viewers in the United States say that if they could only watch one service for an entire year, YouTube would be their #1 choice.

Mary Ellen Coe, CBO, YouTube, speaks at YouTube Brandcast. (Photo by Noam Galai/Getty Images for YouTube)

The audience gleaned other critical data throughout the evening event, which was part of the Upfronts.

For example, according to Nielsen’s total TV and streaming report for the US, YouTube has remained the leader in streaming watch time every month since February 2023. And 9 out of 10 viewers say they use YouTube, according to a Pew Research Study.

According to YouTube’s internal data, the key CTV metrics included:

  • Views in the living room have increased by more than 130% from 2020 to 2023.
  • On average, viewers watch over 1 billion hours of YouTube content on the big screen (television) daily.
  • YouTube TV has more than 8 million paid subscribers.
  • Over 40 of YouTube’s top 100 channels by watch time have TV as their most-watched screen.
  • Last year, views of Shorts on connected TVs more than doubled.

Advertisers in the audience also snacked on these news nuggets:

  • According to a custom MMM meta-analysis commissioned by Google with Nielsen, on average, YouTube drives higher long-term Return on Ad Spend (ROAS) than TV, other online video, and paid social.
  • Based on a meta-analysis across 13 NCS sales lift studies, AI-powered video reach campaign mixes earned an average ROAS 3.7x higher (271%) than manually optimized campaigns.
  • According to a Kantar survey, viewers in the United States agree that YouTube is the #1 video platform for gaming content, outperforming TV, social, and streaming platforms.
  • According to a Google/Ipsos YouTube Trends Survey, 54% of people would rather watch creators break down a significant event like the Oscars or Grammys than watch it themselves.

Now, that’s a lot of news to digest. Still, as I mentioned in “Google Unveils Updates At IAB NewFronts 2024,” YouTube is also expected to make more announcements at VidCon Anaheim 2024, which will take place from June 26–29, 2024, at the Anaheim Convention Center.

So, as TV newscasters would say in the old days, “Don’t touch that dial.”


Featured Image: Muhammad Alimaki/Shutterstock

Google Ads Restricts Brand Names & Logos From AI Image Generation via @sejournal, @MattGSouthern

Google has provided details about the capabilities and limitations of its AI image generation tools for Google Ads.

The clarification came after search marketer Darcy Burk expressed excitement about the potential for AI to create product images.

This prompted Google’s Ads Liaison, Ginny Marvin, to outline some key restrictions.

Branded Content Off-Limits

Marvin confirmed that while Google’s AI tools can generate generic product images, they are designed to avoid creating visuals that depict branded items or logos.

Marvin stated:

“The tool will generate product images, but it won’t generate product images that include brand names or logos.”

She provided an illustrative example:

“So, for example, you could ask it to generate images of ‘a dog in a pet stroller in a park,’ but if you asked it to generate images of ‘a dog in a pet stroller in a park with a Doggo logo,’ you’ll get an error notification to remove mentions of brands and branded items from your description.”

Guidelines Outlined

Marvin points to Google’s support documentation for more details on using the AI image generation and editing capabilities.

When attempting to generate branded product images, users will likely receive an error message instructing them to remove any branded terms from their prompts.

Google’s support page notes:

“Generative AI tools in Google Ads are designed to automatically limit the creation of certain content.”

It lists “Faces, children, or specific individuals” and “Branded items and logos” as examples of restricted subject matter.

Restricted Verticals

Google’s documentation also addresses concerns around safety and responsible AI development.

Generated images include digital watermarking to identify their AI-generated nature and deter misuse.

Sensitive advertising verticals like politics and pharmaceuticals are also restricted from automatically receiving AI-generated image suggestions.

“As this technology evolves, we’re continuously evaluating and improving our approach to safety,” Google states.

Why SEJ Cares

As generative AI capabilities expand across the advertising ecosystem, clear guidelines from Google help provide guardrails to mitigate potential risks while allowing advertisers to experiment.

Understanding current limitations, such as restrictions around branded visuals, is critical for marketers looking to incorporate AI image generation into their workflows.

How This Can Help You

For advertisers, Google’s AI image generation tools can produce large volumes of high-quality generic product and lifestyle images at scale.

By following the outlined guidelines around avoiding branded references, you can generate a variety of visual assets suited for ecommerce product listings, display ads, social media marketing and more.

This can streamline traditionally time-consuming processes like product photoshoots while maintaining brand safety.


FAQ

How does Google Ads’ AI image generation tool handle branded content?

Google’s AI image generation tool can create generic product images but is designed to exclude any branded items or logos.

If a user tries to generate an image with specific brands or logos, the system will trigger an error notification directing them to remove those references before proceeding.

  • The tool generates generic product images
  • It excludes brand names and logos
  • Users receive error notifications guiding them to correct prompts

What kind of content is restricted when using Google Ads’ AI image generation tools?

Several types of content are restricted when using the AI image generation tools in Google Ads.

Restrictions include creating images featuring faces, children, specific individuals, branded items, and logos.

Sensitive verticals like politics and pharmaceuticals are also barred from receiving AI-generated image suggestions.

How does the restriction on branded content benefit marketers using Google’s AI tools?

By focusing on generating only generic product images, advertisers can utilize the tool for a variety of applications, such as ecommerce product listings, display ads, and social media marketing, without risking any legal issues related to brand misuse.


Featured Image: DANIEL CONSTANTE/Shutterstock

Google AI Overviews = Theft? Court Ruling Sets Precedent via @sejournal, @MattGSouthern

Google’s bold new vision for the future of online search, powered by AI technology, is fueling an industrywide backlash over fears it could damage the internet’s open ecosystem.

At the center of the controversy are Google’s newly launched “AI Overviews,” which are generated summaries that aim to directly answer search queries by pulling in information from across the web.

AI overviews appear prominently at the top of results pages, potentially limiting users’ need to click through to publishers’ websites.

The move sparked legal action in France, where publishers filed cases accusing Google of violating intellectual property rights by ingesting their content to train AI models without permission.

A group of French publishers won an early court battle in April 2024. A judge ordered Google to negotiate fair compensation for repurposing snippets of their content.

Publishers in the US are raising similar objections as Google’s new AI search overviews threaten to siphon traffic away from sources. They argue that Google unfairly profits from others’ content.

The debate highlights the need for updated frameworks governing the use of online data in the age of AI.

Concerns From Publishers

According to industry watchers, the implications of AI overviews could impact millions of independent creators who depend on Google Search referral traffic.

Frank Pine, executive editor at MediaNews Group, tells The Washington Post:

“If journalists did that to each other, we’d call that plagiarism.”

Pine’s company, which publishes the Denver Post and Boston Herald, is among those suing OpenAI for allegedly scraping copyrighted articles to train their language models.

Google’s revenue model has long been predicated on driving traffic to other websites and monetizing that flow through paid advertising channels.

AI overviews threaten to shift that revenue model.

Kimber Matherne, who runs a food blog, is quoted in the Post’s article stating:

“[Google’s] goal is to make it as easy as possible for people to find the information they want. But if you cut out the people who are the lifeblood of creating that information, then that’s a disservice to the world.”

According to the Post’s report, Raptive, an ad services firm, estimates the changes could result in $2 billion in lost revenue for online creators.

They also believe some websites could lose two-thirds of their search traffic.

Raptive CEO Michael Sanchez tells The Post:

“What was already not a level playing field could tip its way to where the open internet starts to become in danger of surviving.”

Concerns From Industry Professionals

Google’s AI overviews are understandably raising concerns among industry professionals, as expressed through numerous tweets criticizing the move.

Matt Gibbs questioned how Google developed the knowledge base for its AI, bluntly stating, “They ripped it off publishers who did the actual work to create the knowledge. Google are a bunch of thieves.”

In her tweet, Kristine Schachinger echoed similar sentiments, referring to Google’s AI answers as “a complete digital theft engine which will prevent sites getting clicks at all.”

Gareth Boyd retweeted a quote from the Washington Post article highlighting the struggles of blogger Jake Boly, whose site recently saw a 96% drop in Google traffic.

Boyd said, “The precedent being set by OpenAI and Google is scary…” and that “more people should be equally angry” at both companies for the “open theft of content.”

In his tweet, Avram Piltch directly accused Google of theft, stating, “the data used to train their AI came from the very publishers that allowed Google to crawl them and are now going to be harmed. This is theft, plain and simple. And it’s a threat to the future of the web.”

Lily Ray made a similar claim about Google: “Using all the content they took from the sites that made Google. With little to no attribution or traffic.”

Legal Gray Area

The controversy taps into broader debates around intellectual property and fair use, as AI systems are trained on unprecedented scales of data scraped across the internet.

Google argues its models only ingest publicly available web data and that publishers previously benefited from search traffic.

Publishers implicitly consent to their content being indexed by search engines unless they opt out.

However, laws weren’t conceived with training AI models in mind.

What’s The Path Forward?

This debate highlights the need for new rules around how AI uses online data.

The way forward is unclear, but the stakes are high.

Some suggest revenue sharing or licensing fees when publisher content is used to train AI models. Others propose an opt-in system that gives website owners more control over how their content is used for AI training.

The French rulings suggest that, absent explicit guidelines and good-faith negotiations, the courts may step in.

The web has always relied on a balance between search engines and content creators. If that balance is disrupted without new safeguards, it could undermine the exchange of information that makes the internet so valuable.


Featured Image: Veroniksha/Shutterstock

New Google AI Overviews Documentation & SEO via @sejournal, @martinibuster

Google published new documentation about its new AI Overviews search feature, which summarizes an answer to a search query and links to webpages where more information can be found. The new documentation offers important information about how the feature works and what publishers and SEOs should consider.

What Triggers AI Overviews

AI Overviews shows when the user intent is to quickly understand information, especially when that information need is tied to a task.

“AI Overviews appear in Google Search results when our systems determine …when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

In another part of the documentation it ties the trigger to task-based information needs:

“…and use the information they find to advance their tasks.”

What Kinds Of Sites Does AI Overviews Link To?

An important fact to consider is that just because AI Overviews is triggered by a user’s need to quickly understand something doesn’t mean that only queries with an informational intent will trigger the new search feature. Google’s documentation makes it clear that the kinds of websites that can benefit from AI Overviews links include “creators” (which implies video creators), ecommerce stores, and other businesses. This means that far more than informational websites will benefit from AI Overviews.

The new documentation lists the kinds of sites that can receive a link from the AI overviews:

“This allows people to dig deeper and discover a diverse range of content from publishers, creators, retailers, businesses, and more, and use the information they find to advance their tasks.”

Where AI Overviews Sources Information

AI Overviews shows information from the web and the Knowledge Graph. Large language models currently need to be retrained from the ground up to incorporate significant amounts of new data. That suggests the websites displayed in the AI Overviews feature are selected from Google’s standard search index, which in turn means Google may be using retrieval-augmented generation (RAG).

RAG is a system that sits between a large language model and a database of information external to the LLM. This external database can range from a specific knowledge base, such as the entire content of an organization’s HR policies, to a search index. It’s a supplemental source of information that can be used to double-check what the LLM generates or to show where to read more about the question being answered.
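To make the idea concrete, here is a toy RAG sketch in Python. The corpus, the word-overlap scoring, and the prompt template are all illustrative assumptions for teaching purposes, not Google’s implementation (production systems use learned embeddings and an actual LLM call).

```python
# Toy retrieval-augmented generation: retrieve relevant documents from
# an external corpus, then ground the LLM prompt in those documents.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (illustrative only)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc_id: len(q_words & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble an LLM prompt grounded in the retrieved documents."""
    context = "\n".join(f"[{doc_id}] {corpus[doc_id]}" for doc_id in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = {
    "page1": "HR policy on vacation days and paid leave",
    "page2": "How to bake sourdough bread at home",
}
print(build_prompt("how many vacation days do I get", corpus))
```

The point of the sketch is the architecture: the retrieval step runs against an index maintained outside the model, so fresh documents can be surfaced (and cited) without retraining the LLM.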

The section quoted at the beginning of the article notes that AI Overviews cites sources from the web and the Knowledge Graph:

“AI Overviews appear in Google Search results when our systems determine …when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

What Automatic Inclusion Means For SEO

Inclusion in AI Overviews is automatic, and there’s nothing specific to AI Overviews that publishers or SEOs need to do. Google’s documentation says that following its guidelines for ranking in regular search is all you have to do to rank in AI Overviews. Google’s “systems” determine which sites are picked to show up for the topics surfaced in AI Overviews.

All the statements seem to confirm that the new Overviews feature sources data from the regular Search Index. It’s possible that Google filters the search index specially for AI Overviews but offhand I can’t think of any reason Google would do that.

All the statements indicating automatic inclusion point to the likely possibility that Google uses the regular search index:

“No action is needed for publishers to benefit from AI Overviews.”

“AI Overviews show links to resources that support the information in the snapshot, and explore the topic further.”

“…diverse range of content from publishers, creators, retailers, businesses, and more…”

“To rank in AI Overviews, publishers only need to follow the Google Search Essentials guide.”

“Google’s systems automatically determine which links appear. There is nothing special for creators to do to be considered other than to follow our regular guidance for appearing in search, as covered in Google Search Essentials.”

Think In Terms Of Topics

Obviously, keywords and synonyms in queries and documents play a role. But in my opinion they play an oversized role in SEO. There are many ways a search engine can annotate a document in order to match a webpage to a topic, like what Googler Martin Splitt referred to as a centerpiece annotation. A centerpiece annotation is used by Google to label a webpage with what that webpage is about.

Semantic Annotation

This kind of annotation links webpage content to concepts, which in turn gives structure to an unstructured document. Every webpage is unstructured data, so search engines have to make sense of it. Semantic annotation is one way to do that.

Google has been matching webpages to concepts since at least 2015. A Google webpage about its cloud products describes how neural matching was integrated into the search engine for the purpose of annotating webpage content with its inherent topics.

This is what Google says about how it matches webpages to concepts:

“Google Search started incorporating semantic search in 2015, with the introduction of noteworthy AI search innovations like deep learning ranking system RankBrain. This innovation was quickly followed with neural matching to improve the accuracy of document retrieval in Search. Neural matching allows a retrieval engine to learn the relationships between a query’s intentions and highly relevant documents, allowing Search to recognize the context of a query instead of the simple similarity search.

Neural matching helps us understand fuzzier representations of concepts in queries and pages, and match them to one another. It looks at an entire query or page rather than just keywords, developing a better understanding of the underlying concepts represented in them.”

Google’s been doing this, matching webpages to concepts, for almost ten years. Google’s documentation about AI Overviews also mentions that showing links to webpages based on topics is a part of determining what sites are ranked in AI Overviews.

Here’s how Google explains it:

“AI Overviews show links to resources that support the information in the snapshot, and explore the topic further.

…AI Overviews offer a preview of a topic or query based on a variety of sources, including web sources.”

Google’s focus on topics has been a thing for a long time, and it’s well past time SEOs loosened their grip on keyword targeting and gave topic targeting a chance to enrich their ability to surface content in Google Search, including in AI Overviews.

Google says that the same optimizations described in its Search Essentials documentation for ranking in Google Search are the optimizations to apply to rank in AI Overviews.

This is exactly what the new documentation says:

“There is nothing special for creators to do to be considered other than to follow our regular guidance for appearing in search, as covered in Google Search Essentials.”

Read Google’s New SEO Related Documentation On AI Overviews

AI Overviews and your website

Featured Image by Shutterstock/Piotr Swat

Google Rolls Out New ‘Web’ Filter For Search Results via @sejournal, @MattGSouthern

Google is introducing a filter that allows you to view only text-based webpages in search results.

The “Web” filter, rolling out globally over the next two days, addresses demand from searchers who prefer a stripped-down, simplified view of search results.

Danny Sullivan, Google’s Search Liaison, states in an announcement:

“We’ve added this after hearing from some that there are times when they’d prefer to just see links to web pages in their search results, such as if they’re looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features.”

The new functionality is a throwback to when search results were more straightforward. Now, they often combine rich media like images, videos, and shopping ads alongside the traditional list of web links.

How It Works

On mobile devices, the “Web” filter will be displayed alongside other filter options like “Images” and “News.”

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

If Google’s systems don’t automatically surface it based on the search query, desktop users may need to select “More” to access it.

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

More About Google Search Filters

Google’s search filters allow you to narrow results by type. The options displayed are dynamically generated based on your search query and what Google’s systems determine could be most relevant.

The “All Filters” option provides access to filters that are not shown automatically.

Alongside filters, Google also displays “Topics” – suggested related terms that can further refine or expand a user’s original query into new areas of exploration.

For more about Google’s search filters, see its official help page.


Featured Image: egaranugrah/Shutterstock

Was OpenAI GPT-4o Hype A Troll On Google? via @sejournal, @martinibuster

OpenAI managed to steal attention away from Google in the weeks leading up to Google’s biggest event of the year, Google I/O. But when the big announcement arrived, all the company had to show was a language model slightly better than the previous one, with the “magic” part not even in the Alpha testing stage.

OpenAI may have left users feeling like a mom receiving a vacuum cleaner for Mother’s Day, but it surely succeeded in minimizing press attention for Google’s important event.

The Letter O

The first hint that there’s at least a little trolling going on is the name of the new model, GPT-4 “o,” with the letter “o” echoing the name of Google’s event, I/O.

OpenAI says that the letter stands for Omni, meaning “all,” but it sure seems like there’s a subtext to that choice.

GPT-4o Oversold As Magic

Sam Altman in a tweet the Friday before the announcement promised “new stuff” that felt like “magic” to him:

“not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”

OpenAI co-founder Greg Brockman tweeted:

“Introducing GPT-4o, our new model which can reason across text, audio, and video in real time.

It’s extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction):”

The announcement itself explained that previous versions of ChatGPT used three models to process audio input: one model to turn audio input into text, another to complete the task and produce a text response, and a third to turn that text output into audio. The breakthrough for GPT-4o is that it processes audio input and output within a single model, responding in roughly the same amount of time it takes a human to listen and reply to a question.

But the problem is that the audio part isn’t online yet. OpenAI is still working on the guardrails, and it will take weeks before an Alpha version is released to a few users for testing. Alpha versions are expected to have bugs, while Beta versions are generally closer to the final product.

This is how OpenAI explained the disappointing delay:

“We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities.”

The most important part of GPT-4o, the audio input and output, is finished but the safety level is not yet ready for public release.

Some Users Disappointed

It’s inevitable that an incomplete and oversold product would generate some negative sentiment on social media.

AI engineer Maziyar Panahi (LinkedIn profile) tweeted his disappointment:

“I’ve been testing the new GPT-4o (Omni) in ChatGPT. I am not impressed! Not even a little! Faster, cheaper, multimodal, these are not for me.
Code interpreter, that’s all I care and it’s as lazy as it was before!”

He followed up with:

“I understand for startups and businesses the cheaper, faster, audio, etc. are very attractive. But I only use the Chat, and in there it feels pretty much the same. At least for Data Analytics assistant.

Also, I don’t believe I get anything more for my $20. Not today!”

Others across Facebook and X expressed similar sentiments, although many were happy with what they felt were improvements in speed and cost for API usage.

Did OpenAI Oversell GPT-4o?

Given that GPT-4o is in an unfinished state, it’s hard to escape the impression that the release was timed to coincide with, and detract from, Google I/O. Releasing a half-finished product on the eve of Google’s big day may have inadvertently created the impression that GPT-4o, in its current state, is a minor iterative improvement.

In its current state it’s not a revolutionary step forward, but once the audio portion of the model exits the Alpha testing stage and makes it through Beta testing, then we can start talking about a revolution in large language models. By the time that happens, Google and Anthropic may have already planted a flag on that mountain.

OpenAI’s announcement paints a lackluster image of the new model, promoting its performance as on the same level as GPT-4 Turbo. The only bright spots are the significant improvements in languages other than English and for API users.

OpenAI explains:

  • “It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API.”

Here are the scores across six benchmarks, which show GPT-4o barely squeaking past GPT-4T in most tests but falling behind GPT-4T in an important benchmark for reading comprehension:

  • MMLU (Massive Multitask Language Understanding)
    This is a benchmark for multitasking accuracy and problem solving across more than fifty topics like math, science, history, and law. GPT-4o (scoring 88.7) is slightly ahead of GPT-4 Turbo (86.9).
  • GPQA (Graduate-Level Google-Proof Q&A Benchmark)
    This is 448 multiple-choice questions written by human experts in fields like biology, chemistry, and physics. GPT-4o scored 53.6, outscoring GPT-4T (48.0) by more than five points.
  • Math
    GPT-4o (76.6) outscores GPT-4T (72.6) by four points.
  • HumanEval
    This is the coding benchmark. GPT-4o (90.2) slightly outperforms GPT-4T (87.1) by about three points.
  • MGSM (Multilingual Grade School Math Benchmark)
    This tests LLM grade-school level math skills across ten different languages. GPT-4o scores 90.5 versus 88.5 for GPT-4T.
  • DROP (Discrete Reasoning Over Paragraphs)
    This is a benchmark comprising 96k questions that test a language model’s comprehension of paragraph contents. GPT-4o (83.4) scores nearly three points lower than GPT-4T (86.0).
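Taken together, the deltas are small. A quick sketch, using only the numbers reported in the list above, makes the gaps concrete:

```python
# Benchmark scores as reported above: (GPT-4o, GPT-4 Turbo)
scores = {
    "MMLU": (88.7, 86.9),
    "GPQA": (53.6, 48.0),
    "MATH": (76.6, 72.6),
    "HumanEval": (90.2, 87.1),
    "MGSM": (90.5, 88.5),
    "DROP": (83.4, 86.0),
}

for name, (gpt4o, gpt4t) in scores.items():
    delta = round(gpt4o - gpt4t, 1)
    status = "ahead" if delta > 0 else "behind"
    print(f"{name:>9}: GPT-4o {gpt4o} vs GPT-4T {gpt4t} ({delta:+.1f}, {status})")
```

Run against the published numbers, GPT-4o leads every benchmark except DROP, where it trails by 2.6 points.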

Did OpenAI Troll Google With GPT-4o?

Given the provocatively named model with the letter o, it’s hard not to suspect that OpenAI was trying to steal media attention in the lead-up to Google’s important I/O conference. Whether or not that was the intention, OpenAI wildly succeeded in minimizing the attention given to Google’s event.

Is a language model that barely outperforms its predecessor worth all the hype and media attention it received? The pending announcement dominated news coverage ahead of Google’s big event, so for OpenAI the answer is clearly yes, it was worth the hype.

Featured Image by Shutterstock/BeataGFX

SGE Is Here. Google Rolls Out AI-Powered Overviews To US Search Results via @sejournal, @MattGSouthern

At its annual I/O developer conference, Google unveiled plans to incorporate generative AI directly into Google Search.

Additionally, Google announced an expansion to Search Generative Experience (SGE), designed to reinvent how people discover and consume information.

Upcoming upgrades include:

  • Adjustable overviews to simplify language or provide more detail
  • Multi-step reasoning to handle complex queries with nuances
  • Built-in planning capabilities for tasks like meal prep and vacations
  • AI-organized search result pages to explore ideas and inspiration
  • Visual search querying through uploaded videos and images

Liz Reid, Head of Google Search, states in an announcement:

“Now, with generative AI, Search can do more than you ever imagined. So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

What’s New In Google Search & SGE

New Gemini Model

A customized Gemini language model is central to Google’s AI-powered Search revamp.

Google’s announcement states:

“This is all made possible by a new Gemini model customized for Google Search. It brings together Gemini’s advanced capabilities — including multi-step reasoning, planning and multimodality — with our best-in-class Search systems.”

AI overviews generate quick answers to users’ queries, piecing together information from multiple sources.

Google reports that people have already used AI Overviews billions of times through Search Labs.

AI Overviews In US Search Results

Google is bringing AI overviews from Search Labs into its general search results pages.

That means hundreds of millions of US searchers will gain access to AI overviews this week and over 1 billion by year’s end.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

Searchers will soon be able to adjust the language and level of detail in AI overviews to suit their needs and understanding of the topic.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

Complex Questions & Planning Capabilities

SGE’s multi-step reasoning capabilities will allow you to ask complex questions and receive detailed answers.

For example, you could ask, “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill,” and receive a comprehensive response.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

In addition to answering complex queries, SGE will offer planning assistance for various aspects of life, such as meal planning and vacations.

You can request a customized meal plan by searching for something like “create a 3-day meal plan for a group that’s easy to prepare.” You will receive a tailored plan with recipes from across the web.

AI-Organized Results & Visual Search

Google is introducing AI-organized results pages that categorize helpful results under unique, AI-generated headlines, presenting diverse perspectives and content types.

This feature will initially be available for dining and recipes, with plans to expand to movies, music, books, hotels, shopping, and more.

SGE will also enable users to ask questions using video content. This visual search capability can save you time describing issues or typing queries, as you can record a video instead.

Image Credit: blog.google/products/search/generative-ai-google-search-may-2024/, May 2024.

What Does This Mean For Businesses?

While Google touts SGE as a way to enhance search quality, the prominence of AI-generated content could impact businesses and publishers who rely on Google Search traffic.

AI overviews occupy extensive screen real estate and could bury traditional “blue link” web results, significantly limiting clickthrough rates.

Data from ZipTie and Search Engine Journal contributor Bart Goralewicz indicate that SGE displays cover over 80% of search queries across most verticals.

Additionally, under SGE’s unique ranking system, only 47% of the top 10 traditional web results appear as sources powering AI overview generation.

Bart Goralewicz, Founder of Onely, states:

“SGE operates on a completely different level compared to traditional search. If you aim to be featured in Google SGE, you’ll need to develop a distinct strategy tailored to this new environment. It’s a whole new game.”

Tomasz Rudzki of ZipTie cautions:

“Google SGE is the most controversial and anxiety-provoking change in search. With so much changing week by week, businesses relying on organic search must carefully monitor SGE’s evolution.”

How To Optimize Your Site for SGE

As AI search accelerates, SEO professionals and content creators face new challenges in optimizing for discoverability.

Consider implementing these tactics for a potential increase in visibility in search results.

Structure content explicitly as questions and direct answers.
With AI overviews answering queries directly, optimizing content in a question-and-answer format may increase the likelihood of having it surfaced by Google’s AI models.

Create topic overview pages spanning initial research to final decisions.
Google’s AI search can handle complex, multi-step queries. Creating comprehensive overview content that covers the entire journey—from initial research to final purchasing decisions—could position those pages as prime sources for Google’s AI.

Pursue featured status on high-authority Q&A and information sites.
Studies found sites like Quora and Reddit are frequently cited in Google’s AI overviews. Having authoritative, industry-expert-level content featured prominently on these high-profile Q&A platforms could increase visibility within AI search results.

Maximize technical SEO for improved crawling of on-page content.
Like traditional search, which leverages web crawlers, Google’s AI models still rely on crawling a site’s content. Ensuring technical SEO allows crawlers to access and properly render all on-page content is crucial for it to surface in AI overviews.

Track search volume for queries exhibiting AI overviews.
Identifying queries that currently trigger AI overviews can reveal content gaps and optimization opportunities. Tracking search volume for these queries enables prioritizing efforts around high-value terms and topics Google already enhances with AI results.
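The last tactic, tracking which queries trigger AI overviews, can be sketched as a small prioritization script. This is a minimal illustration only: the queries, volumes, and overview flags below are made-up placeholders for whatever your rank tracker exports, not real data.

```python
# Hypothetical rank-tracker export: (query, monthly search volume, triggers AI overview?)
tracked_queries = [
    ("best pilates studios boston", 9900, True),
    ("how to fix slow wifi", 14800, True),
    ("acme widgets login", 4400, False),
    ("3 day meal plan easy", 6600, True),
    ("widget repair near me", 2900, False),
]

# Keep only queries where Google already shows an AI overview,
# then sort by volume so high-value gaps are addressed first.
priorities = sorted(
    (q for q in tracked_queries if q[2]),
    key=lambda q: q[1],
    reverse=True,
)

for query, volume, _ in priorities:
    print(f"{volume:>6}  {query}")
```

The same loop works on a CSV export from any tool that flags AI overview presence; the point is simply to rank optimization work by the volume of queries Google already enhances with AI results.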

Looking Ahead

As Google moves forward with its AI-centric search vision, disruptions could reshape digital economies and information ecosystems.

Companies must adapt their strategies for an AI-powered search landscape.

We will be following these developments closely at Search Engine Journal with an aim to provide strategies to help make your content discoverable in SGE.