Google Search Team Explains The “It Depends” Response via @sejournal, @MattGSouthern

Google’s Search Relations team has explained why their SEO advice often sounds vague or comes with conditions, such as “it depends.”

In a recent Search Off the Record podcast, team members Martin Splitt and Gary Illyes shared the challenges that prevent them from providing clear-cut answers.

The discussion was part of what the team referred to as a “more human episode.”

The Googlers acknowledged they sometimes come across as robotic and used this episode to show a more human side.

The Context Problem

Splitt works as Google’s bridge between developers and SEO professionals. He provided an example of how good advice can be distorted when people overlook the broader context.

At a Tech SEO Summit, he presented a slide with a bold statement about JavaScript performance. To prevent confusion, he added a note stating that the slide lacked context and provided a full explanation during the talk.

But even with that, he said the statement still got pulled out and repeated on its own.

“I had a remark on that slide saying there’s context missing here, and then I gave all that context… The problem with me saying that in general is that people will just take that one sentence and ignore everything else I said before or after.”

He clarified that JavaScript plays an important role in many web experiences, like enabling offline support. But that nuance often gets lost when single lines are quoted in isolation.

Why Google Doesn’t Share Slides

This loss of context is one reason why Google teams don’t typically share their presentation slides.

Illyes confirmed that slides on their own can be misleading:

“Our slides without context, they are useless.”

The team sees what happens when advice meant for one specific situation gets used everywhere. This can hurt websites that have different needs.

For example, advice that works for a small local business might be wrong for a global company with websites in multiple languages.

The “It Depends” Situation

Both Google reps know the SEO community gets frustrated with “it depends” answers.

Splitt even called it his “pet peeve.” But they explained why they can’t give simple yes-or-no answers.

Splitt noted:

“Someone who is serving a very specific niche with highly regulated content in a single country in a single language might have very different requirements than a multilanguage multinational brand that sells everything to everyone.”

They try to give more complete answers by explaining what factors matter. But this makes their advice longer and more complex.

The Google team also worries about how people use their quotes. Splitt said people often pick one statement while ignoring other important information.

He explained:

“It often makes things tricky because people might cherry pick and might pick one thing you said, take that out of context and use it as an example why people should follow their agenda rather than ours.”

While they know public statements can be quoted freely, both reps feel bad when selective quoting gets out of control.

What This Means

The Google team’s openness about their struggles affirms the experience of many SEO professionals.

Google’s guidance often feels cautious because it needs to account for a wide range of use cases.

Instead of seeking simple answers, focus on the factors that influence Google’s recommendations.

Understanding the “why” behind Google’s advice is more useful than chasing one-size-fits-all solutions.

Listen to the full podcast episode below:


Featured Image: Roman Samborskyi/Shutterstock

Google Removes Robots.txt Guidance For Blocking Auto-Translated Pages via @sejournal, @MattGSouthern

Google has removed its robots.txt guidance for blocking auto-translated pages. The change aligns Google’s technical documentation with its spam policies.

  • Google removed guidance advising websites to block auto-translated pages via robots.txt.
  • This aligns with Google’s policies that judge content by user value, not creation method.
  • Use meta tags like “noindex” for low-quality translations instead of sitewide exclusions (see the example below).
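As a point of reference, a page-level noindex is a single robots meta tag in the page’s head. This is a generic example, not drawn from Google’s updated documentation:

<meta name="robots" content="noindex">

An equivalent X-Robots-Tag: noindex HTTP response header can be used where editing the HTML isn’t practical.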
Google Offers Voluntary Buyouts To Core U.S. Teams Amid AI Push via @sejournal, @MattGSouthern

Google is offering voluntary buyouts to employees across several of its core U.S.-based teams, including Search, Ads, engineering, marketing, and research.

The offer provides eligible employees with at least 14 weeks of severance and is available through July 1, according to reporting from The Verge and The Information.

The buyouts are limited to employees in the U.S. who report into Google’s Core Systems division, and exclude staff at DeepMind, Google Cloud, YouTube, and central ad sales.

An Exit Path, Not a Layoff

While Google has conducted layoffs in other departments earlier this year, the current program is being positioned differently.

It’s entirely voluntary and framed as an opportunity for employees to step away if their goals or performance no longer align with Google’s direction.

In a memo obtained by Business Insider, Jen Fitzpatrick, the Senior Vice President of Core Systems, explained the reasoning behind the move:

“The Voluntary Exit Program may be a fit for Core Googlers who aren’t feeling excited about and aligned with Core’s mission and goals, or those who are having difficulty meeting the demands of their role.”

Fitzpatrick added:

“This isn’t about reducing the number of people in Core. We will use this opportunity to create internal mobility and fresh growth opportunities.”

While the message downplays the idea of forced exits, this move bears a resemblance to earlier reorganizations.

In January, Google began internal reshuffling in its Platforms and Devices division, which later led to confirmed layoffs affecting Pixel, Nest, Android, and Assistant teams. Whether the current buyouts will lead to further cuts remains to be seen.

New Return-to-Office Rules

Alongside the exit program, Google is updating its hybrid work policy.

All U.S.-based Core employees who live within 50 miles of an approved return site are being asked to transfer back to an office and follow the standard three-day in-office schedule.

Fitzpatrick noted that while remote flexibility is still supported, in-person presence is viewed as critical to collaboration and innovation.

Fitzpatrick wrote:

“When it comes to connection, collaboration, and moving quickly to innovate together, there’s just no substitute for coming together in person.”

These changes are positioned as part of a cultural shift toward spending more time in the office and aligning around shared goals.

Tied to Google’s Broader AI Push

This move comes as Google deploys its AI strategy across multiple business units, a push that over the past year has driven changes both internally and externally.

Fitzpatrick’s memo opens by framing the current moment as a “transformational” shift for Google:

“AI is reshaping everything—our products, our tools, the way we work, how we innovate, and so on.”

Looking Ahead

While Google insists this isn’t about cutting jobs, the voluntary exit program and mandatory RTO policies make one thing clear: Google is fine-tuning who builds its products and how they do it.

Google wants its teams engaged, in-office, and ready to build the next generation of AI-driven tools.

For marketers and SEO professionals, this restructuring could foreshadow faster product rollouts, rapidly evolving search experiences, and continued automation in advertising tools.


Featured Image: Roman Samborskyi/Shutterstock

YouTube Begins Showing Posts In The Shorts Feed via @sejournal, @MattGSouthern

YouTube has announced that Posts will now appear in the Shorts feed. This change allows users to see and interact with Posts while watching short videos.

How the Feature Works

You can now see Posts while scrolling through Shorts on YouTube.

In the screenshot below, you can see that the layout keeps the same vertical aspect ratio. As you scroll, the video shrinks to half the screen, with the Post occupying the other half.

Screenshot from: YouTube.com/CreatorInsider, June 2025.

You can like and comment on these Posts without stopping your video.

See more about it in YouTube’s video announcement:

Background on YouTube Posts

YouTube has offered Posts as part of its creator toolkit for several years. Posts let channel owners share polls, quizzes, GIFs, text updates, images, and videos.

Posts are found in a special tab on the creator’s channel. They can also show up on subscribers’ homepages or in their subscription feeds.

Potential New Exposure

This update gives creators a new way to reach their YouTube audience with posts.

Further, this gives creators a way to reach Shorts viewers without creating vertical videos.

If you only publish traditional long-form content on your channel, you can potentially get into the Shorts feed by publishing text or images.

Looking Ahead

With this update, YouTube is experimenting with combining other content types with its most popular features.

It’s possible that YouTube is making this change because it’s competing with Instagram and TikTok, which mix videos with different types of content. Combining content formats has the potential to boost user engagement and keep people on YouTube longer.

For creators, this provides an additional distribution channel with minimal cost of entry. Writing a text post may now land you in the same feed as a fully produced Short.

YouTube hasn’t announced specifics for how Posts will be selected or how often they’ll appear. Creators will have to do their own testing to see how this impacts visibility.


Featured Image: Roman Samborskyi/Shutterstock

Google Shows Why Rankings Collapsed After Domain Migration via @sejournal, @martinibuster

Google’s John Mueller used a clever technique to show the publisher of an educational site how to diagnose their search performance issues, which were apparently triggered by a domain migration but were actually caused by the content.

Site With Ranking Issues

Someone posted a plea on the Bluesky social network for help recovering from a site migration gone wrong. The person associated with the website attributed the de-indexing directly to the site migration because there was a direct correlation between the two events.

SEO Insight
An interesting point to highlight is that the site migration preceded the de-indexing, but it was not the cause. The migration set in motion a chain of events that led to the real cause, which, as you’ll see later on, is low-quality content. A common error SEOs and publishers make is to stop investigating once they find the most obvious reason for what’s happening. But the most obvious reason is not always the actual one, as you’ll see further on.

This is what was posted on social media:

“Hello SEO Community,

Sudden Deindexing & Traffic Drop after Domain Migration (from javatpoint.com to tpointtech.com) – Need Help”

Google’s John Mueller answered their plea and suggested they do a site search on Bing with their new domain, like this:

site:tpointtech.com sexy

When you do that, Bing shows “top ten list” articles about various Indian celebrities.

Google’s John Mueller also suggested doing site searches for “watch online” and “top ten list,” which revealed that the site hosts scores of low-quality web pages that are irrelevant to its topic.

A screenshot of one of the pages shows how abundant the off-topic web pages are on that website:

Where Did Irrelevant Pages Come From?

The irrelevant pages originated on the original domain, Javatpoint.com, from which they migrated. When they migrated to Tpointtech, they brought along all of that low-quality, irrelevant content.

Here’s a screenshot of the original domain, demonstrating that the off-topic content originated on the old domain:

Google’s John Mueller posted:

“One of the things I noticed is that there’s a lot of totally unrelated content on the site. Is that by design? If you go to Bing and use [site:tpointtech.com watch online], [site:tpointtech.com sexy], [site:tpointtech.com top 10] , similarly probably in your Search Console, it looks really weird.”

Takeaways

Bing Is Useful For Site Searches
Google’s John Mueller showed that Bing can be useful for identifying pages that Google is not indexing, which can indicate a content problem.

SEO Insight
The fact that Bing continues to index the off-topic content may highlight a difference between Google and Bing. The domain migration might show one of the ways Google identifies the motivation behind content: whether the intent is to rank and monetize rather than to create something useful for site visitors. An argument could be made that the wildly off-topic nature of the content betrays the “made-for-search-engines” motivation that Google cautions against.

Irrelevant Content
A site generally has a main topic with branching related subtopics. But in general, the main topic and subtopics relate to each other in a way that makes sense for the user. Adding wildly off-topic, low-quality content betrays an intent to create content for traffic, something that Google explicitly prohibits.

Past Performance Doesn’t Predict Future Performance
There’s a tendency on the part of site publishers to shrug about their content quality because it seems to them that Google likes it just fine. But that doesn’t mean the content is fine; it means it hasn’t become an issue yet. Some problems are dormant. When I see this in site reviews, I generally say that it may not be a problem now but could become one later, so it’s best to be proactive about it now.

Given that the search performance issues occurred after the site migration but the irrelevant content was pre-existing, it appears the effects of the irrelevant content were muted by the standing of the original content. Nevertheless, the irrelevant content was still an issue; it just hadn’t hatched into one yet. Migrating the site to a new domain forced Google to re-evaluate the entire site, and that’s when the low-quality content became an issue.

Content Quality Versus Content Intent
It’s possible to make a case that the content, although irrelevant, was high quality and shouldn’t have made a difference. What stands out to me is that the topics appear to signal an intent to create content for ranking and monetization purposes. It’s hard to argue that the content is useful to visitors of an educational site.

Expansion Of Content Topics
Lastly, there’s the question of whether it’s a good idea to expand the range of topics a site is relevant for. A television review site can expand to include reviews of other electronics like headphones and keyboards, and the expansion is smoother if the domain name doesn’t set up the wrong expectation. That’s why domains with product types in them are so limiting: they presume the publisher will never achieve enough success to have to expand the range of topics.

Featured Image by Shutterstock/Ollyy

Google Launches Loyalty Program Structured Data Support via @sejournal, @MattGSouthern

Google now supports structured data that allows businesses to show loyalty program benefits in search results.

Businesses can use two new types of structured data. One type defines the loyalty program itself, while the other illustrates the benefits members receive for specific products.

Here’s what you need to know.

Loyalty Structured Data

When businesses use this new structured data for loyalty programs, their products can display member benefits directly in Google. This allows shoppers to view the perks before clicking on any listings.

Google recognizes four specific types of loyalty benefits that can be displayed:

  • Loyalty Points: Points earned per purchase
  • Member-Only Prices: Exclusive pricing for members
  • Special Returns: Perks like free returns
  • Special Shipping: Benefits like free or expedited shipping

This is a new way to make products more visible. It may also result in more clicks from search results.

The announcement states:

“… member benefits, such as lower prices and earning loyalty points, are a major factor considered by shoppers when buying products online.”

Details & Requirements

The new feature needs two steps.

  1. First, add loyalty program info to your ‘Organization’ structured data.
  2. Then, add loyalty benefits to your ‘Product’ structured data.
  3. Bonus step: Check if your markup works using the Rich Results Test tool.

With valid markup in place, Google will be aware of your loyalty program and the perks associated with each product.
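To make those two steps concrete, here is a rough sketch of what the markup might look like, using the schema.org loyalty vocabulary (hasMemberProgram, MemberProgram, MemberProgramTier, validForMemberTier). The store, program, tier, and prices are hypothetical, and Google’s documentation remains the authoritative reference for the required properties:

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "Example Store",
      "hasMemberProgram": {
        "@type": "MemberProgram",
        "name": "Example Rewards",
        "description": "Member prices for Silver members.",
        "hasTiers": {
          "@type": "MemberProgramTier",
          "@id": "https://www.example.com/rewards#silver",
          "name": "Silver",
          "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyPrice"
        }
      }
    },
    {
      "@type": "Product",
      "name": "Example Widget",
      "offers": {
        "@type": "Offer",
        "price": 10.00,
        "priceCurrency": "USD",
        "priceSpecification": {
          "@type": "UnitPriceSpecification",
          "price": 8.50,
          "priceCurrency": "USD",
          "validForMemberTier": { "@id": "https://www.example.com/rewards#silver" }
        }
      }
    }
  ]
}

The @id on the tier is what links the member price on the product back to the program defined at the organization level.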

Important implementation note: Google recommends placing all loyalty program information on a single dedicated page rather than spreading it across multiple pages. This helps ensure proper crawling and indexing.

Multi-Tier Programs Now Supported

Businesses can define multiple membership tiers within a single loyalty program—think bronze, silver, and gold levels. Each tier can have different requirements for joining, such as:

  • Credit card signup requirements
  • Minimum spending thresholds (e.g., $250 annual spend)
  • Periodic membership fees

This flexibility allows businesses to create sophisticated loyalty structures that match their existing programs.
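Building on the sketch above, an additional tier with a spend requirement might look roughly like this (again, the names and amounts are hypothetical; hasTierRequirement and MonetaryAmount are the schema.org terms, but check Google’s documentation for exactly how it wants requirements expressed):

{
  "@type": "MemberProgramTier",
  "@id": "https://www.example.com/rewards#gold",
  "name": "Gold",
  "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyShipping",
  "hasTierRequirement": {
    "@type": "MonetaryAmount",
    "currency": "USD",
    "value": 250
  }
}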

Merchant Center Takes Priority

Google Shopping software engineers Irina Tuduce and Pascal Fleury say this feature is:

“… especially important if you don’t have a Merchant Center account and want the ability to provide a loyalty program for your business.”

It’s worth reiterating: If your business already uses Google Merchant Center, keep using that for loyalty programs.

In fact, if you implement both structured data markup and Merchant Center loyalty programs, Google will prioritize the Merchant Center settings. This override ensures there’s no confusion about which data source takes precedence.

Looking Ahead

The update seems aimed at helping smaller businesses compete with larger retailers, which often have complex Merchant Center setups.

Now, smaller sites can share similar information using structured data, including sophisticated multi-tier programs that were previously difficult to implement without Merchant Center.

Small and medium e-commerce sites without Merchant Center accounts should strongly consider adopting this markup.

For more details, see Google’s new help page.

Google Responds To Site That Lost Ranks After Googlebot DDoS Crawl via @sejournal, @martinibuster

Google’s John Mueller answered a question about a site that received millions of Googlebot requests for pages that don’t exist, with one non-existent URL receiving over two million hits, essentially DDoS-level page requests. The publisher’s concerns about crawl budget and rankings seemingly were realized, as the site subsequently experienced a drop in search visibility.

NoIndex Pages Removed And Converted To 410

The 410 Gone server response code belongs to the family of 400-level response codes that indicate a page is not available. A 404 response means a page is not available and makes no claim about whether the URL will return in the future; it simply says the page is not available.

The 410 Gone status code means the page is gone and likely will never return. Unlike the 404 status code, the 410 signals to the browser or crawler that the missing status of the resource is intentional and that any links to the resource should be removed.
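For illustration only, here is a minimal sketch of that distinction in Flask (a hypothetical stack chosen for brevity; the article doesn’t say what the affected site runs on), returning 410 for deliberately removed sections and a plain 404 for everything else:

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical prefixes for sections that were removed on purpose.
REMOVED_PREFIXES = ("/software/virtual-dj/",)

@app.route("/<path:path>")
def serve(path):
    full_path = "/" + path
    if full_path.startswith(REMOVED_PREFIXES):
        # 410 Gone: the removal is intentional and the URL is not
        # expected to return; a 404 would make no claim either way.
        abort(410)
    # URLs that were never published simply return 404.
    abort(404)

if __name__ == "__main__":
    app.run()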

The person asking the question was following up on a question they posted three weeks earlier on Reddit, where they noted that they had removed about 11 million URLs that should never have been discoverable and had begun serving a 410 response code for them. After a month and a half, Googlebot continued to return looking for the missing pages. They shared their concern about crawl budget and the subsequent impact on their rankings.

Mueller at the time forwarded them to a Google support page.

Rankings Loss As Google Continues To Hit Site At DDoS Levels

Three weeks later things had not improved, and they posted a follow-up question noting they’ve received over five million requests for pages that don’t exist. They posted an actual URL in their question, but I anonymized it; otherwise it’s verbatim.

The person asked:

“Googlebot continues to aggressively crawl a single URL (with query strings), even though it’s been returning a 410 (Gone) status for about two months now.

In just the past 30 days, we’ve seen approximately 5.4 million requests from Googlebot. Of those, around 2.4 million were directed at this one URL:
https://example.net/software/virtual-dj/ with the ?feature query string.

We’ve also seen a significant drop in our visibility on Google during this period, and I can’t help but wonder if there’s a connection — something just feels off. The affected page is:
https://example.net/software/virtual-dj/?feature=…

The reason Google discovered all these URLs in the first place is that we unintentionally exposed them in a JSON payload generated by Next.js — they weren’t actual links on the site.

We have changed how our “multiple features” works (using ?mf querystring and that querystring is in robots.txt)

Would it be problematic to add something like this to our robots.txt?

Disallow: /software/virtual-dj/?feature=*

Main goal: to stop this excessive crawling from flooding our logs and potentially triggering unintended side effects.”

Google’s John Mueller confirmed that it’s Google’s normal behavior to keep returning to check whether a missing page has come back. This is Google’s default behavior, based on the experience that publishers can make mistakes, so Googlebot periodically returns to verify whether the page has been restored. This is meant to be a helpful feature for publishers who might unintentionally remove a web page.

Mueller responded:

“Google attempts to recrawl pages that once existed for a really long time, and if you have a lot of them, you’ll probably see more of them. This isn’t a problem – it’s fine to have pages be gone, even if it’s tons of them. That said, disallowing crawling with robots.txt is also fine, if the requests annoy you.”

Caution: Technical SEO Ahead

This next part is where the SEO gets technical. Mueller cautions that the proposed solution of adding a robots.txt disallow rule could inadvertently break rendering for pages that aren’t supposed to be missing.

He’s basically advising the person asking the question to:

  • Double-check that the ?feature= URLs are not being used at all in any frontend code or JSON payloads that power important pages.
  • Use Chrome DevTools to simulate what happens if those URLs are blocked — to catch breakage early.
  • Monitor Search Console for Soft 404s to spot any unintended impact on pages that should be indexed.

John Mueller continued:

“The main thing I’d watch out for is that these are really all returning 404/410, and not that some of them are used by something like JavaScript on pages that you want to have indexed (since you mentioned JSON payload).

It’s really hard to recognize when you’re disallowing crawling of an embedded resource (be it directly embedded in the page, or loaded on demand) – sometimes the page that references it stops rendering and can’t be indexed at all.

If you have JavaScript client-side-rendered pages, I’d try to find out where the URLs used to be referenced (if you can) and block the URLs in Chrome dev tools to see what happens when you load the page.

If you can’t figure out where they were, I’d disallow a part of them, and monitor the Soft-404 errors in Search Console to see if anything visibly happens there.

If you’re not using JavaScript client-side-rendering, you can probably ignore this paragraph :-).”

The Difference Between The Obvious Reason And The Actual Cause

Google’s John Mueller is right to suggest a deeper diagnostic to rule out errors on the part of the publisher. A publisher error started the chain of events that led to the indexing of pages against the publisher’s wishes. So it’s reasonable to ask the publisher to check if there may be a more plausible reason to account for a loss of search visibility. This is a classic situation where an obvious reason is not necessarily the correct reason. There’s a difference between being an obvious reason and being the actual cause. So Mueller’s suggestion to not give up on finding the cause is good advice.

Read the original discussion here.

Featured Image by Shutterstock/PlutusART

WordPress Co-Founder Mullenweg’s Reaction To FAIR Project via @sejournal, @martinibuster

The Linux Foundation recently announced the FAIR Package Manager project, an open-source, distributed WordPress plugin and theme repository that decentralizes control of the repository. A distributed theme and plugin repository became a priority for many in the WordPress community after Matt Mullenweg took control of certain paid premium plugins and created free versions from them, in addition to removing access to the free versions of the original plugins.

The Linux announcement, made on Friday, June 6, came during the middle of WordCamp Europe, all but assuring that it would be a topic of discussion at the three-day conference.

According to the Linux Foundation announcement:

“…The FAIR Package Manager project paves the way for the stability and growth of open source content management, giving contributors and businesses additional options governed by a neutral community…”

It was inevitable that Matt Mullenweg would be asked about it, and that’s what happened, twice. Mullenweg was gracious about answering the questions, but he was also understandably cautious, given that less than 24 hours had passed since the FAIR project was announced.

Initial Reaction To Project FAIR

The first question came early in the question-and-answer period, when Mullenweg was asked how he sees such initiatives coexisting with WordPress and what he sees as the ideal outcome.

Mullenweg expressed cautious optimism, praising the open source nature of WordPress by saying that’s the point of open source: it can coexist with everything. But he was reluctant to say much more. He did seem a little annoyed that the FAIR project was created “in secret.” I don’t know the extent to which the FAIR project was created in secret, but it did seem as if the Linux Foundation essentially ambushed WordPress and WordCamp Europe with its announcement.

Mullenweg answered:

“…I think that’s part of the beauty that something like this can be written with the APIs that WordPress has. I don’t know if I want to comment too much further on it just because kind of just found out about it last night, there hasn’t been that much time. There’s a lot of code and uh and complexities.

You know, I do wish if the team did want to collaborate or the team says we want to be transparent and everything. But it did sort of drop as a surprise. It was worked on in secret for six months. But we can work past that and look at it. “

Do Users Want A Federated Repository?

Mullenweg next turned the question away from what he might think about it and asked whether this is something WordPress users would want. He also conveyed the immensity of undertaking a decentralized system for the repository.

He continued his answer:

“I do think things we need to keep in mind are, you know, what are users asking for?

What are the challenges they’re facing around finding the right things, knowing it’s secure, getting updates? You know the stats around how many sites that are hacked are from out of date plugins. Those are things that are top of my mind for the plugin directory and so the trust and safety elements of that for the.org directory.

…So we’re now up to 72,000 plugins and themes. This is about 3.2 terabytes, like zip files. That’s not counting all the SVN history and everything like that. So there’s a there’s a lot of data there, which also we need to make sure, like if 500 mirrors are set up and they’re all sucking down the directory like, that could DDOS us.”

About twenty minutes later, someone else stepped up and asked the question again, sharing her long history with WordPress and her opinion of why the FAIR project may be useful.

She said:

“I’ve been contributing to the communication team for 14 years and contributing to plug in review team for a couple of years and my whole work in documentation was serving the user every decision we made we made was to serve user. And in plugin review team we also include plugin authors So everything we do we do for plugin authors and users to make their lives easier and better.”

Next she offered an explanation of why she thinks the FAIR project is good for plugin authors and users:

“So the Fair project is actually federated and independent repository of trusted plugins and teams. And it is under the Linux Foundation. So that means a lot when it’s under the Linux foundation.

And what it means for users and plugin authors and team authors is actually making their lives easier and better, more secure. It makes all the products more discoverable and also developers can choose their source. Where are they using their supply chain from.

But also, it is helping WordPress.org because these are mirrors so it will reduce the load from WordPress.org for every update and all of that.

…I don’t know if you trust me, but it seemed to me that this aligns with the idea of having users and developers first in mind. Would you as wordpress.org consider collaborating with this project?”

Mullenweg’s answer was cautious in tone, giving the impression that he didn’t know much about the FAIR project aside from the public announcement made by the Linux Foundation.

He answered:

“Of course we consider everything, but even in what you said, I think there’s a lot of challenges to it. So for example, right now, a supply chain attack needs to breach wordpress.org which has never been hacked.”

At this point loud laughter rang out in the hall, catching Mullenweg by surprise.

He then continued, offering an idea of the complexity of a federated theme and plugin repository:

“The… now all of a sudden there is N places that could potentially be compromised that you know there’s ways to do that, many ways. There’s N places with uptime issues.

And… it makes it much more difficult for, I don’t know if it’s actually better for WordPress.org, because it makes it much more difficult to do things like rollouts, phased rollouts, or let’s say we get plugin authors the ability to ship to 5% of users and then see what happens, which means we also need things being checked back and then we can roll out to the rest, which is something that I’ve heard a ton of plugin authors ask for.

It will break all the analytics and stats that we provide and also that we internally …use to make decisions, for example which versions of PHP we support…

So I think that it’s uh a big part of why WordPress is where it is today is because of the infrastructure and the sort of feedback loop that we get from wordpress.org.

Also, the trust that we’re able to engender by having that be a resource. When you look at marketplaces, people aren’t asking necessarily for I want it to be downloaded from more locations.

  • They’re asking for how do I know this is trustworthy?
  • How do I know these reviews are real?
  • Who’s moderating?
  • Who’s checking the IP’s on these different reviews?
  • What’s the plug in rating?
  • What’s the compatibility for it?
  • How does it, compatible with my other plugins?

These are things I’m hearing from users, not I need it hosted in a different place. This is one example.

And again, I don’t want to get too far into it because I want to read the code. I want to dive more into it. I want colleagues to look at it. So, I think it’s kind of premature, less than 24 hours in to say like we’re going to …this or not.”

At this point Mullenweg praised the fact that people were being constructive rather than arguing.

He continued:

“But I do think it’s awesome that people are shipping code versus just arguing or talking or writing blog posts. I think that’s a pretty productive way to sort of channel possible disagreements or anything, and then we can see how it looks. Might be a super niche thing that a few people use, maybe one or two hosts or it might be something that maybe there’s something in there that becomes …popular.”

Then he returned to listing things that still need to be looked into, trying to give an idea of how complex creating a decentralized repository is.

Mullenweg continued:

“Like something that we probably need to do in the plug and review is something about these admin banners right, now how is that enforced in a distributed FAIR system?”

Mullenweg then asked the person how she would solve all of those problems. She answered that she’s not the smartest person in the room, but that this is something to collaborate on, and then tossed off a joking remark that maybe they could ask ChatGPT, which drew laughter and applause, breaking the tension and ending the exchange on a light note.

Watch the question and answer session at about the 8-hour mark of the video:

Google’s Update To Recipe Structured Data Confirms A Ranking Criteria via @sejournal, @martinibuster

Google updated the Recipe Schema.org structured data documentation to reflect more precise guidance on what the image structured data property affects and where to find additional information about ranking recipe images in the regular organic search results.

Schema.org Structured Data And Rich Results

The SEO and publisher community refers to the text results as the organic search results or the ten blue links. Google refers to them as the text results.

Structured data helps a site’s content become eligible to rank in Google’s rich results but it generally doesn’t help content rank better in the text results.

That’s the concept underlying Google’s update to the Recipe structured data guidance with the addition of two sentences:

“Specifying the image property in Recipe markup has no impact on the image chosen for a text result image. To optimize for a text result image, follow the image SEO best practices.”

Recipe structured data influences the images shown in the Recipe Rich Results. The structured data does not influence the image rankings in the regular text results (aka the organic search results).
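For illustration, a minimal Recipe snippet that includes the image property might look like this (the recipe, URLs, and image files are hypothetical). The images declared here feed the Recipe rich result, not the text-result thumbnail:

{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Banana Bread",
  "image": [
    "https://www.example.com/photos/banana-bread-1x1.jpg",
    "https://www.example.com/photos/banana-bread-16x9.jpg"
  ],
  "author": {
    "@type": "Person",
    "name": "Example Author"
  },
  "recipeIngredient": [
    "3 ripe bananas",
    "2 cups flour"
  ],
  "recipeInstructions": [
    {
      "@type": "HowToStep",
      "text": "Mash the bananas and mix with the dry ingredients."
    }
  ]
}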

Ranking Images In Text Results

Google offers documentation on image best practices, which recommends standard HTML such as the <img> and <picture> elements. Google also recommends using an image sitemap, a sitemap specifically for images.
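For reference, an image sitemap entry uses Google’s image sitemap extension and looks roughly like this (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.example.com/recipes/banana-bread</loc>
    <image:image>
      <image:loc>https://www.example.com/photos/banana-bread.jpg</image:loc>
    </image:image>
  </url>
</urlset>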

Something to pay particular attention to: don’t use blurry images. Always use sharp images to give your images the best chance of showing up in the search results.

I know that some images may contain slight purposeful blurring for optimization purposes (blurring decreases image size) and to enhance the perspective of foreground and background. But Google recommends using sharp images and avoiding blur. Google doesn’t say it’s an image ranking factor, but it does make that recommendation.

Here’s what Google’s image optimization guidance recommends:

“High-quality photos appeal to users more than blurry, unclear images. Also, sharp images are more appealing to users in the result thumbnail and can increase the likelihood of getting traffic from users.”

In my opinion, it’s best to avoid excessive use of blurring. I only have my own anecdotal experience with purposely blurred images not showing up in the search results. So, to me, it’s interesting to see my experience confirmed that Google treats blurred images as a negative quality and sharp images as a positive quality.

Read Google’s updated Recipe structured data documentation about images here:

https://developers.google.com/search/docs/appearance/structured-data/recipe#image

Read more about images in Google’s text results here.

Read about blurry and sharp images here:

https://developers.google.com/search/docs/appearance/google-images#good-quality-photos%20optimize-for-speed

Featured Image by Shutterstock/Dean Drobot

Newly Released Data Shows Desktop AI Search Referrals Dominate via @sejournal, @martinibuster

BrightEdge Enterprise SEO platform released new data showing distinctive patterns across major AI search and chatbot platforms and also called attention to potential disruption from Apple if it breaks with Google as the default search engine in Safari.

Desktop AI Traffic Dominance

One of the key findings in the BrightEdge data is that traffic to websites from AI chatbots and search engines is highest from desktop users. The exception is Google Search, which is reported to send more traffic from mobile devices than from desktop.

The report notes that 94% of the traffic from ChatGPT originates from desktop apps with just 6% of referrals coming from mobile apps. BrightEdge speculates that the reason why there’s less mobile traffic is because ChatGPT’s mobile app shows an in-app preview, requiring a user to execute a second click to navigate to an external site. This creates a referral bottleneck that doesn’t exist on the desktop.

But that doesn’t explain why Perplexity, Bing, and Google Gemini also show similar levels of desktop traffic dominance. Could it be a contextual difference, where desktop users turn to AI for business while mobile use is more casual? The fact that Google Search sends more mobile referral traffic than desktop could suggest a contextual reason for the disparity in mobile traffic from AI search and chatbots.

BrightEdge shared their insights:

“While Google maintains an overwhelming market share in overall search (89%) and an even stronger position on mobile (93%), its dominance is particularly crucial in mobile web search. BrightEdge data indicates that Apple phones alone account for 57% of Google’s mobile traffic to US and European brand websites. But with Safari being the default for around a billion users, any change to that default could reallocate countless search queries overnight.

Apple’s vendor-agnostic Apple Intelligence also suggests opportunities for seismic shifts in web search. While generative AI tools have surged in popularity through apps on IOS, mobile web search—where the majority of search still occurs—remains largely controlled by Google via Safari defaults. This makes Apple’s control of Safari the most valuable real estate in the mobile search landscape.”

Here are the traffic referral statistics provided by BrightEdge:

  • Google Search: Only major AI search with mobile majority traffic referrals (53% mobile vs 44% desktop)
  • ChatGPT: 94% desktop, just 6% mobile referrals
  • Perplexity: 96.5% desktop, 3.4% mobile
  • Bing: 94% desktop, 4% mobile
  • Google Gemini: 91% desktop, 5% mobile

Apple May Play The Kingmaker?

With Apple’s Worldwide Developers Conference (WWDC) nearing, one of the changes many will be alert to is any announcement related to the company’s Safari browser, which controls the default search settings on nearly a billion devices. A change in Safari’s default search provider could initiate dramatic changes to who the new winners and losers are in web search.

Perplexity asserts that the outcome of changes to Safari browser defaults may impact search marketing calculations for the following reasons:

“58% of Google’s mobile traffic to brand websites comes from iPhones

Safari remains the default browser for nearly a billion users

Apple has not yet embedded AI-powered search into its mobile web stack”

Takeaways

  • Desktop Users Of AI Search Account For The Majority Of Referral Traffic
    Most AI-generated search traffic from ChatGPT, Perplexity, Bing, and Gemini comes from desktop usage, not mobile.
  • Google Search Is The Traffic Referral Outlier
    Unlike other AI search tools, Google Search still delivers a majority of its traffic via mobile devices.
  • In-App Previews May Limit ChatGPT Mobile AI Referrals
    ChatGPT’s mobile app requires an extra click to visit external sites, possibly explaining low mobile referral numbers.
  • Apple’s Position Is Pivotal To Search Marketing
    Apple devices account for over half of Google’s mobile traffic to brand websites, giving Apple an outsized impact on mobile search traffic.
  • Safari Default And Greater Market Share
    With Safari set as the default browser for nearly a billion users, Apple effectively controls the gate to mobile web search.
  • Perplexity Stands To Gain Market Share
    If Apple switches Safari’s default search to Perplexity, the resulting shift in traffic could remake the competitive balance in search marketing.
  • Search Marketers Should Watch WWDC
    Any change announced at Apple’s WWDC regarding Safari’s search engine could have large-scale impact on search marketing.

BrightEdge data shows that desktop usage is the dominant source of traffic referrals from AI-powered search tools like ChatGPT, Perplexity, Bing, and Gemini, with Google Search as the only major platform that sends more traffic via mobile.

This pattern could suggest a behavioral split between desktop users, who may be performing work-related or research-heavy tasks, and mobile users, who may be browsing more casually. BrightEdge also points to a bottleneck built into the ChatGPT app that creates a one-click barrier to mobile traffic referrals.

BrightEdge’s data further cites Apple’s control over Safari, which is installed on nearly a billion devices, as a potential disruptor due to a possible change in the default search engine away from Google. Such a shift could significantly alter mobile search traffic patterns.

Read more at BrightEdge: The Open Frontier of Mobile AI Search

Featured Image by Shutterstock/Tada Images