YouTube Begins Showing Posts In The Shorts Feed via @sejournal, @MattGSouthern

YouTube has announced that Posts will now appear in the Shorts feed. This change allows users to see and interact with Posts while watching short videos.

How the Feature Works

You can now see Posts while scrolling through Shorts on YouTube.

In the screenshot below, you can see that the layout keeps the same vertical aspect ratio. While you scroll, the video shrinks to roughly half the screen, with the Post filling the other half.

Screenshot from: YouTube.com/CreatorInsider, June 2025.

You can like and comment on these Posts without stopping your video.

See more about it in YouTube’s video announcement:

Background on YouTube Posts

YouTube has offered Posts as part of its creator toolkit for several years. Posts let channel owners share polls, quizzes, GIFs, text updates, images, and videos.

Posts are found in a special tab on the creator’s channel. They can also show up on subscribers’ homepages or in their subscription feeds.

Potential New Exposure

This update gives creators a new way to reach their YouTube audience with posts.

Further, this gives creators a way to reach Shorts viewers without creating vertical videos.

If you only publish traditional long-form content on your channel, you can potentially get into the Shorts feed by publishing text or images.

Looking Ahead

With this update, YouTube is experimenting with combining other content types with its most popular features.

It’s possible that YouTube is making this change because it’s competing with Instagram and TikTok, which mix videos with different types of content. Combining content formats has the potential to boost user engagement and keep people on YouTube longer.

For creators, this provides an additional distribution channel with minimal cost of entry. Writing a text post may now get you into the same feed as a fully produced Short.

YouTube hasn’t announced specifics for how Posts will be selected or how often they’ll appear. Creators will have to do their own testing to see how this impacts visibility.


Featured Image: Roman Samborskyi/Shutterstock

Google Shows Why Rankings Collapsed After Domain Migration via @sejournal, @martinibuster

Google’s John Mueller used a clever technique to show the publisher of an educational site how to diagnose their search performance issues, which were apparently triggered by a domain migration but were actually caused by the content.

Site With Ranking Issues

Someone posted a plea on the Bluesky social network for help recovering their site from a migration gone wrong. The person associated with the website attributed the de-indexing directly to the site migration because the two events were directly correlated.

SEO Insight
An interesting point to highlight is that the site migration preceded the de-indexing by Google, but it isn’t the cause. The migration set in motion a chain of events that led to the real cause, which, as you’ll see later on, is low-quality content. A common error that SEOs and publishers make is to stop investigating upon discovering the most obvious reason for why something is happening. But the most obvious reason is not always the actual reason, as you’ll see further on.

This is what was posted on social media:

“Hello SEO Community,

Sudden Deindexing & Traffic Drop after Domain Migration (from javatpoint.com to tpointtech.com) – Need Help”

Google’s John Mueller answered their plea and suggested they do a site search on Bing with their new domain, like this:

site:tpointtech.com sexy

When you do that, Bing shows “top ten list” articles about various Indian celebrities.

Google’s John Mueller also suggested doing a site search for “watch online” and “top ten list” which revealed that the site is host to scores of low quality web pages that are irrelevant to their topic.

A screenshot of one of the pages shows how abundant the off-topic web pages are on that website:

Where Did Irrelevant Pages Come From?

The irrelevant pages originated on the original domain, Javatpoint.com. When the publisher migrated to Tpointtech, they brought along all of that low-quality, irrelevant content.

Here’s a screenshot of the original domain, demonstrating that the off-topic content originated on the old domain:

Google’s John Mueller posted:

“One of the things I noticed is that there’s a lot of totally unrelated content on the site. Is that by design? If you go to Bing and use [site:tpointtech.com watch online], [site:tpointtech.com sexy], [site:tpointtech.com top 10] , similarly probably in your Search Console, it looks really weird.”

Takeaways

Bing Is Useful For Site Searches
Google’s John Mueller showed that Bing can be useful for identifying pages that Google is not indexing, which can indicate a content problem.

SEO Insight
The fact that Bing continues to index the off-topic content may highlight a difference between Google and Bing. The domain migration may reveal one of the ways Google identifies the motivation behind content: whether the intent is to rank and monetize rather than to create something useful for site visitors. An argument could be made that the wildly off-topic nature of the content betrays the “made-for-search-engines” motivation that Google cautions against.

Irrelevant Content
A site generally has a main topic with branching related subtopics, and in general the main topic and subtopics relate to each other in a way that makes sense for the user. Adding wildly off-topic, low-quality content betrays an intent to create content for traffic, something that Google explicitly prohibits.

Past Performance Doesn’t Predict Future Performance
There’s a tendency on the part of site publishers to shrug off content quality because it seems to them that Google likes it just fine. But that doesn’t mean the content is fine; it means it hasn’t become an issue yet. Some problems are dormant. When I see this in site reviews, I generally say that it may not be a problem now, but it could become one later, so it’s best to be proactive about it now.

Given that the search performance issues occurred after the site migration but the irrelevant content was pre-existing, it appears that the effects of the irrelevant content were muted by the standing the original site had. Nevertheless, the irrelevant content was still an issue; it just hadn’t hatched into one yet. Migrating the site to a new domain forced Google to re-evaluate the entire site, and that’s when the low-quality content became an issue.

Content Quality Versus Content Intent
It’s possible for someone to make a case that the content, although irrelevant, was high quality and shouldn’t have made a difference. What stands out to me is that the topics appear to signal an intent to create content for ranking and monetization purposes. It’s hard to argue that the content is useful to visitors of an educational site.

Expansion Of Content Topics
Lastly, there’s the issue of whether it’s a good idea to expand the range of topics that a site is relevant for. A television review site can expand to include reviews of other electronics like headphones and keyboards, and the expansion is smoother if the domain name doesn’t set up the wrong expectation. That’s why domains with product types in them are so limiting: they presume the publisher will never achieve so much success that they’ll have to expand their range of topics.

Featured Image by Shutterstock/Ollyy

Google Launches Loyalty Program Structured Data Support via @sejournal, @MattGSouthern

Google now supports structured data that allows businesses to show loyalty program benefits in search results.

Businesses can use two new types of structured data. One type defines the loyalty program itself, while the other specifies the benefits members receive for specific products.

Here’s what you need to know.

Loyalty Structured Data

When businesses use this new structured data for loyalty programs, their products can display member benefits directly in Google. This allows shoppers to view the perks before clicking on any listings.

Google recognizes four specific types of loyalty benefits that can be displayed:

  • Loyalty Points: Points earned per purchase
  • Member-Only Prices: Exclusive pricing for members
  • Special Returns: Perks like free returns
  • Special Shipping: Benefits like free or expedited shipping

This is a new way to make products more visible. It may also result in higher clicks from search results.

The announcement states:

“… member benefits, such as lower prices and earning loyalty points, are a major factor considered by shoppers when buying products online.”

Details & Requirements

Implementing the new feature takes two steps, plus a recommended check:

  1. First, add loyalty program info to your ‘Organization’ structured data.
  2. Then, add loyalty benefits to your ‘Product’ structured data.
  3. Bonus step: Check if your markup works using the Rich Results Test tool.

With valid markup in place, Google will be aware of your loyalty program and the perks associated with each product.
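As a rough sketch, the two pieces of markup could look like the following. This is based on schema.org’s MemberProgram vocabulary; the business name, URLs, tier, and benefit type here are hypothetical, so consult Google’s documentation for the authoritative list of required properties.

```html
<!-- Step 1: define the loyalty program in your Organization markup -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "url": "https://www.example.com",
  "hasMemberProgram": {
    "@type": "MemberProgram",
    "name": "Example Rewards",
    "url": "https://www.example.com/rewards",
    "hasTiers": {
      "@type": "MemberProgramTier",
      "@id": "https://www.example.com/rewards#gold",
      "name": "Gold",
      "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyPrice"
    }
  }
}
</script>

<!-- Step 2: reference the tier from the Product's Offer -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": 10.00,
    "priceCurrency": "USD",
    "priceSpecification": {
      "@type": "UnitPriceSpecification",
      "price": 8.00,
      "priceCurrency": "USD",
      "validForMemberTier": {"@id": "https://www.example.com/rewards#gold"}
    }
  }
}
</script>
```

The `@id` on the tier is what ties the member price on the product back to the program defined at the organization level.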

Important implementation note: Google recommends placing all loyalty program information on a single dedicated page rather than spreading it across multiple pages. This helps ensure proper crawling and indexing.

Multi-Tier Programs Now Supported

Businesses can define multiple membership tiers within a single loyalty program—think bronze, silver, and gold levels. Each tier can have different requirements for joining, such as:

  • Credit card signup requirements
  • Minimum spending thresholds (e.g., $250 annual spend)
  • Periodic membership fees

This flexibility allows businesses to create sophisticated loyalty structures that match their existing programs.

Merchant Center Takes Priority

Google Shopping software engineers Irina Tuduce and Pascal Fleury say this feature is:

“… especially important if you don’t have a Merchant Center account and want the ability to provide a loyalty program for your business.”

It’s worth reiterating: If your business already uses Google Merchant Center, keep using that for loyalty programs.

In fact, if you implement both structured data markup and Merchant Center loyalty programs, Google will prioritize the Merchant Center settings. This override ensures there’s no confusion about which data source takes precedence.

Looking Ahead

The update seems aimed at helping smaller businesses compete with larger retailers, which often have complex Merchant Center setups.

Now, smaller sites can share similar information using structured data, including sophisticated multi-tier programs that were previously difficult to implement without Merchant Center.

Small and medium e-commerce sites without Merchant Center accounts should strongly consider adopting this markup.

For more details, see Google’s new help page.

Google Responds To Site That Lost Ranks After Googlebot DDoS Crawl via @sejournal, @martinibuster

Google’s John Mueller answered a question about a site that received millions of Googlebot requests for pages that don’t exist, with one non-existent URL receiving over two million hits, essentially DDoS-level page requests. The publisher’s concerns about crawl budget and rankings seemingly were realized, as the site subsequently experienced a drop in search visibility.

NoIndex Pages Removed And Converted To 410

The 410 Gone server response code belongs to the family of 400-level response codes that indicate a page is not available. The 404 response means that a page is not available but makes no claim about whether the URL will return in the future; it simply says the page is not available right now.

The 410 Gone status code means that the page is gone and likely will never return. Unlike the 404, the 410 signals to the browser or crawler that the missing status of the resource is intentional and that any links to the resource should be removed.
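The distinction can be sketched as routing logic on the server: intentionally removed paths get a 410, everything else that is missing gets a 404. This is a minimal illustration, and the path prefixes are hypothetical.

```python
# Hypothetical prefixes for sections that were deliberately removed.
REMOVED_PREFIXES = ("/watch-online/", "/top-ten-lists/")

def status_for(path: str, page_exists: bool) -> int:
    """Return the HTTP status code to serve for a requested path."""
    if page_exists:
        return 200
    if path.startswith(REMOVED_PREFIXES):
        return 410  # Gone: removal is intentional and permanent
    return 404      # Not Found: no claim about the future

print(status_for("/watch-online/some-movie", False))  # 410
```

In practice the same mapping would live in the web server or CMS routing layer rather than in application code like this.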

The person asking the question was following up on a question they posted three weeks ago on Reddit where they noted that they had about 11 million URLs that should not have been discoverable that they removed entirely and began serving a 410 response code. After a month and a half Googlebot continued to return looking for the missing pages. They shared their concern about crawl budget and subsequent impacts to their rankings as a result.

Mueller at the time forwarded them to a Google support page.

Rankings Loss As Google Continues To Hit Site At DDoS Levels

Three weeks later, things had not improved, and they posted a follow-up question noting they had received over five million requests for pages that don’t exist. They posted an actual URL in their question, but I anonymized it; otherwise, the question is verbatim.

The person asked:

“Googlebot continues to aggressively crawl a single URL (with query strings), even though it’s been returning a 410 (Gone) status for about two months now.

In just the past 30 days, we’ve seen approximately 5.4 million requests from Googlebot. Of those, around 2.4 million were directed at this one URL:
https://example.net/software/virtual-dj/ with the ?feature query string.

We’ve also seen a significant drop in our visibility on Google during this period, and I can’t help but wonder if there’s a connection — something just feels off. The affected page is:
https://example.net/software/virtual-dj/?feature=…

The reason Google discovered all these URLs in the first place is that we unintentionally exposed them in a JSON payload generated by Next.js — they weren’t actual links on the site.

We have changed how our “multiple features” works (using ?mf querystring and that querystring is in robots.txt)

Would it be problematic to add something like this to our robots.txt?

Disallow: /software/virtual-dj/?feature=*

Main goal: to stop this excessive crawling from flooding our logs and potentially triggering unintended side effects.”

Google’s John Mueller confirmed that it’s normal behavior for Google to keep returning to check whether a missing page has come back. This default behavior is based on the experience that publishers can make mistakes, so Googlebot periodically returns to verify whether the page has been restored. It’s meant to be a helpful feature for publishers who might unintentionally remove a web page.

Mueller responded:

“Google attempts to recrawl pages that once existed for a really long time, and if you have a lot of them, you’ll probably see more of them. This isn’t a problem – it’s fine to have pages be gone, even if it’s tons of them. That said, disallowing crawling with robots.txt is also fine, if the requests annoy you.”

Caution: Technical SEO Ahead

This next part is where the SEO gets technical. Mueller cautions that the proposed robots.txt disallow rule could inadvertently break rendering for pages that aren’t supposed to be missing.

He’s basically advising the person asking the question to:

  • Double-check that the ?feature= URLs are not being used at all in any frontend code or JSON payloads that power important pages.
  • Use Chrome DevTools to simulate what happens if those URLs are blocked — to catch breakage early.
  • Monitor Search Console for Soft 404s to spot any unintended impact on pages that should be indexed.
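One way to sanity-check a disallow rule before deploying it is Python’s standard-library urllib.robotparser. Note that robotparser does plain prefix matching and does not understand the `*` wildcard, so this sketch uses the prefix form of the rule, which Google’s own matcher also honors. The domain and paths are the anonymized ones from the question.

```python
from urllib import robotparser

# Prefix form of the proposed rule (no trailing *, since
# urllib.robotparser treats rule paths as literal prefixes).
robots_txt = """\
User-agent: *
Disallow: /software/virtual-dj/?feature=
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The flooded query-string URLs are blocked...
print(rp.can_fetch("Googlebot", "https://example.net/software/virtual-dj/?feature=x"))  # False
# ...while the page itself stays crawlable.
print(rp.can_fetch("Googlebot", "https://example.net/software/virtual-dj/"))  # True
```

This only checks the rule’s matching behavior; it doesn’t replace Mueller’s advice to verify that no indexable page depends on the blocked URLs.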

John Mueller continued:

“The main thing I’d watch out for is that these are really all returning 404/410, and not that some of them are used by something like JavaScript on pages that you want to have indexed (since you mentioned JSON payload).

It’s really hard to recognize when you’re disallowing crawling of an embedded resource (be it directly embedded in the page, or loaded on demand) – sometimes the page that references it stops rendering and can’t be indexed at all.

If you have JavaScript client-side-rendered pages, I’d try to find out where the URLs used to be referenced (if you can) and block the URLs in Chrome dev tools to see what happens when you load the page.

If you can’t figure out where they were, I’d disallow a part of them, and monitor the Soft-404 errors in Search Console to see if anything visibly happens there.

If you’re not using JavaScript client-side-rendering, you can probably ignore this paragraph :-).”

The Difference Between The Obvious Reason And The Actual Cause

Google’s John Mueller is right to suggest a deeper diagnostic to rule out errors on the part of the publisher. A publisher error started the chain of events that led to the indexing of pages against the publisher’s wishes, so it’s reasonable to ask the publisher to check whether a more plausible reason accounts for the loss of search visibility. This is a classic situation where the obvious reason is not necessarily the actual cause, and Mueller’s suggestion to keep looking for the cause is good advice.

Read the original discussion here.

Featured Image by Shutterstock/PlutusART

WordPress Co-Founder Mullenweg’s Reaction To FAIR Project via @sejournal, @martinibuster

The Linux Foundation recently announced the FAIR Package Manager project, an open-source, distributed WordPress plugin and theme repository that decentralizes control of the repository. A distributed theme and plugin repository became a priority for many in the WordPress community after Matt Mullenweg took control of certain paid premium plugins and created free versions from them, in addition to removing access to the free versions of the original plugins.

The Linux announcement, made on Friday, June 6, came during the middle of WordCamp Europe, all but assuring that it would be a topic of discussion at the three-day conference.

According to the Linux Foundation announcement:

“…The FAIR Package Manager project paves the way for the stability and growth of open source content management, giving contributors and businesses additional options governed by a neutral community…”

It was inevitable that Matt Mullenweg would be asked about it, and that’s what happened, twice. Mullenweg was gracious about answering the questions, but he was also understandably cautious, given that it had been less than 24 hours since the FAIR project was announced.

Initial Reaction To Project FAIR

The first question came early in the question and answer period: Mullenweg was asked how he sees such initiatives coexisting with WordPress and what he sees as the ideal outcome.

Mullenweg expressed cautious optimism, praising the open source nature of WordPress by saying that that’s the point of open source: it can coexist with everything. But he was reluctant to say much more. He did seem a little annoyed that the FAIR project was created “in secret.” I don’t know the extent to which the FAIR project was created in secret, but it did seem as if the Linux Foundation essentially ambushed WordPress and WordCamp Europe with their announcement.

Mullenweg answered:

“…I think that’s part of the beauty that something like this can be written with the APIs that WordPress has. I don’t know if I want to comment too much further on it just because kind of just found out about it last night, there hasn’t been that much time. There’s a lot of code and uh and complexities.

You know, I do wish if the team did want to collaborate or the team says we want to be transparent and everything. But it did sort of drop as a surprise. It was worked on in secret for six months. But we can work past that and look at it. “

Do Users Want A Federated Repository?

Mullenweg next turned the question away from what he might think about it and asked whether this is something that WordPress users would want. He also explained the immensity of undertaking a decentralized system for the repository.

He continued his answer:

“I do think things we need to keep in mind are, you know, what are users asking for?

What are the challenges they’re facing around finding the right things, knowing it’s secure, getting updates? You know the stats around how many sites that are hacked are from out of date plugins. Those are things that are top of my mind for the plugin directory and so the trust and safety elements of that for the.org directory.

…So we’re now up to 72,000 plugins and themes. This is about 3.2 terabytes, like zip files. That’s not counting all the SVN history and everything like that. So there’s a there’s a lot of data there, which also we need to make sure, like if 500 mirrors are set up and they’re all sucking down the directory like, that could DDOS us.”

About twenty minutes later, someone else stepped up and asked the question again, sharing her long history with WordPress and her opinion of why the FAIR project may be useful.

She said:

“I’ve been contributing to the communication team for 14 years and contributing to plug in review team for a couple of years and my whole work in documentation was serving the user every decision we made we made was to serve user. And in plugin review team we also include plugin authors So everything we do we do for plugin authors and users to make their lives easier and better.”

Next she offered an explanation of why she thinks the FAIR project is good for plugin authors and users:

“So the Fair project is actually federated and independent repository of trusted plugins and teams. And it is under the Linux Foundation. So that means a lot when it’s under the Linux foundation.

And what it means for users and plugin authors and team authors is actually making their lives easier and better, more secure. It makes all the products more discoverable and also developers can choose their source. Where are they using their supply chain from.

But also, it is helping WordPress.org because these are mirrors so it will reduce the load from WordPress.org for every update and all of that.

…I don’t know if you trust me, but it seemed to me that this aligns with the idea of having users and developers first in mind. Would you as wordpress.org consider collaborating with this project?”

Mullenweg’s answer was cautious in tone, giving the impression that he didn’t know much about the FAIR project aside from the public announcement made by the Linux Foundation.

He answered:

“Of course we consider everything, but even in what you said, I think there’s a lot of challenges to it. So for example, right now, a supply chain attack needs to breach wordpress.org which has never been hacked.”

At this point loud laughter rang out in the hall, catching Mullenweg by surprise.

He then continued, offering an idea of the complexity of a federated theme and plugin repository:

“The… now all of a sudden there is N places that could potentially be compromised that you know there’s ways to do that, many ways. There’s N places with uptime issues.

And… it makes it much more difficult for, I don’t know if it’s actually better for WordPress.org, because it makes it much more difficult to do things like rollouts, phased rollouts, or let’s say we get plugin authors the ability to ship to 5% of users and then see what happens, which means we also need things being checked back and then we can roll out to the rest, which is something that I’ve heard a ton of plugin authors ask for.

It will break all the analytics and stats that we provide and also that we internally …use to make decisions, for example which versions of PHP we support…

So I think that it’s uh a big part of why WordPress is where it is today is because of the infrastructure and the sort of feedback loop that we get from wordpress.org.

Also, the trust that we’re able to engender by having that be a resource. When you look at marketplaces, people aren’t asking necessarily for I want it to be downloaded from more locations.

  • They’re asking for how do I know this is trustworthy?
  • How do I know these reviews are real?
  • Who’s moderating?
  • Who’s checking the IP’s on these different reviews?
  • What’s the plug in rating?
  • What’s the compatibility for it?
  • How does it, compatible with my other plugins?

These are things I’m hearing from users, not I need it hosted in a different place. This is one example.

And again, I don’t want to get too far into it because I want to read the code. I want to dive more into it. I want colleagues to look at it. So, I think it’s kind of premature, less than 24 hours in to say like we’re going to …this or not.”

At this point Mullenweg praised the fact that people were being constructive rather than arguing.

He continued:

“But I do think it’s awesome that people are shipping code versus just arguing or talking or writing blog posts. I think that’s a pretty productive way to sort of channel possible disagreements or anything, and then we can see how it looks. Might be a super niche thing that a few people use, maybe one or two hosts or it might be something that maybe there’s something in there that becomes …popular.”

Then he returned to listing things that still need to be looked into, trying to give an idea of how complex creating a decentralized repository is.

Mullenweg continued:

“Like something that we probably need to do in the plug and review is something about these admin banners right, now how is that enforced in a distributed FAIR system?”

Mullenweg then asked the person how she would solve all of those problems. She answered that she’s not the smartest person in the room, but that this is something to collaborate on, and then tossed off a joking remark that maybe they could ask ChatGPT, which drew laughter and applause, breaking the tension and ending the question on a light note.

Watch the question and answer session at about the 8-hour mark of the video:

Google’s Update To Recipe Structured Data Confirms A Ranking Criteria via @sejournal, @martinibuster

Google updated the Recipe Schema.org structured data documentation to reflect more precise guidance on what the image structured data property affects and where to find additional information about ranking recipe images in the regular organic search results.

Schema.org Structured Data And Rich Results

The SEO and publisher community refers to the standard listings as the organic search results or the ten blue links; Google calls them the text results.

Structured data helps a site’s content become eligible to rank in Google’s rich results but it generally doesn’t help content rank better in the text results.

That’s the concept underlying Google’s update to the Recipe structured data guidance with the addition of two sentences:

“Specifying the image property in Recipe markup has no impact on the image chosen for a text result image. To optimize for a text result image, follow the image SEO best practices.”

Recipe structured data influences the images shown in the Recipe Rich Results. The structured data does not influence the image rankings in the regular text results (aka the organic search results).
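For illustration, Recipe markup with the image property might look like the following sketch; the recipe name and image URLs are hypothetical, and the multiple aspect ratios follow Google’s general recommendation for recipe images. Per the updated guidance, this property only influences the recipe rich result, not the text result thumbnail.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Banana Bread",
  "image": [
    "https://example.com/photos/1x1/banana-bread.jpg",
    "https://example.com/photos/4x3/banana-bread.jpg",
    "https://example.com/photos/16x9/banana-bread.jpg"
  ]
}
</script>
```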

Ranking Images In Text Results

Google offers documentation for image best practices, which specifies standard HTML image markup like the <img> and <picture> elements. Google also recommends using an image sitemap, a sitemap that’s specifically for images.
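An image sitemap entry might look like the following minimal sketch; the page and image URLs are hypothetical, while the image namespace is the one Google documents for image sitemaps.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/recipes/banana-bread</loc>
    <image:image>
      <image:loc>https://example.com/photos/banana-bread-16x9.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```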

Something to pay particular attention to: avoid blurry images. Always use sharp images to give them the best chance of showing up in the search results.

I know that some images may contain slight purposeful blurring, either for optimization (blurring decreases file size) or to enhance the perspective between foreground and background. But Google recommends using sharp images and avoiding blur. Google doesn’t say sharpness is an image ranking factor, but it does make the recommendation.

Here’s what Google’s image optimization guidance recommends:

“High-quality photos appeal to users more than blurry, unclear images. Also, sharp images are more appealing to users in the result thumbnail and can increase the likelihood of getting traffic from users.”

In my opinion, it’s best to avoid excessive blurring. I only have my own anecdotal experience of purposely blurred images not showing up in the search results, so it’s interesting to see that experience confirmed: Google treats blurred images as a negative quality and sharp images as a positive one.

Read Google’s updated Recipe structured data documentation about images here:

https://developers.google.com/search/docs/appearance/structured-data/recipe#image

Read more about images in Google’s text results here.

Read about blurry and sharp images here:

https://developers.google.com/search/docs/appearance/google-images#good-quality-photos

Featured Image by Shutterstock/Dean Drobot

Newly Released Data Shows Desktop AI Search Referrals Dominate via @sejournal, @martinibuster

BrightEdge Enterprise SEO platform released new data showing distinctive patterns across major AI search and chatbot platforms and also called attention to potential disruption from Apple if it breaks with Google as the default search engine in Safari.

Desktop AI Traffic Dominance

One of the key findings in the BrightEdge data is that traffic to websites from AI chatbots and search engines is highest from desktop users. The exception is Google Search, which is reported to send more traffic from mobile devices than from desktop.

The report notes that 94% of the traffic from ChatGPT originates from desktop apps, with just 6% of referrals coming from mobile apps. BrightEdge speculates that mobile traffic is lower because ChatGPT’s mobile app shows an in-app preview, requiring a user to make a second click to navigate to an external site. This creates a referral bottleneck that doesn’t exist on the desktop.

But that doesn’t explain why Perplexity, Bing, and Google Gemini also show similar levels of desktop traffic dominance. Could it be a contextual difference, where users on desktop are using AI for business while mobile use is more casual? The fact that Google Search sends more mobile referral traffic than desktop could suggest a contextual reason for the disparity in mobile traffic from AI search and chatbots.

BrightEdge shared their insights:

“While Google maintains an overwhelming market share in overall search (89%) and an even stronger position on mobile (93%), its dominance is particularly crucial in mobile web search. BrightEdge data indicates that Apple phones alone account for 57% of Google’s mobile traffic to US and European brand websites. But with Safari being the default for around a billion users, any change to that default could reallocate countless search queries overnight.

Apple’s vendor-agnostic Apple Intelligence also suggests opportunities for seismic shifts in web search. While generative AI tools have surged in popularity through apps on IOS, mobile web search—where the majority of search still occurs—remains largely controlled by Google via Safari defaults. This makes Apple’s control of Safari the most valuable real estate in the mobile search landscape.”

Here are the traffic referral statistics provided by BrightEdge:

  • Google Search: Only major AI search with mobile majority traffic referrals (53% mobile vs 44% desktop)
  • ChatGPT: 94% desktop, just 6% mobile referrals
  • Perplexity: 96.5% desktop, 3.4% mobile
  • Bing: 94% desktop, 4% mobile
  • Google Gemini: 91% desktop, 5% mobile

Apple May Play The Kingmaker?

With Apple’s Worldwide Developers Conference (WWDC) nearing, one of the changes many will be alert to is any announcement relating to the company’s Safari browser, which controls the default search settings on nearly a billion devices. A change of search provider in Safari could dramatically reshuffle the winners and losers in web search.

BrightEdge asserts that the outcome of changes to Safari browser defaults may impact search marketing calculations for the following reasons:

“58% of Google’s mobile traffic to brand websites comes from iPhones

Safari remains the default browser for nearly a billion users

Apple has not yet embedded AI-powered search into its mobile web stack”

Takeaways

  • Desktop Users Of AI Search Account For The Majority Of Referral Traffic
    Most AI search referral traffic from ChatGPT, Perplexity, Bing, and Gemini comes from desktop usage, not mobile.
  • Google Search Is The Traffic Referral Outlier
    Unlike other AI search tools, Google Search still delivers a majority of its traffic via mobile devices.
  • In-App Previews May Limit ChatGPT Mobile AI Referrals
    ChatGPT’s mobile app requires an extra click to visit external sites, possibly explaining low mobile referral numbers.
  • Apple’s Position Is Pivotal To Search Marketing
    Apple devices account for over half of Google’s mobile traffic to brand websites, giving Apple an outsized impact on mobile search traffic.
  • Safari Default And Greater Market Share
    With Safari set as the default browser for nearly a billion users, Apple effectively controls the gate to mobile web search.
  • Perplexity Stands To Gain Market Share
    If Apple switches Safari’s default search to Perplexity, the resulting shift in traffic could remake the competitive balance in search marketing.
  • Search Marketers Should Watch WWDC
    Any change announced at Apple’s WWDC regarding Safari’s search engine could have large-scale impact on search marketing.

BrightEdge data shows that desktop usage is the dominant source of traffic referrals from AI-powered search tools like ChatGPT, Perplexity, Bing, and Gemini, with Google Search as the only major platform that sends more traffic via mobile.

This pattern could suggest a behavioral split between desktop users, who may be performing work-related or research-heavy tasks, and mobile users, who may be browsing more casually. BrightEdge also points to a bottleneck built into the ChatGPT app that creates a one-click barrier to mobile traffic referrals.

BrightEdge’s data further cites Apple’s control over Safari, which is installed on nearly a billion devices, as a potential disruptor due to a possible change in the default search engine away from Google. Such a shift could significantly alter mobile search traffic patterns.

Read more at BrightEdge

The Open Frontier of Mobile AI Search

Featured Image by Shutterstock/Tada Images

WordPress Plugin Platform Offers Proactive Security Scanning via @sejournal, @martinibuster

WordPress security company Patchstack announced a new security tier called the managed Vulnerability Disclosure Program (mVDP) platform, which offers both human and advanced AI plugin reviews to help plugin developers keep their software resistant to vulnerabilities and build greater trustworthiness.

One of the biggest problems with WordPress is vulnerabilities in third-party plugins. An enormous number of plugins are found to contain vulnerabilities every day, and it doesn’t matter whether the developer is a one-person shop or a large multinational organization: vulnerabilities happen, and when they do, user trust goes down, especially if it happens on an ongoing basis.

Patchstack offers software developers a way to build trust with their users through two tiers of protection, one free and one paid, that help plugin developers focus on creating high-quality plugins that are free from vulnerabilities.

With more and more software being generated by AI, we’re seeing a significant increase in new vulnerabilities and an equal increase in AI-generated security reports, which makes managing the security of plugins more important than ever.

Patchstack offers a standard managed VDP and a new Security Suite that costs $70/month.

According to the announcement, the new paid tier comes with the following benefits:

“$40 worth of AI tokens for code security reviews per month

Team management feature with 5 seats included

Discussion board for direct communication with the reporting researchers

AI code review and human research
The new Security Suite tier combines the best of both worlds. Your plugins will receive boosted visibility (100% AXP bonus) in the Patchstack Alliance ethical hackers community, which encourages security researchers to report significantly more bugs and help plugins fix more vulnerabilities faster.

Additionally, our AI code review tool can scan through your entire codebase to find WordPress-specific security issues and highlight potential improvements. We are currently launching this in beta, but we’ll have many more releases to share in the coming months.”

Security Suite customers will also receive security recommendations from Patchstack’s internal security experts, helping developers be proactive about building WordPress plugins that are safe to use.

Read more at Patchstack:

NEW: Patchstack AI code review tool and Security Suite for plugin vendors

Featured Image by Shutterstock/STILLFX

Respected SEO Rockstar Deconstructs SEO For Google’s AI Search via @sejournal, @martinibuster

One of the SEO industry’s rockstars recently shared his opinion about SEO for generative AI, calling attention to facts about Google and how the new AI search really works.

Greg Boser is a search marketing pioneer with a deep level of experience that few in the industry can match or even begin to imagine.

Digital Marketers And The History Of SEO

His post was in response to a tweet that, in his opinion, overstated the claim that SEO is losing relevance. Greg began his SEO rant by pointing out that some search marketers’ conception of SEO is outdated, but they’re so new to SEO that they don’t realize it.

For example, the practice of buying links is one of the oldest tactics in SEO, so old that newcomers to SEO gave it a new name, PBN (private blog network), as if giving link buying a new name changes it somehow. And by the way, I’ve never seen a PBN that was private. The moment you put anything out on the web, Google knows about it. If an automated spambot can find it within five minutes, Google probably already knows about it, too.

Greg wrote:

“If anyone out there wants to write their own “Everything you think you know is wrong. GEO is the way” article, just follow these simple steps:

1. Frame “SEO” as everything that was a thing between 2000 – 2006. Make sure to mention buying backlinks and stuffing keywords. And try and convince people the only KPI was rankings.”

Google’s Organic Links

The second part of his post calls attention to the fact that Google stopped being a ten-organic-links search engine a long time ago. Google providing answers isn’t new.

He posted:

“2. Frame the current state of things as if it all happened in the last 2 weeks. Do not under any circumstances mention any of the following things from the past 15 years:

2009 – Rich Snippets
2011 – Knowledge Graph (things not strings)
2013 – Hummingbird (Semantic understanding of conversational queries)
2014 – Featured Snippets – (direct answers at position “Zero”)
2015 – PAA Boxes (related questions anticipating follow-up questions)
2015 – RankBrain (machine learning to interpret ambiguous queries)
2019 – BERT (NLP to better understand context)
2021 – MUM (BERT on Steroids)
2023 – SGE (The birth of AIO)”

Overstate The Problem

The next part is a reaction to the naive marketing schtick that tries to stir up fear about AI search in order to present themselves as the answer.

He wrote:

“3. Overstate the complexity to create a sense of fear and anxiety, and then close with ‘Your only hope is to hire a GEO expert.’”

Is AI Search Complex And Does It Change Everything?

I think it’s reasonable to say that AI search is complex because Google’s AI Mode, and to a lesser extent AI Overviews, shows links to a wider range of search intents than regular searches used to. Even Google’s Rich Snippets were aligned with the search intent of the original query.

That’s no longer the case with AIO and AI Mode search results. That’s the whole point of query fan-out (read about a patent that describes how query fan-out might work): the original query is broken out into follow-up questions.

Greg Boser has a point though in a follow-up post where he said that the query fan-out technique is pretty similar to People Also Ask (PAA), Google’s just sticking it into the AI Mode results.

He wrote in a follow-up post about Query fan-out:

“Yeah the query fan thing is the rage of the day. It’s like PAA is getting memory holed.”

AI Mode Is A Serious Threat To SEO?

I agree with Greg to a certain extent that AI Mode is not a threat to SEO. The same principles about promoting your site, technical SEO and so on still apply. The big difference is that AI Mode is not directly answering the query but providing answers to the entire information journey. You can dismiss it as just PAA above the fold but that’s still a big deal because it complicates what you’re going to try to rank for.

Michael Bonfils, another veteran SEO, recently observed that AI search is eliminating the beginning and middle of the sales funnel:

“This is, you know, we have a funnel, we all know which is the awareness consideration phase and the whole center and then finally the purchase stage. The consideration stage is the critical side of our funnel. We’re not getting the data. How are we going to get the data?”

So yeah, AI search is different from anything we’ve seen before, but, as Greg points out, it’s still SEO, and adapting to change has always been part of it.

Read Greg Boser’s post on X:

Google AI Mode Introduces Data Visualization For Finance Queries via @sejournal, @MattGSouthern

Google has started rolling out interactive charts in AI Mode through Labs.

You can now ask complex financial questions and get both visual charts and detailed explanations.

The system builds these responses specifically for each user’s question.

Visual Analytics Come To AI Mode

Soufi Esmaeilzadeh, Director of Product Management for Search at Google, explained that you can ask questions like “compare the stock performance of blue chip CPG companies in 2024” and get automated research with visual charts.

Google does the research work automatically. It looks up individual companies and their stock prices without requiring you to perform manual searches.

You can ask follow-up questions like “did any of these companies pay back dividends?” and AI Mode will understand what you’re looking for.

Technical Details

Google uses Gemini’s advanced reasoning and multimodal capabilities to power this feature.

The system analyzes what users are requesting, pulls both current and historical financial data, and determines the most effective way to present the information.

Implications For Publishers

Financial websites that typically receive traffic from comparison content should closely monitor their analytics. Google now provides direct visual answers to complex financial questions.

Searchers might click through to external sites less often for basic comparison data. But this also creates opportunities. Publishers that offer deeper analysis or expert commentary may find new ways to add value beyond basic data visualization.

Availability & Access

The data visualization feature is currently available through AI Mode in Labs. This means it’s still experimental. Google hasn’t announced plans for wider rollout or expansion to other types of data beyond financial information.

Users who want to try it out can access it through Google’s Labs program. Labs typically tests experimental search features before rolling them out more widely.

Looking Ahead

The trend toward comprehensive, visual responses continues Google’s strategy of becoming the go-to source for information rather than just a gateway to other websites.

While currently limited to financial data, the technology could expand to other data-heavy industries.

The feature remains experimental, but it offers a glimpse into how AI-powered search may evolve.