Google Launches Loyalty Program Structured Data Support via @sejournal, @MattGSouthern

Google now supports structured data that allows businesses to show loyalty program benefits in search results.

Businesses can use two new types of structured data. One type defines the loyalty program itself, while the other specifies the benefits members receive on specific products.

Here’s what you need to know.

Loyalty Structured Data

When businesses use this new structured data for loyalty programs, their products can display member benefits directly in Google. This allows shoppers to view the perks before clicking on any listings.

Google recognizes four specific types of loyalty benefits that can be displayed:

  • Loyalty Points: Points earned per purchase
  • Member-Only Prices: Exclusive pricing for members
  • Special Returns: Perks like free returns
  • Special Shipping: Benefits like free or expedited shipping

This is a new way to make products more visible. It may also result in more clicks from search results.

The announcement states:

“… member benefits, such as lower prices and earning loyalty points, are a major factor considered by shoppers when buying products online.”

Details & Requirements

Implementation takes two steps, plus a recommended validation check:

  1. Add loyalty program info to your ‘Organization’ structured data.
  2. Add loyalty benefits to your ‘Product’ structured data.
  3. Validate your markup using the Rich Results Test tool.

With valid markup in place, Google will be aware of your loyalty program and the perks associated with each product.
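For orientation, here’s a minimal two-part sketch based on schema.org’s MemberProgram vocabulary. The names, URLs, and prices are placeholders, and Google’s help page remains the authority on exactly which properties are required:

```json
[
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "url": "https://www.example.com",
    "hasMemberProgram": {
      "@type": "MemberProgram",
      "name": "Example Rewards",
      "description": "Members earn points and get member-only prices.",
      "url": "https://www.example.com/rewards",
      "hasTiers": {
        "@type": "MemberProgramTier",
        "@id": "https://www.example.com/rewards#base",
        "name": "Base",
        "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyPoints"
      }
    }
  },
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Wireless Headphones",
    "offers": {
      "@type": "Offer",
      "price": 89.99,
      "priceCurrency": "USD",
      "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "price": 74.99,
        "priceCurrency": "USD",
        "validForMemberTier": { "@id": "https://www.example.com/rewards#base" }
      }
    }
  }
]
```

The @id reference is what ties the member price on the product to the tier defined in the Organization markup.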

Important implementation note: Google recommends placing all loyalty program information on a single dedicated page rather than spreading it across multiple pages. This helps ensure proper crawling and indexing.

Multi-Tier Programs Now Supported

Businesses can define multiple membership tiers within a single loyalty program—think bronze, silver, and gold levels. Each tier can have different requirements for joining, such as:

  • Credit card signup requirements
  • Minimum spending thresholds (e.g., $250 annual spend)
  • Periodic membership fees

This flexibility allows businesses to create sophisticated loyalty structures that match their existing programs.
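As a hedged sketch, the hasTiers array below shows what two tiers with different join requirements might look like. It would sit inside the MemberProgram object from the earlier example, and the specific requirement types shown (a $250 minimum spend expressed as a MonetaryAmount, and a credit card signup expressed as a CreditCard) are illustrative:

```json
"hasTiers": [
  {
    "@type": "MemberProgramTier",
    "name": "Silver",
    "hasTierRequirement": {
      "@type": "MonetaryAmount",
      "currency": "USD",
      "value": 250
    },
    "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyPoints"
  },
  {
    "@type": "MemberProgramTier",
    "name": "Gold",
    "hasTierRequirement": {
      "@type": "CreditCard",
      "name": "Example Store Rewards Card"
    },
    "hasTierBenefit": "https://schema.org/TierBenefitLoyaltyPrice"
  }
]
```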

Merchant Center Takes Priority

Google Shopping software engineers Irina Tuduce and Pascal Fleury say this feature is:

“… especially important if you don’t have a Merchant Center account and want the ability to provide a loyalty program for your business.”

It’s worth reiterating: If your business already uses Google Merchant Center, keep using that for loyalty programs.

In fact, if you implement both structured data markup and Merchant Center loyalty programs, Google will prioritize the Merchant Center settings. This override ensures there’s no confusion about which data source takes precedence.

Looking Ahead

The update seems aimed at helping smaller businesses compete with larger retailers, which often have complex Merchant Center setups.

Now, smaller sites can share similar information using structured data, including sophisticated multi-tier programs that were previously difficult to implement without Merchant Center.

Small and medium e-commerce sites without Merchant Center accounts should strongly consider adopting this markup.

For more details, see Google’s new help page.

Google Responds To Site That Lost Ranks After Googlebot DDoS Crawl via @sejournal, @martinibuster

Google’s John Mueller answered a question about a site that received millions of Googlebot requests for pages that don’t exist, with one non-existent URL receiving over two million hits, essentially DDoS-level page requests. The publisher’s concerns about crawl budget and rankings seemingly were realized, as the site subsequently experienced a drop in search visibility.

NoIndex Pages Removed And Converted To 410

The 410 Gone server response code belongs to the family of 400-class response codes that indicate a page is not available. The 404 response means a page is not available, but it makes no claim about whether the URL will return in the future; it simply says the page is not available.

The 410 Gone status code means that the page is gone and likely will never return. Unlike the 404 status code, the 410 signals the browser or crawler that the missing status of the resource is intentional and that any links to the resource should be removed.
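For publishers who want to serve a 410 rather than a 404 for intentionally removed URLs, here’s a minimal sketch for Apache’s mod_rewrite (the path is hypothetical; nginx and other servers have their own equivalents):

```apache
# Minimal .htaccess sketch, assuming Apache with mod_rewrite enabled.
RewriteEngine On
# Hypothetical path: everything under /removed-section/ is intentionally gone.
# The [G] flag returns "410 Gone" instead of the default 404.
RewriteRule ^removed-section/ - [G]
```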

The person asking the question was following up on a question they posted three weeks earlier on Reddit, where they noted that they had about 11 million URLs that should not have been discoverable, which they removed entirely and began serving a 410 response code for. After a month and a half, Googlebot continued to return looking for the missing pages. They shared their concern about crawl budget and the subsequent impact on their rankings.

Mueller at the time forwarded them to a Google support page.

Rankings Loss As Google Continues To Hit Site At DDoS Levels

Three weeks later things had not improved, and they posted a follow-up question noting they had received over five million requests for pages that don’t exist. They posted an actual URL in their question, but I anonymized it; otherwise the question is verbatim.

The person asked:

“Googlebot continues to aggressively crawl a single URL (with query strings), even though it’s been returning a 410 (Gone) status for about two months now.

In just the past 30 days, we’ve seen approximately 5.4 million requests from Googlebot. Of those, around 2.4 million were directed at this one URL:
https://example.net/software/virtual-dj/ with the ?feature query string.

We’ve also seen a significant drop in our visibility on Google during this period, and I can’t help but wonder if there’s a connection — something just feels off. The affected page is:
https://example.net/software/virtual-dj/?feature=…

The reason Google discovered all these URLs in the first place is that we unintentionally exposed them in a JSON payload generated by Next.js — they weren’t actual links on the site.

We have changed how our “multiple features” works (using ?mf querystring and that querystring is in robots.txt)

Would it be problematic to add something like this to our robots.txt?

Disallow: /software/virtual-dj/?feature=*

Main goal: to stop this excessive crawling from flooding our logs and potentially triggering unintended side effects.”

Google’s John Mueller confirmed that it’s Google’s normal behavior to keep returning to check whether a missing page has come back. This default behavior is based on experience: publishers can make mistakes, so Google periodically returns to verify whether the page has been restored. It’s meant to be a helpful feature for publishers who might unintentionally remove a web page.

Mueller responded:

“Google attempts to recrawl pages that once existed for a really long time, and if you have a lot of them, you’ll probably see more of them. This isn’t a problem – it’s fine to have pages be gone, even if it’s tons of them. That said, disallowing crawling with robots.txt is also fine, if the requests annoy you.”
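If the publisher goes the robots.txt route, a minimal sketch could look like the following. One detail worth noting: robots.txt rules are prefix matches, so the trailing wildcard in the publisher’s proposed rule is harmless but unnecessary:

```
# robots.txt sketch: stop Googlebot from crawling the ?feature= variants.
User-agent: Googlebot
Disallow: /software/virtual-dj/?feature=
```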

Caution: Technical SEO Ahead

This next part is where the SEO gets technical. Mueller cautions that the proposed robots.txt disallow rule could inadvertently break rendering for pages that aren’t supposed to be missing.

He’s basically advising the person asking the question to:

  • Double-check that the ?feature= URLs are not being used at all in any frontend code or JSON payloads that power important pages.
  • Use Chrome DevTools to simulate what happens if those URLs are blocked — to catch breakage early.
  • Monitor Search Console for Soft 404s to spot any unintended impact on pages that should be indexed.

John Mueller continued:

“The main thing I’d watch out for is that these are really all returning 404/410, and not that some of them are used by something like JavaScript on pages that you want to have indexed (since you mentioned JSON payload).

It’s really hard to recognize when you’re disallowing crawling of an embedded resource (be it directly embedded in the page, or loaded on demand) – sometimes the page that references it stops rendering and can’t be indexed at all.

If you have JavaScript client-side-rendered pages, I’d try to find out where the URLs used to be referenced (if you can) and block the URLs in Chrome dev tools to see what happens when you load the page.

If you can’t figure out where they were, I’d disallow a part of them, and monitor the Soft-404 errors in Search Console to see if anything visibly happens there.

If you’re not using JavaScript client-side-rendering, you can probably ignore this paragraph :-).”

The Difference Between The Obvious Reason And The Actual Cause

Google’s John Mueller is right to suggest a deeper diagnostic to rule out errors on the part of the publisher. A publisher error started the chain of events that led to the indexing of pages against the publisher’s wishes, so it’s reasonable to ask whether some other cause better accounts for the loss of search visibility. This is a classic situation where the obvious reason is not necessarily the actual cause, and Mueller’s suggestion to keep looking for the cause is good advice.

Read the original discussion here.

Featured Image by Shutterstock/PlutusART

WordPress Co-Founder Mullenweg’s Reaction To FAIR Project via @sejournal, @martinibuster

The Linux Foundation recently announced the FAIR Package Manager project, an open-source, distributed WordPress plugin and theme repository that decentralizes control of the repository. A distributed theme and plugin repository became a priority for many in the WordPress community after Matt Mullenweg took control of certain paid premium plugins and created free versions from them, in addition to removing access to the free versions of the original plugins.

The Linux Foundation announcement, made on Friday, June 6, came in the middle of WordCamp Europe, all but assuring that it would be a topic of discussion at the three-day conference.

According to the Linux Foundation announcement:

“…The FAIR Package Manager project paves the way for the stability and growth of open source content management, giving contributors and businesses additional options governed by a neutral community…”

It was inevitable that Matt Mullenweg would be asked about it, and that’s what happened, twice. Mullenweg was gracious about answering the questions, but he was also understandably cautious, given that less than 24 hours had passed since the FAIR project was announced.

Initial Reaction To Project FAIR

The first question came early in the question and answer period, when Mullenweg was asked how he sees such initiatives coexisting with WordPress and what he sees as the ideal outcome.

Mullenweg expressed cautious optimism, praising the open source nature of WordPress by saying that that’s the point of open source: it can coexist with everything. But he was reluctant to say much more. He did seem a little annoyed that the FAIR project was created “in secret.” I don’t know to what extent the FAIR project was created in secret, but it did seem as if the Linux Foundation essentially ambushed WordPress and WordCamp Europe with its announcement.

Mullenweg answered:

“…I think that’s part of the beauty that something like this can be written with the APIs that WordPress has. I don’t know if I want to comment too much further on it just because kind of just found out about it last night, there hasn’t been that much time. There’s a lot of code and uh and complexities.

You know, I do wish if the team did want to collaborate or the team says we want to be transparent and everything. But it did sort of drop as a surprise. It was worked on in secret for six months. But we can work past that and look at it.”

Do Users Want A Federated Repository?

Mullenweg next turned the question away from what he might think about it and asked whether this is something that WordPress users would want. He also explained the immensity of undertaking a decentralized system for the repository.

He continued his answer:

“I do think things we need to keep in mind are, you know, what are users asking for?

What are the challenges they’re facing around finding the right things, knowing it’s secure, getting updates? You know the stats around how many sites that are hacked are from out of date plugins. Those are things that are top of my mind for the plugin directory and so the trust and safety elements of that for the.org directory.

…So we’re now up to 72,000 plugins and themes. This is about 3.2 terabytes, like zip files. That’s not counting all the SVN history and everything like that. So there’s a there’s a lot of data there, which also we need to make sure, like if 500 mirrors are set up and they’re all sucking down the directory like, that could DDOS us.”

About twenty minutes later, someone else stepped up and asked the question again, sharing her long history with WordPress and her opinion of why the FAIR project may be useful.

She said:

“I’ve been contributing to the communication team for 14 years and contributing to plug in review team for a couple of years and my whole work in documentation was serving the user every decision we made we made was to serve user. And in plugin review team we also include plugin authors So everything we do we do for plugin authors and users to make their lives easier and better.”

Next she offered an explanation of why she thinks the FAIR project is good for plugin authors and users:

“So the Fair project is actually federated and independent repository of trusted plugins and teams. And it is under the Linux Foundation. So that means a lot when it’s under the Linux foundation.

And what it means for users and plugin authors and team authors is actually making their lives easier and better, more secure. It makes all the products more discoverable and also developers can choose their source. Where are they using their supply chain from.

But also, it is helping WordPress.org because these are mirrors so it will reduce the load from WordPress.org for every update and all of that.

…I don’t know if you trust me, but it seemed to me that this aligns with the idea of having users and developers first in mind. Would you as wordpress.org consider collaborating with this project?”

Mullenweg’s answer was cautious in tone, giving the impression that he didn’t know much about the FAIR project aside from the public announcement made by the Linux Foundation.

He answered:

“Of course we consider everything, but even in what you said, I think there’s a lot of challenges to it. So for example, right now, a supply chain attack needs to breach wordpress.org which has never been hacked.”

At this point loud laughter rang out in the hall, catching Mullenweg by surprise.

He then continued, offering an idea of the complexity of a federated theme and plugin repository:

“The… now all of a sudden there is N places that could potentially be compromised that you know there’s ways to do that, many ways. There’s N places with uptime issues.

And… it makes it much more difficult for, I don’t know if it’s actually better for WordPress.org, because it makes it much more difficult to do things like rollouts, phased rollouts, or let’s say we get plugin authors the ability to ship to 5% of users and then see what happens, which means we also need things being checked back and then we can roll out to the rest, which is something that I’ve heard a ton of plugin authors ask for.

It will break all the analytics and stats that we provide and also that we internally …use to make decisions, for example which versions of PHP we support…

So I think that it’s uh a big part of why WordPress is where it is today is because of the infrastructure and the sort of feedback loop that we get from wordpress.org.

Also, the trust that we’re able to engender by having that be a resource. When you look at marketplaces, people aren’t asking necessarily for I want it to be downloaded from more locations.

  • They’re asking for how do I know this is trustworthy?
  • How do I know these reviews are real?
  • Who’s moderating?
  • Who’s checking the IP’s on these different reviews?
  • What’s the plug in rating?
  • What’s the compatibility for it?
  • How does it, compatible with my other plugins?

These are things I’m hearing from users, not I need it hosted in a different place. This is one example.

And again, I don’t want to get too far into it because I want to read the code. I want to dive more into it. I want colleagues to look at it. So, I think it’s kind of premature, less than 24 hours in to say like we’re going to …this or not.”

At this point Mullenweg praised the fact that people were being constructive rather than arguing.

He continued:

“But I do think it’s awesome that people are shipping code versus just arguing or talking or writing blog posts. I think that’s a pretty productive way to sort of channel possible disagreements or anything, and then we can see how it looks. Might be a super niche thing that a few people use, maybe one or two hosts or it might be something that maybe there’s something in there that becomes …popular.”

Then he returned to listing things that still need to be looked into, trying to give an idea of how complex creating a decentralized repository is.

Mullenweg continued:

“Like something that we probably need to do in the plug and review is something about these admin banners right, now how is that enforced in a distributed FAIR system?”

Mullenweg then asked the person asking the question how she would solve all of those problems. She answered that she’s not the smartest person in the room, but that this is something to be collaborated on, and then tossed off a joking remark that maybe they can ask ChatGPT. That drew laughter and applause, breaking the tension of the moment and ending the question on a light note.

Watch the question and answer session at about the 8-hour mark of the video:

Google’s Update To Recipe Structured Data Confirms A Ranking Criteria via @sejournal, @martinibuster

Google updated the Recipe Schema.org structured data documentation to reflect more precise guidance on what the image structured data property affects and where to find additional information about ranking recipe images in the regular organic search results.

Schema.org Structured Data And Rich Results

What the SEO and publisher community calls the organic search results, or the ten blue links, Google refers to as the text results.

Structured data helps a site’s content become eligible to rank in Google’s rich results but it generally doesn’t help content rank better in the text results.

That’s the concept underlying Google’s update to the Recipe structured data guidance with the addition of two sentences:

“Specifying the image property in Recipe markup has no impact on the image chosen for a text result image. To optimize for a text result image, follow the image SEO best practices.”

Recipe structured data influences the images shown in the Recipe Rich Results. The structured data does not influence the image rankings in the regular text results (aka the organic search results).
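For context, here’s the property in question in a minimal Recipe sketch (URLs are placeholders; Google’s recipe guidance recommends supplying images in several aspect ratios). Per the updated documentation, this array influences only the recipe rich result image, not the text result thumbnail:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Pancakes",
  "image": [
    "https://example.com/photos/1x1/pancakes.jpg",
    "https://example.com/photos/4x3/pancakes.jpg",
    "https://example.com/photos/16x9/pancakes.jpg"
  ]
}
```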

Ranking Images In Text Results

Google offers documentation for image best practices, which specifies using normal HTML such as the <img> and <picture> elements. Google also recommends using an image sitemap, a sitemap specifically for images.
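An image sitemap is a standard sitemap that uses Google’s image extension namespace; here’s a minimal sketch with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/recipes/pancakes</loc>
    <!-- List each image that appears on the page above -->
    <image:image>
      <image:loc>https://example.com/photos/4x3/pancakes.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```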

Pay particular attention to avoiding blurry images: always use sharp images to give them the best chance of showing up in the search results.

I know that some images contain slight purposeful blurring, both for optimization (blurring decreases file size) and to enhance the perspective of foreground and background. But Google recommends using sharp images and avoiding blur. Google doesn’t say sharpness is an image ranking factor, but it does make that recommendation.

Here’s what Google’s image optimization guidance recommends:

“High-quality photos appeal to users more than blurry, unclear images. Also, sharp images are more appealing to users in the result thumbnail and can increase the likelihood of getting traffic from users.”

In my opinion, it’s best to avoid excessive use of blurring. I only have my own anecdotal experience of purposely blurred images not showing up in the search results, so it’s interesting to see that experience echoed by Google treating blurred images as a negative quality and sharp images as a positive one.

Read Google’s updated Recipe structured data documentation about images here:

https://developers.google.com/search/docs/appearance/structured-data/recipe#image

Read more about images in Google’s text results here.

Read about blurry and sharp images here:

https://developers.google.com/search/docs/appearance/google-images#good-quality-photos

Featured Image by Shutterstock/Dean Drobot

Newly Released Data Shows Desktop AI Search Referrals Dominate via @sejournal, @martinibuster

BrightEdge, an enterprise SEO platform, released new data showing distinctive patterns across major AI search and chatbot platforms, and called attention to potential disruption from Apple if it breaks with Google as the default search engine in Safari.

Desktop AI Traffic Dominance

One of the key findings in the BrightEdge data is that traffic to websites from AI chatbots and search engines is highest from desktop users. The exception is Google Search, which is reported to send more traffic from mobile devices than from desktop.

The report notes that 94% of the traffic from ChatGPT originates from desktop apps, with just 6% of referrals coming from mobile apps. BrightEdge speculates that mobile traffic is lower because ChatGPT’s mobile app shows an in-app preview, requiring a user to execute a second click to navigate to an external site. This creates a referral bottleneck that doesn’t exist on the desktop.

But that doesn’t explain why Perplexity, Bing, and Google Gemini show similar levels of desktop traffic dominance. Could it be a contextual difference, where desktop users turn to AI for business while mobile use is more casual? The fact that Google Search sends more mobile referral traffic than desktop could suggest a contextual reason for the disparity in mobile traffic from AI search and chatbots.

BrightEdge shared their insights:

“While Google maintains an overwhelming market share in overall search (89%) and an even stronger position on mobile (93%), its dominance is particularly crucial in mobile web search. BrightEdge data indicates that Apple phones alone account for 57% of Google’s mobile traffic to US and European brand websites. But with Safari being the default for around a billion users, any change to that default could reallocate countless search queries overnight.

Apple’s vendor-agnostic Apple Intelligence also suggests opportunities for seismic shifts in web search. While generative AI tools have surged in popularity through apps on IOS, mobile web search—where the majority of search still occurs—remains largely controlled by Google via Safari defaults. This makes Apple’s control of Safari the most valuable real estate in the mobile search landscape.”

Here are the traffic referral statistics provided by BrightEdge:

  • Google Search: Only major AI search with mobile majority traffic referrals (53% mobile vs 44% desktop)
  • ChatGPT: 94% desktop, just 6% mobile referrals
  • Perplexity: 96.5% desktop, 3.4% mobile
  • Bing: 94% desktop, 4% mobile
  • Google Gemini: 91% desktop, 5% mobile

Will Apple Play Kingmaker?

With Apple’s Worldwide Developers Conference (WWDC) nearing, one of the changes many will be watching for is any announcement related to the company’s Safari browser, which controls the default search settings on nearly a billion devices. A change of search provider in Safari could dramatically reshuffle the winners and losers of web search.

The BrightEdge report asserts that changes to Safari browser defaults could upend search marketing calculations for the following reasons:

“58% of Google’s mobile traffic to brand websites comes from iPhones

Safari remains the default browser for nearly a billion users

Apple has not yet embedded AI-powered search into its mobile web stack”

Takeaways

  • Desktop Users Of AI Search Account For The Majority Of Referral Traffic
    Most AI search referral traffic from ChatGPT, Perplexity, Bing, and Gemini comes from desktop usage, not mobile.
  • Google Search Is The Traffic Referral Outlier
    Unlike other AI search tools, Google Search still delivers a majority of its traffic via mobile devices.
  • In-App Previews May Limit ChatGPT Mobile AI Referrals
    ChatGPT’s mobile app requires an extra click to visit external sites, possibly explaining low mobile referral numbers.
  • Apple’s Position Is Pivotal To Search Marketing
    Apple devices account for over half of Google’s mobile traffic to brand websites, giving Apple an outsized impact on mobile search traffic.
  • Safari Default And Greater Market Share
    With Safari set as the default browser for nearly a billion users, Apple effectively controls the gate to mobile web search.
  • Perplexity Stands To Gain Market Share
    If Apple switches Safari’s default search to Perplexity, the resulting shift in traffic could remake the competitive balance in search marketing.
  • Search Marketers Should Watch WWDC
    Any change announced at Apple’s WWDC regarding Safari’s search engine could have large-scale impact on search marketing.

BrightEdge data shows that desktop usage is the dominant source of traffic referrals from AI-powered search tools like ChatGPT, Perplexity, Bing, and Gemini, with Google Search as the only major platform that sends more traffic via mobile.

This pattern could suggest a behavioral split between desktop users, who may be performing work-related or research-heavy tasks, and mobile users, who may be browsing more casually. BrightEdge also points to a bottleneck built into the ChatGPT app that creates a one-click barrier to mobile traffic referrals.

BrightEdge’s data further cites Apple’s control over Safari, which is installed on nearly a billion devices, as a potential disruptor due to a possible change in the default search engine away from Google. Such a shift could significantly alter mobile search traffic patterns.

Read more at BrightEdge

The Open Frontier of Mobile AI Search

Featured Image by Shutterstock/Tada Images

WordPress Plugin Platform Offers Proactive Security Scanning via @sejournal, @martinibuster

WordPress security company Patchstack announced a new paid tier for its managed Vulnerability Disclosure Program (mVDP) platform, offering both human and advanced AI plugin reviews to help plugin developers keep their software resistant to vulnerabilities and improve trustworthiness.

One of the biggest problems with WordPress is vulnerabilities in third-party plugins. An enormous number of plugins are found to contain vulnerabilities every day, and it doesn’t matter whether the developer is a one-person shop or a large multinational organization: vulnerabilities happen, and when they do, user trust goes down, especially if it happens on an ongoing basis.

Patchstack offers software developers a way to build trust with their users through two tiers of protection, one free and one paid, that help plugin developers focus on creating high-quality plugins that are free from vulnerabilities.

With more and more software being generated by AI, we’re seeing a significant increase in new vulnerabilities and an equal increase in AI-generated security reports, which makes managing the security of plugins more important than ever.

Patchstack offers a standard managed VDP and a new Security Suite that costs $70/month.

According to the announcement, the new paid tier comes with the following benefits:

“$40 worth of AI tokens for code security reviews per month

Team management feature with 5 seats included

Discussion board for direct communication with the reporting researchers

AI code review and human research
The new Security Suite tier combines the best of both worlds. Your plugins will receive boosted visibility (100% AXP bonus) in the Patchstack Alliance ethical hackers community, which encourages security researchers to report significantly more bugs and help plugins fix more vulnerabilities faster.

Additionally, our AI code review tool can scan through your entire codebase to find WordPress-specific security issues and highlight potential improvements. We are currently launching this in beta, but we’ll have many more releases to share in the coming months.”

Security Suite customers will receive security recommendations from Patchstack’s internal security experts, helping developers be proactive about building safe-to-use WordPress plugins.

Read more at Patchstack:

NEW: Patchstack AI code review tool and Security Suite for plugin vendors

Featured Image by Shutterstock/STILLFX

Respected SEO Rockstar Deconstructs SEO For Google’s AI Search via @sejournal, @martinibuster

One of the SEO industry’s rockstars recently shared his opinion about SEO for generative AI, calling attention to facts about Google and how the new AI search actually works.

Greg Boser is a search marketing pioneer with a deep level of experience that few in the industry can match or even begin to imagine.

Digital Marketers And The History Of SEO

His post was in response to a tweet that, in his opinion, overstated the claim that SEO is losing dominance. Greg began his SEO rant by pointing out that some search marketers’ conception of SEO is outdated, but they’re so new to SEO that they don’t realize it.

For example, the practice of buying links is one of the oldest tactics in SEO, so old that newcomers to SEO gave it a new name, PBN (private blog network), as if giving link buying a new name somehow changes it. And by the way, I’ve never seen a PBN that was private: the moment you put anything out on the web, Google knows about it. If an automated spambot can find it in literally five minutes, Google probably already knows about it, too.

Greg wrote:

“If anyone out there wants to write their own “Everything you think you know is wrong. GEO is the way” article, just follow these simple steps:

1. Frame “SEO” as everything that was a thing between 2000 – 2006. Make sure to mention buying backlinks and stuffing keywords. And try and convince people the only KPI was rankings.”

Google’s Organic Links

The second part of his post calls attention to the fact that Google has not been a ten-blue-links search engine for a long time. Google providing answers isn’t new.

He posted:

“2. Frame the current state of things as if it all happened in the last 2 weeks. Do not under any circumstances mention any of the following things from the past 15 years:

2009 – Rich Snippets
2011 – Knowledge Graph (things not strings)
2013 – Hummingbird (Semantic understanding of conversational queries)
2014 – Featured Snippets – (direct answers at position “Zero”)
2015 – PPA Boxes (related questions anticipating follow-up questions)
2015 – RankBrain (machine learning to interpret ambiguous queries)
2019 – BERT (NLP to better understand context)
2021 – MUM (BERT on Steroids)
2023 – SGE (The birth of AIO)”

Overstate The Problem

The next part is a reaction to the naive marketing schtick that tries to stir up fear about AI search in order to present the fearmongers themselves as the answer.

He wrote:

“3. Overstate the complexity to create a sense of fear and anxiety and then close with “Your only hope is to hire a GEO expert”

Is AI Search Complex And Does It Change Everything?

I think it’s reasonable to say that AI search is complex, because Google’s AI Mode, and to a lesser extent AI Overviews, shows links for a wider range of search intents than regular searches used to. Even Google’s Rich Snippets were aligned with the search intent of the original query.

That’s no longer the case with AIO and AI Mode search results. That’s the whole point of query fan-out (read about a patent that describes how query fan-out might work): the original query is broken out into follow-up questions.

Greg Boser has a point, though. In a follow-up post he said the query fan-out technique is pretty similar to People Also Ask (PAA); Google is just sticking it into the AI Mode results.

He wrote in a follow-up post about Query fan-out:

“Yeah the query fan thing is the rage of the day. It’s like PAA is getting memory holed.”

Is AI Mode A Serious Threat To SEO?

I agree with Greg to a certain extent that AI Mode is not a threat to SEO. The same principles of promoting your site, technical SEO, and so on still apply. The big difference is that AI Mode is not directly answering the query but providing answers for the entire information journey. You can dismiss it as just PAA above the fold, but that’s still a big deal because it complicates what you’re going to try to rank for.

Michael Bonfils, another old-timer SEO, recently observed that AI search is eliminating the beginning and middle of the sales funnel:

“This is, you know, we have a funnel, we all know which is the awareness consideration phase and the whole center and then finally the purchase stage. The consideration stage is the critical side of our funnel. We’re not getting the data. How are we going to get the data?”

So yeah, AI search is different from anything we’ve seen before, but, as Greg points out, it’s still SEO, and adapting to change has always been part of it.

Read Greg Boser’s post on X:

Google AI Mode Introduces Data Visualization For Finance Queries via @sejournal, @MattGSouthern

Google has started rolling out interactive charts in AI Mode through Labs.

You can now ask complex financial questions and get both visual charts and detailed explanations.

The system builds these responses specifically for each user’s question.

Visual Analytics Come To AI Mode

Soufi Esmaeilzadeh, Director of Product Management for Search at Google, explained that you can ask questions like “compare the stock performance of blue chip CPG companies in 2024” and get automated research with visual charts.

Google does the research work automatically. It looks up individual companies and their stock prices without requiring you to perform manual searches.

You can ask follow-up questions like “did any of these companies pay back dividends?” and AI Mode will understand what you’re looking for.

Technical Details

Google uses Gemini’s advanced reasoning and multimodal capabilities to power this feature.

The system analyzes what users are requesting, pulls both current and historical financial data, and determines the most effective way to present the information.

Implications For Publishers

Financial websites that typically receive traffic from comparison content should closely monitor their analytics. Google now provides direct visual answers to complex financial questions.

Searchers might click through to external sites less often for basic comparison data. But this also creates opportunities. Publishers that offer deeper analysis or expert commentary may find new ways to add value beyond basic data visualization.

Availability & Access

The data visualization feature is currently available through AI Mode in Labs. This means it’s still experimental. Google hasn’t announced plans for wider rollout or expansion to other types of data beyond financial information.

Users who want to try it out can access it through Google’s Labs program. Labs typically tests experimental search features before rolling them out more widely.

Looking Ahead

The trend toward comprehensive, visual responses continues Google’s strategy of becoming the go-to source for information rather than just a gateway to other websites.

While currently limited to financial data, the technology could expand to other data-heavy industries.

The feature remains experimental, but it offers a glimpse into how AI-powered search may evolve.

WordPress Shares How AI May Play Stronger Role In Web Publishing via @sejournal, @martinibuster

WordPress interviewed a member of the newly formed WordPress AI Team who shared how AI can be integrated into WordPress, outlining a future in which the platform supports AI agents and content consumption while enabling new kinds of functionality. To achieve this, the team is focusing on developer tools that allow third-party developers and services to connect AI systems to WordPress without embedding generative features directly into core.

The interview was with James LaPage, the AI engineering lead at Automattic and one of the leaders of the newly announced WordPress AI Team.

Screenshot of James LaPage of WordPress AI Team

Timing Of AI Team Announcement

Many competitors, from private closed systems like Wix, Duda, and Shopify to open-source platforms like Drupal CMS, have various AI integrations built in. Third-party WordPress plugins such as Yoast, Rank Math, and Elementor also feature AI integration. WordPress hosts including Bluehost, 10Web, and Automattic’s commercial WordPress.com platform offer AI-powered site builder functionality. A case could be made that WordPress is late to the AI party.

James LaPage of the WordPress AI Team argues that a cautious approach was necessary due to the fast rate of change within AI. This makes sense given that agentic AI (AI agents that research the web on behalf of humans) is just beginning to gain adoption.

LaPage explains these realities early in the interview:

” I’ve wanted an AI team for a long time. I think right now actually was the perfect time to launch it because the …generative AI boom and the technology running and powering that boom is actually like pretty recent, and it’s changing so rapidly and only recently have we seen a lot of centralization around, for example, how these models work, how they consume information, how you interact with them, how you connect them to software.

So we’ve come to a point right now where a project like WordPress, which is massive and humongous and incredibly important on the web, is able to begin actually exploring this type of stuff because it isn’t changing from under our feet in the way that it was a year ago or two years ago.

And a good way to point that out is there was a Make WordPress post about AI two years ago that Ann published, and a lot of us had commented on it and it was really like, Yeah, this is awesome.

And as you read through those comments, you can kind of see everybody being excited but not really knowing where to push that excitement and point and say do this or do this or do this and we finally get to the point now where this team can say this is what we want to be doing and there can be real understanding of why we’re doing that and prior art in terms of how things actually work.”

WordPress As A Fully AI-Accessible System

LaPage was asked what an AI-friendly WordPress might look like in three years. He shared a vision of WordPress as a foundational framework for AI agents: a platform where tools, content, and interactivity are natively exposed to be dynamically interacted with and consumed.

He explained:

“I think if WordPress is able to become something that we can use AI to consume information from and build functionality for, that is a lovely spot and position it can be in. And it’s already almost in that spot. And if we can make it more accessible to AI, then I think that we are able to maintain its position on the open web as this place that you express yourself digitally.

…What I would love to see is WordPress be this platform where people continue to digitally express themselves. And I think that expression becomes more important in this era where more and more stuff will be consumed by chatbots and you’ll be speaking with AI and you’ll be doing all these different things.

Having the ability to express yourself and also be able to express yourself in ways where you couldn’t before because you couldn’t develop this crazy idea that you have in your head, or you have a crazy idea in your head, you don’t even know how to do it… Like, that type of stuff I would love to enable through the work that we do on this AI team.

So maintaining the position of yes, it’s really important to have this digital presence on the Internet. It’s very important not to subscribe only to these walled gardens, like the social media platforms and the AI chatbots, but instead have this lovely blossoming of expression on the web as WordPress enabled in its beginnings as well.

Like, this was something that it was very difficult to publish your thoughts and then it wasn’t. Let’s do it again. But let’s do it with AI.”

Technical Description Of Future Of AI Innovation With WordPress

James LaPage went on to describe what MCP (Model Context Protocol) is and the role it plays in letting AI interact with WordPress, transforming it into something like a framework for accomplishing a wider range of things on the web.

“So MCP is model context protocol. This is an open protocol and standard. So it’s important to focus on that. It’s a standard. It’s not a technology package that’s built in Python that you go and install. You can build things around this standard and what the standard does is define how software can expose functionality to AI, in the simplest definition.

So you have the ability to define tools which are ways that you expose, hey, you can do this or you can read this on my piece of software. You can look at the piece of software as WordPress and then you also have the method of providing those tools to the client, which is something like Claude or Cursor or another AI agent for example, that can then read those tools and use them however they want, and it’s up to the folks building the actual systems to implement the protocol properly and to build the actual agents and the tools and everything that comes with it through MCP.

So when you look at how we enable AI within WordPress and outside of WordPress, we’ve had similar needs at Automattic …and other folks in the industry have had needs to define how AI speaks to specifically in WordPress different plugins and different functionality within the core software and the Feature API is the answer to exposed features of WordPress and features of plugins in a WordPress specific way to AI.

And this is intended to almost be something that goes into WordPress core, allows plug-in developers to expose this functionality to AI within WordPress in this unified way, similar to how I explained MCP. But do it in the WordPress way allows you to plug into the capabilities and the permission callbacks and the REST API aliases and all of these different WordPress-focused things, which means you’re not reinventing the wheel on WordPress, you’re simply exposing functionality in this unified way, which then it’s up to a developer to say well, now I have this list of functionality, list of things I can do with WordPress resources, I can read with WordPress, let’s build an agent or let’s build a media generation playground or let’s build a single shot, single click button that generates a whole bunch of stuff and use that features API to do so.

And when you think about how WordPress can speak with software outside of itself and almost become that framework for the functionality that plugins bring in, the data that the database stores and custom post types and posts, then you kind of start infusing the ideas behind Feature API and MCP.”
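To make “exposing functionality to AI” concrete, here’s a hedged sketch of how an MCP server might advertise a WordPress-style capability to a client such as Claude or Cursor. The tool name and fields are illustrative, not from the WordPress project; only the shape (a name, a description, and a JSON Schema inputSchema) follows the MCP standard:

```json
{
  "tools": [
    {
      "name": "wp_create_draft_post",
      "description": "Hypothetical example: create a draft post in WordPress.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "title": { "type": "string", "description": "Post title" },
          "content": { "type": "string", "description": "Post body in HTML" }
        },
        "required": ["title"]
      }
    }
  ]
}
```

A client that reads this description can decide when to call the tool; the Feature API that LaPage describes aims to give plugin developers a WordPress-native way to register this kind of capability.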

You Can Become Involved

Something that many WordPress users might not be aware of is that every user and interested party can contribute to WordPress and help shape it into what they need it to be. Even a user who doesn’t know how to program can still influence WordPress by sharing their opinions.

LaPage invited the wider WordPress community to get involved with providing feedback to the AI Team.

He said:

“Immediately, the way to get involved is through the make.wordpress.org/AI blog. There are several posts popping out. The most recent one as we’re recording being the hallway hangout. This probably best way to be plugged in is through the Core AI Slack, in the Make WordPress Slack. Both of those things are linked throughout the make.wordpress.org/AI site and the news announcements and everything else, so that’s how you can get involved right now in terms of contributing into the future.

A big focus of the group is to get to a very solid road map with explicit instructions and directions on how you can contribute that are likely going to be several projects that work together that we build and maintain. There’s likely going to be many other focuses around AI that we want to address, and we’re going to try to make it as clear as possible as to how you can get involved and how you can actually go and help make WordPress what it needs to be in in this AI era.

So right now, join the the core AI Slack, check out the blog posts and join the hallway hang out on Monday to really get in on the ground floor.”

Watch the WordPress interview with James LaPage here:

Featured image/Screenshot by author

Google Shares Details Behind AI Mode Development via @sejournal, @MattGSouthern

Google has shared new details about how it designed and built AI Mode.

In a blog post, the company reveals the user research, design challenges, and testing that shaped its advanced AI search experience.

These insights may help you understand how Google creates AI-powered search tools. The details show Google’s shift from traditional keyword searches to natural language conversations.

User Behavior Drove AI Mode Creation

Google built AI Mode in response to the ways people were using AI Overviews.

Google’s research showed a disconnect between what searchers wanted and what was available.

Claudia Smith, UX Research Director at Google, explains:

“People saw the value in AI Overviews, but they didn’t know when they’d appear. They wanted them to be more predictable.”

The research also found people started asking longer questions. Traditional search wasn’t built to handle these types of queries well.

This shift in search behavior led to a question that drove AI Mode’s creation, explains Product Management Director Soufi Esmaeilzadeh:

“How do you reimagine a Search gen AI experience? What would that look like?”

AI “Power Users” Guided Development Process

Google’s UX research team identified the most important use cases as: exploratory advice, how-to guides, and local shopping assistance.

This insight helped the team understand what people wanted from AI-powered search.

Esmaeilzadeh explained the difference:

“Instead of relying on keywords, you can now pose complex questions in plain language, mirroring how you’d naturally express yourself.”

According to Esmaeilzadeh, early feedback suggests that the team’s approach was successful:

“They appreciate us not just finding information, but actively helping them organize and understand it in a highly consumable way, with help from our most intelligent AI models.”

Industry Concerns Around AI Mode

While Google presents an optimistic development story, industry experts are raising valid concerns.

John Shehata, founder of NewzDash, reports that sites are already “losing anywhere from 25 to 32% of all their traffic because of the new AI Overviews.” For news publishers, health queries show 26% AI Overview penetration.

Mordy Oberstein, founder of Unify Brand Marketing, analyzed Google’s I/O demonstration and found the examples weren’t as complex as presented. He shows how Google combined readily available information rather than showcasing advanced AI reasoning.

Google’s claims about improved user engagement have not been verified. During a recent press session, Google executives claimed AI search delivers “more qualified clicks” but admitted they have “no data to share” on these quality improvements.

Further, Google’s reporting systems don’t differentiate between clicks from traditional search, AI Overviews, and AI Mode. This makes independent verification impossible.

Shehata believes that the fundamental relationship between search and publishers is changing:

“The original model was Google: ‘Hey, we will show one or two lines from your article, and then we will give you back the traffic. You can monetize it over there.’ This agreement is broken now.”

What This Means

For SEO professionals and content marketers, Google’s insights reveal important changes ahead.

The shift from keyword targeting to conversational queries means content strategies need to focus on directly answering user questions rather than optimizing for specific terms.

The focus on exploratory advice, how-to content, and local help shows these content types may become more important in AI Mode results.

Shehata recommends that publishers focus on content with “deep analysis of a situation or an event” rather than commodity news that’s “available on hundreds and thousands of sites.”

He also notes a shift in success metrics: “Visibility, not traffic, is the new metric” because “in the new world, we will get less traffic.”

Looking Ahead

Esmaeilzadeh said significant work continues:

“We’re proud of the progress we’ve made, but we know there’s still a lot of work to do, and this user-centric approach will help us get there.”

Google confirmed that more AI Mode features shown at I/O 2025 will roll out in the coming weeks and months. This suggests the interface will keep evolving based on user feedback and usage patterns.