56% Of CEOs Report No Revenue Gains From AI: PwC Survey via @sejournal, @MattGSouthern

Most companies haven’t yet seen financial returns from their AI investments, according to PwC’s 29th Global CEO Survey.

The survey of 4,454 chief executives across 95 countries found that 56% report neither increased revenue nor lower costs from AI over the past 12 months.

What The Survey Found

About 30% of CEOs said their company saw increased revenue from AI in the last year. On costs, 26% reported decreases while 22% said costs went up. PwC defined “increase” and “decrease” as changes of 2% or more.

Only 12% of companies achieved both revenue gains and cost reductions. PwC called this group the “vanguard” and noted they had stronger AI foundations in place, including defined roadmaps and technology environments built for integration.

For marketing specifically, the numbers suggest early-stage adoption. Just 22% of CEOs said their organization applies AI to demand generation to a large or very large extent. Applying AI to the company’s products, services, and experiences showed a similar figure, at 19%.

Separate from AI, CEO confidence in near-term growth has declined. Only 30% said they were very or extremely confident about revenue growth over the next 12 months. That’s down from 38% last year and a peak of 56% in 2022.

Why This Matters

The survey adds data to a pattern I’ve tracked over the past year. A LinkedIn report found 72% of B2B marketers felt overwhelmed by AI’s pace of change. A Gartner survey showed 73% of marketing teams were using AI, but 87% of CMOs had experienced campaign performance problems.

The 22% demand generation figure gives marketers a rough benchmark for how their AI adoption compares to the broader executive population. It’s self-reported CEO perception rather than measured deployment, but it suggests most organizations are still in early stages of applying AI to customer acquisition at scale.

PwC’s framing is direct:

“Isolated, tactical AI projects often don’t deliver measurable value.”

The report adds that tangible returns come from enterprise-scale deployment consistent with company business strategy.

Looking Ahead

PwC recommends companies focus on building AI foundations before expecting returns. That includes defined roadmaps, technology environments that enable integration, and formalized responsible AI processes.

For marketing teams evaluating their own AI investments, this survey suggests most organizations are still working through the same questions.


Featured Image: Blackday/Shutterstock

Five Things To Do That Will Increase Authoritativeness And Earn Links via @sejournal, @martinibuster

The following are five things that anyone can do to establish authoritativeness and trustworthiness that can be communicated quickly and contribute to earning more links. The trick is that you have to put some time into these tactics first, but the reward after you are done is links, lots of them.

The idea behind this tactic is to convince a web publisher to give you a free link, or to give you the opportunity to publish an article (with or without a customary byline and link).

To cut through the noise of all the other emails the web publisher receives, it is necessary to establish your authority and inspire trust. And you need to do it quickly. These are some touchstones I crafted, through trial and error, to achieve a higher success rate in link building campaigns.

I call this method Establishing Your Bona Fides. It works by creating trust with one or two sentences. Whether they go at the beginning, middle, or end of the outreach is up to you, but I’ve enjoyed a good response rate by placing them near the beginning.

Here are the shortcuts to establishing bona fides:

  1. Awards
  2. Media appearances and mentions
  3. List of authoritative organizations that have published your work
  4. List of peers that have published your work
  5. Authority of your website’s authors

As you can see, this isn’t really something you can fake your way through. But if you take the time to first establish your bona fides (what makes you legitimate and authoritative), you will see a higher percentage of positive responses. People will take your emails more seriously.

There is no need to be annoying and badger people over and over the way some marketing agencies do. The success rate improvement from this method will cut the need for such aggressive pestering, something that I have never approved of.

The first two bona fides are self-explanatory, but I will explain them quickly.

Awards
It’s always useful to obtain recognition in whatever field you are in (if that’s a thing), even if it’s recognition for volunteering for an organization and doing charitable work. Other kinds of awards are the kind that local news outlets give out, like best whatever in whatever town your company is based in.

Media Appearances And Mentions
Appearing on television news or being cited in respected news outlets or online magazines are ways to establish signals of authoritativeness. Signals of authoritativeness aren’t just ranking signals; they are also the kinds of things that humans respond to.

Organizations And Associations
The third bona fide relates to associations and organizations that your company is allied or partnered with, and any publications related to those organizations, both online and offline. Some organizations are always on the lookout for people to profile, or to publish articles by, in their association publications. This kind of publishing is a great way to establish authoritativeness and trustworthiness. It’s truly earning recognition for your expertise.

Publishing articles in offline publications is a bonanza. While you likely won’t get a link, you will be among the rare online organizations contributing guest articles to those publications. Most companies and marketing agencies aren’t doing this because there is no link associated with it. This will be your advantage because, as you’ll see, it will help to increase your link building success rate. When you publish an article in an authoritative space, even if it’s offline, it gives you the ability to rightfully say in your outreach email that you’ve been published in so-and-so magazine or newsletter. Associating your brand with the authoritative brand in this way instantly makes your brand authoritative to the person you’re communicating with. This is especially powerful if the person you’re communicating with is also a member of whatever association or organization you have published an article with.

The reason this approach works is that it enables you to establish yourself as authoritative with a single sentence. With only a few words in your outreach email, you can quickly profile your site as a legit organization, not a spammer, and ultimately worthy of getting a link. In my experience this has worked exceedingly well for consistently earning instant trust from whoever you’re reaching out to.

You can get to number four (list of peers that have published your work) without doing number three (list of organizations that have published your work). But you’ll have greater success if you put a good amount of number three projects behind you. Even if you don’t use all the projects in your initial outreach email, you may have to deploy them in follow-up emails to doubting recipients who need more convincing. And you can add all of these to your About Us page.

Authority Of Website Authors
Point number five (authority of your website’s authors) is more or less self-explanatory. It helps if the person authoring your articles is someone who the outreach recipient can identify with, can think of as “one of us” when you list their credentials. For example, I once did an outreach in the educational space citing the writing talents of a math teacher who was also an education technology blogger. This person’s credentials and authority opened doors for my link building outreach and helped my client receive links from some truly prestigious education related websites.

Obviously, the success of this approach requires doing some work ahead of time to get appearances in blogs, podcasts, and video interviews, and to publish in associations’ and organizations’ online and offline publications. Even taking a photo with someone who is well known and authoritative and putting it on your About Us page can be helpful. People who are considering giving you a link will go to your website’s About Us page to verify who your company is and whether it’s as above board and authoritative as you say.

Using the above pre-campaign tactics will improve your trustworthiness and authoritativeness and have a positive impact on link building success rates.

Featured Image by Shutterstock/Krakenimages.com

Shopify Shares More Details On Universal Commerce Protocol (UCP) via @sejournal, @martinibuster

Harley Finkelstein, the president of Shopify, was recently interviewed about the company’s open source Universal Commerce Protocol (UCP), which enables agentic AI shopping. He explains how UCP, co-developed with Google, enables brands to be discovered by customers through personalized recommendations, as opposed to advertising and classic search paradigms that are less personalized.

Finkelstein said that the Universal Commerce Protocol (UCP) is designed to enable AI agents to surface products in a manner that merchants can control, show consumers personalized recommendations based on their preferences, and deliver a shopping experience that’s as good as the one on any ecommerce storefront.

Shopify is also opening agentic commerce access to brands that are not Shopify customers through its Agentic plan, which he briefly mentions. The plan is designed for enterprise brands and merchants who do not use Shopify, letting them upload their product data to Shopify’s infrastructure so it can be discovered and purchased directly by AI agents.

This positions Shopify as infrastructure for agentic commerce, not just a hosted commerce platform. This makes it easier for brands to gain immediate access to agentic shopping channels without having to migrate platforms.

Finkelstein also points out that agentic commerce only works if consumers can access all brands, not just those on Shopify.

Shopify’s Finkelstein said that UCP will enable merchants to more effectively control how their products are shown. He also discussed their strategy of bringing agentic shopping to all brands, regardless of whether they are on Shopify or not.

He explained:

“We created this protocol called Universal Commerce Protocol which effectively is this universal language is open sourced so that all merchants can speak directly to every single one of the agents.

And the best way to explain it is up until now, it was really just about like a single transaction.

So I can buy something on ChatGPT or Gemini or Microsoft. there’s no concept of loyalty or subscription or bundling or, you know, if it’s furniture, for example, please don’t ship it to me on Thursday. I’m not home Thursday. Send it Friday.

So this idea of creating this universal protocol that we co-developed with Google means that now merchants can actually tell these agents exactly how to show their products on these agentic tools. And it should be as good as it is on the online store. So that was a really, really big one.

The second thing we announced also with Google is that now we’re actually expanding. You can sell everywhere commerce is happening from an agentic perspective.

So we’re going beyond the agentic storefronts of just ChatGPT, which is what we said, you know, in Q3. Now it’s also, we’re going to be working with Gemini, with AI mode in Google Search, and also with copilot.

And maybe the last one is that we’re actually bringing agentic commerce to every brand, whether or not they’re on Shopify.

So if you’re not on Shopify, but you want to have your product syndicated and indexed, you can do so with our agentic plan.”
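Finkelstein doesn’t describe UCP’s wire format in the interview, but the capabilities he lists (merchandising control, subscriptions, bundling, delivery preferences) hint at the kind of structured product data a merchant would expose to agents. The following Python sketch is purely hypothetical, not the actual UCP schema; every field name is an invented illustration of those capabilities.

    # Hypothetical illustration only -- not the real UCP schema.
    # Field names are invented to mirror the capabilities Finkelstein
    # describes: merchandising control, subscriptions, bundles, and
    # delivery preferences.
    product_listing = {
        "merchant": "example-outfitters.com",
        "product_id": "boot-classic-01",
        "title": "Classic Leather Boot",
        "media": ["https://example-outfitters.com/boot.jpg"],
        "offers": {
            "one_time": {"price": 129.00, "currency": "USD"},
            # Pre-UCP agentic checkout had "no concept of subscription."
            "subscription": {"interval": "monthly", "price": 19.00},
        },
        # Cross-sell bundling, as in the Gymshark example.
        "bundles": ["boot-care-kit-01"],
        # Delivery preferences, e.g., "don't ship it to me on Thursday."
        "fulfillment": {"accepts_delivery_day_preferences": True},
    }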

Access To Many Brands Is Key

Finkelstein stressed that the key to the success of agentic AI is to be able to show the widest possible selection of brands. He said it’s a big opportunity.

He explained:

“I think if Agentic is going to do what a lot of us think it’s going to do from a commerce perspective, you have to give consumers all the brands.

We obviously want them all on Shopify, but there’s some brands that want to participate now, but it may take some time for them to migrate over.

So this idea of opening up to anyone, we think is a big opportunity.”

Who Will Be The Early Adopters?

Finkelstein was asked about who the early adopters will be. His answer was cautious, seemingly acknowledging that it’s likely not going to immediately be a big crush of people turning to AI to buy things.

He answered:

“I think it’ll likely be something that like most people use some of the time and some people use most of the time. I don’t think it’s going to cross the threshold of most most, the way e-commerce does now. It’s just going to take time. It’s going to take some time.”

AI Chat Reduces Friction

Finkelstein said that Universal Commerce Protocol (UCP) enables better shopping experiences, reducing the “friction” that AI shopping may have produced. He believes that once people start having good experiences shopping with an agent, they will start to get into the habit of using it for other kinds of shopping and begin relying on it.

Finkelstein explained:

“Once you have a good experience, I think the actual friction reduces. You’ll keep having it over and over again.

But the thing that we felt was missing, and this is the reason why I think this UCP protocol is so important, is it was very difficult to do merchandising inside of these applications.

And this protocol allows you to do a lot more… Well, up until UCP happened, you couldn’t actually do subscriptions. Now you can.

Or this idea of bundling, you know, for Gymshark, it’s a huge part of their business is if you buy these, you’ll also buy these as well. You can do that as well.

So I think all of these things are sort of in line with creating a much more delightful experience in the chat.”

Merit-Based Shopping Versus SEO?

Finkelstein brought up the topic of merit-based shopping, where products are recommended to a user because they are what the user is looking for. He used the phrase “merit-based shopping” as a contrast to today’s online advertising ecosystems, which prioritize products that pay to be shown as recommendations. The main point is that shopping recommendations are made based on personalization.

Finkelstein explained:

“And I think ultimately what it leads to is like, this will be merit-based shopping, which will be different than I think some of the traditional retailers who were kind of leaning on their balance sheets to spend money on ads. You can’t really game the system in that that way.

You actually have to be, from a context perspective, the right product for the right consumer.”

What Happens To Creative Assets And SEO

One of the podcast hosts asked about what happens to creative assets like photos, saying that he noticed that shopping AI uses images. He asked how that was going to evolve. Finkelstein’s answer touched on SEO in the context of how agentic AI shopping is about showing products based on user preferences, a tighter form of relevance than in the advertising and classic search ecosystems.

Finkelstein explained:

“I think …the idea of SEO won’t exist in Agentic because again, it’s merit-based and it’s mostly based on the context history you’ve had.

But I do think though, you’re going to have… these brands are going to have people at their companies who are thinking a lot about like consistent updates to UCP, consistent updates to the catalog.

So they may pull something off the catalog and say, we don’t want to sell it anymore this way. So I think there’s going to be, I don’t know if they’re going to be actual jobs, but there’s going to be people inside of the company, potentially in the merchandising department, who say, actually, the way that we want to sell all this, the way we want to describe this to these agents is a particular way.

And then because of UCP and because of Shopify catalog, it gets easily disseminated across every single one of these agentic applications. So the experience just gets better and better.

I think you have to be a little bit of a techno optimist… as I am, to believe that even if the experience is not incredible right now, it’s likely just going to get better at this ridiculous pace.”

Cutting Out Incentivized Recommendations

When asked what’s most exciting about agentic AI, he returned to the concept of merit-based shopping, where LLMs have the ability to personalize responses by learning user preferences and can therefore recommend a product that fits within a person’s requirements. He contrasted that with what happens in the real world, where a salesperson’s recommendations are influenced by commissions.

So what he is excited about is the idea of the playing field being leveled. He mentioned the possibility of lesser-known brands, like True Classic Tees, being surfaced in AI shopping because that kind of brand is a match for a specific consumer.

He responded:

“Most of the excitement is actually around this idea of like, is there a potential for this to level the playing field? Meaning, you know, if I’ve done a bunch of research historically on an agentic application …about the stuff that I love, the brands that I love. …It probably should not show me a generic pair of boots.

So the excitement actually is around like, is this going to introduce more brands that otherwise are unknown to more people or, you know, True Classic Tee, for example, which, you know, if you’re looking for a black t-shirt, I suspect on a search engine, you’re not going to see True Classic Tee come up that much, but it’s an incredible product and ultimately it can be found on these agentic tools in a way that it probably couldn’t historically.”

Agentic AI Will Accelerate Online Shopping

The other thing that Finkelstein is excited about is that he believes Agentic AI shopping will accelerate the amount of shopping that is done online. He compared using Agentic AI to the COVID moment, where people changed their work and shopping behavior in a major way that became permanent.

He then circled back to the idea that Agentic AI is less biased:

“I think it’s actually a better version of that because it’s an unbiased discussion, an unbiased conversation.”

Watch the video podcast interview, beginning a few minutes after the three-hour mark.

Featured Image by Shutterstock/Julien Tromeur

More Sites Blocking LLM Crawling – Could That Backfire On GEO? via @sejournal, @martinibuster

Hostinger released an analysis showing that businesses are blocking the AI crawlers used to train large language models while allowing AI assistants to continue reading and summarizing websites. The company examined 66.7 billion bot interactions across 5 million websites and found that AI assistant crawlers used by tools such as ChatGPT now reach more sites even as companies restrict other forms of AI access.

Hostinger Analysis

Hostinger is a web host and also a no-code, AI agent-driven platform for building online businesses. The company said it analyzed anonymized website logs to measure how verified crawlers access sites at scale, allowing it to compare changes in how search engines and AI systems retrieve online content.

The analysis they published shows that AI assistant crawlers expanded their reach across websites during a five-month period. Data was collected during three six-day windows in June, August, and November 2025.

OpenAI’s SearchBot increased coverage from 52 percent to 68 percent of sites, while Applebot (which indexes content for powering Apple’s search features) doubled from 17 percent to 34 percent. During the same period, traditional search crawlers essentially remained constant. The data indicates that AI assistants are adding a new layer to how information reaches users rather than replacing search engines outright.

At the same time, the data shows that companies sharply reduced access for AI training crawlers. OpenAI’s GPTBot dropped from access on 84 percent of websites in August to 12 percent by November. Meta’s ExternalAgent dropped from 60 percent to 41 percent website coverage. These crawlers collect data over time to improve AI models and update their Parametric Knowledge, but many businesses are blocking them, either to limit data use or out of fear of copyright infringement.

Parametric Knowledge

Parametric Knowledge, also known as Parametric Memory, is the information that is “hard-coded” into the model during training. It is called “parametric” because the knowledge is stored in the model’s parameters (the weights). Parametric Knowledge is long-term memory about entities, for example, people, things, and companies.

When a person asks an LLM a question, the LLM may recognize an entity like a business and then retrieve the associated vectors (facts) that it learned during training. So, when a business blocks a training bot from its website, it’s keeping the LLM from knowing anything about it, which might not be the best thing for an organization that’s concerned about AI visibility.

Allowing an AI training bot to crawl a company website enables that company to exercise some control over what the LLM knows about it, including what it does, its branding, and whatever is on its About Us page, and enables the LLM to know about the products or services offered. An informational site may benefit from being cited for answers.

Businesses Are Opting Out Of Parametric Knowledge

Hostinger’s analysis shows that businesses are “aggressively” blocking AI training crawlers. While Hostinger’s research doesn’t mention this, the effect of blocking AI training bots is that businesses are essentially opting out of LLMs’ parametric knowledge. The LLM is prevented from learning directly from first-party content during training, which removes the site’s ability to tell its own story and forces the LLM to rely on third-party data or knowledge graphs.

Hostinger’s research shows:

“Based on tracking 66.7 billion bot interactions across 5 million websites, Hostinger uncovered a significant paradox:

Companies are aggressively blocking AI training bots, the systems that scrape content to build AI models. OpenAI’s GPTBot dropped from 84% to 12% of websites in three months.

However, AI assistant crawlers, the technology that ChatGPT, Apple, etc. use to answer customer questions, are expanding rapidly. OpenAI’s SearchBot grew from 52% to 68% of sites; Applebot doubled to 34%.”

A recent post on Reddit shows how blocking LLM access to content has become normalized and is understood as a way to protect intellectual property (IP).

The post starts with an initial question asking how to block AIs:

“I want to make sure my site is continued to be indexed in Google Search, but do not want Gemini, ChatGPT, or others to scrape and use my content.

What’s the best way to do this?”

Screenshot Of A Reddit Conversation

Later on in that thread, someone asked whether they were blocking LLMs to protect their intellectual property. The person who started the discussion responded affirmatively:

“We publish unique content that doesn’t really exist elsewhere. LLMs often learn about things in this tiny niche from us. So we need Google traffic but not LLMs.”

That may be a valid reason. A site that publishes unique instructional information about a software product, information that does not exist elsewhere, may want to block an LLM from ingesting its content; otherwise the LLM will be able to answer questions while removing the need to visit the site.

But for other sites with less unique content, like a product review and comparison site or an ecommerce site, it might not be the best strategy to block LLMs from adding information about those sites into their parametric memory.
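The usual mechanism for this kind of selective blocking is robots.txt. As a rough illustration (crawler user-agent tokens change over time, so verify them against each vendor’s current documentation before relying on this), a policy that blocks training crawlers while allowing search and AI-assistant crawlers might look like this:

    # Block model-training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: meta-externalagent
    Disallow: /

    # Allow search and AI-assistant crawlers
    User-agent: OAI-SearchBot
    Allow: /

    User-agent: Googlebot
    Allow: /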

Brand Messaging Is Lost To LLMs

As AI assistants answer questions directly, users may receive information without needing to visit a website. This can reduce direct traffic and limit the reach of a business’s pricing details, product context, and brand messaging. It’s possible that the customer journey ends inside the AI interface, and businesses that block LLMs from acquiring knowledge about their companies and offerings are essentially relying on the search crawler and search index to fill that gap (and maybe that works?).

The increasing use of AI assistants affects marketing and extends into revenue forecasting. When AI systems summarize offers and recommendations, companies that block LLMs have less control over how pricing and value appear. Advertising efforts lose visibility earlier in the decision process, and ecommerce attribution becomes harder when purchases follow AI-generated answers rather than direct site visits.

According to Hostinger, some organizations are becoming more selective about which content is available to AI, especially AI assistants.

Tomas Rasymas, Head of AI at Hostinger, commented:

“With AI assistants increasingly answering questions directly, the web is shifting from a click-driven model to an agent-mediated one. The real risk for businesses isn’t AI access itself, but losing control over how pricing, positioning, and value are presented when decisions are made.”

Takeaway

Blocking LLMs from using website data for training shouldn’t necessarily be the default position, even though many people feel real anger and annoyance at the idea of an LLM training on their content. It may be more useful to take a considered approach that weighs the benefits against the disadvantages, and to consider whether those disadvantages are real or merely perceived.

Featured Image by Shutterstock/Lightspring

YouTube CEO Announces AI Creation Tools, In-App Shopping For 2026 via @sejournal, @MattGSouthern

YouTube CEO Neal Mohan announced the company’s priorities in his annual letter, previewing new AI creation tools, expanded shopping features, and format changes to Shorts.

AI Creation Tools

YouTube is adding three AI creation features this year. Creators will be able to make Shorts using their own likeness, produce games from text prompts through the experimental Playables program, and experiment with music creation tools.

More than 1 million channels used YouTube’s AI creation tools daily in December, according to the letter. The company also reported 20 million users learned about content through its Ask tool in December, and 6 million daily viewers watched at least 10 minutes of autodubbed content.

Mohan sees these tools as creative aids rather than replacements.

“Throughout this evolution, AI will remain a tool for expression, not a replacement,” he wrote.

YouTube also addressed concerns about AI-generated content quality, saying it’s building on spam and clickbait detection systems to reduce what Mohan called “AI slop.”

Shopping Expands With In-App Checkout

YouTube is pushing further into commerce with in-app checkout, letting viewers purchase products without leaving the platform.

More than 500,000 creators are already in YouTube Shopping. Mohan cited creator Vineet Malhotra, who drove “millions of dollars in YouTube Shopping GMV in 2025.”

I covered YouTube’s commerce push back in September when the company announced AI-powered product tagging and automatic timestamps for shopping videos. In-app checkout is the next step, aiming to reduce the friction of sending viewers to external sites.

Brand partnership tools are expanding too. Shorts creators will be able to add links to brand sites for sponsored content, and a new feature lets creators swap out branded segments after publishing to turn back catalogs into recurring revenue.

Shorts Gets Image Posts

Image posts are coming to the Shorts feed this year. Shorts now averages 200 billion daily views, according to Mohan.

The addition brings YouTube closer to Instagram’s format, mixing static images with video in the same feed.

Parental Controls

Parental control updates announced last week let parents set time limits on Shorts scrolling for kids and teens, including setting the timer to zero. YouTube calls this an “industry first.”

How 2025 Promises Played Out

I covered Mohan’s 2025 letter when he announced TV had surpassed mobile as the primary viewing device in the U.S. That letter made similar commitments. Some shipped, others are still pending.

Auto dubbing, which he promised to expand to all YouTube Partner Program creators, rolled out. The 2026 letter says 6 million daily viewers now watch at least 10 minutes of autodubbed content. AI tools for video ideas, titles, and thumbnails launched through the Inspiration Tab last year.

YouTube TV’s multiview improvements are still coming. The 2025 letter promised enhancements; the 2026 letter says “fully customizable multiview” arrives soon. The specialized YouTube TV plans Mohan announced this year are new.

The likeness-based Shorts creation, text-to-game features, in-app checkout, and image posts in Shorts are all new to the 2026 roadmap.

Why This Matters

YouTube keeps building tools that hold users and transactions inside its ecosystem. AI creation features give creators more production options. In-app checkout gives YouTube more control over the commerce layer.

The $100 billion YouTube says it paid creators over the past four years shows the scale of its creator economy. These updates aim to keep that system growing.

Looking Ahead

Most features don’t have specific launch dates. Mohan used “this year” and “soon” throughout the letter.

Parental control updates are rolling out now. Creators in YouTube Shopping should watch for checkout integration, and those using AI tools can expect expanded options throughout 2026.

A Little Clarity On SEO, GEO, And AEO via @sejournal, @martinibuster

The debate about AEO/GEO centers on whether it’s a subset of SEO, a standalone discipline, or just standard SEO. Deciding where to plant a flag is difficult because every argument makes a solid case. There’s no doubt that change is underway, and it may be time to find where all the competing ideas intersect and work from there.

The Case Against AEO/GEO

Many SEOs argue that AEO/GEO doesn’t differentiate itself enough to justify being anything other than a subset of SEO, sharing computers in the same office.

Harpreet Singh Chatha (X profile) of Harps Digital recently tweeted about AEO / GEO myths to leave behind in 2025.

Some of what he listed:

  • “LLMs.txt
  • Paying a GEO expert to do “chunk optimization.” Chunking content is just making your content readable.
  • Thinking AEO / GEO have nothing in common with SEO. Ask your favourite GEO expert for 25 things that are unique to AI search and don’t overlap with SEO. They will block you.
  • Saying SEO is dead. “

The legendary Greg Boser (LinkedIn profile), one of the original SEOs, practicing since 1996, tweeted this:

“At the end of the day, the core foundation of what we do always has been and always will be about understanding how humans use technology to gain knowledge.

We don’t need to come up with a bunch of new acronyms to continue to do what we do. All that needs to happen is we all agree to change the “E” in SEO from “Engine” to “Experience”.

Then everyone can stop wasting time writing all the ridiculous SEO/GEO/AEO posts, and get back to work.”

Inability To Articulate AEO/GEO

What contributes to the perception that AEO/GEO is not a real thing is that many proponents of AEO/GEO fail to differentiate it from standard SEO. We’ve all seen it: someone tweets their new tactic and the SEO peanut gallery chimes in with, nah, that’s SEO.

Back in October, Microsoft published a blog post about optimizing content for AI in which they asserted:

“While there’s no secret strategy for being selected by AI systems, success starts with content that is fresh, authoritative, structured, and semantically clear.”

The post goes on to affirm the importance of SEO fundamentals such as “Crawlability, metadata, internal linking, and backlinks” but then states that these are just starting points. Microsoft points out that AI search provides answers, not ranked lists of pages. That’s correct, and it changes a lot.

Microsoft says that now it’s about which pieces of content are being ranked:

“In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”

That kind of echoes what Jesse Dwyer of Perplexity AI recently said about AI Search and SEO:

“As for the index technology, the biggest difference in AI search right now comes down to whole-document vs. “sub-document” processing.

…The AI-first approach is known as “sub-document processing.” Instead of indexing whole pages, the engine indexes specific, granular snippets (not to be confused with what SEO’s know as “featured snippets”).”

Microsoft recently published an explainer called “From discovery to influence: A guide to AEO and GEO” that is tellingly focused mostly on shopping. That focus is notable because there’s a growing awareness that ecommerce stands to gain a lot from AI search.

No such luck for informational sites, because it’s also gradually becoming understood that agentic AI is poised to strip informational sites of all branding and value-add, treating them as mere sources of data.

Common SEO Practices That Pass As GEO

Some of what gets championed as GEO and AEO is actually a set of longstanding SEO practices:

  • Crafting content in the form of answers
    Good SEOs have been doing this since Featured Snippets came out in 2014.
  • Chunking content
    Crafting content in tight paragraphs looks good on mobile devices, and it’s something good SEOs and thoughtful content creators have been doing for well over a decade.
  • Structured Content
    Headings and other elements that strongly disambiguate the content are also SEO.
  • Structured Data
    Shut your mouth. This is SEO.

The Customer Is Always Right

Some in the GEO Is Real camp tend to regard themselves as evolving with the times, but they also acknowledge they’re just offering what clients are demanding. SEO practitioners are in a hard spot: what are you going to do? Plant your flag on traditional SEO and turn your back on what potential clients are begging for?

Googlers Insist It’s Still SEO

There are Googlers such as Robby Stein (VP of Product), Danny Sullivan, and John Mueller who say that SEO is 100% still relevant because, under the hood, AI is just firing off Google searches for top-ranked sites to backfill into synthesized answers and links (Read: Google Downplays GEO – But Let’s Talk About Garbage AI SERPs). OpenAI was recently hiring a content strategist who can lean into SEO (not GEO), which some say demonstrates that even OpenAI is focused on traditional SEO.

Optimization Is No Longer Just Google

Manick Bhan (LinkedIn profile), founder of the Search Atlas SEO suite, offered an interesting take on why we may be transitioning to a divided SEO and GEO path.

Manick shared:

“SEO has always meant ‘search engine optimization,’ but in practice it has historically meant ‘Google optimization.’ Google defined the interface, the ranking paradigm, the incentives, and the entire mental model the industry used.

The challenge with calling GEO a ‘sub-discipline’ of SEO is that the LLM ecosystem is not one ecosystem, and Google’s AI Mode is becoming a generative surface itself.”

Manick asserts that there is no one “GEO” because each of the AI search and answer engines uses different methodologies. He observed that the underlying tactics remain the same, but “the interface, the retrieval model, and the answer surface” are all radically changed from anything that’s come before.

Manick believes that GEO is not SEO, offering the following insights:

“My position is clear: GEO is not just SEO with a fresh coat of paint, and reducing it to that misses the fundamental shift in how modern answer engines actually retrieve, rank, and assemble information.

Yes, the tactics still live in the same universe of on-page and off-page signals. Those fundamentals haven’t changed. But the machines we’re optimizing for have.

Today’s answer engines:

  • Retrieve differently,
  • Fuse and weight sources differently,
  • Handle recency differently,
  • Assign trust and authority differently,
  • Fan out queries differently,
  • And incorporate user behavior into their RAG corpora differently.

Even seemingly small mechanics — like logit calibration and temperature — produce practically different retrieval outputs, which is why identical prompts across engines show measurable semantic drift and citation divergence.

This is why we’re seeing quantifiable, repeatable differences in:

  • Retrieved sources,
  • Answer structures,
  • Citation patterns,
  • Semantic frames,
  • And ranking behavior across LLMs, AI Mode surfaces, and classical Google results.

In this landscape, humility and experimentation matter more than dogma. Treating all of this as ‘just SEO’ ignores how different these systems already are, and how quickly they’re evolving.”

It’s Clear We Are In Transition

Maybe one of the reasons for the anti-GEO backlash is that there is a loud contingent of agencies and individuals who have very little experience with SEO, some fresh out of college with zero experience. And it’s not their lack of experience that gets some SEOs into ranting mode. It’s the things they purport are GEO/AEO that are clearly just SEO.

Yet, as Manick of Search Atlas pointed out, AI search and chat surfaces are wildly different from classic search, and it’s closing one’s eyes to the obvious to deny that things are different and in transition.

Featured Image by Shutterstock/Natsmith1

Wix Introduces Harmony AI Website Builder via @sejournal, @martinibuster

Wix announced the launch of Wix Harmony, a new AI-powered website builder that combines natural language site creation with Wix’s existing visual editor. The company introduced the product in New York and said it will begin rolling out in English to users in the coming weeks. Wix positions Harmony as a tool designed to produce fully functional, production-ready websites rather than quick demos, addressing the common tradeoff between fast site creation and deployment-ready results.

Wix has steadily expanded its use of artificial intelligence across its product line, aiming to streamline the process of maintaining an online presence while keeping users in control of how their sites look and function. Wix Harmony represents the company’s most direct effort to integrate AI into the core website-building workflow rather than treating it as a separate feature.

Aria Is Wix Harmony’s Interface

Wix Harmony’s interface is Aria, an AI agent that responds to natural language instructions and applies changes directly within the Wix editor. Users can ask Aria to perform tasks ranging from visual changes such as adjusting colors or layouts, to adding commerce features or redesigning entire pages. Because Aria operates within Wix’s existing architecture, Wix says changes made in one area of a site will not disrupt other sections or introduce unintended behavior.

Video Of Wix Harmony

Switch Between Manual And AI Workflows

An interesting feature of Wix Harmony is that it enables users to move back and forth between AI-assisted creation and manual editing without rebuilding elements from scratch. A user can generate a page or section through a prompt, then manually fine-tune spacing, layout, and content using drag-and-drop controls. Wix describes this as a way to speed up site creation while keeping design decisions in human control, rather than locking users into AI-generated outputs. It’s a thoughtful implementation of AI that is flexible about how users may want to work with it.

Delivers Sites That Are Ready To Deploy

Wix is positioning Harmony as a solution that is capable of delivering websites that are ready to deploy to a live environment. By running Harmony sites on the same infrastructure as all Wix websites, the company says it can support live traffic, ongoing updates, and business operations without requiring users to migrate to a different platform.

Websites created with Wix Harmony include access to Wix’s existing business features, including online commerce, scheduling, transactions, and payments. Wix also says these sites include built-in accessibility monitoring, search optimization tools, performance support, and privacy-focused infrastructure designed to meet regulatory requirements such as GDPR. These capabilities are intended to let users launch and operate websites without adding third-party services to cover basic operational needs.

Wix co-founder and CEO Avishai Abrahami said:

“Our focus is on combining the best new technologies with modern design, and this is the power of Wix. With Wix Harmony, now anyone can create a beautiful website, design easily with prompts and natural language without sacrificing scalability, security, reliability and performance. This is the benchmark of what a website builder should be.”

Harmony is part of Wix’s broader integration of AI, providing tools that businesses can use to their benefit. Read more about Wix Harmony.

Featured Image by Wix

How Recommender Systems Like Google Discover May Work via @sejournal, @martinibuster

Google Discover is largely a mystery to publishers and the search marketing community, even though Google has published official guidance about what it is and what it feels publishers should know about it. It’s so mysterious that it’s generally not even recognized as a recommender system, yet that is what it is. This is a review of a classic research paper that shows how to scale a recommender system. Although it’s about YouTube, it’s not hard to imagine how this kind of system could be adapted to Google Discover.

Recommender Systems

Google Discover belongs to the class of systems known as recommender systems. A classic recommender system I remember is MovieLens, from way back in 1997. It was a university research project that allowed users to rate movies and used those ratings to recommend movies to watch. The way it worked was, roughly: people who tend to like these kinds of movies tend to also like these other kinds of movies. But these kinds of algorithms have limitations that make them fall short of the scale necessary to personalize recommendations for YouTube or Google Discover.

Two-Tower Recommender System Model

The modern style of recommender systems are sometimes referred to as the Two-Tower architecture or the Two-Tower model. The Two-Tower model came about as a solution for YouTube, even though the original research paper (Deep Neural Networks for YouTube Recommendations) does not use this term.

It may seem counterintuitive to look to YouTube to understand how the Google Discover algorithm works, but the fact is that the system Google developed for YouTube became the foundation for how to scale a recommender system for an environment where massive amounts of content are generated every hour of the day, 24 hours a day.

It’s called the Two-Tower architecture because there are two representations that are matched against each other, like two towers.

In this model, which handles the initial “retrieval” of content from the database, a neural network processes user information to produce a user embedding, while content items are represented by their own embeddings. These two representations are matched using similarity scoring rather than being combined inside a single network.

I’ll repeat that the research paper does not refer to the architecture as a Two-Tower architecture; that description of this kind of approach was coined later. So, while the research paper doesn’t use the word tower, I’m going to continue using it because it makes it easier to visualize what’s going on in this kind of recommender system.

User Tower
The User Tower processes things like a user’s watch history, search tokens, location, and basic demographics. It uses this data to create a vector representation that maps the user’s specific interests in a mathematical space.

Item Tower
The Item Tower represents content using learned embedding vectors. In the original YouTube implementation, these were trained alongside the user model and stored for fast retrieval. This allows the system to compare a user’s “coordinates” against millions of video “coordinates” instantly, without having to run a complex analysis on every single video each time you refresh your feed.
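To make the matching step concrete, here is a minimal sketch in Python using NumPy. It is an illustration under simplifying assumptions, not Google’s implementation: the function names and random embeddings are stand-ins, the real towers are deep networks with learned weights, and production retrieval uses approximate nearest-neighbor search rather than the full scan shown here.

    import numpy as np

    EMBED_DIM = 32  # illustrative; production systems use larger dimensions
    rng = np.random.default_rng(0)

    # Item tower output: one learned embedding per video
    # (random stand-ins here).
    item_embeddings = rng.normal(size=(100_000, EMBED_DIM))

    def user_tower(watch_history, search_history):
        """Stand-in user tower: average the embeddings of watched videos
        and search tokens into one user vector. A real tower feeds these
        averages, plus demographics, through several dense layers."""
        return np.vstack([watch_history, search_history]).mean(axis=0)

    def retrieve_candidates(user_vector, k=10):
        """Score every item by dot product against the user vector and
        return the top-k. This is the candidate generation stage; a
        separate ranking model then orders the survivors."""
        scores = item_embeddings @ user_vector
        return np.argsort(-scores)[:k]

    # A user represented by a few watched/searched item embeddings.
    watched = item_embeddings[[3, 14, 159]]
    searched = item_embeddings[[2653]]
    print(retrieve_candidates(user_tower(watched, searched)))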

The Fresh Content Problem

Google’s research paper offers an interesting take on freshness. The problem of freshness is described as a tradeoff between exploitation and exploration. The YouTube recommendation system has to balance between showing users content that is already known to be popular (exploitation) versus exposing them to new and unproven content (exploration). What motivates Google to show new but unproven content, at least for the context of YouTube, is that users show a strong preference for new and fresh content.

The research paper explains why fresh content is important:

“Many hours worth of videos are uploaded each second to YouTube. Recommending this recently uploaded (“fresh”) content is extremely important for YouTube as a product. We consistently observe that users prefer fresh content, though not at the expense of relevance.”

This tendency to show fresh content seems to hold true for Google Discover, where Google tends to show fresh content on topics that are trending for a user personally. Have you ever noticed how Google Discover tends to favor fresh content? The insights that the researchers had about user preferences probably carry over to the Google Discover recommendation system. The takeaway here is that producing content on a regular basis could be helpful for getting web pages surfaced in Google Discover.

An interesting insight in this research paper, and I don’t know if it’s still true but it’s still interesting, is that the researchers state that machine learning algorithms show an implicit bias toward older existing content because they are trained on historical data.

They explain:

“Machine learning systems often exhibit an implicit bias towards the past because they are trained to predict future behavior from historical examples.”

The neural network is trained on past videos, and it learns that things from one or two days ago were popular. But this creates a bias toward things that happened in the past. The researchers solved the freshness issue by feeding the model the age of each training example as a feature during training. Then, when the system is recommending videos to a user (serving), this time-based feature is set to zero days ago (or slightly negative). This signals to the model that it is making a prediction at the very end of the training window, essentially forcing it to predict what is popular right now rather than what was popular on average in the past.
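A small sketch of that trick, assuming (my simplification) a model that receives an “example age” feature alongside its other inputs:

    # Sketch of the "example age" feature described in the paper.
    # During training, each example carries its age relative to the end
    # of the training window; at serving, the feature is forced to zero
    # (or slightly negative) so the model predicts for "right now."

    def example_age_days(training_end_day, logged_day):
        """Age of a training example, in days."""
        return training_end_day - logged_day

    # Training: an interaction logged 5 days before the window closed.
    train_age = example_age_days(training_end_day=365, logged_day=360)  # 5

    # Serving: override the feature instead of using a historical value.
    serve_age = 0.0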

Accuracy Of Click Data

Google’s foundational research paper also provides insights about implicit user feedback signals, which is a reference to click data. The researchers say that this kind of data rarely provides accurate user satisfaction information.

The researchers write:

“Noise: Historical user behavior on YouTube is inherently difficult to predict due to sparsity and a variety of unobservable external factors. We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals. Furthermore, metadata associated with content is poorly structured without a well defined ontology. Our algorithms need to be robust to these particular characteristics of our training data.”

The researchers conclude the paper by stating that this approach to recommender systems helped increase user watch time and proved to be more effective than other systems.

They write:

“We have described our deep neural network architecture for recommending YouTube videos, split into two distinct problems: candidate generation and ranking. Our deep collaborative filtering model is able to effectively assimilate many signals and model their interaction with layers of depth, outperforming previous matrix factorization approaches used at YouTube.

We demonstrated that using the age of the training example as an input feature removes an inherent bias towards the past and allows the model to represent the time-dependent behavior of popular videos. This improved offline holdout precision results and increased the watch time dramatically on recently uploaded videos in A/B testing.

Ranking is a more classical machine learning problem yet our deep learning approach outperformed previous linear and tree-based methods for watch time prediction. Recommendation systems in particular benefit from specialized features describing past user behavior with items. Deep neural networks require special representations of categorical and continuous features which we transform with embeddings and quantile normalization, respectively.”

Although this research paper is ten years old, it still offers insights into how recommender systems work and takes a little of the mystery out of systems like Google Discover. Read the original research paper: Deep Neural Networks for YouTube Recommendations.

Featured Image by Shutterstock/Andrii Iemelianenko

NotificationX WordPress WooCommerce Plugin Vulnerabilities Impact 40k Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the NotificationX FOMO plugin for WordPress and WooCommerce sites, affecting more than 40,000 websites. The vulnerability, which is rated at a 7.2 (High) severity level, enables unauthenticated attackers to inject malicious JavaScript that can execute in a visitor’s browser when specific conditions are met.

NotificationX – FOMO Plugin

The NotificationX FOMO plugin is used by WordPress and WooCommerce site owners to display notification bars, popups, and real-time alerts such as recent sales, announcements, and promotional messages. The plugin is commonly deployed on marketing and e-commerce sites to create urgency and draw visitor attention through notifications.

Exposure Level

Exploiting the vulnerability does not require authentication or any user role. Attackers do not need a WordPress account or any prior access to the site to trigger the vulnerability. Exploitation relies on getting a victim to visit a specially crafted page that interacts with the vulnerable site.

Root Cause Of The Vulnerability

The issue is a DOM-based Cross-Site Scripting (XSS) vulnerability tied to how the plugin processes preview data. In the context of a WordPress plugin, a DOM-based XSS vulnerability happens when the plugin contains client-side JavaScript that processes data from an untrusted source (the “source”) in an unsafe way, usually by writing the data into the web page (the “sink”).

In the NotificationX plugin, the vulnerability exists because the plugin’s scripts accept input through the nx-preview POST parameter but do not properly sanitize the input or escape the output before it is rendered in the browser. The security checks that are supposed to ensure user-supplied data is treated as plain text are missing. This allows an attacker to create a malicious web page that automatically submits a form to the victim’s site, forcing the victim’s browser to execute harmful scripts injected via that parameter.

The end result is that an attacker-controlled input can be interpreted as executable JavaScript instead of harmless preview content.
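The plugin’s actual code is PHP and JavaScript; the Python sketch below is only meant to illustrate the general source-to-sink pattern and the output-escaping fix, with hypothetical function names:

    import html

    def render_preview_unsafe(nx_preview: str) -> str:
        """Vulnerable pattern (illustrative): the untrusted nx-preview
        value is written into markup verbatim, so a payload such as
        <script>...</script> executes in the visitor's browser."""
        return f"<div class='nx-preview'>{nx_preview}</div>"

    def render_preview_safe(nx_preview: str) -> str:
        """Fixed pattern: escape the value so the browser treats it as
        plain text rather than executable markup."""
        return f"<div class='nx-preview'>{html.escape(nx_preview)}</div>"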

What Attackers Can Do

If exploited, the vulnerability enables attackers to execute arbitrary JavaScript in the context of the affected site. The injected script executes when a user visits a malicious page that automatically submits a form to the vulnerable NotificationX site.

This can allow attackers to:

  • Hijack logged-in administrator or editor sessions
  • Perform actions on behalf of authenticated users
  • Redirect visitors to malicious or fraudulent websites
  • Access sensitive information available through the browser

The official Wordfence advisory explains:

“The NotificationX – FOMO, Live Sales Notification, WooCommerce Sales Popup, GDPR, Social Proof, Announcement Banner & Floating Notification Bar plugin for WordPress is vulnerable to DOM-Based Cross-Site Scripting via the ‘nx-preview’ POST parameter in all versions up to, and including, 3.2.0. This is due to insufficient input sanitization and output escaping when processing preview data. This makes it possible for unauthenticated attackers to inject arbitrary web scripts in pages that execute when a user visits a malicious page that auto-submits a form to the vulnerable site.”

Affected Versions

All versions of NotificationX up to and including 3.2.0 are vulnerable. A patch is available and the vulnerability was addressed in NotificationX version 3.2.1, which includes security enhancements related to this issue.

Recommended Action

Site owners using NotificationX are advised to update the plugin immediately to version 3.2.1 or later. Sites that cannot update should disable the plugin until the patched version can be applied. Leaving vulnerable versions active exposes visitors and logged-in users to client-side attacks that can be difficult to detect and mitigate.

One More Vulnerability

This plugin has another vulnerability, rated at a 4.3 (Medium) threat level. The Wordfence advisory for this one describes it like this:

“The NotificationX plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the ‘regenerate’ and ‘reset’ REST API endpoints in all versions up to, and including, 3.1.11. This makes it possible for authenticated attackers, with Contributor-level access and above, to reset analytics for any NotificationX campaign, regardless of ownership.”

The NotificationX WordPress plugin includes two REST API endpoints called “regenerate” and “reset.” These endpoints are used to manage campaign analytics, such as resetting or rebuilding the stats that show how a notification is performing.

The problem is that these endpoints do not properly check user permissions before modifying data. The plugin only checks whether a user is logged in with Contributor-level access or higher, not whether they are actually allowed to perform the action. Even though users with the Contributor role normally have very limited permissions, this flaw lets them perform actions they should not be able to do.
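In rough terms (Python here purely for illustration; the plugin is PHP, and the names below are hypothetical), the difference between the flawed check and a proper one looks like this:

    # Hypothetical sketch -- not the plugin's actual code.

    def can_reset_analytics_flawed(user: dict) -> bool:
        """The flawed check effectively asked only: is this user logged
        in with Contributor-level access or above?"""
        return user["is_logged_in"]  # ownership and capability never checked

    def can_reset_analytics(user: dict, campaign: dict) -> bool:
        """A proper check ties the action to a real capability and to
        ownership of the campaign, not mere authentication."""
        is_admin = "manage_options" in user["capabilities"]
        is_owner = user["id"] == campaign["owner_id"]
        return is_admin or is_owner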

In this case, the damage an attacker can do is limited; for example, an attacker can’t take over a site. An attacker can:

  • Reset analytics for any NotificationX campaign
  • Do this even if they did not create or own the campaign
  • Repeatedly wipe or regenerate campaign statistics

Updating to version 3.2.1 or higher (the same release that fixes the other vulnerability) patches this vulnerability.

Featured Image by Shutterstock/Art Furnace

WordPress Advanced Custom Fields Extended Plugin Vulnerability via @sejournal, @martinibuster

An advisory was published about a vulnerability in the popular Advanced Custom Fields: Extended WordPress plugin that is rated 9.8 (Critical), affecting up to 100,000 installations.

The flaw enables unauthenticated attackers to register themselves with administrator privileges and gain full control of a website and all settings.

Advanced Custom Fields: Extended Plugin

The Advanced Custom Fields: Extended plugin is an add-on to the popular Advanced Custom Fields Pro plugin. It is used by WordPress site owners and developers to extend how custom fields work, manage front-end forms, create options pages, define custom post types and taxonomies, and customize the WordPress admin experience.

The plugin is widely used, with more than 100,000 active installations, and is commonly deployed on sites that rely on front-end forms and advanced content management workflows.

Who Can Exploit This Vulnerability

This vulnerability can be exploited by unauthenticated attackers, which means there is no barrier of first having to attain a higher permission level before launching an attack. If the affected version of the plugin is present with a specific configuration in place, anyone on the internet can attempt to exploit the flaw. That kind of exposure significantly increases risk because it removes the need for compromised credentials or insider access.

Privilege Escalation Exposure

The vulnerability is a privilege escalation flaw caused by missing role restrictions during user registration.

Specifically, the plugin’s insert_user function does not limit which user roles can be assigned when a new user account is created. Under normal circumstances, WordPress should strictly control which roles users can select or be assigned during registration.

Because this check is missing, an attacker can submit a registration request that explicitly assigns the administrator role to the new account.

This issue only occurs when the site’s form configuration maps a custom field directly to the WordPress role field. When that condition is met, the plugin accepts the supplied role value without verifying that it is safe or permitted.

The flaw appears to be due to insufficient server-side validation of the form field “Choices.” The plugin seems to have relied on the HTML form to restrict which roles a user could select. For example, a developer could create a user sign-up form with only the “subscriber” role as an option. But there was no verification on the backend to check whether the role a user was signing up with matched the roles that the form was supposed to be limited to.

What was probably happening is that an unauthenticated attacker could inspect the form’s HTML, see the field responsible for the user role, and intercept the HTTP request so that, for example, instead of sending role=subscriber, the attacker could change the value to role=administrator. The code responsible for the insert_user action took this input and passed it directly to WordPress user creation functions. It did not check if “administrator” was actually one of the allowed options in the field’s “Choices” list.
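A minimal sketch of the missing server-side check, in Python for illustration (the plugin itself is PHP, and these names are hypothetical): the backend must validate the posted role against the field’s configured “Choices” instead of trusting the request.

    # Hypothetical sketch -- not the plugin's actual code.
    ALLOWED_ROLE_CHOICES = {"subscriber"}  # roles the form's "Choices" setting permits

    def insert_user(form_data: dict) -> dict:
        """Create the user only if the posted role is one of the form's
        configured choices; otherwise reject the request."""
        role = form_data.get("role", "subscriber")
        if role not in ALLOWED_ROLE_CHOICES:
            raise ValueError(f"role {role!r} is not an allowed choice")
        return {"username": form_data["username"], "role": role}

    # An attacker posting role=administrator now gets an error instead
    # of an administrator account.
    try:
        insert_user({"username": "attacker", "role": "administrator"})
    except ValueError as err:
        print(err)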

The Changelog for the plugin lists the following entry as one of the patches to the plugin:

“Enforced front-end fields validation against their respective “Choices” settings.”

That entry in the changelog means the plugin now actively checks front-end form submissions to ensure the submitted value matches the field’s defined “Choices”, rather than trusting whatever value is posted.

There is also this entry in the changelog:

“Module: Forms – Added security measure for forms allowing user role selection”

This entry means the plugin added server-side protections to prevent abuse when a front-end form is allowed to set or select a WordPress user role.

Overall, the patches added stronger validation controls for front-end forms and made them more configurable.

What Attackers Can Gain

If successfully exploited, the attacker gains administrator-level access to the WordPress site.

That level of access allows attackers to:

  • Install or modify plugins and themes
  • Inject malicious code
  • Create backdoor administrator accounts
  • Steal or manipulate site data
  • Redirect visitors or distribute malware

Gaining administrator access is a full site takeover.

The Wordfence advisory describes the issue as follows:

“The Advanced Custom Fields: Extended plugin for WordPress is vulnerable to Privilege Escalation in all versions up to, and including, 0.9.2.1. This is due to the ‘insert_user’ function not restricting the roles with which a user can register. This makes it possible for unauthenticated attackers to supply the ‘administrator’ role during registration and gain administrator access to the site.”

As Wordfence describes, the plugin trusts user-supplied input for account roles when it should not. That trust allows attackers to bypass WordPress’s normal protections and grant themselves the highest possible permission level.

Wordfence also reports having blocked active exploitation attempts targeting this vulnerability, indicating that attackers are already probing sites for exposure.

Conditions Required For Exploitation

The vulnerability is not automatically exploitable on every site running the plugin.

Exploitation requires that:

  • The site uses a front-end form provided by the plugin
  • The form maps a custom field directly to the WordPress user role

Patch Status and What Site Owners Should Do

The vulnerability affects all versions up to and including 0.9.2.1. The issue is addressed in version 0.9.2.2, which introduces additional validation and security checks around front-end forms and user role handling.

The entries in the official changelog for ACF Extended Basic 0.9.2.2:

  • Module: Forms – Enforced front-end fields validation against their respective “Choices” settings
  • Module: Forms – Added security measure for forms allowing user role selection
  • Module: Forms – Added acfe/form/validate_value hook to validate fields individually on front
  • Module: Forms – Added acfe/form/pre_validate_value hook to bypass enforced validation

Site owners using this plugin should update immediately to the latest patched version. If updating is not possible, the plugin should be disabled until the fix can be applied.

Given the severity of the flaw and the lack of authentication required to exploit it, delaying action leaves affected sites exposed to a complete takeover.

Featured Image by Shutterstock/Art Furnace