Everyone wants AI sovereignty. No one can truly have it.

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: Covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.  

But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.  

If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to a vision that emphasizes the concept of orchestration, balancing national autonomy with strategic partnership. 

Why infrastructure-first strategies hit walls 

A November survey by Accenture found that 62% of European organizations are now seeking sovereign AI solutions, driven primarily by geopolitical anxiety rather than technical necessity. That figure rises to 80% in Denmark and 72% in Germany. The European Union has appointed its first Commissioner for Tech Sovereignty. 

This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays. 

And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.  

What works: An orchestrated sovereignty

What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape. 

The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them. 

Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete. 

Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size. 

South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.  

Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks, on foreign lithography equipment (such as the extreme-ultraviolet systems needed to manufacture advanced chips), and on foreign GPU architectures shows the limits of techno-nationalism. 

The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone. 

Three ways to align ambition with reality 

1.  Measure added value, not inputs.  

Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.  

Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.  

2. Cultivate a strong AI innovation ecosystem. 

Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.   

3. Build global partnerships.  

Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.  

What’s at stake 

Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete. 

The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter. 

The winners will be those who define sovereignty not as separation but as participation plus leadership: choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it’s achievable, and it will separate the leaders from the followers over the next decade. 

The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing. 

Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.

Rethinking AI’s future in an augmented workplace

There are many paths AI evolution could take. On one end of the spectrum, AI is dismissed as a marginal fad, another bubble fueled by notoriety and misallocated capital. On the other end, it’s cast as a dystopian force, destined to eliminate jobs on a large scale and destabilize economies. Markets oscillate between skepticism and the fear of missing out, while the technology itself evolves quickly and investment dollars flow at a rate not seen in decades. 

All the while, many of today’s financial and economic thought leaders hold to the consensus that the financial landscape will stay the same as it has been for the last several years. Two years ago, Joseph Davis, global chief economist at Vanguard, and his team felt the same but wanted to develop their perspective on AI technology with a deeper foundation built on history and data. Based on a proprietary data set covering the last 130 years, Davis and his team developed a new framework, the Vanguard Megatrends Model. Their research suggests a more nuanced path than either extreme of the hype: AI has the potential to be a general-purpose technology that lifts productivity, reshapes industries, and augments human work rather than displacing it. In short, AI will be neither marginal nor dystopian. 

“Our findings suggest that the continuation of the status quo, the basic expectation of most economists, is actually the least likely outcome,” Davis says. “We project that AI will have an even greater effect on productivity than the personal computer did. And we project that a scenario where AI transforms the economy is far more likely than one where AI disappoints and fiscal deficits dominate. The latter would likely lead to slower economic growth, higher inflation, and increased interest rates.”

Implications for business leaders and workers

Davis does not sugar-coat it, however. Although AI promises economic growth and productivity, it will be disruptive, especially for business leaders and workers in knowledge sectors. “AI is likely to be the most disruptive technology to alter the nature of our work since the personal computer,” says Davis. “Those of a certain age might recall how the broad availability of PCs remade many jobs. It didn’t eliminate jobs as much as it allowed people to focus on higher value activities.” 

The team’s framework allowed them to examine AI automation risks across more than 800 occupations. The research indicated that while AI-driven automation puts upwards of 20% of occupations at risk of job loss, the majority of jobs, likely four out of five, will see a mix of augmentation and automation. Workers’ time will increasingly shift to higher-value and uniquely human tasks. 

This introduces the idea that AI could serve as a copilot to various roles, performing repetitive tasks and generally assisting with responsibilities. Davis argues that traditional economic models often underestimate the potential of AI because they fail to examine the deeper structural effects of technological change. “Most approaches for thinking about future growth, such as GDP, don’t adequately account for AI,” he explains. “They fail to link short-term variations in productivity with the three dimensions of technological change: automation, augmentation, and the emergence of new industries.” Automation enhances worker productivity by handling routine tasks; augmentation allows technology to act as a copilot, amplifying human skills; and the emergence of new industries opens new sources of growth.

Implications for the economy 

Ironically, Davis’s research suggests that a reason for the relatively low productivity growth in recent years may be a lack of automation. Despite a decade of rapid innovation in digital and automation technologies, productivity growth has lagged since the 2008 financial crisis, hitting 50-year lows. This appears to support the view that AI’s impact will be marginal. But Davis believes that automation has been adopted in the wrong places. “What surprised me most was how little automation there has been in services like finance, health care, and education,” he says. “Outside of manufacturing, automation has been very limited. That’s been holding back growth for at least two decades.” The services sector accounts for more than 60% of US GDP and 80% of the workforce and has experienced some of the lowest productivity growth. It is here, Davis argues, that AI will make the biggest difference.

One of the biggest challenges facing the economy is demographics, as the Baby Boomer generation retires, immigration slows, and birth rates decline. These demographic headwinds reinforce the need for technological acceleration. “There are concerns about AI being dystopian and causing massive job loss, but we’ll soon have too few workers, not too many,” Davis says. “Economies like the US, Japan, China, and those across Europe will need to step up automation as their populations age.” 

For example, consider nursing, a profession in which empathy and human presence are irreplaceable. AI has already shown the potential to augment rather than automate in this field, streamlining data entry in electronic health records and helping nurses reclaim time for patient care. Davis estimates that these tools could increase nursing productivity by as much as 20% by 2035, a crucial gain as health-care systems adapt to aging populations and rising demand. “In our most likely scenario, AI will offset demographic pressures. Within five to seven years, AI’s ability to automate portions of work will be roughly equivalent to adding 16 million to 17 million workers to the US labor force,” Davis says. “That’s essentially the same as if everyone turning 65 over the next five years decided not to retire.” He projects that more than 60% of occupations, including nurses, family physicians, high school teachers, pharmacists, human resource managers, and insurance sales agents, will benefit from AI as an augmentation tool. 

Implications for all investors 

As AI technology spreads, the strongest performers in the stock market won’t be its producers, but its users. “That makes sense, because general-purpose technologies enhance productivity, efficiency, and profitability across entire sectors,” says Davis. This broadening adoption of AI creates flexibility for investors, which means diversifying beyond technology stocks may be appropriate, a view reflected in Vanguard’s Economic and Market Outlook for 2026. “As that happens, the benefits move beyond places like Silicon Valley or Boston and into industries that apply the technology in transformative ways.” And history shows that early adopters of new technologies reap the greatest productivity rewards. “We’re clearly in the experimentation phase of learning by doing,” says Davis. “Those companies that encourage and reward experimentation will capture the most value from AI.” 

Looking globally, Davis sees the United States and China as significantly ahead in the AI race. “It’s a virtual dead heat,” he says. “That tells me the competition between the two will remain intense.” But other economies, especially those with low automation rates and large service sectors, like Japan, Europe, and Canada, could also see significant benefits. “If AI is truly going to be transformative, three sectors stand out: health care, education, and finance,” says Davis. “For AI to live up to its potential, it must fundamentally reshape these industries, which face high costs and rising demand for better, faster, more personalized services.”

Notably, Davis says Vanguard is more bullish on AI’s potential to transform the economy than it was just a year ago, especially because that transformation requires application beyond Silicon Valley. “When I speak to business leaders, I remind them that this transformation hasn’t happened yet,” says Davis. “It’s their investment and innovation that will determine whether it does.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

New Ecommerce Tools: January 21, 2026

Every week we publish a list of new products and services for ecommerce merchants. This installment includes updates on post-purchase intelligence, inventory optimization, payments, agentic commerce, product recommendations, and chatbots and shopping assistants.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

PinchAI raises $5 million to help retailers defend against return fraud. PinchAI, a post-purchase intelligence platform that helps retailers reward loyal customers and prevent return fraud, has announced a $5 million seed round, co-led by Dynamo Ventures and Infinity Ventures with participation from Defined Capital and PayPal Ventures. With the funding, PinchAI will accelerate product development across its abuse prediction models, warehouse intelligence systems, and adaptive return engine.

Home page of PinchAI

ConverSight and Katana partner to forecast demand and optimize inventory. ConverSight, a provider of unified decision intelligence, has partnered with Katana Cloud Inventory, a cloud-native platform for product businesses. The collaboration will deliver AI foresight to manufacturing, ecommerce, wholesale, and retail companies, enabling them to plan inventory, anticipate demand shifts, and reduce carrying costs. As part of the launch, the companies are introducing QuickStart AI for Katana, a tool that claims to transform Katana’s data into proactive forecasting and inventory optimization.

Paysafe and Pay.com launch partnership. Payments platform Paysafe has announced a collaboration with Pay.com, a payments orchestrator. Pay.com’s technology enhances the checkout experience by leveraging advanced orchestration with a centralized risk engine to maximize acceptance and authorization rates. The platform now includes Paysafe’s credit and debit card processing. Pay.com has also integrated Paysafe’s Skrill and Neteller digital wallets, as well as the PaysafeCard eCash tool, among other alternative payment methods.

Helcim launches Payment Extension for merchants. Helcim, a payments company, has launched Payment Extension, enabling merchants to “bring their own payments” to their favorite browser-based software. By installing a browser extension, businesses can connect their existing workflow directly to Helcim. This integration allows transaction data to flow from the software to Helcim’s payment interface and back again, ensuring invoices are marked paid and books are balanced without manual data entry.

Channelwill rebrands as Cwill, releases unified commerce platform. Channelwill, a Shopify app developer focused on post-purchase and retention tools, has announced its rebrand to Cwill. The company is integrating its five products into a single platform: Parcelwill post-purchase solution, Trustwill retention solution, Sendwill email marketing tool, Chatwill AI-powered customer-service assistant, and SEOwill for content and organic search.

Home page of Cwill

AWS launches European Sovereign Cloud. Amazon Web Services has launched the European Sovereign Cloud, located within the EU and providing an independently operated cloud with technical controls, sovereign assurances, and legal protections for sensitive data. AWS will extend the cloud’s footprint from Germany across the EU, starting with Belgium, the Netherlands, and Portugal.

Ballerine launches agentic commerce governance platform. Ballerine, an AI-native risk and compliance platform for financial institutions, fintechs, and marketplaces, has launched “Trusted Agentic Commerce Governance Platform,” a real-time operating tool to help payment service providers prepare and govern merchants for agent-driven commerce. The platform evaluates eligibility, enforces policies, and continuously monitors inventory and behavioral signals as catalogs evolve, according to Ballerine.

Lyxity launches API for content on WordPress, Wix, Drupal, Strapi. Lyxity, a provider of intelligent content technology for marketers, has launched its API, extending its AI to WordPress, Wix, Drupal, and Strapi-powered websites. Via its API, Lyxity says it enables (i) direct connection to a website’s content management system, (ii) retrieval of performance and query data from Google Search Console, (iii) production of publication-ready content, (iv) review and enhancement of existing legacy content, (v) expansion and organization of content for improved clarity, and (vi) manual review and editing before publishing.

Evertune tracks brand visibility in product recommendations. Evertune, a provider of generative engine optimization, has announced shopping intelligence to track how AI models recommend products and translate raw data into actionable optimization strategies. Evertune states its tool can (i) track shopping trigger rates to benchmark category purchase intent and identify opportunity size, (ii) monitor shopping visibility to measure competitive performance and prove optimization impact over time, (iii) identify which partnerships drive discovery and where distribution gaps exist, and (iv) see the price ranges and averages AI displays for your products versus competitors.

Home page of Evertune

Tredence unveils agentic commerce accelerators. Tredence, a data science and AI tools provider, has launched five agentic commerce accelerators. “Cosmos Customer Intelligence Agent” predicts customer actions and models shopper preferences for real-time personalization. “Personalized Content Generation Agent” generates on-brand, multimodal personalized content (text, product descriptions, images, videos). “Contextual Search Agent” implements question-driven and contextual search for customers. “Shopper Concierge Agent” provides a genAI shopping assistant for relevant insights and product recommendations. “Customer Engagement Agent” orchestrates cross-channel messaging.

Knowband launches AI chatbot for PrestaShop and OpenCart stores. Knowband has released its AI chatbot for ecommerce stores on PrestaShop and OpenCart platforms. The chatbot supports real-time conversations and provides info such as product details, order status, shipping updates, and order tracking. Merchants can add the chatbot directly to their store’s frontend to help shoppers obtain instant assistance while browsing. Knowband says the chatbot works with multiple AI models, including ChatGPT, Gemini, DeepSeek, and Claude.

Insight and Stripe partner for enterprise commerce. Insight Enterprises has announced an expanded partnership with Stripe. The collaboration combines Stripe’s programmable financial services platform and Insight’s solutions integrator capabilities to accelerate how organizations launch digital revenue models and scale globally. The result, per Insight and Stripe, is tools for modern checkout and payments integrations, enabling purchases directly in genAI platforms.

Crescendo launches multimodal shopping assistant on Shopify. Crescendo, an AI-powered customer service platform, has launched its multimodal AI shopping assistant on the Shopify App Store. Crescendo says its AI assistants unify service and shopping into a single experience, answering service questions and guiding shoppers toward purchase.

Home page of Crescendo

YouTube CEO Announces AI Creation Tools, In-App Shopping For 2026 via @sejournal, @MattGSouthern

YouTube CEO Neal Mohan announced the company’s priorities in his annual letter, previewing new AI creation tools, expanded shopping features, and format changes to Shorts.

AI Creation Tools

YouTube is adding three AI creation features this year. Creators will be able to make Shorts using their own likeness, produce games from text prompts through the experimental Playables program, and experiment with music creation tools.

More than 1 million channels used YouTube’s AI creation tools daily in December, according to the letter. The company also reported 20 million users learned about content through its Ask tool in December, and 6 million daily viewers watched at least 10 minutes of autodubbed content.

Mohan sees these tools as creative aids rather than replacements.

“Throughout this evolution, AI will remain a tool for expression, not a replacement,” he wrote.

YouTube also addressed concerns about AI-generated content quality, saying it’s building on spam and clickbait detection systems to reduce what Mohan called “AI slop.”

Shopping Expands With In-App Checkout

YouTube is pushing further into commerce with in-app checkout, letting viewers purchase products without leaving the site.

More than 500,000 creators are already in YouTube Shopping. Mohan cited creator Vineet Malhotra, who drove “millions of dollars in YouTube Shopping GMV in 2025.”

I covered YouTube’s commerce push back in September when the company announced AI-powered product tagging and automatic timestamps for shopping videos. In-app checkout is the next step, aiming to reduce the friction of sending viewers to external sites.

Brand partnership tools are expanding too. Shorts creators will be able to add links to brand sites for sponsored content, and a new feature lets creators swap out branded segments after publishing to turn back catalogs into recurring revenue.

Shorts Gets Image Posts

Image posts are coming to the Shorts feed this year. Shorts now averages 200 billion daily views, according to Mohan.

The addition brings YouTube closer to Instagram’s format, mixing static images with video in the same feed.

Parental Controls

Parental control updates announced last week let parents set time limits on Shorts scrolling for kids and teens, including setting the timer to zero. YouTube calls this an “industry first.”

How 2025 Promises Played Out

I covered Mohan’s 2025 letter when he announced TV had surpassed mobile as the primary viewing device in the U.S. That letter made similar commitments. Some shipped, others are still pending.

Auto dubbing, which he promised to expand to all YouTube Partner Program creators, rolled out. The 2026 letter says 6 million daily viewers now watch at least 10 minutes of autodubbed content. AI tools for video ideas, titles, and thumbnails launched through the Inspiration Tab last year.

YouTube TV’s multiview improvements are still coming. The 2025 letter promised enhancements; the 2026 letter says “fully customizable multiview” arrives soon. The specialized YouTube TV plans Mohan announced this year are new.

The likeness-based Shorts creation, text-to-game features, in-app checkout, and image posts in Shorts are all new to the 2026 roadmap.

Why This Matters

YouTube keeps building tools that hold users and transactions inside its ecosystem. AI creation features give creators more production options. In-app checkout gives YouTube more control over the commerce layer.

The $100 billion YouTube says it paid creators over the past four years shows the scale of its creator economy. These updates aim to keep that system growing.

Looking Ahead

Most features don’t have specific launch dates. Mohan used “this year” and “soon” throughout the letter.

Parental control updates are rolling out now. Creators in YouTube Shopping should watch for checkout integration, and those using AI tools can expect expanded options throughout 2026.

A Little Clarity On SEO, GEO, And AEO via @sejournal, @martinibuster

The debate about AEO/GEO centers on whether it’s a subset of SEO, a standalone discipline, or just standard SEO. Deciding where to plant a flag is difficult because every argument makes a solid case. There’s no doubt that change is underway, and it may be time to find where all the competing ideas intersect and work from there. 

The Case Against AEO/GEO

Many SEOs argue that AEO/GEO doesn’t differentiate itself enough to justify being anything other than a subset of SEO, one that shares computers in the same office.

Harpreet Singh Chatha (X profile) of Harps Digital recently tweeted about AEO / GEO myths to leave behind in 2025.

Some of what he listed:

  • “LLMs.txt.
  • Paying a GEO expert to do “chunk optimization.” Chunking content is just making your content readable.
  • Thinking AEO / GEO have nothing in common with SEO. Ask your favourite GEO expert for 25 things that are unique to AI search and don’t overlap with SEO. They will block you.
  • Saying SEO is dead.”

The legendary Greg Boser (LinkedIn profile), one of the original SEOs, active since 1996, tweeted this: 

“At the end of the day, the core foundation of what we do always has been and always will be about understanding how humans use technology to gain knowledge.

We don’t need to come up with a bunch of new acronyms to continue to do what we do. All that needs to happen is we all agree to change the “E” in SEO from “Engine” to “Experience”.

Then everyone can stop wasting time writing all the ridiculous SEO/GEO/AEO posts, and get back to work.”

Inability To Articulate AEO/GEO

What contributes to the perception that AEO/GEO is not a real thing is that many of its proponents fail to differentiate it from standard SEO. We’ve all seen it: someone tweets their new tactic, and the SEO peanut gallery chimes in with “nah, that’s SEO.”

Back in October, Microsoft published a blog post about optimizing content for AI in which it asserted:

“While there’s no secret strategy for being selected by AI systems, success starts with content that is fresh, authoritative, structured, and semantically clear.”

The post goes on to affirm the importance of SEO fundamentals such as “Crawlability, metadata, internal linking, and backlinks” but then states that these are just starting points. Microsoft points out that AI search provides answers, not ranked lists of pages. That’s correct, and it changes a lot.

Microsoft says that now it’s about which pieces of content are being ranked:

“In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”

That kind of echoes what Jesse Dwyer of Perplexity AI recently said about AI Search and SEO:

“As for the index technology, the biggest difference in AI search right now comes down to whole-document vs. “sub-document” processing.

…The AI-first approach is known as “sub-document processing.” Instead of indexing whole pages, the engine indexes specific, granular snippets (not to be confused with what SEOs know as “featured snippets”).”
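The whole-document versus sub-document distinction can be illustrated with a toy sketch. This is not any real engine’s pipeline: production systems use learned embeddings, while this sketch splits a page into sentence chunks, scores each chunk against a query with simple bag-of-words cosine similarity, and surfaces the best-matching passage rather than ranking the page as a whole.

```python
# Toy contrast between whole-document and sub-document retrieval:
# split a page into passage "chunks", score each chunk against a
# query, and return the best chunk instead of just the page.
import math
import re
from collections import Counter


def vectorize(text):
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def best_chunk(page, query, chunk_size=2):
    """Chunk a page into groups of `chunk_size` sentences and return
    (score, chunk) for the chunk most similar to the query."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", page) if s.strip()]
    chunks = [" ".join(sentences[i:i + chunk_size])
              for i in range(0, len(sentences), chunk_size)]
    q = vectorize(query)
    return max(((cosine(vectorize(c), q), c) for c in chunks), key=lambda x: x[0])


page = ("Our company was founded in 1998. We value our customers. "
        "Battery life on the X200 is about 14 hours under normal use. "
        "Charging takes 90 minutes with the bundled adapter.")
score, chunk = best_chunk(page, "X200 battery life hours")
print(chunk)  # the battery-life passage, not the whole page
```

The point of the sketch is that a query about battery life can pull out one passage from a page whose other content is irrelevant, which is what “earning a place in the final answer” at the chunk level implies for content structure.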

Microsoft recently published an explainer called “From discovery to influence: A guide to AEO and GEO” that is tellingly focused mostly on shopping, which is notable because there’s a growing awareness that ecommerce stands to gain a lot from AI search. 

Informational sites may have no such luck, because it’s also gradually becoming understood that agentic AI is poised to strip them of all branding and value-add, treating them as mere sources of data. 

Common SEO Practices That Pass As GEO

Some of what some champion as GEO and AEO are actually longstanding SEO practices:

  • Crafting content in the form of answers
    Good SEOs have been doing this since Featured Snippets came out in 2014.
  • Chunking content
    Crafting content in tight paragraphs looks good on mobile devices and is something good SEOs and thoughtful content creators have been doing for well over a decade.
  • Structured Content
    Headings and other elements that strongly disambiguate the content are also SEO.
  • Structured Data
    Shut your mouth. This is SEO.

The Customer Is Always Right

Some in the GEO Is Real camp tend to regard themselves as evolving with the times, but they also acknowledge they’re just offering what clients are demanding. SEO practitioners are in a hard spot: what are you going to do? Plant your flag on traditional SEO and turn your back on what potential clients are begging for? 

Googlers Insist It’s Still SEO

There are Googlers such as Robby Stein (VP of Product), Danny Sullivan, and John Mueller who say that SEO is 100% still relevant because under the hood AI is just firing off Google searches for top-ranked sites to backfill into synthesized answers and links (Read: Google Downplays GEO – But Let’s Talk About Garbage AI SERPs). OpenAI was recently hiring a content strategist able to lean into SEO (not GEO), which some say demonstrates that even OpenAI is focused on traditional SEO. 

Optimization Is No Longer Just Google

Manick Bhan (LinkedIn profile), founder of the Search Atlas SEO suite, offered an interesting take on why we may be transitioning to a divided SEO and GEO path.

Manick shared:

“SEO has always meant ‘search engine optimization,’ but in practice it has historically meant ‘Google optimization.’ Google defined the interface, the ranking paradigm, the incentives, and the entire mental model the industry used.

The challenge with calling GEO a ‘sub-discipline’ of SEO is that the LLM ecosystem is not one ecosystem, and Google’s AI Mode is becoming a generative surface itself.”

Manick asserts that there is no one “GEO” because each of the AI search and answer engines uses different methodologies. He observed that the underlying tactics remain the same but that “the interface, the retrieval model, and the answer surface” are all radically changed from anything that’s come before. 

Manick believes that GEO is not SEO, offering the following insights:

“My position is clear: GEO is not just SEO with a fresh coat of paint, and reducing it to that misses the fundamental shift in how modern answer engines actually retrieve, rank, and assemble information.

Yes, the tactics still live in the same universe of on-page and off-page signals. Those fundamentals haven’t changed. But the machines we’re optimizing for have.

Today’s answer engines:

  • Retrieve differently,
  • Fuse and weight sources differently,
  • Handle recency differently,
  • Assign trust and authority differently,
  • Fan out queries differently,
  • And incorporate user behavior into their RAG corpora differently.

Even seemingly small mechanics — like logit calibration and temperature — produce practically different retrieval outputs, which is why identical prompts across engines show measurable semantic drift and citation divergence.

This is why we’re seeing quantifiable, repeatable differences in:

  • Retrieved sources,
  • Answer structures,
  • Citation patterns,
  • Semantic frames,
  • And ranking behavior across LLMs, AI Mode surfaces, and classical Google results.

In this landscape, humility and experimentation matter more than dogma. Treating all of this as ‘just SEO’ ignores how different these systems already are, and how quickly they’re evolving.”
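To illustrate the temperature point from the quote above, here is a small sketch in plain Python with hypothetical logit values. It shows how rescaling logits before the softmax sharpens or flattens the distribution an engine draws from, which is one reason identical prompts can surface different sources across engines:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability
    distribution; temperature rescales the logits first."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # hypothetical scores for three candidate sources

sharp = softmax(logits, temperature=0.5) # low temperature: peaked, near-deterministic
flat = softmax(logits, temperature=2.0)  # high temperature: flatter, more exploratory

# The top candidate dominates at low temperature and loses share at high temperature.
assert sharp[0] > flat[0]
```

Two engines running the same retrieval with different temperature (or calibration) settings will therefore weight and cite the same candidate pool differently.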

It’s Clear We Are In Transition

Maybe one of the reasons for the anti-GEO backlash is that there is a loud contingent of agencies and individuals who have very little experience with SEO, some fresh out of college with none at all. And it’s not their lack of experience that gets some SEOs ranting. It’s that the things they purport are GEO/AEO are clearly just SEO.

Yet, as Manick of Search Atlas pointed out, AI search and chat surfaces are wildly different from classic search, and it’s closing one’s eyes to the obvious to deny that things are different and in transition.

Featured Image by Shutterstock/Natsmith1

The 5-Pillar Audit: Diagnosing Strategy Vs. Tactic Failures In A Google Ads Account

If your Google Ads campaigns are underperforming, your first instinct might be to dive into the platform. You may want to tweak bid strategies, clean up keywords, or adjust targeting.

That is the classic PPC audit.

Here is the challenge today: Tactical audits matter less than you think. In an age of AI automation, a campaign can be perfectly executed on a technical level and still deliver zero business value.

Data shows that cost-per-lead increased for 13 out of 23 industries in 2025. The bigger problem is often strategic misalignment, which lives outside the Google Ads interface.

A true paid search audit separates strategy failures from tactical errors. The difference can mean wasted budget or meaningful business growth.

Here is how I break down the strategic assessment through five key pillars, with real-world stories and practical guidance for PPC professionals.

1. Business Objective Alignment

Many performance challenges begin before a PPC account ever launches. When business goals are unclear, internally misaligned, or not translated into measurable paid media targets, even well-built campaigns struggle.

Common indicators of misalignment include:

  • KPIs set around platform metrics (CPC, Quality Score, Ad Rank) rather than commercial outcomes.
  • Conflicting definitions of success across marketing, sales, and leadership.
  • Goals that shift frequently without corresponding strategy changes.
  • Targets that do not reflect actual funnel realities or close-rate data.

For example, a brand may ask for lower CPAs when the business need is pipeline growth. Or, they may request more volume when budgets only support lower-funnel efficiency.

Without a clear, unified business objective, Google Ads will optimize toward whatever signals are available in the platform. It will do this even if those signals do not support company priorities.

Solid performance requires alignment first, optimization second.

Story: A client copied a successful account from another market. The original account’s primary call to action was “Visit In-Person.” The copied account focused on ecommerce conversions first and then visiting in person. The new account looked technically perfect, mirroring the structure, keywords, and setup of the original. Yet, the account failed because the new market was not willing to purchase before visiting in person.

Takeaway: Even a technically flawless campaign will fail if the strategy does not align with real-world buyer behavior. Google Ads will optimize toward whatever signals exist in the platform. If those signals do not match business objectives or audience intent, performance suffers. Strategic alignment comes first. Optimization comes second.

2. Offer and Pricing Viability

Paid search cannot compensate for an offer that is uncompetitive, unclear, or mismatched to audience expectations. This is one of the most common strategic failures hidden inside PPC performance issues.

Key considerations include:

  • Competitiveness of pricing against alternatives.
  • Clarity of the value proposition.
  • Strength of differentiation in a crowded market.
  • Relevance of the offer to the searcher’s intent.

When market fit or price positioning is weak, no amount of bid strategy refinement will improve conversion rates. This is especially visible in categories where competitors set consumer expectations for features, price, or delivery.

Before adjusting tactics, the offer itself must be evaluated with the same rigor as the campaign.

Story: A client was running direct-to-consumer campaigns for a product. It was priced higher on their website than on Amazon. Customers who clicked through expected the best deal but quickly discovered they could get it cheaper elsewhere. Even with a perfect campaign structure and messaging, conversions suffered.

Takeaway: Customers are savvy and will compare across channels. If the value proposition is unclear or the price is uncompetitive, paid search cannot overcome it. Strategic evaluation of offer clarity and pricing is essential before optimizing campaigns further.

3. Audience And Intent Fit

Traffic volume does not equal qualified demand. Performance issues often stem from a disconnect between keyword intent, audience readiness, and the conversion expectations placed on the campaign.

Common causes include:

  • Bidding on high-volume terms that attract broad or early-stage research users.
  • Expecting lower-funnel performance from upper-funnel queries or channels.
  • Targeting keywords with long research cycles while measuring short-term ROAS.
  • Misinterpreting category search behavior and funnel signals.

Google’s automation can reach the right people efficiently, but it cannot change their readiness to convert. Ensuring the campaign aligns with the correct stage of intent across keywords, audiences, and creative is essential for stable performance.

Story: A wedding venue client initially ran campaigns directing users to a “Book Now” action. While this seems like a clear conversion, most prospective clients wanted to schedule a tour first before committing. By adjusting the call to action to “Book a Tour,” the campaign better matched audience intent, and conversions increased substantially.

Takeaway: Understanding the true stage of your audience in the funnel is critical. Even precise targeting and strong creative cannot compensate for mismatched intent. Strategy must reflect how and when your audience is willing to act.

4. Landing Page Utility And Experience

A strong ad cannot overcome a weak landing experience. This pillar has become increasingly important as Google takes on more of the optimization work within the campaign. The landing page is one of the few levers advertisers retain full control over.

Areas that frequently limit performance include:

  • Slow page speed or friction in the conversion path.
  • Generic or outdated content that does not match the user’s expectations.
  • Messaging that does not reinforce the ad’s promise.
  • Lack of clear differentiation or compelling proof.
  • Poor mobile usability.

Today’s searchers recognize generic or AI-generated content quickly, and engagement drops accordingly. When traffic is well-matched and intent is strong, the landing experience must be equally strong to convert. For example, a slow page can be deadly: 53% of mobile visits are likely to be abandoned if pages take longer than three seconds to load.

If the landing page cannot support the PPC campaign’s goals, the issue is strategic, not tactical. The issue should be addressed before further optimization happens in the platform.

Story: In many ecommerce campaigns, I have seen traffic directed to a homepage instead of a category or product-specific page. Even when the campaign structure and keywords were perfect, the slightly misaligned landing page caused conversions to underperform. Aligning the ad group with the exact page, messaging, and discount offering significantly improved results.

Takeaway: When traffic is well-matched and intent is strong, the landing experience must also be strong to convert. Even technically flawless campaigns can fail if the page does not deliver the clarity, relevance, and proof the user expects. Strategic improvements to landing pages should come before further optimization in the platform.

5. Measurement Architecture

Even well-designed campaigns will underperform if measurement systems are incomplete or misaligned. With automation relying heavily on signals, inaccurate or low-quality conversion data can lead to poor optimization that compounds over time.

Frequent measurement challenges include:

  • Tracking micro-conversions that inflate performance.
  • Inconsistent goals between Google Ads and the CRM.
  • Missing or unreliable conversion values.
  • Delayed offline conversion uploads.
  • Broken tagging or incorrect attribution logic.

The consequence is not just inaccurate reporting. It is the machine learning system optimizing toward the wrong outcomes. Ensuring accurate, strategically aligned measurement is foundational to effective campaign operation. For instance, Google’s internal data shows that advertisers who feed the system with quality signals are 63% more likely to publish Search campaigns with “Good” or “Excellent” Ad Strength.

Story: A client had overlapping keywords targeting consumer intent. As a result, the majority of calls went to consumers rather than the intended business clients. Offline conversions were not uploaded, so Google Ads could not optimize for actual leads. Despite correctly structured campaigns and perfect keywords, performance suffered because the machine was optimizing toward the wrong signals.

Takeaway: Accurate, strategically aligned measurement is foundational. Without it, even technically flawless campaigns can fail. Providing the algorithm with high-quality conversion signals ensures optimization drives real business outcomes, not misleading metrics.

Turning Insights Into Action

Once the source of failure is identified, the path forward becomes clearer. It is crucial to determine if the issue is strategic or tactical.

Follow these steps before diving into campaign optimizations:

  • Validate the business objective. Align on definitions, measurement, and expected outcomes.
  • Assess the offer. Confirm the value proposition, pricing, and differentiation hold up in the current market.
  • Match audience and intent. Ensure keywords, targeting, and funnel goals reflect true buying behavior.
  • Strengthen the landing experience. Improve relevance, clarity, speed, and conversion pathways.
  • Fix measurement at the source. Provide the algorithm with accurate, high-value signals.

Only after these strategic components are addressed should account and campaign-level optimizations begin.

Final Thought

In today’s automated environment, many Google Ads issues masquerade as tactical problems when they originate elsewhere. The five-pillar audit brings clarity to where the breakdown is happening and what needs to change to improve account performance.

By separating strategy from execution, teams can make better decisions, allocate resources more effectively, and build campaigns that support true business impact rather than platform-level wins that look good only in reports.

Featured Image: Viktoriia_M/Shutterstock

Wix Introduces Harmony AI Website Builder via @sejournal, @martinibuster

Wix announced the launch of Wix Harmony, a new AI-powered website builder that combines natural language site creation with Wix’s existing visual editor. The company introduced the product in New York and said it will begin rolling out in English to users in the coming weeks. Wix positions Harmony as a tool designed to produce fully functional, production-ready websites rather than quick demos, addressing the common tradeoff between fast site creation and deployment readiness.

Wix has steadily expanded its use of artificial intelligence across its product line, aiming to streamline the process of maintaining an online presence while keeping users in control of how their sites look and function. Wix Harmony represents the company’s most direct effort to integrate AI into the core website-building workflow rather than treating it as a separate feature.

Aria Is Wix Harmony’s Interface

Wix Harmony’s interface is Aria, an AI agent that responds to natural language instructions and applies changes directly within the Wix editor. Users can ask Aria to perform tasks ranging from visual changes such as adjusting colors or layouts, to adding commerce features or redesigning entire pages. Because Aria operates within Wix’s existing architecture, Wix says changes made in one area of a site will not disrupt other sections or introduce unintended behavior.

Video Of Wix Harmony

Switch Between Manual And AI Workflows

An interesting feature of Wix Harmony is that it enables users to move back and forth between AI-assisted creation and manual editing without rebuilding elements from scratch. A user can generate a page or section through a prompt, then manually fine-tune spacing, layout, and content using drag-and-drop controls. Wix describes this as a way to speed up site creation while keeping design decisions in human control, rather than locking users into AI-generated outputs. This is a thoughtful implementation of AI that remains flexible to how users may want to work with it.

Delivers Sites That Are Ready To Deploy

Wix is positioning Harmony as a solution that is capable of delivering websites that are ready to deploy to a live environment. By running Harmony sites on the same infrastructure as all Wix websites, the company says it can support live traffic, ongoing updates, and business operations without requiring users to migrate to a different platform.

Websites created with Wix Harmony include access to Wix’s existing business features, including online commerce, scheduling, transactions, and payments. Wix also says these sites include built-in accessibility monitoring, search optimization tools, performance support, and privacy-focused infrastructure designed to meet regulatory requirements such as GDPR. These capabilities are intended to let users launch and operate websites without adding third-party services to cover basic operational needs.

Wix co-founder and CEO Avishai Abrahami said:

“Our focus is on combining the best new technologies with modern design, and this is the power of Wix. With Wix Harmony, now anyone can create a beautiful website, design easily with prompts and natural language without sacrificing scalability, security, reliability and performance. This is the benchmark of what a website builder should be.”

Harmony is part of Wix’s thoughtful integration of AI, providing tools that businesses can put to practical use. Read more about Wix Harmony.

Featured Image by Wix

Web Governance As A Growth Lever: Building A Center Of Excellence That Actually Works via @sejournal, @billhunt

In every digital transformation I’ve consulted on, from global banks to manufacturing giants, the failure point isn’t usually the strategy. It’s the governance.

Strategy defines where to go. Operations define how to get there. Governance is what keeps everyone moving in the same direction, at the same speed, without crashing into each other.

In my earlier Search Engine Journal articles, we built the foundation for this discussion:

This article closes the loop. Because until governance and accountability take hold, every strategy, no matter how visionary, remains a PowerPoint slide.

Governance As Guardrails For Growth

Governance has a branding problem. Too often, it’s mistaken for red tape – a set of rules designed to slow things down. In reality, good governance is what lets organizations move faster without flying apart. It’s a system of guardrails, not gates – a shared framework that protects creativity by keeping it aligned with purpose.

When done right, governance is the difference between freedom and anarchy. It ensures that every team (design, dev, content, and analytics) can innovate confidently within an agreed-upon structure of trust, compliance, and clarity.

Governance doesn’t limit autonomy; it enables responsible autonomy.

The most effective Centers of Excellence build their governance around three principles:

  1. Guardrails, not barriers – Standards prevent rework and confusion, not creativity.
  2. Enablement through clarity – When expectations are clear, teams spend less time negotiating and more time executing.
  3. Evolution, not enforcement – Governance must adapt with technology, markets, and now, AI systems.

This turns governance into a living framework – one that scales excellence, accelerates innovation, and protects enterprise value simultaneously.

The Cast Of Characters: Who Belongs In A Modern Center Of Excellence

A true Center of Excellence (COE) isn’t a department; it’s an alignment mechanism.

Its power lies in uniting diverse roles around shared definitions of value, performance, and accountability.

  • Business Leadership (CEO, CFO, CMO) – Primary focus: direction, metrics, incentives. Key question they answer: “Are our digital assets creating measurable enterprise value?”
  • Digital Operations (CTO, DevOps, Product) – Primary focus: infrastructure, scalability, uptime. Key question they answer: “Can we deploy and measure at scale without friction?”
  • Marketing & Experience (SEO, UX, Content, CX) – Primary focus: discoverability, usability, trust. Key question they answer: “Is our content findable, credible, and consistent across markets?”
  • Data & AI Enablement (Analytics, Schema, AI Strategy) – Primary focus: structuring and measuring the data layer. Key question they answer: “Can machines – and humans – understand our brand at every level?”

An effective COE sits at the crossroads of these groups. It translates corporate objectives into digital guardrails, workflows, and shared KPIs.

And it does so through clarity of ownership – who decides, who executes, and who is accountable for outcomes.

Without that alignment, teams drift into the ownership gap I outlined in “Who Owns Web Performance?,” each optimizing their own slice while the organization loses system-level performance.

Anatomy Of A Working Center Of Excellence

A COE that works isn’t a poster on the wall but an ecosystem built around five components:

  1. Vision & Mandate – A clearly articulated purpose with executive sponsorship. Governance without mandate becomes optional. Tie the COE to measurable outcomes – revenue efficiency, cost avoidance, and risk reduction.
  2. Standards & Playbooks – Codified frameworks for content hierarchy, tagging, schema, and AI readiness. Standards remove friction when they’re written for usability, not perfection.
  3. Measurement & Accountability – Shared dashboards connecting digital KPIs to business KPIs. The CEO shouldn’t ask, “How’s SEO?” but “What’s the digital contribution to EBITDA?”
  4. Enablement & Knowledge Sharing – Training, automation, and playbooks that make compliance the natural outcome of good work, not an afterthought.
  5. Feedback & Evolution – Regular audits and retrospectives to ensure standards evolve as the technology – and the company – does.

A COE that only publishes rules is a library.

A COE that enforces and evolves them is a growth engine.

Effective governance transforms from control to enablement when standards become self-reinforcing. Instead of asking, “Did we follow the rules?” teams ask, “Do the rules help us move faster and smarter?” That’s the culture shift a Center of Excellence exists to create.

Corporate Judo: Turning Structure Into Strength

In “Epiphany 2 — Leverage Corporate Judo,” I wrote that the secret to lasting change isn’t fighting the system – it’s using its momentum. You don’t overpower corporate structure; you redirect it.

“The art of corporate judo is learning to use the organization’s own weight to create forward motion.”

Web governance works the same way. Rather than viewing process and policy as obstacles, a skilled COE converts them into leverage – turning approvals, reporting lines, and compliance requirements into tools for acceleration. A well-designed COE doesn’t rebel against structure; it channels it toward growth.

In this sense, governance becomes corporate aikido by absorbing friction and transforming it into alignment.

Cross-Channel Alignment: The Prerequisite For Performance

Before you can optimize, you must align. The most advanced analytics stack or SEO roadmap will fail if the organization itself is out of sync.

A functioning COE creates connective tissue between:

  • Search & Content – shared definitions of topics, authority, and metrics.
  • UX & Engineering – balance between design freedom and structural consistency.
  • Marketing & Analytics – unified measurement across paid, earned, and owned.
  • Corporate & Regional Teams – global templates with local flexibility.

In multinational environments, this alignment prevents the “geo-targeting misalignment” I’ve written about – where the wrong market page ranks, or translation replaces true localization. The COE becomes the referee between global efficiency and local relevance.

Why This Matters Even More In The AI Era

AI has raised the stakes for governance.
In the old world, poor governance hurt rankings.
In the new world, it hurts eligibility.

Search-grounded AI systems like Google’s AI Overviews and Bing Copilot rely on structured, accessible, and authoritative data to decide what’s trustworthy enough to include. If your schema, content, or infrastructure is inconsistent, the machine can’t reconcile your brand – and when it can’t reconcile, it omits you.

If SEO was about visibility, AI is about eligibility – and eligibility depends on governance.

As I argued in “Stop Retrofitting. Start Commissioning: The New Role of SEO in the Age of AI,” the role of SEO, and by extension digital governance, has shifted from a reactive fix to a proactive design function. SEO is no longer the cleanup crew that patches gaps after launch. It must become the Commissioning Authority: the group that ensures what’s being built meets both user and machine standards before it ever goes live.

Governance, in this new context, isn’t back-office oversight. It’s front-office enablement.

It ensures that every digital asset – content, structure, schema, and technical architecture – is commissioned for machine interpretation, not just human readability.

Because in today’s AI-first ecosystem, the question isn’t simply, “Can users find us?” It’s “Can machines trust, understand, and use us?”

“The era of being brought in after launch is over.
Governance – and SEO – must move upstream to where strategy and systems are conceived.”

Good governance isn’t a final check; it’s a design ethos. It transforms your organization from retrofitting performance to commissioning excellence.

And that shift from reactivity to readiness is what separates the brands that survive AI disruption from the ones that silently vanish from the conversation.

Governance As Digital Operating Leverage

Governance may not sound glamorous, but it’s the lever that compounds returns across every other investment.

  • Revenue Growth – Faster launches, better discoverability, consistent brand experience.
  • Cost Efficiency – Reduced rework, redundant tools, and duplicated content.
  • Capital Efficiency – Shared systems and reusable frameworks across markets.
  • Risk & Resilience – Compliance, uptime, and data consistency.
  • Innovation & Optionality – Guardrails that enable safe experimentation with AI and automation.

In financial terms, governance converts digital activity into operating leverage by increasing output without proportionally growing cost. This means your overall Web Effectiveness is a shareholder issue, not a marketing one. Governance is how you turn that theory into muscle.

The Leadership Imperative

Ultimately, governance fails when it’s delegated. A COE can’t succeed without executive willpower and cross-functional buy-in.

The CEO owns shareholder value.
The CMO owns demand.
The CTO owns systems.
But the COE owns the connection between them.

If your website is the factory, your COE is the operations manual that keeps it producing value – efficiently, predictably, and at scale.

Web governance isn’t a brake pedal; it’s a steering system. It creates the clarity and confidence that allow innovation to scale safely. It’s how large organizations protect creativity without chaos — and how they turn complexity into compound value.

In the age of AI, alignment isn’t optional. Governance is growth.

Featured Image: ImageFlow/Shutterstock

How Recommender Systems Like Google Discover May Work via @sejournal, @martinibuster

Google Discover is largely a mystery to publishers and the search marketing community, even though Google has published official guidance about what it is and what they feel publishers should know about it. Nevertheless, it’s so mysterious that it’s generally not even thought of as a recommender system, yet that is what it is. This article reviews a classic research paper that shows how to scale a recommender system. Although the paper is about YouTube, it’s not hard to imagine how this kind of system could be adapted to Google Discover.

Recommender Systems

Google Discover belongs to the class of systems known as recommender systems. A classic recommender system I remember is MovieLens, from way back in 1997. It was a university science department project that allowed users to rate movies, then used those ratings to recommend movies to watch. It worked on the principle that people who like certain kinds of movies tend to also like certain other kinds of movies. But these kinds of algorithms have limitations that make them fall short of the scale necessary to personalize recommendations for YouTube or Google Discover.

Two-Tower Recommender System Model

The modern style of recommender system is sometimes referred to as the Two-Tower architecture or the Two-Tower model. The Two-Tower model came about as a solution for YouTube, even though the original research paper (Deep Neural Networks for YouTube Recommendations) does not use this term.

It may seem counterintuitive to look to YouTube to understand how the Google Discover algorithm works, but the fact is that the system Google developed for YouTube became the foundation for how to scale a recommender system for an environment where massive amounts of content are generated every hour of the day, 24 hours a day.

It’s called the Two-Tower architecture because there are two representations that are matched against each other, like two towers.

In this model, which handles the initial “retrieval” of content from the database, a neural network processes user information to produce a user embedding, while content items are represented by their own embeddings. These two representations are matched using similarity scoring rather than being combined inside a single network.

I’m going to repeat that the research paper does not refer to the architecture as a Two-Tower architecture; that description of this kind of approach was created later. So, while the research paper doesn’t use the word tower, I’m going to continue using it because it makes it easier to visualize what’s going on in this kind of recommender system.

User Tower
The User Tower processes things like a user’s watch history, search tokens, location, and basic demographics. It uses this data to create a vector representation that maps the user’s specific interests in a mathematical space.

Item Tower
The Item Tower represents content using learned embedding vectors. In the original YouTube implementation, these were trained alongside the user model and stored for fast retrieval. This allows the system to compare a user’s “coordinates” against millions of video “coordinates” instantly, without having to run a complex analysis on every single video each time you refresh your feed.
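To make the retrieval step concrete, here is a minimal sketch of two-tower candidate generation in Python with NumPy. The values and the averaging "user tower" are hypothetical toy stand-ins (a real user tower is a trained neural network over watch history, search tokens, and demographics), but the matching step is the same dot-product similarity described above:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8          # toy embedding size; production systems use far more dimensions
NUM_ITEMS = 1000       # stand-in for a catalog of videos or articles

# Item tower output: one precomputed embedding per item, stored for fast lookup.
item_embeddings = rng.normal(size=(NUM_ITEMS, EMBED_DIM)).astype(np.float32)

def user_tower(watch_history: np.ndarray) -> np.ndarray:
    """Toy user tower: average the embeddings of recently watched items
    to place the user's interests in the same mathematical space."""
    return item_embeddings[watch_history].mean(axis=0)

def retrieve(user_vec: np.ndarray, k: int = 5) -> np.ndarray:
    """Candidate generation: score every item with a dot product against
    the user vector and return the indices of the top-k matches."""
    scores = item_embeddings @ user_vec
    return np.argsort(scores)[::-1][:k]

history = np.array([3, 17, 42])        # item IDs the user recently engaged with
candidates = retrieve(user_tower(history))
print(candidates)                      # five item IDs, ranked by similarity
```

Because the item embeddings are precomputed, comparing a user’s “coordinates” against millions of item “coordinates” reduces to fast vector math rather than a fresh analysis of every item per request.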

The Fresh Content Problem

Google’s research paper offers an interesting take on freshness. The problem of freshness is described as a tradeoff between exploitation and exploration. The YouTube recommendation system has to balance between showing users content that is already known to be popular (exploitation) versus exposing them to new and unproven content (exploration). What motivates Google to show new but unproven content, at least for the context of YouTube, is that users show a strong preference for new and fresh content.

The research paper explains why fresh content is important:

“Many hours worth of videos are uploaded each second to YouTube. Recommending this recently uploaded (“fresh”) content is extremely important for YouTube as a product. We consistently observe that users prefer fresh content, though not at the expense of relevance.”

This tendency to show fresh content seems to hold true for Google Discover, where Google tends to surface fresh content on topics the user has recently shown interest in. Have you ever noticed how Google Discover tends to favor fresh content? The insights the researchers had about user preferences probably carry over to the Google Discover recommendation system. The takeaway here is that producing content on a regular basis could help get web pages surfaced in Google Discover.

An interesting insight in this research paper, and I don’t know if it’s still true but it’s still interesting, is that the researchers state that machine learning algorithms show an implicit bias toward older existing content because they are trained on historical data.

They explain:

“Machine learning systems often exhibit an implicit bias towards the past because they are trained to predict future behavior from historical examples.”

The neural network is trained on past videos, and it learns that things from one or two days ago were popular. But this creates a bias toward things that happened in the past. They solved the freshness issue by feeding the model the age of each training example as a feature. When the system is recommending videos to a user (serving), this time-based feature is set to zero days (or slightly negative). This signals to the model that it is making a prediction at the very end of the training window, essentially forcing it to predict what is popular right now rather than what was popular on average in the past.
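The example-age mechanic can be sketched in a few lines of Python with NumPy. The function names, feature layout, and values here are hypothetical illustrations, not the paper’s actual code; the point is only that the same feature gets a real value during training and a pinned value of zero at serving time:

```python
import numpy as np

def make_training_features(base_features: np.ndarray, example_age_days: float) -> np.ndarray:
    """During training, append the 'example age' (how old this example is
    relative to the end of the training window) so the model can learn
    that popularity changes over time."""
    return np.append(base_features, example_age_days)

def make_serving_features(base_features: np.ndarray) -> np.ndarray:
    """At serving time, the same feature is pinned to zero (or slightly
    negative), telling the model to predict behavior 'right now' rather
    than averaging over the historical window."""
    return np.append(base_features, 0.0)

x = np.array([0.3, 0.7])                       # toy stand-ins for the model's other inputs
train_row = make_training_features(x, example_age_days=14.0)
serve_row = make_serving_features(x)
print(train_row[-1], serve_row[-1])            # 14.0 at training, 0.0 at serving
```

A small trick on paper, but per the paper’s A/B tests it removed the bias toward the past and dramatically increased watch time on recently uploaded videos.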

Accuracy Of Click Data

Google’s foundational research paper also provides insights about implicit user feedback signals, which is a reference to click data. The researchers say that this kind of data rarely provides accurate user satisfaction information.

The researchers write:

“Noise: Historical user behavior on YouTube is inherently difficult to predict due to sparsity and a variety of unobservable external factors. We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals. Furthermore, metadata associated with content is poorly structured without a well defined ontology. Our algorithms need to be robust to these particular characteristics of our training data.”

The researchers conclude the paper by stating that this approach to recommender systems helped increase user watch time and proved to be more effective than other systems.

They write:

“We have described our deep neural network architecture for recommending YouTube videos, split into two distinct problems: candidate generation and ranking.
Our deep collaborative filtering model is able to effectively assimilate many signals and model their interaction with layers of depth, outperforming previous matrix factorization approaches used at YouTube.

We demonstrated that using the age of the training example as an input feature removes an inherent bias towards the past and allows the model to represent the time-dependent behavior of popular videos. This improved offline holdout precision results and increased the watch time dramatically on recently uploaded videos in A/B testing.

Ranking is a more classical machine learning problem yet our deep learning approach outperformed previous linear and tree-based methods for watch time prediction. Recommendation systems in particular benefit from specialized features describing past user behavior with items. Deep neural networks require special representations of categorical and continuous features which we transform with embeddings and quantile normalization, respectively.”
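The paper's mention of embeddings for categorical features and quantile normalization for continuous features can be illustrated with a small sketch. This is a simplified, hypothetical version of both transforms, not the paper's implementation:

```python
import numpy as np

# Illustrative only: how categorical IDs and continuous features might be
# prepared for a deep ranking model, per the paper's description.

rng = np.random.default_rng(0)

# Categorical feature: map each video ID to a learned dense vector
# (an embedding). In a real system this table is trained with the network.
embedding_table = rng.normal(size=(1000, 16))  # 1000 IDs, 16-dim embeddings
video_id = 42
video_vector = embedding_table[video_id]       # dense representation of the ID

# Continuous feature: quantile-normalize a raw value (e.g. "days since last
# watch") into [0, 1] via its empirical CDF, so the network sees inputs on a
# uniform scale rather than a long-tailed one.
days_since_watch = rng.exponential(scale=5.0, size=10_000)

def quantile_normalize(x, reference):
    # Fraction of reference values below x: the empirical CDF at x.
    return (reference < x).mean()

normalized = quantile_normalize(3.0, days_since_watch)
```

Both transforms exist to give the neural network well-behaved inputs: dense vectors instead of sparse one-hot IDs, and bounded values instead of raw counts.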

Although this research paper is ten years old, it still offers insights into how recommender systems work and takes a little of the mystery out of systems like Google Discover. Read the original research paper: Deep Neural Networks for YouTube Recommendations

Featured Image by Shutterstock/Andrii Iemelianenko

NotificationX WordPress WooCommerce Plugin Vulnerabilities Impact 40k Sites via @sejournal, @martinibuster

A vulnerability advisory was published for the NotificationX FOMO plugin for WordPress and WooCommerce sites, affecting more than 40,000 websites. The vulnerability, which is rated at a 7.2 (High) severity level, enables unauthenticated attackers to inject malicious JavaScript that can execute in a visitor’s browser when specific conditions are met.

NotificationX – FOMO Plugin

The NotificationX FOMO plugin is used by WordPress and WooCommerce site owners to display notification bars, popups, and real-time alerts such as recent sales, announcements, and promotional messages. The plugin is commonly deployed on marketing and e-commerce sites to create urgency and draw visitor attention through notifications.

Exposure Level

The vulnerability does not require authentication or any user role. Attackers do not need a WordPress account or any prior access to the site to trigger the vulnerability. Exploitation relies on getting a victim to visit a specially crafted page that interacts with the vulnerable site.

Root Cause Of The Vulnerability

The issue is a DOM-based Cross-Site Scripting (XSS) vulnerability tied to how the plugin processes preview data. In the context of a WordPress plugin, a DOM-based XSS vulnerability happens when the plugin contains client-side JavaScript that processes data from an untrusted origin (the “source”) in an unsafe way, usually by writing that data to the web page (the “sink”).

In the NotificationX plugin, the vulnerability exists because the plugin’s scripts accept input through the nx-preview POST parameter but do not properly sanitize the input or escape the output before it is rendered in the browser. The checks that should ensure user-supplied data is treated as plain text are missing. This allows an attacker to create a malicious web page that automatically submits a form to the victim’s site, forcing the victim’s browser to execute harmful scripts injected via that parameter.

The end result is that an attacker-controlled input can be interpreted as executable JavaScript instead of harmless preview content.
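The missing step, escaping output so the browser treats it as text rather than markup, can be shown with a conceptual sketch. This is not the plugin's code (NotificationX is PHP/JavaScript); it just demonstrates the vulnerable and patched patterns using Python's standard-library escaper:

```python
import html

# Conceptual sketch of the flaw, not the plugin's actual code.
untrusted = '<img src=x onerror="alert(1)">'  # attacker-controlled preview data

# Vulnerable pattern: untrusted input interpolated directly into markup,
# so the browser interprets it as executable content.
unsafe_html = f"<div class='preview'>{untrusted}</div>"

# Patched pattern: escape the input first, so the browser renders it as
# harmless text instead of an element with an onerror handler.
safe_html = f"<div class='preview'>{html.escape(untrusted)}</div>"

print(safe_html)
# <div class='preview'>&lt;img src=x onerror=&quot;alert(1)&quot;&gt;</div>
```

The same principle applies whether the sink is server-rendered HTML or, as in a DOM-based XSS, client-side JavaScript writing into the page.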

What Attackers Can Do

If exploited, the vulnerability enables attackers to execute arbitrary JavaScript in the context of the affected site. The injected script executes when a user visits a malicious page that automatically submits a form to the vulnerable NotificationX site.

This can allow attackers to:

  • Hijack logged-in administrator or editor sessions
  • Perform actions on behalf of authenticated users
  • Redirect visitors to malicious or fraudulent websites
  • Access sensitive information available through the browser

The official Wordfence advisory explains:

“The NotificationX – FOMO, Live Sales Notification, WooCommerce Sales Popup, GDPR, Social Proof, Announcement Banner & Floating Notification Bar plugin for WordPress is vulnerable to DOM-Based Cross-Site Scripting via the ‘nx-preview’ POST parameter in all versions up to, and including, 3.2.0. This is due to insufficient input sanitization and output escaping when processing preview data. This makes it possible for unauthenticated attackers to inject arbitrary web scripts in pages that execute when a user visits a malicious page that auto-submits a form to the vulnerable site.”

Affected Versions

All versions of NotificationX up to and including 3.2.0 are vulnerable. The vulnerability was addressed in NotificationX version 3.2.1, which includes security enhancements related to this issue.

Recommended Action

Site owners using NotificationX are advised to update their plugin immediately to version 3.2.1 or later. Sites that cannot update should disable the plugin until the patched version can be applied. Leaving vulnerable versions active exposes visitors and logged-in users to client-side attacks that can be difficult to detect and mitigate.

One More Vulnerability

This plugin has another vulnerability, rated at a 4.3 (Medium) severity level. The Wordfence advisory describes it like this:

“The NotificationX plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the ‘regenerate’ and ‘reset’ REST API endpoints in all versions up to, and including, 3.1.11. This makes it possible for authenticated attackers, with Contributor-level access and above, to reset analytics for any NotificationX campaign, regardless of ownership.”

The NotificationX WordPress plugin includes two REST API endpoints called “regenerate” and “reset.” These endpoints are used to manage campaign analytics, such as resetting or rebuilding the stats that show how a notification is performing.

The problem is that these endpoints do not properly check user permissions before modifying data. The plugin only checks whether a user is logged in with Contributor-level access or higher, not whether that user is actually allowed to perform the action. Even though the Contributor role normally has very limited permissions, this flaw lets such users perform actions they should not be able to do.
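The difference between "is logged in with a role" and "is allowed to do this" can be modeled in a short sketch. WordPress plugins are written in PHP (the real fix would use a capability check such as WordPress's current_user_can()); this Python version just models the logic, and all names in it are made up:

```python
# Illustrative model of the flaw; not the plugin's actual code.
CONTRIBUTOR = 1

def vulnerable_permission_check(user):
    # Only checks role level: any logged-in Contributor passes,
    # regardless of whether they own the data being modified.
    return user["role_level"] >= CONTRIBUTOR

def fixed_permission_check(user, campaign):
    # Also requires a capability tied to the action, or ownership
    # of the specific campaign being reset.
    return user["role_level"] >= CONTRIBUTOR and (
        "manage_campaigns" in user["capabilities"]
        or campaign["owner"] == user["id"]
    )

contributor = {"id": 7, "role_level": 1, "capabilities": set()}
campaign = {"owner": 99}  # owned by a different user

print(vulnerable_permission_check(contributor))       # True  (the flaw)
print(fixed_permission_check(contributor, campaign))  # False (patched logic)
```

The vulnerable check answers "who are you?" when the endpoint needs to answer "may you modify this particular campaign?"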

In this case, the damage that an attacker can do is limited; for example, an attacker can’t take over a site. Updating to version 3.2.1 or higher (the same patch as the other vulnerability) fixes this issue.

An attacker can:

  • Reset analytics for any NotificationX campaign
  • Do this even if they did not create or own the campaign
  • Repeatedly wipe or regenerate campaign statistics

Featured Image by Shutterstock/Art Furnace