Realizing value with AI inference at scale and in production

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line.

Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes "the true value of AI lies in inference." Inference is where AI earns its keep: it's the operational layer that puts all that training to use in real-world workflows. "The phrase we use for this is 'trusted AI inferencing at scale and in production,'" he says. "That's where we think the biggest return on AI investments will come from."

Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation.

Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.

Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.”

Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. "When things go wrong, trust goes down, productivity gains are not reached, and the outcome we're looking for is not achieved," he says.

On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains can increase. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—"a 24/7 member of the team they didn't have before," says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture.

“Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence.

This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence–the model itself–is truly yours? And how much of the input–the data–is uniquely yours, from your customers, operations, or market?” says Reichenbach.

These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (see figure):

Source: HPE, 2025

  • Run: Accessing an external, pretrained model via an interface or API; organizations don’t own the model or the data. Implementation requires strong security and governance. It also requires establishing a center of excellence that makes and communicates decisions about AI usage.
  • RAG (retrieval augmented generation): Using external, pre-trained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.
  • Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.
  • Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches, but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.
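The two ownership questions above map directly onto the four quadrants. As an illustrative sketch only (not an HPE tool, and the function name is hypothetical), the mapping can be expressed in a few lines of Python:

```python
def ai_factory_quadrant(owns_model: bool, owns_data: bool) -> str:
    """Map the two ownership questions to the four-quadrant matrix.

    owns_model: is the intelligence (the model itself) truly yours?
    owns_data:  is the input (the data) uniquely yours?
    """
    if not owns_model and not owns_data:
        return "Run"       # external model, external data: govern usage
    if not owns_model and owns_data:
        return "RAG"       # external model combined with proprietary data
    if owns_model and owns_data:
        return "Riches"    # custom model trained on enterprise data
    return "Regulate"      # custom model trained on non-owned data

# Example: a team pointing a public LLM at its own documents
print(ai_factory_quadrant(owns_model=False, owns_data=True))  # → RAG
```

The value of the exercise is less the code than the forcing function: answering the two ownership questions per use case tells you which implementation requirements (governance, data pipelines, infrastructure, compliance) apply.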

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge's catchphrase about inferencing—"at scale"—speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.

“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.”

In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.”

For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle.

The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to.

Rather than shut down experimentation, IT's mandate is now to bring structure to it: architecting a data platform strategy that brings enterprise data together with guardrails, a governance framework, and the accessibility AI needs. It's also critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand. These are the requirements for reaching the final milestone: AI that's truly in production.

For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” says Reichenbach. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.”

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Networking for AI: Building the foundation for real-time intelligence

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways.

From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day.

To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered on a platform where tournament staff could access data visualizations supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds.

It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.

As businesses move toward distributed, real-time AI applications, tomorrow's networks will need to parse even more massive volumes of information at ever-faster speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: Inference-ready networks are a make-or-break factor for turning AI's promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they could run real-time data pushes and pulls for innovation. That's a marked improvement over last year (just 7% reported having such capabilities in 2024), but there's still work to be done to connect data collection with real-time decision-making.

The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision.

“There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.”

Networks built for AI, therefore, must operate with a different set of performance characteristics: ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. AI's distributed nature adds a further demand, making the seamless flow of data between systems essential.

The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard that provided staff with an instantaneous view of activity across the grounds.

“The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.”

To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it's even more critical in contexts where safety is on the line—for instance, a self-driving car making a split-second decision to accelerate or brake.

In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending the data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.

This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for enhanced speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round-trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There's data to back up Green's projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the market for AI infrastructure is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: Modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable.

“Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE, for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions.

The concept broadly known as AIOps (or AI-driven IT operations) is changing how enterprise networks are managed across industries. Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves.

That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams. “AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate those who pilot from those who scale AI.


New Books: Wikipedia, Ring, Vibe Code, More

Innovation and leadership are often synonymous. What follows are 10 new titles from entrepreneurs, practitioners, and academics on innovation, leadership, and lessons learned.

Vibe Coding: Building Production-Grade Software with GenAI, Chat, Agents, and Beyond


by Gene Kim, Steve Yegge

"Vibe coding" is the practice of describing a software tool to a generative AI platform, which then writes the code. The authors, veterans of leading tech companies including Tripwire, Google, and Amazon, cut through controversy to offer a groundbreaking look at "the good, the bad, and the ugly" of this transformational programming practice and how to unlock its potential.

Ding Dong: How Ring Went from Shark Tank Reject to Everyone’s Front Door


by Jamie Siminoff and Andrew Postman

The honest, humorous story of how Siminoff, the founder of Ring home security, took his product from a humiliating Shark Tank rejection to a billion-dollar business juggernaut with celebrity investors.

Leadership Unblocked: Break through the Beliefs That Limit Your Potential


by Muriel M. Wilkins

Wilkins, a corporate coach, consultant, author, and podcaster, shares her experience and research to show readers how to overcome the “hidden blockers” — unconscious, limiting beliefs — that all too often get in the way of effective leadership.

The Seven Rules of Trust: A Blueprint for Building Things That Last


by Jimmy Wales with Dan Gardner

Wikipedia has grown from an unorthodox experiment into an indispensable global encyclopedia. Founder Jimmy Wales shares the lessons he learned about building and maintaining trust, accountability, and creativity in an era when public confidence in almost everything else has plummeted.

Natural-Born Entrepreneurs: Breaking into Business Ownership


by Lisa Piercey

Debuting at number 1 in Amazon’s “Starting a Business” category, “Natural-Born Entrepreneurs” offers a roadmap for transitioning from employee to employer by acquiring businesses — addressing deal structures, operations, and governance. Piercey is a physician, executive, investor, and former Tennessee state health commissioner.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity


by Tim Wu

Wu is a bestselling author, professor of law, and former White House advisor on tech policy. In this new book, he explores the power of tech platforms to shape the digital economy. Reviewers call it “astute and timely,” “a must-read,” and “an urgent wake-up call.”

Think Bigger, Lead Better: Eight to Great Principles for Organizational Success


by Rick Tollakson

Tollakson presents the “eight to great” principles distilled from growing his business tenfold across decades. The book asks readers, “Are you ready to think bigger?”

The Winner’s Curse: Behavioral Economics Anomalies, Then and Now


by Richard H. Thaler and Alex Imas

Thaler, a Nobel laureate, and Imas, an up-and-coming economist, join forces to revisit concepts that challenged the idea of rational decision-making and gave rise to the field of behavioral economics. They demonstrate that these behavioral concepts appear in everything from professional golf to retirement planning.

How They Get You: Sneaky Everyday Economics and Smart Ways to Hold on to Your Money


by Chris Kohler

This entertaining guide to making better money-management choices comes from a top Australia-based financial journalist. It covers how to outsmart loyalty programs, gift cards, sneaky subscriptions, and late fees — all designed to get you spending more without realizing it.

Seven Tenths of a Second: Life, Leadership and Formula 1


by Zak Brown

Brown went from professional race car driver to a global leader in motorsport marketing to CEO of McLaren Racing. His book gives readers behind-the-scenes insights into a sport and business that demands continuous innovation.

Google Brings Gemini 3 To Search’s AI Mode via @sejournal, @MattGSouthern

Google has integrated Gemini 3 into Search's AI Mode. This marks the first time Google has shipped a Gemini model to Search on its release date.

Google AI Pro and Ultra subscribers in the U.S. can access Gemini 3 Pro by selecting “Thinking” from the model dropdown in AI Mode.

Robby Stein, VP and GM of Google Search, wrote on X:

“Gemini 3, our most intelligent model, is landing in Google Search today – starting with AI Mode. Excited that this is the first time we’re shipping a new Gemini model in Search on day one.”

Google plans to expand Gemini 3 in AI Mode to all U.S. users soon, with higher usage limits for Pro and Ultra subscribers.

What’s New

Search Updates

Google describes Gemini 3 as a model with state-of-the-art reasoning and deep multimodal understanding.

In the context of Search, it’s designed to explain advanced concepts, work through complex questions, and support interactive visuals that run directly inside AI Mode responses.

With Gemini 3 in place, Google says AI Mode has effectively re-architected what a “helpful response” looks like.

Stein explains:

“Gemini 3 is also making Search smarter by re-architecting what a helpful response looks like. With new generative UI capabilities, Gemini 3 in AI Mode can now dynamically create the overall response layout when it responds to your query – completely on the fly.”

Instead of only returning a block of text, AI Mode can design a response layout tailored to your query. That includes deciding when to surface images, tables, or other structured elements so the answer is clearer and easier to work with.

In the coming weeks, Google will add automatic model selection, Stein continues:

“Search will intelligently route tough questions in AI Mode and AI Overviews to our frontier model, while continuing to use faster models for simpler tasks.”

Enhanced Query Fan-Out

Gemini 3 upgrades Google’s query fan-out technique.

According to Stein, Search can now issue more related searches in parallel and better interpret what you’re trying to do.

A potential benefit, Stein adds, is that Google may find content it previously missed:

“It now performs more and much smarter searches because Gemini 3 better understands you. That means Search can now surface even more relevant web content for your specific question.”

Generative UI

Gemini 3 in AI Mode introduces generative UI features that build dynamic visual layouts around your query.

The model analyzes your question and constructs a custom response using visual elements such as images, tables, and grids. When an interactive tool would help, Gemini 3 can generate a small app in real time and embed it directly in the answer.

Examples from Google’s announcement include:

  • An interactive physics simulation for exploring the three-body problem
  • A custom mortgage loan calculator that lets you compare different options and estimate long-term savings

All of these responses include prominent links to high-quality content across the web so you can click through to source material.

See a demonstration in Google’s launch video below:

Why This Matters

Gemini 3 changes how your content is discovered and used in AI Mode. With deeper query fan-out, Google can access more pages per question, which might influence which sites are cited or linked during long, complex searches.

The updated layouts and interactive features change how links appear on your screen. On-page tools, explainers, and visualizations could now compete directly with Google’s own interface.

As Gemini 3 becomes available to more people, it will be important to watch how your content is shown or referenced in AI responses, in addition to traditional search rankings.

Looking Ahead

Google says it will continue refining these updates based on feedback as more people try the new tools. Automatic model selection is set to arrive in the coming weeks for Google AI Pro and Ultra subscribers in the U.S., with broader U.S. access to Gemini 3 in AI Mode planned but not yet scheduled.

Selling AI Search Strategies To Leadership Is About Risk via @sejournal, @Kevin_Indig


Executive buy-in for AI search visibility doesn't fail because the investment is "too risky." Selling AI search strategies to leadership is about how you frame risk.

Image Credit: Kevin Indig

A Deloitte survey of more than 2,700 leaders reveals that getting buy-in for an AI search strategy isn't about innovation, but risk.

SEO teams keep failing to sell AI search strategies for one reason: They’re pitching deterministic ROI in a probabilistic environment.

The old way: Rankings → traffic → revenue. But that event chain doesn’t exist in AI systems.

LLMs don’t rank. They synthesize. And Google’s AI Overviews and AI Mode don’t “send traffic.” They answer.

Yet most teams still walk into a leadership meeting with a deck built on a decaying model. Then, executives say no – not because AI search “doesn’t work,” but because the pitch asks them to fund an outcome nobody can guarantee.

In AI search, you cannot sell certainty. You can only sell controlled learning.

1. You Can’t Sell AI Search With A Deterministic ROI Model

Everyone keeps asking the wrong question: “How do I prove my AI search strategy will work so leadership will fund it?” You can’t; there’s no traffic chain you can model. Randomness is baked directly into the outputs.

You’re forcing leadership to evaluate your AI search strategy with a framework that’s already decaying. Confusion about AI search vs. traditional SEO metrics and forecasting is blocking you from buy-in. When SEO teams try to sell an AI search strategy to leadership, they often encounter several structural problems:

  1. Lack of clear attribution and ROI: Where you see opportunity, leadership sees vague outcomes and deprioritizes investment. Traffic and conversions from AI Overviews, ChatGPT, or Perplexity are hard to track.
  2. Misalignment with core business metrics: It’s harder to tie results to revenue, CAC, or pipeline – especially in B2B.
  3. AI search feels too experimental: Early investments feel like bets, not strategy. Leadership may see this as a distraction from “real” SEO or growth work.
  4. No owned surfaces to leverage: Many brands aren’t mentioned in AI answers at all. SEO teams are selling a strategy that has no current baseline.
  5. Confusion between SEO and AI search strategy: Leadership doesn’t understand the distinction between optimizing for classic Google Search vs. LLMs vs. AI Overviews. Clear differentiation is needed to secure a new budget and attention.
  6. Lack of content or technical readiness: The site lacks the structured content, brand authority, or documentation to appear in AI-generated results.

2. Pitch AI Search Strategy As Risk Mitigation, Not Opportunity

Executives don’t buy performance in ambiguous environments. They buy decision quality. And the decision they need you to make is simple: Should your brand invest in AI-driven discovery before competitors lock in the advantage – or not?

Image Credit: Kevin Indig

AI search is still an ambiguous environment. That’s why your winning strategy pitch should be structured for fast, disciplined learning with pre-set kill criteria instead of forecasting traffic → revenue. Traditionally, SEO teams pitch outcomes (traffic, conversions), but leadership needs to buy learning infrastructure (testing systems, measurement frameworks, kill criteria) for AI search.

Leadership thinks you’re asking for “more SEO budget” when you’re actually asking them to buy an option on a new distribution channel.

Everyone treats the pitch as “convince them it will work” when it should be “convince them the cost of not knowing is higher than the cost of finding out.” Executives don’t need certainty about impact – they need certainty that you’ll produce a decision with their money.

Making stakes crystal clear:

Your Point of View + Consequences = Stakes. Leaders need to know what happens if they don’t act.

Image Credit: Kevin Indig

The cost of passing on an AI search strategy can be simple and brutal:

  1. Competitors who invest early in AI search visibility will build entity authority and brand presence.
  2. Organic traffic stagnates and will drop over time while cost-per-click rises.
  3. AI Overviews and AI Mode outputs will replace queries your brand used to win in Google.
  4. Your influence on the next discovery channel will be decided without you.

AI search strategy builds brand authority, third-party mentions, entity relationships, content depth, pattern recognition, and trust signals in LLMs. These signals compound. They also freeze into the training data of future models.

If you aren't shaping that footprint now, the model will rely on whatever scraps already exist and on whatever your competitors are feeding it.

3. Sell Controlled Experiments – Small, Reversible, And Time-Boxed

You’re asking for resources to discover the truth before the market makes the decision for you. This approach collapses resistance because it removes the fear of sunk cost and turns ambiguity into manageable, reversible steps.

A winning AI search strategy proposal sounds like:

  • “We’ll run x tests over 12 months.”
  • “Budget: ≤0.3% of marketing spend.”
  • “Three-stage gates with Go/No-Go decisions.”
  • “Scenario ranges instead of false-precision forecasts.”
  • “We stop if leading indicators don’t move by Q3.”

45% of executives rely more on instinct than facts. Balance your data with a compelling narrative – focus on outcomes and stakes, not technical details.

I covered how to build a pitch deck and strategic narrative in how to explain the value of SEO to executives; in the current AI search landscape, though, focus on selling learning as the deliverable.

When you present to leaders, they focus on only three things: money (revenue, profit, cost), market (market share, time-to-market), and exposure (retention, risk). Structure every pitch around these.

The SCQA framework (Minto Pyramid) guides you:

  • Situation: Set the context.
  • Complication: Explain the problem.
  • Question: What should we do?
  • Answer: Your recommendation.

This is the McKinsey approach – and executives expect it.


Featured Image: Paulo Bobita/Search Engine Journal

Cloudflare Outage Triggers 5xx Spikes: What It Means For SEO via @sejournal, @MattGSouthern

A Cloudflare incident is returning 5xx responses for many sites and apps that sit behind its network, which means users and crawlers may be running into the same errors.

From an SEO point of view, this kind of outage often looks worse than it is. Short bursts of 5xx errors usually affect crawl behavior before they touch long-term rankings, but there are some details worth paying attention to.

What You’re Likely Seeing

Sites that rely on Cloudflare as a CDN or reverse proxy may currently be serving generic “500 internal server error” pages or failing to load at all. In practice, everything in that family of responses is treated as a server error.

If Googlebot happens to crawl while the incident is ongoing, it will record the same 5xx responses that users see. You may not notice anything inside Search Console immediately, but over the next few days you could see a spike in server errors, a dip in crawl activity, or both.

Keep in mind that Search Console data is rarely real-time and often lags by roughly 48 hours. A flat line in GSC today could mean the report hasn’t caught up yet. If you need to confirm that Googlebot is encountering errors right now, you will need to check your raw server access logs.
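Checking the raw logs can be sketched in a few lines of Python. The combined log format and the regex below are assumptions – adapt them to your server’s actual log configuration, and note that matching on the user-agent string alone can be spoofed (a reverse DNS check is the robust way to verify Googlebot):

```python
import re

# Assumes the common "combined" access log format; adjust for your server.
LINE_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_5xx(lines):
    """Yield (path, status) for Googlebot hits that received a 5xx response."""
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        # User-agent matching is illustrative; spoofing is possible.
        if "Googlebot" in m.group("agent") and m.group("status").startswith("5"):
            yield m.group("path"), int(m.group("status"))

sample = [
    '66.249.66.1 - - [18/Nov/2025:11:32:01 +0000] "GET /pricing HTTP/1.1" 500 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [18/Nov/2025:11:32:02 +0000] "GET /pricing HTTP/1.1" 200 8192 "-" "Mozilla/5.0"',
]
print(list(googlebot_5xx(sample)))  # only the Googlebot 5xx hit survives
```

Run against your real access log, this gives a same-day answer to “did Googlebot see the errors?” without waiting for Search Console to catch up.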

This can feel like a ranking emergency. It helps to understand how Google has described its handling of temporary server problems in the past, and what Google representatives are saying today.

How Google Handles Short 5xx Spikes

Google treats 5xx responses as signs that a server is overloaded or unavailable. According to Google’s Search Central documentation on HTTP status codes, 5xx and 429 errors prompt crawlers to temporarily slow down, and URLs that continue to return server errors can eventually be dropped from the index if the issue remains unresolved.

Google’s “How To Deal With Planned Site Downtime” blog post gives similar guidance for maintenance windows, recommending a 503 status code for temporary downtime and noting that long-lasting 503 responses can be treated as a sign that content is no longer available.
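The guidance boils down to: during planned downtime, answer with a 503 (a temporary condition) rather than a 500 or a soft 200 error page. A minimal sketch using Python’s standard library – the handler name, message, and Retry-After value are illustrative, not a production configuration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    """Answer every request with 503 + Retry-After during planned downtime."""

    def do_GET(self):
        body = b"Down for maintenance - back soon."
        # 503 signals a temporary condition; Retry-After hints when to come back.
        self.send_response(503)
        self.send_header("Retry-After", "3600")  # seconds (an HTTP date also works)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the demo quiet; a real deployment should log normally.
        pass

# To serve for real: HTTPServer(("", 80), MaintenanceHandler).serve_forever()
```

In practice you would do this at the CDN or web-server layer rather than in application code, but the status code and header are the part that matters to crawlers.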

In a recent Bluesky post, Google Search Advocate John Mueller reinforced the same message in plainer language. Mueller wrote:

“Yeah. 5xx = Google crawling slows down, but it’ll ramp back up.”

He added:

“If it stays at 5xx for multiple days, then things may start to drop out, but even then, those will pop back in fairly quickly.”

Taken together, the documentation and Mueller’s comments draw a fairly clear line.

Short downtime is usually not a major ranking problem. Already indexed pages tend to stay in the index for a while, even if they briefly return errors. When availability returns to normal, crawling ramps back up and search results generally settle.

The picture changes when server errors become a pattern. If Googlebot sees 5xx responses for an extended period, it can start treating URLs as effectively gone. At that point, pages may drop from the index until crawlers see stable, successful responses again, and recovery can take longer.

The practical takeaway is that a one-off infrastructure incident is mostly a crawl and reliability concern. Lasting SEO issues tend to appear when errors linger well beyond the initial outage window.

Analytics & PPC Reporting Gaps

For many sites, Cloudflare sits in front of more than just HTML pages. Consent banners, tag managers, and third-party scripts used for analytics and advertising may all depend on services that run through Cloudflare.

If your consent management platform or tag manager was slow or unavailable during the outage, that can show up later as gaps in GA4 and ad platform reporting. Consent events may not have fired, tags may have timed out, and some sessions or conversions may not have been recorded at all.

When you review performance, you might see a short cliff in GA4 traffic, a drop in reported conversions in Google Ads or other platforms, or both. In many cases, that will reflect missing data rather than a real collapse in demand.

It’s safer to annotate today’s incident in your analytics and media reports and treat it as a tracking gap before you start reacting with bid changes or budget shifts based on a few hours of noisy numbers.

What To Do If You Were Hit

If you believe you’re affected by today’s outage, start by confirming that the problem is really tied to Cloudflare and not to your origin server or application code. Check your own uptime monitoring and any status messages from Cloudflare or your host so you know where to direct engineering effort.

Next, record the timing. Note when you first saw 5xx errors and when things returned to normal. Adding an annotation in your analytics, Search Console, and media reporting makes it much easier to explain any traffic or conversion drops when you review performance later.

Over the coming days, keep an eye on the Crawl Stats Report and index coverage in Search Console, along with your own server logs. You’re looking for confirmation that crawl activity returns to its usual pattern once the incident is over, and that server error rates drop back to baseline. If the graphs settle, you can treat the outage as a contained event.

If, instead, you continue to see elevated 5xx responses after Cloudflare reports the issue as resolved, it’s safer to treat the situation as a site-specific problem.

What you generally do not need to do is change content, internal linking, or on-page SEO purely in response to a short Cloudflare outage. Restoring stability is the priority.

Finally, resist the urge to hit ‘Validate Fix’ in Search Console the moment the site comes back online. If you trigger validation while the connection is still intermittent, the check will fail, and you will have to wait for the cycle to reset. It is safer to wait until the status page says ‘Resolved’ for a full 24 hours before validating.

Why This Matters

Incidents like this one are a reminder that search visibility is tied to reliability as much as relevance. When a provider in the middle of your stack has trouble, it can quickly look like a sudden drop, even when the root cause is outside your site.

Knowing how Google handles temporary 5xx spikes and how they influence analytics and PPC reports can help you communicate better with your clients and stakeholders. It allows you to set realistic expectations and recognize when an outage has persisted long enough to warrant serious attention.

Looking Ahead

Once Cloudflare closes out its investigation, the main thing to watch is whether your crawl, error, and conversion metrics return to normal. If they do, this morning’s 5xx spike is likely to be a footnote in your reporting rather than a turning point in your organic or paid performance.

B2B Content Marketing Has Changed: Principles Of Good Strategy

This edited excerpt is from B2B Content Marketing Strategy by Devin Bramhall, ©2025, and is reproduced and adapted with permission from Kogan Page Ltd.

Modern content strategy is no longer about being a brand megaphone, shouting messages across digital space.

Modern content strategy that works is a blended approach designed to create community around shared experiences, build lasting relationships, and establish genuine trust and influence. It’s about leaning into individuality within niche communities by creating content that resonates with individuals and small groups rather than trying to appeal to the masses.

And it’s definitely not a pursuit of ubiquity, in the ways brands used to do it by creating a dominant presence on every platform and community space.

Instead, it’s about taking fewer actions to accomplish more. Playing a supporting role in the community sometimes by elevating others. It’s about building relationships that motivate action rather than force it. Mostly, it’s about creating frameworks and principles to guide and evaluate your decisions so you can develop your own “playbook” that works for your company and community.

Principles Of Good Content Marketing Strategy

Content marketing exists to serve business goals by solving customer pain points. It accomplishes this through education and relationship-building:

Education attracts potential buyers and influencers by providing immediate value in the form of short-term solutions (awareness and affinity).

Establishing trust allows your brand to become an ongoing part of your community’s lives by speaking their language, empathizing with their challenges, and solving their problems (nurture and engage).

Relationship formation creates alignment between external promises and internal experiences – the product delivers on the expectations set by content (convert, grow LTV, and upsell).

The goal is to help first and sell second – at which point customers often feel they reached decisions independently. They become eager to invest in both the product and the relationship. This is how content marketing works organically based on human behavior.

It’s also the stuff you already know.

Content marketing teams guided by the following principles consistently achieve superior results.

Create Unique Advantage

No other company exists with your exact combination of product, people, and resources. Your first job as a marketer is to identify what you already have that can be leveraged for growth.

This could be your founder’s network, your CMO’s substantial LinkedIn following that overlaps with your target buyers, or a product feature that solves a previously unaddressed problem. It might be an upcoming conference where your CEO is speaking to 300 decision-makers who gather only once per year.

Other advantages might include:

  • Budget, software, and technological resources.
  • Existing audiences, email lists, or content archives.
  • Market position (whether as an established leader or disruptive newcomer).
  • Opportunistic events like funding announcements or key hires.
  • Your own unique talents, experiences, and connections.

The goal is to create a content strategy that:

  1. Competitors can’t easily duplicate because they lack your specific advantages.
  2. Generates exponential impact by leveraging opportunistic events, efficient execution, and activities that serve multiple outcomes simultaneously.
  3. Is scalable with repeatable elements that compound over time and can expand with relative ease.

A prime example comes from Gong, the revenue intelligence platform. While competitors focused on standard SaaS marketing playbooks, Gong leveraged their unique advantage: Access to millions of sales conversations and the data patterns within them. By sharing insights from this proprietary data, they created content no competitor could replicate, establishing themselves as the definitive source of sales intelligence while simultaneously demonstrating their product’s value.

Serve Outcomes It Can Logically Impact (Better Than Other Approaches)

Strategy that serves business goals does need to be measured to ensure it’s delivering those outcomes – and, ideally, to show how well. Yes, I’m talking about ROI.

The benefit of having clearly defined, quantifiable, time-based outcomes is twofold:

  • It helps you narrow down tactics.
  • It gives you a target to “bump up against” to extract learnings for continuous improvement.

This principle forces you to evaluate each potential marketing activity against a simple standard: Is this the best way to reach the business outcome we want, or are we doing it because it’s the way we’ve always done it?

Can Be Executed With Existing Resources

A strategy is only as good as your ability to execute it.

Your plan is only strategic if you factor in all constraints, including budget and resources. If you come up with a “brilliant” idea that you know is unlikely to be funded, then it’s not brilliant in the context in which you want to apply it.

So, if you come up with something that could really move the needle and you want it funded, scope it down to an MVP and call it a test. Once you’ve shown impact and dazzled the purse-holders, it’ll be easier to get budget to expand and do more. Start by getting buy-in on only those resources you need to execute a bare-minimum version that demonstrates enough impact to justify additional investment. One approach that has worked for me (though it’s not a silver bullet) is to treat it like a sales activity. All I need is enough of the right kind of information that whoever I’m pitching to will:

  • Understand without a complex explanation.
  • See a type of business impact they recognize as valuable.
  • Not care too much about it (i.e., the investment is negligible to them).

Your best-case scenario at this stage is not enthusiasm; it’s disinterest. You want them to feel like saying yes is an errand, almost like it’s a waste of their time.

This requires keeping a ton of details to yourself – especially the ones your leadership will question. Also useful, make it feel familiar and demonstrate you listened to them by pointing out areas where you intentionally factored in something they wanted or advised. Think of it like landing page copy. Your “conversion” is a yes, so what details and messaging will get you that conversion?

This doesn’t mean your strategy can’t be ambitious. Rather, it means being realistic about what you can sustain long enough to see results.

Serves Outcomes It Can Logically Impact (Better Than Other Activities)

It doesn’t matter what size your marketing team is – at some point, you’ll be tasked with showing impact beyond what seems possible with your current resources. This is where strategic thinking becomes essential.

Content marketing strategy plays a crucial role in driving business results. What sets a strategy apart from a simple plan is its ability to serve as a unified and thoughtful response to a significant challenge, as emphasized by Richard Rumelt in his book “Good Strategy, Bad Strategy.”

A plan is simply a list of activities you know you can accomplish, like running errands in a particular order to minimize time. Strategy, by contrast, is using the resources you have to show impact that decision-makers will recognize, reminding them of that impact over and over in different ways, and then using it as leverage to get the budget to do what you wanted to in the first place.


Grounded In Facts, Not Best Practices

Choose channels, tactics, and messages based on YOUR customers, not on what others are doing or what industry best practices dictate.

Everything we currently do in marketing was, at some point, brand new. SEO, for example, was once considered a growth hack. It wasn’t in the content marketing lexicon, let alone on any list of best practices. Someone discovered it could provide a unique advantage: their company could appear first when people searched for specific solutions.

This principle requires you to reason from your specific facts:

  • How do YOUR customers make purchase decisions?
  • What channels do THEY genuinely use for discovery and research?
  • What unique circumstances does YOUR company face?

What might appear as constraints – limited budget, market position, team size – can often become advantages if you approach them with curiosity and objectivity.

Designed To Have Exponential Impact

Most “strategies” content marketers present are just action plans that itemize tactics they will execute over a period of time to hit a goal.

Create content, distribute, convert people, measure results, repeat.

But think about how content marketing itself came to exist. It was all about leverage. Take SEO, for example. It was essentially a “free” way to get more people to visit your site without paying for ads. And for a while, it was an ROI multiplier, meaning that the amount of investment required to execute was minuscule compared to the long-term impact it would have over time. That’s a strategic ratio.

Now, SEO is part of the B2B marketing modus operandi. The ratio is more incremental; thus, it’s not really a strategic activity – it’s more of a table-stakes tactic.

The opportunity for marketers now is to come up with a scalable, sustainable way to transform bespoke interactions between people from the company and the community, across multiple mediums, into ROI. This means designing your strategy so that some activities serve more than one purpose or outcome, and building in “self-sustaining” elements (i.e., automations, workflows, etc.).

To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code “SEJ25” at koganpage.com here.



Featured Image: Anton Vierietin/Shutterstock

The Knowns And Unknowns Of Structured Data Attribution via @sejournal, @marthavanberkel

As marketers, we love a great funnel. It provides clarity on how our strategies are working. We have conversion rates and can track the customer journey from discovery through conversion. But in today’s AI-first world, our funnel has gone dark.

We can’t yet fully measure visibility in AI experiences like ChatGPT or Perplexity. While emerging tools offer partial insights, their data isn’t comprehensive or consistently reliable. Traditional metrics like impressions and clicks still don’t tell the whole story in these spaces, leaving marketers facing a new kind of measurement gap.

To help bring clarity, let’s look at what we know and don’t know about measuring the value of structured data (also known as schema markup). By understanding both sides, we can focus on what’s measurable and controllable today, and where the opportunities lie as AI changes how customers discover and engage with our brands.

Why Most ‘AI Visibility’ Data Isn’t Real

AI has created a hunger for metrics. Marketers, desperate to quantify what’s happening at the top of the funnel, are turning to a wave of new tools. Many of these platforms are creating novel measurements, such as “brand authority on AI platforms,” that aren’t grounded in representative data.

For example, some tools are trying to measure “AI prompts” by treating short keyword phrases as if they were equivalent to consumer queries in ChatGPT or Perplexity. But this approach is misleading. Consumers are writing longer, context-rich prompts that go far beyond what keyword-based metrics suggest. These prompts are nuanced, conversational, and highly personalized – nothing like traditional long-tail queries.

These synthetic metrics offer false comfort. They distract from what’s actually measurable and controllable. The fact is, ChatGPT, Perplexity, and even Google’s AI Overviews aren’t providing us with clear and comprehensive visibility data.

So, what can we measure that truly impacts visibility? Structured data.

What Is AI Search Visibility?

Before diving into metrics, it’s worth defining “AI search visibility.” In traditional SEO, visibility meant appearing on page one of search results or earning clicks. In an AI-driven world, visibility means being understood, trusted, and referenced by both search engines and AI systems. Structured data plays a role in this evolution. It helps define, connect, and clarify your brand’s digital entities so that search engines and AI systems can understand them.

The Knowns: What We Can Measure With Confidence For Structured Data

Let’s talk about what is known and measurable today with regard to structured data.

Increased Click-Through Rates From Rich Results

From data in our quarterly business reviews, we see that when implementing structured data on a page qualifies the content for a rich result, enterprise brands consistently see an increase in click-through rates. Google currently supports more than 30 types of rich results, which continue to appear in organic search.

For example, from our internal data, in Q3 2025, one enterprise brand in the home appliances industry saw click-through rates on product pages increase by 300% when a rich result was awarded. Rich results continue to provide both visibility and conversion gains from organic search.
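For a product page like the one above, the underlying markup might look something like this JSON-LD sketch – every name, URL, and value here is invented for illustration, and which properties qualify for a given rich result is ultimately up to Google’s documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "ExampleBrand Front-Load Washer X100",
  "image": "https://www.example.com/images/x100.jpg",
  "description": "27-inch front-load washing machine with steam cycle.",
  "sku": "X100-WH",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "213"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/washers/x100",
    "priceCurrency": "USD",
    "price": "899.00",
    "availability": "https://schema.org/InStock"
  }
}
```

Price, availability, and rating are the properties that typically surface in the product rich result itself.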

Example of a product rich result on Google’s search engine results page (Screenshot by author, November 2025)

Increased Non-Branded Clicks From Robust Entity Linking

It’s important to distinguish between basic schema markup and robust schema markup with entity linking that results in a knowledge graph. Schema markup describes what’s on a page. Entity linking connects those things to other well-defined entities across your site and the web, creating relationships that define meaning and context.

An entity is a unique and distinguishable thing or concept, such as a person, product, or service. Entity linking defines how those entities relate to one another, either through external authoritative sources like Wikidata and Google’s knowledge graph or your own internal content knowledge graph.

For example, imagine a page about a physician. The schema markup would describe the physician. Robust, semantic markup would also connect to Wikidata and Google’s knowledge graph to define their specialty, while linking to the hospital and medical services they provide.
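That physician example could be expressed along these lines in JSON-LD – a sketch in which every name, URL, and Wikidata ID is a placeholder rather than a looked-up value; the point is the linking pattern (`@id` and `sameAs`), not the specific identifiers:

```json
{
  "@context": "https://schema.org",
  "@type": "Physician",
  "@id": "https://www.example-hospital.com/doctors/jane-doe#physician",
  "name": "Dr. Jane Doe",
  "medicalSpecialty": "https://schema.org/Cardiovascular",
  "sameAs": [
    "https://www.wikidata.org/entity/Q00000000",
    "https://www.linkedin.com/in/example-jane-doe"
  ],
  "memberOf": {
    "@type": "Hospital",
    "@id": "https://www.example-hospital.com/#organization",
    "name": "Example General Hospital"
  },
  "availableService": {
    "@type": "MedicalProcedure",
    "@id": "https://www.example-hospital.com/services/echocardiogram#service",
    "name": "Echocardiogram"
  }
}
```

The `@id` values let other pages on the site reference the same entities, which is what turns page-level markup into a connected knowledge graph.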

Image from author, November 2025

AIO Visibility

Traditional SEO metrics can’t yet measure AI experiences directly, but some platforms can identify instances when a brand is mentioned in an AI Overview (AIO) result.

Research from a BrightEdge report found that adopting entity-based SEO practices supports stronger AI visibility. The report noted:

“AI prioritizes content from known, trusted entities. Stop optimizing for fragmented keywords and start building comprehensive topic authority. Our data shows that authoritative content is three times more likely to be cited in AI responses than narrowly focused pages.”

The Unknowns: What We Can’t Yet Measure

While we can measure the impact of entities in schema markup through existing SEO metrics, we don’t yet have direct visibility into how these elements influence large language model (LLM) performance.

How LLMs Are Using Schema Markup

Visibility starts with understanding – and understanding starts with structured data.

Evidence for this is growing. In Microsoft Advertising’s Oct. 8, 2025, blog post, “Optimizing Your Content for Inclusion in AI Search Answers,” Krishna Madhaven, Principal Product Manager for Microsoft Bing, wrote:

“For marketers, the challenge is making sure their content is easy to understand and structured in a way that AI systems can use.”

He added:

“Schema is a type of code that helps search engines and AI systems understand your content.”

Similarly, Google’s article, “Top ways to ensure your content performs well in Google’s AI experiences on Search,” reinforces that “structured data is useful for sharing information about your content in a machine-readable way.”

Why are Google and Microsoft both emphasizing structured data? One reason may be cost and efficiency. Structured data helps build knowledge graphs, which serve as the foundation for more accurate, explainable, and trustworthy AI. Research has shown that knowledge graphs can reduce hallucinations and improve performance in LLMs.

While schema markup itself isn’t typically ingested directly to train LLMs, the retrieval phase in retrieval-augmented generation (RAG) systems plays a crucial role in how LLMs respond to queries. In recent work, Microsoft’s GraphRAG system generates a knowledge graph (via entity and relation extraction) from textual data and leverages that graph in its retrieval pipeline. In their experiments, GraphRAG often outperforms a baseline RAG approach, especially for tasks requiring multi-hop reasoning or grounding across disparate entities.

This helps explain why companies like Google and Microsoft are encouraging enterprise brands to invest in structured data – it’s the connective tissue that helps AI systems retrieve accurate, contextual information.

Beyond Page-Level SEO: Building Knowledge Graphs

There’s an important distinction between optimizing a single page for SEO and building a knowledge graph that connects your entire enterprise’s content. In a recent interview with Robby Stein, VP of Product at Google, it was noted that AI queries can involve dozens of subqueries behind the scenes (known as query fan-out). This suggests a level of complexity that demands a more holistic approach.

To succeed in this environment, brands must move beyond optimizing pages and instead build knowledge graphs, or rather, a data layer that represents the full context of their business.

The Semantic Web Vision, Realized

What’s really exciting is that the vision for the semantic web is here. As Tim Berners-Lee, Ora Lassila, and James Hendler wrote in “The Semantic Web” (Scientific American, 2001):

“The Semantic Web will enable machines to comprehend semantic documents and data, and enable software agents roaming from page to page to execute sophisticated tasks for users.”

We’re seeing this unfold today, with transactions and queries happening directly within AI systems like ChatGPT. Microsoft is already preparing for the next stage, often called the “agentic web.” In November 2024, RV Guha – creator of Schema.org and now at Microsoft – announced an open project called NLWeb. The goal of NLWeb is to be “the fastest and easiest way to effectively turn your website into an AI app, allowing users to query the contents of the site by directly using natural language, just like with an AI assistant or Copilot.”

In a recent conversation I had with Guha, he shared that NLWeb’s vision is to be the endpoint for agents to interact with websites. NLWeb will use structured data to do this:

“NLWeb leverages semi-structured formats like Schema.org…to create natural language interfaces usable by both humans and AI agents.”

Turning The Dark Funnel Into An Intelligent One

Just as we lack real metrics for measuring brand performance in ChatGPT and Perplexity, we also don’t yet have full metrics for schema markup’s role in AI visibility. But we do have clear, consistent signals from Google and Microsoft that their AI experiences do, in part, use structured data to understand content.

The future of marketing belongs to brands that are both understood and trusted by machines. Structured data is one factor towards making that happen.



Featured Image: Roman Samborskyi/Shutterstock

Google On Generic Top Level Domains For SEO via @sejournal, @martinibuster

Google’s John Mueller answered a question about whether a generic Top Level Domain (gTLD) with a keyword in it offered any SEO advantage. His answer was in the context of a specific keyword TLD, but the topic involves broader questions about how Google evaluates TLDs in general.

Generic Top Level Domains (gTLDs)

gTLDs are domains that have a theme that relates to a topic or a purpose. The most commonly known ones are .com (generally used for commercial purposes) and .org (typically used for non-profit organizations).

The availability of unique keyword-based gTLDs exploded in 2013. Now there are hundreds of gTLDs with which a website can brand itself and stand out.

Is There SEO Value In gTLDs?

The person asking the question on Reddit wanted to know if there’s an SEO value to registering a .music gTLD. The regular .com version of the domain name they wanted was not available but the .music version was.

The question they asked was:

“Noticed .music domains available and curious if it is relevant, growing etc or does the industry not care about it whatsoever? Is it worth reserving yours anyways just so someone else can’t have it, in case it becomes a thing?”

Are gTLDs Useful For SEO Purposes?

Google’s John Mueller limited his response to whether gTLDs offer SEO value, and his answer was no.

He answered:

“There’s absolutely no SEO advantage from using a .music domain.”

The funny thing about SEO is that Google’s standard of relevance is based on humans while SEOs think of relevance in terms of what Google thinks is relevant.

This sets up a huge disconnect: SEOs on one side create websites that are keyword-optimized for Google, while Google itself analyzes billions of user behavior signals because it’s optimizing search results for humans.

Optimizing For Humans With gTLDs

The thing about SEO is that it’s search engine optimization. When venturing out on the web, it’s easy to forget that every website must be optimized for humans, too. Aside from spammy TLDs, which can be problematic for SEO, the choice of a TLD isn’t important for SEO, but it could be important for Human Optimization.

Optimizing for humans is a good idea because human interactions with search engines and websites generate signals that Google uses at scale to better understand what users mean by their queries and what kinds of sites they expect to see for those queries. Some user-generated signals, like searching by brand name, can send Google a signal that a particular brand is popular and is associated with a particular service, product, or keyword phrase (read about Google’s patent on branded search).

Circling back to optimizing for humans, if a particular gTLD is something that humans may associate with a brand, product, or service then there is something there that can be useful for making a site attractive to users.

I have experimented in the past with various gTLDs and found that I was able to build links more easily to .org domains than to the .com or .net versions. That’s an example of how a gTLD can be optimized for humans and lead to success.

I discovered that overtly commercial affiliate sites on .org domains ranked and converted well. They didn’t rank because they were .org, though. The sites were top-ranked because humans responded well to the sites I created with that gTLD. It was easier to build links to them, for example. I have no doubt that people trusted my affiliate sites a little more because they were created on .org domains.

Optimizing for humans is conversion optimization. It’s super important.

Optimizing For Humans With Keyword-Based gTLDs

I haven’t played around with keyword gTLDs but I suspect that what I experienced with .org domains could happen with a keyword-based gTLD because a meaningful gTLD may communicate positive feelings or relevance to humans. You can call it branding but I think that the word “branding” is too abstract. I prefer the phrase optimizing for humans because in the end that’s what branding is really about.

So maybe it’s time we ditched all the bla-bla-bla-ing about branding and started talking about optimizing for humans. If that person had considered the question from the perspective of human optimization, they may have been able to answer the question themselves.

When SEOs talk about relevance it seems like they’re generally referring to how relevant something is to Google. Relevance to Google is what was top of mind to the person asking the question about the .music gTLD and it might be why you’re reading this article.

Heck, relevance to search engines is what all that “entity” optimization hand waving is all about, right? Focusing on being relevant to search engines is a limited way to chase after success. For example, I cracked the code with the .org domains by focusing on humans.

At a certain point, if you’re trying to be successful online, it may be useful to take a step back and start thinking more about how relevant the content, colors, and gTLDs are to humans. You might discover that being relevant to humans makes it easier to be relevant to search engines.

Featured Image by Shutterstock/Kues

From Listings to Loyalty: The New Role of Local Search in Customer Experience

Ask yourself the following:

  • Do you reply to reviews?
  • Do you engage?
  • Do you make the interaction feel personal?
  • Do you follow through on your promises?
  • Do you keep information consistent across every platform?
  • Do you share fresh updates (e.g., photos, posts, or promotions) that show you’re active?
  • Do you provide transparent details like pricing, wait times, or insurance accepted?

If you answered no to any of the above, it’s time to switch to a brand experience mentality. That shift shows up clearly in the data. Six in ten people say they at least sometimes click on Google’s AI-generated overviews, which means discovery is no longer only about traditional rankings. It’s about whether your brand shows up well when search engines pull together information in context.

Reputation follows the same logic. In Rio SEO’s latest study, three out of four consumers said they read at least four reviews before deciding where to go. And it’s not just the rating itself. Many put just as much weight on whether a business responds; silence feels like neglect, while engagement signals you’re listening.

The clock has also sped up. Nearly six in ten customers now expect a reply within 24 hours, a sharp jump from last year. For many, that means a same-day response is the expectation. Fast, human replies aren’t a nice touch anymore; they’re the baseline.

The major search platforms reinforce this reality. Google’s local pack favors businesses that post fresh photos, keep details up to date, and engage with reviews (not just negative reviews, but positive ones too). Apple Maps is becoming harder to ignore as well: Rio SEO’s research reveals that about a third of consumers now use it frequently. With Siri, Safari, and iPhones all pulling from Apple Business Connect by default, accurate profiles there can tip the balance just as much as on Google.

Put it all together, and the picture is clear: search visibility and customer experience are already intertwined. The brands thriving in 2025 treat local search as part of a unified Brand Experience strategy, and Rio SEO helps brands stay visible, responsive, and trusted wherever customers are searching.

The BX Advantage: Connecting Signals to Action

Every brand gathers signals. Search clicks, review scores, survey feedback: it all piles up. The trouble is that most of it never makes it past a slide deck. Customers don’t feel or see the difference.

That’s where Brand Experience (BX) comes in. BX connects visibility and reputation with actionable insights, so signals don’t just sit in a dashboard.

At Rio SEO, we put BX into motion. Our Local Experience solutions help brands connect discovery with delivery and turn what customers see in search into what they feel in real life. It’s the bridge between data and experience, helping enterprise marketers identify patterns, respond faster, and build trust at every location.

The goal isn’t to watch the numbers. It’s to quickly identify and make changes customers notice, such as faster check-ins, smoother booking, and clearer answers in search, all of which amount to better experiences and outcomes for customers and employees alike.

Technology helps make this possible. AI platforms now tie search data, reviews, and feedback into one view. With predictive analytics layered in, teams can see trouble before it shows up at the front desk or checkout line. And with Google’s AI Overviews and Bing’s Copilot changing how people discover businesses, brands that prepare for those formats now will have an edge when others are still catching up.

Industry context shapes how this plays out. A retailer might connect “near me” searches to what’s actually on the shelf that week. A bank has to prove reliability every time someone checks a branch profile. A hospital needs to make sure that when a patient searches for “urgent care,” the hours, insurance info, and provider reviews are accurate that very day. Different settings, same principle: close the gap between what people see online and what they experience in real life.

And this isn’t just about dashboards. The real win comes from acting quickly on what the signals show. Think about two retailers with dipping review scores. One shrugs and logs it. The other digs deeper, notices the complaints all mention stockouts in one region, and shifts supply within days. Customers stay loyal because the brand responded, not because it had a prettier chart.

That’s the difference BX is designed to create. Reports tell you what already happened. Acting on those signals shapes what happens next.

The New Mandate for Marketing Leaders

In the experience economy, BX isn’t abstract; it’s actionable. And Rio SEO gives brands the tools, data, and automation to operationalize it, turning every search, review, and update into a moment that builds loyalty and long-term growth.

Today’s marketing leaders aren’t being judged on traffic spikes anymore. What matters now is whether customers stick around, how much value they bring over time, and what it costs to serve them. That shift changes everything about the role of local search and puts Brand Experience (BX) at the center of the conversation.

When search is treated as a checklist—hours updated, pin fixed, job done—brands miss the bigger opportunity. Worse, they give ground to competitors who recognize that discovery is experience, and experience drives revenue.

BX gives CMOs and marketing leaders a framework for connecting visibility, reputation, and responsiveness. It bridges the gap between what people see in search and what they experience when they engage. And that’s where Rio SEO delivers real advantage: by giving brands the unified data, automation, and insights to make BX tangible in every market, every listing, and every moment.

You can see the difference in how leaders approach it across different industries:

  • Retail: Linking “near me” searches directly to in-stock inventory so shoppers know what’s available before they walk in.
  • Restaurants: Connecting menu updates and “order online” links directly to local search profiles, so when a customer searches “Thai takeout near me,” they see real-time specials, accurate hours, and an easy path to order.
  • Financial Services: Displaying verified first-party reviews on branch profiles to boost credibility and reassure customers choosing where to bank.

Image by Rio SEO, Nov 2025

The common thread is dependability. Local search is no longer about being visible once. It’s about proving, again and again, that your brand can be trusted in the small but decisive moments when customers are making up their minds. BX provides the vision; Rio SEO provides the infrastructure to bring it to life: connecting discovery with loyalty in a world where customers expect precision, empathy, and instant answers.

The Strategic Case for Local Search

The business case for local search doesn’t sit on the margins anymore. It ties directly to growth, trust, and efficiency. Within a Brand Experience (BX) framework, it links customer intent with measurable business outcomes, and Rio SEO gives brands the precision tools to manage that connection at scale.

Revenue Starts Here

Local search is full of high-intent signals: someone taps “call now,” asks for directions, or books an appointment. These actions mark crucial moments that can lead to sales, often within hours. In fact, most local searchers buy within 48 hours: three-quarters of restaurant seekers and nearly two-thirds of retail shoppers. That urgency makes consistency and accessibility non-negotiable.

Trust is Built in the Details

Reviews have become a kind of reputation currency, and customers spend it carefully. Three out of four people read at least four reviews before making a choice. If the basics are wrong—a missing phone number, the wrong hours—trust evaporates. More than half of consumers say they won’t visit a business if the listing details are off. Rio SEO’s centralized platform keeps data clean and consistent, ensuring that every profile communicates reliability, the foundation of trust in BX.

Efficiency That Pays for Itself

Every time insights from search and feedback flow back into operations, friction disappears before it gets expensive. Accurate listings mean fewer misrouted calls. Quick review responses calm frustration before it snowballs. Clear online paths reduce the burden on service teams.

In healthcare, that can mean shorter call center queues. In financial services, fewer “where do I start?” calls during onboarding. For retailers, avoiding wasted trips when hours are wrong keeps customers coming back instead of leaving disappointed. Each fix trims cost-to-serve while strengthening trust—a rare double win. Rio SEO automates these workflows, saving teams time while enhancing experience quality.

Your Edge Over the Competition

Too many organizations still keep SEO and CX in separate lanes. BX unites them, and Rio SEO operationalizes that unity. The ones who bring those signals together see patterns earlier, act faster, and pull ahead of rivals who are still optimizing for clicks instead of experiences.

The Power of Brand Experience

BX blends rigorous data with customer-centric urgency. It gives leaders a way to not only show up in search but to be chosen, trusted, and remembered.

Winning the Experience Economy Starts in Local Search

Search no longer waits for a typed query. With AI Overviews, predictive results, and personalized recommendations, it increasingly anticipates what people want and surfaces the businesses most likely to deliver.

That shift raises the bar. In this new environment, local search isn’t a maintenance task but rather the front line of Brand Experience (BX). Accuracy, responsiveness, and reputation aren’t side jobs anymore; they’re the signals that decide who gets noticed, who gets trusted, and who gets passed over.

The companies setting the pace already treat local presence as a growth engine, not a maintenance task. They link discovery with delivery, reviews with real replies, and feedback with action. Competitors who don’t will find themselves playing catch-up in an economy where expectations reset every day.

The message is clear: customers don’t separate search from experience, and neither can you. Local search is now where growth, trust, and efficiency intersect. Handle it as a checklist, and you’ll fall behind. Treat it as a lever for Brand Experience, and you’ll define the standard others have to meet.

That’s where Rio SEO makes the difference. We help enterprise brands connect the dots between visibility, data, and experience, empowering marketers to act on signals faster, measure impact clearly, and deliver consistency at scale. With Rio SEO, brands don’t just show up in search; they stand out, stay accurate, and turn visibility into measurable growth.

Image by Rio SEO, Nov 2025

Ready to lead in the era of AI-driven discovery?

Partner with Rio SEO to transform your local presence into a connected, data-powered experience that builds trust, drives action, and earns loyalty at every location, on every platform, every day.

Learn more about Rio SEO’s Local Experience solutions today.