Google Engineer Explains ‘Black Box’ AI Models In Search via @sejournal, @MattGSouthern

Nikola Todorovic, Director of Software Engineering at Google Search, appeared on an episode of Search Off the Record to discuss how AI evolved inside Google Search.

Todorovic leads Google’s SafeSearch engineering team and has worked in the search organization for 15 years. He said machine learning was difficult to deploy broadly across Search because complex models are harder to understand and fix than simpler systems.

He was explaining why Google could not simply apply ML systems across Search at once. Todorovic said these models can “function like a kind of a black box” because engineers don’t always understand what happens underneath.

That makes debugging harder when search systems change over time or when a model needs to be replaced, he said.

SafeSearch As Proving Ground

Todorovic said SafeSearch was one of the first places where Google could deploy AI models in Search because the team could isolate those systems from the main ranking flow.

SafeSearch could run standalone image and video classifiers that produced a signal, such as how explicit a result might be. If problems came up, engineers could iterate on the model without disrupting the rest of Search.

Convolutional neural networks began improving image understanding about 12 years ago, he said, making SafeSearch a natural early use case for machine learning inside Search.

AI Overviews Built On Existing Search

Todorovic described AI Overviews as a feature that “stamps on top” of Google’s existing retrieval and ranking systems. He said the retrieval and ranking underneath AI Overviews is still what he called “the old style, the old school.”

The process can involve fan-out queries, he said. Google may identify additional queries related to the original input, run them in parallel, and bring the retrieved results back into one response.

AI Overviews then combine and summarize information from selected results, including source text, snippets, titles, and other page context, he said.
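The fan-out pattern described above can be sketched in a few lines of Python. This is purely illustrative, not Google's implementation; the `retrieve` function is a hypothetical stand-in for whatever retrieval backend would serve each query.

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve(query: str) -> list[str]:
    # Hypothetical stand-in for a retrieval backend;
    # a real system would query a search index here.
    return [f"{query}::result-{i}" for i in range(3)]

def fan_out(original_query: str, related_queries: list[str]) -> list[str]:
    """Run the original query plus related queries in parallel,
    then merge the retrieved results into one deduplicated list."""
    queries = [original_query] + related_queries
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        result_lists = list(pool.map(retrieve, queries))
    merged, seen = [], set()
    for results in result_lists:
        for result in results:
            if result not in seen:
                seen.add(result)
                merged.append(result)
    return merged

results = fan_out(
    "ev charger ford lightning",
    ["ford lightning charging speed", "best home ev chargers"],
)
```

The merged pool of results is then what a summarization layer would draw from when composing a single response.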

AI Mode follows a similar pattern but operates with more independence, Todorovic said. He described it as still running on Search, while having a “bigger platform for its own.”

Why This Matters

The “black box” quote is getting attention, but the full context matters. Todorovic was explaining why machine learning was difficult to deploy broadly across Search, not saying Google lacks oversight of AI Overviews or AI Mode.

His comments add useful context to Google’s existing AI Search documentation. Google has already said AI Overviews and AI Mode may use query fan-out, issuing multiple related searches across subtopics and data sources to develop responses.

The useful point is not that AI is a “black box.” His comments reinforce that traditional Search systems still matter for AI Overviews, even as Google layers summarization and fan-out on top.

That keeps traditional Search fundamentals relevant to AI features, even as Google changes how results are summarized and presented.

Looking Ahead

The difference between AI Overviews and AI Mode is worth watching as Google expands AI Mode. Todorovic described AI Overviews as more isolated from the rest of Search, while AI Mode has more of its own infrastructure.

That difference may matter for how Google explains visibility, measurement, and optimization guidance as AI Mode expands.

Your Website Is A Source, Not A Megaphone via @sejournal, @slobodanmanic

There’s a lesson from the early days of social media that most brands eventually learned the hard way: Social media is not a megaphone.

You couldn’t just broadcast your press releases into the feed and expect people to care. The channel had rules. It rewarded conversation, not announcements. The companies that figured this out early thrived. The rest spent years shouting into a void, wondering why nobody was engaging.

We’re watching the same mistake happen again, just one layer deeper. This time it’s not about which platform you’re on. It’s about assuming your website is where the message lives.

Why Most Websites Break When AI Agents Read Them

Most websites are still built on a core assumption: Someone will arrive at your front door, navigate your carefully designed pages, and consume your message in the exact sequence and format you intended.

That assumption is breaking.

In 2026, your website is no longer the only interface to your content. An AI agent might summarize your service page for someone mid-conversation. A voice assistant might read your pricing aloud, stripped of all visual hierarchy. A research tool might pull three paragraphs from your blog, recontextualize them alongside a competitor’s, and present them in a comparison the user never asked you for. Someone might never visit your site and still make a decision based entirely on what your website says.

If your message only works when it’s wrapped in your layout, your fonts, your carefully choreographed scroll, you don’t have a message. You have a brochure. And brochures don’t travel well.

The shift that’s happening is subtle but fundamental: You need to design the message independently of the medium.

This doesn’t mean your website stops mattering. It means your website is now one of many surfaces where your message might land. And the message has to hold up in all of them. It has to make sense when it’s read in full, when it’s summarized in three sentences, when it’s pulled apart and reassembled by something you didn’t build and don’t control.

That changes how you write. It changes how you structure information. It changes what you think of as “the product” of your content work.

Here’s a simple test: If there’s a single “Lorem ipsum” anywhere in your website while it’s being built, the message came second. The design came first. That order no longer works.

A few things this means in practice:

Your core message needs to be extractable. If an agent grabs one paragraph from your website, does that paragraph carry weight on its own, or does it collapse without the paragraphs around it?

Your value proposition can’t hide behind design. Bold typography and hero animations don’t travel through an API. The words have to do the work.

Structure becomes a form of portability. Clear headings, logical hierarchy, well-defined claims. These aren’t just good for traditional SEO anymore. They’re how machines parse your intent and relay it accurately.

You need to think about your content the way a news agency thinks about a wire story. The story has to work no matter which publication picks it up, no matter how they crop it, no matter what headline they slap on it. The facts and the narrative have to be embedded in the text itself, not in the presentation layer.

Brand Control When AI Recontextualizes At Scale

There’s a natural resistance to this idea. “If I don’t control the experience, how do I control the brand?” But that’s the megaphone instinct talking. The desire to control exactly how every word lands, in exactly the right font, with exactly the right whitespace. That was always a bit of an illusion anyway. People skim. People read on phones in bad lighting. People copy-paste your pricing into a Slack thread with zero context.

The difference now is that the recontextualization is happening at scale, automatically, and often before a human even sees it.

So, the question isn’t how to prevent that. It’s how to make sure your message is strong enough to survive it.

Websites As Canonical Sources, Not Just Destinations

Your website still matters. But its job description has changed.

Your website is no longer just a destination. It’s a source. It’s the canonical, structured, well-maintained origin point from which your message gets picked up, interpreted, summarized, and carried elsewhere. The better that source material is, the better it travels.

Think of it this way: Your website used to be the store. Now, it’s also the warehouse. And the warehouse needs to be organized well enough that anyone (human or machine) can find what they need, understand what it means, and carry it somewhere else without losing the plot.

The companies that get this right will be the ones whose message shows up clearly, no matter where the conversation is happening. The ones that don’t will keep designing beautiful megaphones, and keep wondering why the room isn’t listening.

This post was originally published on No Hacks.


Featured Image: Pixel-Shot/Shutterstock

500M AI Searches Later: How To Actually Improve AI Search Visibility & Citations via @sejournal, @hethr_campbell

What signals actually drive AI search visibility?

Are competitors getting cited in AI Overviews while you’re watching from the sidelines?

How do you go from AI visibility gap alerts to a system that closes them?

Most SEO teams already have dashboards showing where they’re invisible in AI search. Few have a process to fix it.

Learn To Turn AI Search Visibility Data Into A High-Visibility System

Reconnect with Sam Garg, Founder and CEO of Writesonic, as he shares his practical framework for diagnosing citation gaps, prioritizing the right actions, and automating execution with AI agents and free open-source SEO & GEO tools.

You’ll Learn:

  • What drives AI citations: Visibility signal analysis from 500M+ AI conversations. You’ll learn which content types, sources, and placements actually get cited in ChatGPT, Perplexity, and Gemini.
  • GEO tasks that move the needle: Citation outreach, content refresh, and third-party placements, plus how to use AI agents and open-source tools to automate them.
  • Where AI search is headed next: Early signals on AI ecommerce and the shift from recommendations to transactions for your channel strategy.

This SEO webinar session covers what 500M+ AI conversations reveal about how citations are earned, which actions actually move the needle (citation outreach, content refresh, third-party placements), and how to use autonomous AI agents to execute at scale.

Watch on-demand now to get the most data-backed, actionable guidance available on improving your brand’s AI search visibility.

ChatGPT vs. Perplexity vs. Gemini: Which LLMs Are Driving Real Conversions? [Expert Panel] via @sejournal, @hethr_campbell

AI search is sending high-intent traffic, but not equally across platforms.

Which LLM is actually driving conversions in your clients’ verticals?

Should GEO efforts be concentrated on ChatGPT versus Perplexity or Gemini?

How do you build an AI search reporting framework clients will actually trust?

Watch the on-demand webinar now to get conversion data by LLM.

How To Identify & Focus On The LLM That Works For You

Not every LLM deserves equal optimization effort.

Misallocating that effort is costing your clients rankings, leads, and revenue.

In this on-demand GEO webinar, join Natalie Ann and our expert panel for a breakdown of which platforms are driving measurable results, and how to build an AI search strategy backed by conversion data.

You’ll Be Able To:

  • Identify which LLMs drive the highest conversion rates in your clients’ industries
  • Prioritize GEO spend and content optimization based on platform-level performance data
  • Package LLM optimization as a billable service with reporting that proves impact to clients

Watch now, follow along below, and be ready to rethink how you’re allocating AI search effort.

How Brands Are Increasing AI Visibility By Up To 2,000% [Webinar] via @sejournal, @hethr_campbell

The answer is Reddit, and yes, this 90-day strategy is worth your time.

Most brands treat Reddit as an afterthought.

However, Reddit is where buyers finalize their purchase decisions.

Reddit is where human trust gets built.

Therefore, Reddit serves as a trust signal for how AI search tools determine which brands are worth recommending.

AI Mentions & Cites Brands Based On Trust Signals, Across Channels

When ChatGPT, Perplexity, or Google AIO recommends a brand, it’s drawing on a web of signals that indicate the brand is credible, relevant, and mentioned by real people in real contexts.

Reddit is one of the most authentic of those signals.

Your opportunity: not Reddit instead of other channels, but Reddit as a meaningful addition to the multi-channel trust footprint AI reasons from.

One brand OGS Media worked with saw 2,000% AI visibility growth in 90 days after building a genuine Reddit presence. That’s the strategy Bartosz and Brent are unpacking on May 5.

What You’ll Learn In This AI Search Webinar

  • How Reddit community content contributes to the multi-channel trust signals AI uses to evaluate and surface brands
  • The 5-stage framework behind OGS Media’s 2,000% AI visibility result
  • The 7 most common Reddit mistakes brands make
  • What authentic subreddit engagement looks like when it’s actually working
  • How to find and engage in Reddit conversations that influence both buyers and AI

About the Speakers

Bartosz Goralewicz is the CEO of OGS Media and one of the most experienced Reddit marketing practitioners in SEO. Brent Csutoras is a Reddit Official Advisor and the Owner of Search Engine Journal, with nearly two decades of hands-on Reddit strategy for brands across every major vertical.

AI Search Clicks Often Go To Local Domains: Report via @sejournal, @MattGSouthern

Aleyda Solis, founder of Orainti, analyzed 87 million AI search visits across 10 markets, finding most clicks go to local domains rather than global defaults.

Using Similarweb data, she examined more than 57,000 domain-market entries in the ‘click-producing layer.’ This layer includes visits to a domain after users click citations or links in AI-generated answers.

The analysis complicates the assumption that the biggest global brands automatically dominate AI search results.

The Main Pattern

In non-US markets, local domains with stronger signals drive the click layer. For example, Bol.com leads in Dutch ecommerce, MercadoLivre in Brazil, Bahn.de in Germany, and Lefrecce.it in Italy, ahead of global competitors like Amazon or Booking.com.

Solis suggests this reflects who has the usable answer locally, not brand size. For instance, Lefrecce has train route data for Milan to Rome, while Booking.com does not. Thus, AI search visibility often depends on local infrastructure.

Different Verticals, Different Rules

In ecommerce, five domains account for 50% of clicks, with platforms like Amazon dominating. Finance is less concentrated, requiring 17 domains to reach that mark, while travel is highly fragmented with 47. Within finance, Stripe ranks first in 7 of 10 markets, driven by demand from B2B, developers, merchants, and infrastructure rather than consumers.

PayPal leads in Germany and Italy. The investing sub-category accounts for 22.4% of finance AI clicks, with TradingView ranking in the top 20 across all markets. Travel discovery and booking are more dispersed. Italy’s ecommerce is concentrated, with Amazon.it capturing 46.2% of clicks; combined with Temu, over half. UK travel requires 129 domains for 50% of clicks.

Growth Is Uneven

The report reveals churn behind overall growth. The median monthly growth for the top 50 domains was +20% in ecommerce, +25% in finance, and +29.1% in travel. Many markets and verticals saw about 30% to 40% of top domains decline, e.g., Spain ecommerce with 21 of 49 domains and France finance with 22 of 50.

Solis notes that weighted averages can be distorted by small-base spikes, citing domains like azulviagens.com.br and innovasport.com with large one-month jumps that warrant investigation rather than being read as trends. Momentum offers more insight than a static snapshot: a declining top domain may require more focus than a steady top-50 position.

Why This Matters

For brands working across multiple markets, the data suggests that AI search competitors may not be the same competitors they track in traditional SEO.

In Italian travel, the key domain for rail intent may be Lefrecce.it. In Dutch ecommerce, it may be Bol.com. In German travel, it may be Bahn.de.

Solis recommends a straightforward audit question: who holds the operational data, structured inventory, or institutional trust that AI needs for category tasks in each market?

Looking Ahead

The report highlights three gaps for international brands: presence in AI-driven answers, click acquisition, and domain ownership of customer relationships.

Solis plans to update the analysis monthly. The next pull will show whether the local-domain pattern holds.


Featured Image: RobinRmD/Shutterstock

Your AI Visibility Tracker Is Quietly Breaking Your Analytics And Your Strategy via @sejournal, @TaylorDanRW

Jan-Willem Bobbink shared a take on X: AI visibility trackers are quietly breaking the analytics of the brands that pay them for tracking. It's time we put more focus on this issue, as it is causing misalignment, misreporting, and misspending of resources and marketing budget in the clamor to be more visible in AI.

Screenshot from X, April 2026

Jan-Willem hits on the issue of the lack of attribution in RAG loops. When a tracker triggers a prompt, and that prompt triggers a fetch, the brand is essentially paying a tool to generate its own AI visibility, and it begins to report on itself.

This is known as the ouroboros effect, a term you will likely see appearing more and more in the SEO industry as we describe AI/LLMs.

Pedro Dias has recently covered this ouroboros effect, in which AI starts to quote itself.

A large number of AI visibility tools have received significant amounts of funding in recent months, and some of them charge brands tens of thousands of dollars to “track” visibility, but this looping effect is beginning to become a reality, and how third-party tools track AI visibility will have a knock-on effect.

One example I point back to a lot is the drop in citations that ChatGPT produced when it released the 5.0 model in August 2025.

A number of tools that provide ChatGPT visibility tracking saw their graphs decline, not because websites had violated spam policies or their short-termist tactics had run their course, but because of how the tools tracked citations and because the model simply produced fewer of them. This isn't a measure of visibility but a rehashed version of rank tracking, and these graphs can cost vendor contracts, incorrectly inform budget spending, and create false panic (or false celebration).

The Dangers Of The Observer Effect

In physics, the observer effect states that the act of monitoring a phenomenon changes it. This is happening in real-time for the SEO industry.

Most LLM trackers use a headless browser or a specialized API. When Perplexity or ChatGPT “searches” for fresh info to answer your tracker’s prompt, it doesn’t just hit your homepage; it performs a RAG fetch and can hit multiple URLs.

Because these bots often rotate IPs/proxies or use “stealth” headers to avoid being blocked by anti-scraping walls, they look like legitimate organic discovery crawls. Many rank tracking tools have operated this way for years.

Because of this, you might report to a client or other stakeholders that “AI interest in our product pages is up 40%,” when in reality 35% of that was just your own tracking tool refreshing its cache, or other vendors’ tools querying your pages because you are a competitor of their clients.

AI Tracking Noise Is Worse Than Rank Tracking Noise

As Jan-Willem noted, we used to ignore rank tracker noise in Google Search Console because impressions were a “soft” metric. But log file data is hard data used for infrastructure, understanding how bots are accessing your website (server log file analysis), and now, in the age of AI, understanding how AI platforms are interacting with your site.

When you present a report to your client, peers, or your chief marketing officer, you are trying to prove brand preference within a large language model. If your data is polluted by your own tracking (and other people’s tracking), you risk a “false positive” strategy.

You might double down on content that isn’t actually popular with real AI users, but is simply the content your tracking tool happens to trigger most often.

What To Do Right Now

Until a vendor builds the “Clean Log” API Jan-Willem is calling for, you have to treat log files with skepticism.

Run your tracking tools on a “quiet” staging environment or a specific set of sacrificial URLs to measure the “noise floor” created by the tool itself.

Look for specific patterns (user-agent fingerprinting) in the logs that correlate with your tool’s scan times. Even if IPs rotate, the timing often shows patterns that can be identified easily.
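As a rough illustration of that correlation step, the sketch below flags log entries that fall inside a tracker's known scan windows. The log entries and scan schedule are hypothetical placeholders; in practice you would parse your real server logs and use the scan times reported by your tool.

```python
from datetime import datetime, timedelta

# Hypothetical parsed log entries: (timestamp, requested URL).
log_entries = [
    (datetime(2026, 4, 1, 2, 0, 5), "/pricing"),
    (datetime(2026, 4, 1, 2, 0, 9), "/features"),
    (datetime(2026, 4, 1, 14, 33, 0), "/pricing"),
]

# Assumed: your tracker's scheduled scan windows (start, duration).
scan_windows = [(datetime(2026, 4, 1, 2, 0, 0), timedelta(minutes=5))]

def in_scan_window(ts, windows):
    # True when the timestamp falls inside any known scan window.
    return any(start <= ts <= start + length for start, length in windows)

tool_noise = [e for e in log_entries if in_scan_window(e[0], scan_windows)]
organic = [e for e in log_entries if not in_scan_window(e[0], scan_windows)]
noise_share = len(tool_noise) / len(log_entries)  # fraction of fetches that are self-inflicted
```

Even this crude split gives you a "noise floor" estimate to subtract before reporting AI fetch volumes.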

And stop reporting “total AI fetches” as a success metric. Focus on how often your brand is mentioned relative to competitors, which is a metric derived from the LLM output, not your server logs.

Featured Image: Master1305/Shutterstock

The 90-Day GEO Playbook for Local Search: How To Show Up When AI Does The Searching

This post was sponsored by Uberall. The opinions expressed in this article are the sponsor’s own.

Local consumers have stopped searching the way we built our marketing around.

This significant change in buyer habits has been quietly happening in the last 18 to 24 months.

According to recent Uberall research into AI search behavior, an estimated $750 billion in consumer spend is already shifting toward AI-powered search. Roughly 60% of all searches now end without a single click to a website. And in a finding that should stop every marketer cold, or at least those working for multi-location businesses, 68% of brands are missing entirely from the recommendations AI engines generate in their category.

That problem goes beyond channels. It’s a fast-moving visibility problem that risks affecting conversions and revenue.

Generative Engine Optimization (GEO) is the discipline built for this moment. Where SEO optimized pages for a ranking, GEO optimizes entities for a recommendation.

The goal is no longer just to be found in Search Engine Results Pages (SERPs). It’s to be cited, summarized, and trusted when a model answers on your customer’s behalf.

In GEO, three pillars carry the weight. If you’ve worked in SEO for any length of time, the shape will look familiar — compounding visibility isn’t new, it’s the surface that’s changed.

  • Source of truth. The basic facts about your brand (name, address, hours, services) need to match everywhere a model might look. Inconsistent signals train AI engines to trust you less.
  • Context engineering. Your content has to answer the questions customers actually ask, in the language they ask them. Of course, conversational answers should take priority over keyword clusters.
  • Orchestration. You measure citations, refresh content, and compound visibility over time.

Here is how those three pillars translate into a realistic 90-day plan teams can actually run.

Phase 1 (Week 1): Foundational Analysis

You cannot optimize what the model cannot parse. The first week is a data hygiene sprint, rather than a content sprint.

Start with the local SEO basics most teams assume are already clean:

  • Audit your NAP details (Name, Address, Phone) across Google Business Profiles, Apple Maps, Yelp, Bing Places, and the major data aggregators. Even small inconsistencies — a missing suite number, an old phone format, a rebrand that never propagated — train AI engines to treat your brand as a lower-confidence entity.
  • Check your location pages, about page, and product pages for structured data. Schema isn’t a magic AI switch — recent tests suggest LLMs largely read it like any other on-page text. What it does is reduce ambiguity about what your business is and does, and that clarity is what helps a model interpret and cite you correctly.
  • Type the questions your customers actually ask into ChatGPT, Gemini, Perplexity, and Google AI Overviews. Not branded queries – real ones like “best orthodontist near Lincoln Park,” “which EV charger works with a Ford Lightning,” “coffee shops in Berlin that allow dogs.” Note where you appear, where you don’t, and which competitors show up instead.

That gap list becomes your brief for the next 80 days. It’s also where most brands discover the blind spots they didn’t know they had.
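The NAP audit in the first step can be approximated in code. The listings, field values, and normalization rules below are simplified placeholders; real aggregator data needs more robust normalization.

```python
import re

# Hypothetical NAP records pulled from each listing platform.
listings = {
    "google": {"name": "Acme Dental", "address": "12 Main St, Suite 4", "phone": "(512) 555-0134"},
    "yelp":   {"name": "Acme Dental", "address": "12 Main Street",      "phone": "512-555-0134"},
    "apple":  {"name": "Acme Dental", "address": "12 Main St, Suite 4", "phone": "5125550134"},
}

def normalize_phone(phone: str) -> str:
    return re.sub(r"\D", "", phone)  # keep digits only

def normalize_address(addr: str) -> str:
    addr = addr.lower().replace("street", "st")
    return re.sub(r"[^a-z0-9 ]", "", addr).strip()

def nap_inconsistencies(listings: dict) -> dict:
    """Return the fields whose normalized values differ across platforms."""
    issues = {}
    for field, norm in [("name", str.lower), ("address", normalize_address), ("phone", normalize_phone)]:
        values = {platform: norm(record[field]) for platform, record in listings.items()}
        if len(set(values.values())) > 1:
            issues[field] = values
    return issues

issues = nap_inconsistencies(listings)  # flags the address mismatch above
```

Here the phone numbers reconcile after normalization, but the address variants do not (one is missing the suite number), which is exactly the kind of low-confidence signal the audit is meant to surface.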

Phase 2 (Days 7–30): Context Engineering And Targeted Content

Once you know which prompts you’re missing from, the work becomes specific. For each blind spot, you are building the content a model would actively want to cite.

A few patterns that hold up across industries:

  • One prompt, one page. If “best family dentist in Austin with Saturday hours” returns three competitors and none of your locations, build or optimize the pages that answer exactly that. Don’t bury the answer three scrolls down.
  • Write for the question, not the keyword. AI engines extract complete answers, not phrases. A well-structured FAQ with direct, factual responses often outperforms a 2,000-word, keyword-stuffed guide that dances around the point.
  • Cite yourself credibly. Include dates, local details, original data, named authors, and explicit comparisons. Models reward specificity and downgrade vague claims.

This is the phase where content that actually gets cited starts to look different from content built for the old ranking game. It is tighter, more factual, and structured around how someone would ask a question out loud.

Phase 3 (Days 30–60): Surgical Placement & Off-Page Authority

Off-page authority still matters. The economics, however, have flipped.

The instinct is to chase top-tier publishers. For GEO, that is usually the wrong move.

The sites that generative engines pull from most often aren’t always the ones with the highest domain authority. They’re the ones relevant to your business that get cited most frequently, even if they’re not huge publications.

A more effective approach:

  • Focus on sites that already rank in Google for the prompts your customers use — the kind of credible, topical sources you’d want them to find when they’re researching. Top-tier placement isn’t the goal; any authoritative site that actually serves your audience counts.
  • The publishers AI engines already cite in your category are the ones models trust enough to source from. Re-run your Phase 1 prompts, track which domains keep appearing in the citations, and that’s your shortlist.
  • Size and prestige aren’t reliable proxies for AI citation rates. A specialist publication with real topical authority in your category often earns more AI citations than a bigger, more generic name.

The goal isn’t link volume. It is being mentioned, in context, in the sources your category’s models already trust.

Phase 4 (Days 60–90): Orchestration And Compounding

By day 60, you should have new content live, citations starting to show up on publisher sites, and enough signal to measure. Phase 4 is where GEO stops being a project and starts being a system.

Three metrics worth tracking weekly:

  • AI citation rate — how often your brand is named in AI-generated answers for your priority prompts.
  • Share of Voice — your citation rate relative to competitors across the same prompt set.
  • Content decay — which cited pages are losing citations over time and need refreshing with new data, dates, or insights.

Image created by Uberall, April 2026
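The first two metrics are straightforward to compute once you log which brands each priority prompt's answer cites. A minimal sketch, with hypothetical prompts and brand names:

```python
# Hypothetical tracking data: for each priority prompt,
# the brands cited in the AI-generated answer.
prompt_citations = {
    "best family dentist austin":    ["acme", "rivaldent"],
    "dentist austin saturday hours": ["rivaldent"],
    "emergency dentist near austin": ["acme", "rivaldent", "smileco"],
}

def citation_rate(brand: str, results: dict) -> float:
    """Share of prompts where the brand is cited at all."""
    hits = sum(1 for cited in results.values() if brand in cited)
    return hits / len(results)

def share_of_voice(brand: str, results: dict) -> float:
    """Brand's citations as a share of all citations across the prompt set."""
    total = sum(len(cited) for cited in results.values())
    return sum(cited.count(brand) for cited in results.values()) / total

acme_rate = citation_rate("acme", prompt_citations)   # cited in 2 of 3 prompts
acme_sov = share_of_voice("acme", prompt_citations)   # 2 of 6 total citations
```

Re-running the same prompt set weekly turns these numbers into the trend lines that reveal content decay.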

The compounding effect here is profound. Brands that treat GEO as an ongoing loop — audit, publish, place, measure, refresh — see substantially higher citations and conversion rates. A recent Search Engine Journal webinar, featuring Uberall with AthenaHQ, states that GEO-savvy brands see 2x as many citations and 3–9x higher conversion rates within 90 days compared to brands still optimizing purely for classic search.

That delta matters more than it looks. As zero-click behavior grows, the citation inside the AI answer is the conversion surface.

For a concrete example, Audika France, a multi-location hearing-care brand and Uberall customer, ran this orchestration loop as an early adopter. They used it to track how AI engines described their clinics, spot the attributes models were missing, and close the gap between visible and recommended. Their results show how one multi-location brand went from an AI blind spot to a consistent recommendation.

What To Do Next

The pattern is consistent across multiple industries, including retail and restaurants. Brands that start now build a structural advantage that is hard to unwind once the category catches up. The ones that wait end up explaining to their board a year from now why a competitor became the default recommendation in every model their customers use.

If you want a snapshot of how your locations are performing in AI search, check out our AI Visibility Grader tool. It gives you a quick view of your AI visibility and the factors shaping it.

Or if you want to take this further and get a higher definition picture of where you stand in AI search, GEO Studio’s free trial will map your brand’s presence across the major generative engines.

Local search has changed. This is how you become the default answer.


Image Credits

Featured Image: Image by Michelle Azar/ Uberall. Used with permission.
In-Post Image: Image by Uberall. Used with permission.

B2B Buyers Choose A Vendor Before They Reach Out – 3 Ways To Be Visible When It Counts via @sejournal, @alexanderkesler

The fundamental question for 2026 is not how visible you are in search, but how wide the gap has grown between where you invest in discoverability and where buyers actually form their decisions.

Here is the reality: B2B buyers complete the majority of their research and form vendor preferences before your sellers can make their introductions.

Traditional SEO is a critical component of the brand discovery process, but it represents only a fraction of how buying groups validate decisions.

While SEO requires optimizing content for individual search intent (one person researching a solution), B2B purchasing works fundamentally differently. Enterprise software and service decisions are made when buying groups, averaging eleven members, reach consensus.

B2B buyers contact vendors only after completing 61% of their research. So, by the time buyers reach out to schedule that first demo, they’ve done most of their research out of sight of client relationship managers and have already formed a shortlist of preferred vendors.

To earn consideration from B2B buyers as a preferred vendor in 2026, organizations ought to master this invisible buying journey and the discoverability process to out-position competitors.

In this article, I will present three tactics to help you improve the discoverability of your brand beyond SEO, helping your brand appear as a top choice for B2B buyers.

How To Make Your Brand Discoverable For B2B Buyers

SEO remains essential for organic search visibility, but buyer research extends far beyond search queries.

Buyers use AI tools to research solutions and validate findings across peer networks, review sites, technical documentation, and professional networks.

This creates a need for your B2B brand to be visible across multiple channels at once.

Your ability to establish brand confidence by enabling validation across the entire buying group, as well as measuring performance in these channels, is essential for securing favorable placement on B2B vendor shortlists.

3 Tactics To Increase Brand Discoverability

1. Establish Brand Confidence

Beyond traditional search, you need credibility across peer networks and review sites where buying groups conduct research.

Ensure your brand is visible across these B2B buyer research channels:

  • Search engines, answer engines, and AI tools.
  • Review sites like G2 and TrustRadius.
  • Peer networks, including Slack, Reddit, and technical forums.
  • Technical documentation sites.
  • PR coverage and Wikipedia.
  • Third-party sites, like partner and syndication networks.

Prioritize AEO And GEO

As buyers increasingly turn to AI tools to research solutions, answer engine optimization (AEO) and generative engine optimization (GEO) have become important to brand discoverability.

  • Conduct an AI visibility audit to assess brand visibility across AI platforms.
  • Track citations, identify entity recognition gaps, and monitor competitors in AI-generated responses.
  • Enhance technical infrastructure with schema markup and optimize content for large language models (LLMs).
  • Secure consistent citations through PR and vendor comparison content.
  • Use citation monitoring tools to connect AI visibility to revenue, not just impressions.

Review Platform Management

Buyers trust peer validation of solution quality more than vendor claims.

  • Maintain a steady flow of authentic reviews on sites like G2 and TrustRadius through client engagement.
  • Analyze competitors’ reviews to identify gaps your products cover, then address those gaps with specific use cases and documentation.
  • Respond promptly to every client/user review. Your responses demonstrate commitment to client success and provide context for future readers evaluating similar use cases.
  • Align review content with B2B buyer journey stages. Early-stage (top of funnel) researchers need high-level product capability validation, while late-stage (bottom of funnel) evaluators need detailed implementation and integration information.

Peer Community Engagement

When practitioners recommend your solution unprompted in peer forums, you have established genuine community support.

  • Engage in peer networks like LinkedIn, Reddit, Slack channels, and technical forums to build trust through authentic contributions.
  • Track community sentiment and branded search lift to measure impact.
  • Monitor how frequently your brand appears in organic peer discussions versus competitors.

2. Enable B2B Buyers To Validate Your Solutions

Supporting buying group decision-making relies on the discoverability of evidence that aligns with the specific priorities of individual group members.

Organizations that ensure discoverability and enable validation across technical and business stakeholders earn consideration when B2B buying groups narrow their options.

Technical Decision Maker Enablement

Technical buyers test solutions themselves before talking to sales. They research how to connect systems on GitHub, solve setup problems on Stack Overflow, and review code interfaces through live documentation before contacting vendors.

Use structured data strategies and content architecture techniques to ensure resources like code guides and setup workflows are easily discoverable by AI crawlers.

Enhance discoverability by:

  • Providing resources that allow technical buyers to test things on their own time. This includes complete code guides with working examples, test environments they can use immediately, detailed security documentation, and setup workflows for common platforms.
  • Making these resources easy to find where they actually work. Maintain GitHub projects with real examples, answer questions on Stack Overflow, and publish technical content that demonstrates expertise.
  • Creating discoverable materials that cater to different teams within an organization. Operations teams need setup guides demonstrating clean code design. Engineers need system diagrams showing how your solution fits their tech setup. Security teams need security reviews and access controls validated through independent audits.
  • Implementing FAQ schema, HowTo schema, and Organization/Product markup to improve visibility for LLMs, making resources like documentation and guides more accessible during AI search.
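As a rough sketch of the FAQ schema idea above, the snippet below builds FAQPage JSON-LD (schema.org vocabulary) and prints it for embedding in a page's `<head>`. The question, answer, and their wording are placeholders, not real content:

```python
import json

# Minimal FAQPage JSON-LD sketch; the question and answer text below
# are hypothetical placeholders, not content from any real product.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does the platform support single sign-on?",  # placeholder
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, SAML 2.0 and OIDC are supported.",  # placeholder
            },
        }
    ],
}

# Embed the output in a <script type="application/ld+json"> block.
print(json.dumps(faq_schema, indent=2))
```

HowTo and Organization/Product markup follow the same pattern with their respective schema.org types.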

Business Leader Validation Frameworks

Business leaders trust proven results and return on investment over technical specifications. Ensure that validation data is discoverable and geared toward demonstrating how these solutions meet industry standards.

Provide benchmark data showing how your solution compares to industry standards, with metrics executives can confidently present to their CFO and board.

  • Commission independent research that positions your approach within broader market trends.
  • Secure placement in analyst evaluations. These third-party validations carry weight with executive buyers who need external credibility to support internal business cases.
  • Distribute insights through channels executives actually monitor: LinkedIn posts that demonstrate thought leadership on strategic challenges, webinars that address business transformation rather than product features, and board-ready presentations that translate technical capabilities into business outcomes.
  • Enhance citation authority by building backlinks and optimizing for third-party mentions. This positions your solution favorably within broader market trends, making it more discoverable and credible.

B2B Buying Group Champion Enablement Systems

Internal champions require easily discoverable resources to address objections of other stakeholders and build consensus across their buying groups.

  • Equip B2B buying group champions with resource kits that provide responses to predictable concerns:
    • Finance (ROI models and cost-benefit analyses).
    • IT (integration complexity and security requirements).
    • Security (compliance frameworks and audit readiness).
    • Operations (change management and training requirements).
    • Executive leadership (strategic alignment and competitive positioning).
  • Offer presentation templates designed for different audiences:
    • Executive summaries for C-suite approval.
    • Technical reviews for architecture committees.
    • Business cases for financial justification.
    • Adoption plans for operational leadership.
  • Use citation authority-building tactics such as knowledge panel optimization and competitor comparison content to make champion resources more visible and credible.

By weaving discoverability into these offerings, organizations better support buying groups in validating solutions and position themselves favorably in the decision-making process.

3. Measure And Optimize

Discovery channel analytics reveal which research paths lead to actual buyer engagement and revenue.

Track Discovery Performance Across Channels

Build a comprehensive discovery analytics dashboard that monitors:

AI Visibility Metrics:

  • Share-of-voice in AI-generated responses across LLMs like ChatGPT, Perplexity, Gemini, and Copilot.
  • Citation frequency trends and competitive displacement rate within AI answers (tracking this can be a challenge today, but tooling is maturing).
  • AI-sourced traffic attribution and correlation with pipeline outcomes.
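One way to sketch the share-of-voice metric above: re-run a fixed prompt set against each AI assistant, collect the responses, and count how often each brand is mentioned. The brand names and sample answers below are hypothetical:

```python
from collections import Counter

def share_of_voice(answers, brands):
    """Fraction of sampled AI answers that mention each brand.

    `answers` is a list of AI-generated response texts collected by
    re-running a fixed prompt set; `brands` maps display names to the
    strings to match. Matching is a simple case-insensitive substring
    check -- a deliberate simplification for this sketch.
    """
    counts = Counter()
    for text in answers:
        lowered = text.lower()
        for brand, needle in brands.items():
            if needle.lower() in lowered:
                counts[brand] += 1
    total = len(answers) or 1
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical sample: three collected answers, two competing brands.
answers = [
    "Top options include AcmeCRM and BetaSuite.",
    "Many teams choose BetaSuite for mid-market use.",
    "AcmeCRM leads on integrations.",
]
print(share_of_voice(answers, {"AcmeCRM": "AcmeCRM", "BetaSuite": "BetaSuite"}))
```

In practice, entity matching needs to handle aliases and misspellings, but the ratio-of-mentions idea carries over directly.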

Review Platform Metrics:

  • Review volume trends, average ratings across key categories (ease of use, support quality, value), and competitive positioning within your category, reviewed quarterly.
  • Sentiment analysis from peer networks like Reddit and Slack, where practitioners discuss solutions candidly.

Technical Validation Metrics:

  • Developer engagement on GitHub and Stack Overflow, API call volumes, and technical documentation traffic.
  • Page interaction depth (scroll patterns, time on page) and trial conversion rates from documentation paths.

Business Stakeholder Metrics:

  • Content consumption patterns by role and lead quality from executive-focused content.
  • Analyst report downloads and correlation with enterprise deal conversion rates.

Discovery Path Indicators:

  • Branded search lift and correlation between community engagement and inbound inquiry volume.
  • Channel combinations and content sequences that appear in successful deals.

Analyze Discovery Patterns That Drive Revenue

Trace content consumption paths that lead to demo requests, trial signups, and sales conversations. Use tracking parameters and form fields that identify origin sources.
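The tracking parameters mentioned above are typically the standard `utm_*` query parameters. A minimal sketch using only the Python standard library (the channel names and URL are illustrative):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def tag_url(url, source, medium, campaign):
    """Append standard utm_* tracking parameters so analytics can
    attribute a visit to the discovery channel it came from."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunsplit(parts._replace(query=urlencode(query)))

# Example: a docs link shared in a peer Slack community (names illustrative).
print(tag_url("https://example.com/docs/setup", "slack", "community", "q1-discovery"))
```

Pair this with a hidden form field that stores the landing parameters, so the origin source survives through to the demo request.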

Reverse-engineer successful deals to uncover:

  • Which channels start serious evaluation (peer networks, review sites, technical documentation).
  • Whether discovery through practitioner recommendations correlates with higher-quality leads.
  • Which content types drive engagement from different stakeholder roles (technical documentation for engineers, analyst reports for executives, peer reviews for operations leaders).

Correlate discovery metrics with sales cycle length, win rates, and client advocacy rates to identify which activities drive shortlist inclusion versus those that simply generate activity without business impact.

The buyer journey has fundamentally changed. Research happens before engagement, decisions form before conversation, and shortlists solidify before prospects present themselves.

Organizations that win in 2026 understand this reality and act accordingly. They establish presence where B2B buyers research, enable validation across stakeholder groups, and measure what drives consideration.

Implemented successfully, discoverability is the revenue engine that drives conversion in the AI-led buying era.

Key Takeaways

  • Optimize for AI-powered search: AEO and GEO are now foundational to brand discoverability. Audit your visibility across ChatGPT, Perplexity, Gemini, and Copilot, then build citation authority, structured data, and AI-consumable content architecture to earn consistent inclusion.
  • Build systematic review presence: Maintain an authentic review flow on platforms like G2 and TrustRadius through consistent client engagement.
  • Engage peer networks authentically: Participate in LinkedIn, Reddit, Slack channels, and technical forums where target buyers gather. Share insights and answer questions to build organic support.
  • Enable technical validation: Provide comprehensive resources on GitHub and Stack Overflow where technical buyers validate solutions through hands-on testing.
  • Support business leader decisions: Offer benchmarking data, independent research reports, and analyst validations that economic buyers can defend to CFOs and boards.
  • Equip internal champions: Supply presentation templates, competitive frameworks, and objection response playbooks that enable champions to build consensus across finance, IT, security, operations, and executive stakeholders.
  • Measure what drives consideration: Track AI visibility metrics alongside review site performance, peer network sentiment, technical documentation engagement, and champion support usage, connecting every channel to pipeline outcomes.

Featured Image: eamesBot/Shutterstock

OpenAI Crawl Activity Tripled Since GPT-5, Data Shows via @sejournal, @MattGSouthern

OpenAI’s automated crawl activity is estimated to have roughly tripled after the launch of GPT-5, according to a new analysis from Botify and guest author Chris Long.

In Botify’s dataset, OpenAI’s search crawler is now generating more log events than its training crawler. That’s a reversal from the period before GPT-5.

Long, co-founder of the SEO consultancy Nectiv, analyzed roughly 7 billion OpenAI-bot log events from Botify’s enterprise client dataset spanning November 2024 through March 2026.

What The Data Shows

Two of the three OpenAI user agents Botify measured saw activity spike around the GPT-5 launch.

OAI-SearchBot, which retrieves content when ChatGPT performs web searches, recorded about 3.5x more events after August 2025. That works out to roughly 2.2 billion additional events in Botify’s dataset.

GPTBot, which collects training data, recorded about 2.9x more events over the same period. That is another 1.8 billion events.

The third user agent, ChatGPT-User, moved in the opposite direction. Long reports a 28% drop in ChatGPT-User log events between December 2025 and March 2026. ChatGPT-User fires when a ChatGPT session fetches a page on behalf of a user, so the drop measures logged user-initiated fetches rather than ChatGPT usage overall.

Long offers two possible readings. One is that fewer sessions may be triggering real-time page fetches. The other, suggested by Botify’s team, is that OpenAI may be relying more on stored or indexed resources, reducing the need to fetch pages in real time. Long does not pick between them.

Search Bot Now Outpaces Training Bot

Before GPT-5, OAI-SearchBot and GPTBot ran at roughly even volumes in Botify’s dataset, with a ratio of about 0.95 search events per training event. After GPT-5, that ratio rose to about 1.14.

The pattern lines up with what Dan Petrovic wrote in August 2025 about GPT-5, arguing that OpenAI was sourcing more answers from live search than from trained memory. Botify’s data is consistent with that read.

Industry Breakdown

The post-GPT-5 search bot increases varied by industry. Healthcare sites saw about 740% more OAI-SearchBot activity after launch; Media and Publishing, 702%; and Marketplaces, Software, and Retail, 190-216%.

Travel sites had the smallest rise at 30%. The balance between search and training crawling also varies by industry. Long reports a +256% OAI-SearchBot-over-GPTBot crawl difference for Media/Publishing, the largest gap. Software and Internet also lean toward search, while Healthcare and Retail favor training at -50% and -33%, meaning GPTBot is more active in those industries.

Botify and Long suggest OpenAI routes query types differently: news queries trigger live search, while health and product queries rely more on trained knowledge.

How OpenAI’s Crawl Compares To Google’s

Even after tripling, OpenAI’s crawl activity is much smaller than Google’s.

In Botify’s most recent 30-day window, Googlebot registered 18.2 billion events, compared with 887 million events from OpenAI’s crawlers combined. That puts OpenAI at about 5% of Google’s crawl volume.

A year earlier, the same comparison was 15 billion Google events to 207 million OpenAI events, or about 1.38%. The gap is closing, though Google’s crawl is still roughly 20 times larger in absolute terms.

Bingbot registered about 5.49 billion events in the most recent window, putting OpenAI at roughly 16% of Bing.

Methodology & Commercial Context

The dataset is Botify’s, covering enterprise clients in retail, ecommerce, technology, publishing, travel, and marketplaces. The analysis was conducted by Long as a guest author on Botify’s blog.

For transparency, Botify sells log file analysis and AI bot management software, and the post promotes a follow-up webinar and a product demo.

The dataset skews toward large enterprise websites rather than a representative cross-section of the web.

Why This Matters

In Botify’s dataset, OAI-SearchBot now generates more log events than GPTBot. Sites that block only GPTBot are not blocking the bot OpenAI says is used to surface websites in ChatGPT search answers.

Sites that block OAI-SearchBot may be excluding themselves from ChatGPT search answers.
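The distinction matters at the robots.txt level: a rule group naming only GPTBot does nothing to OAI-SearchBot. A quick check with Python's standard-library robots.txt parser (the URL and rules are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks only OpenAI's training crawler. Parsed from a
# string here for illustration; in practice use set_url() + read().
rules = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# GPTBot (training) is blocked, but OAI-SearchBot (search) still may fetch:
print(rp.can_fetch("GPTBot", "https://example.com/page"))         # False
print(rp.can_fetch("OAI-SearchBot", "https://example.com/page"))  # True
```

Sites that want to opt out of training but remain eligible for ChatGPT search answers need a rule group per user agent, not a single blanket block.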

How This Fits With Other Reports

Botify’s findings line up with patterns other vendors have reported. An Alli AI analysis covered earlier this month found OpenAI’s ChatGPT-User made 3.6x more requests than Googlebot in a smaller WordPress-heavy sample. A Hostinger analysis found OAI-SearchBot’s website coverage reaching 55% while GPTBot coverage fell. Akamai’s recent bot traffic report showed OpenAI leading AI bot traffic to publishing sites.

The reports suggest that AI training crawls and AI search crawls need to be measured separately, especially as OAI-SearchBot activity grows.