AI Gives You The Vocabulary. It Doesn’t Give You The Expertise via @sejournal, @DuaneForrester

Hiring managers are watching something uncomfortable happen in interview rooms right now. Candidates arrive with the right credentials, the right vocabulary, the right tool stack on their résumés, and then someone asks them to reason through a problem out loud, and the room goes quiet in the wrong way. Not in the thoughtful kind of way, but the empty kind that tells you the person across the table has never actually had to think through a hard problem on their own. And research is converging on the same conclusion. Microsoft, SBS Swiss Business School, and TestGorilla have all documented the same pattern independently: Heavy AI reliance correlates directly with declining critical thinking, and the effect is strongest in younger, less experienced practitioners.

This isn’t a technology story so much as a cognition story, and the SEO industry is living a version of it in slow motion. What none of those studies name is the specific mechanism: the three-layer architecture of expertise where AI commands the retrieval layer completely, and the judgment layers underneath it are more exposed than they’ve ever been. That architecture is what this piece is about.

The Debate Is Framed On The Wrong Axis

Every conversation about AI and critical thinking eventually lands in the same place: humans versus machines, organic thinking versus generated output, authentic expertise versus artificial fluency. It’s a compelling frame and also the wrong one.

The real fracture line isn’t human versus AI. It’s retrieval versus judgment, and those are not the same cognitive act, even though AI has made them feel interchangeable in ways that should concern anyone serious about their craft.

Retrieval is access. It’s the ability to surface relevant information, synthesize patterns across a body of knowledge, and produce fluent output that maps to the shape of expertise. Large language models are extraordinary at this, genuinely and structurally superior to any individual human at the retrieval layer, and getting better at speed. Fighting that reality is not a strategy.

Judgment, however, is different. Judgment is knowing which question is actually the right question given this specific context, the ability to recognize when something that looks correct is wrong for this situation in ways that aren’t in any training data, the accumulated weight of having been wrong in consequential situations, learning why, and recalibrating. You cannot retrieve your way to judgment. You build it through deliberate practice under real conditions, over time, with skin in the game that a model structurally cannot have.

The problem isn’t that AI handles retrieval well. The problem is that retrieval output now sounds so much like judgment output that the gap between them has become nearly invisible, especially to people who haven’t yet built enough judgment to know the difference.

The Judgment Stack

Think about expertise as a stack, not a spectrum.

Layer 1 is retrieval – synthesis, pattern vocabulary, volume processing, surface recognition. This is AI territory, and handing work in this area over to an AI is not weakness but correct resource allocation. The practitioner who uses an LLM to compress a competitive analysis that would have taken three hours into 40 minutes isn’t cutting corners; they’re buying back time to do the work that actually compounds.

Layer 2 is the interface layer – hypothesis formation, question quality, contextual filtering, knowing which output to trust and which to interrogate. This is where the leverage actually lives, and it’s fundamentally human-plus-AI territory. Your prompt quality is a direct proxy for your judgment quality. Two practitioners can feed the same LLM the same general problem and get outputs that are miles apart in usefulness, because one of them knows what a good answer looks like before they ask the question, and that foreknowledge doesn’t come from the model but from Layer 3 working backward.

Layer 3 is consequence and context – the ability to recognize when a pattern that has always worked is about to break, to assess novel situations that don’t map cleanly to anything in the training data, to hold strategic framing steady under pressure when the data is ambiguous. This is human territory, not because AI couldn’t theoretically develop something like it, but because it requires something a deployed model structurally cannot have: skin in the game, real consequence, the accumulated scar tissue of being wrong when it mattered and having to carry that forward.

The critical thinking crisis everyone is diagnosing right now is not, at its root, an AI problem but a Layer 2 collapse. People skip directly from Layer 1 retrieval to Layer 3 claims, bypassing the judgment infrastructure entirely. Layer 1 output is fluent, confident, and often correct enough to pass casual scrutiny, which keeps the gap invisible right up until someone asks a follow-up the model didn’t anticipate, and the person has no independent footing to stand on.

What SEO Is Actually Revealing

SEO is a useful diagnostic here because the industry has always been an early signal for how the broader marketing world processes technological disruption. We were the first to chase algorithmic shortcuts at scale. We were the first to industrialize content in ways that traded quality for volume. And right now we are watching two distinct practitioner populations diverge in real time, with the gap between them widening faster than most people have noticed.

The first population is using LLMs as answer machines: feed the problem in, take the output out, ship it. Ask the model what’s wrong with a site’s rankings. Ask it to write the content strategy. Ask it to explain why traffic dropped. This isn’t entirely without value, since Layer 1 retrieval has genuine utility even here, but the practitioners operating purely at this layer are making a trade they may not fully understand yet. They are outsourcing the only part of the job that compounds in value over time. Every hard problem they hand off to a model without first attempting to reason through it themselves is a training repetition they didn’t take, a weight they didn’t lift, and those repetitions are how Layer 3 gets built. You want the muscle? You have to do the work.

The second population is using LLMs as reasoning partners. They come to the model with a hypothesis already formed, a question already sharpened by their own thinking, and they use the output to pressure-test their reasoning, surface considerations they may have missed, and accelerate the parts of the work that don’t require their hard-won judgment, which frees them to apply that judgment more deliberately where it matters. These practitioners are getting faster and better simultaneously, because the model is amplifying something that already exists.

The difference between these two groups has nothing to do with tool access, since they are using the same tools, and everything to do with what each practitioner brings to the model before they open it.

The Leveling Lie

The argument for AI as a leveling tool is not wrong; it’s just incomplete, and that incompleteness is where the damage happens.

A junior practitioner today has access to a compression of the field’s knowledge that would have been unimaginable five years ago. Ask an LLM about crawl budget allocation, entity relationships, structured data implementation, or the mechanics of how retrieval-augmented systems weight freshness signals, and you will get a coherent, usually accurate answer in seconds. That is a genuine democratization of Layer 1, and dismissing it as illusory is its own form of gatekeeping.

But Layer 1 access is not expertise. It is the vocabulary of expertise, and there is a specific kind of danger in having the vocabulary before you have the understanding, because fluency masks the gap. You can discuss the concepts. You can deploy the terminology correctly. You can produce output that looks like the work of someone with deep experience, and you can do all of that while having no independent capacity to evaluate whether what you just produced is actually right for the situation in front of you.

This is not a character flaw but a metacognitive failure, the condition of not knowing what you don’t yet know. The junior practitioner using an LLM to accelerate their access to field knowledge isn’t being lazy. In many cases, they are working hard and genuinely trying to develop. The problem is that Layer 1 fluency generates a confidence signal that isn’t calibrated to actual capability. The model doesn’t tell you when you’ve hit the edge of what it knows. It doesn’t flag the situations where the standard answer breaks down. It doesn’t know what it doesn’t know either, and neither do you yet, and that combination is where well-intentioned work quietly goes wrong.

The leveling effect is real, but the ceiling on it is lower than most people assume. What gets leveled is access to the knowledge layer. What doesn’t get leveled (what cannot be compressed or transferred through any tool) is the judgment architecture that determines what you do with that knowledge when the situation doesn’t follow the pattern.

The practitioners who understand this distinction will use AI to accelerate their development. The ones who don’t will use it to feel further along than they are, right up until the moment a genuinely novel problem requires something they haven’t built yet.

Where The Abdication Actually Happens

Let’s be precise about this, because the accusation of abdication usually gets thrown around in ways that are more emotional than useful.

Using AI at Layer 1 is not abdication. Letting a model handle competitive analysis synthesis, first-draft content frameworks, technical audit pattern recognition, or structured data generation is correct delegation, since these are retrievable tasks and doing them manually when a better tool exists isn’t intellectual virtue but inefficiency pretending to be rigor.

Abdication happens at a specific and different point. It happens when you stop taking the problems that would have built your Layer 3 judgment and start routing them directly to a model instead: not because the model’s output isn’t useful, but because the attempt itself was the point. The struggle to formulate an answer to a hard problem, even an incomplete or wrong answer, is the mechanism by which judgment gets built. Hand that struggle off consistently, and you are not saving time but spending something you may not realize you’re spending until it’s gone.

This is the part of the conversation that doesn’t get said clearly enough: The low-consequence training repetitions are how you prepare for the high-consequence moments. A practitioner who has reasoned through hundreds of traffic anomalies, content decay patterns, and crawl architecture decisions (even inefficiently, even wrongly at first) has built something that cannot be replicated by having asked an LLM to reason through those same problems on their behalf, because the model’s reasoning is not your reasoning, just as watching someone else lift the weight does not build your muscle.

The senior practitioners who feel their position eroding right now are often misdiagnosing the threat. The threat isn’t that AI makes their knowledge less valuable, since genuine Layer 3 judgment is actually more valuable in an AI-saturated environment, not less, precisely because it becomes rarer as more people mistake Layer 1 fluency for the whole stack. The real threat is that the market hasn’t developed clean signals yet for distinguishing Layer 3 capability from Layer 1 fluency dressed up convincingly. It’s a signal problem that is temporary and will resolve itself in the most public and consequential ways possible – in front of clients, in front of leadership, in front of the situations where someone needs to make a call the model can’t make.

The answer for experienced practitioners is not to resist AI but to use it in ways that continue building Layer 3 rather than substituting for it. Use the model to go faster on Layer 1, and use the time that buys you to take on harder problems at Layer 2 and 3 than you could have reached before. The ceiling on your development just got higher, and whether you use that is a choice.

The answer for junior practitioners is harder but more important: Understand that the shortcut doesn’t shorten the path but changes the surface underfoot. You can move across the terrain faster with better tools, but the terrain still has to be crossed, and there is no prompt that builds the judgment architecture for you. Only doing the work, being wrong in situations that matter, and carrying that forward builds that.

The Prerequisite

Critical thinking is not the alternative to AI use. Instead, it is the prerequisite for AI use that compounds.

Without it, you are operating entirely at Layer 1, fluent and fast and increasingly indistinguishable from everyone else who has access to the same tools you do, and everyone has access to the same tools you do. The tools are not the differentiator and never were, serving instead as a floor, and that floor is rising under everyone’s feet simultaneously.

What compounds is judgment. The accumulated capacity to ask better questions than the person next to you, to recognize the moment when the standard pattern breaks, to hold a strategic position steady when the data is ambiguous and the pressure is real. That capacity doesn’t live in the model but in the practitioner, built over time through deliberate practice under real conditions, and it is the only thing in The Judgment Stack that gets more valuable as the tools get better.

The interview rooms where qualified candidates go quiet when asked to reason out loud are not showing us a technology problem. They are showing us what happens when a generation of practitioners optimizes for Layer 1 output without building the infrastructure underneath it, accumulating the vocabulary without the architecture, and the fluency without the foundation.

The practitioners who will matter in three years are building that foundation right now, using every tool available to go faster at Layer 1 and using the time that buys them to go deeper at Layer 3 than was previously possible. They are not choosing between AI and thinking but using AI to think harder than they could before, and that is not a leveling effect but a compounding one … and compounding, as anyone who has spent serious time in this industry understands, is an advantage worth building.

This post was originally published on Duane Forrester Decodes.


Featured Image: Summit Art Creations/Shutterstock; Paulo Bobita/Search Engine Journal

Microsoft Says Bing Reached 1B Monthly Active Users via @sejournal, @MattGSouthern

Microsoft announced that Bing has reached 1 billion monthly active users for the first time. CEO Satya Nadella revealed this figure during the Q3 FY2026 earnings call.

Revenue from search ads, excluding traffic acquisition costs, increased by 12% year over year. Additionally, Edge has gained browser market share for 20 consecutive quarters.

Overall, Microsoft reported total revenue of $82.9 billion for the quarter, marking an 18% increase.

Search & Advertising

The segment that includes Bing was down 1% overall at $13.2 billion. Search advertising was the bright spot, with CFO Amy Hood pointing to higher volume and revenue per search.

Nadella was direct about where the consumer business stands:

“When it comes to our consumer business, we are doing the foundational work required to win back fans and strengthen engagement across Windows, Xbox, Bing, and Edge. In the near term, we are focused on fundamentals, prioritizing quality and serving our core users better.”

Search ad growth has held in double digits for three straight quarters. It grew 16% in Q1 FY2026, 10% in Q2, and 12% this quarter. For Q4, Microsoft guided that growth to the high single digits, a step down.

Back in 2023, Microsoft reported 100 million daily active users when it first added AI to Bing. Going from 100 million daily to 1 billion monthly is a big jump, though it’s unclear whether Copilot interactions count toward that number.

Edge is part of the story, too. It typically defaults to Bing, so five years of Edge growth means more people landing on Bing without actively choosing it.

Why This Matters

Edge has gained share for five straight years, and search ad revenue has grown in double digits for three consecutive quarters.

Microsoft has also been building the measurement tools to go with it. Bing Webmaster Tools now maps grounding queries to cited pages, and Microsoft previewed Citation Share at SEO Week earlier this month.

Still, Bing’s global search share sits at about 5% worldwide per StatCounter’s March 2026 data. That gap between 1 billion MAU and 5% share suggests a lot of those users are low-frequency or showing up through default settings rather than choosing Bing.

Looking Ahead

Microsoft’s next earnings call will show whether search ad growth picks back up or settles into single digits.

The Citation Share feature Microsoft previewed at SEO Week hasn’t shipped yet. When it does, it could be the first tool for tracking how your site’s AI visibility on Bing compares to competitors.

Your AI Visibility Tracker Is Quietly Breaking Your Analytics And Your Strategy via @sejournal, @TaylorDanRW

Jan-Willem Bobbink shared a take on X: AI visibility trackers are quietly breaking the analytics of the very brands paying them to track visibility. It’s time we put more focus on this issue, as it is causing misalignment, misreporting, and misspent resources and marketing budget in the clamor to be more visible in AI.

Screenshot from X, April 2026

Jan-Willem hits on the issue of the lack of attribution in RAG loops. When a tracker triggers a prompt, and that prompt triggers a fetch, the brand is essentially paying a tool to generate its own AI visibility, and it begins to report on itself.

This is the ouroboros, a word you will likely see appearing more and more in the SEO industry as we describe AI/LLMs.

It’s the ouroboros effect: AI starting to quote itself, something Pedro Dias has covered recently.

A large number of AI visibility tools have received significant funding in recent months, and some charge brands tens of thousands of dollars to “track” visibility. This looping effect is becoming a reality, and how third-party tools track AI visibility will have knock-on effects.

One example I point back to a lot is the drop in citations that ChatGPT produced when OpenAI released the GPT-5 model in August 2025.

A number of tools that provide ChatGPT visibility saw their graphs decline, not because websites had violated spam policies or their short-termist tactics had run their course, but because of how the tools tracked citations and because the model simply produced fewer of them. This isn’t a measure of visibility but a rehashed version of rank tracking, and these graphs can cost vendor contracts, incorrectly inform budget spending, and create false panic (or false celebration).

The Dangers Of The Observer Effect

In physics, the observer effect states that the act of monitoring a phenomenon changes it. This is happening in real-time for the SEO industry.

Most LLM trackers use a headless browser or a specialized API. When Perplexity or ChatGPT “searches” for fresh info to answer your tracker’s prompt, it doesn’t just hit your homepage; it performs a RAG fetch and can hit multiple URLs.

Because these bots often rotate IPs/proxies or use “stealth” headers to avoid being blocked by anti-scraping walls, they look like legitimate organic discovery crawls. This is how many rank tracking tools have operated for years.

Because of this, you might report to a client, or other stakeholders, that “AI interest in our product pages is up 40%,” when in reality, 35% of that was just your own tracking tool refreshing its cache, or other vendors’ tools scanning you as a competitor of the brands they track.

AI Tracking Noise Is Worse Than Rank Tracking Noise

As Jan-Willem noted, we used to ignore rank tracker noise in Google Search Console because impressions were a “soft” metric. But log file data is hard data: it informs infrastructure decisions, shows how bots are accessing your website (server log file analysis), and now, in the age of AI, shows how AI platforms are interacting with your site.

When you present a report to your client, peers, or your chief marketing officer, you are trying to prove brand preference within a large language model. If your data is polluted by your own tracking (and other people’s tracking), you risk a “false positive” strategy.

You might double down on content that isn’t actually popular with real AI users, but is simply the content your tracking tool happens to trigger most often.

What To Do Right Now

Until a vendor builds the “Clean Log” API Jan-Willem is calling for, you have to treat log files with skepticism.

Run your tracking tools on a “quiet” staging environment or a specific set of sacrificial URLs to measure the “noise floor” created by the tool itself.

Look for specific patterns (user-agent fingerprinting) in the logs that correlate with your tool’s scan times. Even if IPs rotate, the timing often shows patterns that can be identified easily.

And stop reporting “total AI fetches” as a success metric. Focus on how often your brand is mentioned relative to competitors, which is a metric derived from the LLM output, not your server logs.
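The fingerprinting-and-timing check is scriptable with nothing more than your raw access log. Below is a minimal sketch in Python, assuming a standard combined-format log; the log path, scan-hour schedule, and the idea of flagging single-agent bursts are placeholders you would replace with your own tool’s real schedule and thresholds.

```python
import re
from collections import Counter
from datetime import datetime

# Pattern for the common Apache/Nginx "combined" log format.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

# Hypothetical scan windows (hour of day, server time) exported from your
# own tracking tool's schedule. Replace with the real schedule.
TOOL_SCAN_HOURS = {2, 8, 14, 20}

def bucket(ts: str) -> tuple[str, int]:
    """Return (date, hour) for a timestamp like 10/Apr/2026:02:03:04 +0000."""
    dt = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
    return dt.strftime("%Y-%m-%d"), dt.hour

def noise_report(path: str) -> None:
    hits_by_hour = Counter()   # all fetches per (date, hour)
    agents_by_hour = {}        # user agents seen in each bucket
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_LINE.match(line)
            if not m:
                continue
            key = bucket(m["time"])
            hits_by_hour[key] += 1
            agents_by_hour.setdefault(key, Counter())[m["user_agent"]] += 1

    for (day, hour), count in sorted(hits_by_hour.items()):
        if hour in TOOL_SCAN_HOURS:
            top_agent, top_count = agents_by_hour[(day, hour)].most_common(1)[0]
            # A burst dominated by one agent inside a known scan window is
            # probably your own noise floor, not organic AI discovery.
            print(f"{day} {hour:02d}:00  {count} fetches  top UA ({top_count}): {top_agent[:60]}")

if __name__ == "__main__":
    noise_report("access.log")  # placeholder path to your server log
```

Even a rough report like this tells you how much of your “AI fetch” volume lines up with your own scan schedule before you present the number to anyone.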

Featured Image: Master1305/Shutterstock

Google Search Revenue Grew 19% In Q1, Pichai Cites AI via @sejournal, @MattGSouthern

Alphabet reported Q1 2026 earnings, with Google Search & Other revenue rising 19% year over year to $60.4 billion. CEO Sundar Pichai tied the quarter’s Search performance to AI Overviews and AI Mode, saying people are “coming back to Search more.”

Q1 revenue was lower sequentially than Q4 2025, when Search & Other came in at $63.1 billion, but year-over-year growth increased from 17% to 19%. Total Alphabet revenue reached $109.9 billion, up 22%.

What Pichai Said About Search

In his prepared remarks, Pichai connected the Search number to AI experiences, stating:

“People love our AI experiences like AI Mode and AI Overviews, and they’re coming back to Search more.”

Pichai also said, “queries are at an all-time high.” He described “strong growth in both users and usage of AI Mode globally” without sharing an exact figure. Past Google disclosures put AI Mode at roughly 100 million monthly active users and 75 million daily.

Pichai said AI Overviews “are driving overall Search growth.” Liz Reid made a similar engagement argument on Bloomberg’s Odd Lots earlier this month, describing AI Overviews as reducing low-value clicks rather than reducing useful traffic.

New Data On Search Speed And AI Costs

Pichai shared two efficiency figures.

The first was latency. Pichai said:

“Even as we’ve brought new AI features into our results page, we’ve reduced Search latency by more than 35% over the past five years.”

The second was the cost of running AI responses. He continued:

“Since upgrading AI Overviews and AI Mode to Gemini 3, we’ve reduced the cost of core AI responses by more than 30% thanks to continued hardware and engineering breakthroughs.”

Search Updates Pichai Highlighted

Pichai highlighted three Search rollout items from the quarter.

Personal Intelligence “expanded broadly in the U.S.,” referring to Google’s March expansion of Personal Intelligence to free U.S. users.

Agentic experiences shipped to new countries. Pichai cited restaurant booking as one of the early examples of what he has called “search as an agent manager.”

Search Live multimodal capabilities went global.

Why This Matters

Over the past year, SEO professionals worried AI Overviews would reduce clicks to sites by satisfying user intent on the results page. Q1 numbers challenge that idea. If AI were cannibalizing traditional search, query volume and revenue would flatten. Instead, both increased.

But this doesn’t mean concerns are unfounded. “All-time high queries” doesn’t imply all-time high publisher clicks. Google hasn’t disclosed click-through rates or revenue split between AI Mode and traditional ads. More queries could mean fewer clicks per query if AI answers resolve intent early.

However, the revenue growth indicates the search ecosystem is expanding, even as user interaction patterns shift.

Looking Ahead

Google’s earnings show AI features are expanding search, but key questions remain about monetization and click-through rates.

Pichai said more info about Search will be shared at Google I/O in May and Google Marketing Live.

Earn AI Citations: What Your Content Needs To Look Like [A 4-Article Playbook] via @sejournal, @AirOpsHQ

TL;DR

The best companies aren’t panicking. Carta, Ramp, and Webflow are proving that visibility in AI search comes from connected systems where originality, speed, and credibility compound.

  • Search is now an answer engine. Visibility depends on being cited, not ranked.
  • Freshness fuels authority. 70% of AI-cited pages were updated within the past year.
  • Originality wins. LLMs reward information gain—new data, unique insights, and first-party context.
  • Humans set the standard. The best teams automate structure, not voice or judgment.
  • Authority lives off-site. 85% of brand mentions in AI search come from third-party sources, not your own.
  • Speed compounds trust. Teams that refresh content 3× faster dominate both Google and AI visibility.

The way people get information has changed more in the past year than in the previous twenty.

Search is no longer a list of links. Instead of typing a question into Google and scrolling through ten blue links, billions of people are now getting direct answers from AI assistants like ChatGPT, Claude and Gemini.

That single behavioral change is rewriting how brands are discovered. When your customers stop clicking through, traditional SEO and content strategies stop working. The playbook that defined a generation of growth is collapsing — and with it, the visibility companies have long relied on to reach their audiences.

From learning to decision-making, each answer is powered by content that meets new quality and freshness standards. The brands that show up will win. The ones that don’t will disappear. The best companies are adapting to this reality, not panicking. They’ve built connected systems where visibility, content, and performance feed each other in a continuous loop.

We’ve assembled this guide so you can see exactly what’s working for leading brands like Carta, Ramp, and Webflow across AI search today.

Use this playbook to stay visible, move faster, and turn intelligent systems into lasting growth.

What’s Actually Working Today

The “slop” era is over. High-volume filler stopped working because audiences lost trust and leaders lost patience. Visibility in AI search now depends on credibility that begins on your website, but extends far beyond it.

The rules are still emerging, but that’s the opportunity. With fewer incumbents, the fastest-moving teams are winning by mastering three things: originality, human judgment, and speed.

Based on ~15 million data points across AI answers, queries, citations, and brand mentions, a pattern is clear: freshness and speed are the competitive edge. Seventy percent of the pages cited by AI models were updated within the past year, and content less than three months old is three times more likely to be referenced.

Our analysis shows the same pattern across every high-performing team.

1. Create information-gain content

LLMs reward novelty, not noise. The web is saturated with repetition, and large models filter for sources that add something new. Winning teams compete on information gain. They publish proprietary data, internal insights, and distinct points of view that expand the model’s knowledge rather than restating it.

  • Carta and Ramp turn internal datasets, customer calls, and insights from subject-matter experts into net-new content that audiences trust and LLMs notice.
  • Webflow saw a 6× higher conversion rate from AI-sourced traffic after focusing on original, structured, and authoritative material competitors couldn’t copy.

Takeaway: Authoritative and unique content is now the most defensible moat in AI search.

AI agents are now the reader

Alongside the challenge of staying visible in AI search, a new question has emerged: how do we stay visible when AI agents are doing the searching on someone’s behalf?

Agentic AI — tools that browse, retrieve, and act autonomously — is changing the retrieval layer of search. Users will soon (if they haven’t already) connect AI assistants to live data sources, delegate research tasks to agents, and rely on automated workflows to surface what’s relevant. When an agent browses for your category, it retrieves the most structured, authoritative, and current information it can find.

This is where Model Context Protocol (MCP) comes in. MCP is the emerging standard that allows AI assistants like Claude to connect directly to external tools and data sources in real time. For content teams, it means two things:

  1. Your content needs to be retrievable by machines, not just readable by humans. Structured formatting, clear hierarchy, and explicit answers aren’t just citation best practices — they’re the architecture agents depend on to extract and act on information.
  2. Your own AI workflows can connect to live performance data. AirOps now offers an MCP integration that lets teams surface citation insights, brand visibility data, and content performance directly inside Claude.
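The connection shape is simpler than it sounds. Below is a minimal sketch of an MCP tool server, assuming the open-source MCP Python SDK; the tool name, data, and values are invented for illustration and are not AirOps’ actual integration.

```python
# A minimal sketch of an MCP tool server, assuming the open-source `mcp` Python SDK
# (modelcontextprotocol/python-sdk). The tool, data, and values are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("content-performance")

# Stand-in data: citation rate per URL from your own analytics store.
CITATION_RATES = {
    "https://example.com/pricing": 0.42,
    "https://example.com/blog/ai-search-guide": 0.71,
}

@mcp.tool()
def citation_rate(url: str) -> str:
    """Return how often tracked AI answers cite the given URL."""
    rate = CITATION_RATES.get(url)
    if rate is None:
        return f"No citation data for {url}"
    return f"{url} is cited in {rate:.0%} of tracked answers"

if __name__ == "__main__":
    mcp.run()  # stdio transport, so a desktop assistant can attach it as a tool server
```

The point isn’t the specific tool: it’s that anything exposed this way becomes data an assistant can pull into its reasoning on demand, rather than something a human has to copy between dashboards.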

2. Keep humans in the loop

AI enhances creativity but never replaces it. Top teams use AI to accelerate research and structure while keeping humans in charge of voice, accuracy, and tone. They’ve built workflows where writers, strategists, and systems designers collaborate in real time.

  • Teams increasingly invest and upskill around content engineering, a hybrid role that blends editing, systems design, and quality control.
  • At Klaviyo, this role orchestrates content systems that merge brand context, data, and human quality together.

Takeaway: Automation works best when it’s guided by judgment. Human oversight is the safeguard that keeps AI-driven content credible.

3. Move at high velocity

Freshness is the new authority signal. AI models overwhelmingly cite content that’s recent and actively maintained. Pages updated within three months are three times more likely to be cited, and >60% of commercial pages cited by ChatGPT were updated in the past six months.

Given the increased refresh frequency required, teams are building systems not only to keep up, but to make this their advantage.

Chime had over 700 blog posts and a refresh process that was capping the team at around 50 posts per quarter. After implementing AirOps, each refresh dropped from 45 minutes to under 5 minutes — an 89% time reduction — with refresh velocity increasing 70%. Within four weeks, AI citations on priority questions tripled.

Docebo turned content refresh into a competitive system. When traffic on a page dropped more than 20%, it automatically triggered an update cycle. The result: a 25% share of voice lead in their category, plus double the publishing velocity without adding headcount.

What makes Docebo’s approach worth studying isn’t just the numbers. It’s the shift from reactive to proactive. Rather than responding to visibility loss after the fact, they built a system that catches it early and acts before the drop compounds. They’re now expanding that same logic into internal linking audits, sitemap reviews, and full AI search optimization. Their content operations are a core part of their infrastructure.

Takeaway: The fastest teams don’t just publish more. They build systems that detect decay and respond automatically.
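The trigger logic behind that kind of system can start very small. Here is a minimal sketch, assuming you already export weekly sessions per URL from your analytics; the data shape, URLs, and queue_refresh hook are hypothetical, and the 20% threshold simply mirrors the Docebo example above.

```python
# A minimal sketch of a traffic-decay trigger. Weekly session counts per URL
# (latest week last) and the queue_refresh hook are hypothetical stand-ins.
WEEKLY_SESSIONS = {
    "/blog/lms-comparison": [1200, 1150, 1100, 840],
    "/blog/onboarding-checklist": [400, 410, 395, 405],
}

DROP_THRESHOLD = 0.20  # mirrors the "more than 20%" trigger described above

def queue_refresh(url: str, drop: float) -> None:
    # Stand-in for whatever your team uses: a ticket, a CMS task, a workflow run.
    print(f"Refresh queued for {url} (traffic down {drop:.0%} week over week)")

def check_decay(sessions_by_url: dict[str, list[int]]) -> None:
    for url, sessions in sessions_by_url.items():
        if len(sessions) < 2 or sessions[-2] == 0:
            continue
        drop = (sessions[-2] - sessions[-1]) / sessions[-2]
        if drop > DROP_THRESHOLD:
            queue_refresh(url, drop)

check_decay(WEEKLY_SESSIONS)
```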

Information gain + human judgment + speed = durable growth.

AI rewards marketers who think like builders, not publishers.

Now that we’ve covered what works, the next step is building a system that makes those results repeatable. The following framework shows how leading teams plan, execute, and measure visibility in AI search.

The New System of Action

Crafting content that meets the demands of AI search now depends on a repeatable process that connects strategy, creation, measurement, and trust. This is the new system of action for modern content and marketing teams.

It’s a practical framework any organization can use. The goal is to make visibility measurable and repeatable, using tools and systems that fit your workflow and ignite your team’s creativity.

1. Know exactly what to do next

Use data to know where to focus before you create anything. Visibility grows faster when you prioritize the queries that matter most.

  • Dive deep into the topics, prompts, and pages driving visibility and performance.
  • Surface opportunities on your site, external sites, and even Reddit threads on a regular cadence.
  • Prioritize topics based on potential impact and effort, then align your team around the next best moves.

Result: A short list of high-impact topics that tells your team exactly where to invest next.

2. Create and refresh with precision

Keep your content system active and relevant. AI search rewards teams that update often and publish with the right structure to be found and cited.

  • Combine human expertise with precise AI to bring your brand’s stories to life with workflows for creation and refresh across both owned and earned channels.
  • Automate triggers for updates every 60–90 days, or when traffic or citations drop.
  • Design templates and review cycles that maintain accuracy, speed, and brand context.
  • Centralize where you collaborate with your team to accelerate approvals and stay aligned.

Result: A steady flow of content that stays visible, trusted, and aligned with how humans and AI search.

3. Measure your ROI and impact

The way we measure content performance has changed. Traffic and rankings once defined success, guided by impressions, clicks, and keyword positions. Now, visibility is measured by how often your brand is cited, mentioned, and trusted inside AI answers.

The best teams are shifting to a holistic approach that looks beyond search rankings to understand how the brand shows up across all discovery channels.

In AI search, visibility depends on appearing in trusted, authoritative answers on the topics that matter most.

To do this, don’t chase keyword volume and traffic. Instead, map out your most important topics, the queries that matter most, and where you want your brand to be seen as credible and useful.

What’s the ROI of your content? How has it performed over time? And where does your brand stand today?

What to measure:

  • Brand Visibility: How often your company appears in AI-generated answers.
  • Citation Rate: How frequently your pages are used as trusted sources.
  • Share of Voice: How your visibility compares to competitors across AI search.
  • Sentiment: Whether mentions are positive, neutral, or negative.

The 2026 State of AI Search, developed with growth strategist Kevin Indig, confirmed many of these metrics. Pages that go more than three months without an update are 3× more likely to lose visibility. Annual updates are the minimum bar, with 70% of AI-cited pages updated within the past year. For SaaS, finance, and news, the window is tighter still.

One new signal worth adding to your dashboard: McKinsey research shows that 50% of Google searches already surface AI summaries, a number projected to hit 75% by 2028. Strong SEO and strong AEO aren’t parallel strategies. They’re the same investment.

Don’t wait for a traffic drop to trigger a refresh audit. The teams with the highest compounding visibility run standing weekly reviews of citation rate, share of voice, and pages aging out of the freshness window. They act before the decay starts.

Result: A clear view of what’s driving growth and where to focus next.
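None of the four metrics defined above requires a heavy platform to prototype. Here is a minimal sketch of how they fall out of a batch of tracked AI answers, assuming you have already collected answer text, cited URLs, and a sentiment label per brand mention; the records, brand names, and domains below are invented for illustration.

```python
# A minimal sketch of the four visibility metrics, computed from a batch of
# tracked AI answers. All records, brands, and domains are hypothetical.
from collections import Counter

ANSWERS = [
    {"text": "Acme and Rivalco both offer...", "cited_urls": ["https://acme.com/pricing"],
     "mentions": {"Acme": "positive", "Rivalco": "neutral"}},
    {"text": "Rivalco is the usual pick...", "cited_urls": ["https://reviews.example/best-tools"],
     "mentions": {"Rivalco": "positive"}},
]

BRAND = "Acme"
OWN_DOMAIN = "acme.com"

total = len(ANSWERS)
brand_visibility = sum(BRAND in a["mentions"] for a in ANSWERS) / total
citation_rate = sum(any(OWN_DOMAIN in u for u in a["cited_urls"]) for a in ANSWERS) / total

mention_counts = Counter()
sentiments = Counter()
for a in ANSWERS:
    for name, sentiment in a["mentions"].items():
        mention_counts[name] += 1
        if name == BRAND:
            sentiments[sentiment] += 1

share_of_voice = mention_counts[BRAND] / sum(mention_counts.values())

print(f"Brand visibility: {brand_visibility:.0%}")  # answers mentioning the brand
print(f"Citation rate:    {citation_rate:.0%}")     # answers citing your own pages
print(f"Share of voice:   {share_of_voice:.0%}")    # brand mentions vs. all tracked brands
print(f"Sentiment mix:    {dict(sentiments)}")
```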

4. Build a system of record for trust

In a world where AI generates endless variations of your message, the real differentiator is consistency. Consistency builds credibility, and credibility fuels authority. A system of record becomes the single source of truth that keeps every workflow, prompt, and piece of content aligned, factual, and unmistakably yours.

It should include:

  • Product knowledge: Core features, differentiators, and pricing context.
  • Brand voice: Tone, phrasing examples, and common pitfalls to avoid.
  • Positioning and messaging: Approved narratives and target personas.
  • Data sources: Verified research your team can cite confidently.
  • Governance rules: Who owns updates, how changes are approved, and where they’re tracked.

This structure turns scattered information into reusable, trustworthy context that every workflow can draw from.

Each component should stay in sync with your existing systems. Store this information in a knowledge base that grounds every prompt and output. It keeps your context organized, prevents drift, and reduces friction between teams. As your product, positioning, or tone evolves, your outputs evolve too. Your content always reflects who you are now, not who you were six months ago.
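What that grounding can look like in code is straightforward. Here is a minimal sketch, assuming the context library is a simple structured object your workflows load before every prompt; the fields, example values, and the idea of concatenating them into a prompt header are invented for illustration.

```python
# A minimal sketch of a "context library" grounding every prompt. Fields and
# values are hypothetical; in practice this lives in a version-controlled
# knowledge base, not inline in a script.
CONTEXT_LIBRARY = {
    "brand_voice": "Plainspoken, specific, no superlatives.",
    "positioning": "Acme is the compliance-first option for mid-market teams.",
    "data_sources": [
        "2026 customer survey (n=1,200), verified",
        "Public pricing page, updated 2026-03",
    ],
    "governance": {"owner": "product-marketing", "review_cycle_days": 90},
}

def grounded_prompt(task: str, ctx: dict = CONTEXT_LIBRARY) -> str:
    # Every workflow draws on the same approved context, so tone,
    # positioning, and facts don't drift between outputs.
    return (
        f"Brand voice: {ctx['brand_voice']}\n"
        f"Positioning: {ctx['positioning']}\n"
        f"Approved facts: {'; '.join(ctx['data_sources'])}\n\n"
        f"Task: {task}"
    )

print(grounded_prompt("Draft the intro for a product comparison page."))
```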

Result: A reliable foundation that keeps every message on-brand, factual, and trusted across all channels.

How to Turn the System Into Visibility

The system of action gives teams a repeatable way to plan, create, measure, and maintain trust. To turn that system into real visibility, you need consistent action. The following four plays show how leading teams do it.

1. Create: Originality and structure win visibility

Originality is the moat in AI search. Models reward content that introduces new information, but it also must follow a clear structure they can easily interpret and trust.

Across more than 12,000 pages analyzed, every structural element tested appeared more frequently in ChatGPT-cited content, often by margins of 20 to 40 percentage points, compared to Google’s top results.

  • Pages with FAQs show a 40% higher likelihood of being cited in AI search.
  • Pages with three or more schema types are 13% more likely to earn AI citations.
  • A clear heading hierarchy (H1 to H2 to H3) increases citation odds 2.8×.
  • Organized lists and tables appear in nearly 80% of ChatGPT citations, compared to 29% in Google’s top results.

At Carta, this approach turned into results fast. By embedding structured authoring and proprietary data into every post, the team achieved a 7× increase in AI citations and a 75% citation rate on newly published pages without adding headcount.
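In practice, “structured authoring” is mundane. Here is a minimal sketch of FAQ markup plus a second schema type expressed as JSON-LD, built in Python so a page template can render it into a script tag; the headline, questions, answers, dates, and names are placeholders.

```python
# A minimal sketch of FAQ + multiple schema types as JSON-LD, generated in
# Python. The headline, Q&A pairs, dates, and author are placeholders.
import json

def faq_schema(pairs: list[tuple[str, str]]) -> dict:
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How AI assistants cite sources",
    "dateModified": "2026-04-01",  # the freshness signal discussed above
    "author": {"@type": "Person", "name": "Jane Doe"},
}

faq = faq_schema([
    ("How often should cited pages be refreshed?",
     "Pages updated within the last three months are cited far more often."),
])

# One <script type="application/ld+json"> block per schema type on the page.
print(json.dumps([article, faq], indent=2))
```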

2. Refresh: Updated content builds trust

Freshness is now one of the strongest signals of trust in AI search. Models consistently favor pages that are recent, accurate, and actively maintained, especially for commercial queries tied to purchase decisions.

  • 70% of cited pages were updated within the past year on ChatGPT.
  • Pages refreshed within 3 months are 3× more likely to be cited.
  • Companies in fast-moving industries like SaaS, finance, and news have only a three-month window before their content is out of date.

Webflow automated refresh workflows across its content library using AirOps, integrating directly with their CMS so updates could publish without manual staging. The results came fast: a 5× increase in content refresh velocity, a 40% traffic uplift within days of publication, and ChatGPT-attributed sign-ups growing from 2% to nearly 10% — with AI-sourced traffic converting at 6× the rate of traditional organic search.

3. Third-Party: Offsite signals add validation

Visibility doesn’t stop at your own domain. When AI models surface brands during early-stage commercial discovery, they look for external validation, not what the brand says about itself. In our research analyzing more than 21,000 brands, 85% of brand mentions are sourced from third-party content, not the brand’s own site. This shows that authority now lives across the web, not just on your homepage.

  • Brands are 6.5× more likely to be cited through third-party sources than from their own domains, making external validation the dominant driver of visibility in AI search.
  • 68% of brand mentions are unique to a single AI model, so brands need consistent coverage across external sources to maintain visibility.
  • Nearly 90% of all third-party citations come from listicles, comparisons, and review sites, and 80% of cited brands show up within the first three positions. AI relies on these ranked formats to understand which brands define a category.

The data is specific. Our research found that nearly 90% of all third-party citations come from listicles, comparison pages, and review sites — and 80% of cited brands appear within the first three positions of those formats. If you’re not in the top three on a key comparison page, you’re effectively invisible in that AI answer.

Where to focus your offsite effort:

  • Reddit appears as a cited source in roughly 22% of AI-generated answers. Authentic peer discussion signals real-world credibility. The play isn’t brand promotion; it’s genuine participation in conversations your buyers are already having.
  • YouTube is an underrated citation source, particularly for non-branded “how-to” queries. 75% of YouTube citations in AI answers occur in exploratory searches.
  • Listicles and comparisons are the highest-leverage surface to influence. If a publication in your category publishes a “best [your category] tools” list and you’re not on it (or not in the top three), that’s the first place to focus offsite outreach.

One more principle worth reinforcing: content must be quotable. Vague positioning and category-level claims give AI platforms nothing concrete to extract. The brands that earn the most offsite citations write in clear, specific, factual language that a model can lift and trust. Credibility is built in the specifics, not the superlatives.

As TrustRadius CMO Allyson Havener notes, “The most powerful influence happens where attribution can’t see: visibility in AI answers, peer referrals, and third-party proof. Credibility is the lever.”

4. Social Engagement: Community creates credibility

Community platforms have become the new trust layer of search. AI models now prioritize authentic participation and peer validation over brand promotion.

Our analysis of 5.5M answers found that user-generated citations cluster across four main types of platforms.

  • Community Q&A spaces like Reddit and YouTube reward direct expertise and real discussion.
  • Social platforms such as LinkedIn and X surface professional commentary and peer validation.
  • Community editorial sites like Wikipedia and Medium build authority through collective editing and consensus.
  • Review and rating platforms such as G2 and Trustpilot reinforce credibility through user feedback and proof points.

Visibility and awareness happen in real conversations across Reddit, LinkedIn, and YouTube, where the freshest and most authentic insights are shared. These platforms are increasingly cited in AI answers because they reflect what people are actually saying and searching for in real time, not static pages frozen in the past.

  • 48% of AI citations come from Reddit, LinkedIn and YouTube.
  • Reddit appears as a cited source in about 22% of generated answers.
  • 75% of YouTube citations occur in non-branded “how-to” queries where users are exploring, not searching for a specific brand.

LegalZoom focuses on high-impact Reddit discussions that align with its brand. Using AirOps workflows, the team identifies opportunities and drafts responses reviewed for compliance and accuracy, reducing response times from 48 hours to under 30 minutes.

The Compounding Loop

These actions strengthen each other over time. Original ideas create content worth refreshing. Fresh content earns new mentions across trusted sources. Those mentions spark conversations in communities that feed the next wave of ideas. This is how enduring visibility is built: a continuous loop of creation, refresh, validation, and engagement. Teams that keep the loop in motion build authority faster and sustain it longer.

Organize the Team That Powers the System

Content engineering is now a job title people are actively hiring for.

AirOps University offers certification in content engineering, and the AirOps Cohort, a two-week live training program, has produced a growing community of certified practitioners across enterprise marketing teams, agencies, and in-house SEO functions. There’s a dedicated job board and an expert marketplace. The role has moved from concept to profession.

The bar for standing up a content-led growth system has dropped significantly as a result. You don’t need to build this capability from scratch or spend months defining what the role looks like internally. There’s a growing talent pool, a shared curriculum, and a community of practitioners who have already solved the problems your team will face.

The four-role structure (Context Librarian, Content Engineering team, Strategy Lead, and Executive Sponsor) gives each function a clearer hiring path, a set of shared tools and workflows, and external peers to learn from. It’s no longer a model you have to build from first principles.

Learn more about the evolution of the 10x content engineer.

Context Librarian: Govern brand truth

Content only moves fast when everyone trusts the foundation. This role owns the single source of truth for product definitions, tone, and positioning, built from the inputs of product marketing, legal, and other key teams. By aggregating what matters most across functions, the Context Librarian maintains a “context library” that keeps every workflow aligned, accurate, and ready to move with speed.

Result: Every project starts from an approved, reliable context that speeds up collaboration and reduces review cycles.

Content Engineering: Build systems that scale quality

The content engineer designs the workflows that power the entire system. They connect research, briefs, and refreshes into one repeatable process and integrate AI tools without losing human oversight. Their work turns creative ideas into structured, scalable operations.

Result: Higher output, greater precision, and a consistent standard of quality across every channel.

Strategy Lead: Turn data into smart bets

The strategy lead translates visibility and performance data into clear priorities. They identify which topics or formats are compounding results and which need to be retired or refreshed. Their goal is to shorten feedback loops so the team learns faster and focuses on what moves the needle.

Result: Every decision ties back to measurable ROI and the system gets smarter with each cycle.

Executive Sponsor: Clear the path and set the mandate

AI search has become a leadership priority. The executive sponsor provides top-down alignment across marketing, product, and legal. They remove obstacles, secure budgets, and make it clear that speed and experimentation are not optional—they’re expected.

Result: A unified mandate that empowers the team to move fast, make decisions confidently, and scale with support from the top.

Together, these roles form a loop of clarity, execution, and learning. Context librarians define the truth. Content engineers operationalize it. Strategy leads turn insight into action. Executive sponsors keep the path clear.

This structure turns a content team from a production line into a growth engine that’s built for speed, trust, and adaptability.

What to Do Next

This is not the time to slow down. The rules of visibility are changing every quarter, and the advantage now belongs to teams that move with structure. The best teams measure visibility weekly, refresh content quarterly, and keep human and AI systems learning together in one loop.

The shift ahead is bigger than technology alone. Visibility now depends on how well your systems, workflows, and people operate as one connected engine. High-performing teams already think of this as a core operating principle, not a campaign.

If you are ready to see where your brand stands and what it will take to compete, our team can help. AirOps works with marketing and growth leaders to evaluate visibility, identify winning strategies, and design systems of action that match each organization’s goals and structure.

Book a demo if you’re a brand ready to take control of your AI search visibility and stop flying blind.

Get started immediately with this exclusive free trial.

SEO Is Filed Under Marketing — That’s The Whole Problem via @sejournal, @pedrodias

Every technical SEO has a version of this meeting.

The migration shipped on Friday. The redirects went in. Someone on Monday noticed organic traffic had fallen off a cliff, and someone else remembered there was, in fact, an SEO on the team somewhere, and perhaps they should take a look.

This isn’t the meeting anyone puts on the LinkedIn headline. It’s the forensic cleanup. The polite explanation of why a decision you weren’t consulted on is now producing consequences you would have flagged if anyone had thought to tell you it was happening. The quiet realization that your job, not for the first time, is to do archaeology on an outcome that was predictable the moment the roadmap got drafted without you in the room.

Welcome to SEO. The dark art. The mysterious discipline. The thing executives describe in tones usually reserved for alternative medicine and aggressive tax planning, and the profession The Verge once called “the people who ruined the internet.”

The industry has spent years worrying about this reputation. There are conference talks. There are working groups. There are, I assume, strategic initiatives. The prescribed fix never changes: SEOs need to communicate better. Get a seat at the table. Build executive presence. Translate technical concepts into business value. Develop soft skills. Take an MBA. Learn storytelling. The recommendations multiply every year, nobody moves, and everyone agrees there’s a problem.

Here is the part nobody at the working group seems willing to say: The reputation is earned. Not because SEO is actually mystical. It isn’t. The reputation is earned because the profession has been filed under marketing for 20 years, and a discipline responsible for outcomes it has no authority to produce behaves, from the outside, exactly like dark magic.

SEO is in the wrong part of the organizational chart. Everything called a perception problem flows from that one structural mistake. The rest of this post is what the mistake has produced, why it’s about to get materially worse, and why the people with the authority to fix it aren’t going to.

Responsibility Without Authority

Here’s the list of things SEO is expected to produce outcomes on.

URL structure. Rendering behavior. Canonical signals. Internal linking architecture. Schema and structured data. Content modeling. Information architecture. Pagination. Faceted navigation. Crawl efficiency. Indexability logic. Status codes. Redirect chains. Site performance. Mobile parity. Image and media handling. Hreflang. The sitemaps. The robots directives. The rendering pipeline that decides whether any of the above reaches a crawler in the first place.

Here’s the list of those things marketing owns.

None of them.

Every item on the list is owned by product or engineering. Some of them are owned so deep inside engineering that getting them changed requires a ticket, a sprint, a roadmap slot, and a PM willing to argue for prioritizing work they don’t personally benefit from shipping. That’s fine if you’re inside the org that owns the thing. It’s a problem if you’re a marketing function being asked to influence it.

This is the diagnosis. It has a name in every discipline that’s had to deal with it: responsibility without authority. Someone is accountable for an outcome they can’t unilaterally produce. It’s a known failure mode in organizational design, which is a polite way of saying every other discipline figured this out decades ago. In SEO, we’ve normalized it so thoroughly that practitioners build whole careers around being good at it.

The skill the industry calls “stakeholder management” is the skill of working around this problem. Translate. Influence. Build relationships. Earn trust. Tell the story. Take the executive to lunch. All of it is the language of a function that has to beg for every deliverable, because it can’t ship any of them itself.

Let’s be clear about what begging produces. When SEO needs a canonical fix, it asks engineering. When it needs a URL structure change, it asks engineering. When it needs schema implemented, it asks engineering. When it needs pagination logic redesigned, it asks engineering. When it needs the rendering pipeline adjusted so that the content actually reaches crawlers, it asks engineering very politely, ideally with a deck. None of those requests are guaranteed to be honored. The function making the request doesn’t own the backlog, doesn’t set priorities, and doesn’t report into a leadership chain that can force the issue.

What does the function own? Content. Specifically, the content it can produce without asking anyone. That’s the one lever it can pull cleanly. We’ll come back to what that does to the profession’s theory of the world.

The genuinely embarrassing part is that everyone involved agrees the arrangement is dysfunctional. Marketing leaders complain that SEO can’t deliver. SEOs complain that they have no authority. Engineering complains that SEO requests arrive late and without context. Product complains that nobody told them. Everyone’s right. They’re all looking at the same structural error from different sides of it.

The fix isn’t better stakeholder management. The fix is moving the function to the part of the org that owns the surface area it’s responsible for. Not a title change. Not a dotted line. Not a cross-functional working group that meets on Thursdays. Actual reporting into the org that owns the work, with the authority, budget, and priority access that comes with it.

This isn’t a radical proposal. Every other technical function figured it out decades ago. Security used to be treated as a side concern of IT, was consistently marginalized, and now lives under its own leadership because the failure mode was too expensive to ignore. Site Reliability Engineering was invented specifically because treating reliability as someone’s part-time concern produced predictable outages at scale. Both disciplines moved when it became clear the work couldn’t be done from where it had been filed.

SEO is the last major technical discipline still filed under the function that doesn’t own any of the work.

Campaign Time Vs. Infrastructure Time

Marketing operates on campaign time. Quarterly planning. Monthly reporting. Weekly standups where someone asks what has moved since last week. It’s a rhythm optimized for activity you can start, finish, and measure inside a budget cycle. That rhythm is load-bearing for a lot of what marketing actually does. Campaigns have start dates and end dates. Launches have moments. Brand work compounds, but most of the artifacts a chief marketing officer is evaluated on are discrete, time-boxed, and attributable to a quarter.

SEO doesn’t operate on campaign time. It operates on infrastructure time.

The work that produces durable search presence is the work of getting architectural decisions right and then leaving them alone for years. A URL structure that survives three migrations is worth more than a content campaign that trends for a week. An information architecture that scales with the business is worth more than any individual piece of content that lives inside it. The compounding is in the foundations, not the surface. Practitioners know this. It’s in every conference talk about technical debt, every post-mortem on a botched migration, every hushed conversation about a client who redesigned their site without telling anyone.

Try explaining infrastructure time to a function measured on quarterly pipeline contribution.

The function can’t hear it. Not because the people in it are stupid. They aren’t. It can’t hear it because the evaluation framework can’t process it. A CMO who invests this quarter’s budget in work that won’t produce visible returns for 18 months is a CMO who won’t be at the company in 18 months to see the returns. The incentive structure of the role forecloses the investment. You can’t ask someone to spend their political capital on outcomes that will land after they’ve been replaced. They will, rationally, spend it on outcomes that land before the next board review.

So the work gets compressed. The infrastructure conversation becomes a content conversation, because content can be produced inside a quarter. The migration becomes a launch, because launches have dates. The site architecture becomes a “content strategy refresh,” because that’s the vocabulary the budget line expects. Every part of SEO that doesn’t fit inside the campaign calendar either gets deprioritized, rebranded as something that does fit, or quietly declared out of scope.

This is why SEO audits sit on shelves. The audit correctly identifies 47 architectural issues that will take 18 months of engineering time to resolve. The function responsible for acting on the audit has to report results next quarter. The math doesn’t work. The audit gets filed, the function ships a content campaign, and the 47 issues stay on the shelf until the next agency writes the next audit identifying the same 47 issues, phrased slightly differently. Sometimes the new agency charges more for identifying them a second time. It’s a growth industry.

You can spot a site that’s been through this cycle a few times. It has a lot of content. It has very little that works.

The point isn’t that campaign time is wrong. It’s that campaign time and infrastructure time are different timescales, and you can’t get infrastructure outcomes by running infrastructure work on a campaign cadence. It’s the operational equivalent of asking someone to build a foundation during a sprint. You’ll get something poured, and it will not hold.

Product orgs already understand this. Roadmaps are measured in quarters but planned in years. Architectural decisions get scoped against multi-year consequences as a matter of routine. The entire discipline of engineering management exists to reconcile the short-term delivery pressure with the long-term integrity of the system. It’s not perfect. Product orgs have their own pathologies, and anyone who’s watched a feature factory run itself into the ground can list them. But the conceptual vocabulary for treating some work as infrastructural is there. The meetings exist. The rituals exist. The career paths exist for people whose job is to protect long-horizon work from short-horizon pressure.

None of that vocabulary exists in marketing. It was never supposed to. Marketing’s job isn’t infrastructure. That’s the point. Putting SEO in marketing isn’t just a filing error. It’s filing infrastructure work inside the one function in the org explicitly structured not to do infrastructure work.

When Content Is Your Only Lever

Here is the thing about being trapped in responsibility without authority on campaign time: it shapes what you believe.

A function can only develop expertise in the problems it’s allowed to solve. When the only lever a profession can pull without asking permission is content, the profession slowly, over years, over careers, over decades, develops a theory of the world in which content is the answer to most problems. Not because anyone sat down and decided that. Because the feedback loops that produce professional instincts only have one surface to operate on.

You ship content. You measure content. You get promoted for content. You train juniors on content. You write conference talks about content. You consult on content. Eventually, you believe, in the genuine, felt-in-your-bones way that professional instinct works, that content is what SEO is. Not because you’ve reasoned your way there. Because it’s the only thing the structure around you has ever let you ship.

Google hasn’t exactly helped. For 20 years, the headline advice out of Mountain View has been “create great content.” Never “build a valuable product.” Never “get your architecture right.” Never “think about whether the thing you’re publishing should exist.” The technical guidance is in the docs, buried where most of the profession doesn’t read, written for engineers who aren’t the ones being asked to act on it. The loud advice, the one that became the industry’s shared vocabulary, the one repeated in every Search Central video and quoted in every agency pitch, was always about content. Practitioners followed the loudest signal available. That’s what professionals do. The signal pointed at content for two decades, and the profession dutifully pointed itself in the same direction.

Screenshot from X, April 2026

None of this absolves the industry. It does explain why the dysfunction has been so stable for so long. When the structure you work inside can only fund content, and the authority figure you’re trying to please only talks about content, developing a non-content theory of the work requires actively ignoring two reinforcing signals at once. Most people don’t. Most people shouldn’t have to.

This is how the industry ends up with GEO positioned as “optimize your content for LLMs.” It’s how “E-E-A-T” got absorbed as a content-quality checklist rather than what it actually describes. It’s how every major architectural shift in search and retrieval over the last five years has been immediately translated into a content prescription, regardless of whether the underlying change had anything to do with content.

AI Overviews arrive. The industry response: Write content that gets cited.

RAG pipelines become load-bearing for half the AI assistants on the internet. The industry response: Write content that chunks well.

Agent browsers start navigating sites on behalf of users. The industry response, presumably arriving next quarter: Write content for agents.

None of these responses are wrong, exactly. Content does play a role. The response is incomplete in a consistent, predictable direction, and the direction is the direction of the only tool the profession has ever been allowed to use. When all you have is a content calendar, everything looks like a content gap.

The architectural work that actually governs whether any of this succeeds (rendering pipelines, data models, URL design, schema, crawl surfaces, API exposure, the question of whether your application even produces stable, parseable HTML for a non-human client) sits outside the function making the prescription. So the prescription routes around it. It has to. The function doesn’t have the authority to ship the architectural work, so the architectural work exits the recommendation.

What the client receives is a content strategy. What the client needed was an architecture review. The consultant isn’t being dishonest. They’re producing the output their seat in the org allows them to produce.

This is the dysfunction loop. Marketing placement produces a content-only toolkit. The content-only toolkit produces a content-only theory of the work. The content-only theory produces prescriptions that ignore architecture. Sites spend a decade following those prescriptions. The architectural debt compounds. Eventually, something important breaks (a migration, a rendering change, a platform shift), and someone wonders why all that content didn’t save them.

It couldn’t have. The content was never the load-bearing layer. The industry just lost the vocabulary to describe what was.

Four Retrieval Contexts, One Architecture Problem

For most of SEO’s existence, there was one retrieval system that mattered. Googlebot crawled, Google’s index stored, Google’s ranking algorithms ordered, and whatever Google shipped defined the work. The profession developed against a single target. That target’s requirements were stable enough, for long enough, that “SEO” and “Google SEO” became synonymous to the point where nobody noticed the conflation.

That era is over. The target has multiplied, and the new targets don’t share requirements.

My read is that most sites now need to be correctly interpreted by not one retrieval system but at least four, each with materially different architectural needs. Classic search crawlers still do what they’ve always done (fetch, render, index, rank) and still care about rendered HTML, canonical signals, internal linking, and crawl efficiency. That work hasn’t disappeared. It’s just no longer the only work.

Alongside it sit three classes of retrieval that have arrived, are arriving, or are about to arrive at scale. Retrieval-augmented generation pipelines, where content gets chunked, embedded, and surfaced in response to queries that may never touch a traditional search results page. Agent browsers, where software acts on behalf of a user (filling forms, completing purchases, extracting information) and needs the site’s interactive surface to be stable, parseable, and actionable. Training crawlers, whose presence on your site determines whether your content is even available to the models that power everything upstream of it.

Each of these has different requirements, or at least that’s what the behavior I’ve seen suggests. RAG pipelines appear to reward content that survives chunking: semantic boundaries that align with how retrieval windows get constructed, prose dense enough to carry meaning when it’s separated from its surroundings, metadata that persists through tokenization. Agent browsers appear to reward interactive affordances that hold up to automation: stable selectors, predictable navigation, structured data that describes what can be done on a page, and not just what it says. Classic crawlers reward what they’ve always rewarded, and the requirements occasionally conflict with the others. Content optimized for chunking may render poorly when the crawler expects traditional page structure, and sites that expose rich agent-friendly surfaces may produce indexing decisions that weren’t obvious a year ago.
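To make the chunking point concrete, here is a minimal Python sketch contrasting arbitrary fixed-width splits with paragraph-boundary splits. It is illustrative only: real RAG pipelines use tokenizers, overlap windows, and embedding models, and the word-count budget below is just a stand-in for a token limit.

```python
# Illustrative only: contrasts naive fixed-width chunking with
# paragraph-boundary chunking. Real RAG pipelines use tokenizers,
# overlap windows, and embedding models; this sketch just shows why
# semantic boundaries matter once text is read out of context.

def naive_chunks(text: str, size: int = 400) -> list[str]:
    """Split on raw character counts, ignoring document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def paragraph_chunks(text: str, max_words: int = 120) -> list[str]:
    """Group whole paragraphs until a rough word budget is reached."""
    chunks, current, count = [], [], 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks


if __name__ == "__main__":
    sample = (
        "Overview of the product and who it is for.\n\n"
        "Pricing details, including the enterprise tier.\n\n"
        "Migration notes for customers on the legacy plan."
    )
    print(naive_chunks(sample, size=60))           # splits mid-sentence
    print(paragraph_chunks(sample, max_words=12))  # keeps paragraphs intact
```

The point of the contrast is that a chunk which ends mid-sentence carries less recoverable meaning when it is retrieved on its own, which is exactly the failure mode the prose above describes.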

I’d stake the general shape of this on the table: multiple retrieval contexts with overlapping but non-identical requirements. The specifics will keep moving.

None of this is a content problem. All of it is an architecture problem.

Deciding how your site serves four classes of non-human client with partially conflicting requirements is a product decision. It touches the rendering pipeline, the data model, the API surface, the URL structure, the caching strategy, the authentication boundaries, the robots directives. It requires trade-offs that can only be made by someone with the authority to ship across engineering and content at the same time. It requires someone to look at the four retrieval contexts, understand which ones matter for this specific business, and accept that you probably can’t optimize for all of them simultaneously.

That person is not in the marketing meeting. That person is in the architecture meeting, which is a meeting SEO generally isn’t invited to.

So what’s happening instead is predictable and, at this point, depressing to watch. The industry is translating each new retrieval context into a content prescription, because that’s the translation its org chart permits. RAG arrives, and the advice is “structure your content for retrieval.” Agent browsers arrive, and the advice will be, within a quarter or two, “write content agents can parse.” The architectural half of the work, the half that determines whether any of the content prescriptions can even be executed at the infrastructure level, goes unsaid, because the people producing the advice don’t have authority over it.

Meanwhile, the product teams making the actual architectural decisions aren’t making them with any of this in mind. They’re not malicious. They’re not negligent. They’re just not briefed, because the function that should be briefing them is in a different part of the org, producing a content strategy instead.

This is how sites go dark. Not in a single dramatic event. In a compounding sequence of architectural decisions, each individually defensible, made without anyone in the room whose job is to notice what happens at the retrieval layer when you ship them all together. The site doesn’t disappear. Its presence across four retrieval systems gets quietly, progressively worse, and by the time it shows up in a dashboard, it’s already eighteen months of decisions too late to fix cleanly.

The function that could have prevented this was downstairs, writing a blog post about E-E-A-T.

Decisions In Rooms SEO Isn’t In

Ask any technical SEO with a decade of scar tissue to describe the worst projects of their career. You’ll get a specific kind of story.

Someone decided to migrate the CMS. Someone decided to rebuild the frontend in a framework that rendered client-side by default. Someone decided a new URL structure was cleaner. Someone decided the old blog wasn’t worth porting. Someone decided the redirect logic could be simplified. Someone decided canonical tags were “technical debt.” Someone decided that consolidating three subdomains into one would be straightforward. Someone decided product pages didn’t need structured data because the new design handled it visually.

The word “someone” is doing a lot of work in those sentences. The someone is never an SEO. The SEO arrives after the decision has shipped, gets asked to “help with the SEO implications,” and spends the next six to eighteen months producing a recovery plan for a catastrophe that was entirely predictable at the point the decision got made.

This is not an accident of scheduling. It’s a structural consequence of where the function sits. Architectural decisions get made in meetings that belong to product and engineering. Those meetings have invite lists. The invite lists reflect who owns the work and who has the authority to block it. SEO, filed under marketing, is on neither list. Sometimes a senior SEO in a mature org has cultivated enough relationships to hear about decisions informally, early enough to influence them. That’s an individual achievement won in spite of the structure, not a property of the structure itself. The default state, the thing that happens when nobody has engineered around the problem, is that the architectural decisions get made and SEO finds out afterward.

“Get SEO a seat at the table” is the industry’s perennial rallying cry for this problem. Twenty years of conference talks have urged practitioners to be more strategic, more influential, more business-aligned, more whatever it takes to get invited to the meeting. It hasn’t worked. It can’t work. The meeting isn’t a table you get invited to. It’s a meeting for the org that owns the system being decided on. You’re either in that org, or you’re not, and no amount of storytelling improves the address on your org chart.

The practitioners who have, genuinely, managed to get into the room did it the hard way. They built enough credibility with product and engineering leadership that they became de facto members of decisions they had no de jure claim on. That’s a real achievement. It’s also not reproducible at scale. A profession can’t be built on the assumption that every practitioner will, through force of personality and political skill, overcome the structural arrangement of the function. Most won’t. Most shouldn’t have to.

The fix is to change the arrangement, not to keep demanding that individual practitioners heroically overcome it.

The Field Isn’t What It Looks Like From The Outside

There’s a second-order problem here that the “move SEO to product” argument has to confront honestly. Even if the org chart changed tomorrow, the supply of practitioners who could do the work the new placement demands is thinner than the industry is willing to admit.

SEO has the lowest barrier to entry of any technical discipline in the digital field. No certification actually means anything. No degree is required. No apprenticeship is standardized. People arrive from copywriting, from PPC, from affiliate marketing, from blogging, from whatever adjacent role they were doing when someone at a meeting said, “We should probably do some SEO,” and they were the nearest person available. The field absorbed all of them. Then it labeled them all “SEOs” and treated the label as if it meant something consistent.

It doesn’t. Two people with “Senior Technical SEO” on their LinkedIn can have knowledge bases that barely overlap. One of them spent 10 years running log file analyses and arguing with engineers about rendering pipelines. The other spent 10 years optimizing meta descriptions and building internal linking strategies in a CMS. Both are senior. Both are technical. The word means different things to each of them, and the industry has never insisted it should mean anything in particular.

For a long time, this didn’t matter much because the bar the field had to clear was low enough that the distribution of capability still produced acceptable outcomes on average. Sites got published. Content got indexed. Rankings happened. The weaker end of the distribution didn’t cause obvious catastrophes because the systems being optimized for were forgiving. Google was doing most of the heavy lifting, and the profession could ride on top of that.

The bar has moved. The work the piece has been describing (making architectural trade-offs across four retrieval contexts, sitting in product meetings with the authority to influence infrastructure, understanding what happens at the retrieval layer when engineering ships the quarterly roadmap) requires a specific profile. Deep technical literacy. Fluency in how web systems actually work, not just how they’re supposed to work. The capacity to read a rendering pipeline and know what it’s going to do to indexability. The capacity to sit in a conversation about caching strategy and understand which decisions will quietly delete the site from retrieval systems that don’t forgive as much as Google did.

That profile exists. It’s a minority of the field. It’s been a minority of the field for a long time, because the structure of the profession has been selecting against it. When the work that gets funded is content, the people who get promoted, hired, trained, and retained are the ones good at content. Twenty years of that selection pressure produces exactly the distribution you’d expect. The profile the industry needs most for the work emerging now is the profile the industry has been filtering out for a generation.

This isn’t a character judgment on individual practitioners. It’s an observation about what happens when a field has no barrier to entry, no standard of practice, and a structural incentive pointing everyone in the same narrow direction. You get a very large population, labeled uniformly, with wildly uneven actual capability, over-indexed on the parts of the work the structure rewards.

And the industry’s own self-descriptions make this worse. Every agency pitches “full-service SEO” as if it were one thing. Every job description asks for the same generic list of responsibilities. Every conference talk is titled as if it’s addressed to a profession with a shared baseline. None of that is true. The field is a patchwork, and the patches don’t know how different they are from each other until one of them is asked to do work the other couldn’t.

The Matching Failure

So there’s a rare profile, scattered across a field that doesn’t know it’s rare. There are organizations that urgently need it, without knowing what it looks like. And there’s a labor market connecting them that’s been trained to produce and consume the wrong match.

Practitioners who could do the product-driven work often don’t know they could. They’ve spent their careers inside marketing org charts, performing the work marketing funded, being evaluated against marketing metrics. Nothing in their day-to-day has ever mirrored back to them that their actual capability is architectural. They think they’re marketers who happen to be good with technical detail, because that’s the only identity the structure around them has offered. Some of them would be transformative hires in a product org. They’re not applying because “senior SEO” job listings aren’t framed in a way that tells them they should.

Hirers are worse off still. An organization that genuinely needs a product-embedded technical SEO, someone who can sit in architecture reviews, argue retrieval trade-offs with engineering leads, and prevent the next migration from quietly removing half the site from indexing, will write a job description that asks for none of those things. Not out of malice. Out of not knowing. The hiring manager copied the last SEO job description the company used, which was written by a marketing director who copied one from their previous employer, which was copied from an agency template, which was a descendant of a 2015 list of generic SEO responsibilities. The job listing asks for keyword research, content strategy, and “familiarity with technical SEO best practices.”

Then, having asked for the wrong things, they evaluate candidates against the wrong instrument.

The CV gets scanned for tools. Ahrefs. Semrush. Screaming Frog. Sistrix. Botify. DeepCrawl. Ahrefs again, because it was listed in a different section. The logic, such as it is, is that knowing the tools means knowing the work. It doesn’t. Tool proficiency is the thing you pick up on the first day of a new job. It’s genuinely the least interesting information on an SEO’s CV, and HR treats it as the most. A candidate who can name 12 SEO suites gets prioritized over a candidate who can explain why the rendering pipeline is quietly deindexing a third of the site, because the first CV is legible to the evaluation framework and the second isn’t. The second candidate probably also knows the tools. They just didn’t think listing them was the point. They were wrong about what the point was. Not in reality, but in the system they were applying through.

The person the company actually needs reads the job listing, doesn’t recognize themselves in it, and doesn’t apply. The person who does recognize themselves in it applies, lists 14 tools, and gets hired. The organization believes it has conducted a search. What it has conducted is a filter, and the filter was calibrated to exclude exactly the person it was supposed to find.

The market can’t execute the match even when both sides of it exist. The vocabulary that would let them find each other was never produced, because the function that should have produced it has been too busy defending its own legitimacy inside marketing to define what it actually is.

Meanwhile, HR departments, which are not the villains of this story but can only hire against the descriptions given to them, run searches optimized for the profile the field has always produced. They find candidates. The candidates are hired. The hires are placed in marketing, where the next cycle of the same pattern begins. The people who could have broken the loop are somewhere else, unreachable through the channels the organization knows how to use.

This is the deepest layer of the dysfunction. Placement produced the wrong selection pressure. The selection pressure produced a skewed professional population. The skewed population produced self-descriptions that encoded the placement. The self-descriptions produced hiring frameworks that reproduced the placement. Every turn of the loop made the next turn more certain. Nobody designed this. It’s just what happens when a structural error is allowed to run for twenty years without anyone senior enough to fix it noticing that it was the structure, not the people, that needed fixing.

The Room Stays The Same

This isn’t a respect problem. Respect doesn’t move the role. The profession could earn the respect of every CMO on earth by next Tuesday, and the work would still be happening in the wrong room, because the work was never a marketing problem to begin with.

The decisions that determine whether a site exists to search engines, RAG pipelines, agent browsers, and training crawlers are product decisions. They always were. We’ve just been staffing them with marketing hires and acting surprised when marketing instincts produce marketing outputs.

The people who could make those decisions at the level the work now requires do exist. There aren’t many, because the field has spent twenty years selecting for a different profile. There will be fewer of them in five years, because the pipeline that produced them is being automated away. And the organizations that most need them won’t find them, because the job descriptions are being written by someone who read exactly one SEO blog in 2019.

For what it’s worth, I’ve been working this way under the banner of SEO Product Management since before the term had traction. I didn’t invent the model. I just refused to pretend the marketing placement was working.

So the industry does what it always does when it can’t change the room: It changes the vocabulary. GEO. AEO. Whatever the next Sand Hill Road newsletter proposes. None of it gets anyone into the meeting. It just makes the exclusion sound current.

The dark art reputation is going to stay intact for a while longer. It’s an impressive illusion: making a product function look mystical by never letting it into the room where the decisions get made. Magicians have been running versions of this trick for centuries. It works best when the audience doesn’t know what they’re not seeing.



This post was originally published on The Inference.


Featured Image: Anton Vierietin/Shutterstock

How Brands Block AI Crawlers & Then Pay To Get Seen: The Protection Paradox via @sejournal, @billhunt

Modern marketing is full of good intentions that quietly sabotage themselves.

Nowhere is this clearer than in what I call the Protection Paradox, where smart companies spend enormous energy and money “protecting” their content or intellectual property, only to pay even more to get the same content in front of the same audiences through intermediaries.

Independently, each team can prove it did the right thing, but in practice, the brand ends up hiding its best ideas from the very ecosystems that shape demand, only to rent them back at a premium.

When Gating Content Becomes A Self‑Tax

In most B2B enterprises, “lead generation” is a shared operating doctrine. Every team is measured on some variation of leads, marketing qualified leads (MQLs), opportunities, or pipeline. That sounds aligned, but the methods and measurements of those numbers often pull teams in opposite directions.

Take the classic whitepaper marketing ecosystem:

  • The goal is to meet our MQL targets.
  • The content team produces a “thought leadership” report and saves it as a PDF.
  • Marketing wraps it in a required 10- to 15-field form with a job title, vertical, budget, timeline, tech stack, and favorite color.
  • Sales insists that “we only want serious buyers,” so the form gets longer or more complex.

The logic of the sequence feels straightforward. The content is valuable, so access should be controlled. If someone is willing to fight through a long form, they must be serious. That’s how the gate gets justified internally.

In practice, it plays out very differently. The moment that asset goes behind a form, it starts to disappear from the environments where discovery actually happens. The PDF becomes difficult for search engines to interpret, nearly impossible for AI systems to extract from, and inconvenient for anyone who just wants a quick answer. Attribution adds another layer of complexity, with teams arguing over who gets credit for the lead rather than asking whether the content is actually being discovered and used.

I’ve seen teams celebrate the fact that something is now “published,” when in reality the most important ideas are reduced to a teaser paragraph and a button to fill out the form. The substance is there, technically, but functionally it’s gone.

And then there’s the audience problem. The gate doesn’t just control access; it reshapes who even bothers to engage. The people you’re trying to reach are often the first ones to opt out of navigating the lead form gauntlet.

Senior buyers don’t have the patience for a multi-step interrogation. Practitioners who are exploring a problem aren’t ready to declare intent. The partners and influencers who might have amplified the content simply move on to something easier to reference. None of this is intentional. But it’s remarkably consistent.

To further extend “the reach” of the content, it is often syndicated to aggregators and analyst networks. This was my favorite part of meetings, when managers would demand to know how TechTarget could outrank them for their own content.

TechTarget’s model was simple:

  • Aggregate content around a popular topic, then break the ideas into multiple SEO‑optimized articles.
  • Provide a simple, lightweight form with minimal information requirements.
  • Capture and nurture the demand you could have had yourself while selling the lead back to you for $15 to $30.

Unfortunately, the original company ends up buying “qualified leads” created by its own content because an external partner has packaged the same content in a way that better matches how humans and algorithms actually discover information.

Internally, no one feels the irony:

  • Content reports success: “We produced a premium asset.”
  • Marketing ops reports success: “We generated X MQLs.”
  • Demand gen reports success: “Our cost per lead from partners is excellent.”
  • Sales reports success when any of those leads close.

From the outside, it’s absurd: the company hides its best thinking behind a hostile form, prevents it from competing in search and AI surfaces, then rents those same ideas back to itself, complete with a fresh mark‑up.

That’s the Protection Paradox in B2B: “We’re protecting the value of our content” quietly becomes “We’re taxing ourselves for access to our own ideas.”

When Everyone Else Can Quote You Better Than You Can

The irony doesn’t stop with aggregators and lead brokers. Once a “premium” whitepaper is locked behind a form, something else starts to happen, usually without anyone planning it. The organization begins to leak its own ideas back into the market, just not through its own channels.

PR teams pull out the most compelling charts, stats, and quotes and package them for journalists and analysts. Those stories end up as clean, accessible articles that are far easier to read and rank than the original document.

At the same time, customer and account teams start using the asset as a value-add. It gets shared with key clients, dropped into portals, and referenced in presentations. From there, it takes on a life of its own, often reappearing in places that are easier to access and easier to navigate than the source.

Partners do what partners always do. They take the core ideas, add a layer of their own perspective, and turn them into something more tailored to their audience. In many cases, those versions are clearer, more focused, and, whether intentionally or not, more discoverable.

None of this is irrational. If anything, it’s exactly what you would expect each team to do given their goals. What’s less obvious is the cumulative effect.

Over time, it becomes easier to encounter versions of your thinking everywhere else than it is to find your original work. The ideas spread, but the source gets harder to reach. Most people in B2B have experienced some version of this, even if they haven’t named it. You see your research cited in articles, referenced in decks, and mentioned in conversations. Prospects come into meetings already familiar with your frameworks or statistics.

But when they try to track it back to you, they don’t land on your site. They land on an analyst summary, a partner page, or a third-party library. Your version is still there, technically, but it’s buried behind a form, sitting at the end of a URL no one wants to deal with.

At that point, the content takes on a strange quality. It’s everywhere and nowhere at the same time. Widely referenced, but difficult to access at the source. And that’s where the dynamic really shifts.

You’re no longer just competing with aggregators for visibility. You’re effectively training the market to treat their interpretation as the primary version of your thinking. The ecosystem becomes the reference point, and you’ve turned your flagship content into a reference object the entire ecosystem leans on, while making it nearly impossible for anyone to get back to you without passing through someone else’s gate first.

When “AI Research” Is Just A Gate

While writing my previous Search Engine Journal article, I came across an amazing AI adoption statistic about the impact of “new and additive content” that turned out to be an AI conflation of multiple inferences from other research. When I tried to access the original research to pinpoint the statistic, I landed on a glossy page with three or four headline stats, a hero image, and a big “Download the full report” button. The moment you click, you’re pulled into a multi‑step funnel with pop‑ups, aggressive email capture, and product upsells.

You never actually see a clean, downloadable PDF that includes the methodology and sample details. Instead, you’re offered drip emails, webinar invites, and “personalized outreach” that assume your interest in one number equals intent to buy a platform.

If you accept those headline numbers at face value, you can still use them as directional inputs. But it’s important to recognize what you’re doing: treating them as marketing claims, not verifiable research. You’re building arguments on top of numbers you can’t interrogate.

In other words, the “research” is widely cited but functionally unreachable because the real asset has been optimized to maximize funnel performance rather than transparency. The Protection Paradox shows up here as epistemic debt: We protect the perceived value of the report by burying it, and then the market runs on unexamined soundbites.

Oreo, AI, And The Cost Of Being Invisible

Our B2C brethren are not immune to this thinking. A recent Digiday interview with Andrew Lederman, VP of Global Digital Commerce at Mondelez, offers an Oreo story that shows the same pattern on a global stage.

Oreo is one of the most recognizable brands on the planet. You would assume that if someone asks an AI assistant about cookies (what are the best cookies, fun cookie recipes, family‑friendly snacks), Oreo would show up almost by default.

Try these prompts yourself, especially [best cookies], then ask the AI why Oreo is not included. It’s a revealing lesson in how AI results actually work. But I digress…

Concerned about “protecting intellectual property and maintaining control over content,” Oreo’s parent company followed a familiar pattern: Treat AI crawlers as suspicious bots and keep them away from their precious brand assets. The intent was straightforward: to defend creative IP, control reuse, regain lost clicks, or maybe even reduce legal risk.

The result was anything but straightforward. Because AI systems had limited access to Oreo’s structured, machine‑readable content, Oreo showed up in only a fraction of cookie‑related responses. The world’s most famous cookie was underrepresented 90% of the time in the very channels shaping how people discover snacks, recipes, and brands.

No one set out to make Oreo invisible:

  • Legal was doing its job by minimizing unlicensed reuse.
  • IT was doing its job by restricting unknown automated traffic.
  • Marketing assumed “we’re Oreo, of course we’ll be mentioned.”

Yet the practical effect of all that “protection” was silence.
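Digiday doesn’t detail the specific directives involved, so the sketch below is hypothetical: a quick audit, using Python’s built-in robots.txt parser, of which retrieval clients a site currently allows. The user-agent tokens listed are a commonly cited set of AI-related crawlers, not a complete or authoritative inventory, and the site URL is a placeholder.

```python
# Minimal robots.txt audit: which retrieval clients can fetch the homepage?
# The user-agent list is an assumption for illustration; check each
# platform's current documentation before relying on it.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder site

USER_AGENTS = [
    "Googlebot",        # classic search crawling
    "Google-Extended",  # Google's control token for AI training use
    "GPTBot",           # OpenAI's crawler
    "ClaudeBot",        # Anthropic's crawler
    "PerplexityBot",    # Perplexity's crawler
    "CCBot",            # Common Crawl, widely used in training corpora
]


def audit(site: str) -> None:
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt
    for agent in USER_AGENTS:
        allowed = parser.can_fetch(agent, f"{site}/")
        print(f"{agent:<16} {'allowed' if allowed else 'blocked'}")


if __name__ == "__main__":
    audit(SITE)
```

Run against your own domain, a check like this makes the trade-off visible before legal, IT, and marketing each discover it separately in a dashboard.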

The irony of the “Protecting our IP” statement gets sharper when you look at Oreo’s marketing spend:

  • The brand pours significant money into social media campaigns and influencer partnerships, orchestrating global, highly produced moments designed to go viral.
  • Influencer collaborations drive tens of millions of impressions, high engagement rates, and waves of user‑generated content.
  • Mondelez has already committed more than $40 million to a custom generative‑AI content platform built with Accenture and Publicis, using it to create social spots, ecommerce imagery, and eventually TV ads for Oreo and its sibling brands while cutting production costs by an estimated 30% to 50%.

On one side of the house, Oreo is paying creators and platforms to get algorithms to talk about Oreo. On the other side, a quiet policy tells some of the most important new algorithms on earth: “You’re not allowed to see us.”

Again, each silo can prove it acted rationally, but in the aggregate, the brand is effectively financing its own invisibility, funding new content and new campaigns while starving AI discovery channels of the material they need to keep Oreo top‑of‑mind.

Why Smart Teams Keep Making The Same Mistake

It’s tempting to chalk all this up to incompetence or a bad decision, but that lets the system off too easily.

Most of these choices are made by smart, well‑intentioned teams working in their respective silos, with incomplete information, misaligned incentives, and individual key performance indicators. The problem only becomes visible when you step back and look at how those decisions interact.

What you tend to see is a kind of quiet misalignment. Each team is optimizing for something slightly different, and those differences compound over time. Content teams focus on engagement and lead volume. Legal focuses on reducing risk. IT focuses on controlling access and managing cost. None of those priorities is wrong, but they don’t naturally point toward discoverability, especially in search and AI-driven environments.

At the same time, the way “value” is defined reinforces the pattern. In many organizations, content is treated as something you extract value from at the moment of conversion, whether that’s a form fill, a download, or a measurable interaction. What gets less attention is the value created before that moment, when ideas are circulating, being referenced, and showing up repeatedly in the places where people are actually looking for answers.

That gap becomes more obvious when you try to answer a simple question: Who owns discoverability?

There are clear owners for SEO, paid media, social, email, security, and infrastructure. Each of those roles has a defined scope and a set of metrics. What’s often missing is someone responsible for ensuring that content can actually be found and used across all those surfaces, especially as AI introduces new ways to access information.

In most organizations, that responsibility is implied rather than assigned. So everyone does their job. The metrics get hit. The reports look healthy. And yet, when you step outside the system, the outcome is underwhelming. The content exists, but it doesn’t consistently show up where it matters.

That’s the Protection Paradox in practice. Not a failure of individual teams, but a system where individual team success doesn’t translate into collective visibility.

Designing Protection That Doesn’t Erase You

Let’s be clear: The answer isn’t to swing to the other extreme and open everything or abandon gating altogether, nor should we lock everything down to protect our content from being stolen by the big bad AI monsters. Some level of protection and friction still makes sense. The real question is what you’re protecting, and whether the way you’re doing it is quietly working against you.

In most of the situations I’ve seen, the issue isn’t the presence of a gate. It’s where it shows up in the journey. When the core ideas themselves are hidden, that friction is exactly where discovery breaks down. When those same ideas are allowed to circulate and are well-structured, indexable, and easy to interpret, something different happens. You can still create moments of capture, but they occur later, when intent is clearer and the exchange feels natural rather than forced.

Even the idea of “protecting IP” starts to shift when you look at it through this lens. It used to mean controlling distribution. Now it often means ensuring your ideas are the version that gets picked up, referenced, and built upon. That requires a different kind of thinking, less about blocking access entirely and more about shaping how your content shows up in structured, attributable ways.

Underneath all of this is an incentive problem. If teams are rewarded purely on gated conversions, they will gate everything. That’s a rational response. But if success is tied more closely to revenue, long-term value, and presence across search and AI-driven discovery, the trade-offs start to look different. The goal shifts from extracting value as early as possible to making sure the value exists in the first place.

The Real Risk Isn’t Being Copied. It’s Being Ignored.

The fear behind many “protective” decisions is that data will be copied, scraped, or taken advantage of. Some of that risk is real and worth managing. But in most markets, the more serious threat is obscurity: the slow erosion of your presence in the places where decisions are actually made.

When your content can’t be found, people don’t stop asking questions. They just get their answers from someone else, quite likely someone with whom you have freely shared it: an aggregator, a partner, a competitor, or a generic AI model that has never really met you.

The Protection Paradox is a warning label for this moment. If you’re spending more to amplify your content through intermediaries than you are to make it discoverable and usable at the source, you’re not protecting value; you’re paying to amplify what you’ve already hidden.

The brands that win the next decade will still protect what matters. But they’ll be brutally honest about the difference between guarding an asset and burying it, and they’ll make sure they’re not taxing themselves for the privilege of being seen.



Featured Image: Master1305/Shutterstock

B2B Buyers Choose A Vendor Before They Reach Out – 3 Ways To Be Visible When It Counts via @sejournal, @alexanderkesler

The fundamental question for 2026 is not how visible you are in search, but how wide the gap has grown between where you invest in discoverability and where buyers actually form their decisions.

Here is the reality: B2B buyers complete the majority of their research and form vendor preferences before your sellers can make their introductions.

Traditional SEO is a critical component of the brand discovery process, but it represents only a fraction of how buying groups validate decisions.

While SEO requires optimizing content for individual search intent (one person researching a solution), B2B purchasing works fundamentally differently. Enterprise software and service decisions are made when buying groups, averaging eleven members, reach consensus.

B2B buyers contact vendors only after completing 61% of their research. So, by the time buyers reach out to schedule that first demo, they’ve done most of their research out of sight of client relationship managers and have already formed a shortlist of preferred vendors.

To earn consideration from B2B buyers as a preferred vendor in 2026, organizations ought to master this invisible buying journey and the discoverability process to out-position competitors.

In this article, I will present three tactics to help you improve the discoverability of your brand beyond SEO, helping your brand appear as a top choice for B2B buyers.

How To Make Your Brand Discoverable For B2B Buyers

SEO remains essential for organic search visibility, but buyer research extends far beyond search queries.

Buyers use AI tools to research solutions and validate findings across peer networks, review sites, technical documentation, and professional networks.

This creates a need for your B2B brand to be visible across multiple channels at once.

Your ability to establish brand confidence by enabling validation across the entire buying group, as well as measuring performance in these channels, is essential for securing favorable placement on B2B vendor shortlists.

3 Tactics To Increase Brand Discoverability

1. Establish Brand Confidence

Beyond traditional search, you need credibility across peer networks and review sites where buying groups conduct research.

Ensure your brand is visible across these B2B buyer research channels:

  • Search engines, answer engines, and AI tools.
  • Review sites like G2 and TrustRadius.
  • Peer networks, including Slack, Reddit, and technical forums.
  • Technical documentation sites.
  • PR, Wikipedia.
  • Third-party sites, like partner and syndication networks.

Prioritize AEO And GEO

As buyers increasingly turn to AI tools to research solutions, answer engine optimization (AEO) and generative engine optimization (GEO) have become important to brand discoverability.

  • Conduct an AI visibility audit to assess brand visibility across AI platforms.
  • Track citations, identify entity recognition gaps, and monitor competitors in AI-generated responses.
  • Enhance technical infrastructure with schema markup and optimize content for large language models (LLMs).
  • Secure consistent citations through PR and vendor comparison content.
  • Use citation monitoring tools to connect AI visibility to revenue, not just impressions.

Review Platform Management

Buyers trust validation on the quality of solutions via professional peers more than vendor claims.

  • Maintain a steady flow of authentic reviews on sites like G2 and TrustRadius through client engagement.
  • Analyze competitors’ reviews to identify gaps your products cover, then address those gaps with specific use cases and documentation.
  • Respond promptly to every client/user review. Your responses demonstrate commitment to client success and provide context for future readers evaluating similar use cases.
  • Align review content with B2B buyer journey stages. Early-stage (top of funnel) researchers need high-level product capability validation, while late-stage (bottom of funnel) evaluators need detailed implementation and integration information.

Peer Community Engagement

When practitioners recommend your solution unprompted in peer forums, you have established genuine community support.

  • Engage in peer networks like LinkedIn, Reddit, Slack channels, and technical forums to build trust through authentic contributions.
  • Track community sentiment and branded search lift to measure impact.
  • Monitor how frequently your brand appears in organic peer discussions versus competitors.

2. Enable B2B Buyers To Validate Your Solutions

Supporting buying group decision-making relies on the discoverability of evidence that aligns with the specific priorities of individual group members.

Organizations that ensure discoverability and enable validation across technical and business stakeholders earn consideration when B2B buying groups narrow their options.

Technical Decision Maker Enablement

Technical buyers test solutions themselves before talking to sales. They research how to connect systems on GitHub, solve setup problems on Stack Overflow, and review code interfaces through live documentation before contacting vendors.

Use structured data strategies and content architecture techniques to ensure resources like code guides and setup workflows are easily discoverable by AI crawlers.

Enhance discoverability by:

  • Providing resources that allow technical buyers to test things on their own time. This includes complete code guides with working examples, test environments they can use immediately, detailed security documentation, and setup workflows for common platforms.
  • Making these resources easy to find where they actually work. Maintain GitHub projects with real examples, answer questions on Stack Overflow, and publish technical content that demonstrates expertise.
  • Creating discoverable materials that cater to different teams within an organization. Operations teams need setup guides demonstrating clean code design. Engineers need system diagrams showing how your solution fits their tech setup. Security teams need security reviews and access controls validated through independent audits.
  • Implementing FAQ schema, HowTo schema, and Organization/Product markup to improve visibility for LLMs, making resources like documentation and guides more accessible during AI search (see the sketch after this list).
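As a minimal illustration of that last bullet, the Python sketch below emits an FAQPage block as JSON-LD. The questions, answers, and output format are placeholders; real markup should describe content that is actually visible on the page it is embedded in.

```python
# Minimal FAQPage JSON-LD generator. The questions and answers are
# placeholders; schema.org markup should describe content that actually
# appears on the page it is embedded in.
import json


def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(data, indent=2)
        + "\n</script>"
    )


if __name__ == "__main__":
    print(faq_jsonld([
        ("Does the API support SSO?", "Yes, via SAML 2.0 and OIDC."),               # hypothetical
        ("Is there a sandbox environment?", "Yes, it is included on every plan."),  # hypothetical
    ]))
```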

Business Leader Validation Frameworks

Business leaders trust proven results and return on investment over technical specifications. Ensure that validation data is discoverable and geared toward demonstrating how these solutions meet industry standards.

Provide benchmark data showing how your solution compares to industry standards, with metrics executives can confidently present to their CFO and board.

  • Commission independent research that positions your approach within broader market trends.
  • Secure placement in analyst evaluations. These third-party validations carry weight with executive buyers who need external credibility to support internal business cases.
  • Distribute insights through channels executives actually monitor: LinkedIn posts that demonstrate thought leadership on strategic challenges, webinars that address business transformation rather than product features, and board-ready presentations that translate technical capabilities into business outcomes.
  • Enhance citation authority by building backlinks and optimizing for third-party mentions. This positions your solution favorably within broader market trends, making it more discoverable and credible.

B2B Buying Group Champion Enablement Systems

Internal champions require easily discoverable resources to address objections of other stakeholders and build consensus across their buying groups.

  • Equip B2B buying group champions with resource kits that provide responses to predictable concerns:
    • Finance (ROI models and cost-benefit analyses).
    • IT (integration complexity and security requirements).
    • Security (compliance frameworks and audit readiness).
    • Operations (change management and training requirements).
    • Executive leadership (strategic alignment and competitive positioning).
  • Offer presentation templates designed for different audiences:
    • Executive summaries for C-suite approval.
    • Technical reviews for architecture committees.
    • Business cases for financial justification.
    • Adoption plans for operational leadership.
  • Use citation authority-building tactics such as knowledge panel optimization and competitor comparison content to make champion resources more visible and credible.

By weaving discoverability into these offerings, organizations will better support technical decision makers in validating solutions effectively, thus positioning themselves favorably in the decision-making process.

3. Measure And Optimize

Discovery channel analytics reveal which research paths lead to actual buyer engagement and revenue.

Track Discovery Performance Across Channels

Build a comprehensive discovery analytics dashboard that monitors:

AI Visibility Metrics:

  • Share-of-voice in AI-generated responses across LLMs like ChatGPT, Perplexity, Gemini, and Copilot (a counting sketch follows this dashboard outline).
  • Citation frequency trends and competitive displacement rate within AI answers (still hard to measure today, but becoming easier as monitoring tools mature).
  • AI-sourced traffic attribution and correlation with pipeline outcomes.

Review Platform Metrics:

  • Review volume trends, average ratings across key categories (ease of use, support quality, value), and competitive positioning within your category (quarterly).
  • Sentiment analysis from peer networks like Reddit and Slack, where practitioners discuss solutions candidly.

Technical Validation Metrics:

  • Developer engagement on GitHub and Stack Overflow, API call volumes, and technical documentation traffic.
  • Page interaction depth (scroll patterns, time on page) and trial conversion rates from documentation paths.

Business Stakeholder Metrics:

  • Content consumption patterns by role and lead quality from executive-focused content.
  • Analyst report downloads and correlation with enterprise deal conversion rates.

Discovery Path Indicators:

  • Branded search lift and correlation between community engagement and inbound inquiry volume.
  • Channel combinations and content sequences that appear in successful deals.
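A minimal sketch of the share-of-voice metric listed first above, assuming you have already sampled a set of prompts and recorded which domains each AI answer cited; the sample data here is invented.

```python
# Toy share-of-voice calculation across sampled AI answers. Each record
# lists the domains cited in one answer; the sample data is invented.
from collections import Counter


def share_of_voice(answers: list[list[str]]) -> dict[str, float]:
    """Fraction of sampled answers in which each domain appears at least once."""
    counts = Counter(domain for answer in answers for domain in set(answer))
    return {domain: count / len(answers) for domain, count in counts.items()}


if __name__ == "__main__":
    sampled_answers = [
        ["yourbrand.com", "g2.com", "competitor.com"],
        ["competitor.com", "reddit.com"],
        ["yourbrand.com", "trustradius.com"],
        ["competitor.com", "yourbrand.com", "wikipedia.org"],
    ]
    for domain, share in sorted(share_of_voice(sampled_answers).items(),
                                key=lambda kv: kv[1], reverse=True):
        print(f"{domain:<20} {share:.0%}")
```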

Analyze Discovery Patterns That Drive Revenue

Trace content consumption paths that lead to demo requests, trial signups, and sales conversations. Use tracking parameters and form fields that identify origin sources.
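UTM parameters are one common way to carry that origin information; the channel names below are placeholders, and your analytics conventions may differ. A minimal sketch:

```python
# Append UTM tracking parameters to an outbound link so downstream analytics
# can attribute the discovery channel. The parameter values are placeholders;
# match them to your own naming conventions.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse


def tag_url(url: str, source: str, medium: str, campaign: str) -> str:
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))


if __name__ == "__main__":
    print(tag_url("https://www.example.com/whitepaper",
                  source="g2", medium="review-profile", campaign="2026-benchmark"))
```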

Reverse-engineer successful deals to uncover:

  • Which channels start serious evaluation (peer networks, review sites, technical documentation).
  • Whether discovery through practitioner recommendations correlates with higher-quality leads.
  • Which content types drive engagement from different stakeholder roles (technical documentation for engineers, analyst reports for executives, peer reviews for operations leaders).

Correlate discovery metrics with sales cycle length, win rates, and client advocacy rates to identify which activities drive shortlist inclusion versus those that simply generate activity without business impact.

The buyer journey has fundamentally changed. Research happens before engagement, decisions form before conversation, and shortlists solidify before prospects present themselves.

Organizations that win in 2026 understand this reality and act accordingly. They establish presence where B2B buyers research, enable validation across stakeholder groups, and measure what drives consideration.

Implemented successfully, discoverability is the revenue engine that drives conversion in the AI-led buying era.

Key Takeaways

  • Optimize for AI-powered search: AEO and GEO are now foundational to brand discoverability. Audit your visibility across ChatGPT, Perplexity, Gemini, and Copilot, then build citation authority, structured data, and AI-consumable content architecture to earn consistent inclusion.
  • Build systematic review presence: Maintain an authentic review flow on platforms like G2 and TrustRadius through consistent client engagement.
  • Engage peer networks authentically: Participate in LinkedIn, Reddit, Slack channels, and technical forums where target buyers gather. Share insights and answer questions to build organic support.
  • Enable technical validation: Provide comprehensive resources on GitHub and Stack Overflow where technical buyers validate solutions through hands-on testing.
  • Support business leader decisions: Offer benchmarking data, independent research reports, and analyst validations that economic buyers can defend to CFOs and boards.
  • Equip internal champions: Supply presentation templates, competitive frameworks, and objection response playbooks that enable champions to build consensus across finance, IT, security, operations, and executive stakeholders.
  • Measure what drives consideration: Track AI visibility metrics alongside review site performance, peer network sentiment, technical documentation engagement, and champion support usage, connecting every channel to pipeline outcomes.



Featured Image: eamesBot/Shutterstock

Comparison Of AI Citation Patterns Offers Strategic SEO Insights via @sejournal, @martinibuster

BrightEdge published new data showing the kinds of sites five AI search surfaces tend to cite in their generated answers. The data makes it possible to see how each engine weights different types of sources, with strong implications for how to promote to each one.

The research focused on five AI search surfaces:

  1. ChatGPT
  2. Google AI Overviews
  3. Google AI Mode
  4. Google Gemini
  5. Perplexity

AI Engines Cite Different Sources But Recommend The Same Brands

The BrightEdge research compared the top cited website sources across AI engines to measure how much they overlap (Source Overlap). What the data shows is that there was a wide discrepancy across the five AI search engines tested, with the lowest level of overlapping source citations between any two AI search surfaces at 16% and the highest level of agreement between any two engines at 59%.

  • Lowest level of source overlap between any two engines: 16%
  • Highest level of source overlap between any two engines: 59%
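BrightEdge doesn’t publish the formula behind these overlap figures, so the calculation below is only one reasonable reading of “overlap”: the share of one engine’s top-cited domains that also appear in another’s. The domain lists are invented.

```python
# One plausible way to measure citation overlap between two AI engines:
# compare their sets of top-cited domains. BrightEdge's actual methodology
# isn't public, and the domain lists below are invented.

def overlap(engine_a: set[str], engine_b: set[str]) -> float:
    """Share of the smaller set's domains that also appear in the other set."""
    if not engine_a or not engine_b:
        return 0.0
    smaller, larger = sorted((engine_a, engine_b), key=len)
    return len(smaller & larger) / len(smaller)


if __name__ == "__main__":
    gemini = {"nih.gov", "wikipedia.org", "mayoclinic.org", "nature.com"}
    ai_overviews = {"wikipedia.org", "reddit.com", "healthline.com", "mayoclinic.org"}
    print(f"{overlap(gemini, ai_overviews):.0%} of top sources overlap")
```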

Significant Agreement In Brand Citations

BrightEdge also measured brand name overlap between the five AI search surfaces and found more agreement across all five. The lowest brand overlap between any two AI surfaces was 36%, and the highest was 55%.

  • Lowest level of brand citation overlap: 36%
  • Highest level of brand citation overlap: 55%

This suggests that brands tightly associated with specific products and services tend to perform similarly across most of the tested AI search surfaces. The pattern may also reflect how widely those brands are cited by trusted websites, and possibly user intent and expectations.

In my opinion, the takeaway here is that associating a brand with a product or service in a consumer’s mind is a powerful way to shape user expectations, which can then translate into branded search. The SEO community has been slow to pick up on this, even though Google has long hinted that user signals play a strong role in rankings: Google has been using them since at least 2004 with Navboost, and most directly with the brand navigation signals described in Google’s brand signals patent.

Wide Divergence Of Cited Sources

BrightEdge analyzed citations from the five AI surfaces across three types of websites (Institutional, Commercial and Editorial, and User Generated Content) and discovered wide variance between all five engines, despite the convergence on citing strong brands.

Three Categories Of Sites Analyzed

  1. Institutional sites, including government, academic, and big brand industry leaders
  2. Commercial and editorial sites, including media, reviews, and listings
  3. User Generated Content (UGC), including forums, video platforms, and social content

The data shows that every engine draws from all three categories but weights the mix differently: institutional sources range from a low of 10% of citations to a high of 26%, while UGC sites range from a low of 0.2% to a high of 18%.

The category all five engines converge on most heavily is corporate brand, commercial, and editorial sites, with citation rates ranging from 37% on Gemini to 51% on AI Overviews.

BrightEdge offers this takeaway about that data:

“Review sites, comparison content, trade press, retailer listings, and finance data are the sources AI most frequently reaches for. Investment in PR, trade coverage, review site visibility, and category comparison content translates into visibility across every engine, not just one.”

Something that BrightEdge doesn’t mention is that AI search engines surface sponsored articles from trusted websites that are clearly labeled to conform with FTC guidelines on native advertising and Google’s guidelines on sponsored posts. This enables companies to tightly associate their brands with specific products and services and increase the likelihood of being cited in AI search surfaces.

Gemini And AI Overviews Differ On Website Authoritativeness

The difference between the kinds of websites Gemini and Google AI Overviews use as sources shows that Gemini is more conservative, citing institutional sites at a much higher rate than user generated content (UGC). Institutional sites include government, academic, and big-brand industry sites.

AI Overviews, on the other hand, trusts both institutional and UGC sources of information, with nearly twice as many citations going to UGC websites.

Authoritativeness of institutional versus UGC content:

  • Gemini: 26% institutional, 0.2% community
  • AI Overviews: 10% institutional, 18% community

Another revealing finding is the wide variance in the top-level domains cited by each AI search surface. Gemini tended to link out only to the most trustworthy and authoritative websites, citing .gov and .org domains at higher rates than any of the other AI engines.

Gemini: 13% .gov, 23% .org

Gemini’s answers trust institutional websites over user generated content, citing institutional sites 26% of the time while citing UGC sites only a fraction of a percentage point of the time. AI Overviews trusts UGC content to a far greater extent. Why is that?

It could be that the technologies underlying Gemini and AI Overviews differ. For example, Google’s FastSearch, which prioritizes speed over other ranking signals, may be one reason AI Overviews cites UGC sites more often than Gemini does. It’s an interesting question.

I did an informal experiment by asking both Gemini and AI Overviews to compare the use of a specific op-amp (an electrical part) in a specific amplifier.

  • Gemini’s answer cited institutional sources (Texas Instruments and the amplifier’s manufacturer).
  • AI Overviews cited the two institutional websites but also multiple user generated content (UGC) sites.

Gemini’s answer was typically conservative, sticking to those institutional sources.

AI Overviews’ citations of various UGC sites were useful in the context of this question because actual users shared their experiences with the op-amp, along with electrical measurements and comparisons to alternatives.

.Edu Sites Not Authoritative?

Another interesting finding is that none of the AI search engines cites .edu websites often. Perplexity cited .edu sites at the highest rate of any engine, at 3.2% of citations.

Those results contradict a longstanding belief in SEO circles that .edu sites are especially authoritative. BrightEdge’s research suggests that .edu sites are not treated as authoritative sources for the kinds of questions users ask AI search engines.

ChatGPT Cites A Higher Diversity Of Sources

The data also shows that ChatGPT draws on a more diverse mix of website sources, relying on its top ten sources only 18.5% of the time, with Google AI Mode right behind it at 19.4%. Gemini (26.3%) and Perplexity (26.7%) concentrate a larger share of their citations in their top ten sources.

Percentage Of Top 10 Sources

  • ChatGPT: 18.5%
  • Google AI Mode: 19.4%
  • Gemini: 26.3%
  • Perplexity: 26.7%

Gemini And Perplexity Rely On Authoritative Sites

Gemini and Perplexity tended to rely the most on authoritative websites. As already noted, Gemini trusted institutional sites the most and Perplexity cited .edu sites more than any of the other AI engines.

Perplexity showed a similar pattern of conservatively linking out to the most trusted and authoritative sites. BrightEdge’s report explains:

“Perplexity concentrates more of its citations in institutional medical, government, encyclopedic, and medical publisher sources than any other engine. Combined, those four categories account for approximately 30% of Perplexity’s citations.”

Five AI Engines, Five Distinct Citation Profiles

Here is the breakdown showing the citation distribution for each AI search surface, with Gemini and Perplexity showing a strong preference for authority sites.

Gemini

  • 26% institutional sites
  • 23% .org
  • 13% .gov
  • 0.2% UGC

Perplexity

  • 86% of brand mentions appear in position 5 or earlier
  • 30% of citations from institutional medical, government, encyclopedic, and publisher sources
  • 22% institutional sites
  • 3.2% .edu
  • 1.5% UGC sites

ChatGPT

  • Top 10 sources account for 18.5% of citations
  • 20% .org
  • 12% .gov
  • 0.5% UGC

Google AI Mode

  • Top 10 sources account for 19.4% of citations
  • 14% institutional sites
  • 7% UGC

Google AI Overviews

  • 18% UGC
  • 10.6% of citations from a single video platform
  • 10% institutional sites
  • 2.9% from a forum platform

Google AI Is Not One System

Google’s AI Mode and AI Overviews cite largely the same websites, with a 59% overlap in cited sources. Gemini has the least overlap with the other two Google systems.

  • Gemini vs AI Overviews: 34%
  • Gemini vs AI Mode: 27%

These differences show that Google’s AI systems rely on different mixes of sources, with Gemini diverging the most.

Takeaways

The data makes it possible to attach a shorthand description to each AI search surface based on the kinds of sources it tends to cite. Source citations vary widely, with each engine showing clear preferences for the kinds of sites it links to. If there is one big takeaway from the data, in my opinion it is the importance of establishing a brand connection to specific products and services.

Other Takeaways

  • Gemini and Perplexity rely on high authority brand and institutional websites.
  • ChatGPT cites a broader range of sources, showing a higher mix of websites.
  • Google’s AI Overviews cites UGC sites more than any other AI search surface.
  • Gemini shows the least amount of overlap among the three Google AI systems.
  • AI Overviews and AI Mode show the highest level of overlap.
  • Citation overlap varies widely across all five AI engines, indicating major differences in source selection.

Read the BrightEdge report: Why AI Engines Cite Different Sources but Recommend the Same Brands

Featured Image by Shutterstock/Toey Andante

The AI Skills Salary Premium via @sejournal, @Kevin_Indig


I normally write about strategy and search behavior, not labor markets. But the SEO job market is the clearest leading indicator I’ve seen of how companies are actually valuing AI skills, so I followed the data off the usual map.

946 SEO job postings show companies are willing to pay a premium for AI skills. But the signal is buried in descriptions, and the salary premium only truly activates at mid-level and above.

SEO jobs that mention AI in the title pay $113,625 at the median compared to $89,438 for jobs that don’t. That 27% gap is live in the market right now; it’s not a projection.

In this memo, I’m covering:

  • Where the 25-27% AI pay premium actually shows up in SEO postings.
  • Why filtering jobs by title misses four out of five of the roles that pay more.
  • How to position your resume (or your job description if you’re a hiring manager) so the right opportunities land on your side of the table.

About this data:

  • 946 full-time SEO roles from SalaryGuide.com were included in this analysis, posted December 2025 through March 2026, deduped at company + job title.
  • Salary midpoints from the 41.8% of roles that disclosed pay.
  • “AI mention” means the title or description contains “AI,” “LLM,” “AEO,” “GEO,” “Answer Engine Optimization,” or “Generative Engine Optimization.”
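
For illustration, here is a minimal sketch of how the “AI mention” flag and the company + job title dedupe described above could be implemented. The field names and sample posting are invented for this example, not taken from the dataset.

```python
# Hypothetical sketch of the "AI mention" flag and dedupe key described above.
import re

AI_TERMS = [
    "AI", "LLM", "AEO", "GEO",
    "Answer Engine Optimization", "Generative Engine Optimization",
]
# Word-boundary match so short terms like "GEO" don't fire inside longer words.
AI_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in AI_TERMS) + r")\b",
    re.IGNORECASE,
)

def mentions_ai(text: str) -> bool:
    """True if a title or description contains one of the tracked terms."""
    return bool(AI_PATTERN.search(text or ""))

def dedupe_key(posting: dict) -> tuple[str, str]:
    """Dedupe at company + job title, as described above."""
    return (posting["company"].strip().lower(), posting["title"].strip().lower())

posting = {
    "company": "Acme Corp",
    "title": "SEO Manager",
    "description": "Experience with LLM-driven content workflows required.",
}
print(mentions_ai(posting["title"]), mentions_ai(posting["description"]))  # False True
```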

Companies Pay 27% More Salary For AI Skills

AI in the job title commands the bigger salary premium, but the description signal covers far more ground. Only 146 jobs carry AI in the title, while 563 include it in the description. The description bucket captures nearly four times as many roles and still delivers a 25% median salary lift over non-AI descriptions ($100,000 vs. $80,000).

Image Credit: Kevin Indig

The dollar deltas are $24,187 for the title bucket and $20,000 for the description bucket. Compounded across salary negotiations over a career, neither is marginal.
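
For readers who want to check the math, here is a quick sketch reproducing the deltas and premiums quoted above from the reported medians:

```python
# Recomputing the quoted premiums from the reported median salaries.
title_ai, title_no_ai = 113_625, 89_438    # AI in title vs. not
desc_ai, desc_no_ai = 100_000, 80_000      # AI in description vs. not

title_delta = title_ai - title_no_ai       # $24,187
desc_delta = desc_ai - desc_no_ai          # $20,000
title_premium = title_delta / title_no_ai  # ~0.27 -> the 27% title gap
desc_premium = desc_delta / desc_no_ai     # 0.25  -> the 25% description lift

print(f"Title bucket: +${title_delta:,} ({title_premium:.0%})")
print(f"Description bucket: +${desc_delta:,} ({desc_premium:.0%})")
```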

The AI Requirement Is Hidden In The Job Description

Only 15.5% of SEO postings include AI in the title. 59.5% require it somewhere in the description. Employers are building AI into the role without putting it in the headline.

At senior levels, the pattern becomes near-universal:

  • 78.3% of director/executive descriptions mention AI.
  • 67.4% of manager descriptions do.

Even at mid-level, one in two job postings includes it.

A hangup here? Filtering job searches by AI in the title misses 80% of AI-required roles. The requirement sits in the body text, not the headline.

Image Credit: Kevin Indig

The AI Skill Premium Grows With Seniority

At entry-level positions, AI skills in the description carry a slight negative premium (-2.3%). Employers don’t pay new grads more for knowing AI.

The signal flips at mid level (+14.3%), then compounds sharply at the management layer.

Image Credit: Kevin Indig

A director with AI in the description earns $35,250 more at the median than one without. Senior roles may earn even more, but at that level the premium reflects AI judgment rather than tool skills, and the market prices it accordingly. Junior candidates may need AI on their resume to get the interview, but getting paid more for AI skills happens at mid-level and above.

9+ Years In, AI Skills Are Assumed

Experience requirements tell the same story with a steeper slope: For junior 0-1 year roles, 40.9% mention AI in the description. For roles requiring 9+ years of experience, that number is 92%.

Image Credit: Kevin Indig

At 9+ years, AI isn’t listed as a differentiator. Instead, it’s embedded in the role definition.

The 8% of senior postings that don’t mention it are the outliers.

The Market Has Decided, But The Titles Haven’t Caught Up

Even if the salary premium compresses later, pricing your skills against job description-level signals is still the right move today.

1. If you’re a job candidate: Screen descriptions, not titles. The title filter misses 80% of the AI-required roles and the 25-27% premium that rides with them. Put AI evidence in the top one-third of your resume, or it won’t register for the postings that pay more.

2. If you’re a hiring manager: Your pay bands are already two-tier, whether you’ve formalized it or not. Roles requiring AI pay more at the median, and most of yours don’t say so upfront. Close that gap now.

3. Mid-career and up: This is where the premium actually compounds. If you’re 4+ years in and AI doesn’t appear in the first one-third of your resume, you’re pricing yourself against an outdated market.

Quote from Josh Peacok, founder of Search for Hire:

Having been on hundreds of discovery calls with companies hiring SEOs and having built out hundreds of search teams at Search for Hire, the pattern is undeniable: SEO talent is being priced on two axes now: fundamentals and AI capability. The candidates commanding a premium aren’t the ones who can use ChatGPT, they’re the ones who can build scalable systems with it. But AI without precision judgment can take you a long way in the wrong direction, fast. The real unicorns combine that build capability with deep technical skill, strategic thinking and the ability to sit in front of a client. That combination barely exists and when it does, it doesn’t stay on the market long.

Featured Image: beast01/Shutterstock; Paulo Bobita/Search Engine Journal