Half Your Traffic Left. The SEO Industry Sent Thoughts and Frameworks

Before AI Overviews launched in May 2024, Define Media Group’s portfolio of major U.S. publishers averaged 1.7 billion organic search clicks per quarter. Steady. Predictable. The kind of number you build a business model on and then stop thinking about, because why would you?

After the launch, traffic dropped 16% and never recovered. When Google expanded AI Overviews in May 2025, the decline accelerated. By Q4 2025, organic search traffic across that portfolio was down 42% from the pre-AIO baseline.

Nearly half the organic traffic, gone, from a portfolio large enough to be directional for the entire publishing industry.

The traffic bargain (you produce content, Google sends clicks, advertising revenue funds the next round of production) has been the economic engine of the open web for 20 years. That engine is seizing up in plain sight, and the industry’s response has been to argue about which dashboard to stare at while it happens.

New Interface, Same Delusion

The first camp did what the SEO industry always does when the ground shifts: they built new tools to measure the shaking.

Prompt tracking. LLM visibility dashboards. Share-of-answer metrics. In under 18 months, an entire vendor category materialized to sell you a number that tells you how often your brand appears in AI-generated responses. It’s Search Console for the chatbot era, and it comes with the same comforting implication: If the number goes up, you’re winning. If it goes down, buy more of the thing that makes it go up.

I’ve written about this before, and I’ll be blunt again: These tools are selling you bullshit with a confidence interval drawn on it in crayon. When a dashboard tells you your brand “appeared in 73% of relevant AI responses,” what it actually measured is: We fired some prompts at an API, got some outputs, and counted mentions. That’s not a ranking. That’s a lottery ticket.

The engineers who built these models cannot fully explain why a specific output appeared. But sure, a SaaS tool perched atop Mount Dunning-Kruger with a trend line has it all figured out.

The industry keeps buying because the alternative is admitting we’re flying blind. Questioning the data means telling the room that the “directional” charts in the client deck are noise dressed up as insight. Nobody wants to be that person. So the vendors keep selling, the dashboards keep flickering, and the number doesn’t need to correlate with revenue. It just needs to fluctuate enough to sustain a subscription.

Jono Alderson made the broader version of this argument in a recent piece, Clicks Don’t Count (and They Never Did). His point: SEO has always measured the interface rather than the forces underneath it. Rankings, traffic, visibility scores. None of these were measures of competitiveness. They were measurements of a presentation layer. We spent two decades optimizing what we could see and calling it strategy.

He’s right. And prompt tracking is the newest iteration of the same mistake. Old retrieval visibility in a trench coat, pretending to be a new discipline.

The second camp is more intellectually serious. Jono’s piece is the best version of this argument, and I agree with more of it than I’m about to make it sound like.

His framework: stop measuring the interface, start measuring competitiveness. Six structural dimensions drawn from marketing science validated for decades: experience integrity, physical availability, mental availability, distinctiveness, reputation, commercial proof. AI systems aggregate signals about brands across the web, not pages in isolation. The entities that are genuinely competitive get recommended and surfaced. Visibility is the output, not the input.

I think this is broadly correct. I also think it has a timing problem the size of a crater.

Those six dimensions operate on timescales of years. Building mental availability is a sustained brand investment. Earning reputation signals is the compound interest of consistently not being terrible. Strengthening distinctive assets requires buy-in from people who’ve never heard of Ehrenberg-Bass and aren’t going to read a blog post to find out.

The traffic collapse is happening in quarters.

Tell a publisher who just lost 42% of their search traffic to “strengthen structural competitiveness” and watch their face. It’s like telling someone whose house is flooding to invest in better drainage. You’re not wrong. You’re just not helping.

Jono knows this, to his credit. When someone in his comments asked how to operationalize the framework, his answer was honest: Redefine SEO to own those areas, or navigate the organizational politics of working with the teams that do. “Lots of organizational politics, either way.” That’s the kind of understatement that only someone who’s actually tried it would make.

What Actually Broke

The measurement debate is a sideshow. The traffic bargain wasn’t a metric. It was the economic foundation of content production on the open web.

Google needed content to crawl. Publishers needed distribution to monetize. Produce something worth indexing, Google sends traffic, you convert it into revenue, that revenue funds more content. The loop ran for 20 years. Everyone pretended it was a partnership rather than a dependency, and the pretense held because the numbers worked.

AI Overviews break the loop. Google synthesizes the answer from your content and serves it directly. The user gets what they need. Your content gets consumed on Google’s surface, with Google’s ads, generating Google’s engagement metrics. You get a citation link that roughly nobody clicks and a warm feeling about “brand visibility.”

Google’s own VP of Product for Search, Robby Stein, recently described how they had to “teach the model how to link out.” Linking to publishers wasn’t the default behavior. It had to be engineered back in. The system’s natural state is to absorb your content and answer the question. Sending traffic your way is the afterthought they bolted on, so the extraction doesn’t look like what it actually is: taking your stuff and serving it as theirs.

The breakage isn’t uniform. Define’s data shows breaking news traffic up 103% across all Google surfaces, while evergreen content dropped 40%. The Top Stories carousel has been largely shielded from AI Overview incursion. Evergreen content has not. The how-to guides, the explainers, the reference material: the content categories that built the SEO industry are exactly the ones AI Overviews were designed to absorb and replace.

Google is selecting which content survives the transition. Time-sensitive content still drives clicks because you can’t summarize something that’s still developing. Everything else is increasingly raw material for the answer machine, and the machine doesn’t pay for raw materials.

If “competitiveness” replaces traffic as the operating metric, SEO’s scope has to change. Jono’s six dimensions are mostly owned by product, brand, and marketing. Experience integrity is product and UX. Mental availability is brand investment. Reputation is years of not cutting corners. Commercial proof is a function of whether the thing you sell is actually good. SEO teams control technical discoverability, content strategy, and site architecture. That’s one layer of the competitiveness framework, not the whole building.

So the discipline either expands into a cross-functional strategic role (good luck explaining to the CMO that SEO now owns brand strategy because the retrieval models changed) or it contracts honestly and positions itself as the technical infrastructure that makes competitiveness legible to machines. Either option beats “we’ll get you more organic traffic,” which is a promise that ages worse every quarter.

Clicks may not have been the right metric. Jono makes a persuasive case. We measured the interface and called it the system.

But clicks paid the bills. They funded editorial teams, justified content investment, and sustained the publishing ecosystem that both search engines and AI systems depend on for training data and retrieval sources. Without content to crawl, there’s nothing to index. Without content to train on, there’s nothing to synthesize. The irony is apparently lost on the company deploying AI Overviews.

Nobody’s building a transition strategy. The prompt-tracking vendors are selling the new dashboard. The strategists are selling the long view. Google won’t help. They broke the bargain, and their Discover push suggests they’d rather build a distribution surface they fully control than repair the one that shared value with publishers. The AI companies need content to exist, but haven’t worked out how to fund its production.

Everyone’s got a framework. Nobody’s got an answer.

The clicks didn’t count. But something needs to. Soon.



This post was originally published on The Inference.


Featured Image: Accogliente Design/Shutterstock

How Zero-Party & First-Party Data Can Fuel Your Intent-Based SEO Strategy via @sejournal, @rio_seo

There’s an interesting paradox occurring in marketing right now. Marketers have more tools and data at their fingertips than ever, yet marketing leaders somehow have less clarity than ever before.

Over the past decade, Google’s algorithms and privacy regulations have significantly shifted traditional SEO best practices. SEO has evolved from a precise science to more of a trust discipline, where marketers must infuse credibility and authority into their content to improve visibility.

The new opportunity at hand isn’t scraping more consumer behavior, but listening to it in a new way. By diving deeper into zero-party data (information customers willingly share) and first-party data (behavior observed directly on your own channels), chief marketing officers can shape their SEO strategies around real human intent.

Search success will be contingent on whether brands understand their audience well enough to create relevant, authentic, and trustworthy content at every step of the customer journey, not just when an algorithm prompts them to.

The Connection Between Zero-Party Data And SEO

Zero-party data is marketing’s cleanest and clearest source of truth. It uncovers the information customers want you to have. It unveils their preferences, motivations, and needs through methods like surveys, quizzes, chatbots, and more.

First-party data shows what users do. Zero-party data shows you why they did what they did. When paired together, both forms of data bridge the gap between analytics and empathy.

For example, a retail brand might ask site visitors in a post-purchase survey, “What is most likely to motivate you to make a purchase?” The choices the site visitor can choose between are price, sustainability, or convenience. Now, consider if nearly half of those respondents chose “sustainability.”

This insight shouldn’t fall into a void, but rather should be acted upon quickly. It’s not a trend but rather a clear signal. The content and SEO teams can now focus on creating content around “eco-friendly shopping” and other relevant sustainability topics, while communications teams can align messaging around the same topic. In turn, seamless collaboration and alignment take place.

Moving Beyond Keywords To Conversations

Traditional SEO homed in on what people typed into the search bar. Zero-party data reveals what people mean when they’re searching for a business, product, or service. Algorithms increasingly reward intent satisfaction when evaluating content. When your content addresses and is built on declared motivations, like why someone is looking for your specific solution, you’re aligned with the future of search.

How To Turn Customer Data Into Search Strategy

The issue isn’t that CMOs aren’t collecting data; it’s the struggle with turning it into action that drives meaningful change.

An intent-based SEO strategy has three phases: capture, interpret, and activate.

Phase 1: Capture

Customers aren’t going to hand over information if they don’t see a clear value in doing so. To encourage this, marketers must highlight a mutual benefit in the information exchange. A few methods include:

  • Gated research studies.
  • Short post-purchase surveys.
  • Interactive quizzes or calculators.
  • Preference centers so customers only receive communication around specified topics that matter most to them.
  • Incentives such as coupons and exclusive promotions for newsletter subscribers.

Each of these information exchanges becomes a declared-intent breadcrumb. Users have granted your business permission to act on their feedback, and these declared signals are far more actionable than cookie trails alone.

Phase 2: Interpret

Collecting information from myriad channels can make it difficult for marketers to determine where to focus their attention first. To dissect and pull out the insights that matter most from structured and unstructured feedback, CMOs should invest in qualitative analysis tools. Text analytics tools, for example, make it easy for CMOs and CX teams alike to mine for common themes.

Customer Data Platforms (CDPs) can also help you create audiences and segments to deliver more personalized content that resonates with customers. This might look like a retail marketing manager only receiving newsletters, ebooks, or blogs related to the retail industry and its trends.

These types of thematic content pillars can help inform supporting search queries, schema markup, content priorities, and more.
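As a rough sketch of that kind of theme mining (the theme lexicon and survey responses below are invented for illustration; real text-analytics tools are far more sophisticated), you could tally how often declared motivations appear in open-text feedback:

```python
from collections import Counter

# Hypothetical theme lexicon: each theme maps to word stems customers use.
THEMES = {
    "sustainability": ["eco", "sustainable", "recycl", "green"],
    "price": ["cheap", "price", "discount", "afford"],
    "convenience": ["fast", "easy", "quick", "convenient"],
}

def tally_themes(responses):
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, stems in THEMES.items():
            if any(stem in lowered for stem in stems):
                counts[theme] += 1
    return counts

# Invented survey responses.
responses = [
    "I bought it because the packaging is recyclable",
    "Great discount, very affordable",
    "Love how eco-friendly and sustainable the brand is",
]
print(tally_themes(responses).most_common())
```

The output of a pass like this (here, "sustainability" leading) is exactly the kind of signal that can seed a thematic content pillar.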

Phase 3: Activate

In this phase, you’ll set your plans into action. First, connect declared intent to keyword intent. For example, if customers talk about “security peace of mind,” this gives you clear insight into what they’re interested in learning more about and how your company can help. You could create content that explicitly speaks to “how we secure your personal data.”

On the other hand, if they’re talking about “easy to implement,” it may be beneficial for you to provide explainer-type content, such as a short video or an FAQ page (with FAQ schema), to address “how to integrate [product name]” searches.

Zero-party data moves SEO from a guessing game to an action engine, producing content that satisfies not just search algorithms, but also the people behind the searches.

Leadership Enablement: Aligning Teams, Culture, And Technology

To build an insight-to-action culture, CMOs should encourage teams to share qualitative learnings regularly, whether through a cadence of weekly meetings, via email, or a combination of the two. Customer experience teams should make Voice of Customer insights loud and clear to help inform SEO and content briefs.

It’s also important to highlight and reward cross-functional wins to showcase how working together helps drive growth. This might look like an SEO strategy that was informed by CX feedback or a case study that solves a pressing challenge clients typically face, informed by online reputation feedback.

Operationalize The Feedback Loop

CMOs can install a regular “intent feedback loop” to operationalize the data their company receives and act on it. This might look like:

  • Gather declared data (surveys, chatbot transcripts, online reviews, call center logs).
  • Identify what motivates consumers most (customers often talk about time savings, value for money, trust issues, emotions).
  • Update content briefs and keyword maps (primary and secondary keywords, content requirements, search intent to ensure you’re staying up to speed).
  • Measure whether your content is landing with your intended audience on an emotional and intellectual level. Engagement, recall, and action are key determinants of content success, not just how it ranks.

This type of feedback framework helps organizations embed customers’ preferences and desires directly into the content published, helping your business create the content that actually connects with your target audience.

The Metrics To Add

Measuring what matters most is integral to assessing the impact of zero-party data analysis efforts. Alongside other SEO metrics, the following can help you gain a holistic view of your SEO performance:

Resonance Metrics

Engagement quality is a true testament to attention. Volume, while great to have, is somewhat meaningless if it comes with an abundance of unqualified leads. Instead, look at:

  • Average engagement time: How long people stick around to view your content.
  • Return visits: People who come back to consume more of your content.
  • Scroll depth: How far visitors scroll down the page; readers who reach the end are telling you the content held their interest.

Relevance Metrics

Marketers must track growth in high-intent and branded queries, as these are most often the terms that someone who is on the verge of buying will use when searching for your business. If you’re showing up for phrases customers typically use at the decision-making stage, such as “State Farm vs. Geico car insurance,” this indicates deeper resonance.

Relationship Metrics

Loyalty metrics, while not something SEOs typically track, can correlate with how well your SEO program is working. Reframing SEO performance as a reflection of customer understanding helps CMOs dig a layer deeper, past tactics alone, to the deeper-rooted customer emotions that could be preventing your business from scaling. Look at:

  • Zero-party response rate: The percentage of users who are willing to share their personal information and experiences.
  • Repeat engagement: Consumers who continue to engage with your business and see value in doing so.
  • Customer lifetime value: How valuable a customer is to your business over time (how much they purchase, and how quickly they churn).
  • Retention rate: The share of hard-won customers who continue to do business with you.

The Future Belongs To Human-Declared Intent

We may be in the age of AI, but the future is human. Yes, AI can generate a keyword-optimized blog in a matter of seconds, but human touch is where the real value is. And human-informed data will be your business’s ultimate differentiator.

Zero- and first-party data reveal pertinent insights that elevate organizations when this data is acted upon. It unlocks insights into why people search, not just what they search for. It also uncovers where in the sales journey customers get stuck and what blocks them from purchasing.

Moving forward, to fuel your SEO efforts:

  • Ask customers what matters most to them.
  • Listen to what they have to say.
  • Create content that addresses those asks.
  • Optimize it for human needs, not just engagement and clicks.
  • Measure customer experience metrics, not just SEO.

When marketing leaders take consumer feedback to heart, they bridge the gap between traffic and trust, building stronger relationships that lead to more purchases, repeat customers, and improved brand experiences.



Featured Image: Anton Vierietin/Shutterstock

Google Begins Rolling Out The March 2026 Spam Update via @sejournal, @MattGSouthern

Google started rolling out the March 2026 spam update today, according to the Google Search Status Dashboard.

The update is global and in all languages, with a rollout that may take a few days.

What’s New

The Search Status Dashboard listed the update as an incident affecting ranking at 12:00 PM PDT on March 24, with the release note posted at 12:18 PM PDT.

Google’s description reads:

“Released the March 2026 spam update, which applies globally and to all languages. The rollout may take a few days to complete.”

Google hasn’t published a blog post or announced new spam policies with this rollout. So far, it seems to be a standard spam update, not a broader policy change like the March 2024 update, which added categories such as scaled content abuse, expired domain abuse, and site reputation abuse.

How Spam Updates Work

Google describes spam updates as improvements to spam-prevention systems like SpamBrain, targeting sites violating spam policies, which can lead to lower rankings or removal from search results.

Spam updates differ from core updates, which re-assess content quality. Spam updates enforce policies against violations like cloaking, link spam, and content abuse.

Sites affected by a spam update can recover, but recovery takes time. Google states that improvements may only appear once its automated systems confirm, over a period of months, that a site complies with its spam policies.

Context

This is Google’s first spam update since the August 2025 spam update, which ran from August 26 to September 22 and took nearly 27 days to complete. That update was characterized by SISTRIX as penalty-only, with affected spammy domains losing visibility but no broad ranking changes.

Google’s estimated timeline of “a few days” for the March 2026 update suggests a shorter rollout than recent spam updates, though timelines can stretch. The December 2024 spam update completed in seven days. The August 2025 update took nearly four weeks.

The March 2026 spam update comes about three weeks after the February Discover update finished rolling out.

Why This Matters

Ranking changes during spam update rollouts can happen quickly. Monitoring Search Console data over the next few days will help distinguish spam-related drops from normal fluctuation.

Google hasn’t announced new spam policy categories with this update, so the existing spam policies remain the relevant framework for evaluating any impact.

Looking Ahead

Google will update the Search Status Dashboard when the rollout is complete. Search Engine Journal will report on the completion and any observed effects.


Featured Image: Hurunaga Yuuka/Shutterstock

The Science Of How AI Picks Its Sources via @sejournal, @Kevin_Indig


In “The science of how AI pays attention,” I analyzed 1.2 million ChatGPT responses to understand exactly how AI reads a page. This is Part 2.

Where Part 1 told you where on a page AI looks, this one tells you which pages AI routinely considers.

The data clarifies:

  • Why ~30 domains own 67% of citations in any topic.
  • The page structure that earns citations across 50+ distinct queries vs. the one that gets cited once.
  • Whether the ski ramp from Part 1 is actually steeper or flatter in your vertical.
Image Credit: Kevin Indig

1. ~30 Domains Own 67% Of AI Citations Per Topic

Classic search is a winner-takes-all game. The top result gets disproportionately more clicks than the second. Is that also true for ChatGPT answers? Is the distribution of cited domains democratic or totalitarian?

Approach:

  1. Compute the citation share per domain per vertical.
  2. Calculate the cumulative share captured by the top 10% of domains.
  3. Dataset: 21,482 ChatGPT citation rows, 670 unique domains, 2,344 unique URLs, 127 unique prompts.

Results: The top 10 domains take 46% of all citations in a topic. The top 30 take 67%.
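As a toy illustration of steps 1 and 2 (the domains and counts below are invented, not from the study’s dataset), the concentration computation might look like:

```python
from collections import Counter

# Hypothetical citation log for one vertical: one entry per citation.
citations = (
    ["domain-a.com"] * 40 + ["domain-b.com"] * 25 + ["domain-c.com"] * 15
    + ["domain-d.com"] * 10 + [f"tail-{i}.com" for i in range(10)]
)

def top_k_share(citation_domains, k):
    """Cumulative citation share captured by the k most-cited domains."""
    counts = Counter(citation_domains)
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(citation_domains)

# Share of the top 3 domains in this toy vertical.
print(f"{top_k_share(citations, 3):.0%}")
```

Running the same function with a larger `k`, or with `k` set to 10% of the unique domains, reproduces the top-10% cumulative-share metric used throughout this analysis.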

Image Credit: Kevin Indig

AI citation is slightly less concentrated than traditional organic search, but still extreme:

  • Effectively, there are ~30 seats (domains) at the citation table for any given topic. Everything else is nearly invisible.
  • Example: storylane.io appears as a cited source across 102 distinct prompts (unique questions asked of ChatGPT), reprise.com across 98. Even though reprise.com has more total citations (1,369 vs. storylane.io’s 968), storylane.io shows up in answers to a broader range of different questions.


What The Industry Patterns Showed

The findings above are from product comparison verticals (SaaS, financial advisors), but the pattern is weaker in healthcare and open web topics, where no single domain dominates, and stronger in the education sector.

Image Credit: Kevin Indig

Education is winner-take-most: the top 10% of domains capture 59.5% of all citations.

  • If you are not already in the top 5-10 domains in education, achieving citation breadth is exceptionally hard.
  • tefl.org alone answers 102 unique prompts and holds 18.75% of all Education citations.

Crypto is the second most concentrated at 43.0% for the top 10%.

  • A small set of technical documentation and comparison sites (alchemy.com, quicknode.com, chainstack.com) dominate Solana RPC and infrastructure queries.
  • The technical nature of Solana queries means few credible sources exist; once a domain earns trust in this niche, it captures a large share.

Finance sits at 29.4% for top-10%.

  • Concentration is query-type specific: Financial advisor locator pages (forfiduciary.com at 139 unique prompts, smartasset.com at 168 unique prompts) dominate city-level advisor queries.
  • But the long tail of financial product queries keeps total concentration moderate.

Healthcare is the least concentrated at 13.0% for the top 10%.

  • No single domain dominates. New entrants have a realistic path to citation reach.
  • The citation surface is spread across hundreds of domains, each covering a small slice of telehealth, HIPAA compliance, and healthcare app queries.

CRM/SaaS and HR Tech are similarly diffuse (16.1% and 14.4% top-10%).

  • These are multi-product software categories where dozens of comparison sites, review platforms, and vendor pages split citations.
  • Monday.com leads CRM with only 2.88% of all citations (37 unique prompts), a genuinely open competitive field.

Top Takeaways

1. Breadth of topic coverage matters more than domain authority. A single well-structured comparison page (learn.g2.com: 65 unique prompts, 495 citations) can still outperform the entire domain portfolio of a well-known brand. The goal is not to rank for one query, but to answer a cluster.

2. Concentration reflects category maturity. Fragmentation is an opportunity. Education and Crypto have narrow, well-defined query spaces where a few authoritative sources have locked in trust. Healthcare and CRM are broad, fragmented categories where no single domain dominates. That fragmentation is your opening.

3. Citation reach (the number of distinct prompts a domain answers) is a more useful strategic metric than raw citation count. In low-concentration verticals like Healthcare and CRM, a focused 30-50 page strategy can realistically compete for a seat at the table. In high-concentration verticals like Education and Crypto, the path is narrower: become the definitive resource on a specific sub-topic or accept that you’re fighting for scraps.

2. The Citation Advantage Starts At 10,000 Words

In classic search, word count and page length are somewhat indicative of rankings, as long as the quality is high. I wondered, again: is that also true for showing up in ChatGPT answers?

Approach

  1. Measure raw text length of every cited page.
  2. Group length into seven buckets.
  3. For each bucket, calculate average citations per page.
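A minimal sketch of steps 2 and 3 (the bucket boundaries mirror the article’s seven tiers; the sample pages are invented):

```python
from statistics import mean

# Word-count buckets mirroring the article's seven tiers.
BUCKETS = [(0, 500), (500, 1000), (1000, 2000), (2000, 5000),
           (5000, 10000), (10000, 20000), (20000, float("inf"))]

def citations_by_length(pages):
    """pages: list of (word_count, citation_count) per cited page.
    Returns average citations per page for each length bucket."""
    result = {}
    for lo, hi in BUCKETS:
        hits = [c for words, c in pages if lo <= words < hi]
        result[(lo, hi)] = mean(hits) if hits else 0.0
    return result

# Invented sample pages: (word_count, citations).
pages = [(300, 2), (450, 3), (7500, 9), (8000, 12), (25000, 10)]
avgs = citations_by_length(pages)
print(avgs[(0, 500)], avgs[(5000, 10000)])
```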

Results: More words do indeed correlate with more citations, but there’s a ceiling.

Image Credit: Kevin Indig

The 5,000-to-10,000 jump is the largest single step – nearly 2x. Pages above 20,000 words average 10.18 citations each vs. 2.39 for pages under 500 words.

The length effect is vertical-specific: Finance inverts it entirely. High-cited Finance pages average 1,783 words vs. 2,084 for low-cited pages – a 0.86x ratio. Authoritative compact sources, rate tables, and regulatory summaries outperform comprehensive guides there. The 10,000-word rule holds for SaaS and editorial content.

Image Credit: Kevin Indig

Finance peaks at 5,000-10,000 words (10.9 citations/page), then drops sharply at 10,000-20,000 (4.92 citations/page).

  • Finance also shows the steepest absolute gain: Pages under 500 words earn only 3.84 citations/page while 5,000-10,000 pages earn 10.9, which is a 2.8x multiplier from length optimization alone.
  • Very long Finance pages may dilute the citation-triggering content with redundant detail.

Education shows the clearest length-wins-everything pattern.

  • Citations per page climb steadily from 1.85 (under 500 words) to 6.05 (20K+ words) with no drop-off.

Crypto and Product Analytics behave similarly to Education.

  • Length consistently pays off, plateauing around the 10,000-20,000 tier (5.34 and 4.01, respectively). Both are technical verticals where comprehensiveness signals authority.

SaaS shows the weakest length effect: Citations per page range from 1.06 (1,000-2,000 words) to 2.77 (20,000+ words).

  • Even the longest CRM pages only get 2.77 citations per page on average.
  • In this vertical, length alone does not determine citations. Format, structure, and domain authority appear more important.

Healthcare shows a moderate length effect (1.74 to 3.92 citations/page).

  • But with one anomaly: 5,000-10,000 words (2.80) underperforms vs. 2,000-5,000 words (3.36).
  • Very long Healthcare pages may include too much clinical detail that dilutes citation-triggering content.

Top Takeaways

1. Universal finding: Very short pages (under 1,000 words) underperform in every vertical. The underperformance of thin content is consistent, but the reward for long content is vertical-specific.

2. Target your length based on industry, content type, and query intent, not a universal word count. For Finance verticals: Aim for 5,000-10,000 words. Education, Crypto, and Product Analytics: Go as long as possible. CRM/SaaS: Prioritize structure over word count.

3. 58% Of Cited URLs Are Cited Once

When we look at the citations within a topic, we often see many pages on a domain getting cited. So, how many citations can a single page get?

Approach

  1. Count the number of unique prompts for each page.
  2. Classify the number of citations into tiers: 1, 2-5, 6-10, 11+.
  3. Inspect the top URLs per vertical for structural patterns.

Results: On average, 67% of cited URLs appear in only one prompt.
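A sketch of that classification, assuming a raw citation log of (prompt, URL) pairs (the log below is invented):

```python
from collections import Counter

def breadth_tiers(prompt_url_pairs):
    """Classify each URL by the number of distinct prompts that cite it."""
    prompts_per_url = Counter()
    for prompt, url in set(prompt_url_pairs):  # dedupe repeat citations
        prompts_per_url[url] += 1
    tiers = Counter()
    for url, n in prompts_per_url.items():
        if n == 1:
            tiers["1"] += 1
        elif n <= 5:
            tiers["2-5"] += 1
        elif n <= 10:
            tiers["6-10"] += 1
        else:
            tiers["11+"] += 1
    return tiers

# Invented citation log: (prompt_id, cited_url).
log = [(1, "a.com/guide"), (2, "a.com/guide"), (3, "a.com/guide"),
       (1, "b.com/post"), (2, "c.com/page")]
print(breadth_tiers(log))
```

The "1" tier in the output is the one-hit pool; the share of URLs landing there is the one-hit rate discussed below.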

Think of it as a footprint rather than a trophy count. Raw citation count tells you how popular a page is. Citation breadth tells you how strategically valuable it is. An evergreen page in AI citation is not one that gets cited a lot; it is one that keeps appearing across diverse queries.

Image Credit: Kevin Indig

The top 4.8% of URLs (cited 10+) are all category-level comparisons or guides answering “what is it,” “who uses it,” “how to choose,” and “pricing” in a single URL.

The citation pool isn’t a meritocracy of single best answers, and the degree of concentration varies sharply by vertical.

  • CRM/SaaS has the highest one-hit rate at 84.7%.
  • Finance produces the highest-reach evergreen pages: forfiduciary.com covers 119 unique prompts.
  • Crypto generates the most concentrated evergreen pages at 55.4% in the technical tier: chainstack.com/best-solana-rpc-providers-in-2026 (63 prompts), alchemy.com/overviews/solana-rpc (62 prompts), and rpcfast.com/blog/rpc-node-providers (61 prompts). All three are comparison pages covering the Solana RPC provider landscape from slightly different angles.
  • Education evergreen pages follow a different logic: tefl.org, internationalteflacademy.com, and gooverseas.com get cited broadly because they answer TEFL-adjacent queries (cost, location, certification type) from a single resource. One URL serves many query angles.

Top Takeaways

1. Evergreen pages share consistent structural patterns: Category-level guide format (best X for 2025/2026), broad topic coverage within a single page (what is X, how to choose X, top X vendors, pricing), and explicit year anchoring in URL or title. Pages that answer a class of questions earn citation breadth.

2. The top 5 evergreen pages in every vertical are either comparison roundups, authoritative guides, or directory/listing pages. No thin single-topic page reaches the 11+ prompt tier in any vertical.

3. A single evergreen page covering 10+ query intents is worth more in AI citation reach than 10 single-intent pages. The ROI of comprehensive content is front-loaded: one well-built page compounds citation reach over time. The long tail exists, but the top 5% of pages capture a disproportionate share of ongoing citation activity.

4. The Ski Ramp Is Steeper In Some Verticals

The earlier analysis of how AI pays attention showed that 44.2% of ChatGPT citations come from the top 30% of a page. Does that trend hold across different verticals?

Approach: Re-run the same positional analysis across 7 verticals with 42,460 matched citations.

Results: The trend is real but varies by topic. One number holds everywhere: The bottom 10% of any page earns 2.4-4.4% of citations, roughly a quarter of what the peak band earns. The conclusion section is nearly invisible to AI, regardless of vertical.

Image Credit: Kevin Indig

What The Industry Patterns Showed

The true peak decile across all verticals is not the very opening. The 10-20% band is where AI reads hardest in every vertical. The first 10% is typically navigation, headlines, and intro fluff that AI skips.

  • Finance is the extreme case. 43.7% of citations land in the first 30% of the page. Finance pages front-load rate data, percentages, and key figures. AI grabs them and rarely reads past the halfway point.
  • Healthcare and HR Tech have the flattest ramps. Useful content is distributed more evenly across those pages.
  • Education peaks at the 30-40% decile rather than 10-20%, because educational content tends to bury the key answer slightly deeper after the intro.

Top Takeaways

1. Put your most citable claims and data in the first 30% of the page – no matter what industry you’re in. Summaries and conclusions rarely get cited.

2. For Finance brands: Front-load your thesis and statistics as much as possible.

What This Means For How You Build LLM Visibility

The domains that own citation share didn’t get there by writing better sentences. They built pages that hold true topical authority, addressing multiple queries in one place, and then repeated that authority across enough sub-topics to hold multiple seats at the table.

Getting cited across 30, 60, or 100 distinct prompts requires a targeted content architecture: pages built around query clusters and owning entire topics rather than individual keywords. Teams that keep the traditional “one keyword, one page” model will be structurally locked out of AI citation, even if their individual pages are beautifully written.

But as the data shows, there is no universal playbook. The tactics that work for a broad CRM platform could actively harm a Finance brand.

Methodology

We analyzed ~98,000 ChatGPT citation rows pulled from approximately 1.2 million ChatGPT responses from Gauge.

Because AI behaves differently depending on the topic, we isolated the data across 7 distinct, verified verticals to ensure the findings weren’t skewed by one specific industry.

Analyzed verticals:

  • B2B SaaS
  • Finance
  • Healthcare
  • Education
  • Crypto
  • HR Tech
  • Product Analytics

To reverse-engineer the citation selection, I ran the data through several layers of analysis:

  • Structural parsing: I measured the raw character length of every cited page and mapped heading hierarchies (H1s, H2s, H3s) to see how information architecture impacts visibility.
  • Positional mapping: I used Jaccard sliding-window similarity to pinpoint exactly where on the page the AI extracted its answers from, down to the specific decile.
  • Entity & Sentiment extraction: I ran the opening text of unique cited URLs through the Google Natural Language API to classify named entities (dates, prices, products) and used TextBlob to score sentiment, comparing the performance of corporate content against user-generated content (UGC).
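The positional mapping step can be approximated with a simple sliding-window Jaccard match: tokenize the page, slide a window across it, and record where the cited snippet overlaps most. This is a sketch of the general technique, not the study's actual implementation; the window and step sizes are illustrative.

```python
def jaccard(a, b):
    """Jaccard similarity between two token collections."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def citation_decile(page_text, cited_snippet, window=50, step=10):
    """Slide a token window across the page and return the decile (0-9)
    where the cited snippet matches best. Parameters are illustrative."""
    tokens = page_text.lower().split()
    cite = cited_snippet.lower().split()
    best_start, best_sim = 0, -1.0
    for start in range(0, max(1, len(tokens) - window + 1), step):
        sim = jaccard(tokens[start:start + window], cite)
        if sim > best_sim:
            best_sim, best_start = sim, start
    return min(9, int(10 * best_start / max(1, len(tokens))))
```

Mapping each matched citation to a decile this way is what lets you build the "where does AI read hardest" distribution across thousands of pages.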

Featured Image: Roman Samborskyi/Shutterstock; Paulo Bobita/Search Engine Journal

Research: “You Are An Expert” Prompts Can Damage Factual Accuracy via @sejournal, @martinibuster

“You are an expert” persona prompting can harm performance as much as it helps. A new study shows that persona prompting improves alignment with human expectations but can reduce factual accuracy on knowledge-heavy tasks, with effects varying by task type and model. The takeaway is that persona prompting works better on some kinds of tasks than it does in others.

Persona Prompting

Persona prompting is a common way to shape how large language models respond, especially in applications where tone and alignment with human expectations matter. It is widely used because it improves how outputs read and feel. Given how widespread persona prompting is, it may come as a surprise that its actual effect on performance remains unclear: prior research shows inconsistent results, leaving open whether the technique helps or harms.

The researchers concluded that persona prompting is neither broadly beneficial nor harmful, and that its efficacy depends on the type of task.

They found:

  • It improves alignment-related outputs such as tone, formatting, and safety behavior
  • Persona prompting degrades performance on tasks that rely on factual accuracy and reasoning

Based on this, the authors introduce a method called PRISM (Persona Routing via Intent-based Self-Modeling), which applies personas selectively, using intent-based routing instead of treating personas as a default setting. Their findings show that persona prompting works best as a conditional tool and clarify when it helps and when it should be avoided.

Managing Behavioral Signals

In section three of the paper, the researchers say that expert personas have “useful behavioral signals” but that naïve use of persona prompting damages as much as it helps. They say this raises the question of whether those benefits can be separated from the harms and applied only where they improve results.

Behavioral signals influence LLM output. These signals are the reason persona prompting works. They drive improvements in tone, structure, safety behavior, and how well responses match expectations. Without them, there would be no benefit to persona prompting.

Yet, in a seeming paradox, the paper shows that those same signals interfere with tasks that depend on factual accuracy and reasoning. That is why the paper treats them as something to manage, not maximize.

These signals include:

  • Stylistic adaptation and tone matching: Adopting a professional or creative voice.
  • Structured formatting: Providing step-by-step or technical layouts.
  • Format adherence: Helping the model follow complex structures, like professional emails or step-by-step STEM explanations.
  • Intent following: Focusing the model on the user’s underlying goal, especially in tasks like data extraction.
  • Safety refusal: Identifying and declining harmful requests more effectively by adopting a “Safety Monitor” role.

Persona Prompt Wins

The paper found that persona prompts were a win in five out of eight categories of tasks:

  1. Extraction: +0.65 score increase.
  2. STEM: +0.60 score increase.
  3. Reasoning: +0.40 score increase.
  4. Writing: Improved through better stylistic adaptation.
  5. Roleplaying a domain expert: Improved through better tone matching.

Persona prompting won in the above categories because they depend more on style and clarity than on whether an answer is factually correct. The researchers also found that the longer and more detailed the persona prompt, the stronger the alignment and safety behaviors become.

Persona Prompt Failures

Conversely, the expert persona consistently degraded performance in the remaining three (out of eight) categories, which rely on precise fact retrieval or strict logic rather than style and clarity. The performance drops because a detailed expert persona essentially “distracts” the model, activating an “instruction-following mode” that prioritizes tone and style.

Activating expert personas comes at the expense of “factual recall.” The model is so focused on trying to act like an expert that it forgets the information it learned during its initial training. That explains the drops in accuracy for facts and math.

Persona expert prompts performed worse in the following three categories:

  1. Math
  2. Coding
  3. Humanities (memorized factual knowledge)

The paper notes that on one of the knowledge benchmarks (MMLU), accuracy dropped from a 71.6% baseline to 68.0% even with the “minimum” persona, and fell further to 66.3% with the “long” persona.

They explained the safety improvements:

“More detailed persona descriptions provide richer alignment information, amplifying instruction-tuning behaviors proportionally.”

And showed why factual accuracy takes a hit:

“Persona Damages Pretraining Tasks
During pretraining, language models acquire capabilities such as factual knowledge memorization, classification, entity relationship recognition, and zero-shot reasoning. These abilities can be accessed without relying on instruction-tuning, and can be damaged by extra instruction-following context, such as expert persona prompts.”

Conclusions Reached

The researchers conclude that persona prompting consistently improves alignment-dependent tasks such as writing, roleplay, and safety behavior, while degrading performance on tasks that rely on pretraining-based knowledge, including math, coding, and general knowledge benchmarks.

They also found that a model’s sensitivity to personas scales with its training. Models that are more optimized to follow instructions are more “steerable,” which means they get the biggest boost in safety and tone, but they also suffer the largest drops in factual accuracy.

Takeaways

1. Be selective about using persona prompts:

  • Do not default to “You are an expert” prompts
  • Treat persona prompting as situational. Using it everywhere introduces hidden accuracy risks.

2. Persona prompting is effective for:

  • Writing quality
  • Tone
  • Formatting and organization
  • Readability

3. Tasks that don’t benefit from persona prompting and should instead use neutral prompting to preserve accuracy:

  • Fact-checking
  • Statistics
  • Technical explanations
  • Logic-heavy outputs
  • Research
  • SEO analysis

4. Remember these three findings:

  • Use persona prompting to generate content, then switch to a non-persona prompt (or a stricter mode) to verify facts.
  • Highly detailed “expert” prompts strengthen tone and clarity but reduce factual and knowledge accuracy.
  • “You are an expert” prompts may cause a model to prioritize sounding correct over actually being correct.

5. Match your prompts to the task:

  • Content creation: Persona helps
  • Analysis and validation: Persona hurts

The most effective approach is not one prompt, but a workflow that switches prompts depending on the task, similar to the researcher’s PRISM approach.
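That prompt-switching workflow can be sketched as a tiny router. The task categories and persona text below are illustrative placeholders based on the paper's findings (persona helps alignment-heavy tasks, hurts knowledge-heavy ones); this is a simplification of the idea behind PRISM, not a reimplementation of it.

```python
# Illustrative task buckets -- hypothetical labels, not the paper's taxonomy.
ALIGNMENT_TASKS = {"writing", "roleplay", "extraction", "stem_explanation"}
KNOWLEDGE_TASKS = {"math", "coding", "fact_recall", "research"}

def build_prompt(task_type, user_request,
                 persona="You are an expert technical editor."):
    """Prepend a persona only for alignment-heavy tasks; knowledge-heavy
    tasks get a neutral prompt to preserve factual recall."""
    if task_type in KNOWLEDGE_TASKS:
        return user_request  # neutral prompt: no persona overhead
    return f"{persona}\n\n{user_request}"
```

The same request can flow through both branches in one pipeline: draft with a persona, then verify facts with the neutral form.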

Read the research paper:
Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM

Featured Image by Shutterstock/ImageFlow

SEO 2.0: How Content Marketing Drives Visibility in AI Search via @sejournal, @hethr_campbell

The next evolution of SEO is unfolding right now: AI is changing how people discover brands & content.

Is your content cited by ChatGPT, Gemini, Copilot, & AI Overviews?

How do you become a trusted source for AI citations?

Can you intentionally influence AI search outputs?

Yes, you can.

In this on-demand webinar, you can gain a practical, content-first framework for improving visibility in AI-powered search.

How To Build The Content Signals AI Systems Actually Surface & Cite

This on-demand session breaks down how large language models retrieve, evaluate, and reference content, and walks through what that means for your upcoming SEO and content strategy.

You’ll walk away with a practical framework for building citation-worthy, AI-visible content that strengthens both traditional SERP rankings and AI recommendations.

You’ll Learn:

  • How to improve off-site mentions to boost AI mentions and citations.
  • Which content is citation-worthy, so you can build a powerful trust engine.
  • Exact traditional SEO advantages you should still consider.

Google Responds To Error That Causes Old Branding To Persist In SERPs via @sejournal, @martinibuster

Google’s John Mueller answered a question about Google rewriting title tags to show the old brand of a site that rebranded in 2015. Apparently everything was updated to the new brand name, but Google’s search results stubbornly persist in showing the old branding.

Old Brand Name Shown In Title Tags

The person asking the question on Bluesky related that a company updated their entire site with its new branding, but Google ignores it in favor of showing the old branding in the search results.

They posted:

“Hey @johnmu.com, curious about Site Name persistence. Treatwell (UK) is still showing as “Wahanda” in results – a rebrand that happened in 2015! Is there a specific “legacy” signal that might override current SiteName structured data for such a long period in one country only?”

Google’s Mueller was puzzled by the situation and didn’t have an answer as to why it was happening. Perhaps it’s one of those rare cases where a bug keeps a part of the index from updating. But he did suggest using the domain name as an alternate site name.

Mueller referred the person to one of Google’s developer pages, “What to do if your preferred site name isn’t selected.”

He responded:

“That’s a bit odd – I’ll pass it on to the team. FWIW what generally works in cases like this is to use the domain name as an alternate site name – developers.google.com/search/docs/… – but it would be nice if that weren’t needed.”

The site itself does not appear to contain on-page instances of the rogue branding. The old domain is correctly 301 redirecting to the new domain. However, there are some links in the footer that contain referral codes with the old branding on them, and the sitemap contains links to 404 pages that contain the old branding. Although those may not be the cause of the branding mismatch in the Google search results, it’s a good SEO practice to be tidy about what’s in your sitemaps and to remove outdated links.

These kinds of rare errors are interesting because they provide a sneak peek into a part of Google’s indexing that isn’t normally in view, like a crack in a wall. What insights do you derive from this anomalous situation?

Featured Image by Shutterstock/SsCreativeStudio

3 Strategies That Can Survive AI Search In 2026: What I Shared At SEJ Live via @sejournal, @theshelleywalsh

It’s been an eventful start to the year for AI search. AI is moving quickly, and there’s a lot of hype and panic. In reality, search is just doing what it has done for the last 30 years: constantly updating itself.

At Search Engine Journal, like most other publishers, we’ve experienced considerable drops in Google organic traffic. The last few years have been a challenging time for a business model built on publishing information and news.

Although this has come to a head over the last few months, we identified the changes and vulnerabilities several years ago and took action early, which puts us in a better position today.

Last week, I spoke at the first SEJ Live about where we are now in 2026, what is working, and what we should be leaving behind.

From the talk, I’m going to share the three foundational things I think you need to focus on right now in 2026: strategies you can apply that will help you as AI impacts our industry.

What You Need To Leave Behind

Before I talk about what you should be doing, let’s just make sure you have moved on from outdated modes of thinking that will hold you back.

Image by author, March 2026

If you’re still obsessively checking rankings on a daily basis, you’re rearranging the deckchairs on the Titanic.

Ranking is 2016; visibility is 2026.

The foundation of search has always been to know who your customer is, where they operate, and to use content to connect with them and encourage an action. That interaction always used to happen in the SERP, and that was our attention marketplace.

In 2026, our digitally competent audiences are now operating fluidly in a multimodal search journey before moving to their conclusion. All with an AI layer of visibility interwoven.

Even if you do get a number 1 ranking, it doesn’t mean you will get a click because the noise in a SERP can displace the visibility of a listing right off the first page.

Advanced Web Ranking found that when an AI Overview is expanded, the first organic result is pushed approximately 1,674 pixels down the page, effectively below the fold on most screens. And AI Overviews are just one layer. Between ads, carousels, map packs, and image results, a number one ranking can be virtually invisible.

I’ve seen client product SERPs shift dramatically in the last few years, to the point where we have given up chasing a vanity metric and put our efforts into connecting creatively with customers.

2026 is all about intent and action-based strategy.

Let’s do some actual marketing and find those users where they are and give them a reason to engage with you. And I think we are going to all be better marketers for it.

What You Need To Move Toward – Strategy That Can Survive AI

SEO technical excellence is fundamental to being discovered in LLMs. Far from SEO being dead, it has never been so important.

Alongside that, content is still the foundation of online visibility – without it, you have no visibility.

The following three strategies outlined are core factors that can offer stability through our transition to the new world of AI search.

Screenshot by author, March 2026

1. AI-Proof Content

What I mean here is content that will not be cannibalized/synthesized by AI.

The paradox of visibility in LLMs is that you need consensus for trust to get attention, but you also need quality and differentiation for inclusion. Brands that have already been investing in running experiments and collating data are one step ahead.

I spoke to Grant Simmons on IMHO, and he described this as “golden knowledge”:

  • Your data.
  • Your experience.
  • Your opinion.

In practice, content that can avoid being cannibalized by AI summaries and actually feed the summary looks like:

  • Video interviews and first-hand experience formats. These gain visibility across social, SERPs, and LLMs because they contain a human perspective that AI can’t generate from training data alone. Think webinars and IMHO episodes.
  • Original research and proprietary data, such as the State of SEO and AI reports.
  • Opinionated commentary and expert analysis, such as a roster of top contributors offering their lived experience.

Anyone can use an LLM to generate a summary of the query “What is SEO?”

But a brand and community offering access to the best minds in the industry, live shows, unique data reports, breaking news, and expert takes on why it matters and what you need to pay attention to is something else. Being the curator and hub of everything in the industry makes it a destination and a source feeding the LLMs.

Investing in this level of content strategy can elevate a brand to being channel agnostic and reduce your single point of failure from over-reliance on one channel. And that is what we aim to be at Search Engine Journal.

Screenshot by author, March 2026

2. Value-Based Clicks

Different reports cite differing numbers, but what is consistent is that LLMs are referring traffic.

According to Chartbeat data reported by the Press Gazette, ChatGPT drives 0.02% of referrals to publishers. The Conductor 2026 benchmarks report says that LLM referral traffic is 1.08% of website traffic across 10 industries.

Right now, it might feel like a fraction of what we grew accustomed to from Google, but don’t forget, 1% of trillions of searches is still a considerable market of opportunity.

To capitalize on this is to consider what we can offer to encourage the clicks from the LLM to our brand site. Ask yourself:

  • Why is someone clicking on a link in an LLM?
  • Why would someone want to read more than the AI summary?
  • Or, why would someone want to know more about my brand, product, or service?

Before carousels, featured snippets, and AI summaries, it was far easier to gain a click from ranking highly on a SERP. When you’re one of only 10 options, you’re going to get the test click that checks whether you are the page they are looking for.

Much as it was always a harder job to retain that click, if you have something of value that connects with the user, you can still get the click from a citation or card in LLMs or SERP AI summaries.

Featured snippets may have reduced click-through rate, but they didn’t kill it. Visibility layers can be opportunities, and SEOs worked hard to get #0 because it was a way to jump up the SERP to a top position.

What can drive a click in an AI search environment:

  • Depth the summary can’t contain: case studies, implementation detail, and nuance that give a reason to want more.
  • Credibility and trust: according to Amsive, branded queries with AI Overviews actually see an 18% CTR increase.
  • Actionable assets: resources where the intent cannot be satisfied by a summary.

If you can distinguish the difference between instant answer traffic and build content for the people who don’t want the summary or the quick answer, then your brand can become valuable to users.

Screenshot by author, March 2026

3. SERP Opportunities Resistant To AI

Despite the concern that AI is going to kill Google, the search engine is not going anywhere.

Where Google has the edge in the race against LLMs is years of understanding their user and understanding how to deliver answers to queries to satisfy the consumer. They have an established audience and technology infrastructure. And a LOT of data.

Regardless of the stampede towards LLMs and the AI hype cycle, there is still a lot of opportunity to be had from the search engine.

BrightEdge data says that just over half of queries have AIOs, and Conductor reports that just over one quarter of analyzed searches (out of 21.9 million unique Google searches) triggered an AIO.

This indicates that anywhere between half and three-quarters of SERPs do not have an AI Overview, which means there are a lot of searches where intent will be satisfied by clicking through to a page. Content that targets these queries and drives a specific action sidesteps the AIO problem entirely.

Think about what is resistant to LLMs:

  • News – breaking news that is happening too quickly for LLMs.
  • Branded – lean into trust and build a community that actively searches for you.
  • Downloads – my favorite conversion tool that has worked for years.

My belief is that AIO might take away traffic volume, but not the traffic of value.

Build Consensus With Your Website As A Hub

Finally, if there was one tip I would offer to everyone that could have the most impact, this would be “consensus.”

LLMs generate responses based on statistical patterns across their training and grounding data, so when a brand or message appears consistently across many sources, it is more likely to surface in AI answers. Ahrefs found that branded web mentions had the strongest correlation with appearing in AI conversations, stronger than any other factor tested. If you can maintain consistent messaging across multiple channels, you are in the best position to be featured.

Alongside this, a study from the University of Toronto found that LLMs prefer ‘earned media’ from trusted sources that can offer more authority than posting on your own site.

Posting and layering your content across channels such as Reddit, LinkedIn, YouTube, and relevant industry publications will help build the messaging associated with your brand and help with inclusion in LLMs.

Make your website into the hub that connects to all the channels online where you are active and contributing, and don’t be afraid to put some of your best content on other channels to get visibility.

The 3 Changes We Made At Search Engine Journal

The biggest mistake publishers made in Q1 wasn’t AI. It was treating AI as something happening to them instead of something they can navigate strategically.

At Search Engine Journal, we’ve made three specific changes in response:

  1. We shifted editorial toward experience-first formats with interviews, analysis, and original research.
  2. We moved from programmatic revenue to asset-based sponsorship.
  3. We made growing a direct audience our top metric priority, so that we own our own audience.

If you’re still using the same tactics you have been applying to SEO since 2020, then you need to reconsider what your audience wants, where they operate, and who your competitors are.

SEO in 2026 includes visibility in all discovery engines. To remain relevant, be sure you are part of the conversations.


Featured Image: Shelley Walsh/Search Engine Journal

The SEO Skills Gap: Why Technical Expertise Alone Won’t Cut It Anymore

The SEO industry has spent the last couple of decades perfecting the art of looking productive while delivering value some might describe as questionable.

Armed with an extensive suite of analytical tools, SEO is an incredibly data-rich and metric-rich industry. It was easy to generate reports that, on the surface at least, looked impressive to a C-suite eager for more of that “data-led decision making” everyone kept talking about.

These days, the C-suite is less interested in metrics like rankings, traffic, and sessions. They’re finally asking: “So what?”

It’s the same question that killed the “likes and followers” era of social media marketing. Eventually, boards stopped caring about follower counts and started demanding conversion rates, customer acquisition costs, and a measurable return on their investment.

Now it’s SEO’s turn for a reckoning. And answering that question requires a very different skill set from how many SEOs have been trained. Too many SEOs lack that wider business awareness and marketing aptitude to understand how they fit into the bigger picture.

In short, we’re faced with an SEO skills gap which, if left unaddressed, risks SEO teams and agencies falling out of step with the expectations of senior leadership and clients.

Rankings and traffic are still important, don’t get me wrong. But they’re not business outcomes; they’re contributory factors. Yet SEOs continue to cross their fingers in the hope that growth in these metrics will magically translate into sales or some other form of measurable business value. Who measures that value and how it comes about is usually someone else’s problem.

Sales and marketing can fret over the wider strategy. If the vanity metrics continue to show growth, the SEO team sits back, content they’ve done their bit.

Except, with zero-click search on the rise as customers turn increasingly to AI tools, many organizations are seeing their search traffic trending down. That focus on volume over strategy is no longer working.

Connecting The Dots To Business Outcomes

I’ve been watching this shift play out in real time. Over the past few years, I’ve noticed clients focus less on “Can you improve our rankings,” and more on “Can you prove how this contributes to our business growth.”

But as much as I’d like to trust my gut, personal experience hardly qualifies as unequivocal evidence. Unfortunately, I lack the resources to conduct a comprehensive five-year longitudinal analysis to see how employer/client expectations might have changed. So, I conducted a quick straw poll of my network instead.

It’s a small data sample, so apply the appropriate pinch of salt. I simply wanted to get a sense of whether what I’m seeing holds true beyond my business.

It seems it does.

I asked respondents how confident they were in their SEO team’s ability to explain SEO’s contribution to business outcomes like customer acquisition cost (CAC), lifetime value (LTV), and pipeline. Scored on a scale of 1 to 10, the overall average is a smidge over 6.7. Not terrible, but not great either.

But in an environment where budgets are shrinking, a score of just “okay” when it comes to demonstrating business value is potentially fatal.

Simply saying, “Trust us, it helps,” will never survive a CFO review.

SEO’s New Critical Skills

I also asked respondents which skills they consider to be most critical when hiring future SEOs. Unsurprisingly, the top result was:

1. Technical SEO (83%)

Of course, it is. You can’t tune a car without knowing your way around an engine. So no; crawling, indexing, load times, schema … none of it is going away.

But that near-ubiquitousness also means that technical SEO is the price of admission. It’s table stakes. It’s the bare minimum requirement. Being great with technical SEO will get you in the door, but it won’t keep you in the room.

What’s more telling is how many respondents selected critical skills that most SEO teams I encounter still treat as “someone else’s job.”

2. Content strategy and creation (61%)

3. Business acumen – CAC, LTV, revenue forecasting (50%)

4. Communication and stakeholder management (39%)

While the market still needs technicians, it’s increasingly hiring commercial operators. Knowing how to do something is only useful when you can also clearly articulate why.

Meanwhile, the skills that SEOs would normally consider part of their job description languished nearer the bottom of the results.

=5. Data analytics and reporting (33%)

=5. AI/machine‑learning and automation (33%)

That’s not to say SEOs don’t need to worry about these skills. It’s just that they’re less likely to sway an employer or client’s hiring decisions. Like vanity metrics, they’re simply the means to an end. An aptitude for data analytics isn’t a replacement for business acumen, but it helps inform those strategic decisions. AI and automation are useful tools, but they’re no replacement for human-led content creation.

Today, what separates high-performing teams from the rest isn’t their aptitude with technical SEO or their skill with data, but whether they can connect execution to outcomes and defend it in the language of business.

Marketing Fundamentals Matter Now More Than Ever

As SEO evolved into its own discipline, it apparently forgot that search visibility is just one component of a much larger strategic puzzle.

Most SEO teams operate as if their job is to “optimize websites.” It’s not. Their job is to help businesses grow profitably. And you can’t do that without understanding the fundamental building blocks of marketing strategy that have been hammered into every marketing graduate for over 60 years.

The four Ps of Marketing: Product, Price, Place, and Promotion.

Product: Do You Even Know What You’re Selling?

When brothers Michael and Marc Grondahl launched Planet Fitness in 1992, their strategy struck many as completely irrational. They set out to actively repel the industry’s most valuable customers.

The reason was actually quite simple. The brothers wanted to go after the 80-85% of people who didn’t belong to a gym. They realized that a gym full of well-muscled gym junkies lifting heavy weights and posing in front of mirrors is intimidating for casual users.

This insight completely shaped the gym’s launch strategy. Remove heavy weights. Ban string tank tops. No posing mirrors. And because casual users don’t overuse the facilities, gym memberships could be more affordable.

Every decision reinforced the same positioning: This is a judgment-free zone for normal people, not a stage for bodybuilders.

Most SEO teams create content without spending sufficient time understanding product positioning or brand messaging. Under pressure to show results quickly, they jump straight to execution, following the usual methodologies and repeatable processes to target the most obvious industry keywords.

And here’s the problem: while you can use SEO tools or AI to generate comprehensive, prioritized keyword lists, they can’t tell you who you should be selling to or how to position the product against competitors. That requires human insight, commercial understanding, and strategic thinking, starting with questions like:

  • What problem does this product solve?
  • Who is it for (and who is it deliberately not for)?
  • What differentiates it from the available alternatives?
  • What’s the positioning strategy: premium, value, specialist, or generalist?

Price: Understanding Value, Not Just Cost

Pricing isn’t just a number. It’s a strategic signal about quality and positioning to your target market.

For example, the Van Westendorp Price Sensitivity Meter, introduced in 1976 by Dutch economist Peter van Westendorp, helps businesses determine the price range customers will find most acceptable. It does this by asking four questions:

  • At what price would the product be too cheap to trust?
  • At what price is it a bargain?
  • At what price is it getting expensive but still acceptable?
  • At what price is it too expensive to consider?

This methodology is particularly useful when launching a new product that doesn’t (yet) have any obvious competitors. It gauges how much value consumers place on the innovation.
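The Van Westendorp analysis is simple enough to script. The sketch below (function and field names are my own, and the crossing-point detection is a coarse discrete approximation rather than a proper curve intersection) finds the acceptable price range between the Point of Marginal Cheapness, where the “too cheap” and “expensive” curves cross, and the Point of Marginal Expensiveness, where “too expensive” crosses “bargain”:

```python
def acceptable_price_range(responses):
    """Crude Van Westendorp range finder.

    responses: list of dicts, one per survey respondent, with keys
    too_cheap, bargain, expensive, too_expensive (prices in any currency).
    Returns (PMC, PME): the lower and upper bounds of acceptable pricing.
    """
    n = len(responses)
    # Candidate prices: every threshold anyone mentioned, in order.
    prices = sorted({p for r in responses for p in r.values()})

    def frac(key, p, at_least):
        # Fraction of respondents whose stated threshold is >= p (or <= p).
        if at_least:
            return sum(r[key] >= p for r in responses) / n
        return sum(r[key] <= p for r in responses) / n

    # PMC: first price where "expensive" (cumulative up) overtakes
    # "too cheap" (cumulative down).
    pmc = next(p for p in prices
               if frac("expensive", p, False) >= frac("too_cheap", p, True))
    # PME: first price where "too expensive" overtakes "bargain".
    pme = next(p for p in prices
               if frac("too_expensive", p, False) >= frac("bargain", p, True))
    return pmc, pme
```

With even a handful of responses this gives a defensible window to test messaging against, which is the point: the SEO angle is knowing whether you’re writing for the bargain end or the premium end of that window.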

A pricing strategy can fundamentally change who to target and what messaging to use. Yet SEOs don’t always consider a client’s pricing strategy when deciding on an approach.

If the product is positioned as a premium expense, it makes no sense to chase high-volume keywords likely to attract price-sensitive customers. You’re bringing in people who won’t convert because they’re looking for the cheapest option, not the best option.

Place: Digital Shelves And Strategic Positioning

Place focuses on making the product available to customers in the right location and at the right time. In retail, this science is well-established.

According to recent NielsenIQ research, shoppers typically make in-store purchasing decisions in under six seconds. Hence, best-selling items are placed at eye level while less profitable products are relegated to higher or lower shelves.

Online, this decision window widens, as 44% of shoppers take at least three minutes to find a product. But while a website doesn’t have shelves, the principles are otherwise identical. By the time someone is ready to buy, they’re far more likely to default to a brand they’re already familiar with.

In search results, you’re effectively competing for digital eye level: a top three ranking, a featured snippet, an AI overview citation.

But placement extends far beyond search rankings. Can your content be cited by AI tools? Are your conversion paths obvious? Do you appear in comparison articles? Are you positioned alongside competitors in ways that favor your value proposition?

Effective placement isn’t just about identifying the channels where the business wants to be visible. It’s also about developing an interconnected content ecosystem. Just as supermarkets place complementary products together, your content should create logical pathways that guide customers forward.

Promotion: Where SEO Forgets It’s Supposed To Persuade

While Place is about getting your content and messaging in front of the right people, Promotion is about influencing what happens next. Promotion is the persuasion part.

Imagine you’re the CMO for a fictional project management tool called … oh, I don’t know … Taskaroo. (I’m no branding expert.) Someone researching project management tools would likely want to compare Taskaroo against the obvious alternatives: Asana, Monday.com, and Basecamp.

Comparison pages are a popular SEO tactic because they target valuable keywords at a pivotal stage of the research journey. But a landing page titled “Asana vs. Taskaroo for agencies” isn’t just informational; it’s promotional. The content on that page is your opportunity to shape how potential customers evaluate their options, framed to favor your own value propositions, of course, in the hope that more people will put Taskaroo into active consideration.

That’s how promotional content should work: meeting people wherever they are in the customer journey and providing the ideal information and messaging to move them forward.

The Friction That Kills Conversion

Promotion is where I see most SEO strategies fall apart. Not because SEOs don’t create content, but because they’ve forgotten that promotion isn’t the same as visibility.

When SEOs don’t think in terms of content ecosystems, mapped to the customer journey, they create unnecessary friction at exactly the moment someone might be ready to move forward.

For example, an ecommerce site publishes an article about running shoes. It’s a handy primer for anyone who’s just getting interested in running, with brief overviews of all the different types: trail running shoes, track shoes, road running shoes. It’s well-written, ranks nicely, and targets someone at the top of the funnel.

But once the reader starts wondering whether they should get a pair of trail running shoes, there’s nowhere for them to go. No suggested further reading on trail running to develop the reader’s interest; no links to guides on what to look for in trail running shoes; no connection to product recommendations. In short, there’s no next step for someone entering the consideration phase of the journey.

If there is a link, it’s probably a CTA pointing to the product page in the hope of boosting that page’s rankings. But how likely is it that someone will leap from awareness to a costly conversion in a single bound after reading a hundred heavily optimized words?

The reader has hit friction. Any further research will mean leaving your site, searching again, and potentially landing on a competitor with a better understanding of their needs. Your SEO team may have done the hard work of attracting the right audience and exciting their interest, only to abandon them at the exact moment they’re ready to go deeper.

This is why content marketing strategy and business acumen are now considered essential SEO skills. While SEO is mostly about building rankings and attracting traffic, content marketing is about nurturing and directing that traffic towards genuine, measurable business outcomes.

And that requires a comprehensive ecosystem of interlinked content spanning the entire journey from initial awareness to conversion and beyond, addressing as many relevant questions, objections, and barriers to purchase as possible along the way.

Flipping The Script On SEO

At the heart of the SEO skills gap sits a fundamental misunderstanding:

The purpose of your content isn’t to boost your SEO. The purpose of SEO is to boost your content.

SEOs use content to rank. Marketers create content to convert. If you can tell which assets were created for SEO and which were created for marketing, you have a problem.

When an SEO creates content purely to rank for a keyword, they’re not thinking about what the customer ultimately hopes to achieve. They’re not thinking about the journey and what happens next. They’re not anticipating what questions might arise. They’re not proactively addressing barriers and concerns that might prevent a purchase decision.

By understanding the four Ps, SEO’s role becomes much clearer. Forget chasing volume with vanity metrics. Truly effective SEO is about building experiences tailored to the customer journey, removing friction at every touchpoint, so that the next step is always obvious and effortless.

The companies that understand this don’t just rank. They convert.

Stop hiring “SEO Specialists” and start hiring growth marketers with SEO expertise who understand how their work contributes to customer acquisition efficiency, pipeline growth, and profitability.

Featured Image: Na_Studio/Shutterstock

5 GEO Strategies To Make AI Search Engines Recommend Your Brand In 2026

This post was sponsored by Geoptie. The opinions expressed in this article are the sponsor’s own. 

The way people search is changing faster than most marketers realize. ChatGPT alone now has over 900 million weekly active users. Google AI Overviews appear in one out of every four search results.

Every one of those AI-generated answers is a potential citation for your brand.

This isn’t a future trend. It’s happening right now. And if your brand isn’t showing up in those AI-generated answers, you’re invisible to a rapidly growing audience, even if you rank #1 on Google.

That’s where Generative Engine Optimization (GEO) comes in: the practice of optimizing your online presence so that AI engines cite, reference, and recommend your brand when users ask questions in your space.

1. Start By Measuring Your AI Visibility

Before changing a single word on your website, you need to know where you stand. Which AI platforms mention your brand? For which queries? How often are your competitors getting cited instead of you?

You can’t optimize what you don’t measure.

How To Measure AI Visibility

Most marketers skip this step because it feels unfamiliar. But the process is straightforward.

  1. List 10–15 questions your ideal customer would ask an AI engine, things like “best [your category] for [use case]” or “how to solve [problem you address].”
  2. Run each query in ChatGPT, Perplexity, and Gemini.
  3. Note whether your brand is mentioned, which competitors show up instead, and whether sources are cited.

Repeat monthly, because AI-generated answers shift as models update and new content gets indexed. Doing this manually across multiple platforms gets tedious fast, which is why dedicated GEO platforms exist to automate the tracking and monitor changes over time.
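The tallying in step 3 is easy to script once you’ve saved each engine’s answers as plain text. Here’s a minimal sketch (the function name, data shape, and engine labels are all hypothetical) that counts word-boundary brand mentions per engine so you can compare yourself against competitors month over month:

```python
import re

def mention_report(answers, brands):
    """Count brand mentions in saved AI answers.

    answers: {query: {engine_name: answer_text}}
    brands:  list of brand names to track (yours plus competitors).
    Returns {brand: {engine_name: total_mention_count}}.
    """
    report = {brand: {} for brand in brands}
    for query, per_engine in answers.items():
        for engine, text in per_engine.items():
            for brand in brands:
                # Word-boundary match so "Asana" doesn't count inside
                # an unrelated longer word; case-insensitive.
                pattern = r"\b" + re.escape(brand) + r"\b"
                hits = len(re.findall(pattern, text, flags=re.IGNORECASE))
                report[brand][engine] = report[brand].get(engine, 0) + hits
    return report
```

Run it over the same saved query set each month and the deltas become your trend line, which is exactly the baseline the next section assumes you have.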

The best place to start? Run a free geo rank check on your brand. In under a minute, you’ll see which AI engines mention you, which ones don’t, and where your competitors show up instead.

This baseline is essential. Without it, you’re optimizing blind.

2. Don’t Abandon SEO. It Still Feeds AI

Here’s an important nuance: traditional search rankings still matter for GEO.

AI engines frequently pull from top-ranking Google results when generating their responses. If your page ranks well for a relevant query, there’s a higher chance an AI engine will reference it as a source. Google’s own AI Overviews heavily favor content that already performs well in organic search.

So keep doing what continues to drive SERP rankings:

  • Producing high-quality content
  • Building backlinks
  • Maintaining technical SEO

But think of SEO as the foundation, not the full strategy. The brands that win in AI search are those that layer GEO tactics on top of a solid SEO foundation.

3. Make Sure Your Content Follows GEO Best Practices

This is where most of the work happens. AI engines are selective about what they cite, and the structure and quality of your content play a massive role. Here’s what to focus on:

  • Write for citability, not just readability. AI engines look for content that makes clear, specific claims backed by data or expertise. Vague, fluffy paragraphs get skipped. Concrete statements like definitions, statistics, step-by-step processes, and expert opinions are far more likely to be pulled into a generated response.
  • Structure content around questions. Conversational AI is driven by user questions. Structure your content to directly answer the questions your audience asks. Use clear headers, concise paragraphs, and FAQ sections. When an AI engine scans your page and finds a clean, authoritative answer to a specific question, you become a prime candidate for citation.
  • Leverage schema markup and structured data. Help AI engines understand what your content is about by implementing proper schema markup. FAQ schema, How-To schema, and Organization schema all give AI systems stronger signals about your content’s topic and structure.
  • Build topical authority, not just keyword-specific content. AI engines favor sources that demonstrate deep expertise on a topic. Rather than publishing scattered blog posts across dozens of topics, build comprehensive content clusters that cover a subject thoroughly. This signals to AI engines that your brand is a reliable authority worth citing.
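To make the schema point concrete: FAQ markup is typically embedded in the page as JSON-LD. A minimal instance of schema.org’s FAQPage type (the question and answer text here are placeholder examples) looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Generative Engine Optimization?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "GEO is the practice of optimizing your online presence so AI engines cite, reference, and recommend your brand."
    }
  }]
}
</script>
```

Each additional question/answer pair is just another object in the `mainEntity` array, so the markup scales naturally with your FAQ section.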

Pro Tip: Leverage a comprehensive GEO platform. Optimizing your content for AI search involves many moving parts: content structure, schema markup, topical authority, and technical SEO. Keeping track of all these signals manually across every page on your site isn’t realistic, especially as AI engines update how they evaluate sources. A dedicated GEO platform lets you regularly scan your entire website, monitor your optimization scores, and catch issues before they cost you citations.

Want to see where you stand right now? Run a free GEO audit and get actionable insights on your site’s AI readiness in under a minute.

4. Show Up In Reddit & UGC Discussions

Here’s a strategy most brands overlook: AI engines love Reddit.

If you’ve noticed Reddit threads showing up in Google results more frequently, that’s not a coincidence. Google and AI platforms increasingly treat user-generated content, especially Reddit, as a trusted and authentic source of information. When someone asks an AI engine for a product recommendation or solution comparison, the response often draws from Reddit discussions.

This means your brand’s presence in relevant threads matters more than ever. But you can’t just show up and start promoting yourself. Here’s how to approach it the right way:

  • Find where your audience is already talking. Search Reddit for your product category, your competitors’ names, and the problems you solve. Identify 5–10 active subreddits where these conversations happen. Look for threads like “what tool do you use for [your category].”  These are the discussions AI engines pull from.
  • Contribute before you promote. Spend at least 2–3 weeks genuinely participating before your brand ever comes up. Reddit users check post history, and if your account is nothing but product mentions, you’ll get flagged as spam.
  • Be honest, not salesy. When a relevant recommendation thread comes up, share your product as one option among others. Mention what it’s good at and where it might not be the best fit. AI engines weigh authentic, nuanced mentions far more heavily than obvious self-promotion.
  • Check what AI engines are citing. Run your core queries in ChatGPT and Perplexity and see which Reddit threads appear. If your brand isn’t in those threads, that’s where to focus.

5. Get Featured In Listicles On Trusted Sites

When users ask AI engines for recommendations like “best project management tools,” the AI doesn’t generate that list from scratch. It synthesizes from existing listicle articles on authoritative websites. A single placement in a well-ranking listicle can get your brand recommended across ChatGPT, Perplexity, and Google AI Overviews simultaneously.

  • Find the listicles AI engines are already citing. Run your target recommendation queries in ChatGPT and Perplexity and note which articles they reference. These are the exact listicles you need to be in.
  • Build a hit list of publishers. Identify publications that come up repeatedly across both AI and traditional search results for “best [your category]” queries. Prioritize sites with strong domain authority.
  • Make inclusion easy. Make sure your product pages have a clear one-liner, obvious differentiators, social proof, and transparent pricing. Then pitch authors with something valuable, such as a free account, a demo, or data they can use.

Listicles get updated regularly and AI engines re-scan them, so a placement you earn today could start driving AI citations within weeks.

The Window Is Open, For Now

Generative Engine Optimization is still in its early stages. Most brands haven’t even started thinking about it, which means the opportunity to establish an early advantage is enormous.

The brands that start measuring their AI visibility, optimizing their content for citability, building community presence, and earning placements in authoritative listicles today will be the ones AI engines default to recommending tomorrow.

The question isn’t whether AI search will matter for your business. It’s whether you’ll be visible when it does.

Start Optimizing For AI Search Today

Every strategy in this article comes down to one thing: making your brand the obvious choice when AI engines look for sources to cite and recommend. You don’t need to tackle everything at once, but you do need to start.

Geoptie brings all five strategies together in one platform, from tracking your AI visibility across ChatGPT, Perplexity, and Google AI to auditing your content and monitoring your optimization scores over time. It’s built specifically for GEO, so you can stop guessing and start seeing exactly where your brand stands in AI search.

The early movers will own this space. Make sure you’re one of them.


Image Credits

Featured Image: Image by Tor App. Used with permission.