Why The Build Process Of Custom GPTs Matters More Than The Technology Itself

When Google introduced the transformer architecture in its 2017 paper “Attention Is All You Need,” few realized how much it would help transform digital work. Transformer architecture laid the foundations for today’s GPTs, which are now part of our daily work in SEO and digital marketing.

Search engines have used machine learning for decades, but it was the rise of generative AI that made many of us actively explore AI. AI platforms and tools like custom GPTs are already influencing how we research keywords, generate content ideas, and analyze data.

The real value, however, is not in using these tools to cut corners. It lies in designing them intentionally, aligning them with business goals, and ensuring they serve users’ needs.

This article is not a tutorial on how to build GPTs. I share why the build process itself matters, what I have learned so far, and how SEOs can use this product mindset to think more strategically in the age of AI.

From Barriers To Democratization

Not long ago, building tools without coding experience meant relying on developers, dealing with long lead times, and waiting for vendors to release new features. That has changed. The democratization of technology has lowered the entry barriers, making it possible for anyone with curiosity to experiment with building tools like custom GPTs. At the same time, expectations have risen: we now expect tools to be intuitive, efficient, and genuinely useful.

This is a reason why technical skills still matter. But they’re not enough on their own. What matters more, in my opinion, is how we apply them. Are we solving a real problem? Are we creating workflows that align with business needs?

The strategic questions SEOs should be asking are no longer just “Can I build this?” but:

  • Should I build this?
  • What problem am I solving, and for whom?
  • What’s the ultimate goal?

Why The Build Process Matters

Building a custom GPT is straightforward. Anyone can add a few instructions and click “save.” What really matters is what happens before and after: defining the audience, identifying the problem, scoping the work realistically, testing and refining outputs, and aligning them with business objectives.

In many ways, this is what good marketing has always been about: understanding the audience, defining their needs, and designing solutions that meet them.

As an international SEO, I’ve often seen cultural relevance and digital accessibility treated as afterthoughts. OpenAI’s custom GPT builder offered me a way to explore whether AI could help address these challenges, especially since it is accessible to those of us without any coding expertise.

What began as a single project to improve cultural relevance in global SEO soon evolved into two separate GPTs when I realized the scope was larger than I could manage at the time.

That change wasn’t a failure, but a part of the process that led me toward a better solution.

Case Study: 2 GPTs, 1 Lesson

The Initial Idea

My initial idea was to build a custom GPT that could generate content ideas tailored to the UK, US, Canada, and Australia, taking both linguistic and cultural nuances into account.

As an international SEO, I know it is hard to engage global audiences who expect personalized experiences. Translation alone is not enough. Content must be linguistically accurate and contextually relevant.

This mirrors the wider shift in search itself. Users now expect personalized, context-driven results, and search engines are moving in that same direction.

A Change In Direction

As I began building, I quickly realized that the scope was bigger than expected. Capturing cultural nuance across four different markets while also learning how to build and refine GPTs required more time than I could commit at that moment.

Rather than abandoning the project, I reframed it as a minimum viable product. I revisited the scope and shifted focus to another important challenge, one with more consistent requirements – digital accessibility.

The accessibility GPT was designed to flag issues, suggest inclusive phrasing, and support internal advocacy. It adapted outputs to different roles, so SEOs, marketers, and project managers could each use it in relevant ways in their day-to-day work.

This wasn’t giving up on the content project. It was a deliberate choice to learn from one use case and apply those lessons to the next.

The Outcome

Working on the accessibility GPT first helped me think more carefully about scope and validation, which paid off.

As accessibility requirements are more consistent than cultural nuance, it was easier to refine prompts and test role-specific outputs, ensuring an inclusive, non-judgmental tone.

I shared the prototype with other SEOs and accessibility advocates. Their feedback was invaluable: although generally positive, it surfaced inconsistencies I hadn’t seen, including in how I described the prompt in the GPT store.

After all, accessibility is not just about alt text or color contrast. It’s about how information is presented.

Once the accessibility GPT was running, I went back to the cultural content GPT, better prepared, with clearer expectations and a stronger process.

The key takeaway here is that the value lies not only in the finished product, but in the process of building, testing, and refining.

Risks And Challenges Along The Way

Not every risk became an issue, but the process brought its share of challenges.

The biggest was underestimating time and scope, which I solved by revisiting the plan and starting smaller. There were also platform limitations – ongoing model development, AI fatigue, and hallucinations. OpenAI itself has admitted that hallucinations are mathematically unavoidable. The best response is to be precise with prompts, keep instructions detailed, and always maintain a human-in-the-loop approach. GPTs should be seen as assistants, not replacements.

Collaboration added another layer of complexity. Feedback loops depended on colleagues’ availability, so I had to stay flexible and allow extra time. Their input, however, was crucial – I couldn’t have made progress without them. As none of these factors were under my control, I could only keep on top of developments and handle them as best I could.

These challenges reinforced an important truth: Building strategically isn’t about chasing perfection, but about learning, adapting, and improving with each iteration.

Applying Product Thinking

The process I followed was similar to how product managers approach new products. SEOs can adopt the same mindset to design workflows that are both practical and strategic.

Validate The Problem

Not every issue needs AI – and not every issue needs solving. Identify and prioritize what really matters at that time and confirm whether a custom GPT, or any other tool, is the right way to address it.

Define The Use Case

Who will use the GPT, and how? A wide reach may sound appealing, but value comes from meeting specific needs. Otherwise, success can quickly fade away.

My GPTs are designed to support SEOs, marketers, and project managers in different scenarios of their daily work.

Prototype And Test

There is real value in starting small. With GPTs, I needed to write clear, specific instructions, then review the outputs and refine.

For instance, instead of asking the accessibility GPT for general ideas on making a form accessible, I instructed it to act as an SEO briefing developers on fixes or as a project manager assigning tasks.

For the content GPT, I instructed it to act as a UK or U.S. content strategist, developing inclusive, culturally relevant ideas for specific publications in British English or Standard American English.

Iterate With Feedback

Bring colleagues and subject-matter experts into the process early. Their insights challenge assumptions, highlight inconsistencies, and make outputs more robust.

Keep On Top Of Developments

AI platforms evolve quickly, and processes also need to adapt to different scenarios. Product thinking means staying agile, adapting to change, and reassessing whether the tools we build still serve their purpose.

The troubled roll-out of GPT-5 reminded me how volatile the landscape can be.

Practical Applications For SEOs

Why build GPTs when there are already so many excellent SEO tools available? For me, it was partly curiosity and partly a way to test what I could achieve with my existing skills before suggesting a collaboration for a different product.

Custom GPTs can add real value in specific situations, especially with a human-in-the-loop approach. Some of the most useful applications I have found include:

  • Analyzing campaign data to support decision-making.
  • Assisting with competitor analysis across global markets.
  • Supporting content ideation for international audiences.
  • Clustering keywords or highlighting internal linking opportunities.
  • Drafting documentation or briefs.

The point is not to replace established tools or human expertise, but to use them as assistants within structured workflows. They can free up time for deeper thinking, while still requiring careful direction and review.

How SEOs Can Apply Product Thinking

Even if you never build a GPT, you can apply the same mindset in your day-to-day work. Here are a few suggestions:

  • Frame challenges strategically: Ask who the end user is, what they need, and what is broken in their experience. Don’t start with tactics without context.
  • Design repeatable processes: Build workflows that scale and evolve over time, instead of one-off fixes.
  • Test and learn: Treat tactics like prototypes. Run experiments and refine based on results. If A/B testing isn’t possible, as is often the case, at least be open to making adjustments where needed.
  • Collaborate across teams: SEO does not exist in isolation. Work with UX, development, and content teams early. The key is to find ways to add value to their work.
  • Redefine success metrics: Qualified traffic, conversions, and internal process improvements still matter in the AI era. Success should reflect actual business impact.
  • Use AI strategically: Quick wins are tempting, but GPTs and other tools are best used to support structured workflows and highlight blind spots. Keep a human-in-the-loop approach to ensure outputs are accurate and relevant to your business needs.

Final Thought

The real innovation is not in the technology itself, but in how we choose to apply it.

We are now in the fifth industrial revolution, a time when humans and machines collaborate more closely than ever.

For SEOs, the opportunity is to move beyond tactical execution and start thinking like product strategists. That means asking sharper questions, testing hypotheses, designing smarter workflows, and creating solutions that adapt to real-world constraints.

It is about providing solutions, not just executing tasks.

Featured Image: SvetaZi/Shutterstock

The AI Search Visibility Audit: 15 Questions Every CMO Should Ask

This post was sponsored by IQRush. The opinions expressed in this article are the sponsor’s own.

Your traditional SEO is winning. Your AI visibility is failing. Here’s how to fix it.

Your brand dominates page one of Google. Domain authority crushes competitors. Organic traffic trends upward quarter after quarter. Yet when customers ask ChatGPT, Perplexity, or others about your industry, your brand is nowhere to be found.

This is the AI visibility gap, which causes missed opportunities in awareness and sales.

“SEO ranking on page one doesn’t guarantee visibility in AI search. The rules of ranking have shifted from optimization to verification.”

Raj Sapru, Netrush, Chief Strategy Officer

Recent analysis of AI-powered search patterns reveals a troubling reality: commercial brands with excellent traditional SEO performance often achieve minimal visibility in AI-generated responses. Meanwhile, educational institutions, industry publications, and comparison platforms consistently capture citations for product-related queries.

The problem isn’t your content quality. It’s that AI engines prioritize entirely different ranking factors than traditional search: semantic query matching over keyword density, verifiable authority markers over marketing claims, and machine-readable structure over persuasive copy.

This audit exposes 15 questions that separate AI-invisible brands from citation leaders.

We’re sharing the first 7 critical questions below, covering visibility assessment, authority verification, and measurement fundamentals. These questions will reveal your most urgent gaps and provide immediate action steps.

Question 1: Are We Visible in AI-Powered Search Results?

Why This Matters: Commercial brands with strong traditional SEO often achieve minimal AI citation visibility in their categories. A recent IQRush field audit found fewer than one in ten AI-generated answers included the brand, showing how limited visibility remains, even for strong SEO performers. Educational institutions, industry publications, and comparison sites dominate AI responses for product queries—even when commercial sites have superior content depth. In regulated industries, this gap widens further as compliance constraints limit commercial messaging while educational content flows freely into AI training data.

How to Audit:

  • Test core product or service queries through multiple AI platforms (ChatGPT, Perplexity, Claude)
  • Document which sources AI engines cite: educational sites, industry publications, comparison platforms, or adjacent content providers
  • Calculate your visibility rate: queries where your brand appears vs. total queries tested
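
To make that last step concrete, here is a minimal sketch of the visibility-rate calculation in Python. The queries and citation results are hypothetical placeholders, not real data; substitute the results of your own tests.

```python
# Hypothetical test data: for each query, the AI platforms that cited the brand.
results = {
    "best project management software": ["perplexity"],
    "project management tool for remote teams": [],
    "how to choose project management software": ["chatgpt", "perplexity"],
}

# Visibility rate = queries where the brand appeared / total queries tested.
queries_with_brand = sum(1 for platforms in results.values() if platforms)
visibility_rate = queries_with_brand / len(results)
print(f"Visibility rate: {visibility_rate:.0%}")  # 2 of 3 queries -> 67%
```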

Action: If educational/institutional sources dominate, implement their citation-driving elements:

  • Add research references and authoritative citations to product content
  • Create FAQ-formatted content with an explicit question-answer structure
  • Deploy structured data markup (Product, FAQ, Organization schemas)
  • Make commercial content as machine-readable as educational sources

IQRush tracks citation frequency across AI platforms. Competitive analysis shows which schema implementations, content formats, and authority signals your competitors use to capture citations you’re losing.

Question 2: Are Our Expertise Claims Actually Verifiable?

Why This Matters: Machine-readable validation drives AI citation decisions: research references, technical standards, certifications, and regulatory documentation. Marketing claims like “industry-leading” or “trusted by thousands” carry zero weight. In one IQRush client analysis, more than four out of five brand mentions were supported by citations—evidence that structured, verifiable content is far more likely to earn visibility. Companies frequently score high on human appeal—compelling copy, strong brand messaging—but lack the structured authority signals AI engines require. This mismatch explains why brands with excellent traditional marketing achieve limited citation visibility.

How to Audit:

  • Review your priority pages and identify every factual claim made (performance stats, quality standards, methodology descriptions)
  • For each claim, check whether it links to or cites an authoritative source (research, standards body, certification authority)
  • Calculate verification ratio: claims with authoritative backing vs. total factual claims made

Action: For each unverified claim, either add authoritative backing or remove the statement:

  • Add specific citations to key claims (research databases, technical standards, industry reports)
  • Link technical specifications to recognized standards bodies
  • Include certification or compliance verification details where applicable
  • Remove marketing claims that can’t be substantiated with machine-verifiable sources

IQRush’s authority analysis identifies which claims need verification and recommends appropriate authoritative sources for your industry, eliminating research time while ensuring proper citation implementation.

Question 3: Does Our Content Match How People Query AI Engines?

Why This Matters: Semantic alignment matters more than keyword density. Pages optimized for traditional keyword targeting often fail in AI responses because they don’t match conversational query patterns. A page targeting “best project management software” may rank well in Google but miss AI citations if it doesn’t address how users actually ask: “What project management tool should I use for a remote team of 10?” In recent IQRush client audits, AI visibility clustered differently across verticals—consumer brands surfaced more frequently for transactional queries, while financial clients appeared mainly for informational intent. Intent mapping—informational, consideration, or transactional—determines whether AI engines surface your content or skip it.

How to Audit:

  • Test sample queries customers would use in AI engines for your product category
  • Evaluate whether your content is structured for the intent type (informational vs. transactional)
  • Assess if content uses conversational language patterns vs. traditional keyword optimization

Action: Align content with natural question patterns and semantic intent:

  • Restructure content to directly address how customers phrase questions
  • Create content for each intent stage: informational (education), consideration (comparison), transactional (specifications)
  • Use conversational language patterns that match AI engine interactions
  • Ensure semantic relevance beyond just keyword matching

IQRush maps your content against natural query patterns customers use in AI platforms, showing where keyword-optimized pages miss conversational intent.

Question 4: Is Our Product Information Structured for AI Recommendations?

Why This Matters: Product recommendations require structured data. AI engines extract and compare specifications, pricing, availability, and features from schema markup—not from marketing copy. Products with a comprehensive Product schema capture more AI citations in comparison queries than products buried in unstructured text. Bottom-funnel transactional queries (“best X for Y,” product comparisons) depend almost entirely on machine-readable product data.

How to Audit:

  • Check whether product pages include Product schema markup with complete specifications
  • Review if technical details (dimensions, materials, certifications, compatibility) are machine-readable
  • Test transactional queries (product comparisons, “best X for Y”) to see if your products appear
  • Assess whether pricing, availability, and purchase information is structured

Action: Implement comprehensive product data structure:

  • Deploy Product schema with complete technical specifications
  • Structure comparison information (tables, lists) that AI can easily parse
  • Include precise measurements, certifications, and compatibility details
  • Add FAQ schema addressing common product selection questions
  • Ensure pricing and availability data is machine-readable
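
As an illustration of the kind of markup this list describes, the sketch below generates a minimal schema.org Product JSON-LD block in Python. The product details are invented for the example; a real implementation should carry your full specification set.

```python
import json

# Invented example product; replace every field with real catalog data.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Shoe",
    "sku": "ETS-042",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "description": "Lightweight trail running shoe with a 6 mm drop.",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Emit the script tag you would embed in the product page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```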

IQRush’s ecommerce audit scans product pages for missing schema fields—price, availability, specifications, reviews—and prioritizes implementations based on query volume in your category.

Question 5: Is Our “Fresh” Content Actually Fresh to AI Engines?

Why This Matters: Recency signals matter, but timestamp manipulation doesn’t work. Pages with recent publication dates but outdated information underperform older pages with substantive updates: new research citations, current industry data, or refreshed technical specifications. Genuine content updates outweigh simple republishing with changed dates.

How to Audit:

  • Review when your priority pages were last substantively updated (not just timestamp changes)
  • Check whether content references recent research, current industry data, or updated standards
  • Assess if “evergreen” content has been refreshed with current examples and information
  • Compare your content recency to competitors appearing in AI responses

Action: Establish genuine content freshness practices:

  • Update high-priority pages with current research, data, and examples
  • Add recent case studies, industry developments, or regulatory changes
  • Refresh citations to include latest research or technical standards
  • Implement clear “last updated” dates that reflect substantive changes
  • Create update schedules for key content categories

IQRush compares your content recency against competitors capturing citations in your category, flagging pages that need substantive updates (new research, current data) versus pages where timestamp optimization alone would help.

Question 6: How Do We Measure What’s Actually Working?

Why This Matters: Traditional SEO metrics—rankings, traffic, CTR—miss the consideration impact of AI citations. Brand mentions in AI responses influence purchase decisions without generating click-through attribution, functioning more like brand awareness channels than direct response. CMOs operating without AI visibility measurement can’t quantify ROI, allocate budgets effectively, or report business impact to executives.

How to Audit:

  • Review your executive dashboards: Are AI visibility metrics present alongside SEO metrics?
  • Examine your analytics capabilities: Can you track how citation frequency changes month-over-month?
  • Assess competitive intelligence: Do you know your citation share relative to competitors?
  • Evaluate coverage: Which query categories are you blind to?

Action: Establish AI citation measurement:

  • Track citation frequency for core queries across AI platforms
  • Monitor competitive citation share and positioning changes
  • Measure sentiment and accuracy of brand mentions
  • Add AI visibility metrics to executive dashboards
  • Correlate AI visibility with consideration and conversion metrics

IQRush tracks citation frequency, competitive share, and month-over-month trends across AI platforms. No manual testing or custom analytics development is required.

Question 7: Where Are Our Biggest Visibility Gaps?

Why This Matters: Brands typically achieve citation visibility for a small percentage of relevant queries, with dramatic variation by funnel stage and product category. IQRush analysis showed the same imbalance: consumer brands often surfaced in purchase-intent queries, while service firms appeared mostly in educational prompts. Most discovery moments generate zero brand visibility. Closing these gaps expands reach at stages where competitors currently dominate.

How to Audit:

  • List queries customers would ask about your products/services across different funnel stages
  • Group them by funnel stage (informational, consideration, transactional)
  • Test each query in AI platforms and document: Does your brand appear?
  • Calculate what percentage of queries produce brand mentions in each funnel stage
  • Identify patterns in the queries where you’re absent

Action: Target the funnel stages with lowest visibility first:

  • If weak at informational stage: Build educational content that answers “what is” and “how does” queries
  • If weak at consideration stage: Create comparison content structured as tables or side-by-side frameworks
  • If weak at transactional stage: Add comprehensive product specs with schema markup
  • Focus resources on stages where small improvements yield largest reach gains

IQRush’s funnel analysis quantifies gap size by stage and estimates impact, showing which content investments will close the most visibility gaps fastest.

The Compounding Advantage of Early Action

The first seven questions and actions highlight the differences between traditional SEO performance and AI search visibility. Together, they explain why brands with strong organic rankings often have zero citations in AI answers.

The remaining 8 questions in the comprehensive audit help you take your marketing further. They focus on technical aspects: the structure of your content, the backbone of your technical infrastructure, and the semantic strategies that signal true authority to AI. 

“Visibility in AI search compounds, making it harder for your competition to break through. The brands that make themselves machine-readable today will own the conversation tomorrow.”
Raj Sapru, Netrush, Chief Strategy Officer

IQRush data shows the same thing across industries: early brands that adopt a new AI answer engine optimization strategy quickly start to lock in positions of trust that competitors can’t easily replace. Once your brand becomes the reliable answer source, AI engines will start to default to you for related queries, and the advantage snowballs.

The window to be an early adopter and claim AI visibility for your brand will not stay open forever. As more brands invest, the race is heating up.

Download the Complete AI Search Visibility Audit with detailed assessment frameworks, implementation checklists, and the 8 strategic questions covering content architecture, technical infrastructure, and linguistic optimization. Each question includes specific audit steps and immediate action items to close your visibility gaps and establish authoritative positioning before your market becomes saturated with AI-optimized competitors.

Image Credits

Featured Image: Image by IQRush. Used with permission.

In-Post Images: Image by IQRush. Used with permission.

Trust In AI Shopping Is Limited As Shoppers Verify On Websites via @sejournal, @MattGSouthern

A new IAB and Talk Shoppe study finds AI is accelerating discovery and comparisons, but it’s not the last stop.

Here are the key points before we get into the details:

  • AI pushes people to verify details on retailer sites, search, reviews, and forums rather than replacing those steps.
  • Only about half fully trust AI recommendations, which creates predictable detours when links are broken or specs and pricing don’t match.
  • Retailer traffic rises after AI, with one in three shoppers clicking through directly from an assistant.

About The Report

This report combines more than 450 screen-recorded AI shopping sessions with a U.S. survey of 600 consumers, giving you observed behavior and stated attitudes in one place.

It tracks where AI helps, where trust breaks, and what people do next.

Key Findings

AI speeds up research and makes it more focused, especially for comparing options, but it increases the number of steps as shoppers validate details elsewhere.

In the sessions, people averaged 1.6 steps before AI and 3.8 afterward, and 95% took extra steps to feel confident before ending a session.

Retailer and marketplace sites are the primary destination for validation. Seventy-eight percent of shoppers visited a retailer or marketplace during the journey, and 32% clicked directly from an AI tool.

The share that visited retailer sites rose from 20% before AI to 50% after AI. On those pages, people most often checked prices and deals, variants, reviews, and availability.

Low Trust In AI Recommendations

Trust is a constraint. Only 46% fully trusted AI shopping recommendations.

Common friction points where people lost trust were:

  • Missing links or sources
  • Mismatched specs or pricing
  • Outdated availability
  • Recommendations that didn’t fit budget or compatibility needs

These friction points sent people back to search, retailers, reviews, and forums.

Why This Matters

AI chatbots now shape mid-journey research.

If your product data, comparison content, and reviews are inconsistent with retailer listings, shoppers will notice when they verify elsewhere.

This reinforces the need to align details across channels to retain customer trust.

What To Do With This Info

Here are concrete steps you can take based on the report’s information:

  • Keep specs, pricing, availability, and variants in sync with retailer feeds.
  • Build comparison and “alternatives” pages around the attributes people prompt for.
  • Expand structured data for specs, variants, availability, and reviews.
  • Create content to answer common objections surfaced in forums and comment threads.
  • Monitor the queries and communities where shoppers validate information to close recurring gaps.

Looking Ahead

Respondents said AI made research feel easier, but confidence still depends on clear sources and verified reviews.

Expect assistants to keep influencing discovery while retailer and brand pages confirm the details that matter.

For more insight into how AI influences the shopping journey, see the full report.


Featured Image: Andrey_Popov/Shutterstock

AI Is Breaking The Economics Of Content via @sejournal, @Kevin_Indig

What does it say about the economics of content when the most visible site on the web loses significant traffic?

A status report by Wikipedia shows a significant decline in human page views over the last few months as a result of generative AI, “especially with search engines providing answers directly to searchers” [1].

Image Credit: Kevin Indig


  • Evergreen content = Educational content covering established, timeless topics.
  • Additive content = Content that provides net-new takes, insights, and conversations.

Wikipedia is an evergreen site. Even though it’s a user-generated content (UGC) platform like Reddit or YouTube, its primary purpose is to serve comprehensive definitions on established topics. Reddit, YouTube, and LinkedIn & Co. are about additive topics and insights.

AI destroys the value of one while raising it for the other.

Wikipedia’s human traffic has dipped 5% YoY, while scraper traffic grew by 10.5% and bot traffic by 162.4% [2]. The fact that scrapers and bots together make up almost as much traffic as humans is symbolic of the eroding value of answering questions.

Even though Wikipedia’s direct traffic is up ~23% and ChatGPT referrals are up 3.5x YoY, Google referrals are down 35% because AI Overviews make it redundant for users to click through.

Image Credit: Kevin Indig

Over the same time that Wikipedia lost ~90 million visits, Google started showing a lot more AI Overviews that answer user questions directly – often based on Wikipedia’s content.

Image Credit: Kevin Indig

Almost 50% of Wikipedia’s queries display a large AIO at the top of the search results. That’s no outlier: Reddit is at 46% and YouTube at 38%.

Google and ChatGPT reward additive content.

YouTube’s citation rate jumped from 37% to 54% (up 17 percentage points) at the same time as Wikipedia dropped from 58% to 42% (down 16 percentage points). Video is replacing text as Google’s primary source for answers.

Image Credit: Kevin Indig

ChatGPT cites Wikipedia 3x more often than it mentions the site, while Reddit is at one-to-one and YouTube at ~250%! Since users don’t click citations, mentions are much more valuable. [3]

Pre-AI, the economics of evergreen content were net-positive because it attracted clicks from Google, some of which converted into customers. LLMs like ChatGPT, AI Overviews, or AI Mode are not incentivized to send out traffic but to give the best answer, which makes the experience more similar to TikTok than Search.

LLMs use web content like Wikipedia for training, but offer invisible citations instead of mentions. The net return is negative. Wikipedia has to convince donors that it’s still worth giving money, while its content is used as a utility for LLMs.

Over the last 12 months, sites offering additive UGC have gained LLM visibility [4]:

  • Reddit.
  • LinkedIn.
  • YouTube.
  • Quora.
  • Yelp.
  • Tripadvisor.
  • Etc.

At the same time, content sites offering evergreen content lost significant amounts of organic traffic (and value):

  • Stack Overflow.
  • Chegg.
  • Britannica.
  • Wiktionary.
  • History.com.
  • eHow.
  • Etc.

With fewer and eventually maybe zero clicks arriving [5], the value of creating evergreen content is questionable – not just for Wikipedia.

The fix is to shift focus from evergreen topics to net-new insights:

  1. Invest more in additive content: data stories, research, customer success stories, thought leadership, etc. Oura, Ramp, Okta, and others are already making the shift and hiring economists, journalists, and researchers. [6, 7, 8]
  2. Lower your investment in evergreen content in favor of additive content. We don’t know the right mix, but 50/50 or even 70/30 seems better than 80/20.
  3. When to keep evergreen content: For user experience (critical to understand a topic), Topical Authority, or when you can automate + enrich with unique data.
  4. When creating evergreen content, focus on hyperlong-tail topics aligned with your audience personas and positioning that no one else is visible for.

Evaluate additive content against influenced pipeline, LLM citations/mentions/Share of Voice, and publisher links/coverage.


Featured Image: Paulo Bobita/Search Engine Journal

OpenAI Flags Emotional Reliance On ChatGPT As A Safety Risk via @sejournal, @MattGSouthern

OpenAI is telling companies that “relationship building” with AI has limits. Emotional dependence on ChatGPT is considered a safety risk, with new guardrails in place.

  • OpenAI says it has added “emotional reliance on AI” as a safety risk.
  • The new system is trained to discourage exclusive attachment to ChatGPT.
  • Clinicians helped define what “unhealthy attachment” looks like and how ChatGPT should respond.

The Same But Different: Evolving Your Strategy For AI-Driven Discovery via @sejournal, @alexmoss

The web – and the way in which humans interact with it – has definitely changed since the early days of SEO and the emergence of search engines in the early to mid-90s. In that time, we’ve witnessed the internet turn from something that nobody understood into something most of the world cannot operate without. An old interview between Bill Gates and David Letterman puts this 30-year phenomenon into perspective.

The attitude 30 years ago was that the internet was not understood at all, and neither was its potential influence. Today, the concept of AI entering our daily lives is taken much more seriously, to the point that many look upon it with fear – perhaps because we [think] we have an accurate outlook on how this may progress.

This transformation isn’t so much about the skills we’ve developed over time, but rather about the evolution of the technology and channels that surround them. Those technologies and channels are evolving at a fast pace, causing some to panic over whether their existing technical skills will still apply within today’s Search ecosystem.

The Technological Rat Race

Right now, it may feel like there’s something new to learn or a new product to experiment with every day, and it can be difficult to decide where to focus your attention and priorities. This is, unfortunately, a phase that I believe will continue for a good couple of years as the dust settles over this wild west of change.

Because these changes are impacting nearly everything an SEO would be responsible for as part of organic visibility, it may feel overwhelming to digest all these things – all while we seemingly take on the challenge of communicating these changes to our clients or stakeholders/board members.

But change does not equal the end of days. This “change” relates to the technology around what we’ve been doing for over a generation, and not the foundation of the discipline itself.

Old Hat Is New Hat

The major search engines, including Google and Bing, have been actively telling you that core SEO principles should still be at the forefront of what we do moving forward. Danny Sullivan, former Search Liaison at Google, also made this clear during his recent keynote at WordCamp USA.

The consistent messages are clear:

  • Produce well-optimized sites that perform well.
  • Populate solid structured data and entity knowledge graphs.
  • Reinforce brand sentiment and perspective.
  • Offer unique, valuable content for people.

The problem some may have is that the content we produce is now more for agents than for people – and if that is true, what impact does it have?

The Web Is Splitting Into Two

The open web has been disrupted most of all, with some business models being uprooted by taking solved knowledge and serving it within their platform, appropriating the human visitor, which they rely on for income.

This has created a split from a complete open web into two – the “human” web and the “agentic” web – where these two audiences are both major considerations and will differ from site to site. SEOs will have to consider both sides of the web and how to serve both – which is where an SEO’s skill set becomes more valuable than it was before.

One example can be seen in the way agents now take charge of ecommerce transactions. OpenAI announced “Buy it in ChatGPT,” making the buying experience even more seamless with instant checkout, and open-sourced the technology behind it, the Agentic Commerce Protocol (ACP), which is already being adopted by content management systems (CMSs), including Shopify. This split between agentic and human engagement will still require optimization in order to ensure maximum discoverability.

When it comes to content, ensure everything is concise and avoid fluff – what I refer to as “tokenization spam.” Content isn’t just crawled; it’s processed, chunked, and tokenized, and agents give preference to well-structured, well-formatted text.
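
To illustrate the chunking idea, here is a toy sketch of how a retrieval pipeline might split a page into fixed-size word chunks before embedding. Real systems use smarter, token-based splitters that respect headings and paragraph boundaries; this only shows the concept.

```python
def chunk_words(text: str, max_words: int = 200) -> list[str]:
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

page = "Concise, well-structured copy survives chunking intact. " * 100
chunks = chunk_words(page)
print(len(chunks), "chunks;", len(chunks[0].split()), "words in the first")
# 3 chunks; 200 words in the first
```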

“Short-Term Wins” Sounds Like Black Hat

Of course, during any technological shift, there will be some bad actors who may tell you about a brand-new tactic that is guaranteed to work to help you “rank in AI.” Remember that the dust has not yet settled when it comes to the maturity of these assistance engines, and you should compare this to the pre-Panda/Penguin era of SEO, where black hat SEO techniques were easier to achieve.

These algorithm updates closed those loopholes, and the same will happen again as these platforms improve – only faster, as agents learn to identify what is genuinely trustworthy with increasing precision.

Success Metrics Will Change, Not The Execution To Influence Them

In reality, core SEO principles and foundations are still the same, and have remained so through most changes in the past – including “the end of desktop” when mobile became dominant, and “the end of typing” when voice search started to grow with products such as Alexa, Google Home, and even Google Glass.

Is the emergence of AI going to render what I do as an SEO obsolete? No.

Technical SEO remains the same, and the attributes that agents look at are not dissimilar to what we would be optimizing if large language models (LLMs) weren’t around. Brand marketing remains the same. While “brand sentiment” is used more widely as a term nowadays, it is something that should always have been part of our online marketing strategies when it comes to authority, relevance, and perspective.

That being said, our native metrics have been devalued within the space of two years, and they will continue to shift alongside the changes yet to come as these platforms gain stability. This has already skewed year-over-year data and will continue to do so for the year ahead as more LLMs evolve. It is, however, comparable to past events such as the replacement of granular organic keyword data with a single (not provided) metric in Google Analytics, the deprecation of Yahoo! Site Explorer, or the devaluation of benchmark data such as Alexa Rank and Google PageRank.

Revise Your Success Metric Considerations

Success metrics now have to go beyond the SERP into visibility and discoverability as a whole, across multiple channels. There are now several tools and platforms that can analyze and report on AI-focused visibility metrics, such as Yoast AI Brand Insights, which can provide better insight into how your brand is interpreted by LLMs.

If you’re more technical, make use of the Model Context Protocol (MCP) to explore data via natural language dialogs. MCP is an open-source standard that lets AI applications connect to external systems like databases, tools, or workflows (you can visualize this as a USB-C port for AI) so they can access information and perform tasks using a simple, unified connection. There are already several MCP servers you can work with.

You can take this a step further by coupling these MCP servers with a vibe coding tool such as Claude Code, using it to create a reporting app that combines their data and generates visuals and interactive charts for you and your clients/stakeholders.

The Same But Different … But Still The Same

While the divergence between human and agentic experiences is increasing, the methods by which we, as SEOs, optimize for them are not too dissimilar. Leverage both within your strategy – just as you did when mobile gained traction.

Featured Image: Vallabh Soni/Shutterstock

OpenAI Releases Shared Project Feature To All Users via @sejournal, @martinibuster

OpenAI announced that it is upgrading all ChatGPT accounts to be eligible for the project sharing feature, which enables users to share a ChatGPT project with others who can then participate and make changes. The feature was previously available only to users on OpenAI’s Business, Enterprise, and Edu plans.

The new feature is available to users globally in the Free, Plus, Pro, and budget Go plans, whether accessed on the web, iOS, or Android devices.

There are limits specific to each plan according to the announcement:

  • Free users can collaborate on up to 5 files and with 5 collaborators
  • Plus and Go users can share up to 25 files with up to 10 collaborators
  • Pro users can share up to 40 files with up to 100 collaborators

OpenAI suggested the following use cases:

“Group work: Upload notes, proposals, and contracts so collaborators can draft deliverables faster and stay in sync.

Content creation: Apply project-specific instructions to keep tone and style consistent across contributors.

Reporting: Store datasets and reports in one project, and return each week to generate updates without starting over.

Research: Keep transcripts, survey results, and market research in one place, so anyone in the project can query and build on the findings.

Project owners can choose to share a project with “Only those invited” or “Anyone with a link,” and can change visibility settings at any time including switching back to invite-only.”

Read more at OpenAI: Shared Projects

Featured Image by Shutterstock/LedyX

Microsoft Updates Copilot With Memory, Search Connectors, & More via @sejournal, @MattGSouthern

Microsoft announced its Copilot Fall Release, introducing features to make AI more personal and collaborative.

New capabilities include group collaboration, long-term memory, health tools, and voice-enabled learning.

Mustafa Suleyman, head of Microsoft AI, wrote in the announcement that the release represents a shift in how AI supports users.

Suleyman wrote:

“… technology should work in service of people. Not the other way around. Ever.”

What’s New

Search Improvements

Copilot Search combines AI-generated answers with traditional results in one view, providing cited responses for faster discovery.

Microsoft also highlighted its in-house models, including MAI-Voice-1, MAI-1-Preview, and MAI-Vision-1, as groundwork for more immersive Copilot experiences.

Memory & Personalization

Copilot now includes long-term memory that tracks user preferences and information across conversations.

You can ask Copilot to remember specific details like training for a marathon or an anniversary, and the AI can recall this information in future interactions. Users can edit, update, or delete memories at any time.

Search Across Services

New connector features link Copilot to OneDrive, Outlook, Gmail, Google Drive, and Google Calendar so you can search for documents, emails, and calendar events across multiple accounts using natural language.

Microsoft notes this is rolling out gradually and may not yet be available in all regions or languages.

Edge & Windows Integration

Copilot Mode in Edge is evolving into what Microsoft calls an “AI browser.”

With user permission, Copilot can see open tabs, summarize information, and take actions like booking hotels or filling forms.

Voice-only navigation enables hands-free browsing. Journeys and Actions are currently available in the U.S. only.

Shared AI Sessions

The Groups feature turns Copilot into a collaborative workspace for up to 32 people.

You can invite friends, classmates, or teammates to shared sessions. Start a session by sending a link, and anyone with the link can join and see the same conversation in real time.

This feature is U.S. only at launch.

Health Features

Copilot for health grounds responses in credible sources like Harvard Health for medical questions.

Health features are available only in the U.S. at copilot.microsoft.com and in the Copilot iOS app.

Voice Tutoring

Learn Live provides voice-enabled Socratic tutoring for educational topics.

Interactive whiteboards help you work through concepts for test preparation, language practice, or exploring new subjects. U.S. only.

“Mico” Character

Microsoft introduced Mico, an optional visual character that reacts during voice conversations.

Separately, Copilot adds a “real talk” conversation style that challenges assumptions and adapts to user preferences.

Why This Matters

These features change how Copilot fits into your workflow.

The move from individual to collaborative sessions means teams can use AI together rather than separately synthesizing results.

Long-term memory reduces the need to repeat context, which matters for ongoing projects where Copilot needs to understand your specific situation.

Looking Ahead

Features are live in the U.S. now. Microsoft says updates are rolling out across the UK, Canada, and beyond in the next few weeks.

Some features require a Microsoft 365 Personal, Family, or Premium subscription; usage limits apply. Specific availability varies by market, device, and platform.

Measuring When AI Assistants And Search Engines Disagree via @sejournal, @DuaneForrester

Before you get started, it’s important to heed this warning: There is math ahead! If doing math and learning equations makes your head swim, or makes you want to sit down and eat a whole cake, prepare yourself (or grab a cake). But if you like math, if you enjoy equations, and you really do believe that k=N (you sadist!), oh, this article is going to thrill you as we explore hybrid search in a bit more depth.

(Image Credit: Duane Forrester)

For years (decades), SEO lived inside a single feedback loop. We optimized, ranked, and tracked. Everything made sense because Google gave us the scoreboard. (I’m oversimplifying, but you get the point.)

Now, AI assistants sit above that layer. They summarize, cite, and answer questions before a click ever happens. Your content can be surfaced, paraphrased, or ignored, and none of it shows in analytics.

That doesn’t make SEO obsolete. It means a new kind of visibility now runs parallel to it. This article shows ideas of how to measure that visibility without code, special access, or a developer, and how to stay grounded in what we actually know.

Why This Matters

Search engines still drive almost all measurable traffic. Google alone handles almost 4 billion searches per day. By comparison, Perplexity’s reported total annual query volume is roughly 10 billion.

So yes, assistants are still small by comparison. But they’re shaping how information gets interpreted. You can already see it when ChatGPT Search or Perplexity answers a question and links to its sources. Those citations reveal which content blocks (chunks) and domains the models currently trust.

The challenge is that marketers have no native dashboard to show how often that happens. Google recently added AI Mode performance data into Search Console. According to Google’s documentation, AI Mode impressions, clicks, and positions are now included in the overall “Web” search type.

That inclusion matters, but it’s blended in. There’s currently no way to isolate AI Mode traffic. The data is there, just folded into the larger bucket. No percentage split. No trend line. Not yet.

Until that visibility improves, I’m suggesting we can use a proxy test to understand where assistants and search agree and where they diverge.

Two Retrieval Systems, Two Ways To Be Found

Traditional search engines use lexical retrieval, where they match words and phrases directly. The dominant algorithm, BM25, has powered solutions like Elasticsearch and similar systems for years. It’s also in use in today’s common search engines.

AI assistants rely on semantic retrieval. Instead of exact words, they map meaning through embeddings, the mathematical fingerprints of text. This lets them find conceptually related passages even when the exact words differ.

Each system makes different mistakes. Lexical retrieval misses synonyms. Semantic retrieval can connect unrelated ideas. But when combined, they produce better results.

Inside most hybrid retrieval systems, the two methods are fused using a rule called Reciprocal Rank Fusion (RRF). You don’t have to be able to run it, but understanding the concept helps you interpret what you’ll measure later.

RRF In Plain English

Hybrid retrieval merges multiple ranked lists into one balanced list. The math behind that fusion is RRF.

The formula is simple: score equals one divided by k plus rank. This is written as 1 ÷ (k + rank). If an item appears in several lists, you add those scores together.

Here, “rank” means the item’s position in that list, starting with 1 as the top. “k” is a constant that smooths the difference between top and mid-ranked items. Most systems typically use something near 60, but each may tune it differently.

It’s worth remembering that a vector model doesn’t rank results by counting word matches. It measures how close each document’s embedding is to the query’s embedding in multi-dimensional space. The system then sorts those similarity scores from highest to lowest, effectively creating a ranked list. It looks like a search engine ranking, but it’s driven by distance math, not term frequency.
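
Here is a tiny, self-contained illustration of that distance math, using made-up three-dimensional embeddings (real models use hundreds or thousands of dimensions). It ranks two documents by cosine similarity to a query, which is the sorting step described above.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: how closely two embedding vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

query = [0.1, 0.8, 0.3]  # toy embedding for the user's query
docs = {
    "doc_a": [0.2, 0.7, 0.4],  # close in meaning to the query
    "doc_b": [0.9, 0.1, 0.0],  # about something else entirely
}
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b'] (sorted by similarity, not word overlap)
```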

(Image Credit: Duane Forrester)

Let’s make it tangible with small numbers and two ranked lists. One from BM25 (keyword relevance) and one from a vector model (semantic relevance). We’ll use k = 10 for clarity.

Document A is ranked number 1 in BM25 and number 3 in the vector list.
From BM25: 1 ÷ (10 + 1) = 1 ÷ 11 = 0.0909.
From the vector list: 1 ÷ (10 + 3) = 1 ÷ 13 = 0.0769.
Add them together: 0.0909 + 0.0769 = 0.1678.

Document B is ranked number 2 in BM25 and number 1 in the vector list.
From BM25: 1 ÷ (10 + 2) = 1 ÷ 12 = 0.0833.
From the vector list: 1 ÷ (10 + 1) = 1 ÷ 11 = 0.0909.
Add them: 0.0833 + 0.0909 = 0.1742.

Document C is ranked number 3 in BM25 and number 2 in the vector list.
From BM25: 1 ÷ (10 + 3) = 1 ÷ 13 = 0.0769.
From the vector list: 1 ÷ (10 + 2) = 1 ÷ 12 = 0.0833.
Add them: 0.0769 + 0.0833 = 0.1602.

Document B wins here as it ranks high in both lists. If you raise k to 60, the differences shrink, producing a smoother, less top-heavy blend.

This example is purely illustrative. Every platform adjusts parameters differently, and no public documentation confirms which k values any engine uses. Think of it as an analogy for how multiple signals get averaged together.
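
For readers who want to verify the arithmetic, here is a short Python sketch of the same fusion. It is a generic RRF implementation under the article’s k = 10 assumption, not any platform’s actual code.

```python
def rrf_fuse(rankings, k=10):
    """Reciprocal Rank Fusion: each list contributes 1 / (k + rank) per document."""
    scores = {}
    for ranked_list in rankings:
        for rank, doc in enumerate(ranked_list, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

bm25_list = ["A", "B", "C"]    # keyword-relevance ranking from the example
vector_list = ["B", "C", "A"]  # semantic-relevance ranking from the example
for doc, score in rrf_fuse([bm25_list, vector_list]):
    print(doc, round(score, 4))
# B 0.1742, A 0.1678, C 0.1603 (the hand calculation's 0.1602 rounds each term first)
```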

Where This Math Actually Lives

You’ll never need to code it yourself, as RRF is already part of modern search stacks from the foundational providers. Reading through their documentation will give you a deeper understanding of how platforms like Perplexity do what they do.

These systems all follow the same basic process: Retrieve with BM25, retrieve with vectors, score with RRF, and merge. The math above explains the concept, not the literal formula inside every product.

Observing Hybrid Retrieval In The Wild

Marketers can’t see those internal lists, but we can observe how systems behave at the surface. The trick is comparing what Google ranks with what an assistant cites, then measuring overlap, novelty, and consistency. This external math is a heuristic, a proxy for visibility. It’s not the same math the platforms calculate internally.

Step 1. Gather The Data

Pick 10 queries that matter to your business.

For each query:

  1. Run it in Google Search and copy the top 10 organic URLs.
  2. Run it in an assistant that shows citations, such as Perplexity or ChatGPT Search, and copy every cited URL or domain.

Now you have two lists per query: Google Top 10 and Assistant Citations.

(Be aware that not every assistant shows full citations, and not every query triggers them. Some assistants may summarize without listing sources at all. When that happens, skip that query as it simply can’t be measured this way.)

Step 2. Count Three Things

  1. Intersection (I): how many URLs or domains appear in both lists.
  2. Novelty (N): how many assistant citations do not appear in Google’s top 10.
    If the assistant has six citations and three overlap, N = 6 − 3 = 3.
  3. Frequency (F): how often each domain appears across all 10 queries.

Step 3. Turn Counts Into Quick Metrics

For each query set:

Shared Visibility Rate (SVR) = I ÷ 10.
This measures how much of Google’s top 10 also appears in the assistant’s citations.

Unique Assistant Visibility Rate (UAVR) = N ÷ total assistant citations for that query.
This shows how much new material the assistant introduces.

Repeat Citation Count (RCC) = (sum of F for each domain) ÷ number of queries.
This reflects how consistently a domain is cited across different answers.

Example:

Google top 10 = 10 URLs. Assistant citations = 6. Three overlap.
I = 3, N = 3, F (for example.com) = 4 (appears in four assistant answers).
SVR = 3 ÷ 10 = 0.30.
UAVR = 3 ÷ 6 = 0.50.
RCC = 4 ÷ 10 = 0.40.

You now have a numeric snapshot of how closely assistants mirror or diverge from search.
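
If you track these numbers in a spreadsheet or script, the calculations are only a few lines. This sketch reuses the example’s counts; the variable names are mine, not part of any standard.

```python
google_top10 = 10      # URLs in Google's top 10
assistant_cites = 6    # URLs the assistant cited
intersection = 3       # I: URLs appearing in both lists
domain_citations = 4   # F for example.com: answers citing it across the query set
total_queries = 10

svr = intersection / google_top10                          # Shared Visibility Rate
uavr = (assistant_cites - intersection) / assistant_cites  # Unique Assistant Visibility Rate
rcc = domain_citations / total_queries                     # Repeat Citation Count

print(f"SVR={svr:.2f}, UAVR={uavr:.2f}, RCC={rcc:.2f}")  # SVR=0.30, UAVR=0.50, RCC=0.40
```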

Step 4. Interpret

These scores are not industry benchmarks by any means, simply suggested starting points for you. Feel free to adjust as you feel the need:

  • High SVR (> 0.6) means your content aligns with both systems. Lexical and semantic relevance are in sync.
  • Moderate SVR (0.3 – 0.6) with high RCC suggests your pages are semantically trusted but need clearer markup or stronger linking.
  • Low SVR (< 0.3) with high UAVR shows assistants trust other sources. That often signals structure or clarity issues.
  • High RCC for competitors indicates the model repeatedly cites their domains, so it’s worth studying for schema or content design cues.

Step 5. Act

If SVR is low, improve headings, clarity, and crawlability. If RCC is low for your brand, standardize author fields, schema, and timestamps. If UAVR is high, track those new domains as they may already hold semantic trust in your niche.

(This approach won’t always work exactly as outlined. Some assistants limit the number of citations or vary them regionally. Results can differ by geography and query type. Treat it as an observational exercise, not a rigid framework.)

Why This Math Is Important

This math gives marketers a way to quantify agreement and disagreement between two retrieval systems. It’s diagnostic math, not ranking math. It doesn’t tell you why the assistant chose a source; it tells you that it did, and how consistently.

That pattern is the visible edge of the invisible hybrid logic operating behind the scenes. Think of it like watching the weather by looking at tree movement. You’re not simulating the atmosphere, just reading its effects.

On-Page Work That Helps Hybrid Retrieval

Once you see how overlap and novelty play out, the next step is tightening structure and clarity.

  • Write in short claim-and-evidence blocks of 200-300 words.
  • Use clear headings, bullets, and stable anchors so BM25 can find exact terms.
  • Add structured data (FAQ, HowTo, Product, TechArticle) so vectors and assistants understand context.
  • Keep canonical URLs stable and timestamp content updates.
  • Publish canonical PDF versions for high-trust topics; assistants often cite fixed, verifiable formats first.

These steps support both crawlers and LLMs as they share the language of structure.

Reporting And Executive Framing

Executives don’t care about BM25 or embeddings nearly as much as they care about visibility and trust.

Your new metrics (SVR, UAVR, and RCC) can help translate the abstract into something measurable: how much of your existing SEO presence carries into AI discovery, and where competitors are cited instead.

Pair those findings with Search Console’s AI Mode performance totals, but remember: You can’t currently separate AI Mode data from regular web clicks, so treat any AI-specific estimate as directional, not definitive. It’s also worth noting that there may still be regional limits on data availability.

These limits don’t make the math less useful, however. They help keep expectations realistic while giving you a concrete way to talk about AI-driven visibility with leadership.

Summing Up

The gap between search and assistants isn’t a wall. It’s more of a signal difference. Search engines rank pages after the answer is known. Assistants retrieve chunks before the answer exists.

The math in this article is an idea of how to observe that transition without developer tools. It’s not the platform’s math; it’s a marketer’s proxy that helps make the invisible visible.

In the end, the fundamentals stay the same. You still optimize for clarity, structure, and authority.

Now you can measure how that authority travels between ranking systems and retrieval systems, and do it with realistic expectations.

That visibility, counted and contextualized, is how modern SEO stays anchored in reality.

This post was originally published on Duane Forrester Decodes


Featured Image: Roman Samborskyi/Shutterstock