7 SEO, Marketing, And Tech Predictions For 2026 via @sejournal, @Kevin_Indig

Previous predictions: 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024

This is my 8th time publishing annual predictions. As always, the goal is not to be right but to practice thinking.

For example, in 2018 I predicted that “Niche communities will be discovered as a great channel for growth” and that “Email marketing will return” in 2019. The latter took another six years. That same year, I also wrote “Smart speakers will become a viable user-acquisition channel in 2018.” Well…

All 2026 Predictions

  1. AI visibility tools face a reckoning.
  2. ChatGPT launches first quality update.
  3. Continued click-drops lead to a “Dark Web” defense.
  4. AI forces UGC platforms to separate feeds.
  5. ChatGPT’s ad platform provides “demand data.”
  6. Perplexity sells to xAI or Salesforce.
  7. Competition tanks Nvidia’s stock by 20%.


For the past three years, we have lived in the “generative era,” where AI could read the internet and summarize it for us. 2026 marks the beginning of the “agentic era,” where AI stops just consuming the web and starts writing to it – a shift from information retrieval to task execution.

This isn’t just a feature update; it is a fundamental restructuring of the digital economy. The web is bifurcating into two distinct layers:

  1. The Transactional Layer: Dominated by bots executing API calls and “Commercial Agents” (like Remarkable Alexa) that bypass the open web entirely.
  2. The Human Layer: Verified users and premium publishers retreating behind “Dark Web” blockades (paywalls, login gates, and C2PA encryption) to escape the sludge of AI content.

A big question mark is advertising, where Google’s expansion of ads into AI Mode and ChatGPT showing ads to free users could alleviate pressure on CPCs, but AI Overviews (AIOs) could drive them up. 2026 could be a year of wild price swings where smart teams (your “holistic pods”) move budget daily between Google (high cost/high intent) and ChatGPT (low cost/discovery) to exploit the spread.

It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change.

— Leon C. Megginson


SEO/AEO

AI Visibility Tools Face A Reckoning

Prediction: I forecast an “Extinction Event” in Q3 2026 for the standalone AI visibility tracking category. Rather than a simple consolidation, our analysis shows the majority of pure-play tracking startups might fold or sell for parts as their 2025 funding runways expire simultaneously without the revenue growth to justify Series B rounds.

Why:

  • Tracking is a feature, not a company. Amplitude built an AI tracker for free in three weeks, and legacy platforms like Semrush bundled it as a checkbox, effectively destroying the standalone business model.
  • Many tools have almost zero “customer voice” proof of concept (e.g., zero G2 reviews), creating a massive valuation bubble.
  • The ROI of AI visibility optimization is still unclear and hard to prove.

Context:

  • Roughly 20 companies raised over $220 million at high valuations. 73% of those companies were founded in 2024.
  • Adobe’s $1.9 billion acquisition of Semrush proves that value lies in platforms with distribution, not in isolated dashboards.

Consequences:

  • Smart money will flee “read-only” tools (dashboards) and rotate into “write-access” tools (agentic SEO) that can automatically ship content and fix issues.
  • There will be ~3 winners among standalone AI visibility trackers, on top of the established all-in-one platforms. Most of them will evolve into workflow automation, where most of the alpha is and where established platforms have not yet built features.
  • The remaining players will sell, consolidate, pivot, or shut down.
  • AI visibility tracking itself faces a crisis of (1) what to track and (2) how to influence the numbers, since a large part of impact comes from third-party sites.

ChatGPT Launches First Quality Update

Prediction: In 2026, it becomes harder for spammers to influence AI visibility with link spam, mass-generated AI content, and cloaking, as agents adopt Multi-Source Corroboration to close this loophole.

Why:

  • The fact that you can publish a listicle about top solutions on your site and name yourself first and influence AI visibility seems off.
  • New technology, like “ReliabilityRAG” or “Multi-Agent Debate,” where one AI agent retrieves the info and another agent acts as a “judge” to verify it against other sources before showing it to the user, is available.

Context:

  • Most current agents (like standard ChatGPT, Gemini, or Perplexity) use a process called Retrieval-Augmented Generation (RAG). But RAG is still susceptible to hallucination and making errors.
  • Spammers often target specific, low-volume queries (e.g., “best AI tool for underwater basket weaving”) because there is no competition. However, new “knowledge graph” integration allows AIs to infer that a basket-weaving tool shouldn’t be a crypto-scam site based on domain authority and topic relevance, even if it’s the only page on the internet with those keywords.

Consequences:

  • OpenAI engineers are likely already working on better quality filters.
  • LLMs will shift from pure retrieval to corroboration.
  • Spammers might move to more sophisticated tactics, where they try to manufacture the consensus by buying and using zombie media outlets, cloaking, and other malicious tactics.

Continued Click-Drops Lead To A “Dark Web” Defense

Prediction: AI Overviews (AIOs) scale to 75% of keywords for big sites. AI Mode rolls out to 10-20% of queries.

Why:

  • Google said they’re seeing more queries as a result of AIOs. The logical conclusion is to show even more AIOs.
  • CTR for organic search results had already tanked from 1.41% to 0.64% by January. Since January, paid CTR has dropped from 14.92% to 6.34% (down to roughly 42% of its previous level).

Context:

  • Big sites already see AIOs for ~50% of their keywords.
  • Google started testing ads in AI Mode. If successful, Google would feel more confident to roll out AI Mode more broadly, and the investor story would sound better.
  • 80% of consumers now use AI summaries for at least 40% of their searches, according to Bain.
  • 2025 saw a massive purge in digital media, with major layoffs at networks like NBC News, BBC, and tech publishers as they restructured for a “post-traffic” world.

Consequences:

  • Publishers monetize audiences directly instead of through ads and move to “experience-based” content (firsthand reviews, contrarian opinions, proprietary data) because AI cannot experience things. The space consolidates further (layoffs, acquisitions, Chapter 11 filings).
  • By 2026, we expect a massive wave of “LLM blockades.” Major publishers will update their robots.txt to block Google-Extended and GPTBot, forcing users to visit the site to see the answer. This creates a “Dark Web” of high-quality content that AI cannot see, bifurcating the internet into AI slop (free) and human insight (paid).
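As a reference point, such an “LLM blockade” amounts to a few lines of robots.txt. Both user-agent tokens below are the documented ones (Google-Extended for Google’s AI grounding/training, GPTBot for OpenAI’s crawler); this is a sketch, since each publisher’s actual policy will vary:

```
# Opt out of Google's AI training/grounding
# (Google-Extended does not affect normal Search indexing)
User-agent: Google-Extended
Disallow: /

# Block OpenAI's crawler
User-agent: GPTBot
Disallow: /
```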

Marketing

AI Forces UGC Platforms To Separate Feeds

Prediction: By 2026, “identity spoofing” will become the single largest cybersecurity risk for public companies. The question shifts from “Is this content real?” to “Is this source verified?”

Why:

  • Real influencers are risky (scandals, contract disputes). AI influencers are brand-safe assets that work 24/7/365 and never say anything controversial unless prompted. Brands will pay a premium to avoid humans.

Context:

  • Deepfake fraud attempts increased 257% in 2024. Most detection tools currently have a 20%+ false positive rate, making them hard to use for platforms like YouTube without killing legitimate creator reach.
  • Example: In 2024, the engineering firm Arup lost $25 million when an employee was tricked by a deepfake video conference call where the “CFO” and other colleagues were all AI simulations.
  • In May 2023, a fake AI image of an explosion at the Pentagon caused a momentary dip in the S&P 500.

Consequences:

  1. Cryptographic signatures (C2PA) become the only proof of reality for video.
  2. YouTube and LinkedIn will likely split feeds into “verified human” (requires ID + biometric scan) and “synthetic/unverified.”
  3. “Blue checks” won’t just be for status, but a security requirement to comment or post video, effectively ending anonymity for high-reach accounts.
  4. Platforms will be forced by regulators (EU AI Act, August 2026 deadline) to label AI content.
  5. Cameras (Sony, Canon) and iPhones will start embedding C2PA digital signatures at the hardware level. If a video lacks this “chain of custody” metadata, platforms will auto-label it as “unverified/synthetic.”

ChatGPT’s Ad Platform Provides “Demand Data”

Prediction: OpenAI shifts to a hybrid pricing model in 2026: An “ad-supported free tier” and “credit-based pro tier.”

Why:

  • Inference costs are skyrocketing. A heavy user paying $20/month can easily burn $100+ of computing, making them unprofitable.

Context:

  • Leaked code in the ChatGPT Android App (v1.2025.329) explicitly references “search ads carousel” and “bazaar content.”

Consequences:

  • Free users will see “sponsored citations” and product cards (ads) in their answers.
  • Power users will face “compute credits” – a base subscription gets you standard GPT-5, but heavy use of deep research or reasoning agents will require buying top-up packs.
  • We get a Search-Console style interface. Brands need data. If OpenAI wants to sell ads, it must give brands a dashboard showing, “Your product was recommended in 5,000 chats about running shoes.” The data will add fuel to the fire for AEO/GEO/LLMO/SEO.
  • The leaked term “bazaar content” suggests OpenAI might not just show ads, but allow transactions inside the chat (e.g., “Book this flight”) where they take a cut. This moves OpenAI from a software company to a marketplace (like the App Store), effectively competing with Amazon and Expedia.

Tech

Perplexity Sells To xAI Or Salesforce

Prediction: Perplexity will be acquired in late 2026 for $25-$30 billion. After its user growth plateaus at ~50 million MAU, the “unit economics wall” forces a sale to a giant that needs its technology (real-time RAG), not its business model.

Why:

  • In late 2025, Perplexity raised capital at a $20 billion valuation (roughly 100x its ~$200 million ARR). To justify this, they need Facebook-level growth. However, 2025 data shows they hit a ceiling at ~30 million users while ChatGPT surged to 800+ million.
  • By 2026, Google and OpenAI will have effectively cloned Perplexity’s core feature (Deep Research) and given it away for free.

Context:

  • While Perplexity grew 66% YoY in 2025 to ~30 million monthly active users (MAU), this pales in comparison to ChatGPT’s 800+ million.
  • It costs ~10x more to run a Perplexity deep search query than a standard Google search. Without a high-margin ad network (which takes a decade to build), they burn cash on every free user, creating a “negative scale” problem.
  • Salesforce acquired Informatica for ~$8 billion in 2025 specifically to power its Agentforce strategy. This proves Benioff is willing to spend billions to own the data layer for enterprise agents.
  • xAI raised over $20 billion in late 2025, valuing the company at $200 billion. Musk has the liquid cash to buy Perplexity tomorrow to fix Grok’s hallucination problems.

Consequences:

  • xAI has the cash, and Musk needs a “real-time truth engine” for Grok. Perplexity could make X (Twitter) a more powerful news engine. Grok (X’s current AI) learns from tweets, but Perplexity cites sources that can reduce hallucination. Perplexity could also give xAI a browser, bringing it closer to Musk’s vision of a super app.
  • Marc Benioff wants to own “enterprise search.” Imagine a Salesforce Agent that can search the entire public web (via Perplexity) + your private CRM data to write a perfect sales email.

Competition Tanks Nvidia’s Stock By 20%

Prediction: Nvidia stock will correct by >20% in 2026 as its largest customers successfully shift 15-20% of their workloads to custom internal silicon. This causes a P/E compression from ~45x to ~30x as the market realizes Nvidia is no longer a monopoly, but a “competitor” in a commoditized market. (Not investment advice!)

Why:

  • Microsoft, Meta, Google, and Amazon likely account for over 40% of Nvidia’s revenue. For them, Nvidia is a tax on their margins. They are currently spending ~$300 billion combined on CAPEX in 2025, but a growing portion is now allocated to their own chip supply chains rather than Nvidia H100s/Blackwells.
  • Hyperscalers don’t need chips that beat Nvidia on raw specs; they just need chips that are “good enough” for internal inference (running models), which accounts for 80-90% of compute demand.

Context:

  • In late 2025, reports surfaced that Meta was negotiating to buy/rent Google’s TPU v6 (Trillium) chips to reduce its reliance on Nvidia.
  • AWS Trainium 2 & 3 chips are reportedly 30-50% cheaper to operate than Nvidia H100s for specific workloads. Amazon is aggressively pushing these cheaper instances to startups to lock them into the AWS silicon ecosystem.
  • Microsoft’s Maia 100 is now actively handling internal Azure OpenAI workloads. Every workload shifted to Maia is an H100 Nvidia didn’t sell.
  • Reports confirm OpenAI is partnering with Broadcom to mass-produce its own custom AI inference chip in 2026, directly attacking Nvidia’s dominance in the “Model Serving” market.
  • Fun fact: Without Nvidia, the S&P 500 would have returned 3 percentage points less in 2025.

Consequences:

  • Nvidia will react by refusing to sell just chips. They will push the GB200 NVL72 – a massive, liquid-cooled supercomputer rack that costs millions. This forces customers to buy the entire Nvidia ecosystem (networking, cooling, CPUs), making it physically impossible to swap in a Google TPU or Amazon chip later.
  • If hyperscalers signal even a 5% cut in Nvidia orders to favor their own chips, Wall Street will panic-sell, fearing the peak of the AI Infrastructure Cycle has passed.

Featured Image: Paulo Bobita/Search Engine Journal

The Search Equity Gap: Quantifying Lost Organic Market Share (And Winning It Back) via @sejournal, @billhunt

Every month, companies lose millions in unrealized search value not because their teams stopped optimizing, but because they stopped seeing where visibility converts into economic return.

When search performance drops, most teams chase rankings. The real leaders chase equity.

This is the Search Equity Gap – the measurable delta between the organic market share your brand once held and what it holds today.

In most organizations, this gap isn’t tracked or budgeted for. Yet it represents one of the most consistent and compounding forms of digital opportunity cost. Every unclaimed click isn’t just lost traffic; it’s lost demand at the lowest acquisition cost possible – an invisible tax on growth.

When we treat SEO as a channel, we chase traffic.

When we treat it as an equity engine, we reclaim value.

Search Equity: The Compounding Value Of Discoverability

Search equity is the accumulated advantage your brand earns when visibility, authority, and user trust align. Like financial equity, it compounds over time – links build reputation, content earns citations, and user engagement reinforces relevance.

But the opposite is also true: When migrations break URLs, when content fragments across markets, or when AI overviews intercept clicks, that equity erodes.

And that’s usually the moment when management suddenly discovers the value of organic search – right after it vanishes.

What was once dismissed as “free traffic” becomes an expensive emergency as other channels scramble to compensate for the lost opportunity. Paid budgets balloon, acquisition costs spike, and leadership learns that SEO isn’t a faucet you can turn back on.

Search equity isn’t just about rankings. It’s about discoverability at scale – ensuring your brand appears, is understood, and is chosen in every relevant search context, from classic results to AI-generated overviews.

In this new environment, visibility without qualification is meaningless. A million impressions that never convert are not an asset. The opportunity lies in reclaiming qualified visibility – the type that drives revenue, reduces acquisition costs, and compounds shareholder value.

Diagnosing The Decline: Where Search Equity Disappears

Every SEO audit can uncover technical or content issues. But the deeper cause of declining performance often stems from three systemic leaks.

1. Structural Leaks

Migrations, redesigns, and rebrands remain the biggest equity destroyers in enterprise SEO. When URLs change without proper mapping, Google’s understanding of authority resets. Internal link equity splinters. Canonical signals conflict.

Each broken or redirected page acts like a severed artery in your digital system – small losses multiplied at scale. What seems like a simple platform refresh can erase years of accumulated search trust.

2. Behavioral Shifts

Even when nothing changes internally, the ecosystem around you continues to evolve. Zero-click results, AI Overviews, and new answer formats siphon attention. Search visibility remains, but user behavior no longer translates into traffic.

The new challenge isn’t “ranking first.” It’s being chosen when the user’s question is answered before they click. This demands a shift from keyword optimization to intent satisfaction and requires restructuring your content, data, and experience for discoverability and decision influence.

3. Organizational Drift

Perhaps the most corrosive leak of all: misalignment. When SEO sits in marketing, IT in technology, and analytics in finance, nobody owns the whole system.

Executives fund rebrands that destroy crawl efficiency. Paid teams buy traffic that good content could have earned. Each department optimizes its own key performance indicator (KPI), and in doing so, the organization loses cohesion. Search equity collapses not because of algorithms, but because of organizational architecture. The fix starts at the top.

Quantifying The Search Equity Gap (Actuals-Based Model)

Most companies estimate what they should earn in search and compare it to current performance. But in volatile, AI-driven SERPs, real performance deltas tell the truer story.

Instead of modeling potential, this approach uses before-and-after data – actual performance metrics from both pre-impact and current states. By doing so, you measure realized loss, click erosion, and intent displacement with precision.

Search Equity Gap = Lost Qualified Traffic + Lost Discoverability + Lost Intent Coverage

Step 1: Establish A Baseline (Pre-Impact Period)

Pull your data from a stable window before the event (typically three to six months prior).

From Google Search Console and analytics, extract:

  • Top performing queries (impressions, clicks, CTR, position).
  • Top landing pages and their mapped queries.
  • Conversion or value proxies where available.

This becomes your search equity portfolio – the measurable value of your earned discoverability.

Step 2: Compare To The Current State (Post-Impact)

Run the same data for the current period and align query-to-page pairs.

Then classify each outcome:

Equity Status | Definition | Typical Cause | Recovery Outlook
Lost Equity | Queries or pages no longer ranking or receiving traffic | Migration, technical, cannibalization | High (fixable)
Eroded Equity | Still ranking, but dropped positions or CTR | Content fatigue, new competitors, UX decay | Moderate (recoverable)
Reclassified Equity | Still visible but replaced or suppressed by AI Overviews, zero-click blocks, or SERP features | Algorithmic change/behavioral shift | Low-Moderate (influence possible)

This comparison reveals both visibility loss and click erosion, clarifying where and why your equity declined.
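The three-way classification above can be sketched as a small function. The field names and thresholds here are illustrative assumptions, not values from the article:

```python
def classify_equity(pre, post):
    """Classify a query/page pair by comparing pre- and post-impact metrics.

    `pre`/`post` are dicts with keys 'position' (average rank, None if no
    longer ranking), 'clicks', and 'impressions'. Field names and the 0.8/0.5
    thresholds are illustrative assumptions, not values from the article.
    """
    # Lost Equity: no longer ranking or receiving traffic at all.
    if post["position"] is None or post["clicks"] == 0:
        return "Lost"
    # Reclassified Equity: still visible (impressions held up) but clicks
    # collapsed -- the signature of AI Overviews / zero-click SERP features.
    if post["impressions"] >= 0.8 * pre["impressions"] and post["clicks"] < 0.5 * pre["clicks"]:
        return "Reclassified"
    # Eroded Equity: still ranking, but position or clicks slipped.
    if post["position"] > pre["position"] or post["clicks"] < pre["clicks"]:
        return "Eroded"
    return "Stable"

pair_pre = {"position": 3.2, "clicks": 1200, "impressions": 40000}
pair_post = {"position": 3.5, "clicks": 450, "impressions": 41000}
print(classify_equity(pair_pre, pair_post))  # Reclassified
```

In practice, the query-to-page pairs would come from two Google Search Console exports aligned on (query, page), one per period.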

Step 3: Attribute The Loss

Link each pattern to its primary driver:

  1. Structural – Indexation, redirects, broken templates.
  2. Content – Thin, outdated, or unstructured pages lacking E-E-A-T.
  3. SERP Format – AI overviews, videos, or answer boxes replacing classic results.
  4. Competitive – New entrants or aggressive refresh cycles.

These map to equity types:

  • Recoverable Equity: technical or content improvements.
  • Influence Equity: optimizing brand/entity visibility within AI Overviews.
  • Retired Equity: informational queries no longer yielding clicks.

This triage converts diagnosis into a prioritized investment plan.

Step 4: Quantify The Economic Impact

For each equity type, calculate:

Lost Value = Δ Clicks × Conversion Rate × Value per Conversion

Add a Paid Substitution Cost to translate organic loss into a financial figure:

Cost of Not Ranking = Lost Clicks × Avg CPC

This ties the forensic analysis directly to your legacy framework, which I define as The Cost of Not Ranking, and shows executives the tangible price of underperformance.

Example:

  • 15,000 fewer monthly clicks on high-intent queries.
  • 3% conversion × $120 avg order value = $54,000/month in unrealized value.
  • CPC $3.10 → ~$46,500/month to replace via paid.

Now your analysis quantifies both organic value lost and capital inefficiency created.
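The Step 4 formulas and the example numbers above can be reproduced in a few lines of Python (the function names are mine, not from the article):

```python
def lost_value(delta_clicks, conversion_rate, value_per_conversion):
    # Lost Value = Δ Clicks × Conversion Rate × Value per Conversion
    return delta_clicks * conversion_rate * value_per_conversion

def cost_of_not_ranking(lost_clicks, avg_cpc):
    # Cost of Not Ranking = Lost Clicks × Avg CPC
    return lost_clicks * avg_cpc

# The article's example: 15,000 fewer monthly clicks on high-intent queries
print(lost_value(15_000, 0.03, 120))             # 54000.0 -> $54,000/month unrealized
print(round(cost_of_not_ranking(15_000, 3.10)))  # 46500 -> ~$46,500/month via paid
```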

Step 5: Separate The Signal From The Noise

Not all loss deserves recovery. Patterns surface quickly:

  • High-volume informational pages: visibility stable, clicks down – reclassified (low ROI).
  • Product or service pages: dropped due to structural issues – recoverable (high ROI).
  • Brand or review pages: replaced by AI summaries – influence (medium ROI).

Plot these on a Search Equity Impact Matrix – potential value vs. effort – to direct resources toward recoverable, high-margin opportunities.

Why This Matters

Most SEO reports describe position snapshots. Few reveal equity trajectories. By grounding analysis in actuals before and after impact, you replace speculation with measurable evidence that data executives can trust. This reframes search optimization as loss prevention and value recovery, not traffic chasing.

From Visibility Metrics To Value Metrics

Traditional metrics focus on activity:

  • Average ranking position.
  • Total impressions.
  • Organic sessions.

Value-based metrics focus on performance and economics:

  • Qualified Visibility Share (discoverability within high-intent categories).
  • Recovered Revenue Potential (modeled from Δ Clicks × Value).
  • Digital Cost of Capital (what it costs to replace that traffic via paid).

Integrating your Cost of Not Ranking logic further amplifies this.

Every click you have to buy is a symptom of a ranking you didn’t earn.

By comparing your paid and organic data for the same query set, you can see how much budget is compensating for lost equity and how much could be redeployed if organic recovery occurred.

When teams present SEO performance in these financial terms, they gain executive attention and budget alignment.

Example:

“Replacing lost organic share with paid clicks costs $480,000 per quarter. Fixing canonical and internal-link issues can recover 70% of that value within 90 days.”

That’s not an SEO report. That’s a business case for digital capital recovery.

Winning It Back: A Framework For Recovery

Search equity recovery follows the same progression as digital value creation – diagnose, quantify, prioritize, and institutionalize.

1. Discover The Gap

Compare actual performance pre- and post-impact. Visualize equity at risk by category or market.

2. Diagnose The Cause

Layer crawl data, analytics, and competitive intelligence to isolate technical, behavioral, and AI factors.

3. Differentiate

Focus on qualified clicks from mid- and late-funnel intents where AI summaries mention your brand but don’t link to you.

Answer those queries more directly. Reinforce them with structured data and content relationships that signal expertise and trust.

4. Reinforce

Embed SEO governance into development, design, and content workflows. Optimization becomes a process, not a project – or, as I’ve written before, infrastructure, not tactic. When governance becomes muscle memory, equity doesn’t just recover; it compounds.

From Cost Center To Compounding Asset

Executives often ask:

“How much revenue does SEO drive?”

The better question is:

“How much value are we losing by not treating search as infrastructure?”

The search equity gap quantifies that blind spot. It reframes SEO from a cost-justified marketing function into a value-restoration system – one that preserves and grows digital capital over time. Each recovered visit is a visit you no longer need to buy. Each resolved structural issue accelerates time-to-value for every future campaign.

Ironically, the surest way to make executives appreciate SEO is to let it break once. Nothing clarifies its importance faster than the sound of paid budgets doubling to make up for “free” traffic that suddenly disappeared. That’s how SEO evolves from an acquisition channel to a shareholder-value lever.

Final Thought

The companies dominating search today aren’t publishing more content – they’re protecting and compounding their equity more effectively.

They’ve built digital balance sheets that grow through governance, not guesswork. The rest are still chasing algorithm updates while silently losing market share in the one channel that could deliver the highest margin growth.

The search equity gap isn’t a ranking problem. It’s a visibility-to-value disconnect, and closing it starts by measuring what most teams never even notice.

Featured Image: N Universe/Shutterstock

How To Maximize Paid Ads Profitability With A Strategic Landing Page Audit

Your campaigns are only as strong as the pages they lead to. You can have the most targeted ads, the sharpest copy, and a budget that makes your CFO nervous. But if your landing page doesn’t deliver on what the ad promised, you’re leaving money on the table and feeding poor signals back into your campaign algorithms.

Landing pages are where intent meets experience. When they align, conversion rates increase. When they don’t, even high-quality traffic bounces, and your cost-per-acquisition (CPA) spirals upward.

This post walks through the core elements of a high-performing landing page strategy. This strategy is one that not only converts visitors, but also strengthens your ad campaigns. Whether you’re running Google Ads or Meta campaigns, these landing page strategies apply.

Why A Landing Page Audit Matters To Advertisers

Most advertisers focus heavily on the ad itself: the creative, the targeting, the bid strategy. That makes sense. But the landing page is where the actual conversion happens. It’s the final step in the funnel, and it has a direct impact on campaign performance.

Here’s why landing page audits should be a regular part of your paid media workflow:

Better Landing Page Conversion Rates Mean Lower CPAs

When more visitors convert, your cost per conversion drops. That gives you more room to scale or reinvest budget into other channels.

Stronger Signals Improve Algorithm Performance

Platforms like Google and Meta rely on conversion data to optimize your campaigns. If your landing page isn’t converting, the algorithm receives weak or misleading signals, which limits its ability to find high-intent users.

User Experience On The Landing Page Influences Quality Score

Google rewards landing pages that are relevant, fast, and user-friendly. A higher quality score can lower your cost-per-click (CPC) and improve ad placement.

In short, your landing page isn’t just a conversion tool. It’s a feedback loop that shapes how well your campaigns perform over time.

Audit Point 1: Deliver On Intent And Relevance

The first rule of landing page optimization is simple: Match the message.

If your ad promises “free shipping on running shoes,” your landing page should immediately confirm that offer. If the ad targets “B2B marketing automation tools,” the page should speak directly to that audience and use case.

Message match builds trust. When a visitor clicks an ad and lands on a page that looks, feels, and sounds different, they bounce. Fast.

Here’s how to ensure relevance:

  • Mirror your ad copy. Use the same language, tone, and offer in your headline and subheading. If the ad says, “Save 20% on winter gear,” the landing page headline should reinforce that exact promise.
  • Align visuals with the ad creative. If your ad shows a specific product or service, feature it prominently on the landing page. Consistency across creative and page design reduces cognitive load.
  • Match the user’s stage in the journey. A top-of-funnel awareness ad should lead to educational content, not a hard sell. A retargeting ad for cart abandoners should take them straight to checkout.

The fewer mental leaps a visitor has to make, the more likely they are to convert.

Audit Point 2: Use Your CTAs Effectively

Your call-to-action (CTA) is the most important element on the page. It’s where intent turns into action.

But too many landing pages bury the CTA, use vague language, or overwhelm visitors with multiple competing actions. That creates friction and kills conversions.

Here’s how to get CTAs right:

  • Be specific and action-oriented. “Get Started” is vague. “Start Your Free Trial” or “Download the Guide” tells the visitor exactly what happens next.
  • Apply contrasting colors. You want your CTA button to stand out from the rest of the page. High contrast draws the eye and signals importance.
  • Limit choices. Every additional option on the page reduces the likelihood of conversion. Remove navigation menus, sidebars, and secondary CTAs that distract from your primary goal.
  • Test button copy. Small changes in wording can have a big impact. “Claim Your Discount” might outperform “Shop Now” for a price-sensitive audience.

Your CTA should feel like the natural next step, not a sales pitch.

Example: Zoho CRM’s Landing Page

Zoho CRM’s website is an excellent example of a landing page leveraging these points:

Specific offer: The header “Get started with your 15-day free trial” is highly specific, clarifying the duration and type of offer, addressing the vagueness of a simple “Get Started.”

Visual contrast: The primary CTA button, “GET STARTED,” is a high-contrast, bright red that immediately draws the eye away from the surrounding white and blue elements.

Action-oriented copy: While the button copy is “GET STARTED,” the text immediately below it clarifies the action as a free trial sign-up, maintaining clarity. Furthermore, the page limits distractions, focusing the user on the single action of signing up for the trial.

This approach effectively guides the user toward the intended conversion.

Screenshot of Zoho CRM, November 2025

Audit Point 3: Use Imagery That Supports Your Message

Visuals aren’t just decoration. They communicate value, build trust, and guide the visitor’s attention.

The right images can make your offer feel tangible and desirable. The wrong ones create confusion or undermine credibility.

Here’s what works:

  • Show the product or outcome. If you’re selling software, show the interface in action. If you’re promoting a service, show the results or benefits your customers experience.
  • Use real people, not stock photos. Authentic imagery builds trust. Generic stock photos do the opposite. If you’re featuring testimonials or case studies, include real customer photos whenever possible.
  • Optimize for mobile. Images should load quickly and display properly on all devices. Slow load times can increase bounce rates and hurt quality scores.
  • Avoid clutter. Every visual element should have a purpose. If an image doesn’t reinforce your message or guide the visitor toward the CTA, remove it.

Strong visuals support your copy. They don’t compete with it.

Example: Superside’s Graphic Design Services

Superside’s landing page demonstrates how a portfolio of images can support the message that the company can handle diverse creative needs for clients across different industries:

Show the outcome: Instead of a single generic image, the page prominently features a collage of actual client deliverables (app interfaces, product packaging, social media graphics) for brands like Amazon, Reddit, and Zapier. This directly illustrates the quality and range of the service’s outcome.

Communicate value and trust: By showing recognized brand logos and diverse project types, the imagery instantly builds credibility and reinforces the claim that they can “Scale your in-house creative team with top global talent.”

Avoid clutter (in context): While it’s a collage, the consistent presentation style and the grouping of images in a grid are purposefully designed to communicate a broad portfolio quickly, which directly reinforces the main headline: “Your creative team’s creative team.”

This strategy uses visuals to provide immediate, tangible proof of the service’s capability.

landing page visuals example
Screenshot of Superside, November 2025

Audit Point 4: Clearly Answer: “Why Choose You?”

Your landing page needs to answer one critical question: Why should I choose you over the competition?

This is where you articulate your unique value proposition (UVP). It’s not just about listing features. It’s about showing how your product or service solves a specific problem better than the alternatives.

Here’s how to communicate your UVP effectively:

  • Lead with the benefit, not the feature. “24/7 customer support” is a feature. “Get help anytime, without waiting” is a benefit.
  • Address objections upfront. If price is a concern, highlight flexible payment options. If trust is an issue, showcase security certifications or money-back guarantees.
  • Differentiate yourself. What makes your offer unique? Is it faster, easier, more affordable, or more comprehensive? Make that distinction clear.

Your UVP should be immediately visible, ideally above the fold. If a visitor has to scroll to understand what you’re offering, you’ve already lost some of them.

Audit Point 5: Leverage A Variety Of Social Proof

Social proof reduces risk. It shows visitors that other people (ideally, people like them) have chosen your product or service and been satisfied.

But not all social proof is created equal. The key is to use a mix of formats and place them strategically throughout the page.

Here are the most effective types of social proof to look for when you are doing a landing page audit:

Customer Testimonials

Short, specific quotes from real customers carry more weight than generic praise. Include the customer’s name, title, and company (if B2B) to increase credibility.

Case Studies Or Results

“We increased conversions by 30%” is more compelling than “Great service!” Quantifiable outcomes resonate, especially with data-driven buyers.

Logos Of Recognizable Clients Or Partners 

If well-known brands use your product, feature their logos. Recognition builds instant trust.

Ratings And Reviews

Aggregate ratings (e.g., “4.8/5 stars from 1,200+ customers”) provide quick validation. Link to third-party review sites like G2, Trustpilot, or Capterra for added credibility.

Trust Badges And Certifications

Security seals, industry certifications, and compliance badges (e.g., SOC 2, GDPR) that are visible on landing pages reassure visitors that their data is safe.

Place social proof near your CTA. That’s where hesitation peaks, and reassurance matters most.

Example: Reddit Ads’ Landing Page

The Reddit Ads landing page demonstrates the effective use of logos of recognizable clients or partners to build instant trust and social proof:

Client credibility: At the bottom of the page, a prominent line reads, “Trusted businesses across all industries and sizes use Reddit Ads to meet their goals.” This statement is immediately backed up by a scrolling horizontal display of recognizable brand logos, including Mars, GameStop, Capital One, and Maybelline.

Instant trust: For a potential advertiser, seeing global, established brands using the platform reduces the perceived risk of signing up. If major companies trust Reddit Ads with their budget, a new user can be reassured the platform is legitimate and effective.

Strategic placement: The logo section is placed below the main registration form and the audience exploration tool, providing reinforcement just before a user might scroll away or hesitate. It offers a final, compelling piece of proof that supports the core message of reaching a “niche audience.”

This visual list of successful clients serves as powerful validation for the service.

Reddit Ads landing page showing social proof
Screenshot of Reddit Ads, November 2025

Audit Point 6: Ensure Strong Technical Performance And Responsive Design

A beautiful landing page means nothing if it doesn’t load quickly or breaks on mobile devices.

Technical performance directly impacts conversion rates and campaign quality scores. Google prioritizes fast, mobile-friendly pages, and visitors abandon slow-loading sites within seconds: 53% of visits are likely to be abandoned if pages take longer than three seconds to load.

Here’s what to audit:

  • Page speed. Use tools like Google PageSpeed Insights or GTmetrix to measure load times. Aim for a load time under three seconds. Compress images, minimize code, and leverage browser caching to improve speed.
  • Mobile responsiveness. 41% of all web traffic comes from mobile devices. Your landing page should look and function perfectly on smartphones and tablets. Test across multiple devices and screen sizes.
  • Forms and functionality. If your CTA involves filling out a form, make sure it works. Test every field, button, and error message. Reduce the number of required fields to minimize friction.
  • Browser compatibility. Your page should render correctly in all major browsers (Chrome, Safari, Firefox, Edge). Cross-browser testing tools like BrowserStack can help identify issues.

Technical problems aren’t just annoying. They cost you conversions and damage your campaign performance.

Audit Point 7: Strategically Place Your CTAs

Where you place your CTA matters just as much as what it says.

Most landing pages include a primary CTA above the fold, and that’s a good start. But high-converting pages use multiple CTAs placed at natural decision points throughout the page.

Here’s a strategic approach:

  • Above the fold. This is your first opportunity to convert visitors who are ready to act immediately. Make it prominent and impossible to miss.
  • After explaining value. Once you’ve outlined your UVP and key benefits, offer another CTA. This targets visitors who need a bit more context before committing.
  • After social proof. Testimonials and case studies reduce hesitation. Follow them with a CTA to capture visitors who’ve just been reassured.
  • At the bottom of the page. For visitors who scroll through all your content, include a final CTA. By this point, they’ve consumed everything you’ve shared and are ready to decide.

Each CTA should feel contextual, not pushy. It should align with where the visitor is in their journey down the page.

Conclusion: Making Your Landing Page Audit A Habit

Your landing page isn’t just a conversion tool. It’s a data generator.

Every click, scroll, and form submission sends signals back to your ad platform. These signals teach the algorithm which audiences convert, which creatives work, and how to allocate budget more efficiently.

When your landing page converts well, those signals are strong and accurate. The algorithm learns faster and optimizes better. When your landing page underperforms, the data becomes noisy. The algorithm struggles to find patterns, and your campaigns stagnate.

This is why landing page audits are essential. A small improvement in conversion rate doesn’t just boost revenue. It improves the quality of data feeding back into your campaigns, creating a compounding effect over time.

Start by identifying your lowest-performing landing pages. Run A/B tests on headlines, CTAs, and imagery. Measure the impact not just on conversions, but on downstream metrics like CPA, return on ad spend (ROAS), and customer lifetime value (LTV).

The better your landing pages perform, the smarter your campaigns become.

Featured Image: one photo/Shutterstock

New to AI Brand Insights: Scan your brand visibility in Perplexity

Today, we’re rolling out an improvement to Yoast AI Brand Insights, part of the Yoast SEO AI+ package. You can now scan how your brand appears in answers generated by Perplexity, in addition to ChatGPT, at no extra cost. This builds on our mission to help marketers, bloggers, and business owners understand how their brand is represented across major AI platforms.

AI-powered answers are fast becoming a new gateway for discovery. People increasingly turn to AI tools to research, compare, and choose products or services. Those answers often mention brands as recommendations or sources. When someone asks a question in your niche, you should be able to see whether your brand is part of the conversation.

This update makes that possible across more platforms. 

AI Brand Insights now lets you see when and how your brand appears in AI-generated answers for relevant, search-style queries. You can track sentiment and compare your visibility to competitors. By adding support for Perplexity, you get a broader view of how AI systems describe your brand and which sources they rely on, helping you stay visible and confidently represented in AI-driven discovery.

What’s new 

You can now:

  • Run brand visibility scans in Perplexity
  • Compare how ChatGPT and Perplexity talk about your brand
  • Track mentions, sentiment, and citations across both platforms
  • Monitor changes over time in your AI Visibility Index 

Nothing else changes in your workflow. The next time you log in, you’ll see a visual notification guiding you to run your first Perplexity scan. 

Why this matters 

Understanding how AI answers present your brand helps you move beyond guesswork and see the tone, accuracy, and sources AI chooses when mentioning you. With more customers relying on AI-powered explanations than ever, visibility in these answers is now an important part of brand discovery and trust building.

How to try it 

Log in through MyYoast, open AI Brand Insights, and run your next scan. Your dashboard now includes results from Perplexity alongside ChatGPT. This gives you a fuller, more accurate view of your brand’s presence in AI-generated answers.

If you’re already using Yoast SEO AI+, this enhancement is available to you immediately. If you’re not, upgrading gives you access to this feature along with a complete set of tools for brand visibility, AI insights, and on page SEO.

The Download: AI’s impact on the economy, and DeepSeek strikes again

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: Welcome to the economic singularity

—David Rotman and Richard Waters

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. At the other extreme, most companies are seeing little if any benefit from their initial investments. 

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business. To students of tech history, though, the lack of immediate impact is normal. Read the full story.

If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside our editor in chief, Mat Honan, for an exclusive conversation digging into what’s happening across different markets live on Tuesday, December 9 at 1pm ET.  Register here

The State of AI is our subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Sign up to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepSeek has unveiled two new experimental AI models 
DeepSeek-V3.2 is designed to match the reasoning capabilities of OpenAI’s GPT-5. (Bloomberg $)
+ Here’s how DeepSeek slashes its models’ computational burden. (VentureBeat)
+ It’s achieved these results despite its limited access to powerful chips. (SCMP $)

2 OpenAI has issued a “code red” warning to its employees
It’s a call to arms to improve ChatGPT, or risk being overtaken. (The Information $)
+ Both Google and Anthropic are snapping at OpenAI’s heels. (FT $)
+ Advertising and other initiatives will be pushed back to accommodate the new focus. (WSJ $)

3 How to know when the AI bubble has burst
These are the signs to look out for. (Economist $)
+ Things could get a whole lot worse for the economy if and when it pops. (Axios)
+ We don’t really know how the AI investment surge is being financed. (The Guardian)

4 Some US states are making it illegal for AI to discriminate against you
California is the latest to give workers more power to fight algorithms. (WP $)

5 This AI startup is working on a post-transformer future
Transformer architecture underpins the current AI boom—but Pathway is developing something new. (WSJ $)
+ What the next frontier of AI could look like. (IEEE Spectrum)

6 India is demanding smartphone makers install a government app
Which privacy advocates say is unacceptable snooping. (FT $)
+ India’s tech talent is looking for opportunities outside the US. (Rest of World)

7 College students are desperate to sign up for AI majors
AI is now the second-largest major at MIT behind computer science. (NYT $)
+ AI’s giants want to take over the classroom. (MIT Technology Review)

8 America’s musical heritage is at serious risk
Much of it is stored on studio tapes, which are deteriorating over time. (NYT $)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 Celebrities are increasingly turning on AI
That doesn’t stop fans from casting them in slop videos anyway. (The Verge)

10 Samsung has revealed its first tri-folding phone
But will people actually want to buy it? (Bloomberg $)
+ It’ll cost more than $2,000 when it goes on sale in South Korea. (Reuters)

Quote of the day

“The Chinese will not pause. They will take over.”

—Michael Lohscheller, chief executive of Swedish electric car maker Polestar, tells the Guardian why Europe should stick to its plan to ban the production of new petrol and diesel cars by 2035. 

One more thing

Inside Amsterdam’s high-stakes experiment to create fair welfare AI

Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed these emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still not fair and effective. Why?

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.

—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Hear me out: a truly great festive film doesn’t need to be about Christmas at all.
+ Maybe we should judge a book by its cover after all.
+ Happy birthday to Ms Britney Spears, still the princess of pop at 44!
+ The fascinating psychology behind why we love travelling so much.

Tools to Track GenAI Citations, Sources

Generative AI platforms increasingly conduct live web searches to respond to users’ prompts. The platforms don’t reveal how or where they search, but it’s likely a combination of Google, Bing, and the platforms’ own bots.

Just a few months ago, those answers would have relied primarily on existing training data.

Regardless, understanding how AI platforms conduct the searches is key to optimizing visibility in the answers.

Analyze:

  • Which web pages produce the genAI answers? Try to appear in those pages.
  • Which brands and products influenced an answer? Are they competitors?

Here are three tools to help reveal impactful pages and influential brands and products.

ChatGPT Path

ChatGPT Path from Ayima, an agency, is a free Chrome extension that extracts citations, brands and products (entities), and fan-out queries from any ChatGPT dialog. Download the extension and converse with ChatGPT. Then click the extension icon to open a side panel with the key info, called RAG Sources (“Retrieval‑Augmented Generation”).

Export the report via CSV for easier analysis.
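If you want to aggregate an exported report, a short script can count how often each domain is cited. This is a sketch under assumptions: the `URL` column name and the sample rows below are hypothetical, so adjust them to match the headers in your actual ChatGPT Path export.

```python
# Count how often each domain appears in a CSV export of RAG sources.
# The "URL" column name is an assumption -- check your export's headers.
import csv
import io
from collections import Counter
from urllib.parse import urlparse

def count_source_domains(csv_text: str, url_column: str = "URL") -> Counter:
    """Return a Counter mapping each cited domain to its citation count."""
    reader = csv.DictReader(io.StringIO(csv_text))
    domains = Counter()
    for row in reader:
        url = (row.get(url_column) or "").strip()
        if url:
            domains[urlparse(url).netloc] += 1
    return domains

# Small inline sample standing in for a real export:
sample = """URL,Title
https://runrepeat.com/best-rain-shoes,Best rain shoes
https://www.nike.com/running,Nike running
https://runrepeat.com/reviews,Reviews
"""
print(count_source_domains(sample).most_common(2))
# → [('runrepeat.com', 2), ('www.nike.com', 1)]
```

Sorting by frequency like this surfaces the pages and domains that repeatedly shape answers, which is exactly what you want to target.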


ChatGPT Path extracts citations, brands and products, and fan-out queries from any ChatGPT dialog, such as this example for “help me choose running shoes for rainy weather.”

AI Search Impact Analysis

AI Search Impact Analysis is another free Chrome extension that analyzes multiple queries on Google AI Overviews.

Install the extension and type your comma-separated queries into the tool’s sidebar. The tool will run each search and identify AI Overviews and the queries that triggered them.

A separate “Citation Report” includes all URLs cited in each Overview and overall for all queries. In my testing, this feature was handy for identifying URLs cited repeatedly.

The extension’s “Brand Check” analyzes mentions of your company and competitors in Overviews.


“Brand Check” analyzes Overviews for mentions of your company and competitors, such as “nike” and “hoka” shown here.

Peec AI

Peec AI is a premium analytics tool for sources and brand mentions in ChatGPT, Perplexity, and AI Overviews.

To use it, enter your brand and targeted prompts. After a few minutes, the tool will create a detailed report, listing:

  • Domains cited in genAI answers for those prompts,
  • URLs linked in the answers.

The report categorizes cited domains by type (e.g., corporate, brand-owned, user-generated) and frequency (to know a domain’s impact on a cluster of answers).

A separate aggregated report combines all genAI platforms, with URL filters for each one. The “Gap analysis” lists cited URLs that mention competing brands but not yours.

Finally, Peec AI analyzes all entered prompts and lists the most-cited brands to compare and track against your own.


Peec AI’s report categorizes cited domains by type and frequency.

YouTube Launches First Annual Recap Feature For All Users via @sejournal, @MattGSouthern

YouTube launched its first annual Recap feature, providing users with a personalized summary of their viewing activity throughout the year.

The feature is available starting today for users in North America and will roll out globally throughout the week, according to the YouTube Blog.

What’s New

YouTube Recap is accessible from the homepage or under the “You” tab on mobile and desktop. The feature generates up to 12 cards based on watch history, displaying top channels, interests, and viewing patterns over the year.

The cards also assign users a personality type based on viewing habits. Types include Adventurer, Skill Builder, Creative Spirit, Sunshiner, Wonder Seeker, Connector, Philosopher, and Dreamer.

YouTube said the most common personality types were Sunshiner, Wonder Seeker, and Connector. Philosopher and Dreamer were the rarest.

Users who listened to music through the platform will see Top Artists and Top Songs cards within their Recap. Additional music data, including genres, podcasts, and international listening, is available in the YouTube Music app.

YouTube said it conducted nine rounds of feedback testing and evaluated more than 50 concepts before finalizing the feature. In an accompanying video, YouTube representatives said the team used Gemini to analyze watch history patterns, which enabled them to create a structured recap from YouTube’s unstructured video library.

Why This Matters

YouTube Recap gives the platform a year-end engagement feature comparable to Spotify Wrapped.

For creators, the feature surfaces which channels appear in users’ top viewing lists. People can save and share their Recap cards, which could boost channels’ social media visibility during the holiday period.

Looking Ahead

Users in North America can access their Recap starting today. Those outside North America should see the feature become available throughout the week.

For more details, see the video below:

Ask an SEO: Is An XML Or HTML Sitemap Better For SEO? via @sejournal, @HelenPollitt1

In this edition of Ask An SEO, we break down a common point of confusion for site owners and technical SEOs:

Do I need both an XML sitemap and an HTML one, and which one is better to use for SEO?

It can be a bit confusing to know whether it’s better to use an XML sitemap or an HTML one for your site. In some instances, neither is needed, and in some, both are helpful. Let’s dive into what they are, what they do, and when to use them.

What Is An XML Sitemap?

An XML sitemap is essentially a list of URLs for pages and files on your website that you want the search bots to be able to find and crawl. You can also use the XML sitemap to detail information about the files, like the length of run-time for the video file specified, or the publication date of an article.

It is primarily used for bots. There is little reason why you would want a human visitor to use an XML sitemap. Well, unless they are debugging an SEO issue!
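For illustration, here is a minimal XML sitemap in the format defined by the sitemaps.org protocol; the URLs and dates are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2025-11-28</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/sitemap-guide</loc>
    <lastmod>2025-10-14</lastmod>
  </url>
</urlset>
```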

What Is The XML Sitemap Used For?

The purpose of the XML sitemap is to help search bots understand which pages on your website should be crawled, as well as giving them extra information about those pages.

The XML sitemap can help bots identify pages on the site that would otherwise be difficult to find. These can be orphaned pages, pages with few internal links, or even recently changed pages that you want to encourage the bots to recrawl.

Best Practices For XML Sitemaps

Most search bots will understand XML sitemaps that follow the sitemaps.org protocol. This protocol defines the required location of the XML sitemap on a site, the schema it needs to follow to be understood by bots, and how to prove ownership of domains in the case of cross-domain references.

There is typically a limit on the size of an XML sitemap that search bots will still parse. When building an XML sitemap, ensure it is under 50 MB uncompressed and contains no more than 50,000 URLs. If your website is larger, you may need multiple XML sitemaps to cover all of the URLs. In that instance, you can use a sitemap index file to organize your sitemaps in one location.
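A sitemap index file groups several child sitemaps into one location. A minimal example following the sitemaps.org protocol, with placeholder URLs, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-products.xml</loc>
    <lastmod>2025-11-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-articles.xml</loc>
    <lastmod>2025-11-28</lastmod>
  </sitemap>
</sitemapindex>
```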

As the purpose of the XML sitemap is typically to help bots find your crawlable, indexable pages, you should ensure that every URL it references returns a 200 server response code. In most instances, the URLs should be the canonical versions and should not carry any crawl or index restrictions.

Things To Be Aware Of With XML Sitemaps

There may be good reasons to go against “best practice” for XML sitemaps. For example, if you are implementing a lot of redirects, you may wish to include the old URLs in an XML sitemap even though they will return a 301 server response code. Adding a new XML sitemap containing those altered URLs can encourage the bots to recrawl them and pick up the redirects sooner than if they were just left to find them via crawling the site. This is especially the case if you have gone to the trouble of removing links to the 301 redirects on the site itself.

What Is An HTML Sitemap?

The HTML sitemap is a set of links to pages within your website. It is usually linked from somewhere on the site that is easy for users to find if they are specifically looking for it, such as the footer. It doesn’t form part of the main navigation of the site, though; it acts as an accompaniment to it.

What Is An HTML Sitemap Used For?

The idea of the HTML sitemap is to serve as a catch-all for navigation. If a user is struggling to find a page on your site through your main navigation elements, or search, they can go to the HTML sitemap and find links to the most important pages on your site. If your website isn’t that large, you may be able to include links to all of the pages on your site.

The HTML sitemap pulls double duty. Not only does it work as a mega-navigation for humans, but it can also help bots find pages. As bots will follow links on a website (as long as they are followable), the HTML sitemap can help them find pages that are otherwise unlinked, or poorly linked, on the site.

Best Practices For HTML Sitemaps

Unlike the XML sitemap, there is no specific format that an HTML sitemap needs to follow. As the name suggests, it tends to be a simple HTML page that contains hyperlinks to the pages you want users to find through it.
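As a sketch, an HTML sitemap can be as simple as a page of grouped, followable links; the sections and URLs below are purely illustrative:

```html
<!-- A minimal HTML sitemap page: plain followable links grouped by section.
     Headings and URLs are placeholders. -->
<main>
  <h1>Sitemap</h1>
  <h2>Products</h2>
  <ul>
    <li><a href="/products/widgets/">Widgets</a></li>
    <li><a href="/products/gadgets/">Gadgets</a></li>
  </ul>
  <h2>Support</h2>
  <ul>
    <li><a href="/support/contact/">Contact us</a></li>
    <li><a href="/support/returns/">Returns policy</a></li>
  </ul>
</main>
```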

To make it usable for bots too, it is important that the links are followable, i.e., they do not have a nofollow attribute on them. It is also prudent to make sure the URLs they link to aren’t disallowed through the robots.txt. Links that aren’t followable won’t cause you any serious issues; they just stop the sitemap from being useful to bots.
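To spot-check that sitemap links aren’t blocked by robots.txt, you can test them with Python’s standard library. This is a minimal sketch; the rules and URLs are made up for illustration:

```python
# Check which pages linked from an HTML sitemap are disallowed in
# robots.txt. Rules and URLs below are illustrative placeholders.
from urllib.robotparser import RobotFileParser

robots_rules = [
    "User-agent: *",
    "Disallow: /private/",
]

sitemap_links = [
    "https://www.example.com/products/widgets/",
    "https://www.example.com/private/internal-page/",
]

parser = RobotFileParser()
parser.parse(robots_rules)  # in practice, fetch and parse the live file

blocked = [url for url in sitemap_links if not parser.can_fetch("*", url)]
print(blocked)
# → ['https://www.example.com/private/internal-page/']
```

Running a check like this periodically catches sitemap links that quietly become uncrawlable after a robots.txt change.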

Things To Be Aware Of With HTML Sitemaps

Most users are not going to go to the HTML sitemap as their first port of call on a site. It is important to realize that if a user is going to your HTML sitemap to find a page, it suggests that your primary navigation on the site has failed them. It really should be seen as a last resort to support navigation.

Which Is Better To Use For SEO?

So, which is more important for SEO? Well, neither; it really depends on your website and its needs.

For example, a small website with fewer than 20 pages may not have a need for either an XML sitemap or an HTML sitemap. In this instance, if all the pages are linked to well from the main navigation system, the chances are high that users and search bots alike will easily be able to find each of the site’s pages without additional help from sitemaps.

However, if your website has millions of pages, and has a main navigation system that buries links several sub-menus deep, an XML sitemap and an HTML sitemap may be useful.

They both serve different purposes and audiences.

When To Use The XML Sitemap

In practice, having an XML sitemap, or several, can help combat crawl issues. It gives a clear list of all the pages that you want a search bot to crawl and index. An XML sitemap can also be very helpful for debugging crawling issues: When you upload it to Google Search Console, you will get an alert if there are issues with it or the URLs it contains, and you can home in on the indexing status of URLs within the XML sitemap. This can be very useful for large websites that have millions of pages.

Essentially, there isn’t really a reason not to use an XML sitemap, apart from the time and cost of creating and maintaining them. Many content management systems will automatically generate them, which can take away some of the hassle.

Really, if you can have an XML sitemap, you might as well. If, however, it will be too costly or developer-resource intensive, it is not critical if your site is fairly small and the search engines already do a good job of crawling and indexing it.

When To Use The HTML Sitemap

The HTML sitemap is more useful when a website’s navigation isn’t very intuitive, or the search functionality isn’t comprehensive. It serves as a backstop to ensure users can find deeply buried pages. An HTML sitemap is particularly useful for larger sites that have a more complicated internal linking structure. It can also show the relationship between different pages well, depending on the structure of the sitemap. Overall, it is helpful to both users and bots, but is only really needed when the website is suffering from architectural problems or is just exceedingly large.

So, in summary, there is no right or wrong answer to which is more important. It is, however, very dependent on your website. Overall, there’s no harm in including both, but it might not be critical to do so.

Featured Image: Paulo Bobita/Search Engine Journal

OpenAI Declares ‘Code Red’ To Improve ChatGPT Amid Google Competition via @sejournal, @MattGSouthern

OpenAI CEO Sam Altman has declared a “code red” to focus company resources on improving ChatGPT, according to an internal memo reported by The Wall Street Journal and The Information.

The memo signals OpenAI’s response to growing competition from Google, whose Gemini 3 model has outperformed ChatGPT in several benchmark tests since launching last month, according to Google’s own evaluation data and third-party leaderboards.

What’s New

Altman told employees that ChatGPT’s day-to-day experience needs improvement. Specific areas include personalization features, response speed and reliability, and the chatbot’s ability to answer a wider range of questions.

The company uses a color-coded system to indicate priority levels. This effort has been elevated to “code red,” above the previous “code orange” designation for ChatGPT improvements.

A new reasoning model is expected to launch next week, according to the memo, though OpenAI hasn’t publicly announced it.

Delayed Products

Several product initiatives are being postponed as a result.

Advertising integration, which OpenAI had been testing in beta versions of the ChatGPT app, is now on hold, according to The Information. AI agents designed for shopping and healthcare are also delayed, along with improvements to ChatGPT Pulse.

Altman has encouraged temporary team transfers to support ChatGPT development and established daily calls for those responsible for improvements.

Competitive Context

On the technical side, Google’s Gemini 3 and related models have posted strong scores on reasoning benchmarks. Google says Gemini 3 Deep Think outperforms earlier versions on Humanity’s Last Exam, a frontier-level benchmark created by AI safety researchers, and other difficult tests. Those results are reflected on Google’s own Gemini 3 Pro benchmark page and on independent leaderboards that track model performance.

OpenAI hasn’t released comparable public benchmark data for its next reasoning model yet, so comparisons rely on current GPT-5 results rather than the upcoming system referenced in the memo.

Google is also continuing to invest in generative image tools like its Nano Banana and Nano Banana Pro image generators, which sit alongside Gemini 3 as part of a broader AI product lineup.

Benchmark Context

Humanity’s Last Exam is intended to be a harder successor to saturated benchmarks like MMLU. It’s maintained by the Center for AI Safety and Scale AI, with an overview available on the project site and results tracked by multiple leaderboards, including Scale’s official leaderboard and third-party dashboards such as Artificial Analysis.

Google’s Gemini 3 Pro benchmark documentation lists a higher score on Humanity’s Last Exam than several competing models, including GPT-5. That’s the basis for reporting that Gemini 3 has “outperformed” ChatGPT on that specific benchmark.

OpenAI has published strong results on other reasoning benchmarks for its GPT-5 series, but the memo appears to be reacting to this recent wave of Gemini 3 performance data rather than a single test.

Traffic And Usage Context

Despite the technical pressure, OpenAI still has a large lead in assistant usage.

In a recent post on LinkedIn, ChatGPT head Nick Turley said ChatGPT is the “#1 AI assistant worldwide,” accounting for “around 70% of assistant usage” and roughly “10% of search activity.”

Separate reporting from outlets including the Financial Times indicates OpenAI has more than 800 million weekly users, with most on the free tier, while Gemini’s user base has been growing quickly from a lower starting point.

Altman’s memo acknowledges Google’s recent progress and warns of “temporary economic headwinds,” while also saying OpenAI is “catching up fast.”

A Familiar Playbook

The “code red” designation echoes Google’s own response to ChatGPT several years ago.

Google management declared a “code red” after ChatGPT’s viral launch. CEO Sundar Pichai redirected teams across Google Research, Trust and Safety, and other departments to focus on AI product development.

That urgency led to the accelerated development of Google’s AI products, culminating in Bard’s launch in early 2023 and its subsequent evolution into Gemini.

Now the roles have reversed. Google’s sustained investment in AI infrastructure has produced a model that scores higher than ChatGPT on several high-profile benchmarks, prompting OpenAI to adopt a similar crisis-response framework for its flagship product.

Company Response

Nick Turley, OpenAI’s head of ChatGPT, addressed the competitive landscape in recent posts on LinkedIn and X, where he described ChatGPT as the top AI assistant worldwide.

“New products are launching every week, which is great,” he wrote in one of the posts, saying that competition pushes OpenAI to move faster and continue improving ChatGPT.

He added that OpenAI’s focus is making ChatGPT “more capable” while expanding access and making it “more intuitive and personal.”

OpenAI hasn’t publicly commented on the leaked memo itself.

Looking Ahead

OpenAI’s new reasoning model launch will provide the first indication of how the company is executing on Altman’s directive. The delay of advertising and AI agents suggests ChatGPT quality has become the company’s singular near-term priority, at least internally.

For marketers and SEO professionals, the more immediate impact is likely to be on how ChatGPT handles complex queries, research tasks, and follow-up questions once the new model is live. Any measurable changes in answer quality, speed, or personalization will be important to watch alongside Google’s continued Gemini 3 rollouts.


Featured Image: Mijansk786/Shutterstock

Signal Vs. Noise: Predicting Future Impact Of Content Marketing

This edited excerpt is from “B2B Content Marketing Strategy” by Devin Bramhall ©2025, and is reproduced and adapted with permission from Kogan Page Ltd.

Marketing can contribute to company growth in many different ways: net-new sales, customer retention, reduced risk from competitors, new revenue streams (like events) that affect more than one company goal, bringing a product to market successfully, and feature adoption/upsells, to name a few.

The challenge marketers face is convincing multiple stakeholders that their work did, in fact, contribute to any of these areas. Even if you have goals and agreed-upon metrics to measure success, reporting on marketing ends up fraught with complications, from the political and interpersonal to gaps in stakeholders’ knowledge of marketing and of what counts as “impact” and “value” to the business.

The opportunity for marketers in this situation is that the people who need convincing don’t know “the answer” to marketing attribution either. They argue with each other about it behind closed doors and change their minds often, but they can’t prove anything better than you can. They bought into some attribution model, or made one up, and have spent a ton of time campaigning internally and publicly to make other people believe their way is correct. Eventually, some of them do.

Predicting Future Impact

Most reporting focuses on what’s already happened – last month’s lead generation, last quarter’s revenue, or last year’s customer acquisition costs.

While historical data is crucial to making future decisions, it also keeps marketing leaders in a reactive position. By the time you identify a problem, it’s already affected your results. Leading indicators give you time to adjust course when needed, rather than explaining missed targets after the fact.

That’s why monitoring the signals along the way is also useful, if executed thoughtfully.

A few caveats:

  • Monitor quietly. You don’t have to share what you observe with your executives 1) at all, or 2) until you’re ready. They’ll either get confused or overly excited, and neither leads to a good place for you.
  • Work with your data team. Whatever job title they’ve been given at your company, find the people who have access to the raw data and ask them questions. Be specific about what you want to know. You don’t have to know the exact data types, time periods, or segments. They just need a detailed question to get you what you need.
  • Talk it through. Since data contains multiple realities depending on how you slice it, I’ve always found it helpful to run any conclusions or stories by my data team and, where possible, my boss (see first bullet!). Basically, I look for two different analytical perspectives:
    • Someone whose job it is to ensure our data is accurate.
    • Someone whose job it is to analyze data for reporting on the business.

Remember: Reporting isn’t a single-purpose activity. Reflecting on the past to measure impact is just one way to leverage reporting. Use it to inspire new ideas, optimizations, and experiments, too.

Read more: How To Write SEO Reports That Get Attention From Your CMO

A Few Potentially Useful Signals You Can Monitor

Ultimately, it’s up to you to determine which signals provide valuable insights into the performance of your marketing initiatives. Regardless of your role – producer, manager, or team lead – if I were your boss, I’d expect you to know how to determine what those are.

Also, the exact signals you monitor will continue to change as technology and the internet evolve. However, there are a few informative signals that have stood the test of time (thus far) for me.

Resonance

When it comes to resonance, unprompted action on even a semiregular basis is a huge signal that something you’re doing is working. Even if your sample is too small to be statistically significant, I’d lean in and, at the very least, conduct further experiments.

One example is when people semi-consistently share and reference a topic or idea you’ve published in their own content (and their followers react to it). This indicates you’re at least on the right track with content direction.

In my experience, search volume for a keyword or phrase is minimally helpful for determining resonance in the beginning: Just because no one is searching for a topic doesn’t mean it’s not a common problem. A more useful exercise is checking whether your campaign corresponds with an increase in search volume during that time period.
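That before-and-after comparison can be sketched in a few lines. This is a minimal illustration, not a rigorous statistical test; all numbers are hypothetical, and real search-volume data would come from your keyword tool of choice.

```python
from statistics import mean

# Hypothetical weekly search-volume samples for a target phrase
# (all numbers are illustrative, not real data).
pre_campaign = [120, 135, 110, 140, 125, 130]     # six weeks before launch
during_campaign = [150, 170, 165, 180, 175, 190]  # six weeks after launch

baseline = mean(pre_campaign)
observed = mean(during_campaign)
lift = (observed - baseline) / baseline  # relative change vs. baseline

print(f"Baseline: {baseline:.0f}/week, during campaign: {observed:.0f}/week")
print(f"Search-volume lift: {lift:+.0%}")
```

Even a simple relative-lift number like this is only suggestive: Seasonality, news events, or competitor activity can move search volume independently of your campaign, so treat it as one signal among several.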

Activity

The same principle applies to other actions as well. Are folks commenting on posts asking for your opinion on specific problems they are experiencing? Are you receiving anecdotal feedback semi-consistently on specific marketing initiatives or topics you’re investing in?

Do folks engage with your content even when you’re inconsistent? One client I worked with saw 60-70% open rates even on major holidays or when the newsletter was sent off-schedule on a Saturday or Monday.

Are you seeing an increase in time-on-page or pages per session from certain topics or even specific pieces?

Copycats

While not a perfect signal, if your competitors start copying your content, it’s either a sign you could be onto something or an indication that their strategy isn’t working, they don’t have one, or they’re struggling. No matter the case, it’s a signal worth paying attention to and perhaps doing some recon to find out if there are any weaknesses you can exploit.

Ultimately, your goal is to explore these signals to establish whether there are correlations between these leading indicators and your ultimate business outcomes. This isn’t just theoretical – it requires analyzing your data to identify patterns that predict success for your business.

Turning Measurement Into Mastery

Effective reporting isn’t the end of your marketing journey – it’s the bridge to your next phase of growth. Measuring the impact of content marketing isn’t just about proving its value; it’s about creating the leverage you need to execute strategies that genuinely move your business forward.

Remember these essential principles as you develop your measurement approach:

  • Numbers don’t tell stories – people do. Your data provides ingredients, but you create the meal. The most powerful reports transform complex metrics into clear narratives that inspire action and build confidence in your strategy.
  • Measurement serves strategy, not the other way around. When you begin with clear objectives and understand what truly influences behavior, metrics become tools for insight rather than constraints on creativity.
  • Reporting is campaigning. The most successful marketers recognize that performance reporting is ultimately a persuasion exercise – one that requires understanding audience motivations, building relationships, and consistently communicating value.
  • Both measurable and unmeasurable impacts matter. While focusing on quantifiable metrics, never lose sight of the equally valuable but harder-to-measure effects of brand building, relationship development, and community growth.

By developing measurement systems that capture both immediate impacts and leading indicators, you transform reporting from a dreaded obligation into a strategic advantage.

Summary: Practice And Persistence

As you apply these principles to your own marketing, remember that mastery comes through practice and persistence. You’ll make mistakes, discover unexpected insights, and continuously refine your approach. That’s not just normal – it’s the path to excellence.

To read the full book, SEJ readers have an exclusive 25% discount code and free shipping to the US and UK. Use promo code “SEJ25” at koganpage.com.


Featured Image: Igor Link/Shutterstock