“Stop trying to make GEO happen. It’s not going to happen.”
With apologies to Gretchen Wieners and the writers of “Mean Girls,” the line feels like the only way to start this conversation about a buzzword making the rounds: GEO (which is now, allegedly, supposed to mean Generative Engine Optimization).
This article grew out of a LinkedIn post/open plea I wrote recently about this furore, which unexpectedly took off – approaching 10,000 impressions, four dozen comments, and plenty of laughter at bad acronym ideas. Clearly, this struck a nerve with the SEO and marketing community.
On the surface, to be fair, the concept makes sense. We’re in a new era where AI-driven search engines are shaping how content is retrieved, summarized, and delivered. Adapting SEO strategies for that reality is important; however…
Nobody Will Say “G-E-O”
Acronyms survive if they’re pronounceable. And if an acronym isn’t easy to say letter by letter but happens to spell an actual word, people will simply say it as that word.
To my point, no one is going to spell out “G-E-O” when talking about Generative Engine Optimization. It simply doesn’t roll off the tongue nicely. Inevitably, it becomes the word “geo” – and that’s where the trouble starts.
The word geo is ancient. It comes from the Greek word gē (γη), meaning earth or ground. It’s the root of hundreds of words we already use every day: geography, geology, geothermal, geopolitics, geospatial, geotracking, geotagging, geomapping. In technology, it’s baked into concepts like geo-targeting and geo-fencing, and in all cases, geo explicitly means “the earth” in some form or another.
The linguistic baggage here is too heavy. There is no amount of wishful thinking that will make “gee-ee-oh” mean something not related to the earth.
The Branding Problem: Words Have Meaning
Words and acronyms aren’t blank slates. They carry cultural, historical, and linguistic connotations and memories that can’t be erased by decree.
Try to rebrand “GEO” and people’s brains will still instantly (or at least initially) read it as “geography.” They might pause and look at the context, and then decide “Oh, this must be G-E-O which means generative engine optimization, which is like S-E-O but for AI.” That’s a lot of work we are asking the public to do for three little letters.
It’s the same reason I could never (not that I would ever) convince our marketing team to rebrand our SEO plugin as an “FBI” plugin. No matter how hard we try to make FBI mean For Better Indexing, we are not going to be able to overcome the decades of heavy usage that says FBI means Federal Bureau of Investigation.
In this case, GEO doesn’t have decades of historical usage; it has literally millennia of meaning that IS NOT THIS. Hijacking a term with millennia of usage is not innovation; it is confusion.
The SEO Problem: Competing With Entrenched Meaning
Let’s set branding aside and look at this purely from an SEO perspective.
Search engines reward authority, longevity, and relevance. The word geo has decades of backlinks, established search volume, and deeply entrenched usage. Every authoritative signal in Google’s system points to geo = geography/geographical/earth-related or adjacent.
Generative Engine Optimization will be competing against that established meaning forever. It won’t matter how many blog posts declare that “GEO is the new SEO” – the search results for “geo” will belong to geography, not generative optimization.
Then we can look beyond Google’s index – the training data behind large language models (LLMs) already “knows” that geo refers to Earth and geography, because that’s what the word has meant in every corpus of text for thousands of years. The idea that we can overwrite that meaning in a few quarters of (AI-generated) blog posts and conference talks is, frankly, wishful thinking.
Acronym Soup: Why Hijacking Fails
This isn’t the first time people have tried to coin a buzzword by hijacking an acronym. It never works. Acronyms only stick when they are:
Unique (no heavy pre-existing baggage).
Clear (people know, or can easily surmise, what they stand for).
Pronounceable (people can easily say them in conversation).
When they aren’t, they dissolve into acronym soup. Everyone gets confused, nobody adopts the term consistently, and the idea dies.
Humor Break: Acronyms We Can Safely Reject Now
Since I’m sure there will be a scramble to come up with something “better” than GEO, let me save you the trouble and pre-remove a few tempting, but alas already in use, options from the list.
FBI – For Better Indexing (all your queries are under surveillance).
PDF – Prompt-Driven Framework (optimized for clients who never open them).
BIO – Bot Interaction Optimization (because the LLMs need to “like” you).
CEO – Crawl Efficiency Orchestration (manage your bots like a boss).
URL – Unified Retrieval Layer (ranking starts at the root).
GPS – Generative Prompt Sequencing (your AI still needs directions).
API – Automated Prompt Injection (though to be fair, my brain always defaults to “armor piercing incendiaries” but that’s probably just a me problem).
HTML – Human-Tuned Model Language (teach the bots to “speak search”).
INFO – Intelligent Neural Findability Optimization (make your content “discoverable” to AI).
PRO – Prompt Response Optimization (win the answer box in AI).
EV – Enhanced Visibility (because apparently that’s the whole point).
SEO – Synthetic Engine Optimization (yes, we’ve come full circle).
They’re funny, but none of them should happen for all of the reasons outlined above.
What Actually Works When Naming Concepts
So, if GEO is a lost cause, what should we be doing instead?
1. Start Unique
Don’t hijack a word or acronym already in heavy use.
The cleanest acronyms are invented, not repurposed.
2. Make It Pronounceable
SEO works because people can say it.
SaaS (Software as a Service) works because it’s short and phonetically easy (“sass” in case you didn’t know).
3. Anchor It In Authority
Google’s own acronyms, like E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), stuck because Google itself enforced them.
A community can rally around a term, but only if it feels backed by authority or usefulness.
4. Check The SERPs First
Before you try to coin an acronym, search it.
If the first three pages of results are about something else entirely, you might be sunk before you begin.
The Bottom Line: Stop Trying To Make GEO Happen
Generative Engine Optimization as a concept makes sense, but GEO as an acronym is doomed.
It fails linguistically (nobody will say “G-E-O”), historically (the word is ancient and already claimed), and strategically (search engines and LLMs already associate “geo” with geography, not generative search).
If you want a new term to catch on, start with one that isn’t already taken. Otherwise, you’re not innovating language – you’re just creating acronym soup … and sabotaging your own visibility from day one.
So please, stop trying to make GEO happen. It’s not going to happen.
Google is apparently having trouble identifying paywalled content because of a common way publishers, such as news sites, handle it. It is asking publishers with paywalled content to change the way they block it in order to help Google out.
Search Related JavaScript Problems
Google updated their guidelines with a call for publishers to consider changing how they block users from paywalled content. It’s fairly common for publishers to use a script to block non-paying users with an interstitial even though the full content is still present in the page source. This pattern may be causing issues for Google in properly identifying paywalled content.
In a recent addition to their search documentation about JavaScript issues related to search, they wrote:
“If you’re using a JavaScript-based paywall, consider the implementation.
Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed. This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”
The documentation doesn’t say what problems Google itself is having, but a changelog documenting the change offers more context about why they are asking for this change:
“Adding guidance for JavaScript-based paywalls
What: Added new guidance on JavaScript-based paywall considerations.
Why: To help sites understand challenges with the JavaScript-based paywall design pattern, as it makes it difficult for Google to automatically determine which content is paywalled and which isn’t.”
The changelog makes it clear that the way some publishers use JavaScript for blocking paywalled content is making it difficult for Google to know if the content is or is not paywalled.
The change was an addition to a numbered list of JavaScript problems publishers should be aware of, item number 10 on their “Fix Search-related JavaScript Problems” page.
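Google’s separate structured-data documentation describes the complementary half of this: marking up the paywalled section with JSON-LD so crawlers can distinguish gated content from cloaking. A minimal sketch of that markup (the headline and the .paywalled-section CSS class are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example paywalled article",
  "isAccessibleForFree": false,
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": false,
    "cssSelector": ".paywalled-section"
  }
}
```

The point of the new guidance still stands alongside this markup: the gated HTML itself should only be sent once the subscription check passes on the server, rather than shipped to every visitor and merely hidden with JavaScript.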
This post was sponsored by Cloudways. The opinions expressed in this article are the sponsor’s own.
Wondering why your rankings may be declining?
Just discovered your WooCommerce site has slow load times?
A slow WooCommerce site doesn’t just cost you conversions. It affects search visibility, backend performance, and customer trust.
Whether you’re a developer running your own stack or an agency managing dozens of client stores, understanding how WooCommerce performance scales under load is now considered table stakes.
Today, many WordPress sites are far more dynamic, meaning many things are happening at the same time:
Every action a user takes, whether logging in, updating a cart, or initiating checkout, relies on live data from the server. These requests cannot be cached.
Tools like Varnish or CDNs can help with public pages such as the homepage or product listings. But once someone logs in to their account or interacts with their session, caching no longer helps. Each request must be processed in real time.
This article breaks down why that happens and what kind of server setup is helping stores stay fast, stable, and ready to grow.
Why Do WooCommerce Stores Slow Down?
WooCommerce often performs well on the surface. But as traffic grows and users start interacting with the site, speed issues begin to show. These are the most common reasons why stores slow down under pressure:
1. PHP: It Struggles With High User Activity
WooCommerce depends on PHP to process dynamic actions such as cart updates, coupon logic, and checkout steps. Traditional stacks using Apache for PHP handling are slower and less efficient.
2. The Database: It Struggles To Keep Up With Heavy Writes
Order creation, cart activity, and user actions generate a high number of database writes. During busy times like flash sales, new merchandise arrivals, or course launches, the database struggles to keep up.
Platforms that support optimized query execution and better indexing handle these spikes more smoothly.
3. Caching Issues: Object Caching Is Missing Or Poorly Configured
Without proper object caching, WooCommerce queries the database repeatedly for the same information. That includes product data, imagery, cart contents, and user sessions.
Solutions that include built-in Redis support help move this data to memory, reducing server load and improving site speed.
4. Concurrency Limits Affect Performance During Spikes
Most hosting stacks today, including Apache-based ones, perform well for a wide range of WordPress and WooCommerce sites. They handle typical traffic reliably and have powered many successful stores.
As traffic increases and more users log in and interact with the site at the same time, the load on the server begins to grow. Architecture starts to play a bigger role at that point.
Stacks built on NGINX with event-driven processing can manage higher concurrency more efficiently, especially during unanticipated traffic spikes.
Rather than replacing what already works, this approach extends the performance ceiling for stores that are becoming more dynamic and need consistent responsiveness under heavier load.
5. Your WordPress Admin Slows Down During Sales Seasons
During busy periods like seasonal sales campaigns or new stock availability, stores can often slow down for the team managing the site, too. The WordPress dashboard takes longer to load, which means publishing products, managing orders, or editing pages also becomes slower.
This slowdown happens because both shoppers and staff are using the site’s resources at the same time, and the server has to handle all those requests at once.
How To Architect A Scalable WordPress Setup For Dynamic Workloads
WooCommerce stores today are built for more than stable traffic. Customers are logging in, updating their carts, taking actions to manage their subscription profile, and as a result, are interacting with your backend in real time.
The traditional WordPress setup, which is primarily designed for static content, cannot handle that kind of demand.
Here’s how a typical setup compares to one built for performance and scale:
| Component | Basic Setup | Scalable Setup |
| --- | --- | --- |
| Web Server | Apache | NGINX |
| PHP Handler | mod_php or CGI | PHP-FPM |
| Object Caching | None or database transients | Redis with Object Cache Pro |
| Scheduled Tasks | WP-Cron | System cron job |
| Caching | CDN or full-page caching only | Layered caching, including object cache |
| .htaccess Handling | Built-in with Apache | Manual rewrite rules in NGINX config |
| Concurrency Handling | Limited | Event-based, memory-efficient server |
How To Manually Set Up A Performance-Ready & Scalable WooCommerce Stack
If you’re setting up your own server or tuning an existing one, here are the most important components to get right:
1) Use NGINX For Static File Performance
NGINX is often used as a high-performance web server for handling static files and managing concurrent requests efficiently. It is well suited for stores expecting high traffic or looking to fine-tune their infrastructure for speed.
Unlike Apache, NGINX does not use .htaccess files. Rewrite rules, such as permalinks, redirects, and trailing slashes, need to be added manually to the server block. For WordPress, these rules are well-documented and only need to be set once during setup.
This approach gives more control at the server level and can be helpful for teams building out their own environment or optimizing for scale.
2) Enable PHP-FPM For Faster Request Handling
PHP-FPM separates PHP processing from the web server. It gives you more control over memory and CPU usage. Tune values like pm.max_children and pm.max_requests based on your server size to prevent overload during high activity.
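As an illustration, a pool definition along these lines caps concurrency and recycles workers. The file path and numbers are placeholders; size pm.max_children roughly as available RAM divided by per-worker memory usage:

```ini
; Example pool config, e.g. /etc/php/8.2/fpm/pool.d/www.conf (path varies by distro)
[www]
pm = dynamic
pm.max_children = 25      ; hard cap on concurrent PHP workers
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
pm.max_requests = 500     ; recycle each worker after 500 requests to contain memory leaks
```

During a traffic spike, requests beyond pm.max_children queue rather than exhausting memory, which is usually the better failure mode.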
3) Install Redis With Object Cache Pro
Redis allows WooCommerce to store frequently used data in memory. This includes cart contents, user sessions, and product metadata.
Pair this with Object Cache Pro to compress cache objects, reduce database load, and improve site responsiveness under load.
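Conceptually, an object cache is a cache-aside layer in front of the database. This minimal sketch (plain Python, with a dict standing in for Redis and a fake query function, purely to illustrate the pattern) shows why repeated lookups stop hitting the database:

```python
db_calls = 0

def query_database(key: str) -> str:
    """Stand-in for an expensive WooCommerce database query."""
    global db_calls
    db_calls += 1
    return f"value-for-{key}"

cache: dict[str, str] = {}  # Redis would hold this in production

def get_cached(key: str) -> str:
    if key not in cache:            # cache miss: hit the database once
        cache[key] = query_database(key)
    return cache[key]               # cache hit: served from memory

# Repeated lookups for the same product metadata hit the DB only once.
for _ in range(1000):
    get_cached("product:42:meta")

print(db_calls)  # → 1
```

Without the cache layer, the same loop would issue 1,000 database queries; with it, 999 of them are served from memory.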
4) Replace WP-Cron With A System-Level Cron Job
By default, WordPress checks for scheduled tasks whenever someone visits your site. That includes sending emails, clearing inventory, and syncing data. If you have steady traffic, it works. If not, things get delayed.
You can avoid that by turning off WP-Cron. Just add define('DISABLE_WP_CRON', true); to your wp-config.php file. Then, set up a real cron job at the server level to run wp-cron.php every minute. This keeps those tasks running on time without depending on visitors.
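Putting that together, and assuming your site lives at example.com (a placeholder domain) with wget available on the server, the crontab entry looks like this:

```shell
# wp-config.php must already contain: define('DISABLE_WP_CRON', true);
# Edit the server crontab with: crontab -e
# Run WordPress's scheduled tasks every minute:
* * * * * wget -q -O - "https://example.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
```

curl -s would work equally well in place of wget; the point is that the schedule is now driven by the server clock, not by visitor traffic.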
5) Add Rewrite Rules Manually For NGINX
NGINX doesn’t use .htaccess. That means you’ll need to define URL rules directly in the server block.
This includes things like permalinks, redirects, and static file handling. It’s a one-time setup, and most of the rules you need are already available from trusted WordPress documentation. Once you add them, everything works just like it would on Apache.
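For reference, the core of the widely documented WordPress permalink setup on NGINX is a single try_files directive inside the server block. This is a sketch, with the domain, root path, and PHP-FPM socket as placeholders to match your own environment:

```nginx
server {
    listen 80;
    server_name example.com;           # placeholder domain
    root /var/www/example.com/public;  # placeholder path
    index index.php;

    location / {
        # Serve the file or directory if it exists; otherwise route
        # the pretty permalink to WordPress's front controller.
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # must match your PHP-FPM pool
    }
}
```

Once this is in place, permalinks and redirects behave the same as they would under Apache with .htaccess.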
A Few Tradeoffs To Keep In Mind
This kind of setup brings a real speed boost. But there are some technical changes to keep in mind.
NGINX won’t read .htaccess. All rewrites and redirects need to be added manually.
WordPress Multisite may need extra tweaks, especially if you’re using subdirectory mode.
Security settings like IP bans or rate limits should be handled at the server level, not through plugins.
Most developers won’t find these issues difficult to work with. But if you’re using a modern platform, much of it is already taken care of.
You don’t need overly complex infrastructure to make WooCommerce fast; just a stack that aligns with how modern, dynamic stores operate today.
Next, we’ll look at how that kind of stack performs under traffic, with benchmarks that show what actually changes when the server is built for dynamic sites.
What Happens When You Switch To An Optimized Stack?
Not all performance challenges come from code or plugins. As stores grow and user interactions increase, the type of workload becomes more important, especially when handling live sessions from logged-in users.
To better understand how different environments respond to this kind of activity, Koddr.io ran an independent benchmark comparing two common production setups:
A hybrid stack using Apache and NGINX.
A stack built on NGINX with PHP-FPM, Redis, and object caching.
Both setups were fully optimized and included tuned components like PHP-FPM and Redis. The purpose of the benchmark was to observe how each performs under specific, real-world conditions.
The tests focused on uncached activity from WooCommerce and LearnDash, where logged-in users trigger dynamic server responses.
In these scenarios, the optimized stack showed higher throughput and consistency during peak loads. This highlights the value of having infrastructure tailored for dynamic, high-concurrency traffic, depending on the use case.
WooCommerce Runs Faster Under Load
One test simulated 80 users checking out at the same time. The difference was clear:
| Scenario | Hybrid Stack | Optimized Stack | Gain |
| --- | --- | --- | --- |
| WooCommerce Checkout | 3,035 actions | 4,809 actions | +58% |
Screenshot from Koddr.io, August 2025
LMS Platforms Benefit Even More
For LearnDash course browsing, a write-heavy and uncached task, the optimized stack completed 85% more requests:

| Scenario | Hybrid Stack | Optimized Stack | Gain |
| --- | --- | --- | --- |
| LearnDash Course List View | 13,459 actions | 25,031 actions | +85% |
This shows how optimized stacks handle personalized or dynamic content more efficiently. These types of requests can’t be cached, so the server’s raw efficiency becomes critical.
Screenshot from Koddr.io, August 2025
Backend Speed Improves, Too
The optimized stack wasn’t just faster for customers. It also made the WordPress admin area more responsive:
WordPress login times improved by up to 31%.
Publish actions ran 20% faster, even with high traffic.
This means your team can concurrently manage products, update pages, and respond to sales in real time, without delays or timeouts.
It Handles More Without Relying On Caching
When Koddr turned off Varnish, the hybrid stack experienced a 71% drop in performance, showing how heavily it depends on caching. The optimized stack dropped just 7%, which highlights its ability to maintain speed even during uncached, logged-in sessions.
Both setups have their strengths, but for stores with real-time user activity, reducing reliance on caching can make a measurable difference.
| Stack Type | With Caching | Without Caching | Drop |
| --- | --- | --- | --- |
| Hybrid Stack | 654,000 actions | 184,000 actions | -71% |
| Optimized Stack | 619,000 actions | 572,000 actions | -7% |
Screenshot from Koddr.io, August 2025
Why This Matters
Static pages are easy to optimize. But WooCommerce stores deal with real-time traffic. Cart updates, login sessions, and checkouts all require live processing. Caching cannot help once a user has signed in.
The Koddr.io results show how an optimized server stack helps you scale without complex performance workarounds.
These are the kinds of changes that power newer stacks purpose-built for dynamic workloads, such as Cloudways Lightning, which is designed for real WooCommerce traffic.
Core Web Vitals Aren’t Just About The Frontend
You can optimize every image. Minify every line of code. Switch to a faster theme. But your Core Web Vitals score will still suffer if the server can’t respond quickly.
That’s what happens when logged-in users interact with WooCommerce or LMS sites.
When a customer hits “Add to Cart,” caching is out of the picture. The server has to process the request live. That’s where TTFB (Time to First Byte) becomes a real problem.
Slow server response means Google waits longer to start rendering the page. And that delay directly affects your Largest Contentful Paint and Interaction to Next Paint metrics.
Frontend tuning gets you part of the way. But if the backend is slow, your scores won’t improve. Especially for logged-in experiences.
Real optimization starts at the server.
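You can observe this server-side delay directly: curl can report time-to-first-byte for an anonymous (cacheable) request versus a logged-in (uncached) one. The URL and cookie value below are placeholders for your own site and session:

```shell
# TTFB for an anonymous, cacheable request:
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/

# TTFB for a logged-in, uncached request, passing a session cookie:
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" \
     -H "Cookie: wordpress_logged_in=PLACEHOLDER" https://example.com/my-account/
```

A large gap between the two numbers is a sign that the backend, not the frontend, is what’s holding back your Core Web Vitals for logged-in users.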
How Agencies Are Skipping The Manual Work
Every developer has a checklist for WooCommerce performance. Use NGINX. Set up Redis. Replace WP-Cron. Add a WAF. Test under load. Keep tuning.
But not every team has the bandwidth to maintain all of it.
That’s why more agencies are using pre-optimized stacks that include these upgrades by default. Cloudways Lightning, a managed stack based on NGINX + PHP-FPM and designed for dynamic workloads, is a good example of that.
It’s not just about speed. It’s also about backend stability during high traffic. Admin logins stay fast. Product updates don’t hang. Orders keep flowing.
Joe Lackner, founder of Celsius LLC, shared what changed for them:
“Moving our WordPress workloads to the new Cloudways stack has been a game-changer. The console admin experience is snappier, page load times have improved by +20%, and once again Cloudways has proven to be way ahead of the game in terms of reliability and cost-to-performance value at this price point.”
This is what agencies are looking for. A way to scale without getting dragged into infrastructure management every time traffic picks up.
Final Takeaway
WooCommerce performance is no longer just about homepage load speed.
Your site handles real-time activity from both customers and your team. Once a user logs in or reaches checkout, caching no longer applies. Each action hits the server directly.
If the infrastructure isn’t optimized, site speed drops, sales suffer, and backend work slows down.
The foundations matter. A stack that’s built for high concurrency and uncached traffic keeps things fast across the board. That includes cart updates, admin changes, and product publishing.
For teams who don’t want to manage server tuning manually, options like Cloudways Lightning deliver a faster, simpler path to performance at scale.
Use promo code “SUMMER305” and get 30% off for 5 months + 15 free migrations. Sign up now!
New research from BrightEdge shows that Google AI Overviews, AI Mode, and ChatGPT recommend different brands nearly 62% of the time. BrightEdge concludes that each AI search platform is interpreting the data in different ways, suggesting different ways of thinking about each AI platform.
Methodology And Results
BrightEdge’s analysis was conducted with its AI Catalyst tool, using tens of thousands of the same queries across ChatGPT, Google AI Overviews (AIO), and Google AI Mode. The research documented a 61.9% overall disagreement rate, with only 33.5% of queries showing the exact same brands in all three AI platforms.
Google AI Overviews averaged 6.02 brand mentions per query, compared to ChatGPT’s 2.37. Commercial intent search queries containing phrases like “buy,” “where,” or “deals” generated brand mentions 65% of the time across all platforms, suggesting that these kinds of high-intent keyword phrases continue to be reliable for ecommerce, just like in traditional search engines. Understandably, e-commerce and finance verticals achieved 40% or more brand-mention coverage across all three AI platforms.
Three Platforms Diverge
The three AI platforms in the study did not always agree. Many identical queries led to very different brand recommendations depending on the AI platform.
BrightEdge shares that:
ChatGPT cites trusted brands even when it’s not grounding on search data, indicating that it’s relying on LLM training data.
Google AI Overviews cites brands 2.5 times more than ChatGPT.
Google AI Mode cites brands less often than both ChatGPT and AIO.
The research indicates that ChatGPT favors trusted brands, Google AIO emphasizes breadth of coverage with more brand mentions per query, and Google AI Mode selectively recommends brands.
Next we untangle why these patterns exist.
Differences Exist
BrightEdge asserts that this split across the three platforms is not random. I agree that there are differences, but I disagree that “authority” has anything to do with it and offer an alternate explanation later on.
These are the conclusions that they draw from the data:
“The Brand Authority Play: ChatGPT’s reliance on training data means established brands with strong historical presence can capture mentions without needing fresh citations. This creates an “authority dividend” that many brands don’t realize they’re already earning—or could be earning with the right positioning.
The Volume Opportunity: Google AI Overview’s hunger for brand mentions means there are 6+ available slots per relevant query, with clear citation paths showing exactly how to earn visibility. While competitors focus on traditional SEO, innovative brands are reverse-engineering these citation networks.
The Quality Threshold: Google AI Mode’s selectivity means fewer brands make the cut, but those that do benefit from heavy citation backing that reinforces their authority across the web.”
Not Authority – It’s About Training Data
BrightEdge refers to “authority signals” within ChatGPT’s underlying LLM. My opinion differs with regard to an LLM’s generated output (as opposed to retrieval-augmented responses that pull in live citations). I don’t think there are any signals in the sense of ranking-related signals. In my opinion, the LLM is simply reaching for the entity (brand) related to a topic.
What looks like “authority” to someone with their SEO glasses on is more likely about frequency, prominence, and contextual embedding strength.
Frequency: How often the brand appears in the training data.
Prominence: How central the brand is in those contexts (headline vs. footnote).
Contextual Embedding Strength: How tightly the brand is associated with certain topics based on the model’s training data.
If a brand appears widely in appropriate contexts within the training data, then, in my opinion, it is more likely to be generated as a brand mention by the LLM, because this reflects patterns in the training data and not authority.
That said, I agree with BrightEdge that being authoritative is important, and that quality shouldn’t be minimized.
Patterns Emerge
The research data suggests that there are unique patterns across all three platforms that can behave as brand citation triggers. One pattern all three share is that keyword phrases with a high commercial intent generate brand mentions in nearly two-thirds of cases. Industries like e-commerce and finance achieve higher brand coverage, which, in my opinion, reflects the ability of all three platforms to accurately understand the strong commercial intents for keywords inherent to those two verticals.
A little sunshine in a partly cloudy publishing environment is the finding that comparison queries for “best” products generate 43% brand citations across all three AI platforms, again reflecting the ability of those platforms to understand user query contexts.
Citation Network Effect
BrightEdge has an interesting insight about creating presence in all three platforms that it calls a citation network effect. BrightEdge asserts that earning citations in one platform could influence visibility in the others.
They share:
“A well-crafted piece… could:

Earn authority mentions on ChatGPT through brand recognition.
Generate 6+ competitive mentions on Google AI Overview through comprehensive coverage.
Secure selective, heavily-cited placement on Google AI Mode through third-party validation.

The citation network effect means that earning mentions on one platform often creates the validation needed for another.”
Optimizing For Traditional Search Remains
Nevertheless, I agree with BrightEdge that there’s a strategic opportunity in creating content that works across all three environments, and I would make it explicit that SEO, optimizing for traditional search, is the keystone upon which the entire strategy is crafted.
Traditional SEO is still the way to build visibility in AI search. BrightEdge’s data indicates that this is directly effective for AIO and has a more indirect effect for AI Mode and ChatGPT.
ChatGPT can cite brand names directly from training data and from live data. It also cites brands directly from the LLM, which suggests that generating strong brand visibility tied to specific products and services may be helpful, as that is what eventually makes it into the AI training data.
BrightEdge’s conclusion about the data leans heavily into the idea that AI is creating opportunities for businesses that build brand awareness in the topics they want to be surfaced in. They share:
“We’re witnessing the emergence of AI-native brand discovery. With this fundamental shift, brand visibility is determined not by search rankings but by AI recommendation algorithms with distinct personalities and preferences.
The brands winning this transition aren’t necessarily the ones with the biggest SEO budgets or the most content. They’re the ones recognizing that AI disagreement creates more paths to visibility, not fewer.
As AI becomes the primary discovery mechanism across industries, understanding these platform-specific triggers isn’t optional—it’s the difference between capturing comprehensive brand visibility and watching competitors claim the opportunities you didn’t know existed.
The 62% disagreement gap isn’t breaking the system. It’s creating one—and smart brands are already learning to work it.”
Every week, 800 million searches happen across ChatGPT, Claude, Perplexity, and other AI engines.
If your brand isn’t showing up, you’re losing leads and opportunities.
Join Samanyou Garg, Founder of Writesonic, on September 10, 2025, for a webinar designed to help marketers and SEO teams master AI visibility. In this session, you’ll learn practical tactics to measure, prioritize, and optimize your AI footprint.
AI-driven search is no longer optional. Your brand’s presence in AI answer engines directly impacts traffic, leads, and revenue. This session will equip you with a step-by-step process to turn AI visibility into real business results.
Save your spot now to learn actionable strategies that top brands are using to dominate AI search.
Can’t attend live? Register anyway, and we’ll send you the full recording.
For as long as I’ve been in this industry, there’s been debate about whether SEO is strategic or tactical. Most SEOs would like to believe their work is strategic. Many executives see it as tactical. The truth is somewhere in between, and the arrival of generative AI is forcing a new level of clarity.
This matters because “strategy” and “tactics” are not synonyms. In business, strategy is the plan. Tactics are the moves. Confusing the two doesn’t just muddy language. It leads to wasted resources, stalled initiatives, and misplaced expectations for what SEO can and cannot deliver.
Defining Strategy Vs. Tactics
Image credit: Duane Forrester
Business literature has been clear on this for decades, and voices like Porter, Mintzberg, and Drucker shaped how leaders everywhere talk about strategy. Their framing applies directly when we examine SEO’s role today.
Michael Porter is widely recognized as the father of modern competitive strategy. A professor at Harvard Business School, he framed strategy as “choosing to run a different race, because it’s the one you’ve set yourself up to win.” His book “Competitive Strategy” remains one of the foundational texts in business thinking (his book on Amazon – not an affiliate link).
Henry Mintzberg is one of the most cited academics in management and organizational theory. He is famous for noting, “Strategy is not the consequence of planning, but the opposite: its starting point.” He also developed the 5 Ps framework — Plan, Ploy, Pattern, Position, and Perspective — which captures strategy as both deliberate and emergent (Mintzberg’s “The Strategy Concept I: Five Ps for Strategy”).
Peter Drucker is often called the father of modern management. His work shaped how companies think about leadership and decision-making. He emphasized that “the task of leadership is to create an alignment of strengths so strong that it makes the system’s weaknesses irrelevant.” His book “The Practice of Management” is considered a landmark in defining management’s role in aligning strategy with organizational outcomes (Drucker biography at the Drucker Institute).
Tactics, by contrast, are the practical steps. They’re what frontline teams execute, usually with short-term horizons. A strategy might be to compete on customer trust instead of low prices. The tactics are testimonial campaigns, return policies, and training staff to deliver exceptional service.
Other business functions get this distinction. In sales, strategy is deciding to prioritize enterprise accounts. Tactics are outreach sequences and demo scripts. In PR, strategy is positioning the brand as an industry leader. Tactics are pitching journalists and writing press releases. SEO is no different.
The confusion comes because SEO often has to “do it all.” Practitioners are expected to identify opportunities, set priorities, and then execute the work. That’s where the labels blur.
Strategy And Tactics In Traditional SEO
Looking back at the history of SEO makes the divide easier to see.
Early 2000s, PageRank era: Strategy was simple: invest in being discoverable on Google. Tactics included link building, directory submissions, and keyword-stuffed pages. Companies that treated SEO purely tactically often succeeded short-term but collapsed when penalties arrived. Leadership set the clear strategic goal of visibility on Google, while SEO executed it through those tactics.
2010–2015, Panda and Penguin: Google cracked down on low-quality content and manipulative links. Strategy shifted to “quality and sustainability.” Tactics became pruning thin content, disavowing bad links, and investing in editorial teams. Content farms like Demand Media scaled on tactics but lacked a sustainable strategy, and Panda decimated them. Here again, leadership set the strategic shift toward quality, and SEO carried it out through content pruning and link cleanup.
2015–2020, Mobile and Core Web Vitals: Strategy was “meet users where they are.” They were on mobile and wanted fast experiences. Tactics were responsive design, structured data, and site speed audits. Companies that made this strategic shift early (e.g., news outlets investing in AMP) gained advantage. The strategic goal was serving users where they were, while SEO implemented the tactical fixes that delivered on it.
2020s, BERT and passage indexing: Strategy tilted toward semantic relevance, competing not just on keywords but on meaning and intent. Tactics were writing for topics, structuring content for passage-level retrieval, and emphasizing context. Strategy tilted toward meaning; tactics followed in the form of topic clusters and passage-level optimization.
At every stage, leadership set the strategy (“we need growth from search”), and SEO executed the tactics. Advanced SEOs sometimes influenced strategy by warning about risks or opportunities, but the bulk of work remained tactical.
Strategy And Tactics In GenAI Optimization
Generative AI reshapes the landscape. Instead of 10 blue links, users now get synthesized answers. That changes both the strategic questions and the tactical execution.
Strategic choices now include:
Deciding whether to compete for visibility across multiple AI engines (ChatGPT, Perplexity, Gemini, Claude, etc.).
Choosing where to allocate budget: competing for evergreen visibility in broad topics, or dominating narrow niches where AI coverage is weaker.
Determining how much to invest in retrievability testing and monitoring as an organizational function.
Tactical execution now includes:
Structuring content into retrievable chunks sized for vector search.
Running retrieval tests across platforms to measure exposure.
Optimizing semantic density so each chunk is information-rich and self-contained.
Adding schema and structured data to clarify entities and facts.
Tracking machine-validated authority by measuring whether your content is surfaced or cited in AI responses.
Query fan-out work to determine opportunities and identify semantic overlap.
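To make the chunking tactic above concrete, here is a hedged sketch of splitting an article into self-contained, retrieval-sized chunks. The 200-word target and the paragraph-boundary packing rule are illustrative assumptions, not a standard; production pipelines usually chunk by tokens with overlap.

```python
def chunk_article(text: str, max_words: int = 200) -> list[str]:
    """Greedily pack whole paragraphs into chunks of at most max_words."""
    chunks, current, count = [], [], 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        words = len(para.split())
        # Flush the current chunk if adding this paragraph would exceed the budget.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

article = (
    "Intro paragraph about ESG funds.\n\n"
    "Details on screening criteria.\n\n"
    "A closing summary."
)
for i, chunk in enumerate(chunk_article(article, max_words=10)):
    print(i, len(chunk.split()), "words")
```

The point of keeping chunks whole at paragraph boundaries is that each retrieved passage should be readable and self-contained, which is the “semantic density” goal described above.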
These tactics look new, but they build directly on the foundation of traditional SEO. Schema is simply structured markup, refined. Semantic density is the next evolution of topical relevance. Retrieval tests are the modern equivalent of checking indexation. GenAI optimization doesn’t replace SEO; it evolves from it.
GenAI Optimization: Is It A Strategy Or Tactic?
The sudden surge of interest in “GenAI optimization” is a perfect case study in this strategy-versus-tactics debate.
Everyone is talking about chunking, embeddings, and retrievability as if they are strategy. They aren’t. They’re tactics. And treating tactics as strategy is a classic oversimplification, something the industry has been guilty of for decades.
At the strategic level: Businesses decide that GenAI visibility is essential. They commit budget to becoming retrievable and authoritative across AI systems. They set goals to be cited in machine answers for their core vertical.
At the tactical level: Teams restructure content into chunks, add schema, run retrieval probes in ChatGPT or Perplexity, and measure citation frequency.
Both layers are needed. The risk comes when companies mistake tactical execution for strategy.
The Cost Of Misalignment
When strategy and tactics are misaligned, businesses lose, and the losses are measurable.
Missed opportunities: If leadership hasn’t set a strategy for GenAI visibility, tactical work is scattershot. Teams optimize content but don’t know which queries, topics, or surfaces matter. Competitors with clearer strategies win the ground.
Lost revenue: Without strategy, companies may secure citations in AI answers that don’t align with customer value. The result is visibility without conversion.
Wasted budgets: Chasing every GenAI trend without a clear North Star leads to investment in tools and audits that deliver no meaningful ROI.
Eroded trust: When executives believe they’ve funded a strategy but only see tactical outputs, confidence in SEO teams drops. Leadership expected market impact and the team only delivered structural updates.
The lesson is blunt: Businesses don’t usually fail because tactics are poorly executed. They usually fail because tactics aren’t anchored in strategy.
Why SEO Has Been Seen As Tactical
For two decades, SEO has been defined by tactical output. Executives set the strategy (“We need organic growth”), and SEO was tasked with delivering through audits, fixes, optimizations, and publishing.
This framing wasn’t wrong as it reflected the organizational structure. Strategy was set higher up; SEOs carried it out. That’s why SEO often struggled to win budget or a seat in strategic planning meetings. It was seen as execution.
The AI-Driven Shift
Generative AI changes that equation. Machines are absorbing tactical SEO tasks. Today, AI tools can generate meta descriptions, suggest keywords, build internal linking recommendations, even create structured schema markup. Some platforms simulate retrieval patterns directly. What once required specialized SEO execution is increasingly automated.
That doesn’t eliminate SEO. It elevates it. If tactical execution is becoming commoditized, the value shifts to strategy.
This mirrors Microsoft’s research on AI’s occupational impacts, which distinguishes between user goals (strategic intent) and AI actions (tactical execution). Humans set the “why.” AI delivers the “how.”
For SEO, the same shift is underway. The tactical layer is being automated. The strategic opportunity is to lead on visibility, authority, and trust in AI-driven ecosystems.
Drawing The Line With Examples
Traditional SEO in financial services: The strategy is to dominate “retirement planning for millennials.” The tactics include creating calculators, publishing evergreen guides, optimizing metadata, and building relevant backlinks.
GenAI optimization in sustainable investing: The strategy is to ensure the brand is a trusted citation in AI answers on ESG funds. The tactics include running retrieval checks in Perplexity, embedding structured citations, optimizing chunks for semantic clarity, and measuring citation frequency in ChatGPT and Gemini.
One sets the direction. The other executes the playbook.
Why This Feels Touchy
Many SEOs call their work strategic because they connect content, technical architecture, and authority signals into a broader picture. In many organizations, they’re the only ones framing visibility at all.
That deserves credit. But precision matters. Running an audit is not strategy. Updating a robots.txt file is not strategy. These are tactical actions. If we blur the line, we diminish our influence at precisely the moment AI is eroding the value of tactics.
Where The Balance Lies
So, is SEO strategic or tactical? The honest answer is both, but not equally.
Historically, SEO has been tactical.
Today, SEO carries strategic implications, especially as AI reshapes discovery.
The opportunity is to make the leap: from executing optimizations to shaping how organizations appear in machine-driven answers.
The balance is this: Tactics still matter. You can’t ignore schema, chunking, or retrieval testing. But the differentiator is strategy: deciding which battles to fight, which surfaces to win, and how to align SEO with the company’s long-term positioning.
Why This Matters For SEOs
This isn’t a semantic debate. It’s about influence and survival.
If SEO is seen as tactical, it’s underfunded, siloed, and brought in too late.
If SEO is seen as strategic, it gets budget, resources, and a seat in the boardroom.
The GenAI shift creates a once-in-a-generation opening for SEOs to redefine their value. As AI absorbs more tactical execution, the real opportunity is for SEOs to align with company-level strategy and expand their scope into visibility, trust, and authority. Those who recognize the difference between strategy and tactics will step into leadership. Those who stay focused only on tactical execution risk being automated out.
Closing Thought
For 20 years, SEO has been tactical excellence in service of growth. With GenAI, the tactical layer is shifting to machines. That makes strategy the defining frontier.
Not because SEOs suddenly became strategists, but because the environment demands it. The question now is: will SEOs step into that role, or will someone else claim it?
Google’s John Mueller answered a question about how many sitemaps to upload, and then said there are no guarantees that any of the URLs will be crawled right away.
A member of the r/TechSEO community on Reddit asked if it’s enough to upload the main sitemap.xml file, which then links to the more granular sitemaps. What prompted the question was their concern over recently changing their website page slugs (URL file names).
That person asked:
“I submitted ‘sitemap.xml’ to Google Search Console, is this sufficient or do I also need to submit page-sitemap.xml and sitemap-misc.xml as separate entries for it to work? I recently changed my website’s page slugs, how long will it take for Google Search Console to consider the sitemap?”
Mueller responded that uploading the sitemap index file (sitemap.xml) was enough and that Google would proceed from there. He also shared that it wasn’t necessary to upload the individual granular sitemaps.
What was of special interest were his comments indicating that uploading sitemaps didn’t “guarantee” that all the URLs would be crawled and that there is no set time for when Googlebot would crawl the sitemap URLs. He also suggested using the Inspect URL tool.
He shared:
“You can submit the individual ones, but you don’t really need to. Also, sitemaps don’t guarantee that everything is recrawled immediately + there’s no specific time for recrawling. For individual pages, I’d use the inspect URL tool and submit them (in addition to sitemaps).”
Is There Value In Uploading All Sitemaps?
According to John Mueller, it’s enough to upload the index sitemap file. However, from our side of Search Console, I think most people would agree it’s better not to leave to chance whether Google will crawl a URL. For that reason, SEOs may find it reassuring to go ahead and upload all of the sitemaps that contain the changed URLs.
The URL Inspection tool is a solid approach because it enables SEOs to request crawling for a specific URL. The downside of the tool is that you can only request this for one URL at a time. Google’s URL Inspection tool does not support bulk URL submissions for indexing.
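For reference, a sitemap index file like the one in the question is just an XML file whose `<sitemap>` entries point at the granular child sitemaps, per the sitemaps.org protocol. A minimal sketch of generating one (the URLs are placeholders):

```python
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap_index(child_sitemaps):
    """Build a sitemap index that references each child sitemap URL."""
    ET.register_namespace("", NS)  # serialize with a default (unprefixed) namespace
    index = ET.Element(f"{{{NS}}}sitemapindex")
    for url in child_sitemaps:
        entry = ET.SubElement(index, f"{{{NS}}}sitemap")
        ET.SubElement(entry, f"{{{NS}}}loc").text = url
    return ET.tostring(index, encoding="unicode")

print(build_sitemap_index([
    "https://example.com/page-sitemap.xml",
    "https://example.com/sitemap-misc.xml",
]))
```

This is the structure Google follows when you submit only the index file: it reads the `<loc>` entries and discovers the child sitemaps on its own.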
You may have noticed your organic traffic looking different lately. Rankings fluctuate wildly, your content appears in AI summaries one week and vanishes the next, and users are increasingly getting their answers without ever visiting your website.
When 58.5% of searches end without a click, that carefully optimized content you spent weeks perfecting might be feeding AI answers instead of driving traffic to your site.
We’re witnessing the biggest shift in search since Google’s early days. Traditional SEO tactics aren’t enough anymore.
You need a strategy that works when AI systems become the middleman between your content and your audience.
The New Search Reality: AI Is Eating Your Clicks
Let’s be honest about what’s happening.
Google’s AI Overviews now appear for over 11% of all searches according to BrightEdge research, pulling information from multiple sources to create comprehensive answers above your organic results. Users get what they need without clicking through.
But it’s not just Google. Perplexity processes over 780 million searches monthly, while ChatGPT’s browsing feature handles complex queries that once required multiple website visits to answer.
Your Content Is Working, Just Not How You Expected
Here’s what’s particularly frustrating: Your content is often powering these AI responses, but you’re not getting credit or traffic for it.
Search for [email automation] on Google and you’ll see a comprehensive AI Overview that defines the concept, explains how it works in four detailed steps, lists benefits, provides examples, and even mentions specific tools like ActiveCampaign and Mailchimp.
This response synthesizes information from multiple sources into one complete answer that eliminates the need to visit any individual website.
The user gets a definition, step-by-step process, benefits, examples, and tool recommendations all in one place.
Meanwhile, the original content creators who researched and wrote about email automation triggers, personalization strategies, and platform comparisons see their expertise repackaged without receiving the traffic they would have earned from traditional search results.
Screenshot from search for [email automation], Google, July 2025
This is the new normal. Voice search and conversational AI are training users to expect complete answers, not blue links to explore.
Zero-click searches aren’t killing SEO; they’re evolving it. Your content needs to work harder in this new environment.
What Marketers Need To Rethink
Forget everything you know about traditional SEO success metrics. The game has fundamentally changed.
Shift Your Focus: From Rankings To Mentions
That coveted No. 1 ranking? While still valuable, it’s becoming less reliable for driving traffic when AI systems deliver answers directly to users.
Rankings still matter, especially for commercial queries where users want to browse options. But for informational searches where users seek quick answers, your content’s value now extends beyond its position in organic results.
Think about it this way: When someone asks ChatGPT or Google AI Mode about your industry, does your brand get mentioned? That’s your new battleground.
Your New Success Metrics
Instead of obsessing over click-through rates, you need to start tracking metrics that capture AI influence on your brand:
Brand mentions in AI responses across platforms tell you whether your content is being cited and referenced.
Branded search volume spikes often follow AI feature appearances.
Conversion assists where organic search was part of the user’s journey but not the final touchpoint.
Customer surveys asking, “How did you hear about us?” reveal AI influence that analytics can’t capture.
I’ve seen clients with flat traffic numbers but 200% increases in brand mentions in AI responses. That’s invisible growth that traditional analytics miss entirely.
Practical Strategies That Work
Here’s how to adapt your SEO approach for AI-powered search. These are strategies I’ve tested with clients across different industries.
Make Your Content AI-Friendly
The most important shift you can make is structuring your content for AI comprehension.
Place your main answer within the first one to two sentences of any piece of content. Think of it like writing a news article where the lead paragraph contains all the crucial information.
If someone asks, “What are the benefits of meditation?” your opening should be, “Meditation reduces stress, improves focus, and enhances emotional well-being through regular practice.” Then expand with details, examples, and supporting evidence.
Look at this great example from NerdWallet:
Screenshot from NerdWallet, July 2025
This approach serves both human readers who want quick answers and AI systems that prioritize clear, immediate responses. When Google’s AI Overview or ChatGPT pulls from your content, that opening statement becomes your brand’s voice in the answer.
I’ve seen this strategy increase AI citation rates by 40% for clients who consistently implement it.
Key formatting strategies that work:
Structured formats: Transform dense paragraphs into FAQs, numbered lists, and tables that AI can easily parse.
Clear headings: Create content hierarchy with H2 and H3 headings that AI can follow.
A well-structured FAQ section doesn’t just help users. It becomes a goldmine for AI systems looking for clear question-answer pairs.
Consider transforming complex pricing information into tables rather than burying details in lengthy paragraphs.
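As an illustration of the structured-format point, here is a minimal sketch that emits schema.org FAQPage markup as JSON-LD, the format AI systems and search engines can parse as explicit question-answer pairs. The question and answer text are placeholders.

```python
import json

def faq_jsonld(pairs):
    """Serialize (question, answer) pairs as schema.org FAQPage JSON-LD."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("What is email automation?",
     "Software-triggered emails sent based on user behavior or schedules."),
]))
```

The resulting script block would normally be embedded in the page inside a `<script type="application/ld+json">` tag alongside the visible FAQ content.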
Build Citation-Worthy Authority
Creating content that AI systems want to reference requires a fundamental shift from aggregating existing information to generating original insights.
Publish studies, proprietary data, and exclusive interviews that can only come from your organization.
LLMs prioritize original sources over aggregated information, making your research significantly more likely to be cited and attributed.
Instead of stating facts directly, frame them as insights from your organization. “According to our research at [Company Name]” or “Based on our analysis of 10,000 customer surveys” signals to AI systems that the information comes from a specific, credible source.
This technique helps ensure that when LLMs pull information from your content, they’re more likely to include your brand name in the response.
If you’re in the gardening space, don’t just write one article about composting. Create a comprehensive resource covering composting basics, troubleshooting common problems, seasonal considerations, and advanced techniques, then link these pieces together strategically.
This clustering approach works because LLMs assess credibility partly based on depth and breadth of coverage.
Sites that demonstrate comprehensive knowledge on topics are more likely to be seen as authoritative sources worth citing.
I’ve watched brands jump from occasional mentions to consistent AI citations by implementing this strategy over six to 12 months.
Diversify Beyond Traditional Search
Don’t put all your eggs in the Google basket. AI systems pull information from diverse sources, and expanding your content distribution increases your chances of being included in LLM training data and responses.
Recent research from Ahrefs analyzing 78.6 million AI responses across Google AI Overviews, ChatGPT, and Perplexity reveals which platforms get cited most frequently.
The data shows clear patterns in what each AI system prefers to reference.
Platforms worth prioritizing based on AI citation data:
YouTube: Dominates Perplexity citations (16.1% mention share) and ranks high in AI Overviews (9.5%), making video content crucial for AI visibility.
Reddit: Heavily favored by Google AI Overviews (7.4% mention share) but absent from ChatGPT and Perplexity’s top citations.
News and industry publications: ChatGPT shows a strong preference for news outlets like Reuters and Apple News, making media coverage valuable.
Wikipedia: Leads citations across all three platforms, emphasizing the importance of having your brand or expertise documented on authoritative reference sites.
The research reveals that different AI systems have distinct preferences.
Google’s AI Overviews favor user-generated content from Reddit and Quora, while ChatGPT prioritizes news sources and authoritative publications.
Perplexity shows the strongest preference for YouTube content alongside Wikipedia.
Each platform has its own content style and audience, so adapt your messaging accordingly.
A LinkedIn post about industry trends might become a source for business-related AI responses, while a YouTube video explanation could be referenced for educational queries.
The key is maintaining consistent expertise and messaging across all channels.
Testing your content directly in different AI platforms gives you immediate feedback on how it’s being interpreted and used.
Ask ChatGPT questions related to your expertise and see if your content appears in the responses. Query Perplexity about industry topics you’ve covered.
This direct testing helps you understand how different AI systems process and present your information, allowing you to refine your approach based on real results.
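To make those checks repeatable, you can paste each AI answer into a small script and record which brands it names. A hedged sketch, where the sample answer and brand list are invented:

```python
import re

def brands_mentioned(answer, brands):
    """Return the brands that appear as whole words in an AI answer."""
    return [
        brand for brand in brands
        if re.search(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE)
    ]

answer = "Popular options include Mailchimp and ActiveCampaign for automation."
print(brands_mentioned(answer, ["Mailchimp", "HubSpot", "ActiveCampaign"]))
# → ['Mailchimp', 'ActiveCampaign']
```

Run the same queries monthly and log the results, and you have a simple longitudinal record of citation frequency per platform.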
Measuring Success In A Post-Click World
Traditional metrics aren’t telling the whole story anymore, and honestly, this is where most marketers struggle with the transition to AI-era SEO.
You’re used to clear, quantifiable metrics like organic traffic and click-through rates. Now you need to track influence that often happens without any direct interaction with your website.
Track AI Visibility Across Platforms
Start by monitoring featured snippets and AI Overview inclusions. These placements often indicate that AI systems are pulling from your content, even if they don’t generate the clicks you’re used to seeing.
Set up alerts for when your content gets featured because these appearances frequently correlate with increases in branded search volume and direct traffic.
Check if your brand appears when users ask AI tools about your industry. Search for your company name in ChatGPT, Perplexity, and Google’s AI Overview to see how you’re being represented.
You might discover that your brand is being mentioned in contexts you didn’t expect, giving you insights into how AI systems perceive your authority.
Social media monitoring becomes more important in this landscape because people often discuss insights they learned from AI summaries.
Set up tracking for mentions where people reference concepts or data points that originally came from your content, even if they don’t directly cite your brand.
These conversations indicate that your content is influencing discussions, even when traditional attribution models miss the connection.
Attribution Modeling For Invisible Influence
The challenge with zero-click searches is that they force you to rethink how you measure content success.
A user might read your advice in an AI summary today, then visit your site directly next week after remembering your brand name. Traditional last-click attribution completely misses this connection, making your SEO efforts appear less valuable than they actually are.
Implement first-touch attribution models that credit SEO for starting customer journeys, even when other channels complete the conversion.
Survey your new customers about how they first discovered your brand, and you’ll often find they mention seeing your content in search results or AI responses weeks before converting. This qualitative data fills in gaps that analytics can’t capture.
Look for patterns where direct traffic increases after your content gets featured in AI responses. Create custom UTM parameters for content that frequently appears in AI summaries.
While you can’t track every citation, you can identify trends in how AI-discovered content influences broader marketing performance.
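UTM tagging like this can be scripted so every AI-facing asset gets consistent parameters. A minimal sketch using Python’s standard library; the parameter values are illustrative, not a standard taxonomy:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_utm(url, source, medium, campaign):
    """Append utm_source/utm_medium/utm_campaign to a URL's query string."""
    parts = urlparse(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    # Preserve any existing query parameters.
    query = parts.query + ("&" if parts.query else "") + utm
    return urlunparse(parts._replace(query=query))

print(add_utm("https://example.com/guide", "ai-citation", "referral", "email-automation"))
# → https://example.com/guide?utm_source=ai-citation&utm_medium=referral&utm_campaign=email-automation
```

Consistent tagging is what lets you later group visits from AI-cited content in analytics rather than losing them in direct traffic.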
Watch for increases in newsletter signups, demo requests, or branded searches following AI feature appearances.
Google Analytics 4’s attribution modeling can help you understand these multitouch journeys better than previous versions. Configure it to show conversion assists where organic search was part of the user’s path but not the final touchpoint.
This reveals the true value of your SEO efforts in an environment where direct attribution becomes increasingly difficult.
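A first-touch model like the one described can be prototyped quickly to see how credit shifts compared with last-click reporting. A toy sketch with fabricated journey data:

```python
from collections import Counter

def first_touch_credit(journeys):
    """Credit the channel that started each converting journey."""
    return Counter(journey[0] for journey in journeys if journey)

def last_touch_credit(journeys):
    """Credit the channel that closed each converting journey."""
    return Counter(journey[-1] for journey in journeys if journey)

journeys = [
    ["organic_search", "direct"],           # discovered via search, converted via direct
    ["ai_citation", "email", "direct"],
    ["organic_search"],
]
print(first_touch_credit(journeys))  # organic_search gets 2 conversions, ai_citation 1
print(last_touch_credit(journeys))   # last-click would credit direct with 2 instead
```

Comparing the two counters side by side makes the argument for leadership tangible: under last-click, search and AI citations look worthless; under first-touch, they started most of the journeys.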
Tools And Techniques For Modern Measurement
SparkToro helps you understand where your audience discovers content and which sources they trust.
Use it to identify if your brand is being mentioned in the same contexts as industry leaders, indicating you’re gaining mindshare even without direct clicks.
This competitive intelligence reveals whether your AI strategy is working compared to others in your space.
Beyond traditional tools, create a systematic monitoring approach using multiple AI platforms.
Set up monthly checks to see if your citation frequency is increasing and which topics generate the most AI references.
Document examples of how your content gets referenced and summarized to understand what formats work best.
Remember that influence in AI responses often correlates with long-term brand growth, even if immediate traffic metrics look flat.
While comprehensive research on AI citation impact is still emerging, the pattern mirrors what we’ve seen with other “zero-click” features like featured snippets: brand exposure through authoritative citations can drive awareness and consideration that results in direct searches and conversions over time.
The key is connecting these invisible influences to eventual business outcomes.
Building Long-Term Resilience In An AI-First World
The brands that thrive in this new landscape will not just adapt to current changes.
They will anticipate what comes next and build systems that can weather the unprecedented volatility that AI-powered search brings.
Prepare For AI Volatility
Traditional core Google algorithm updates happen a few times per year and usually follow predictable patterns.
With each model update, LLMs can change their behavior, creating unprecedented volatility in search visibility that most SEO professionals haven’t experienced before.
Your content might appear in ChatGPT responses one week and disappear the next. This isn’t a bug or a penalty. It’s how LLMs work.
They constantly learn and adjust their understanding of what constitutes authoritative information based on new training data and updated models.
Instead of panicking over daily fluctuations, track broader patterns in brand mentions, branded search volume, and conversion trends.
These metrics provide more stable indicators of your content’s impact than individual AI citations, which can vary significantly based on model updates and algorithmic adjustments.
Your brand needs to be what I call “retypeable,” the kind of name people remember and search for when they’re ready to take action.
When users encounter your brand in an AI summary, they should immediately associate it with your core value proposition and remember it later when they’re ready to engage.
Build Flexible Systems
Set up processes to review and refresh your most important pages quarterly.
LLMs prioritize current information more heavily than traditional search engines, so maintaining content freshness becomes critical for sustained AI visibility.
Develop relationships with other authoritative sources in your industry through collaborations, partnerships, and cross-references.
The more your brand appears in connection with recognized authorities, the stronger your credibility signals become for AI systems.
These relationships create natural mentions across different content formats and platforms that extend beyond what you can control directly.
The Future Of SEO Is About Influence, Not Clicks
The shift to AI-powered search is changing not just how people find information but also how brands build authority and trust.
Companies that recognize this early and adapt their strategies accordingly will own the conversation in their industries, while others struggle to understand why their traditional SEO efforts aren’t delivering the same results.
Your content is still working. It’s influencing decisions, building brand awareness, and driving conversions.
You just need new ways to measure and optimize for its impact in an environment where visibility doesn’t always equal clicks, but influence still equals business growth.
Over the years, I’ve worked with numerous companies that engaged me to create world-class Search organizations and win the global search game, only to block the majority of the initiatives required to achieve that goal. This disconnect often stems from how the C-suite perceives its website.
In too many boardrooms, the site is still seen as a digital brochure and an expense managed by marketing, with limited scrutiny or strategic oversight. Yet that same site touches nearly every phase of the customer journey, as well as investor perception, partner evaluation, and talent acquisition.
This article brings those ideas together under a single call to action: It’s time for executive leadership to own web performance as a measurable, managed business function.
What Is The Digital Performance Gap?
The Digital Performance Gap is the measurable distance between your online potential and actual business outcomes. Most companies are leaking performance through misaligned teams, disconnected key performance indicators (KPIs), outdated platforms, or siloed operations.
Symptoms include:
Underwhelming organic traffic and conversions.
Disconnected websites across departments or geographies.
Content that ranks but doesn’t convert (or worse, can’t even be found).
Slow responsiveness to AI shifts and platform changes.
Tools and vendors operating without return on investment (ROI) oversight.
In short: You’re paying for a Ferrari and driving it like a lawnmower.
From Pit Crew To Performance System: A Better Analogy
Imagine you’re the owner of an F1 racing team. You’ve got the budget, the ambition, and a roster of great people – from engineers to mechanics to a world-class driver.
However, the engine design was handled by a team that never consulted with the race strategist. Your telemetry data doesn’t reach the pit wall. The car is fast in theory, but coordination is poor, and outcomes are inconsistent.
Sound familiar?
That’s how many enterprise websites operate. Everyone is working hard in their silos. But without integrated planning, shared goals, or clear leadership, the system can’t perform at its full potential.
Web effectiveness isn’t just about the “driver” (e.g., SEO or content teams); it’s about the entire vehicle and how the organization supports it. And the C-suite? They’re the race directors. When the directors don’t orchestrate the team, the whole system suffers.
In elite racing, the pit crew doesn’t just change tires. They analyze data, forecast risks, and adapt in real time. Their split-second coordination with the driver wins races. That’s what a web performance system should look like – fully integrated, real-time, and strategically directed.
But instead of this synergy, most digital organizations resemble a collection of vendors and internal teams using different playbooks, judged by different KPIs, and waiting for executive direction that never comes.
You can’t win the race if the engine team is optimizing for safety, the strategist is optimizing for top speed, and the pit crew is trying to meet tire budget KPIs. That’s not cross-functional excellence – it’s cross-functional chaos.
Web Effectiveness Is A Business Metric
Web Effectiveness is the degree to which your digital presence delivers against real business goals. Its dimensions include:
Relevance (structured content that solves user needs).
Integration (connected to customer relationship management (CRM) systems, data layers, and product feeds).
This isn’t marketing fluff. It’s operational excellence.
When no one owns it, everyone loses.
IT may control infrastructure.
Marketing manages messaging.
Sales owns conversion.
Legal redlines half the useful copy.
But no one owns the outcome. That’s a leadership failure.
The High Cost Of No Ownership
When the C-suite doesn’t take web performance seriously, the costs compound:
Visibility declines. You’re outranked by competitors who understand AI’s new rules.
Opportunity evaporates. Valuable search terms go unanswered – or worse, answered by the platforms themselves.
Budgets get wasted. You pay for tools, agencies, and tech that aren’t integrated or even used.
Your story gets told by others. Generative engines summarize what they find. If your content isn’t structured or visible, you’re not even in the conversation.
Even companies that only exist online often fail to fully leverage the very platform that drives their value.
What Executive Ownership Looks Like
Executive ownership doesn’t mean micromanaging metadata – it means ensuring that:
Web outcomes are tied to business KPIs.
Budgeting reflects strategic priority, not departmental silos.
SEO, UX, content, and dev teams are operating under a unified model.
Vendor evaluations include contribution to visibility and performance.
Someone is accountable for closing the performance gap.
Consider creating a Web Effectiveness Center of Excellence or appointing a Digital Effectiveness Officer to champion this mandate.
A Framework For Closing The Gap
To transition from fragmented efforts to strategic impact, organizations require a shared operating model. Here’s a high-level Web Effectiveness Framework:
Governance: Who owns what? Are responsibilities clear?
Visibility: Can search engines and AI systems discover, interpret, and cite your content?
Experience: Are you delivering what users need – on every device, in every format?
Optimization: Are you using the platforms, features, and data you already pay for?
Measurement: Are you tracking impact, not just traffic?
This framework can be scaled across divisions, regions, and lines of business. The key is treating your site not as a brochure, but as your most valuable digital asset.
Final Thought: Time To Step In
Closing the Digital Performance Gap starts with a mindset shift: from cost center to growth platform. From tactical ownership to strategic leadership.
Today’s website is no longer just a reflection of your brand—it is your brand. It’s where customers decide to trust you, where partners evaluate your credibility, and where investors form first impressions. Yet far too often, this central asset is owned by no one, governed by outdated workflows, and limited by KPIs that belong to another era.
Let’s be clear: digital excellence doesn’t happen by accident. It’s the result of intentional alignment between leadership, teams, and technology. And that alignment starts with the C-suite.
CMOs must champion performance and not just promotion. CTOs must prioritize enablement and not just uptime. CEOs must encourage cross-functional alignment, efficiency, speed, agility, and clarity to ensure optimal performance.
Web effectiveness should no longer be framed as a project, initiative, or marketing tactic. It’s a performance system. A business function. A shared responsibility. And if you don’t have someone responsible for web performance at the leadership level, it’s time to create that role. A Digital Effectiveness Officer, a Center of Excellence, or, at a minimum, a cross-functional ownership council that brings visibility, accountability, and forward momentum.
Because here’s the truth: If you don’t own your website’s performance, someone else will define your digital reputation—and capture your audience. Bring web effectiveness into the boardroom. Align your teams. Close the gap.
A LinkedIn post called attention to Perplexity’s content discovery feed, Discover, which generates content on trending news topics. It praised the feed as a positive example of programmatic SEO, although some said that its days in Google’s search results are numbered. Everyone in that discussion believes those pages are one thing. In fact, they are something else entirely.
Context: Perplexity Discover
Perplexity publishes a Discover feed of trending topics. The page acts as a portal to the day’s news, featuring short summaries that link to web pages containing the full summary plus links to the original news reporting.
SEOs have noticed that some of those pages are ranking in Google Search, spurring a viral discussion on LinkedIn.
Perplexity Discover And Programmatic SEO
Programmatic SEO is the use of automation to optimize web content at scale; the term can also apply to scaled content creation. It can be tricky to pull off well, and a botched execution can produce poor outcomes.
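To make the mechanism concrete, here is a minimal, hypothetical sketch of programmatic page generation: one template is filled from structured data so that every trending topic gets its own page with a unique title and description. The template, topics, and slugs are all invented for illustration and are not Perplexity’s actual implementation.

```python
from string import Template

# Hypothetical page template: each generated page gets a unique
# title, meta description, and body built from the topic data.
PAGE_TEMPLATE = Template(
    "<html><head>"
    "<title>$topic - Daily Briefing</title>"
    '<meta name="description" content="Latest updates on $topic.">'
    "</head><body><h1>$topic</h1><p>$summary</p></body></html>"
)

def generate_pages(topics):
    """Return a dict mapping URL slug -> HTML page, one per topic."""
    pages = {}
    for topic, summary in topics:
        slug = topic.lower().replace(" ", "-")
        pages[slug] = PAGE_TEMPLATE.substitute(topic=topic, summary=summary)
    return pages

pages = generate_pages([
    ("Mars Rover Update", "New findings from the latest mission."),
    ("AI Search Trends", "How generative engines change discovery."),
])
print(list(pages))  # ['mars-rover-update', 'ai-search-trends']
```

The point is the pattern: one template, many data records, many pages – with the data source in practice being a feed or database rather than a hard-coded list.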
A LinkedIn post calling attention to the Perplexity AI-generated Discover feed cited it as an example of programmatic SEO “on steroids.”
They wrote:
“For every trending news topic, it automatically creates a public webpage.
These pages are now showing up in Google Search results.
When clicked, users land on a summary + can ask follow-up questions in the chatbot.
…This is such a good Programmatic SEO tactic put on steroids!”
One of the comments in that discussion hailed the Perplexity pages as an example of good programmatic SEO:
“This is a very bold move by Perplexity. Programmatic SEO at scale, backed by trending topics, is a smart way to capture attention and traffic. The key challenge will be sustainability – Google may see this as thin content or adjust algorithms against it. Still, it shows how AI + SEO is evolving faster than expected.”
Another person agreed:
“SEO has been part of their growth strategy since last year, and it works for them quite well”
The rest of the comments praised Perplexity’s SEO as “bold” and “clever” as well as providing “genuine user value.”
But there were also some that predicted that “Google won’t allow this trend…” and that “Google will nerf it in a few weeks…”
The overall sentiment toward Perplexity’s implementation of programmatic SEO was positive.
Except that there is no SEO.
Perplexity Discover Is Not Programmatic SEO
Contrary to what was said in the LinkedIn discussion, Perplexity is not engaging in “programmatic SEO,” nor are they trying to rank in Google.
A peek at the source code of any of the Discover pages shows that the title elements and the meta descriptions are not optimized to rank in search engines.
Screenshot Of A Perplexity Discover Web Page
Every single page created by Perplexity appears to have the exact same title and meta description elements. The title element is simply:
<title>Perplexity</title>
Every page contains the same canonical tag:
<link rel="canonical" href="https://www.perplexity.ai" />
It’s clear that Perplexity’s Discover pages are not optimized for Google Search and that the pages are not created for search engines.
The pages are created for humans.
Given how the Discover pages are not optimized, it’s not a surprise that every page I tested failed to rank in Google Search.
It’s clear that Perplexity is not engaged in programmatic SEO:
Perplexity’s Discover pages are not created to rank in Google Search.
Perplexity’s Discover pages are created specifically for humans.
If any pages rank in Google, that’s entirely an accident and not by design.
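Readers who want to verify this kind of claim themselves can extract the title element and canonical tag from a page’s source with nothing but Python’s standard library. The sample HTML below is a stand-in constructed for this example, not actual Perplexity source code; to audit a live page, you would feed the parser the downloaded HTML instead.

```python
from html.parser import HTMLParser

class SEOTagParser(HTMLParser):
    """Collect the <title> text and the canonical <link> href from HTML."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.canonical = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        # Accumulate text only while inside the <title> element.
        if self._in_title:
            self.title += data

# Stand-in HTML mimicking identical, unoptimized head elements.
sample = (
    '<html><head><title>Perplexity</title>'
    '<link rel="canonical" href="https://www.perplexity.ai" />'
    "</head><body></body></html>"
)
parser = SEOTagParser()
parser.feed(sample)
print(parser.title)      # Perplexity
print(parser.canonical)  # https://www.perplexity.ai
```

If every page on a site yields the same title and the same site-root canonical, that is strong evidence the pages were not built with search rankings in mind.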
What Is Perplexity Actually Doing?
Perplexity’s Discover pages are examples of something bigger than SEO. They are web pages created for the benefit of users. The fact that no SEO is applied shows that Perplexity is focused on making the Discover pages destinations that users turn to in order to keep in touch with the events of the day.
Perplexity Discover is a user-first web destination created with zero SEO, likely because the goals are more ambitious than depending on Google for traffic.
The Surprising SEO Insight?
It may well be that a good starting point for creating a website, and forming a strategy for promoting it, lies outside the SEO sandbox. In my experience, I’ve had success creating and promoting sites outside the standard SEO framework, because SEO strategies are inherently limited: they have one goal – ranking – and miss out on activities that create popularity.
SEO limits how you can promote a site with arbitrary rules such as:
Don’t obtain links from sites that nofollow their links.
Don’t get links from sites that have low popularity.
Offline promotion doesn’t help your site rank.
And here’s the thing: promoting a site with strategies focused on building brand name recognition with an audience tends to create the kinds of user behavior signals that we know Google is looking for.