The Behavioral Data You Need To Improve Your Users’ Search Journey via @sejournal, @SequinsNsearch

We’re more than halfway through 2025, and SEO has already changed names many times to take into account the new mission of optimizing for the rise of large language models (LLMs): We’ve seen GEO (Generative Engine Optimization) floating around, AEO (Answer Engine Optimization), and even LEO (LLM Engine Optimization) has made an appearance in industry conversations and job titles.

However, while we are all busy finding new nomenclatures to factor in the machine part of the discovery journey, there is someone else in the equation that we risk forgetting about: the end beneficiary of our efforts, the user.

Why Do You Need Behavioral Data In Search?

Behavioral data is vital to understand what leads a user to a search journey, where they carry it out, and what potential points of friction might be blocking a conversion action, so that we can better cater to their needs.

And if we learned anything from the documents leaked during the Google antitrust trial, it is that user signals might actually be one of the many factors that influence rankings – something that was never fully confirmed by the company’s spokespeople, but that has also been uncovered by Mark Williams-Cook in his analysis of Google exploits and patents.

With search becoming more and more personalized, and data about users becoming less transparent now that simple search queries are expanding into full-funnel conversations on LLMs, it’s important to remember that general patterns of behavior tend to hold across a population, even as individual needs and experiences become harder to isolate and cater to. We can use some rules of thumb to get the basics right.

Humans often operate on a few basic principles aimed at preserving energy and resources, even in search:

  • Minimizing effort: following the path of least resistance.
  • Minimizing harm: avoiding threats.
  • Maximizing gain: seeking opportunities that present the highest benefit or rewards.

So while Google and other search channels might change the way we think about our daily job, the secret weapon we can use to future-proof our brands’ organic presence is to isolate some data about behavior, as it is, generally, much more predictable than algorithm changes.

What Behavioral Data Do You Need To Improve Search Journeys?

I would narrow it down to data that cover three main areas: discovery channel indicators, built-in mental shortcuts, and underlying users’ needs.

1. Discovery Channel Indicators

The days when every search started on Google are long gone.

According to the Messy Middle research by Google, the exponential increase in information and new channels available has driven a shift from linear search behaviors to a loop of exploration and evaluation that guides our purchase decisions.

And since users now have an overwhelming number of channels they can consult to research a product or a brand, it’s harder to cut through the noise. By knowing more about those channels, we can make sure our strategy is laser-focused across content and format alike.

Discovery channel indicators give us information about:

  • How users are finding us beyond traditional search channels.
  • The demographic that we reach on some particular channels.
  • What drives their search, and what they are mostly engaging with.
  • The content and format that are best suited to capture and retain their attention in each one.

For example, we know that TikTok tends to be consulted for inspiration and to validate experiences through user-generated content (UGC), and that Gen Z and Millennials on social apps are increasingly skeptical of traditional ads (with skipping rates of 99%, according to a report by Bulbshare). What they favor instead is authentic voices, so they will seek out first-hand experiences on online communities like Reddit.

Knowing the different channels that users reach us through can inform organic and paid search strategy, while also giving us some data on audience demographics, helping us capture users that would otherwise be elusive.

So, make sure your channel data is mapped to reflect these new discovery channels, especially if you are relying on custom analytics. Not only will this ensure that organic is credited with the traffic it actually drives, but it will also point to untapped potential you can lean into as searches become less and less trackable.

This data should be easily available to you via the referral and source fields in your analytics platform of choice, and you can also integrate a “How did you hear about us” survey for users who complete a transaction.
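As a rough illustration, this kind of channel mapping can be scripted over an analytics export. The sketch below is a minimal Python example; the source strings, channel names, and row format are illustrative assumptions to adapt to your own data, and the point is simply to bucket LLM and community referrers that default groupings often lump into "Referral" or "Other":

```python
# Minimal sketch: bucket referral sources into custom discovery channels,
# including LLM referrers that default channel groupings often miss.
# All source strings and channel names below are illustrative assumptions.
CHANNEL_MAP = {
    "chatgpt.com": "LLM",
    "perplexity.ai": "LLM",
    "gemini.google.com": "LLM",
    "reddit.com": "Community",
    "tiktok.com": "Social Video",
    "google": "Organic Search",
}

def bucket_source(source: str) -> str:
    """Map a raw traffic source to a custom discovery channel."""
    return CHANNEL_MAP.get(source.lower(), "Other")

def sessions_by_channel(rows):
    """rows: iterable of (source, session_count) pairs, e.g. from an analytics export."""
    totals = {}
    for source, count in rows:
        channel = bucket_source(source)
        totals[channel] = totals.get(channel, 0) + count
    return totals

rows = [("chatgpt.com", 120), ("google", 900), ("reddit.com", 45), ("newsletter", 30)]
print(sessions_by_channel(rows))
```

A recurring report built on a mapping like this makes it easy to watch the "LLM" bucket grow over time, even before your analytics vendor adds a native grouping for it.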

And don’t forget about language models: With the recent rise in queries that start a search and complete an action directly on LLMs, it’s even harder to track all search journeys. This shifts our mission from being relevant for one specific query at a time to being visible for every intent we can cover.

This is even more important when we realize that everything contributes to the transactional power of a query, irrespective of how the search intent is traditionally labeled, since someone might decide to evaluate our offer and then drop out due to a lack of sufficient information about the brand.

2. Built-In Mental Shortcuts

The human brain is an incredible organ that allows us to perform several tasks efficiently every day, but its cognitive resources are not infinite.

This means that when we are carrying out a search, probably one of many that day while we are also engaged in other tasks, we can’t devote all of our energy to finding the perfect result among the infinite possibilities available. That’s why our attentional and decisional processes are often modulated by built-in mental shortcuts like cognitive biases and heuristics.

These terms are sometimes used interchangeably to refer to imperfect, yet efficient decisions, but there is a difference between the two.

Cognitive Biases

Cognitive biases are systematic, mostly unconscious errors in thinking that affect the way we perceive the world around us and form judgments. They can distort the objective reality of an experience, and the way we are persuaded into an action.

One common example of this is the serial position effect, which is made up of two biases: When we see an array of items in a list, we tend to remember best the ones we see first (primacy bias) and last (recency bias). And since cognitive load is a real threat to attention, especially now that we live in the age of 24/7 stimuli, primacy and recency biases are the reason why it’s recommended to lead with the core message, product, or item if there are a lot of options or content on the page.

Primacy and recency not only affect recall in a list, but also determine the elements that we use as a reference to compare all of the alternative options against. This is another effect called anchoring bias, and it is leveraged in UX design to assign a baseline value to the first item we see, so that anything we compare against it can either be perceived as a better or worse deal, depending on the goal of the merchant.

Among many others, some of the most common biases are:

  • Distance and size effects: As numbers increase in magnitude, it becomes harder for humans to make accurate judgments, which is why some tactics recommend expressing savings with larger numbers rather than equivalent fractions of the same value.
  • Negativity bias: We tend to remember and assign more emotional value to negative experiences rather than positive ones, which is why removing friction at any stage is so important to prevent abandonment.
  • Confirmation bias: We tend to seek out and prefer information that confirms our existing beliefs, which is not only how LLMs operate to provide answers to a query, but also a window into the information gaps we might need to cover.

Heuristics

Heuristics, on the other hand, are rules of thumb that we employ as shortcuts at any stage of decision-making, and help us reach a good outcome without going through the hassle of analyzing every potential ramification of a choice.

A known heuristic is the familiarity heuristic, which is when we choose a brand or a product that we already know, because it cuts down on every other intermediate evaluation we would otherwise have to make with an unknown alternative.

Loss aversion is another common heuristic: On average, we are more likely to choose the least risky of two options with similar returns, even if it means missing out on a discount or a short-term benefit. An example of loss aversion is when we pay an added fee to protect a trip, or prefer products that we can return.

There are more than 150 biases and heuristics, so this is not an exhaustive list – but in general, getting familiar with which ones are most common among our users helps us smooth out the journey for them.

Isolating Biases And Heuristics In Search

Below, you can see how some queries can already reveal subtle biases that might be driving the search task.

  • Confirmation bias: “Is [brand/product] the best for this [use case]?”; “Is this [brand/product/service] better than [alternative brand/product/service]?”; “Why is [this service] more efficient than [alternative service]?”
  • Familiarity heuristic: “Is [brand] based in [country]?”; “[Brand]’s HQs”; “Where do I find [product] in [country]?”
  • Loss aversion: “Is [brand] legit?”; “[brand] returns”; “Free [service]”
  • Social proof: “Most popular [product/brand]”; “Best [product/brand]”

You can use Regex to isolate some of these patterns and modifiers directly in Google Search Console, or you can explore other query tools like AlsoAsked.

If you’re working with large datasets, I recommend using a custom LLM or creating your own model for classifications and clustering based on these rules, so it becomes easier to spot a trend in the queries and figure out priorities.
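Before reaching for an LLM, even a handful of regexes goes a long way. The Python sketch below classifies queries into the bias/heuristic buckets from the table above; the patterns are hypothetical starting points to tune against your own query data, and similar expressions can be pasted into Google Search Console’s query filter, which accepts RE2-style regex:

```python
import re

# Hypothetical patterns mirroring the sample queries above;
# adapt the keywords to your own brand and query data.
BIAS_PATTERNS = {
    "confirmation_bias": re.compile(r"\b(is|why is)\b.*\b(best|better|more efficient)\b", re.I),
    "familiarity_heuristic": re.compile(r"\b(based in|hqs?|headquarters|where do i find)\b", re.I),
    "loss_aversion": re.compile(r"\b(legit|returns?|refunds?|free)\b", re.I),
    "social_proof": re.compile(r"\b(most popular|best)\b", re.I),
}

def classify_query(query: str) -> list[str]:
    """Return every bias/heuristic bucket a query matches (a query may match several)."""
    return [name for name, pattern in BIAS_PATTERNS.items() if pattern.search(query)]

for q in ["Is Acme legit?", "acme returns", "Where do I find Acme gel in Spain?"]:
    print(q, "->", classify_query(q))
```

Counting queries per bucket over a GSC export then gives you a rough ranking of which biases dominate your audience, which is exactly the trend-spotting a larger LLM-based classifier would automate.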

These observations will also give you a window into the next big area.

3. Underlying Users’ Needs

While biases and heuristics can manifest a temporary need in a specific task, one of the most valuable things behavioral data can reveal is the underlying need that drives the starting query and guides all of the subsequent actions.

Underlying needs don’t only become apparent from clusters of queries, but from the channels used in the discovery and evaluation loop, too.

For example, if we see high prominence of loss aversion based on our queries, paired with low conversion rates and high traffic on UGC videos for our product or brand, we can infer that:

  • Users need reassurance on their investment.
  • There is not enough information to cover this need on our website alone.

Trust is a big decision-mover, and one of the most underrated needs that brands often fail to fulfill as they take their legitimacy for granted.

However, sometimes we need to take a step back and put ourselves in the users’ shoes in order to see everything with fresh eyes from their perspective.

By mapping biases and heuristics to specific users’ needs, we can plan for cross-functional initiatives that span beyond pure SEO and are beneficial for the entire journey from search to conversion and retention.

How Do You Obtain Behavioral Data For Actionable Insights?

In SEO, we are used to dealing with a lot of quantitative data to figure out what’s happening on our channel. However, there is much more we can uncover via qualitative measures that can help us identify the reason something might be happening.

Quantitative data is anything that can be expressed in numbers: This can be time on page, sessions, abandonment rate, average order value, and so on.

Tools that can help us extract quantitative behavioral data are:

  • Google Search Console & Google Merchant Center: Great for high-level data like click-through rates (CTRs), which can flag mismatches between the user intent and the page or campaign served, as well as cannibalization instances and incorrect or missing localization.
  • Google Analytics, or any custom analytics platform your brand relies on: These give us information on engagement metrics, and can pinpoint issues in the natural flow of the journey, as well as points of abandonment. My suggestion is to set up custom events tailored to your specific goals, like sign-up form clicks or add to cart, in addition to the default engagement metrics.
  • Heatmaps and eye-tracking data: Both of these can give us valuable insights into visual hierarchy and attention patterns on the website. Heatmapping tools like Microsoft Clarity can show us clicks, mouse scrolls, and position data, uncovering not only areas that might not be getting enough attention, but also elements that don’t actually work. Eye-tracking data (fixation duration and count, saccades, and scan paths) complements that information by showing which elements capture visual attention, as well as which ones are often not seen at all.

Qualitative data, on the other hand, cannot be expressed in numbers as it usually relies on observations. Examples include interviews, heuristic assessments, and live session recordings. This type of research is generally more open to interpretation than its quantitative counterpart, but it’s vital to make sure we have the full picture of the user journey.

Qualitative data for search can be extracted from:

  • Surveys and CX logs: These can uncover common frustrations and points of friction for returning users and customers, which can guide better messaging and new page opportunities.
  • Scrapes of Reddit, Trustpilot, and online communities conversations: These give us a similar output as surveys, but expand the analysis of blockers to conversion to users that we haven’t acquired yet.
  • Live user testing: The least scalable but sometimes most rewarding option, as it can cut down on the inference required with quantitative data, especially when methods are combined (for example, live sessions can be paired with eye-tracking and narrated by the user at a later stage via Retrospective Think-Aloud, or RTA).

Behavioral Data In The AI Era

In the past year, our industry has been really good at two things: sensationalizing AI as the enemy that will replace us, and highlighting its big failures on the other end. And while it’s undeniable that there are still massive limitations, having access to AI presents unprecedented benefits as well:

  • We can use AI to tie together big behavioral datasets and uncover actionable insights that make a difference.
  • Even when we don’t have much data, we can generate a synthetic dataset based on a sample of our own (or a public one) to spot existing patterns and respond promptly to users’ needs.
  • We can generate predictions that can be used proactively for new initiatives to keep us ahead of the curve.

How Do You Leverage Behavioral Data To Improve Search Journeys?

Start by creating a series of dynamic dashboards with the measures you can obtain for each one of the three areas we talked about (discovery channel indicators, built-in mental shortcuts, and underlying users’ needs). These will allow you to promptly spot behavioral trends and collect actions that can make the journey smoother for the user at every step, since search now spans beyond the clicks on site.

Once you get new insights for each area, prioritize your actions based on expected business impact and effort to implement.

And bear in mind that behavioral insights are often transferable to more than one section of the website or the business, which can maximize returns across several channels.

Lastly, set up regular conversations with your product and UX teams. Even if your job title keeps you in search, business success is often channel-agnostic. This means that we shouldn’t only treat the symptom (e.g., low traffic to a page), but curate the entire journey, and that’s why we don’t want to work in silos on our little search island.

Your users will thank you. The algorithm will likely follow.

Featured Image: Roman Samborskyi/Shutterstock

Interaction To Next Paint: 9 Content Management Systems Ranked via @sejournal, @martinibuster

Interaction to Next Paint (INP) is a meaningful Core Web Vitals metric because it represents how quickly a web page responds to user input. It is so important that the HTTP Archive maintains a comparison of INP across content management systems. The following are the top content management systems ranked by Interaction to Next Paint.

What Is Interaction To Next Paint (INP)?

INP measures how responsive a web page is to user interactions during a visit. Specifically, it measures interaction latency, which is the time between when a user clicks, taps, or presses a key and when the page visually responds.

This is a more accurate measurement of responsiveness than the older metric it replaced, First Input Delay (FID), which only captured the first interaction. INP is more comprehensive because it evaluates all clicks, taps, and key presses on a page and then reports a representative value based on the longest meaningful latency.

The INP score is representative of the page’s responsive performance. For that reason, extreme outliers are filtered out of the calculation so that the score reflects typical worst-case responsiveness.

Web pages with poor INP scores create a frustrating user experience that increases the risk of page abandonment. Fast responsiveness enables a smoother experience that supports higher engagement and conversions.

INP Scores Have Three Ratings:

  • Good: Below or at 200 milliseconds
  • Needs Improvement: Above 200 milliseconds and below or at 500 milliseconds
  • Poor: Above 500 milliseconds
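The three rating bands above can be expressed as a trivial helper, shown here as a Python sketch (the thresholds are the ones listed, with the boundary values 200 ms and 500 ms falling in the better band):

```python
def rate_inp(inp_ms: float) -> str:
    """Map an INP value in milliseconds to its Core Web Vitals rating."""
    if inp_ms <= 200:
        return "Good"
    if inp_ms <= 500:
        return "Needs Improvement"
    return "Poor"

for value in (120, 200, 350, 501):
    print(value, "->", rate_inp(value))
```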

Content Management System INP Champions

The latest Interaction to Next Paint (INP) data shows that all major content management systems improved from June to July, though only incrementally.

Joomla posted the largest gain with a 1.12% increase in sites achieving a good score. WordPress followed with a 0.88% increase in the number of sites posting a good score, while Wix and Drupal improved by 0.70% and 0.64%.

Duda and Squarespace also improved, though by smaller margins of 0.46% and 0.22%. Even small percentage changes can reflect real improvements in how users experience responsiveness on these platforms, so it’s encouraging that every publishing platform in this comparison is improving.

CMS INP Ranking By Monthly Improvement

  1. Joomla: +1.12%
  2. WordPress: +0.88%
  3. Wix: +0.70%
  4. Drupal: +0.64%
  5. Duda: +0.46%
  6. Squarespace: +0.22%

Which CMS Has The Best INP Scores?

Month-to-month improvement shows who is doing better, but that’s not the same as which CMS is doing the best. The July INP results show a different ranking order of content management systems when viewed by overall INP scores.

Squarespace leads with 96.07% of sites achieving a good INP score, followed by Duda at 93.81%. This is a big difference from the Core Web Vitals rankings, where Duda is consistently ranked number one. When it comes to arguably the most important Core Web Vital metric, Squarespace takes the lead as the number one ranked CMS for Interaction to Next Paint.

Wix and WordPress are ranked in the middle with 87.52% and 86.77% of sites showing a good INP score, while Drupal, with a score of 86.14%, is ranked in fifth place, just a fraction behind WordPress.

Ranking in sixth place in this comparison is Joomla, trailing the other five with a score of 84.47%. That score is not so bad considering that it’s only two to three percentage points behind Wix and WordPress.

CMS INP Rankings for July 2025

  1. Squarespace – 96.07%
  2. Duda: 93.81%
  3. Wix: 87.52%
  4. WordPress: 86.77%
  5. Drupal: 86.14%
  6. Joomla: 84.47%

These rankings show that even platforms that lag in INP performance, like Joomla, are still improving, and Joomla could eventually best the other platforms if it keeps up its rate of improvement.

In contrast, Squarespace, which already performs well, posted the smallest gain. This indicates that performance improvement is uneven, with systems advancing at different speeds. Nevertheless, the latest Interaction to Next Paint (INP) data shows that all six content management systems in this comparison improved from June to July. That upward performance trend is a positive sign for publishers.

What About Shopify’s INP Performance?

Shopify has strong Core Web Vitals performance, but how well does it compare to these six content management systems? This might seem like an unfair comparison because shopping platforms require features, images, and videos that can slow a page down. But Duda, Squarespace, and Wix offer ecommerce solutions, so it’s actually a fair and reasonable comparison.

We see that the rankings change when Shopify is added to the INP comparison:

Shopify Versus Everyone

  1. Squarespace: 96.07%
  2. Duda: 93.81%
  3. Shopify: 89.58%
  4. Wix: 87.52%
  5. WordPress: 86.77%
  6. Drupal: 86.14%
  7. Joomla: 84.47%

Shopify is ranked number three. Now look at what happens when we compare the three shopping platforms against each other:

Top Ranked Shopping Platforms By INP

  1. BigCommerce: 95.29%
  2. Shopify: 89.58%
  3. WooCommerce: 87.99%

BigCommerce is the number-one-ranked shopping platform for the important INP metric among the three in this comparison.

Lastly, we compare the INP performance scores for all the platforms together, leading to a surprising comparison.

CMS And Shopping Platforms Comparison

  1. Squarespace: 96.07%
  2. BigCommerce: 95.29%
  3. Duda: 93.81%
  4. Shopify: 89.58%
  5. WooCommerce: 87.99%
  6. Wix: 87.52%
  7. WordPress: 86.77%
  8. Drupal: 86.14%
  9. Joomla: 84.47%

All three ecommerce platforms feature in the top five of this combined ranking, which is remarkable given the resource-intensive demands of ecommerce websites. WooCommerce, a WordPress-based shopping platform, ranks fifth, but it’s so close to Wix that the two are virtually tied.

Takeaways

INP measures the responsiveness of a web page, making it a meaningful indicator of user experience. The latest data shows that while every CMS is improving, Squarespace, BigCommerce, and Duda outperform all other content platforms in this comparison by meaningful margins.

All of the platforms in this comparison show high percentages of good INP scores. The fourth-ranked Shopify is only 6.49 percentage points behind the top-ranked Squarespace, and 84.47% of the sites published with the bottom-ranked Joomla show a good INP score. These results show that all platforms are delivering a quality experience for users.

View the results here (must be logged into a Google account to view).

Featured Image by Shutterstock/Roman Samborskyi

Make AI Writing Work for Your Content & SERP Visibility Strategy [Webinar] via @sejournal, @hethr_campbell

Are your AI writing tools helping or hurting your SEO performance?

Join Nadege Chaffaut and Crystie Bowe from Conductor on September 17, 2025, for a practical webinar on creating AI-informed content that ranks and builds trust.

You’ll Learn How To:

  • Engineer prompts that produce high-quality content
  • Keep your SEO visibility and credibility intact at scale
  • Build authorship and expertise into AI content workflows

Why You Can’t Miss This Session

AI can be a competitive advantage when used the right way. This webinar will give you the frameworks and tactics to scale content that actually performs.

Register Now

Sign up to get actionable strategies for AI content. Can’t make it live? Register anyway, and we’ll send you the full recording.

Google Avoids Breakup As Judge Bars Exclusive Default Search Deals via @sejournal, @MattGSouthern

A federal judge outlined remedies in the U.S. search antitrust case that bar Google from using exclusive default search deals but stop short of forcing a breakup.

Reuters reports that Google won’t have to divest Chrome or Android, but it may have to share some search data with competitors under court-approved terms.

Google says it will appeal.

What The Judge Ordered

Judge Amit P. Mehta barred Google from entering or maintaining exclusive agreements that tie the distribution of Search, Chrome, Google Assistant, or the Gemini app to other apps, licenses, or revenue-share arrangements.

The ruling allows Google to continue paying for placement but prohibits exclusivity that could block rivals.

The order also envisions Google making certain search and search-ad syndication services available to competitors at standard rates, alongside limited data sharing for “qualified competitors.”

Mehta ordered Google to share some search data with competitors under specific protections to help them improve their relevance and revenue. Google argued this could expose its trade secrets and plans to appeal the decision.

The judge directed the parties to meet and submit a revised final judgment by September 10. Once entered, the remedies would take effect 60 days later, run for six years, and be overseen by a technical committee. Final language could change based on the parties’ filing.

How We Got Here

In August 2024, Mehta found Google illegally maintained a monopoly in general search and related text ads.

Judge Amit P. Mehta wrote in his August 2024 opinion:

“Google is a monopolist, and it has acted as one to maintain its monopoly.”

This decision established the need for remedies. Today’s order focuses on distribution and data access, rather than breaking up the company.

What’s Going To Change

Ending exclusivity changes how contracts for default placements can be made across devices and browsers. Phone makers and carriers may need to update their agreements to follow the new rules.

However, the ruling doesn’t require any specific user experience change, like a choice screen. The results will depend on how new contracts are created and approved by the court.

Next Steps

Expect a gradual rollout if the final judgment follows today’s outline.

Here are the next steps to watch for:

  • The revised judgment that the parties will submit by September 10.
  • Changes to contracts between Google and distribution partners to meet the non-exclusivity requirement.
  • Any pilot programs or rules that specify who qualifies as a “qualified competitor” and what data they can access.

Separately, Google faces a remedies trial in the ad-tech case in late September. This trial could lead to changes that affect advertising and measurement.

Looking Ahead

If the parties submit a revised judgment by September 10, changes could start about 60 days after the court’s final order. This might shift if Google gets temporary relief during an appeal.

In the short term, expect contract changes rather than product updates.

The final judgment will determine who can access data and which types are included. If the program is limited, it may not significantly affect competition. If broader, competitors might enhance their relevance and profit over the six-year period.

Also watch the ad tech remedies trial this month. Its results, along with the search remedies, will shape how Google handles search and ads in the coming years.

Control AI Answers about Your Brand

Search engine optimization has shifted from traditional organic rankings to AI-generated mentions, citations, and recommendations.

Success with AI optimization boils down to two questions:

  • What does the training data of large language models contain about a company?
  • What can the LLMs learn about the business when performing live searches?

LLM training data is fundamental to optimizing AI answers, even if the platform runs real-time searches, because the fan-out components stem from what the model already knows.

For example, if the training data indicates that a business is an organic skincare brand, the fan-out component might search for certifications.

Citations

AI answers often include citations (URLs of sources), which come from live searches, not training data. LLMs do not store URLs.

Citations (i) are branded responses that may influence buying decisions and (ii) likely feed the training data containing info on a brand. Thus, citations are key to AI optimization.

A consumer considering a skincare brand may prompt Google’s AI Mode for reviews and certifications. The response will likely contain sources.

Here’s an example prompt addressing The Ordinary, a skincare brand:

Is The Ordinary skincare good and certified?

AI Mode’s answer included an advisory warning from the U.S. Food and Drug Administration, as well as links to a magazine article and influencer posts that questioned the ingredients.

A brand cannot control the sentiments of others, but it’s critical to address these concerns on-site to increase the chances of being cited.

Clicking each link in the AI Mode will usually highlight the relevant, sourced paragraph. Then address the question or concern on an FAQ page or a separate article.

For example, the screenshot below is what a competitor stated about The Ordinary’s ingredients. In response, The Ordinary could create a page answering “Is The Ordinary clean beauty?”

Screenshot of a text excerpt from TNK Beauty questioning whether The Ordinary, a competitor, is “clean beauty.”

Article from TNK Beauty criticizing the ingredients of The Ordinary, a competitor.

Better Content

Hence, content marketing has changed. Only a year or so ago, consumers had to research to find answers about brands and products, such as certifications, alternative pricing or additional fees, and the countries where products are manufactured or shipped from.

LLMs can reveal these answers in seconds. Brands that remain silent lose control over that sentiment and fail to contribute to the answers.

Moreover, by creating more brand and product knowledge content, companies increase their chances of being surfaced in answers to non-branded, generic prompts.

SEO for AI

Here’s what to do:

  • Prompt LLM platforms such as AI Mode, ChatGPT, Perplexity, and Claude about your business. (“What do you know about NAME?”)
  • Note the fan-out directions to signal your brand’s associations in training data.
  • Identify third-party citations and their contributions to the answer.
  • Ensure your site provides better answers than the third parties.
  • Address frequent confusion or irritation about your brand on social media channels.
  • Prompt LLMs for your direct competitors and compare the answers to yours.

A few solution providers can track citations for prompts containing your brand.

I use Peec AI, which monitors citations in ChatGPT, Perplexity, and AI Overviews. I can view a report in Peec AI to see the most-cited domains in answers to prompts that include my company.

According to Peec AI’s report, answers to prompts containing Smarty Marketing rarely include our own site! I need to create more content about my brand and products.

Stop Trying To Make GEO Happen! via @sejournal, @cshel

“Stop trying to make GEO happen. It’s not going to happen.”

With apologies to Gretchen Wieners and the writers of “Mean Girls,” the line feels like the only way to start this conversation about a buzzword making the rounds: GEO (which is now, allegedly, supposed to mean Generative Engine Optimization).

This article grew out of a LinkedIn post/open plea I wrote recently about this furore, which unexpectedly took off – approaching 10,000 impressions, four dozen comments, and plenty of laughter at bad acronym ideas. Clearly, this struck a nerve with the SEO and marketing community.

On the surface, to be fair, the concept makes sense. We’re in a new era where AI-driven search engines are shaping how content is retrieved, summarized, and delivered. Adapting SEO strategies for that reality is important; however…

Nobody Will Say “G-E-O”

Acronyms survive if they’re pronounceable. If they aren’t easy to say aloud, and also happen to spell an actual word, people will say them as a word.

To my point, no one is going to spell out “G-E-O” when talking about Generative Engine Optimization. It simply doesn’t roll off the tongue nicely. Inevitably, it becomes the word “geo” – and that’s where the trouble starts.

The word geo is ancient. It comes from the Greek word (γη), meaning earth or ground. It’s the root of hundreds of words we already use every day: geography, geology, geothermal, geopolitics, geospatial, geotracking, geotagging, geomapping. In technology, it’s baked into concepts like geo-targeting and geo-fencing, and in all cases, geo explicitly means “the earth” in some form or another.

The linguistic baggage here is too heavy. There is no amount of wishful thinking that will make “gee-ee-oh” mean something not related to the earth.

The Branding Problem: Words Have Meaning

Words and acronyms aren’t blank slates. They carry cultural, historical, and linguistic connotations and memories that can’t be erased by decree.

Try to rebrand “GEO” and people’s brains will still instantly (or at least initially) read it as “geography.” They might pause and look at the context, and then decide “Oh, this must be G-E-O which means generative engine optimization, which is like S-E-O but for AI.” That’s a lot of work we are asking the public to do for three little letters.

It’s the same reason I could never (not that I would ever) convince our marketing team to rebrand our SEO plugin as an “FBI” plugin. No matter how hard we try to make FBI mean For Better Indexing, we are not going to be able to overcome the decades of heavy usage that says FBI means Federal Bureau of Investigation.

In this case, GEO doesn’t have decades of historical usage; it has literally millennia of meaning that IS NOT THIS. Hijacking a word with millennia of established usage is not innovation; it is confusion.

The SEO Problem: Competing With Entrenched Meaning

Let’s set branding aside and look at this purely from an SEO perspective.

Search engines reward authority, longevity, and relevance. The word geo has decades of backlinks, established search volume, and deeply entrenched usage. Every authoritative signal in Google’s system points to geo = geography/geographical/earth-related or adjacent.

Generative Engine Optimization will be competing against that established meaning forever. It won’t matter how many blog posts declare that “GEO is the new SEO” – the search results for “geo” will belong to geography, not generative optimization.

Then we can look beyond Google’s index – the training data behind large language models (LLMs) already “knows” that geo refers to Earth and geography, because that’s what the word has meant in every corpus of text for thousands of years. The idea that we can overwrite that meaning in a few quarters of (AI-generated) blog posts and conference talks is, frankly, wishful thinking.

Acronym Soup: Why Hijacking Fails

This isn’t the first time people have tried to coin a buzzword by hijacking an acronym. It never works. Acronyms only stick when they are:

  • Unique (no heavy pre-existing baggage).
  • Clear (people know, or can easily surmise, what they stand for).
  • Pronounceable (people can easily say them in conversation).

When they aren’t, they dissolve into acronym soup. Everyone gets confused, nobody adopts the term consistently, and the idea dies.

Humor Break: Acronyms We Can Safely Reject Now

Since I’m sure there will be a scramble to come up with something “better” than GEO, let me save you the trouble and pre-remove a few tempting, but alas already in use, options from the list.

  • FBI – For Better Indexing (all your queries are under surveillance).
  • PDF – Prompt-Driven Framework (optimized for clients who never open them).
  • BIO – Bot Interaction Optimization (because the LLMs need to “like” you).
  • CEO – Crawl Efficiency Orchestration (manage your bots like a boss).
  • URL – Unified Retrieval Layer (ranking starts at the root).
  • GPS – Generative Prompt Sequencing (your AI still needs directions).
  • API – Automated Prompt Injection (though to be fair, my brain always defaults to “armor piercing incendiaries” but that’s probably just a me problem).
  • HTML – Human-Tuned Model Language (teach the bots to “speak search”).
  • INFO – Intelligent Neural Findability Optimization (make your content “discoverable” to AI).
  • PRO – Prompt Response Optimization (win the answer box in AI).
  • EV – Enhanced Visibility (because apparently that’s the whole point).
  • SEO – Synthetic Engine Optimization (yes, we’ve come full circle).

They’re funny, but none of them should happen for all of the reasons outlined above.

What Actually Works When Naming Concepts

So, if GEO is a lost cause, what should we be doing instead?

1. Start Unique

  • Don’t hijack a word or acronym already in heavy use.
  • The cleanest acronyms are invented, not repurposed.

2. Make It Pronounceable

  • SEO works because people can say it.
  • SaaS (Software as a Service) works because it’s short and phonetically easy (“sass” in case you didn’t know).

3. Anchor It In Authority

  • Google’s own acronyms, like E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), stuck because Google itself enforced them.
  • A community can rally around a term, but only if it feels backed by authority or usefulness.

4. Check The SERPs First

  • Before you try to coin an acronym, search it.
  • If the first three pages of results are about something else entirely, you might be sunk before you begin.

The Bottom Line: Stop Trying To Make GEO Happen

Generative Engine Optimization as a concept makes sense, but GEO as an acronym is doomed.

It fails linguistically (nobody will say “G-E-O”), historically (the word is ancient and already claimed), and strategically (search engines and LLMs already associate “geo” with geography, not generative search).

If you want a new term to catch on, start with one that isn’t already taken. Otherwise, you’re not innovating language – you’re just creating acronym soup … and sabotaging your own visibility from day one.

So please, stop trying to make GEO happen. It’s not going to happen.


Featured Image: CHIEW/Shutterstock

Google Adds Guidance On JavaScript Paywalls And SEO via @sejournal, @martinibuster

Google is apparently having trouble identifying paywalled content because of a common way publishers, such as news sites, implement their paywalls. It is asking publishers with paywalled content to change how they block it so as to help Google out.

Search Related JavaScript Problems

Google updated their guidelines with a call for publishers to reconsider how they block users from paywalled content. It’s fairly common for publishers to use a script to block non-paying users with an interstitial even though the full content is still present in the page code. This may be causing issues for Google in properly identifying paywalled content.

In a recent addition to their search documentation about JavaScript issues related to search, they wrote:

“If you’re using a JavaScript-based paywall, consider the implementation.

Some JavaScript paywall solutions include the full content in the server response, then use JavaScript to hide it until subscription status is confirmed. This isn’t a reliable way to limit access to the content. Make sure your paywall only provides the full content once the subscription status is confirmed.”

The documentation doesn’t say what problems Google itself is having, but a changelog documenting the change offers more context about why they are asking for this change:

“Adding guidance for JavaScript-based paywalls

What: Added new guidance on JavaScript-based paywall considerations.

Why: To help sites understand challenges with the JavaScript-based paywall design pattern, as it makes it difficult for Google to automatically determine which content is paywalled and which isn’t.”

The changelog makes it clear that the way some publishers use JavaScript for blocking paywalled content is making it difficult for Google to know if the content is or is not paywalled.

The change was an addition to a numbered list of JavaScript problems publishers should be aware of, item number 10 on their “Fix Search-related JavaScript Problems” page.
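
Separately, Google’s structured data documentation for paywalled content describes a way to tell crawlers which part of a page sits behind the paywall. A minimal sketch, assuming the gated text is wrapped in a hypothetical element with the class "paywalled-content" (the class name is illustrative, not a required value):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-content"
  }
}
</script>
```

Markup like this helps Google distinguish legitimate paywalling from cloaking, but it does not replace the server-side gating the new guidance asks for.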

Featured Image by Shutterstock/Kues

Why WooCommerce Slows Down (& How to Fix It With the Right Server Stack)

This post was sponsored by Cloudways. The opinions expressed in this article are the sponsor’s own.

Wondering why your rankings may be declining?

Just discovered your WooCommerce site has slow load times?

A slow WooCommerce site doesn’t just cost you conversions. It affects search visibility, backend performance, and customer trust.

Whether you’re a developer running your own stack or an agency managing dozens of client stores, understanding how WooCommerce performance scales under load is now considered table stakes.

Today, many WordPress sites are far more dynamic, meaning many things are happening at the same time:

  • Stores run real-time sales.
  • LMS platforms track user progress.
  • Membership sites deliver highly personalized content.

Every action a user takes, from logging in to updating a cart or initiating checkout, relies on live data from the server. These requests cannot be cached.

Tools like Varnish or CDNs can help with public pages such as the homepage or product listings. But once someone logs in to their account or interacts with their session, caching no longer helps. Each request must be processed in real time.

This article breaks down why that happens and what kind of server setup is helping stores stay fast, stable, and ready to grow.

Why Do WooCommerce Stores Slow Down?

WooCommerce often performs well on the surface. But as traffic grows and users start interacting with the site, speed issues begin to show. These are the most common reasons why stores slow down under pressure:

1. PHP: It Struggles With High User Activity

WooCommerce depends on PHP to process dynamic actions such as cart updates, coupon logic, and checkout steps. Traditional stacks using Apache for PHP handling are slower and less efficient.

Modern environments use PHP-FPM, which improves execution speed and handles more users at once without delays.

2. A Full Database: It Becomes A Bottleneck

Order creation, cart activity, and user actions generate a high number of database writes. During busy times like flash sales, new merchandise arrivals, or course launches, the database struggles to keep up.

Platforms that support optimized query execution and better indexing handle these spikes more smoothly.

3. Caching Issues: Object Caching Is Missing Or Poorly Configured

Without proper object caching, WooCommerce queries the database repeatedly for the same information. That includes product data, imagery, cart contents, and user sessions.

Solutions that include built-in Redis support help move this data to memory, reducing server load and improving site speed.

4. Concurrency Limits Affect Performance During Spikes

Most hosting stacks today, including Apache-based ones, perform well for a wide range of WordPress and WooCommerce sites. They handle typical traffic reliably and have powered many successful stores.

As traffic increases and more users log in and interact with the site at the same time, the load on the server begins to grow. Architecture starts to play a bigger role at that point.

Stacks built on NGINX with event-driven processing can manage higher concurrency more efficiently, especially during unanticipated traffic spikes.

Rather than replacing what already works, this approach extends the performance ceiling for stores that are becoming more dynamic and need consistent responsiveness under heavier load.

5. Your WordPress Admin Slows Down During Sales Seasons

During busy periods like seasonal sales campaigns or new stock availability, stores can often slow down for the team managing the site, too. The WordPress dashboard takes longer to load, which means publishing products, managing orders, or editing pages also becomes slower.

This slowdown happens because both shoppers and staff are using the site’s resources at the same time, and the server has to handle all those requests at once.

Modern stacks reduce this friction by balancing frontend and backend resources more effectively.

How To Architect A Scalable WordPress Setup For Dynamic Workloads?

WooCommerce stores today are built for more than stable traffic. Customers are logging in, updating their carts, and managing their subscription profiles, and as a result they are interacting with your backend in real time.

The traditional WordPress setup, which is primarily designed for static content, cannot handle that kind of demand.

Here’s how a typical setup compares to one built for performance and scale:

Component            | Basic Setup                   | Scalable Setup
Web Server           | Apache                        | NGINX
PHP Handler          | mod_php or CGI                | PHP-FPM
Object Caching       | None or database transients   | Redis with Object Cache Pro
Scheduled Tasks      | WP-Cron                       | System cron job
Caching              | CDN or full-page caching only | Layered caching, including object cache
.htaccess Handling   | Built-in with Apache          | Manual rewrite rules in NGINX config
Concurrency Handling | Limited                       | Event-based, memory-efficient server

How To Manually Set Up A Performance-Ready & Scalable WooCommerce Stack

Don’t have bandwidth? Try the easy way.

If you’re setting up your own server or tuning an existing one, here are the most important components to get right:

1) Use NGINX For Static File Performance

NGINX is often used as a high-performance web server for handling static files and managing concurrent requests efficiently. It is well suited for stores expecting high traffic or looking to fine-tune their infrastructure for speed.

Unlike Apache, NGINX does not use .htaccess files. Rewrite rules, such as permalinks, redirects, and trailing slashes, need to be added manually to the server block. For WordPress, these rules are well-documented and only need to be set once during setup.

This approach gives more control at the server level and can be helpful for teams building out their own environment or optimizing for scale.
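
As a rough sketch of what those manually added rules look like, here is a minimal WordPress server block (the document root and the PHP-FPM socket path are placeholders that vary by distribution):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;   # placeholder: your WordPress install path
    index index.php;

    location / {
        # Serve the file or directory if it exists; otherwise hand the
        # request to WordPress's front controller (pretty permalinks).
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Placeholder socket path; match it to your PHP-FPM pool config.
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    }
}
```

The try_files directive is the NGINX equivalent of the rewrite rules WordPress normally ships in .htaccess for Apache.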

2) Enable PHP-FPM For Faster Request Handling

PHP-FPM separates PHP processing from the web server. It gives you more control over memory and CPU usage. Tune values like pm.max_children and pm.max_requests based on your server size to prevent overload during high activity.
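
A hedged sketch of what that tuning looks like in a PHP-FPM pool file. The numbers are illustrative, not recommendations; a common sizing rule of thumb is available memory divided by the average PHP process size:

```ini
; Example pool config (e.g. /etc/php/8.2/fpm/pool.d/www.conf - path varies)
[www]
pm = dynamic
pm.max_children = 20        ; hard cap on concurrent PHP workers
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
pm.max_requests = 500       ; recycle workers periodically to contain memory leaks
```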

3) Install Redis With Object Cache Pro

Redis allows WooCommerce to store frequently used data in memory. This includes cart contents, user sessions, and product metadata.

Pair this with Object Cache Pro to compress cache objects, reduce database load, and improve site responsiveness under load.
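
As an illustration, the Redis object cache drop-in reads its connection settings from constants in wp-config.php. The host and port below are common local defaults, not universal values:

```php
<?php
// In wp-config.php, above the "That's all, stop editing!" line.
// Values are placeholders; point them at your actual Redis instance.
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
```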

4) Replace WP-Cron With A System-Level Cron Job

By default, WordPress checks for scheduled tasks whenever someone visits your site. That includes sending emails, clearing inventory, and syncing data. If you have steady traffic, it works. If not, things get delayed.

You can avoid that by turning off WP-Cron. Just add define('DISABLE_WP_CRON', true); to your wp-config.php file. Then, set up a real cron job at the server level to run wp-cron.php every minute. This keeps those tasks running on time without depending on visitors.
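
The two steps above can be sketched as follows; the site URL and install path are placeholders:

```shell
# 1) In wp-config.php, stop WordPress from firing cron on page views:
#      define('DISABLE_WP_CRON', true);
# 2) Add a crontab entry that triggers wp-cron.php every minute:
* * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" > /dev/null 2>&1
# Alternatively, if WP-CLI is installed on the server:
# * * * * * cd /var/www/html && wp cron event run --due-now > /dev/null 2>&1
```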

5) Add Rewrite Rules Manually For NGINX

NGINX doesn’t use .htaccess. That means you’ll need to define URL rules directly in the server block.

This includes things like permalinks, redirects, and static file handling. It’s a one-time setup, and most of the rules you need are already available from trusted WordPress documentation. Once you add them, everything works just like it would on Apache.

A Few Tradeoffs To Keep In Mind

This kind of setup brings a real speed boost. But there are some technical changes to keep in mind.

  • NGINX won’t read .htaccess. All rewrites and redirects need to be added manually.
  • WordPress Multisite may need extra tweaks, especially if you’re using subdirectory mode.
  • Security settings like IP bans or rate limits should be handled at the server level, not through plugins.

Most developers won’t find these issues difficult to work with. But if you’re using a modern platform, much of it is already taken care of.

You don’t need overly complex infrastructure to make WooCommerce fast; just a stack that aligns with how modern, dynamic stores operate today.

Next, we’ll look at how that kind of stack performs under traffic, with benchmarks that show what actually changes when the server is built for dynamic sites.

What Happens When You Switch To An Optimized Stack?

Not all performance challenges come from code or plugins. As stores grow and user interactions increase, the type of workload becomes more important, especially when handling live sessions from logged-in users.

To better understand how different environments respond to this kind of activity, Koddr.io ran an independent benchmark comparing two common production setups:

  • A hybrid stack using Apache and NGINX.
  • A stack built on NGINX with PHP-FPM, Redis, and object caching.

Both setups were fully optimized and included tuned components like PHP-FPM and Redis. The purpose of the benchmark was to observe how each performs under specific, real-world conditions.

The tests focused on uncached activity from WooCommerce and LearnDash, where logged-in users trigger dynamic server responses.

In these scenarios, the optimized stack showed higher throughput and consistency during peak loads. This highlights the value of having infrastructure tailored for dynamic, high-concurrency traffic, depending on the use case.

WooCommerce Runs Faster Under Load

One test simulated 80 users checking out at the same time. The difference was clear:

Scenario             | Hybrid Stack  | Optimized Stack | Gain
WooCommerce Checkout | 3,035 actions | 4,809 actions   | +58%
Screenshot from Koddr.io, August 2025

LMS Platforms Benefit Even More

For LearnDash course browsing, a write-heavy and uncached task, the optimized stack completed 85% more requests:

Scenario                   | Hybrid Stack   | Optimized Stack | Gain
LearnDash Course List View | 13,459 actions | 25,031 actions  | +85%

This shows how optimized stacks handle personalized or dynamic content more efficiently. These types of requests can’t be cached, so the server’s raw efficiency becomes critical.

Screenshot from Koddr.io, August 2025

Backend Speed Improves, Too

The optimized stack wasn’t just faster for customers. It also made the WordPress admin area more responsive:

  • WordPress login times improved by up to 31%.
  • Publish actions ran 20% faster, even with high traffic.

This means your team can concurrently manage products, update pages, and respond to sales in real time, without delays or timeouts.

It Handles More Without Relying On Caching

When Koddr turned off Varnish, the hybrid stack experienced a 71% drop in performance, which shows how heavily it relies on cached traffic. The optimized stack dropped just 7%, highlighting its ability to maintain speed even during uncached, logged-in sessions.

Both setups have their strengths, but for stores with real-time user activity, reducing reliance on caching can make a measurable difference.

Stack Type      | With Caching    | Without Caching | Drop
Hybrid Stack    | 654,000 actions | 184,000 actions | -71%
Optimized Stack | 619,000 actions | 572,000 actions | -7%
Screenshot from Koddr.io, August 2025

Why Does This Matter?

Static pages are easy to optimize. But WooCommerce stores deal with real-time traffic. Cart updates, login sessions, and checkouts all require live processing. Caching cannot help once a user has signed in.

The Koddr.io results show how an optimized server stack:

  • Reduces CPU spikes during traffic surges.
  • Keeps the backend responsive for your team.
  • Delivers more stable speed for logged-in users.
  • Helps scale without complex performance workarounds.

These are the kinds of changes that power newer stacks purpose-built for real WooCommerce workloads, such as Cloudways Lightning.

Core Web Vitals Aren’t Just About The Frontend

You can optimize every image. Minify every line of code. Switch to a faster theme. But your Core Web Vitals score will still suffer if the server can’t respond quickly.

That’s what happens when logged-in users interact with WooCommerce or LMS sites.

When a customer hits “Add to Cart,” caching is out of the picture. The server has to process the request live. That’s where TTFB (Time to First Byte) becomes a real problem.

Slow server response means Google waits longer to start rendering the page. And that delay directly affects your Largest Contentful Paint and Interaction to Next Paint metrics.

Frontend tuning gets you part of the way. But if the backend is slow, your scores won’t improve. Especially for logged-in experiences.

Real optimization starts at the server.

How Agencies Are Skipping The Manual Work

Every developer has a checklist for WooCommerce performance. Use NGINX. Set up Redis. Replace WP-Cron. Add a WAF. Test under load. Keep tuning.

But not every team has the bandwidth to maintain all of it.

That’s why more agencies are using pre-optimized stacks that include these upgrades by default. Cloudways Lightning, a managed stack based on NGINX + PHP-FPM and designed for dynamic workloads, is a good example of that.

It’s not just about speed. It’s also about backend stability during high traffic. Admin logins stay fast. Product updates don’t hang. Orders keep flowing.

Joe Lackner, founder of Celsius LLC, shared what changed for them:

“Moving our WordPress workloads to the new Cloudways stack has been a game-changer. The console admin experience is snappier, page load times have improved by +20%, and once again Cloudways has proven to be way ahead of the game in terms of reliability and cost-to-performance value at this price point.”

This is what agencies are looking for. A way to scale without getting dragged into infrastructure management every time traffic picks up.

Final Takeaway

WooCommerce performance is no longer just about homepage load speed.

Your site handles real-time activity from both customers and your team. Once a user logs in or reaches checkout, caching no longer applies. Each action hits the server directly.

If the infrastructure isn’t optimized, site speed drops, sales suffer, and backend work slows down.

The foundations matter. A stack that’s built for high concurrency and uncached traffic keeps things fast across the board. That includes cart updates, admin changes, and product publishing.

For teams who don’t want to manage server tuning manually, options like Cloudways Lightning deliver a faster, simpler path to performance at scale.

Use promo code “SUMMER305” and get 30% off for 5 months + 15 free migrations. Sign up now!


Image Credits

Featured Image: Image by Cloudways. Used with permission.

In-Post Images: Images by Cloudways. Used with permission.

Research Shows How To Optimize For Google AIO And ChatGPT via @sejournal, @martinibuster

New research from BrightEdge shows that Google AI Overviews, AI Mode, and ChatGPT recommend different brands nearly 62% of the time. BrightEdge concludes that each AI search platform is interpreting the data in different ways, suggesting different ways of thinking about each AI platform.

Methodology And Results

BrightEdge’s analysis was conducted with its AI Catalyst tool, using tens of thousands of the same queries across ChatGPT, Google AI Overviews (AIO), and Google AI Mode. The research documented a 61.9% overall disagreement rate, with only 33.5% of queries showing the exact same brands in all three AI platforms.

Google AI Overviews averaged 6.02 brand mentions per query, compared to ChatGPT’s 2.37. Commercial intent search queries containing phrases like “buy,” “where,” or “deals” generated brand mentions 65% of the time across all platforms, suggesting that these kinds of high-intent keyword phrases continue to be reliable for ecommerce, just like in traditional search engines. Understandably, e-commerce and finance verticals achieved 40% or more brand-mention coverage across all three AI platforms.

Three Platforms Diverge

Not all was agreement between the three AI platforms in the study. Many identical queries led to very different brand recommendations depending on the AI platform.

BrightEdge shares that:

  • ChatGPT cites trusted brands even when it’s not grounding on search data, indicating that it’s relying on LLM training data.
  • Google AI Overviews cites brands 2.5 times more than ChatGPT.
  • Google AI Mode cites brands less often than both ChatGPT and AIO.

The research indicates that ChatGPT favors trusted brands, Google AIO emphasizes breadth of coverage with more brand mentions per query, and Google AI Mode selectively recommends brands.

Next, we untangle why these patterns exist.

Differences Exist

BrightEdge asserts that this split across the three platforms is not random. I agree that there are differences, but I disagree that “authority” has anything to do with it and offer an alternate explanation later on.

These are the conclusions that they draw from the data:

  • The Brand Authority Play:
    ChatGPT’s reliance on training data means established brands with strong historical presence can capture mentions without needing fresh citations. This creates an “authority dividend” that many brands don’t realize they’re already earning—or could be earning with the right positioning.
  • The Volume Opportunity:
    Google AI Overview’s hunger for brand mentions means there are 6+ available slots per relevant query, with clear citation paths showing exactly how to earn visibility. While competitors focus on traditional SEO, innovative brands are reverse-engineering these citation networks.
  • The Quality Threshold:
    Google AI Mode’s selectivity means fewer brands make the cut, but those that do benefit from heavy citation backing that reinforces their authority across the web.

Not Authority – It’s About Training Data

BrightEdge refers to “authority signals” within ChatGPT’s underlying LLM. My disagreement concerns the LLM’s generated output, not retrieval-augmented responses that pull in live citations. I don’t think there are any signals in the sense of ranking-related signals. In my opinion, the LLM is simply reaching for the entity (brand) related to a topic.

What looks like “authority” to someone with their SEO glasses on is more likely about frequency, prominence, and contextual embedding strength.

  • Frequency:
    How often the brand appears in the training data.
  • Prominence:
    How central the brand is in those contexts (headline vs. footnote).
  • Contextual Embedding Strength:
    How tightly the brand is associated with certain topics based on the model’s training data.

If a brand appears widely in appropriate contexts within the training data, then, in my opinion, it is more likely to be generated as a brand mention by the LLM, because this reflects patterns in the training data and not authority.

That said, I agree with BrightEdge that being authoritative is important, and that quality shouldn’t be minimized.

Patterns Emerge

The research data suggests that there are unique patterns across all three platforms that can behave as brand citation triggers. One pattern all three share is that keyword phrases with a high commercial intent generate brand mentions in nearly two-thirds of cases. Industries like e-commerce and finance achieve higher brand coverage, which, in my opinion, reflects the ability of all three platforms to accurately understand the strong commercial intents for keywords inherent to those two verticals.

A little sunshine in a partly cloudy publishing environment is the finding that comparison queries for “best” products generate 43% brand citations across all three AI platforms, again reflecting the ability of those platforms to understand user query contexts.

Citation Network Effect

BrightEdge has an interesting insight about creating presence in all three platforms that it calls a citation network effect. BrightEdge asserts that earning citations in one platform could influence visibility in the others.

They share:

“A well-crafted piece… could:
Earn authority mentions on ChatGPT through brand recognition

Generate 6+ competitive mentions on Google AI Overview through comprehensive coverage

Secure selective, heavily-cited placement on Google AI Mode through third-party validation

The citation network effect means that earning mentions on one platform often creates the validation needed for another.”

Optimizing For Traditional Search Remains

Nevertheless, I agree with BrightEdge that there’s a strategic opportunity in creating content that works across all three environments, and I would make it explicit that SEO, optimizing for traditional search, is the keystone upon which the entire strategy is crafted.

Traditional SEO is still the way to build visibility in AI search. BrightEdge’s data indicates that this is directly effective for AIO and has a more indirect effect for AI Mode and ChatGPT.

ChatGPT can cite brand names directly from training data and from live data. It also cites brands directly from the LLM, which suggests that generating strong brand visibility tied to specific products and services may be helpful, as that is what eventually makes it into the AI training data.

BrightEdge’s conclusion about the data leans heavily into the idea that AI is creating opportunities for businesses that build brand awareness in the topics they want to be surfaced in.
They share:

“We’re witnessing the emergence of AI-native brand discovery. With this fundamental shift, brand visibility is determined not by search rankings but by AI recommendation algorithms with distinct personalities and preferences.

The brands winning this transition aren’t necessarily the ones with the biggest SEO budgets or the most content. They’re the ones recognizing that AI disagreement creates more paths to visibility, not fewer.

As AI becomes the primary discovery mechanism across industries, understanding these platform-specific triggers isn’t optional—it’s the difference between capturing comprehensive brand visibility and watching competitors claim the opportunities you didn’t know existed.

The 62% disagreement gap isn’t breaking the system. It’s creating one—and smart brands are already learning to work it.”

BrightEdge’s report:

ChatGPT vs Google AI: 62% Brand Recommendation Disagreement

Featured Image by Shutterstock/MMD Creative

Track, Prioritize & Win In AI Search [Webinar] via @sejournal, @hethr_campbell

AI search is reshaping buyer discovery. 

Every week, 800 million searches happen across ChatGPT, Claude, Perplexity, and other AI engines. 

If your brand isn’t showing up, you’re losing leads and opportunities.

Join Samanyou Garg, Founder of Writesonic, on September 10, 2025, for a webinar designed to help marketers and SEO teams master AI visibility. In this session, you’ll learn practical tactics to measure, prioritize, and optimize your AI footprint.

Here’s what you’ll walk away with:

  • AI Visibility Tracking Framework: Measure mentions, citations, sentiment, and share of voice across AI engines
  • Data-Driven Prioritization: Focus on high-impact prompts and competitor gaps for the best ROI
  • 3-Pillar GEO Action Plan: Improve crawler access, craft prompt-specific content, and earn authority-building citations

Why you can’t miss this webinar:

AI-driven search is no longer optional. Your brand’s presence in AI answer engines directly impacts traffic, leads, and revenue. This session will equip you with a step-by-step process to turn AI visibility into real business results.

Save your spot now to learn actionable strategies that top brands are using to dominate AI search.

Can’t attend live? Register anyway, and we’ll send you the full recording.