The Post-Traffic SEO Shift

Google’s new AI Mode highlights the dramatic changes in organic search. AI answers often eliminate the need to click, and when they mention brands without linking to them, users wanting more detail must search for those brand names separately.

The result is massive shifts in optimizing for search engines:

  • Traffic is no longer a key ecommerce performance indicator, as many shoppers will make purchase decisions without clicking.
  • Optimizing for brand-name search is increasingly important, as AI answers rarely link to the brands they mention and consumers often query a brand or product name directly.

Business owners are understandably concerned and unsure how to adjust SEO.

Here’s my overview.

Position Products

Generative AI platforms use external sources to recommend products and brands. Unless the platforms encounter the benefits of a product or company in those sources, they are unlikely to include or recommend it.

Thus an AI-driven SEO strategy includes creating and marketing “brand knowledge content,” which explains:

  • The brand’s unique value proposition.
  • Differences from competitors (e.g., price, quality, service).
  • Targeted audience, including geographic focus.

The goal is to supply large language models with info about your business, increasing its chances of being surfaced in AI answers to related consumer questions.

The example below is a chart from Google’s AI Mode comparing Zoho and HubSpot, two popular customer management platforms, in response to a query.

Table from Google’s AI Mode comparing Zoho CRM and HubSpot CRM on customization, integration, user interface, AI capabilities, and pricing. HubSpot is noted for ease of use and advanced features; Zoho for customization and affordability.

Brand Mentions, Backlinks

Brand mentions are as important as backlinks for genAI algorithms. ChatGPT, Gemini, and others rely on “similarity” and co-occurrence, i.e., how often a brand name appears in contexts relevant to a query.
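
To make co-occurrence concrete, here is a toy Python sketch that counts how often a brand name appears near topic terms in a small corpus. The brand, documents, and window size are illustrative assumptions; production systems work with embedding similarity over web-scale text, not raw token windows.

```python
def cooccurrence_score(corpus: list[str], brand: str, topic_terms: set[str], window: int = 10) -> int:
    """Count occurrences of `brand` within `window` tokens of any topic term.

    Toy illustration only; real genAI pipelines use embedding similarity
    over far larger corpora.
    """
    score = 0
    for doc in corpus:
        tokens = doc.lower().split()
        brand_positions = [i for i, t in enumerate(tokens) if t == brand.lower()]
        topic_positions = [i for i, t in enumerate(tokens) if t in topic_terms]
        score += sum(
            1 for b in brand_positions
            if any(abs(b - t) <= window for t in topic_positions)
        )
    return score

# Hypothetical brand and documents, for illustration.
corpus = [
    "Acme CRM is a popular customer management platform for small teams",
    "for customer management on a budget many reviewers recommend Acme",
]
print(cooccurrence_score(corpus, "Acme", {"crm", "customer", "management"}))  # 2
```

The more often your brand shows up near the vocabulary of a query, the more likely a model is to associate the two.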

Yet backlinks remain important for traditional organic search rankings, and genAI platforms also rely on those engines: Google, Bing, and others.

Hence optimizing for AI search should include link building and brand marketing. The following tactics will help with both:

  • Co-citation link building, such as appearing or being linked in listicles alongside competitors.
  • Media outreach for generating links and mentions from reputable outlets.
  • Reddit community building: Participating in relevant subreddits or managing your own. Reddit can raise visibility with journalists, Google, and ChatGPT.

Solve Problems, not Keywords

Generative AI search engines use a so-called “query fan-out” technique. Google introduced the term, but other LLMs use similar methods.

This technique goes beyond direct answers. It includes related and follow-up concepts to provide a more detailed explanation and solve users’ problems more efficiently.

Keyword research remains essential for understanding shoppers’ journeys, but optimizing for those terms involves more than including them in titles, headings, and body text. Think about the problems driving each keyword and address them with additional info on your page.

My GPT, “SEO: Search Query Analyzer,” can assist, as can ChatGPT and Gemini via this prompt:

My target keyword is [KEYWORD]. What follow-up questions and additional information would help my target audience searching on this query?
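
To run this prompt programmatically, here is a minimal sketch using the OpenAI Python client. The keyword and model name are placeholder assumptions; any chat-capable model works the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

keyword = "standing desk"  # placeholder target keyword
prompt = (
    f"My target keyword is {keyword}. What follow-up questions and additional "
    "information would help my target audience searching on this query?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```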

Google Search Console Fails To Report Half Of All Search Queries via @sejournal, @MattGSouthern

New research from ZipTie reveals an issue with Google Search Console.

The study indicates that approximately 50% of search queries driving traffic to websites never appear in GSC reports. This leaves marketers with incomplete data regarding their organic search performance.

The research was conducted by Tomasz Rudzki, co-founder of ZipTie. His tests show that Google Search Console consistently overlooks conversational searches. These are the natural language queries people use when interacting with voice assistants or AI chatbots.

Simple Tests Prove The Data Gap

Rudzki started with a basic experiment on his website.

For several days, he searched Google using the same conversational question from different devices and accounts. These searches directed traffic to his site, which he could verify through other analytics tools.

However, when he checked Google Search Console for these specific queries, he found nothing. “Zero. Nada. Null,” as Rudzki put it.

To confirm this wasn’t isolated to his site, Rudzki asked 10 other SEO professionals to try the same test. All received identical results: their conversational queries were nowhere to be found in GSC data, even though the searches generated real traffic.

Search Volume May Affect Query Reporting

The research suggests that Google Search Console uses a minimum search volume threshold before it begins tracking queries. A search term may need to reach a certain number of searches before it appears in reports.

According to tests conducted by Rudzki’s colleague Jakub Łanda, when queries finally become popular enough to track, historical data from before that point appears to vanish.

Consider how people might search for iPhone information:

  • “What are the pros and cons of the iPhone 16?”
  • “Should I buy the new iPhone or stick with Samsung?”
  • “Compare iPhone 16 with Samsung S25”

Each question may receive only 10-15 searches per month individually. However, these variations combined could represent hundreds of searches about the same topic.

GSC often overlooks these low-volume variations, despite their significant combined impact.
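
The arithmetic behind that gap is simple. Here is a short sketch with hypothetical volumes and an assumed reporting threshold (the research suggests a threshold exists but does not publish its value):

```python
# Hypothetical monthly volumes for conversational variants of one topic.
variants = {
    "what are the pros and cons of the iphone 16": 12,
    "should i buy the new iphone or stick with samsung": 15,
    "compare iphone 16 with samsung s25": 10,
}

REPORTING_THRESHOLD = 20  # assumed per-query minimum before GSC reports it

reported = sum(v for v in variants.values() if v >= REPORTING_THRESHOLD)
actual = sum(variants.values())
print(f"Visible in GSC: {reported}/month; actual topic demand: {actual}/month")
# Visible in GSC: 0/month; actual topic demand: 37/month
```

Individually, every variant falls below the threshold, so the whole topic disappears from query reports even though the combined demand is real.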

Google Shows AI Answers But Hides the Queries

Here’s the confusing part: Google clearly understands conversational queries. Rudzki analyzed 140,000 questions from People Also Asked data and found that Google shows AI Overviews for 80% of these conversational searches.

Rudzki observed:

“So it seems Google is ready to show the AI answer on conversational queries. Yet, it struggles to report conversational queries in one of the most important tools in SEO’s and marketer’s toolkits.”

Why This Matters

When half of your search data is missing, strategic decisions turn into guesswork.

Content teams create articles based on keyword tools instead of genuine user questions. SEO professionals optimize for visible queries while overlooking valuable conversational searches that often go unreported.

Performance analysis becomes unreliable when pages appear to underperform in GSC but draw significant unreported traffic. Teams also lose the ability to identify emerging trends ahead of their competitors, as new topics only become apparent after they reach high search volumes.

What’s The Solution?

Acknowledge that GSC only shows part of the picture and adjust your strategy accordingly.

Switch from the Query tab to the Pages tab to identify which content drives traffic, regardless of the specific search terms used. Focus on creating comprehensive content that fully answers questions rather than targeting individual keywords.

Supplement GSC data with additional research methods to understand conversational search patterns. Consider how your users interact with an AI assistant, as that’s increasingly how they search.
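
If you work with the Search Console API, pulling page-level data is one small query. A sketch using google-api-python-client; the site URL, date range, and token file are placeholders, and it assumes you already have OAuth credentials authorized for the property:

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumption: token.json holds an OAuth token authorized for this property.
creds = Credentials.from_authorized_user_file("token.json")
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # placeholder property URL
    body={
        "startDate": "2025-05-01",
        "endDate": "2025-05-31",
        "dimensions": ["page"],  # page-level rows sidestep hidden queries
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])
```

Because clicks are aggregated per page, this view includes traffic from queries that never appear in the Query report.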

What This Means for the Future

The gap between how people search and the tools that track their searches is widening. Voice search is gaining popularity, with approximately 20% of individuals worldwide using it on a regular basis. AI tools are training users to ask detailed, conversational questions.

Until Google addresses these reporting gaps, successful SEO strategies will require multiple data sources and approaches that account for the invisible half of search traffic, which drives real results yet remains hidden from view.

The complete research and instructions to replicate these tests can be found in ZipTie’s original report.



WordPress Takes Steps To Integrate AI via @sejournal, @martinibuster

WordPress announced the formation of an AI Team that will focus on coordinating the development and integration of AI within the WordPress core. The team is to function similarly to the Performance Team, developing canonical plugins that users can install to test new functionality before a decision is made about whether or how to integrate it into the WordPress core itself.

The team’s goals are to create a strategic focus, shorten the path from testing to deployment, and provide a centralized location for collaborating on ideas and projects.

The team will include two Google employees, Felix Arntz and Pascal Birchler. Arntz is a Senior Software Engineer at Google who contributes to the WordPress core and to other WordPress plugins and has worked as a lead for the Performance Team.

Pascal Birchler, a Developer Relations Engineer and WordPress core committer, recently led a project to integrate the Model Context Protocol (MCP) with WordPress via WP-CLI.

The WordPress announcement called it an important step:

“This is an exciting and important step in WordPress’s evolution. I look forward to seeing what we’ll create together and in the open.”

WordPress First Steps On Path Blazed By Competitors

The formation of an AI team is long overdue, as even the new open-source Drupal CMS, designed to provide an easy-to-use interface for marketers and creators, has AI-powered features built in. Third-party proprietary CMS provider Wix and shopping platform Shopify have both integrated AI into their users’ workflows.

Read the official WordPress announcement:

Announcing the Formation of the WordPress AI Team


WordPress Unpauses Development But Has It Run Out Of Time? via @sejournal, @martinibuster

Automattic announced that it is reversing its four-month pause in WordPress development and will return to focusing on the WordPress core, Gutenberg, and other projects. The pause in contributions came at a critical moment, as competitors outpaced WordPress in ease of use and technological innovation, leaving the platform behind.

Did WordPress Need A Four-Month Pause?

Automattic’s return to normal levels of contributions was initially contingent on WP Engine withdrawing its lawsuit against Automattic and Mullenweg, with the announcement stating:

“We’re excited to return to active contributions to WordPress core, Gutenberg, Playground, Openverse, and WordPress.org when the legal attacks have stopped.”

WP Engine and Automattic are still locked in litigation, so what changed?

Automattic suggests that it has reconsidered its place as the future of content management:

“After pausing our contributions to regroup, rethink, and plan strategically, we’re ready to press play again and return fully to the WordPress project.

…We’ve learned a lot from this pause that we can bring back to the project, including a greater awareness of the many ways WordPress is used and how we can shape the future of the web alongside so many passionate contributors. We’re committed to helping it grow and thrive…”

Automattic’s announcement suggests that they realized moving forward with WordPress is important despite continued litigation.

But did Automattic really need a four-month pause to come to that realization?

Where Did The WordPress Money Go?

And it’s not as if Automattic was hurting for money to put toward WordPress. Salesforce Ventures invested $300 million in Automattic in 2019, and an elated Mullenweg wrote that this would enable them to almost double the pace of innovation for WordPress.com, their enterprise offering WordPress VIP, WooCommerce, and Jetpack, and to increase resources for WordPress.org and Gutenberg.

Mullenweg wrote:

“For Automattic, the funding will allow us to accelerate our roadmap (perhaps by double) and scale up our existing products—including WordPress.com, WordPress VIP, WooCommerce, Jetpack, and (in a few days when it closes) Tumblr. It will also allow us to increase investing our time and energy into the future of the open source WordPress and Gutenberg.”

In the years immediately following the $300 million investment, annual updates to WooCommerce increased by between 47.62% and 80.95%, and slightly more in 2024. Jetpack continued at an average release schedule of seven updates per year, although it shot up to 22 updates in 2024. The enterprise-level WordPress VIP premium service may have also benefited (changelog here).

Updates to the WordPress core remained fairly unchanged according to the official release announcements, and Gutenberg releases also followed a steady pace, with no significant increases.

Number of WordPress release announcements per year:

  • 2019 – 29 announcements
  • 2020 – 28 announcements
  • 2021 – 26 announcements
  • 2022 – 27 announcements
  • 2023 – 26 announcements
  • 2024 – 30 announcements
  • 2025 – 9 announcements

All the millions of dollars invested in Automattic, along with any other income earned, had no apparent effect on the pace of innovation in the WordPress core.

Survival Of The Fittest CMS

A positive development from Automattic’s pause to rethink is the announcement of a new AI group, modeled after their Performance group. The new team is tasked with coordinating AI initiatives within WordPress’ core development. Like their Performance group, the new AI group was formed after their competitors had outpaced them, so WordPress is once again late in adapting to user needs and the fast pace of technology.

Matt Mullenweg struggled to answer where WordPress would be in five years when asked at the February 2025 WordCamp Asia event. He asked someone from Automattic to join him on stage to answer the question, but that person couldn’t answer either, because there was, in fact, no plan beyond the short-term roadmap focused on the immediate future.

Mullenweg explained the lack of a long-term vision as a strategic decision to remain adaptable to the fast pace of technology:

“Outside of Gutenberg, we haven’t had a roadmap that goes six months or a year, or a couple versions, because the world changes in ways you can’t predict.

But being responsive, I think, really is how organisms survive.

You know, Darwin said it’s not the fittest of the species that survives. It’s the one that’s most adaptable to change. I think that’s true for software as well.”

That’s a somewhat surprising statement, given that WordPress has a history of being years late in prioritizing website performance and AI integration. Divi, Elementor, Beaver Builder, and other WordPress editing environments had already cracked the code on democratizing web design in 2017 with block-based, point-and-click editors when WordPress began its effort to develop its own block-based editor.

Eight years later, Gutenberg is so difficult for many users that the official Classic Editor plugin has over ten million installations, and advanced web developers prefer other, more advanced web builders.

Takeaways:

  • Automattic’s Strategic Reversal
    Automattic reversed its pause on WordPress contributions despite unresolved litigation with WP Engine, perhaps signaling a change in internal priorities or external pressures.
  • Delayed Response to AI Trends
    A new AI group has been formed within WordPress core development, but this move comes years after competitors embraced AI—suggesting a reactive rather than proactive strategy.
  • Lack of Long-Term Vision
    WordPress leadership admits to having no roadmap beyond the short term, framing adaptability as a strength even as the platform lags in addressing user needs and keeping up with technological trends.
  • Minimal Impact from Major Investments
    Despite receiving hundreds of millions in funding, core WordPress and Gutenberg development showed no significant acceleration, raising questions about where investment actually went.
  • Usability and Competitive Lag
    Gutenberg arguably struggles with usability, as shown by the popularity of the Classic Editor plugin and user preference for third-party builders.
  • WordPress at a Competitive Disadvantage
    WordPress now finds itself needing to catch up in a CMS market that has evolved rapidly in both ease of use and innovation.

The bottom line is that the pace of development for the WordPress core and Gutenberg remained steady after the 2019 investment. Despite the millions of dollars Automattic received from companies like Newfold Digital, plus sponsored contributions and volunteer contributions from individuals, the speed of development and innovation kept the same follow-the-competitors-from-behind pace.

Automattic’s return to WordPress core development inadvertently calls attention to how far the platform has fallen behind competitors like Wix in usability and innovation, despite major investments and years of community support. For users and developers, this means that WordPress must now work to regain trust by proving it can adapt quickly and deliver the tools that modern site developers, businesses, and content creators actually need.

Automattic has a legitimate dispute with WP Engine, but the way it was approached became a major distraction that resulted in an arguably unnecessary four-month pause to WordPress development. The platform might have been in danger of losing relevance if not for the work of third-party innovators, and it still arguably lags behind competitors.

Yoast AI Optimize is now available for Classic Editor

We’re excited to announce that Yoast AI Optimize is now also available when using the Classic Editor in WordPress!

You’ve finished your copy, great! But those pesky Yoast SEO Analysis lights aren’t all green, and you have to make manual changes. That’s where Yoast AI Optimize comes in. With Yoast SEO Premium, you can now get AI-powered suggestions right inside your Classic Editor to help fine-tune your content.

What is Yoast AI Optimize?

Yoast AI Optimize brings smart, targeted SEO support into your writing flow. It gives AI-powered suggestions for specific assessments in the Yoast SEO analysis, such as length, structure, and keyphrase distribution. You’ll see exactly where improvements can be made and get quick, editable suggestions to help you fix them. You can apply or dismiss them in a click; the final decision always remains yours.

Benefits:

  • Get real-time AI suggestions that help improve SEO and readability
  • Edit suggestions to match your style and tone of voice
  • Apply or dismiss suggestions easily without breaking your writing flow
  • Use it in both the Classic and Block editors with Yoast SEO Premium
  • Supports optimization for:
    • Keyphrase in introduction
    • Keyphrase distribution
    • Keyphrase density
    • Sentence length
    • Paragraph length

Whether you’re using the Classic Editor or sticking with the Block Editor, Yoast AI Optimize helps you improve your SEO score faster, without losing the personal touch.

If you’re curious to know how we built this feature, check out our developer blog post with all the behind-the-scenes details.

Ready to optimize smarter?
Update to Yoast SEO Premium to try AI Optimize in the Classic Editor today!

How CMOs Can Use Conversion Tracking & Attribution For Smarter Paid Media Strategy via @sejournal, @MenachemAni

For chief marketing officers of retail brands and businesses, knowing which channels and campaigns deserve the marketing budget can directly impact the success and length of their tenure.

But in today’s omnichannel environment of walled gardens, customers engage with your campaigns (and other assets) multiple times before converting.

Since there is no perfect conversion tracking or attribution, you need a system to decide where to spend your money.

Too many marketers still rely on outdated or overly complex attribution models, incomplete data, or pure guesswork.

Common side effects include over-investing in either the upper or lower funnel, while underfunding channels and campaigns that balance demand generation and demand capture.

In this article, we’ll break down how CMOs and marketing leaders can use conversion tracking and attribution data to:

  • Understand true channel performance.
  • Make better budget decisions.
  • Improve full-funnel efficiency.

Conversion Tracking In Google Ads: Limitations & Blind Spots

Running a Google Ads or paid media campaign without native conversion tracking is asking for trouble.

Not only will your account operate with blinders that prevent the system from finding improvements and patterns, but you also won’t have any in-platform metrics to measure your own data against.

I also see that some accounts take several weeks for reporting data to be fully attributed, primarily because of long click-to-purchase durations.

Google may not be fully accurate with all metrics, but you want it to understand what actions are meaningful to your business.

Lead Generation

  • Online conversion actions: form fill, chat, phone call.
  • Offline conversion stages: qualified lead, converted lead.
  • Support tools: WhatConverts, HubSpot, or other CRM to track lead data + Zapier for connectivity.

With leads, there is a challenge in reconciling what is recorded online with what happens outside the Google ecosystem.

Google’s system knows it got you a certain number of form fills, chats, or calls. It needs to know how many of those were good quality leads. How many of those went on to become actual sales?

That would lead you to create a “next step” in the process, such as a qualified-lead stage, and feed it back into Google. You can then bid against those conversions or use them as observations; either way, they register in the system as a positive funnel event.
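
Feeding those offline stages back is typically done with offline click-conversion uploads. Here is a hedged sketch using the google-ads Python library; the customer ID, conversion action ID, GCLID, and value are all placeholders, and the config is assumed to live in google-ads.yaml:

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml
customer_id = "1234567890"  # placeholder account ID

conversion = client.get_type("ClickConversion")
conversion.conversion_action = client.get_service(
    "GoogleAdsService"
).conversion_action_path(customer_id, "9876543210")  # placeholder "Qualified Lead" action
conversion.gclid = "EAIaI..."  # GCLID captured with the original form fill
conversion.conversion_date_time = "2025-06-01 12:00:00+00:00"
conversion.conversion_value = 150.0  # assumed value of a qualified lead

request = client.get_type("UploadClickConversionsRequest")
request.customer_id = customer_id
request.conversions.append(conversion)
request.partial_failure = True

service = client.get_service("ConversionUploadService")
response = service.upload_click_conversions(request=request)
print(response.results)
```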

Read more: Building A Lead Generation Plan

Ecommerce

  • Online conversion actions: purchase, add to cart.
  • Offline conversion stages: subscriber, repeat buyer.
  • Support tools: Shopify to track returns, exchanges, etc.

For ecommerce, it’s typically a lot simpler to track the right events, but it’s trickier to rate their value to the business.

Google can record purchase transactions as an event, but it lacks your backend data on which locations have the fewest returns or exchanges, which products lead to higher rates of subscription and repeat purchases, and what each product’s margin is.

If you’re using Shopify, its Google and YouTube app does pretty much all the heavy lifting needed to link the two platforms and track ecommerce sales.

How To Use Performance Data To Fuel Better Marketing Investments

“I know which channels and campaigns are providing the best ROI” is verbal gold for a CMO.

Being able to quantify the impact of where they spend their marketing budget positions them to make smarter decisions and increase their value to the business that employs them.

Unfortunately, this is easier said than done. Here are some ways to think through the more common hurdles that get in your way as a marketing leader.

Thinking Through In-Platform Attribution

Once you set up tracking and make sure you’re getting good performance data in, you can use it to inform attribution and omnichannel strategy.

My methodology is different from how many marketers approach this. I’m of the mind that attribution is not something that can be fully solved, and over-relying on third-party tools will steer you in the wrong direction because they all have different biases.

Certain tools can’t see the actual power of YouTube, for example.

One study by Haus showed that YouTube’s in-platform reporting captures roughly a third of the impact Haus measures. Many third-party attribution tools can’t see view-through or engagement data for YouTube, so they end up with a higher-than-ideal margin of error in their reporting.

Some other tools can see the click and view attribution for Meta, but only click attribution for Google. What I like to do is optimize each campaign in-platform based on that platform’s data.

Handling Conflicting Attribution Data

When we come across situations where different platforms show us conflicting attribution data, we use overall sales reports and tools like TripleWhale or Northbeam to help validate that data.

This helps us understand, directionally, how putting another 20% of our budget into a specific campaign type impacts overall revenue.

It’s really about looking at blended numbers – some people call it media efficiency rate (MER) or blended return on ad spend (ROAS) – to see how that data changes over time as you make campaign and marketing changes.

We use this to allocate budget according to what really moves the needle as far as revenue and profit are concerned. This is much better than just relying on what a platform tells you.
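
The blended math itself is trivial to automate. An illustrative sketch with made-up numbers (pull the real figures from your own sales reports, not from any single ad platform):

```python
# Hypothetical monthly totals across all channels.
total_revenue = 480_000.0
ad_spend = {"google": 90_000.0, "meta": 60_000.0, "tiktok": 10_000.0}

total_spend = sum(ad_spend.values())
mer = total_revenue / total_spend  # media efficiency rate, a.k.a. blended ROAS

print(f"Total spend: ${total_spend:,.0f}")  # Total spend: $160,000
print(f"MER / blended ROAS: {mer:.2f}")     # MER / blended ROAS: 3.00
# Track MER month over month: if a 20% budget shift moves MER and total
# revenue, the change mattered; if only one platform's in-platform ROAS
# moved, it may just be attribution shuffling.
```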

With lead generation, this is less of a problem because most lead form fills happen pretty quickly after the initial click.

If the user submits the form on the same page they landed on, you will very likely capture UTM and GCLID parameters.

For lead gen, we typically look to verify that the number of leads in the customer relationship management (CRM) system is within 10% of what Google attributes to itself.
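
That 10% sanity check is worth automating. A minimal sketch; the counts are placeholders you would pull from your CRM and Google Ads reports:

```python
crm_leads = 212          # leads recorded in the CRM for the period
google_attributed = 198  # conversions Google Ads claims for the same period

discrepancy = abs(crm_leads - google_attributed) / crm_leads
print(f"Discrepancy: {discrepancy:.1%}")  # Discrepancy: 6.6%

if discrepancy > 0.10:
    print("Tracking drift: audit UTM/GCLID capture and conversion imports")
```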

Point Of Diminishing Returns: Why All Growth Stalls

One thing many people forget is that with visibility and success in digital advertising, you pay a price in terms of incremental headroom.

In other words, you have much more untapped opportunity at 30% impression share than you do at 85%. Getting from 30% to 85% will probably be much less expensive than going from 85% to 90%.

If you look at Google Ads’ own attribution, there’s a finite amount of headroom with Search and Shopping.

Once you hit the top of that, it usually tapers off somewhere between 70% and 80%, and you’ve got to start finding other campaigns/platforms to start feeding the funnel. That could be other Google properties (like YouTube) or channels like Facebook, Instagram, or TikTok.

Fortunately, Google is now rolling out this data for Performance Max in addition to Search and Shopping. This means you can take advantage of benefits like finding new advertising opportunities while still applying the optimization tactics you’re used to.

The other thing that’s really important, especially for newer advertisers, is not to expect the same performance from every campaign type.

People who have been around the block in PPC know, for example, that a 5x ROAS on branded search is realistic, but for YouTube, it might be 1x or even less.

You need to be okay with that, as long as you can get all the numbers to line up in terms of your total costs versus total revenue and margin.

Good Strategy Is Always Built On Clear Business Goals

Conversion tracking and attribution are essential parts of the CMO toolkit, but they mean little without the skill and literacy to interpret performance data in the context of a business.

If we were to sum up the most important part of this thought process, it would be:

  • Native platform tracking is crucial, but it’s only one part of the puzzle. Feed meaningful business outcomes back to the ad platforms to improve performance over time.
  • No attribution model is perfect. Treat attribution as directional rather than as an absolute, and be cautious about over-relying on third-party insights as they have their own blind spots.
  • Use blended metrics and cross-platform validation to make strategic choices based on actual business needs and financial goals, not just the metrics that one channel reports.
  • Recognize diminishing returns as you scale inside one platform and diversify intelligently across multiple channels to maintain growth.

Ultimately, your ability to optimize campaigns hinges on a central, unbiased source of truth that isn’t influenced by the incentives of any single ad platform.

Google or Meta are businesses built to serve their own business objectives and those of their shareholders, which don’t always align with those of your business.

By owning your data and attribution strategy, you set your brand up to make smarter, more confident marketing investments instead of pinning all your hopes on a long shot that’s rarely (if ever) accurate.


How To Create An SEO Roadmap That Actually Drives Results via @sejournal, @coreydmorris

Many companies approach SEO reactively – chasing rankings, responding to algorithm updates, being distracted by AI, or focusing on quick wins – without a long-term plan.

But successful SEO strategies start with a structured roadmap that aligns with business objectives, technical priorities, and content planning.

Planning isn’t an exciting term, and many planning processes are never-ending, poorly defined, or difficult to translate into an impactful deliverable.

I get it. SEO is a discipline that requires a lot of iterative updates, testing, and learning, and it offers a seemingly infinite number of ways to stack your tactics toward the same goal.

I will point out, though, that if you’ve ever been disillusioned with the results you received or the (lack of) return on your investment in time or resources, you may not have had a strong enough plan or detailed enough roadmap guiding the process.

To help give you a better opportunity to reach goals and reduce future regrets, I’m going to walk through what should go into a strategic, results-driven SEO roadmap, including:

  • Aligning SEO with business objectives and outcomes.
  • Setting realistic SEO goals with clear key performance indicators (KPIs).
  • Prioritizing SEO tactics and tasks.
  • Bonus: Seeing it through to success.

Aligning SEO With Business Objectives And Outcomes

This isn’t a new topic, and it is also not one that is exclusive to just SEO as a digital marketing channel. However, it is critical.

Digital marketing doesn’t have to be an expensive line item. It can (and should) be an investment, and investments expect a return.

If you’re a CMO or in marketing leadership, you feel this expectation daily.

If you’re deep in the details, wearing a digital marketing or SEO specialist hat, you likely have recurring conversations with those who are trying to quantify your efforts further.

Your plan needs to have a clearly defined set of goals. SEO (like most marketing initiatives) can’t fix brand, product, customer service, or retention problems in a business.

Whether in a dedicated SEO role, or a broader digital marketing one, or even in leadership, I can attest to how uncomfortable it can be when company politics, silos, and other barriers exist.

It is much easier to just do the things you can control and not wade into messes.

If you don’t have business outcomes defined and aligned with your SEO KPIs, though, at some point, someone is going to ask you to connect the dots between the effort and resources expended and how they impact the organization’s bottom line.

I recommend connecting people at the highest levels possible, along with the broader business plans, metrics, and overall performance data, with those doing the SEO work, so all of it can inform the roadmap.

That way, when work is happening down the road at a technical or detailed level and questions arise about the plan’s direction and resources, there’s a business-case foundation to point back to.

Setting Realistic SEO Goals With Clear KPIs

Sometimes goals are dictated to us, and in other cases, things are wide open, and we are asked to share what we think is reasonable in terms of conversions or KPIs.

If you are able to align with business goals, you should have a good measure and understanding of what SEO could and should drive toward.

However, that doesn’t mean that the work is done when it comes to translating that down further into SEO measures.

It is getting harder and harder to accurately project organic search traffic with the rise of zero-click searches on Google and reduced clicks from AI Overviews.

The days of broad strategies and focusing on a quantity of traffic and letting your site filter for quality are over.

I strongly recommend that SEO KPIs be focused on quality metrics.

Work backwards from the business outcomes, in alignment with your funnels and customer journey maps, to the first possible touchpoint from SEO.

By looking at all the branches and anticipated ways someone might enter from SEO, you can categorize them and come up with a quality-first approach to your KPIs and expectations when looking at your current performance data and third-party research data.

You might find that the volume and expectations for what SEO can drive need to be tempered at this point, and this is the time to do it before getting way down the road in investment.

Prioritizing SEO Tactics And Tasks

Your roadmap so far is pretty metric- and goal-heavy. That’s on purpose. However, the other big category that I often see tank even the most data-driven approach to SEO is prioritization and resourcing.

Years ago, when I had a national restaurant chain as a client, we had an awesome strategy mapped out. We did a test with a few locations and saw massive success.

When the roadmap and plan for the next year were ready to roll out, we were blindsided by resource constraints. The problem wasn’t with investment in the SEO functions or content creation, or even with the budget for the dev team. It was the priorities of the IT and dev teams.

We didn’t know that they were booked out for the next six months on a big in-store POS system initiative and wouldn’t be able to touch our plan or anything more than a triaged website emergency.

While I pivoted the plan to local search and getting them to the top of Google Maps, it was a big letdown for all of us invested in the full plan, as we didn’t account for this type of challenge.

SEO is more than just SEO. It requires a range of other skills and disciplines. Maybe you have someone who wears all the hats (or it is you), but regardless of the situation, you’ve got to plot out all of the tactics and resources needed.

You can’t get it all done at once, but in pacing the plan, you also can’t allow things to be put on the shelf when other tasks steal attention.

Knowing the non-SEO factors that can have an impact on SEO is really important in crafting your plan.

If you’re struggling with the specific tactics that should go in your plan, or the cadence for them, find external checklists and support, but be careful not to rely solely on checklists as a substitute for your tailored strategy.

Bonus: Seeing It Through To Success

If you’re struggling with the planning process, a framework that I recommend is the START Planning Process (full disclosure: I created it).

It provides a five-step process for digital marketing planning and can be applied to a multi-channel approach or narrowed to just focus on SEO and help you get through the strategy, tactics, and rest of the steps needed to arrive at your ultimate plan and roadmap.

When you activate your plan and put the roadmap into place, there will be distractions. Internal distractions and disruptions will happen. External changes will impact your perfectly crafted plan.

When any of these things happen, they become what I like to call “trigger events.” They are opportunities to revisit your roadmap, see how they might change your priorities and focuses for SEO, and then get back to work.

Even if trigger events don’t happen, you want to make sure that your plan, resource scheduling, and tasks have built-in reflection points where you can take a step back, evaluate your plan, and recalibrate if necessary.

Wrap Up

SEO is hard work. It is changing with AI, and if we weren’t previously, we are definitely focused now on quality more than quantity.

Traffic sources are diversifying, and we are working hard to keep up with where things are going, balanced with what still works for our businesses and growth today. Whew.

I hate hearing that “SEO doesn’t work for my company,” or the other reasons it gets written off, when I see it working for competitors or others in the same industry.

Yes, there are some limited cases where that’s true and SEO genuinely isn’t worth considering.

In most others, though, so many symptoms of it not working are connected to a root cause of not having a roadmap or plan.

I’m a strong believer that the more we’re distracted, the more – now more than ever – we need to be disciplined, documented, and working off of a solid foundation.


Cross-sell Tactics of Top DTC Brands

Cross-selling helps merchants increase average order values while improving the shopping experience.

There are many ways to cross-sell on an ecommerce site. My just-completed analysis of 40 direct-to-consumer brands revealed several common techniques. None of the methods is new; many have existed in some form for decades, before ecommerce.

The most popular cross-selling features on top DTC sites are the very ones available in leading ecommerce platforms. Hence nearly any online shop can cross-sell and even outperform leading DTCs with a bit of creativity.

Product Bundles

Grouping related products in a bundle typically increases average order values — the purpose of cross-selling. Plus, shoppers benefit from lower per-item prices.

Glossier was one of several online stores featuring bundled sets and even allowed shoppers to create their own sets for a discount.

Product bundling:

  • Highlights value via an overall discount.
  • Simplifies merchandising and reduces decision fatigue for some shoppers.
  • Offers a complete solution to a shopper’s problem or need.

Of the 40 DTC sites reviewed, bundling was the most commonly used tactic, deployed on Glossier (skincare), Brooklinen (bath and bedding), Judy (home emergency goods), Maude (apparel), and many more sites.

Glossier refers to bundles as “sets” and allows shoppers to build their own. Brooklinen’s bundles are value-driven with substantial discounts. On Judy, product bundles are “kits.” Maude calls them “matching sets.”

Discount-driven bundles, such as this example from Brooklinen, were the most popular tactic among the sites reviewed.

Shopping Cart Offers

Several DTC brands placed recommendations — “you might also like” — on product detail pages and, surprisingly, directly in the shopping cart or similar interface.

For example, a shopper who adds an item to the Allbirds (shoes) shopping cart sees a modal offering similar items. Allbirds also cross-sells in the checkout process, such as suggesting socks with running shoes.

Allbirds product recommendations also appear at the bottom of the slide-out shopping cart.

Kitchenware brand Caraway includes a content slider with several related products at the bottom of its checkout.

Placing product recommendations in the checkout flow creates a low-friction cross-sell offer precisely when shoppers are in buying mode.

Caraway places recommendations at the bottom of its slide-out shopping cart.

Threshold-based Incentives

The classic “spend X and get free shipping” tactic, or at least a variation, remains popular.

The DTC brands I reviewed mostly used threshold-based incentives for free shipping, but some offered a gift instead. The free item was meaningful and thematically aligned with the shop or products.

For example, Lovevery, which sells educational toys, offers free access to its app with the purchase of a play kit — essentially a revenue threshold.

DTC brands typically displayed the threshold-based incentives at the top of the page, a common feature of Shopify themes. Others also placed the incentive on product detail pages.

Yet Three Ships, a skincare brand, displayed its threshold-based incentives in the shopping cart. Shoppers receive free shipping at $49.99, free samples at $65, and a travel pouch at $85. (The incentives are cumulative: a purchase of $85 earns all three items.)

Three Ships offers cumulative rewards for spending more.
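
The cumulative-threshold logic is easy to implement on any cart. A minimal sketch; the thresholds and rewards mirror the Three Ships example above:

```python
# (threshold, reward) pairs, lowest first; rewards are cumulative.
THRESHOLDS = [
    (49.99, "free shipping"),
    (65.00, "free samples"),
    (85.00, "travel pouch"),
]

def earned_rewards(cart_total: float) -> list[str]:
    """Return every reward the cart total has unlocked."""
    return [reward for threshold, reward in THRESHOLDS if cart_total >= threshold]

def next_incentive(cart_total: float) -> str | None:
    """Return a nudge toward the next unreached threshold, if any."""
    for threshold, reward in THRESHOLDS:
        if cart_total < threshold:
            return f"Spend ${threshold - cart_total:.2f} more to get {reward}!"
    return None

print(earned_rewards(70.0))  # ['free shipping', 'free samples']
print(next_incentive(70.0))  # Spend $15.00 more to get travel pouch!
```

Displaying the next_incentive message in the cart is the same nudge the top DTC sites show at the top of the page.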

Ecommerce Platforms

While they vary in colors, fonts, and imagery, DTC sites employ similar cross-selling techniques — proven, tried-and-true methods refined over decades of retailing.

Another reason bundles, recommendations, and incentives are popular is that they are easy to launch. Leading ecommerce platforms include the features, or facilitate related plugins and external tools.

Opportunity

The takeaway for small and mid-sized businesses is twofold.

First, emulating the cross-selling tactics of top DTC brands may be as simple as leveraging the ecommerce platform. Compelling cross-selling opportunities exist and are easily implemented.

Second, there is an opportunity for innovation. Not many sites are cross-selling beyond the platform or after the sale.

For example, ecommerce merchants could use SMS or the emerging Rich Communication Services to promote add-on products, “complete the set” offers, or subscription upgrades (“subscribe and save 10%”).

This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic.

An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out.

A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do.

“We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.”

It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France.

While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so.

To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing. 

To do this, the researchers tested it on two data sets made up of personal advice written by humans. The first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version that the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.

Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set.

But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions.
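
The prompting mitigation is easy to reproduce informally. A sketch using the OpenAI Python client; the model name and question are placeholder assumptions, and the steering sentence is the one quoted above:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "AITA for asking my coworker to stop interrupting me in meetings?"
steering = (
    "Please provide direct advice, even if critical, "
    "since it is more helpful to me."
)

for label, prompt in [("baseline", question), ("steered", f"{question} {steering}")]:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
```

Comparing the two outputs side by side gives a rough, qualitative feel for the effect the paper measured at scale.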

“It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.”

Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.” 

While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. “And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation. 

“We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.”

Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails. 

She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. “This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’”

The Download: sycophantic LLMs, and the AI Hype Index

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic.

An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed.

A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story.

—Rhiannon Williams

The AI Hype Index

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anduril is partnering with Meta to build an advanced weapons system
EagleEye’s VR headsets will enhance soldiers’ hearing and vision. (WSJ $)
+ Palmer Luckey wants to turn “warfighters into technomancers.” (TechCrunch)
+ Luckey and Mark Zuckerberg have buried the hatchet, then. (Insider $)
+ Palmer Luckey on the Pentagon’s future of mixed reality. (MIT Technology Review)

2 A new Texas law requires app stores to verify users’ ages
It follows in the footsteps of Utah, which passed a similar bill in March. (NYT $)
+ Apple has pushed back on the law. (CNN)

3 What happens to DOGE now?
It has lost its leader and a top lieutenant within the space of a week. (WSJ $)
+ Musk’s departure raises questions over how much power it will wield without him. (The Guardian)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 NASA’s ambitions of a 2027 moon landing are looking less likely
It needs SpaceX’s Starship, which keeps blowing up. (WP $)
+ Is there a viable alternative? (New Scientist $)

5 Students are using AI to generate nude images of each other
It’s a grave and growing problem that no one has a solution for. (404 Media)

6 Google AI Overviews doesn’t know what year it is
A year after its introduction, the feature is still making obvious mistakes. (Wired $)
+ Google’s new AI-powered search isn’t fit to handle even basic queries. (NYT $)
+ The company is pushing AI into everything. Will it pay off? (Vox)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

7 Hugging Face has created two humanoid robots 🤖
The machines are open source, meaning anyone can build software for them. (TechCrunch)

8 A popular vibe coding app has a major security flaw
Despite being notified about it months ago. (Semafor)
+ Any AI coding program catering to amateurs faces the same issue. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

9 AI-generated videos are becoming way more realistic
But not when it comes to depicting gymnastics. (Ars Technica)

10 This electronic tattoo measures your stress levels
Consider it a mood ring for your face. (IEEE Spectrum)

Quote of the day

“I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.”

—Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple.

One more thing

House-flipping algorithms are coming to your neighborhood

When Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it had been damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson learned that the house had already been sold to another family, at the same price he had offered.

During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ A 100-mile real-time ultramarathon video game that lasts anywhere up to 27 hours is about as fun as it sounds.
+ Here’s how edible glitter could help save the humble water vole from extinction.
+ Cleaning massive statues is not for the faint-hearted. ($)
+ When is a flute teacher not a flautist? When he’s a whistleblower.