Google has officially confirmed the completion of its June 2024 spam update, a week-long process aimed at enhancing search result quality by targeting websites that violate the company’s spam policies.
The update began on June 20, 2024, and was announced via Google’s Search Central Twitter account.
This spam update is part of Google’s ongoing efforts to combat web spam and improve user experience.
It’s important to note that this is not the algorithmic component of the site reputation abuse update, which Google has clarified is yet to be implemented.
Key Points Of The June 2024 Spam Update
The update targets websites violating Google’s spam policies.
It is separate from the anticipated site reputation abuse algorithmic update.
The rollout process lasted approximately one week.
Google’s spam updates typically focus on eliminating various forms of web spam, including:
Automatically generated content aimed solely at improving search rankings
Purchased or sold links intended to manipulate rankings
Thin, duplicated, or poor-quality content
Hidden redirects or other deceptive techniques
This latest update follows Google’s previous spam update in March 2024.
Despite that update’s impact, some AI-generated content performed well in search results.
An analysis by Search Engine Journal’s Roger Montti revealed that certain AI spam sites ranked for over 217,000 queries, with more than 14,900 ranking in the top 10 search results.
The June update is expected to refine Google’s spam detection capabilities further. However, as with previous updates, it may cause fluctuations in website search rankings.
Those engaging in practices that violate Google’s spam policies or heavily relying on AI-generated content may see a decline in their search visibility.
Conversely, legitimate websites adhering to Google’s guidelines may benefit from reduced competition from spammy sites in search results.
SEO professionals and website owners are advised to review their sites for spammy practices and ensure compliance with Google’s Webmaster Guidelines.
For more information about the June 2024 spam update and its potential impact, refer to Google’s official communication channels, including the Google Search Central Twitter account and the Google Search Status Dashboard.
How does Google know if its search results are improving?
As Google rolls out algorithm updates and claims to reduce “unhelpful” content, many wonder about the true impact of these changes.
In an episode of Google’s Search Off The Record podcast, Elizabeth Tucker, Google Search Director of Product Management, discusses how Google measures search quality.
This article explores Tucker’s key revelations, the implications for marketers, and how you can adapt to stay ahead.
Multifaceted Approach To Measurement
Tucker, who transitioned to product management after 15 years as a data scientist at Google, says it’s difficult to determine whether search quality is improving.
“It’s really hard,” she admitted, describing a comprehensive strategy that includes user surveys, human evaluators, and behavioral analysis.
Tucker explained:
“We use a lot of metrics where we sample queries and have human evaluators go through and evaluate the results for things like relevance.”
She also noted that Google analyzes user behavior patterns to infer whether people successfully find the information they seek.
The Moving Target Of User Behavior
Tucker revealed that users make more complex queries as search quality improves.
This creates a constantly shifting landscape for Google’s teams to navigate.
Tucker observed:
“The better we’re able to do this, the more interesting and difficult searches people will do.”
Counterintuitive Metrics
Tucker shared that in the short term, poor search performance might lead to increased search activity as users struggle to find information.
However, this trend reverses long-term, with sustained poor performance resulting in decreased usage.
Tucker cautioned:
“A measurement that can be good in the long term can be misleading in the short term.”
Quantifying Search Quality
To tackle the challenge of quantifying search quality, Google relies on an expansive (and expanding) set of metrics that gauge factors like relevance, accuracy, trustworthiness, and “freshness.”
But numbers don’t always tell the full story, Tucker cautioned:
“I think one important thing that we all have to acknowledge is that not everything important is measurable, and not everything that is measurable is important.”
For relatively straightforward queries, like a search for “Facebook,” delivering relevant results is a comparatively simple task for modern search engines.
However, more niche or complex searches demand rigorous analysis and attention, especially concerning critical health information.
The Human Element
Google aims to surface the most helpful information for searchers’ needs, which are as diverse as they are difficult to pin down at the scales Google operates at.
Tucker says:
“Understanding if we’re getting it right, where we’re getting it right, where needs focus out of those billions of queries – man, is that a hard problem.”
As developments in AI and machine learning push the boundaries of what’s possible in search, Tucker sees the “human element” as a key piece of the puzzle.
From the search quality raters who assess real-world results to the engineers and product managers, Google’s approach to quantifying search improvements blends big data with human insight.
Looking Ahead
As long as the web continues to evolve, Google’s work to refine its search quality measurements will be ongoing, Tucker says:
“Technology is constantly changing, websites are constantly changing. If we just stood still, search would get worse.”
What Does This Mean?
Google’s insights can help align your strategies with Google’s evolving standards.
Key takeaways include:
Quality over quantity: Given Google’s focus on relevance and helpfulness, prioritize creating high-quality, user-centric content rather than aiming for sheer volume.
Embrace complexity: Develop content that addresses more nuanced and specific user needs.
Think long-term: Remember that short-term metrics can be misleading. Focus on sustained performance and user satisfaction rather than quick wins.
Holistic approach: Like Google, adopt a multifaceted approach to measuring your content’s success, combining quantitative metrics with qualitative assessments.
Stay adaptable: Given the constant changes in technology and user behavior, remain flexible and ready to adjust your strategies as needed.
Human-centric: While leveraging AI and data analytics, don’t underestimate the importance of human insight in understanding and meeting user needs.
As Tucker’s insights show, this user-first approach is at the heart of Google’s efforts to improve search quality – and it should be at the center of every marketer’s strategy as well.
Listen to the discussion on measuring search quality in the video below, starting at the 17:39 mark:
Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, June 2024
Many owners and executives still have difficulty budgeting for online marketing.
Budgeting for SEO can be complex and influenced by factors like project scope, industry competition, and specific services needed. There is no universal formula for calculating costs.
This article explores key SEO pricing components and how to calculate and plan your budget.
What Businesses Don’t Understand About Investing In SEO
SEO is an area where you truly get what you pay for. Investing adequately in SEO services can significantly impact your online presence and business growth.
According to recent data, over half of all SEO professionals work with monthly budgets ranging from $500 to $5,000, with 28.6% reporting budgets in the $1,001-$5,000 range.
Image from Search Engine Journal, May 2024
Many business owners are reluctant to invest in SEO, often because they lack understanding of how search marketing works and are too busy running their businesses to learn about SEO.
Most industries follow a standardized, step-by-step process to achieve specific outcomes.
Many business owners mistakenly assume SEO works the same way, treating it as a commodity.
This misconception leads them to fall for low-cost offers like $99/month “guaranteed page one” services from spammers and scammers, which never deliver meaningful results.
The Cost Of Cheap SEO
I belong to several internet marketing groups on Facebook. The number of noobs posing as SEO professionals and taking on clients is truly frightening.
It’s common to see a question like: “I just landed a client that wants to rank for [keyword x] – how do I do it?”
A close second is people using link schemes, specifically private blog networks and third-party pages known as parasite SEO, without ever explaining the risk to clients. Many use AI to generate content at scale without fact-checking.
However, AI can be a powerful tool when used ethically in SEO.
AI helps automate data analysis, identify patterns, and streamline content creation and optimization, which in turn helps to reduce SEO costs.
If business owners were just throwing money away by hiring an incompetent SEO, that would be bad enough. Unfortunately, the collateral damage from “cheap SEO” can go much deeper.
Business owners must remember that they’re ultimately responsible for any SEO work performed on their site. They should discuss the specific tactics service providers use before entering into an agreement.
Managing Your Resources
With Google utilizing 200+ (and likely many more) ranking factors, it’s easy to become intimidated and paralyzed.
The good news is that if you focus on just three factors, you can still crush it, regardless of your niche.
It must be natural. Avoid popular link schemes like private blog networks (PBNs) and paid guest posts. Instead, focus on building real links from topically relevant websites with high-quality content.
Quality is key: A lower number of high trust/high authority/relevant links can outperform a large quantity of lower quality links.
You Manage What You Measure – Set Goals
Before establishing a budget, one must define specific goals for a campaign.
Your goals should include measurable results, a defined timeframe, and an actual measurement for success.
At one time, success was measured solely by keyword rankings. While SERP rankings remain an important metric, they are not the most important.
I would argue that the most important metrics are those that directly impact the bottom line. Organic sessions, goal conversions, and revenue fall into that category.
Goal setting could include improving organic sessions by X%, increasing conversions by Y per month, and/or increasing revenues by Z%.
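A measurable goal like "improve organic sessions by X%" can be checked with simple arithmetic. The sketch below illustrates the idea; all figures (baseline sessions, current sessions, the 15% target) are invented for illustration, not taken from the article:

```python
# Hypothetical example: check campaign progress against a measurable goal.
# The baseline, current, and target values below are made up for illustration.

def pct_change(baseline: float, current: float) -> float:
    """Percent change from baseline to current."""
    return (current - baseline) / baseline * 100

baseline_sessions, current_sessions = 10_000, 11_800
target_growth_pct = 15  # goal: improve organic sessions by 15%

growth = pct_change(baseline_sessions, current_sessions)
print(f"Organic sessions growth: {growth:.1f}% (target {target_growth_pct}%)")
print("Goal met" if growth >= target_growth_pct else "Keep optimizing")
```

The same pattern applies to conversion and revenue goals: define a baseline, a target, and a timeframe before the campaign starts, then measure against them.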
When setting goals, it’s important to keep a couple of things in mind.
First, they need to be achievable. Stretch goals are fine, but pie-in-the-sky benchmarks can actually work as a disincentive.
Equally important: you need to give the campaign time to work.
“…in most cases, SEOs need four months to a year to help your business first implement improvements and then see potential benefit.”
Developing A Budget
Your goals will determine what tactics are needed for success. This, in turn, sets up a framework for developing an action plan and the budget necessary to support that plan.
This brings us full circle to positioning and paying attention to those factors that move the dial.
The answers to those questions will determine priorities as well as the volume of work needed to reach your goals.
In many cases, the actual work performed will be the same, regardless of budget level. The difference is the volume of work performed.
If you add twice the content and twice the links at budget level “B” compared to budget level “A,” you are more likely to achieve earlier success at the higher budget.
It takes time to properly plan, implement, and tweak a campaign to evaluate its success.
Also, remember that the lower the budget, the longer the journey.
How Much Can You Expect To Spend On SEO?
To execute a local campaign, you could budget between $1,001 and $5,000 per month, the most common budget range among SEO professionals SEJ surveyed in 2023.
The budget will likely be higher for a national or international campaign, with many SEO pros working with budgets exceeding $10,000 per month for broader campaigns.
Some firms offer a “trial package” at a lower price with no contract. This allows prospective clients to test their services while minimizing risk.
There are some options if you can’t afford to retain a top-level SEO pro. The most common is a one-time website SEO audit with actionable recommendations.
Just fixing your website will often lead to a meaningful boost in organic traffic. Content development and keyword analysis are other areas where you can get help from a pro for a one-time fixed rate.
SEO Cost Calculator – Measuring Organic Search (SEO) ROI
The following is a calculator commonly used for (incorrectly) measuring return on investment for SEO.
Organic Search ROI Calculation Assuming “One Shots”

Example: selling blue widgets

Number of new customers acquired via organic search in a given month: 10
Average net income (profit) per order: $100
Total profits from new organic search customers in a given month: $1,000
Monthly marketing budget (expense): $2,500
ROI = (monthly profits - marketing spend) / marketing spend = ($1,000 - $2,500) / $2,500 = -60%
The flaw in the above calculator is that it fails to take into consideration the lifetime value of a new customer.
Online retailers need repeat business to grow. By not calculating the lifetime value of a new customer, the true ROI is grossly understated.
The right way to calculate ROI is to build lifetime value into the calculator.
To calculate the cost of SEO and its true ROI, use this formula:
(Average lifetime profits from new customers acquired in one month - monthly organic marketing spend) divided by monthly organic marketing spend.
Organic Search SEO ROI Calculation Assuming Lifetime Value

Same example: selling blue widgets

Number of new customers acquired via organic search in a given month: 10
Average net income (profit) per order: $100
Total profits from new organic search customers in a given month: $1,000
Average number of orders per customer over a “lifetime”: 5
Total average lifetime profit: $5,000
Monthly marketing budget (expense): $2,500
ROI = (lifetime profits - marketing spend) / marketing spend = ($5,000 - $2,500) / $2,500 = 100%
As you can see, that one variable makes a huge difference in how the ROI is stated.
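The two calculations above can be sketched in a few lines of Python. The figures mirror the blue-widget example, and ROI here uses the conventional net formula, (profit - spend) / spend:

```python
# Sketch of the article's two ROI calculations using the blue-widget figures.
# ROI is computed as net return: (profit - spend) / spend.

def roi(profit: float, spend: float) -> float:
    """Return ROI as a fraction: (profit - spend) / spend."""
    return (profit - spend) / spend

new_customers = 10
profit_per_order = 100        # average net income (profit) per order
monthly_spend = 2_500         # monthly marketing budget (expense)
orders_per_lifetime = 5       # average orders per customer over a "lifetime"

one_shot_profit = new_customers * profit_per_order       # $1,000
lifetime_profit = one_shot_profit * orders_per_lifetime  # $5,000

print(f"One-shot ROI: {roi(one_shot_profit, monthly_spend):.0%}")   # -60%
print(f"Lifetime ROI: {roi(lifetime_profit, monthly_spend):.0%}")   # 100%
```

Swapping one-shot profit for lifetime profit is the only change between the two runs, which is exactly why ignoring customer lifetime value understates SEO's true return.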
SEO Campaigns Are Long-Term Investments
Unlike PPC, an organic search campaign will not yield immediate results.
A comprehensive SEO campaign will involve a combination of technical SEO, content marketing, and link-building. Even when executed to perfection, it takes time for Google to recognize and reward these efforts.
That said, the traffic earned from these efforts is often the most consistent and highest-converting among all channels.
FAQ
How do SEO professionals measure success?
The top metrics used to measure SEO performance are click-through rate (CTR), keyword rankings, and branded vs. non-branded traffic.
What is the most common budget range for SEO campaigns?
The most common SEO budget range is between $1,001 and $5,000 per month, with 28.6% of respondents working within this range.
What are the primary factors that affect SEO budgeting?
Determining the appropriate budget for SEO involves considering several key components that can influence the overall cost. These factors include the scope of the SEO project, the level of competition within the industry, and the specific types of SEO services that are required. For example, a small business in a niche market with low competition might budget around $1,000 per month for local SEO services, focusing on optimizing its Google My Business profile and building local citations. In contrast, an ecommerce company targeting an audience in a highly competitive industry might need to budget $5,000 to $10,000 monthly for a comprehensive SEO strategy that includes extensive link building and technical SEO audits.
What risks are associated with choosing low-cost SEO services?
Opting for low-cost SEO services poses significant risks to a business. These services often fail to comply with ethical SEO practices, resulting in the use of tactics such as link schemes or private blog networks (PBNs), which can be detrimental to a site’s reputation and rankings. Such practices can potentially attract penalties from Google, severely compromising a site’s online visibility and trustworthiness. It is crucial for business owners to be vigilant and discerning when selecting SEO professionals to avoid these harmful consequences.
Google’s SearchLiaison responded to a plea on X (formerly Twitter) about ridiculously poor search results, acknowledging that Google’s reviews algorithm could be doing a better job and outlining what’s being done to stop rewarding sites that shouldn’t be ranking in the first place.
Questioning Google’s Search Results
The exchange with Google began with a post about a high-ranking site that allegedly fell short of Google’s guidelines.
“Instead of a third-party review (which is likely what searchers are looking for), Google ranks an article backed by the brand:
Searchers land in an advertorial built off marketing materials:
So little care that they even left briefing notes in the published version 😞
And I think I found the reason why it ranks #1… Money.”
The general responses to the tweets were sympathetic, such as this one:
“WILD.
And this is on page 1…
Is this what writing for readers is? Is this what people need/want?
I think of folks like my mom here who wouldn’t know better and to dig more.
It looks and seems nice, must be trustworthy.
I mean, that’s their goals, right? Dupe and dip.”
Google’s Algorithms Aren’t Perfect
SearchLiaison responded to those tweets to explain that he personally goes through the feedback submitted to Google and discusses it with the search team. He also spoke about the monumental scale of ranking websites, saying that Google is indexing trillions of web pages, and because of that the ranking process is itself scaled and automated.
“Danny, I appreciate where you’re coming from — just as I appreciated the post that HouseFresh originally shared, as well as this type of feedback from others. I do. I also totally agree that the goal is for us to reward content that’s aligned with our guidance. From the HouseFresh post itself, there seemed to be some sense that we had actually improved over time:
“In our experience, each rollout of the Products Review Update has shaken things up, generally benefitting sites and writers who actually dedicated time, effort, and money to test products before they would recommend them to the world.”
That said, there’s clearly more we should be doing. I don’t think this is particularly new, as I’ve shared before that our ranking systems aren’t perfect and that I see content that we ought to do better by, as well as content we’re rewarding when we shouldn’t.
But it’s also not a system where any individual reviews content and says “OK, that’s great — rank it better” or “OK that’s not great, downrank it.” It simply wouldn’t work for a search engine that indexes trillions of pages of content from across the web to operate that way. You need scalable systems. And you need to keep working on improving those systems.
That’s what we’ll keep doing. We’re definitely aware of these concerns. We’ve seen the feedback, including the feedback from our recent form. I’ve personally been through every bit of that feedback and have been organizing it so our teams can look further at different aspects. This is in addition to the work they’re already doing, based on feedback we’ve already seen.”
Some of the takeaways from SearchLiaison’s statement are:
1. Google agrees that their algorithms should reward content that is aligned with their guidance (presumably guidance about good reviews, helpfulness, and spam).
2. He acknowledged that the current ranking systems can still use improvement in rewarding the useful content and not rewarding inappropriate content.
3. Google’s systems are scaled.
4. Google is committed to listening to feedback and working toward improving their algorithms.
5. SearchLiaison confirmed that they are reviewing the feedback and organizing it for further analysis to identify what needs attention for improvement to rankings.
What Is Taking So Long To Fix Google?
Someone else questioned Google’s process for rolling out updates that subsequently shake things up. It’s a good question because it makes sense to test an update to rankings to make sure the changes improve the quality of the sites being ranked rather than the opposite.
“Danny, aren’t all your ‘system improvements’ fully tested BEFORE rolling them out?
Surely your team was aware of the shakeup in the SERPs that these last few updates would cause.
Completely legitimate hobby sites written by passionate creators getting absolutely DECIMATED by these updates.
All in favor of Reddit, Pinterest, Quora, Forbes, Business Insider, and other nonsense gaining at their expense.
I guess what I’m saying is — surely this was not a surprise.
You guys knew this carnage was coming as a direct result of the updates.
And now — here we are, NINE months later — and there have been ZERO cases of these legitimate sites recovering. In fact, the March update just made it 100x worse.
And so Google is saying ‘yeah we f-d up, we’re working on it.’
But the question is—and I think I speak on behalf of thousands of creators when I ask—’What the hell is taking so long?’”
We know that Google’s third party quality raters review search results before an update is rolled out. But clearly there are many creators, site owners and search marketers who feel that Google’s search results are going the wrong way with every update.
SearchLiaison’s response is a good one because it acknowledges that Google is not perfect and that they are actively trying to improve the search results. But that does nothing to help the thousands of site owners who are disappointed in the direction that Google’s algorithm is headed.
As the July 1, 2024 shutdown date for Universal Analytics (UA) draws near, Google has announced new features and improvements for Google Analytics 4 (GA4).
These enhancements give marketers deeper insights and tools for cross-channel measurement and budget optimization.
Expanded Cross-Channel Reporting
GA4 is getting improved cross-channel reporting capabilities.
You will soon be able to integrate data from third-party advertising partners such as Pinterest, Reddit, and Snap directly into GA4 properties.
This will allow for a more complete view of campaign performance across platforms.
Additionally, GA4 will introduce aggregated impressions from linked Campaign Manager 360 accounts in the advertising workspace.
This feature will give advertisers a thorough overview of campaign performance across the entire marketing funnel.
AI-Powered Insights
Google is leveraging its AI capabilities to provide users with generated insights.
These AI-driven summaries will explain data trends and fluctuations using plain language, enabling businesses to make faster, more informed decisions based on their analytics data.
Advanced Planning & Budgeting Tools
Later this year, GA4 will introduce cross-channel budgeting features, including a projections report.
This tool will allow advertisers to track media pacing and projected performance against target objectives across multiple channels.
This addition should improve marketers’ ability to optimize media spend and allocate budgets more effectively.
Privacy-First Approach
GA4 continues to prioritize user privacy while delivering effective measurement solutions.
Upcoming features include support for Chrome Privacy Sandbox APIs and improvements to enhanced conversions.
Google says these updates will offer a complete picture of cross-channel conversion attribution in a privacy-safe manner.
Preparing For The Future
Steve Ganem, Director of Product Management for Google Analytics, highlights the platform’s commitment to adaptability:
“Google Analytics 4 is truly built to be durable for the future. We’ll continue to invest in giving you a tool that helps answer fundamental questions about your business across your consumer’s entire path to purchase, despite ongoing changes in the measurement landscape.”
As the sunset date for Universal Analytics approaches, Google encourages users who haven’t yet made the switch to complete their migration to GA4.
The company also reminds UA users to download any historical data they wish to retain before the July 1 shutdown date.
Google announced that it is ending continuous scrolling in the search results (SERPs) as a way to speed up the serving of search results. Many in the search marketing community question that explanation. What’s really going on here?
Continuous Scroll In Search Results
Infinite scroll is a way of presenting content, popularized by social media, in which users navigate aimlessly in a state of constant discovery. It’s purposeless navigation.
In 2021, Google adopted continuous scrolling in the mobile search results, which showed up to four pages’ worth of web results before requiring users to click a link to see more. This change was welcomed by site owners and the search marketing community because it created the possibility of exposing more sites to searchers.
No More Continuous Scroll
The Verge recently published a report that Google has decided to remove continuous scroll in order to serve faster search results. The change happens first in desktop search results, to be followed later by a change to the mobile search results.
“In its place on desktop will be Google’s classic pagination bar, allowing users to jump to a specific page of search results or simply click “Next” to see the next page. On mobile, a “More results” button will be shown at the bottom of a search to load the next page.”
What’s The Real Impact?
While Google claims that the change is to help it serve faster search results, many in the search marketing community are skeptical about the impact, and with good reason: the U.S. Department of Justice released emails showing Google’s top management colluding on ways to show more advertising in the search results.
Brett Tabke, founder of Pubcon search marketing conference (and the person who invented the acronym SERPs), offered his opinion about the change to continuous scroll:
“It effectively boxes more clicks on to page one. That will result in a higher percentage of clicks going to Ads and Google properties. I think it is more evidence that Google is on a path to a new version of portal and away from search. Organic search itself will move to page 2, and I believe eventually to a new domain.
They will move away from organic results on page one. So what is left?
1) Google Ads
2) Google property links
3) Google Overviews vomit and
4) a link to page two.
They are on a path to fulfilling all general “searches” with their own responses in some form or another. When they don’t have a perfect response, maybe they will do “people also ask” and those lead back to a SERP where they can fulfill the search with their own properties and responses.”
Brett is not alone in his skepticism.
In what can be seen as a general sign of disbelief of Google’s motivations, many people have posted their skeptical opinions on X (formerly Twitter).
“Continuous scroll allows everyone to be on page one.
We prefer to crush your spirit.
It’s far more humiliating to be on page 6.
Pagination in search allows this ✅”
Good For Goose. Not For Gander?
While there are many voices who see dark reasons for Google’s decision to end continuous scrolling in the SERPs, there are some who see it differently.
Kevin Indig tweeted an uncomfortable truth about continuous scrolling: it is not universally a good feature.
“I’ve found continuous scroll to be a subpar solution for websites as well.”
Continuous scrolling is a useful feature for social media but when it comes to other kinds of websites, it’s the answer to a question that nobody is asking. Infinite scrolling is generally a poor user experience outside of the context of social media.
What’s kind of hard to ignore is that (arguably) most site owners and search marketers agree that it’s a poor user experience, inappropriate for many contexts or in some cases problematic for SEO.
So it’s worth stepping back and considering that infinite scroll works within the context of social media, where aimless browsing and interaction make sense, but makes less sense in contexts of purposeful browsing, such as an ecommerce site, an informational site, or even a search result. Purposeful browsing demands purposeful navigation, not aimless navigation.
Seen in that light, perhaps it might have been more believable had Google insisted that continuous scrolling was a poor user experience that didn’t fit the context of search results. Google’s chosen explanation is not going over very well.
In a recent exchange, Google’s Search Liaison addressed concerns about using branded keywords in articles.
The discussion, which unfolded over several tweets, centered on the impact of mentioning specific brand names in product reviews and other content.
Jake Boly, a content creator, initially asked why his articles featuring unique content consistently ranked on pages 3-4 of search results, speculating that it might be due to the presence of branded terms.
This sparked a debate about SEO best practices and Google’s ranking algorithms.
Conflicting Advice from SEO Experts
Taleb Kabbara, an SEO professional, suggested mentioning branded keywords could harm rankings, advising against using terms like “new balance” in review titles.
He claimed to have audited numerous sites and observed negative ranking impacts due to such keywords.
“Jake, trust me it’s the word ‘new balance’. Don’t expect to rise to page 1 with 3rd party branded content.
On a related note, your site tanked because of those keywords.
I sound like a conspiracy theorist here, but i’ve audited dozens of sites already, feel free to disagree…”
Google’s Search Liaison pushed back:
“No, you shouldn’t be afraid to mention the brand name of something you are reviewing. It’s literally what readers would expect you to do, and our systems are trying to reward things that are helpful to readers.”
The Google representative explained that writing a review without mentioning the product being reviewed would be counterintuitive.
They emphasized that Google’s systems aim to find and rank content that’s genuinely useful to readers, regardless of using branded terms.
Evidence Supporting Google’s Stance
To further support their point, the Liaison provided evidence from a specific search query for “new balance minimus tr v2 review.”
They highlighted that the top result for this query was not from a big brand but from an individual reviewer, demonstrating that Google can rank independent content when it’s relevant and helpful.
“As I said in my earlier reply: ‘This is something anyone can easily debunk themselves by simply searching on the results.’
Top result for this query isn’t a big brand. The YouTube videos aren’t from big brands. They’re from Jake. And ideally, I would agree that if our systems…”
— Google SearchLiaison (@searchliaison) June 25, 2024
Reaffirming Best Practices
The conversation took an additional turn when Mike Hardaker shared advice he had received about no longer ranking for branded keywords. Google’s Search Liaison responded succinctly, “Yeah, don’t do that,” reaffirming their stance against avoiding branded terms in content.
Yeah, don’t do that.
— Google SearchLiaison (@searchliaison) June 25, 2024
Why SEJ Cares
This exchange clarifies a misconception with direct communication from Google on its approach to ranking content containing branded keywords.
It reminds publishers to write the best content for readers rather than attempt to game the system by avoiding specific terms.
Now, what about the content of the actual calls? Well, you can use that, too.
Let's dive into how you can use call analysis to further inform your strategy.
How To Analyze Your Call Data
The insights you collect from customer phone interactions can have a game-changing impact on your business.
But you want to make sure the effort required to dig into those calls is worth it for your team.
This is where AI and machine learning technology can be utilized effectively to streamline your process and save time.
For example, Conversation Intelligence is an AI-powered tool by CallRail that automatically records, transcribes, and analyzes each inbound and outbound call.
With transcriptions that have near-human level accuracy, Conversation Intelligence goes a step further by spotting keywords, tagging calls automatically, and qualifying leads with powerful automation rules.
Plus, with multi-conversation insights, you can easily transform countless conversations into actionable insights at scale.
Not only does this analysis unlock deeper insights to help you catch customer trends and spot long-term shifts, but it also tells you what you should focus on in your content.
2. Website Form Submissions
Another effective way to gather essential audience insights is through website form tracking.
When this data is paired with deeper analytics, you can gain a clear understanding of what drives the most qualified leads for your business.
With Form Tracking, you can find out exactly which ad or keyword made someone click “submit” on your form.
Launched last year by CallRail, this tool allows you to build custom forms or integrate existing ones, pairing the data with inbound call conversions for a holistic view of your marketing efforts.
Combining Call Tracking And Form Tracking
Leads often connect with businesses through multiple channels, so focusing on just one source isn’t really enough.
By using Call Tracking and Form Tracking together, you get a comprehensive overview of your leads’ entire customer journey.
Both of these tools essentially work by installing a single line of JavaScript code on your site, which captures and relays information about each of your leads back to CallRail.
You can easily evaluate the various campaigns that you’re running, like paid ads, social media posts, email nurture campaigns, etc. – all of which could be opportunities to incorporate tracking numbers and links to your forms.
Using both a tracking number and a form tracking link gives your leads the option to choose how they prefer to contact your business.
And as they reach out, you’ll be able to measure which campaigns and which conversion type – calls or forms – are getting the best results.
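The “single line of JavaScript” installation described above typically follows the standard async script-loader pattern. Here is a generic, hypothetical sketch of that pattern – this is not CallRail’s actual embed code, and the URL and account ID are placeholders:

```html
<!-- Hypothetical tracking snippet (placeholder URL and account ID).
     The script loads asynchronously so it doesn't block page rendering;
     once loaded, it records visitor/session data and can swap in
     campaign-specific tracking phone numbers. -->
<script>
  (function (d, src) {
    var js = d.createElement('script');
    js.src = src;
    js.async = true;              // non-blocking load
    d.head.appendChild(js);
  })(document, 'https://cdn.example-tracker.com/loader.js?acct=ACCT-12345');
</script>
```

Because the loader is asynchronous, it can be placed anywhere in the page without slowing down the initial render – which is why vendors can reduce installation to a single copy-paste step.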
3. Customer Feedback & Surveys
If you really want a deep dive into the minds of your customers, surveys are an incredibly effective way to get feedback directly from the source.
Surveys allow you to ask your users targeted questions and receive precise answers about their preferences, pain points, and expectations.
You can then leverage this comprehensive data to guide your marketing strategy and fill any content gaps you may have.
Discover the type of content your customers prefer, the topics they are most interested in, and how they like to consume information.
Once they point out areas where they feel your content is lacking or what they would like to see more of, you can then fill the gaps in your strategy to give them what they want.
Integrating Customer Feedback Into Your Content
Understanding your audience can help you tailor your content to better meet their needs and preferences.
Here are some tips for how you can effectively integrate customer feedback into your content creation process:
Create a Feedback Loop: Ask your audience to rate the usefulness, quality, and relevance of your content to gain a clear picture of where you can improve. Then establish a system where their feedback continuously informs your content. Regularly conduct surveys and update your strategy based on the latest insights.
Prioritize High-Impact Content: Identify the topics and formats that resonate most with your audience and prioritize them in your content calendar. For example, if customers indicate a preference for video tutorials over written guides, focus more on creating video content. This ensures that you’re always aligned with what your audience finds most valuable.
Test and Iterate: After publishing content based on customer feedback, monitor its performance to see if it meets the intended goals. Use analytics to track engagement, shares, and other metrics. Be prepared to refine your content based on ongoing feedback and performance data.
Communicate Changes: Let your audience know that their feedback has been heard and implemented. This not only builds trust but also encourages more customers to participate in future surveys.
Unlock Higher Search Rankings With CallRail’s Data Solutions
Google is constantly changing its algorithms to produce higher quality search results for users, which presents numerous challenges for marketers and website owners.
Between the upcoming phase-out of third-party cookies and the recent core update, the search engine is cracking down heavily on content it deems as unhelpful.
That’s why it’s time to take a user-first approach to your content strategy.
By leveraging first- and zero-party data through methods like call tracking, form submissions, and customer surveys, you can create high-quality, relevant content that meets your audience’s needs and boosts your Google rankings.
CallRail’s suite of tools makes it easier to gather and analyze this data, helping you refine your marketing strategy and drive sustainable growth.
Ready to see the impact for yourself?
Try CallRail free for 14 days and start transforming your data into actionable strategies for higher ranking content.
Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!
Perplexity’s strategy behind its new Pages feature created a deep rift with publishers, but the reaction seems blown out of proportion. It’s much more interesting as a case study for user-directed AI content (UDC instead of UGC).
Perplexity Pages allows users to “create beautifully designed, comprehensive articles on any topic.” You can turn a thread, a prompt sequence, into a page about a topic.
As a regular Growth Memo reader, you quickly grasp that this is a growth strategy: ideally, users create AI content that ranks in organic search and brings visitors to perplexity.ai who convert into paying subscribers.
The growth strategy fits what CEO Aravind Srinivas describes as “an aggregator of information.” It holds power by providing a superior user experience, which allows it to channel demand and commoditize supply.
Drop In The Bucket
When we look at actual data, we can see that the media reaction is overblown. Not in the critique but in impact. It’s fair to ask Perplexity to adjust attribution, follow web standards like robots.txt, and use official IPs like search engines do as well.
According to developer Robb Knight, Perplexity crawls the web with a headless browser that masks its user agent and uses undisclosed IP addresses.
CEO Srinivas said Perplexity obeys robots.txt, and the masked IP came from a third-party service. But he also mentioned that “the emergence of AI requires a new kind of working relationship between content creators, or publishers, and sites like his.”
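For publishers who want to opt out of such crawling, the web standard in question is robots.txt. A minimal example, assuming Perplexity’s publicly declared crawler name (PerplexityBot), would look like this:

```
User-agent: PerplexityBot
Disallow: /
```

Of course, a robots.txt directive only works if the crawler actually honors it – which is exactly the behavior in dispute here.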
But in terms of benefit for Perplexity, Pages is a drop in the bucket.
Image Credit: Kevin Indig
91% of organic traffic to perplexity.ai comes from branded terms like “perplexity.”
Only 47,000 out of 217,000 (21.6%) monthly visitors to Pages come from organic, non-branded keywords globally.
In the US, it’s 55% (20,000/36,000). However, compared to x monthly visits from branded terms, Pages doesn’t make a dent in Perplexity’s organic traffic.
Image Credit: Kevin Indig
In reality, most traffic to Perplexity comes through its brand and word of mouth. The recent media coverage might have helped Perplexity more than it harmed. The site has hit new all-time traffic highs every day since January 2024, according to Similarweb.
Perplexity’s whole domain has only 950 pages, of which Pages make up almost 600. Compared to other sites – like Wikipedia’s 6.8 million articles on the English version alone – that’s just not a lot. Stronger scale effects will emerge as Pages get more traction. Right now, Pages is a nascent beta feature.
Taking a closer look at its performance, the most searched-for keyword Pages rank in the top 3 for is “was candy montgomery guilty” (600 MSV). The most difficult keyword it ranks in position one for is “when was the first bitcoin purchase” (KD: 76, MSV: 30). In other words, Pages still has a long way to go.
An n=1 (!) text similarity comparison with GoTranscript between Perplexity’s page for “bitcoin pizza day” and its four linked sources shows little evidence of plagiarism:
Text comparison between Perplexity’s and NationalToday’s article about Bitcoin Pizza Day (Image Credit: Kevin Indig)
The “missing” attribution issue seems to have been fixed, as the example below shows.
Perplexity highlights sources for answers at the top (Image Credit: Kevin Indig)
The results showed the chatbot at times closely paraphrasing WIRED stories, and at times summarizing stories inaccurately and with minimal attribution.
I wasn’t able to confirm or deny cases of hallucination, but I expect better models to get to a point at which they can summarize existing content flawlessly. The reality is, we’re not there yet. Google’s AI Overviews have also been shown to include wrong facts or make things up.
Google seems to have been able to mitigate the problem quickly, which is why I expect the degree of hallucination to drop.
One underlying issue of the plagiarism critique is that a search for the exact title of an article returns that article.
Of course, Perplexity should return a summary of an article when users prompt it. What else should Perplexity show? The same argument came up in the lawsuit between OpenAI and the NY Times.
Triggered
Besides the crawling issues Perplexity needs to fix, the media’s reaction seems to be triggered by Perplexity’s positioning.
One sentence in Perplexity’s announcement of Pages gets to the heart of the underlying issue:
“With Pages, you don’t have to be an expert writer to create high quality content.”
The page also mentions:
“Crafting content that resonates can be difficult. Pages is built for clarity, breaking down complex subjects into digestible pieces and serving everyone from educators to executives.”
All examples of Pages listed in the announcement are about “how to” or “what is” topics:
“Beginner’s guide to drumming”
“How to use an AeroPress”
“Writing Kubernetes CronJobs”
“Steve Jobs: Visionary CEO”
Etc.
That’s exactly the challenge AI poses to writers: AI can increasingly cover clearly defined content formats like guides or tutorials. I can see how this is triggering to journalists.
User-Directed Content
Note how Perplexity doesn’t create all the content for Pages but takes direction from humans through prompts (UDC).
Instead of writing a whole article, humans put the puzzle pieces together and their author bio stamp on a Page.
I expect the same to happen with other content types, like reviews, and for platforms like Google, Tripadvisor, Yelp, G2 & Co. to provide corresponding tools that make content creation easier. The biggest challenge will be to keep quality high and reduce useless information to a minimum.
The big question is whether a build like Pages can compete with a purely human-written site like Wikipedia, which currently has 116,000 active contributors.
The bigger “growth play” behind Pages (IMHO) is how Perplexity creates AI (video) podcasts out of summarized articles that outrank original results.
“Perplexity then sent this knockoff story to its subscribers via a mobile push notification. It created an AI-generated podcast using the same (Forbes) reporting — without any credit to Forbes, and that became a YouTube video that outranks all Forbes content on this topic within Google search.”
Perplexity outranks publishers with video podcasts summarizing articles (Image Credit: Kevin Indig)
Google will have to figure out how to prevent LLMs from repurposing the content of publishers.
What remains after examining the facts is the realization of how difficult it is to balance giving an AI answer while sending traffic to sources. Why should users click when most of their questions are answered?
On the other side of the coin, publishers themselves can provide summaries of their articles. Therefore, the key challenge for Perplexity – and anyone else who wants to create large-scale AI content for Search – is adding unique value on top of AI summaries.
The path to unique value from AI summaries and other AI content is personalization.
A system that recognizes your preferred level of understanding of a topic can make AI summaries more useful to you. Perplexity is a wrapper around different LLMs, but if it collects significant information about users and personalizes output, it can add value beyond fast answers.
Device operating system makers like Alphabet and Apple have the biggest advantage when it comes to user data since they sit at the top of the food chain.
A strong example is Apple Intelligence, which could likely answer questions currently provided by guides and tutorials on Google or Perplexity.
Apple Intelligence (abbreviated “AI” – nice one, Apple!) has full context through location (Apple Maps), third-party app usage, Siri prompts, email (Apple Mail), and other sources, which creates a nice base to personalize results on. The web is just one body of knowledge, with a much sexier one waiting on our Dropbox, Gmail inbox, and iPhone photos.
Today, personalized answers are a vision and a demo.
But at some point in the future, personalization will create better answers than any generic LLM summary and surely more than any human-written guide.
The value of defined and generic knowledge is on a collision course with LLM bombers. At the same time, the value of personalized knowledge, human experience, and trustworthy expertise is skyrocketing.
Google updated its structured data guidance for a beta carousels rich result intended for users in the European Economic Area (EEA), part of Google’s preparations for the Digital Markets Act (DMA). The new guidance covers how to use this structured data on summary or category pages and on paginated category pages.
Carousel (Beta) Rich Results For Aggregators & Suppliers
The carousel (beta) rich result is intended for travel, local, and shopping sites, for queries shown to users in the EEA. Google Search users can scroll horizontally across tiles that contain information specific to the context of the search. Sites that use this new structured data become eligible to be featured in the new carousel rich results.
The original announcement from February 2024 explained that display for shopping queries will at first be limited to Czechia, France, Germany and the UK.
Updated Documentation
The new documentation consists of a three-sentence paragraph added to the current structured data carousels (beta) documentation.
This is the new guidance that was added to the documentation:
“Mark up all items that are on the summary or category page. For paginated categories, add an ItemList to each subsequent page and include the entities that are listed on that page. For infinite scroll, focus on marking up the entities that are initially loaded in the viewport.”
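To make the guidance concrete, here is a minimal, hypothetical example of an ItemList for one page of a paginated category – the entity types, names, and URLs are placeholders, and the same pattern would be repeated on each subsequent page with only that page’s entities:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "LocalBusiness",
        "name": "Example Hotel One",
        "url": "https://example.com/hotels/one"
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "LocalBusiness",
        "name": "Example Hotel Two",
        "url": "https://example.com/hotels/two"
      }
    }
  ]
}
</script>
```

Per the new guidance, for infinite scroll the markup would instead cover only the entities initially loaded in the viewport.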
The changelog has a notation that explains the reason for the update:
“Marking up categories with many items for structured data carousels (beta)
What: Added guidance on how to mark up categories with many items to the structured data carousels (beta).
Why: We received a question through our feedback button about how to implement this markup for categories with many items, such as paginated content or infinite scroll.”