Unlocking The Secrets Of Google Ad Auctions

In the world of search marketing, ad auction dynamics play a crucial role in determining ad placements and costs.

Since the DOJ trial against Google, a few elements of the ad auction have gained visibility in the advertising community.

Due to the nature of the trial, the nuances of the auction have been portrayed as serving primarily to increase ad costs. But while higher costs per click (CPCs) are rightfully viewed with skepticism, consider that they may be a side effect of something advertisers would actually want.

I believe nobody should care about CPC.

Instead, the focus should be on cost per action (CPA), return on ad spend (ROAS), return on investment (ROI), or another metric more closely related to business outcomes than CPC.

If you disagree with that premise, you will disagree with the rest of my post. But if you are willing to consider that a higher CPC is not always a bad thing, read on to learn how to explain it to a boss or client who is always on your case about CPCs being too high.

We’ll explore key components of ad auctions, including ad rank thresholds and reserve prices, out-of-order promotions, Randomized Generalized Second-Price (RGSP) mechanisms, and pCTR normalizers to understand how these elements work to create a more effective advertising ecosystem.

But first, let’s cover some of the basics of the ad auction.

The Importance Of Ad Rank

Ad Rank is a fundamental component of ad auctions, balancing bid amounts with ad quality to determine ad placement on the search results page. The basic formula is:

Ad Rank = Max CPC × predicted CTR

This formula ensures that both bid amount and ad quality are considered when determining ad placement.

Predicted CTR (pCTR) is an estimate of how likely it is that an ad will be clicked when shown for a particular search query. This metric is critical because it reflects the ad’s relevance and expected performance.
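As a quick illustration of how these two factors trade off, here is a minimal sketch of the Ad Rank calculation with hypothetical numbers (Google's real auction uses many more signals):

```python
# Minimal sketch of Ad Rank ordering. Bids and pCTRs are made-up
# illustrative numbers, not real auction data.

def ad_rank(max_cpc: float, pctr: float) -> float:
    """Ad Rank = Max CPC x predicted CTR."""
    return max_cpc * pctr

ads = {
    "A": {"max_cpc": 10.0, "pctr": 0.03},  # high bid, low relevance
    "B": {"max_cpc": 2.0, "pctr": 0.10},   # low bid, high relevance
}

ranked = sorted(ads, key=lambda name: ad_rank(**ads[name]), reverse=True)
print(ranked)  # ['A', 'B']: A's rank 0.30 beats B's 0.20
```

Note that a ten-cent difference in rank, not the five-fold difference in bid, decides the ordering.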

pCTR Impacts Actual CPC

The actual cost-per-click (CPC) that advertisers pay in an ad auction is influenced by the predicted click-through rate (pCTR) of their ads.

Essentially, ads with higher pCTR can achieve better ad positions at a lower actual CPC compared to ads with lower pCTR.

This encourages advertisers to create highly relevant and engaging ads that align with user intent, as improving pCTR can lead to more efficient spending and better ad placements.
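A commonly cited simplification of the second-price mechanism is that you pay just enough to beat the Ad Rank of the ad below you: that Ad Rank divided by your own pCTR, plus one cent. A toy sketch with made-up numbers:

```python
# Simplified second pricing (a common public description, not Google's
# exact formula): actual CPC = Ad Rank below you / your pCTR + $0.01.

def actual_cpc(ad_rank_below: float, my_pctr: float) -> float:
    return round(ad_rank_below / my_pctr + 0.01, 2)

# Same competitor Ad Rank (0.20), two different pCTRs for our ad:
print(actual_cpc(0.20, 0.10))  # 2.01: a high pCTR yields a lower CPC
print(actual_cpc(0.20, 0.05))  # 4.01: a low pCTR yields a higher CPC
```

Doubling pCTR halves the CPC needed to hold the same position, which is exactly the efficiency gain described above.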

Google Ranks Ads Based On CPM

You read that right, and I haven’t gone mad. Since we’re exploring the dynamics of ad auctions and how they influence costs, a helpful point for advertisers to understand is that Google’s ad auction is not a CPC auction but rather a cost-per-thousand-impressions (CPM) auction.

That it isn't a pure CPC auction should be obvious. After all, the pCTR is an equally important factor, and the ad with the highest MaxCPC doesn't automatically win.

Advertisers bid a maximum CPC (or set a tROAS or tCPA, which gets turned into a MaxCPC at the time of each auction), and when that is combined with pCTR, you get an estimated CPM (eCPM).

The ad with the highest eCPM wins the auction. Since the ad with the highest Ad Rank wins the auction, Ad Rank and eCPM are effectively interchangeable: eCPM is just Ad Rank scaled to 1,000 impressions.
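To see the CPM framing concretely, the same two hypothetical ads can be scored by eCPM; the ordering matches Ad Rank exactly:

```python
# eCPM view of the same illustrative auction:
# eCPM = MaxCPC x pCTR x 1000, i.e., expected revenue per 1,000 impressions.

def ecpm(max_cpc: float, pctr: float) -> float:
    return round(max_cpc * pctr * 1000, 2)

print(ecpm(10.0, 0.03))  # 300.0
print(ecpm(2.0, 0.10))   # 200.0: lower eCPM loses despite more clicks
```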

And by the way, any publisher can tell you that the best way to monetize a finite number of web visits is by maximizing the CPM, so it should make sense that Google wants to sell ads to the advertisers with the highest CPMs. I explain this in a video.

The Role Of pCTR In Ad Auctions

pCTR is a dynamic metric that influences ad placement and cost. It is calculated for each auction based on the specific context of the search query.

Advertisers with high pCTR benefit from lower CPCs and better ad positions, as the system rewards ads that are more relevant and provide a better user experience.

Understanding and optimizing relevance is crucial for advertisers. High-quality ads that resonate with users are more likely to achieve higher pCTR, reducing overall costs and improving campaign effectiveness.

This dynamic nature of pCTR ensures that advertisers continuously strive to improve ad quality, benefiting both users and advertisers.

Quality Score Is Not pCTR

Quality Score (QS) and predicted click-through rate (pCTR) are both critical components for advertisers, but they are not the same.

QS is a 1-10 integer representing the quality and relevance of an ad, taking into account factors such as ad relevance, landing page experience, and historical performance. It is a key performance indicator to help advertisers navigate their way to more relevant ads.

On the other hand, pCTR is a dynamic metric that estimates the likelihood of an ad being clicked for a specific search query.

It varies with each auction and reflects the ad’s expected performance in real time. While QS provides a broad assessment of ad quality, pCTR focuses specifically on predicting user engagement for individual auctions.

Now that I’ve covered the foundation of the ad auction, let’s explore the nuanced aspects that surfaced during the trial.

Thresholds And Reserve Prices

What Are Thresholds And Reserve Prices?

The ad auction is not as simple as ranking ads and then showing them from highest to lowest rank. There are thresholds that determine a number of things, including an ad’s eligibility for a more prominent location on the page and the reserve price for it to be shown at all.

These thresholds vary based on factors such as ad quality, position, user signals, and the specific topic of the search.

Google believes ads are information, too, and should help answer questions. So, there is a quality threshold an ad must meet before it can be shown above organic results.

This is why many searches have fewer than 4 ads above the search results. According to Google’s internal data, as of 2020, fewer than 2% of all searches on Google had 4 or more ads, regardless of position on the page.

How Thresholds And Reserve Prices Impact Costs

To explain this, we need to introduce the notion of an ad’s long-term value (LTV), a measure of the economic benefit of showing the ad minus the expected cost of showing it.

The economic benefit is the ad rank, or pCTR × Max CPC, i.e., how much Google predicts it will earn from showing the ad.

The cost of showing the ad is a prediction of the possibility that the ad will harm the user experience, causing users to start avoiding future ads or to develop ad blindness.

The predicted negative impact is the threshold, or reserve price, for an ad. Only if its economic benefit exceeds the expected cost can the ad be shown. So if LTV > 0, the ad may show.

This means that ads may need to pay more than $0.01 (or the lowest currency unit in other markets) in order to appear, and that raises prices.
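A minimal sketch of this gate, with made-up numbers (the function names and the expected-cost figure are illustrative, not Google's):

```python
# Hypothetical sketch of the LTV gate: the predicted user-experience harm
# acts as the reserve price the ad's economic benefit must exceed.

def ltv(max_cpc: float, pctr: float, expected_cost: float) -> float:
    benefit = max_cpc * pctr          # Ad Rank: predicted revenue
    return benefit - expected_cost    # show the ad only if this is > 0

def reserve_cpc(expected_cost: float, pctr: float) -> float:
    # Minimum CPC at which LTV turns positive for this ad.
    return expected_cost / pctr

# An ad with pCTR 4% facing an expected cost of 0.10 per impression:
print(ltv(3.0, 0.04, 0.10) > 0)           # True: eligible to show
print(round(reserve_cpc(0.10, 0.04), 2))  # 2.5: must pay at least this
```

Notice that the lower the pCTR, the higher the CPC needed to clear the same reserve.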

How Do Thresholds And Reserve Prices Benefit Advertisers?

If all second-price auction prices were determined solely by the next competitor, many advertisers would fall below the LTV > 0 threshold even though they have a MaxCPC that could get them above it.

Google honors the advertiser’s wish to show their ad by collecting the CPC necessary to offset the predicted negative value of showing the ad.

You can think of the threshold as a hidden participant in the ad auction whose ad is tied to the position of the threshold. Beating this threshold raises the effective CPC an advertiser pays, but it also enables the advertiser to get their ads to show in scenarios where they otherwise may not have shown while paying no more than their maximum bid.

For example, in a scenario where your ad is the sole eligible contender, you may be required to pay the reserve price, which is influenced by the thresholds.

In a scenario without strong competition, a very good ad with high quality and a high MaxCPC could find itself unable to meet the threshold. To ensure the advertiser gets what they want, Google bumps their effective CPC so that they meet the threshold and their ad can be shown (LTV > 0).

Out-Of-Order Ad Promotion

Now that we understand reserve prices and thresholds, let’s look at a particular example that involves the threshold for ads to be shown at the top of the page.

What Is Out-Of-Order Ad Promotion?

Out-of-order ad promotion is when an ad with a lower Ad Rank is allowed to be promoted above an ad with a higher Ad Rank.

Let’s dive into this.

The thresholds have a relevance component; for example, Google may say that an ad can only be promoted to the top of the page if it has at least a certain level of relevance (pCTR).

Because Ad Rank is made up of MaxCPC and pCTR, it is possible that a lower-ranked ad (Ad B) could have a better pCTR but be stuck at the bottom of the page behind a higher-ranked ad (Ad A) with a lower pCTR.

If the pCTR promotion threshold were 5% and Ad Rank were strictly honored, neither of these ads could appear at the top of the page, even though Ad B has high enough quality. It would be forced to stay behind Ad A in order to honor Ad Rank.

Ad    MaxCPC    pCTR    Ad Rank
A     $10       3%      30
B     $2        10%     20

In out-of-order promotion, ad B is allowed to jump over ad A.
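Using the two ads from the table and an assumed 5% top-of-page pCTR threshold, the promotion logic can be sketched as:

```python
# Sketch of out-of-order promotion for the two ads in the table above.
# The 5% threshold is an assumption for illustration.

TOP_OF_PAGE_PCTR = 0.05

ads = [                   # (name, MaxCPC, pCTR), sorted by Ad Rank
    ("A", 10.0, 0.03),    # rank 30, but below the relevance bar
    ("B", 2.0, 0.10),     # rank 20, clears the relevance bar
]

top = [ad for ad in ads if ad[2] >= TOP_OF_PAGE_PCTR]
bottom = [ad for ad in ads if ad[2] < TOP_OF_PAGE_PCTR]
order = [ad[0] for ad in top + bottom]
print(order)  # ['B', 'A']: B is promoted out of order above A
```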

How Out-Of-Order Ad Promotion Impacts Costs

When advertiser A’s ad quality doesn’t meet the promotion threshold but advertiser B’s does, rather than pushing both advertisers to the bottom of the page, advertiser B is allowed to be promoted out of order above advertiser A.

Now, advertiser B pays the CPC needed to beat the top-of-page threshold (reserve price), which is more than if they were left at the bottom of the page. It can also be more than if they only had to beat the Ad Rank of Ad A.

How Out-Of-Order Ad Promotion Benefits Advertisers

Out-of-order ad promotion, where ads are promoted based on factors beyond just the bid amount, benefits advertisers. This approach considers various thresholds, including ad relevance, ensuring that high-quality ads have a chance to appear in top positions even if their Ad Ranks are not the highest.

This can help smaller advertisers with highly relevant ads compete effectively against larger competitors with bigger budgets.

By promoting ads based on relevance and quality, advertisers are incentivized to create more engaging and useful ads, ultimately leading to better user experiences and higher conversion rates.

Randomized Generalized Second-Price (RGSP)

What Is RGSP?

In a traditional second-price auction, the highest bidder wins the ad spot at the price of the second-highest bid.

But remember that the second price depends on pCTR, a number predicted with machine learning. Predictions are not precise, and it can happen that multiple advertisers are competing very closely, and the only thing that sets them apart is an ML-generated pCTR.

To ensure that inaccurate predictions don’t become self-reinforcing truths, ads can be randomly re-ordered. This introduces chances for experimentation that the ML algorithm can use to evaluate its accuracy and improve future predictions.

RGSP also supports normalization, which is hard to do without data from varied ad orderings. You need to see the same ad’s performance when it wins and when it loses to identify how much of its performance is due to its inherent quality versus external factors like where it was shown.
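A toy sketch of the idea, with an assumed "closeness" threshold (Google's actual randomization mechanism is not public):

```python
import random

# Toy sketch of randomized reordering: when two adjacent ads' ranks are
# within 10% of each other, the ML-predicted pCTRs are too uncertain to
# trust the ordering, so flip a coin. Thresholds are assumptions.

def rgsp_order(ads, closeness=0.9, rng=random):
    ads = sorted(ads, key=lambda a: a[1], reverse=True)  # (name, ad_rank)
    for i in range(len(ads) - 1):
        hi, lo = ads[i][1], ads[i + 1][1]
        if lo / hi >= closeness and rng.random() < 0.5:
            ads[i], ads[i + 1] = ads[i + 1], ads[i]      # randomize near-tie
    return [a[0] for a in ads]

ads = [("A", 1.00), ("B", 0.95), ("C", 0.30)]
print(rgsp_order(ads))  # A and B may swap; C always stays last
```

The occasional swap gives the system observations of each ad in both positions, which is the data normalization needs.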

How RGSP Impacts Costs

RGSP introduces an element of unpredictability, which encourages advertisers to bid their true value rather than strategically underbidding.

When ads are re-ordered and don’t follow the pure ad ranking mechanism, CPCs will be different, and that can raise prices for some advertisers.

How RGSP Helps Advertisers

This mechanism helps prevent ads with high predicted relevance from consistently hogging top positions, promoting a diverse range of ads. By fostering a competitive environment, RGSP mechanisms encourage advertisers to focus on ad quality and relevance, which can lead to better performance and higher return on investment (ROI).

It prevents ads with incorrectly predicted high pCTRs from unfairly remaining in top positions and beating newer ads whose pCTRs are inaccurately low.

Normalization Techniques

What Are Normalization Techniques?

Google’s normalization techniques ensure that ad rankings reflect relevance rather than being influenced by external factors like ad format or position.

By incorporating metrics such as projected click-through rate (pCTR) and adjusting for factors like ad format, the system creates a level playing field for all advertisers.

Ad rank is partially based on pCTR. But we know that CTR depends on a lot more than just the text of the ad itself. For example, all else being equal, ads in higher positions will get a higher CTR than those in lower positions. Ads with more visible lines of ad text will get higher CTRs than those with fewer lines of text.

Project Momiji works to normalize pCTRs so that a more appealing ad format doesn’t unfairly penalize advertisers whose ads didn’t get the same visual treatment.

How Normalization Techniques Impact Costs

When pCTR is normalized for ad formats and page position, some advertisers with high pCTRs will see a downward adjustment. That is because the high pCTR was driven in part by the inherent benefit of a more appealing ad format or a higher page position.

Advertisers should compete on a level playing field, so when this normalization happens, some advertisers will pay more than if the normalization hadn’t happened.

For example, an ad shown in position 1 with a pCTR of 10% may only have had a pCTR of 8% if it had been shown in position 2. There’s an underlying ad relevance pCTR that can be estimated by removing all factors that boost the pCTR due to factors out of the advertiser’s control, like ad formats, position on the page, number of additional ads, etc.

Google can then price all ads based on their normalized pCTR. So, in our example, if the pCTR for the auction is 10% but would be only 8% once normalized for all factors, the advertiser’s effective CPC will be higher.
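A sketch of the pricing effect in this example, using a simplified second-price formula and an assumed competitor Ad Rank:

```python
# Pricing against the normalized pCTR (8%) rather than the raw 10% that
# was boosted by position. The competitor Ad Rank (0.16) is an assumed
# number, and the formula is a common simplification, not Google's exact one.

def second_price_cpc(ad_rank_below: float, pctr: float) -> float:
    return round(ad_rank_below / pctr + 0.01, 2)

ad_rank_below = 0.16
print(second_price_cpc(ad_rank_below, 0.10))  # 1.61 priced on the raw pCTR
print(second_price_cpc(ad_rank_below, 0.08))  # 2.01 priced on the normalized pCTR
```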

How Normalization Techniques Help Advertisers

Normalization techniques prevent unfair advantages stemming from superior positions or ad treatments, ensuring that ad pricing reflects true relevance. This approach benefits advertisers by promoting fair competition and encouraging investment in high-quality ads that align with user intent.

Focus Less On CPC

Understanding the intricacies of ad auction dynamics is crucial for advertisers seeking to optimize their campaigns and achieve better outcomes.

While higher CPCs might initially appear disadvantageous, they often result from mechanisms designed to promote ad quality, relevance, and a better user experience.

By focusing on metrics that truly matter, such as CPA, ROAS, and ROI, advertisers can better appreciate the benefits of these dynamics.

The components of the ad auction, from ad rank thresholds to out-of-order promotions and RGSP mechanisms, work together to create a competitive yet fair environment.

This encourages advertisers to continuously improve their ads, ultimately benefiting both their business and the users they aim to reach. By embracing these complexities and striving for high-quality, relevant ads, advertisers can navigate the ad auction landscape more effectively and achieve greater success in their digital marketing efforts.


Apple is promising personalized AI in a private cloud. Here’s how that will work.

At its Worldwide Developer Conference on Monday, Apple for the first time unveiled its vision for supercharging its product lineup with artificial intelligence. The key feature, which will run across virtually all of its product line, is Apple Intelligence, a suite of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure. It represents Apple’s largest leap forward in using our private data to help AI do tasks for us. To make the case it can do this without sacrificing privacy, the company says it has built a new way to handle sensitive data in the cloud.

Apple says its privacy-focused system will first attempt to fulfill AI tasks locally on the device itself. If any data is exchanged with cloud services, it will be encrypted and then deleted afterward. The company also says the process, which it calls Private Cloud Compute, will be subject to verification by independent security researchers. 

The pitch offers an implicit contrast with the likes of Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. Apple says any personal data passed on to the cloud will be used only for the AI task at hand and will not be retained or accessible to the company, even for debugging or quality control, after the model completes the request. 

Simply put, Apple is saying people can trust it to analyze incredibly sensitive data—photos, messages, and emails that contain intimate details of our lives—and deliver automated services based on what it finds there, without actually storing the data online or making any of it vulnerable. 

It showed a few examples of how this will work in upcoming versions of iOS. Instead of scrolling through your messages for that podcast your friend sent you, for example, you could simply ask Siri to find and play it for you. Craig Federighi, Apple’s senior vice president of software engineering, walked through another scenario: an email comes in pushing back a work meeting, but his daughter is appearing in a play that night. His phone can now find the PDF with information about the performance, predict the local traffic, and let him know if he’ll make it on time. These capabilities will extend beyond apps made by Apple, allowing developers to tap into Apple’s AI too. 

Because the company profits more from hardware and services than from ads, Apple has less incentive than some other companies to collect personal online data, allowing it to position the iPhone as the most private device. Even so, Apple has previously found itself in the crosshairs of privacy advocates. Security flaws led to leaks of explicit photos from iCloud in 2014. In 2019, contractors were found to be listening to intimate Siri recordings for quality control. Disputes about how Apple handles data requests from law enforcement are ongoing. 

The first line of defense against privacy breaches, according to Apple, is to avoid cloud computing for AI tasks whenever possible. “The cornerstone of the personal intelligence system is on-device processing,” Federighi says, meaning that many of the AI models will run on iPhones and Macs rather than in the cloud. “It’s aware of your personal data without collecting your personal data.”

That presents some technical obstacles. Two years into the AI boom, pinging models for even simple tasks still requires enormous amounts of computing power. Accomplishing that with the chips used in phones and laptops is difficult, which is why only the smallest of Google’s AI models can be run on the company’s phones, and everything else is done via the cloud. Apple says its ability to handle AI computations on-device is due to years of research into chip design, leading to the M1 chips it began rolling out in 2020.

Yet even Apple’s most advanced chips can’t handle the full spectrum of tasks the company promises to carry out with AI. If you ask Siri to do something complicated, it may need to pass that request, along with your data, to models that are available only on Apple’s servers. This step, security experts say, introduces a host of vulnerabilities that may expose your information to outside bad actors, or at least to Apple itself.

“I always warn people that as soon as your data goes off your device, it becomes much more vulnerable,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project and practitioner in residence at NYU Law School’s Information Law Institute. 

Apple claims to have mitigated this risk with its new Private Cloud Compute system. “For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud,” Apple security experts wrote in their announcement, stating that personal data “isn’t accessible to anyone other than the user—not even to Apple.” How does it work?

Historically, Apple has encouraged people to opt in to end-to-end encryption (the same type of technology used in messaging apps like Signal) to secure sensitive iCloud data. But that doesn’t work for AI. Unlike messaging apps, where a company like WhatsApp does not need to see the contents of your messages in order to deliver them to your friends, Apple’s AI models need unencrypted access to the underlying data to generate responses. This is where Apple’s privacy process kicks in. First, Apple says, data will be used only for the task at hand. Second, this process will be verified by independent researchers. 

Needless to say, the architecture of this system is complicated, but you can imagine it as an encryption protocol. If your phone determines it needs the help of a larger AI model, it will package a request containing the prompt it’s using and the specific model, and then put a lock on that request. Only the specific AI model to be used will have the proper key.

When asked by MIT Technology Review whether users will be notified when a certain request is sent to cloud-based AI models instead of being handled on-device, an Apple spokesperson said there will be transparency to users but that further details aren’t available.

Dawn Song, co-director of the UC Berkeley Center on Responsible Decentralized Intelligence and an expert in private computing, says Apple’s new developments are encouraging. “The list of goals that they announced is well thought out,” she says. “Of course there will be some challenges in meeting those goals.”

Cahn says that to judge from what Apple has disclosed so far, the system seems much more privacy-protective than other AI products out there today. That said, the common refrain in his space is “Trust but verify.” In other words, we won’t know how secure these systems keep our data until independent researchers can verify its claims, as Apple promises they will, and the company responds to their findings.

“Opening yourself up to independent review by researchers is a great step,” he says. “But that doesn’t determine how you’re going to respond when researchers tell you things you don’t want to hear.” Apple did not respond to questions from MIT Technology Review about how the company will evaluate feedback from researchers.

The privacy-AI bargain

Apple is not the only company betting that many of us will grant AI models mostly unfettered access to our private data if it means they could automate tedious tasks. OpenAI’s Sam Altman described his dream AI tool to MIT Technology Review as one “that knows absolutely everything about my whole life, every email, every conversation I’ve ever had.” At its own developer conference in May, Google announced Project Astra, an ambitious project to build a “universal AI agent that is helpful in everyday life.”

It’s a bargain that will force many of us to consider for the first time what role, if any, we want AI models to play in how we interact with our data and devices. When ChatGPT first came on the scene, that wasn’t a question we needed to ask. It was simply a text generator that could write us a birthday card or a poem, and the questions it raised—like where its training data came from or what biases it perpetuated—didn’t feel quite as personal. 

Now, less than two years later, Big Tech is making billion-dollar bets that we trust the safety of these systems enough to fork over our private information. It’s not yet clear if we know enough to make that call, or how able we are to opt out even if we’d like to. “I do worry that we’re going to see this AI arms race pushing ever more of our data into other people’s hands,” Cahn says.

Apple will soon release beta versions of its Apple Intelligence features, starting this fall with the iPhone 15 and the new macOS Sequoia, which can be run on Macs and iPads with M1 chips or newer. Says Apple CEO Tim Cook, “We think Apple intelligence is going to be indispensable.”

19 Free Web Design Tools, Spring 2024

Here is a list of new web tools and design elements from spring 2024. There are tools for AI images, audio and video editors, color palettes, multisite management, visual testing, website builders, free fonts, and more.

All of these tools are free or have free plans, though most also offer premium versions. The fonts are free for commercial projects. Before using a font, be sure to verify its terms.

Free Design Tools

StockCake offers a diverse collection of AI-generated photos, all available in the public domain for free. No account is needed to download.


GPT-4o is a new generative AI model from OpenAI that accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, similar to human response time in a conversation.

Free Faces is a curated collection of typefaces available online under various free licenses. The typefaces can be sorted by cursive, display, monospace, sans serif, serif, and slab.

Wondercraft is an AI editor for creating audio content in any language. Use the AI Script Assistant to generate copy. Select from a library of realistic voices, or clone your own. Mix in songs and sound effects, and finalize with your team. Free for up to four credits per month.


Chromafy lets you generate and apply color palettes to any website with just a few clicks. Choose from various color schemes, create custom palettes, and export and apply them to specific elements by adding class names or CSS variables.

Buglab is a no-code visual testing platform. It automates website testing by allowing users to simulate visitor actions with just a few clicks. Once a test is set, the system displays the results and highlights the differences. Buglab is free for up to 250 pages.

Deblank Colors is a free, AI-powered color palette generator. Generate palettes from text prompts, an image, or a color.


WP Foundry is a tool for managing multiple WordPress websites. Administer and back up plugins, themes, databases, and more. Free for a single site.

Dub is an open-source link management tool for marketing. In addition to link-shortening tools, Dub features advanced analytics, QR codes for your links, branded and personalized short links, and more. Free for three domains.

Font Interceptor downloads all fonts on a website. Just enter a URL and intercept the fonts — for free.


Fliki is an AI video generator. Start with text, choose and personalize AI voiceover, select the media or let AI generate appropriate images, and preview the output. Easily transform product listings into engaging videos. Free for five minutes per month.

Ginix is a tool to discover, analyze, and monitor online reviews. Ginix searches for new reviews, analyzes sentiments, and informs about any new feedback. Free monitoring and collection of reviews on 100-plus sites.

MarsCode is an AI-powered cloud-based development environment. To reduce costs, MarsCode IDE provides a series of capabilities covering development, debugging, storage, deployment, and schema generation.


Free Fonts

This Cafe is a rounded outline font with a drop shadow. Plump and friendly, it’s a display font that draws attention with a cheerful vintage flair.


Hollistticated is a thick script font with energy. Legible for commercial use, it also simulates street art.


Okesip is a friendly, handwritten font that’s thick enough for display use but simple enough to use across a website.


Fiesta is a bold display font with divided characters to deliver a distinctive and playful vibe.


Blue Bubbles is a handwritten display font with standard characters and medium thickness. It is suitable for display or descriptive text.


Dobedoo is a thick and rounded all-caps display font with a touch of warmth.


YouTube Rolls Out Thumbnail A/B Testing To All Channels

YouTube will launch a new “Thumbnail Test & Compare” feature for all channels over the next few weeks.

This tool allows you to upload and test up to three different thumbnails for each video to see which performs best.

How Thumbnail Testing Works

The ‘Thumbnail Test & Compare‘ feature lets you upload multiple thumbnail options when publishing a new YouTube video.

During the testing period, YouTube will randomly display the different thumbnails to some of the video’s viewers.

After collecting enough data, which takes around two weeks, YouTube analyzes which thumbnail generated the highest “watch time share” from viewers.

It will then designate one of three potential outcomes:

  • Winner: A clear winner outperforming the other options based on watch time. The winning thumbnail is automatically applied.
  • Preferred: One thumbnail likely performed better than others, but the results are less certain statistically.
  • None: No thumbnail emerged as a clear winner. The original uploaded thumbnail is kept.
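A hypothetical sketch of how such a verdict could be computed from watch-time shares; the margin values are assumptions, not YouTube's actual statistical test:

```python
# Hypothetical sketch of the three outcomes above: compare each thumbnail's
# watch-time share and require a lead margin before declaring a result.
# The margin values are illustrative assumptions.

def verdict(shares, win_margin=0.05, prefer_margin=0.01):
    best, runner_up = sorted(shares.values(), reverse=True)[:2]
    lead = best - runner_up
    if lead >= win_margin:
        return "winner"      # clear winner: applied automatically
    if lead >= prefer_margin:
        return "preferred"   # likely better, but statistically uncertain
    return "none"            # keep the original thumbnail

print(verdict({"t1": 0.40, "t2": 0.32, "t3": 0.28}))    # winner
print(verdict({"t1": 0.36, "t2": 0.33, "t3": 0.31}))    # preferred
print(verdict({"t1": 0.34, "t2": 0.335, "t3": 0.325}))  # none
```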

You can manually select your preferred video thumbnail even if it isn’t the winning option.

For a full demonstration, see the video below:

YouTube Thumbnail Best Practices

As part of the demonstration, YouTube outlined best practices for designing and testing thumbnails.

YouTube suggests creators start by testing thumbnails on a limited number of older videos to get initial guidance. Then, apply any learnings to testing thumbnails for more recent videos.

For thumbnail design itself, YouTube offers these tips:

Balance & Focal Point

“Ensure your images are balanced with a focal point to direct viewers’ attention towards.”

High Contrast

“Utilize high contrast allowing your subject to stand out against the background in both Light and Dark mode.”

Facial Expressions

“If there’s a face in your thumbnail, consider emotion. Be expressive and consider how you want viewers to feel when seeing your thumbnail.”

Concise Text

“With text, remember that fewer words can be impactful while too much text can be difficult to process while scrolling.”

Depth & Blank Space

“When it comes to depth of field keep your background in mind and play with blank space.”

Rollout To All Eligible Channels

All channels can access the ‘Thumbnail Test & Compare’ feature in the YouTube Studio desktop application. To do so, the “Advanced features” setting must be enabled.

YouTube is gradually rolling this out over the next few weeks to all channels that have opted in and meet those requirements.

The company says it will provide updates on expanding availability, such as potential mobile app support, in the future.

Optimizing For Watch Time

In an FAQ addressing common questions, YouTube explains that thumbnails are judged solely based on their ability to drive watch time, not other metrics like click-through rates.

YouTube states:

“We want to make sure that your thumbnail and content gets you the highest amount of viewer engagement, so we are optimizing for overall watch time share over other metrics.

We believe that this metric is the best way to guide your content strategy decisions & support your chances of success on the platform.”

Why SEJ Cares

The Thumbnail Test & Compare tool addresses a pain point by allowing true A/B testing. Previously, creators had to rely on best guesses or small-sample polls when selecting thumbnails for new videos.

By optimizing for watch time as the key success metric, YouTube is putting an emphasis on long-term viewer engagement over short-term clicks.

However, it’s understandable that some channels may also want data on how thumbnails impact initial impressions and click-through rates.

How This Can Help You

Smarter, higher-performing thumbnails could boost your content in YouTube’s recommendations and keep viewers watching more videos.

Video openers and thumbnails are the first make-or-break moments on YouTube, so having data-backed tools to perfect those first impressions could be a difference-maker.


Featured Image: Chayjitti Hongmanee/Shutterstock

Google’s Structured Data Update May Boost Merchant Sales via @sejournal, @martinibuster

Google updated its structured data guidelines to reflect support for a sitewide return policy within the Organization structured data. This eliminates the need to repeat return policy information in every product listing’s structured data and can result in more traffic and sales for online merchants.

Merchants are not required to change their current structured data; the old method remains supported. This update simply adds a more streamlined alternative that reduces the size of product structured data.

Improvement To Brand Knowledge Panel

Google’s change to the organization structured data will be reflected in the brand panel that Google shows when someone searches on a brand name. The updated brand panel will feature a new entry that reflects the company’s return policy.

Screenshot Of Brand Knowledge Panel Example

Benefits Of Organization-Level Return Policy

As part of this change, Google is adding search features in Knowledge Panels and Brand Panels that can show a merchant’s return policies. This means a merchant’s search listing will be eligible to show a return policy, which in turn can encourage a higher clickthrough rate from the search engine results pages (SERPs) and a higher conversion rate.

Research conducted by the International Council of Shopping Centers (ICSC) in 2024 shows that online shoppers are strongly influenced by a merchant’s returns policy.

They discovered:

“82% of respondents said that when shopping online, return policies influence whether they decide to purchase from a retailer.

… If retailers charged a fee to ship back purchases made online, nearly three-fourths (71%) of respondents said they’d likely stop shopping online from that company altogether, while 6 in 10 said they’d likely stop shopping online with retailers that shortened the free return window.”

Clearly, a return policy can be a way to generate more online sales, and Google’s new support for sitewide return policy structured data helps communicate that information to online shoppers directly from search.

Google’s announcement explained:

“A return policy is a major factor considered by shoppers when buying products online, and so last year we enabled the extraction of structured data return policies for individual products. Today we’re adding support for return policies at the organization level as well, which means you’ll be able to specify a general return policy for your business instead of having to define one for each individual product you sell.

Adding a return policy to your organization structured data is especially important if you don’t have a Merchant Center account and want the ability to provide a return policy for your business. Merchant Center already lets you provide a return policy for your business, so if you have a Merchant Center account we recommend defining your return policy there instead.

…If your site is an online or local business, we recommend using one of the OnlineStore, or LocalBusiness subtypes of Organization.

We hope this addition makes it easier for you to add return policies for your business, and enable them to be shown across Google shopping experiences.”

Google Updates Organization Structured Data Documentation

Google added a new section to their Organization structured data documentation to reflect support for this new way to show return policies in the search results.

The new documentation states:

“MerchantReturnPolicy
Use the following properties to describe general return policies for your entire Organization, if applicable to your business. If you have specific policies for individual products, use merchant listing markup instead.”
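Based on the updated documentation, a sitewide return policy could be expressed in the Organization-level JSON-LD roughly as follows. This is a sketch, not a copy of Google’s example: the store name, URL, and policy values are hypothetical placeholders, and the property names come from schema.org’s MerchantReturnPolicy type.

```json
{
  "@context": "https://schema.org",
  "@type": "OnlineStore",
  "name": "Example Store",
  "url": "https://www.example.com",
  "hasMerchantReturnPolicy": {
    "@type": "MerchantReturnPolicy",
    "applicableCountry": "US",
    "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
    "merchantReturnDays": 30,
    "returnMethod": "https://schema.org/ReturnByMail",
    "returnFees": "https://schema.org/FreeReturn"
  }
}
```

With a block like this on the site, the same policy applies across the catalog, so individual Product listings no longer need to carry their own return policy markup.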

Read Google’s announcement:

Adding markup support for organization-level return policies

Read the new MerchantReturnPolicy documentation on Google’s official Organization structured data page:

Organization structured data – MerchantReturnPolicy

6 Local SEO Full-Guides That Help You Rank For Your Business Type

The elusive five-star review used to be something you could only flaunt in a rotating reviews section on your website.

But today, Google has pulled these stars out of the shadows and features them front and center across branded SERPs and beyond.

Star ratings can help businesses earn trust from potential customers, improve local search rankings, and boost conversions.

This is your guide to how they work.

Stars And SERPs: What Is The Google Star Rating?

A Google star rating is a consumer-powered grading system that lets other consumers know how good a business is based on a score of one to five stars.

These star ratings can appear across maps and different Google search results properties like standard blue link search listings, ads, rich results like recipe cards, local pack results, third-party review sites, and on-app store results.

How Does The Google Star Rating Work?

When a person searches Google, they will see star ratings in the results. Google uses an algorithm and an average to determine how many stars are displayed on different review properties.

Google explains that the star score system operates based on an average of all review ratings for that business that have been published on Google.

It’s important to note that this average is not calculated in real-time and can take up to two weeks to update after a new review is created.

When users leave a review, they are asked to rate a business based on specific aspects of their customer experience, as well as the type of business being reviewed and the services they’ve included.

For example, plumbers may get “Install faucet” or “Repair toilet” as services to add, and Google also allows businesses to add custom services that aren’t listed.

When customers are prompted to give feedback, they can give positive or critical feedback, or they can choose not to rate a specific aspect, in which case that aspect is considered unavailable.

This combination of feedback is what Google uses to determine a business’s average score by “dividing the number of positive ratings by the total number of ratings (except the ones where the aspect was not rated).”
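That per-aspect calculation is straightforward to illustrate in code. The function name and numbers below are my own, invented for illustration; only the formula (positive ratings divided by total rated responses, excluding “not rated”) comes from Google’s description:

```python
def aspect_score(positive: int, critical: int) -> float:
    """Share of positive ratings among rated responses for one aspect.

    Responses where the aspect was not rated are excluded entirely,
    per Google's description of the calculation.
    """
    rated = positive + critical
    if rated == 0:
        return 0.0  # no one rated this aspect; it is considered unavailable
    return positive / rated

# e.g. 40 positive and 10 critical ratings (any "not rated" responses
# are simply left out of the denominator): 40 / 50 = 0.8
print(aspect_score(40, 10))  # 0.8
```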

Google star ratings do have some exceptions in how they function.

For example, the UK and EU have certain restrictions that don’t apply to other regions, following recent scrutiny by the EU Consumer Protection Cooperation and the UK Competition and Markets Authority about fake reviews being generated.

Additionally, the type of rating search property will determine the specifics of how it operates and how to gather and manage reviews there.

Keep reading to get an in-depth explanation of each type of Google star rating available on the search engine results pages (SERPs).

How To Get Google Star Ratings On Different Search Properties

As mentioned above, there are different types of Google star ratings available across search results, including the standard blue-link listings, ads, local pack results, rich snippets, third-party reviews, and app store results.

Here’s what the different types of star-rating results look like in Google and how they work on each listing type.

Standard “Blue Link” Listings And Google Stars

In 2021, Google started testing star ratings in organic search and has since kept this SERP feature intact.

Websites can stand out from their competitors by getting stars to show up around their organic search results listing pages.

Text result showing Google star ratings in the SERPs (Screenshot from SERPs, Google, February 2024)

How To Get Google Stars On Organic SERPs

If you want stars to show up on your organic search results, add schema markup to your website.

Learn how to do that in the video below:

As the video points out, you need actual reviews to get your structured data markup to show.

Then, you can work with your development team to input the code on your site that indicates your average rating, highest, lowest, and total rating count.

Structured markup example for Google star ratings and reviews (Screenshot of JSON-LD script on Google Developers, August 2021)
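As a sketch of what that markup can contain, the aggregate rating is typically nested inside the item being reviewed. The product name and numbers below are invented for illustration; the property names are schema.org’s AggregateRating vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Scented Candle",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.4,
    "ratingCount": 89,
    "bestRating": 5,
    "worstRating": 1
  }
}
```

The values here must reflect real, visible reviews on the page; markup describing ratings that users can’t see violates Google’s structured data policies.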

Once you add the markup to your site, there is no clear timeline for when stars will start appearing in the SERPs – that’s up to Google.

In fact, Google specifically mentions that reviews in properties like search can take longer to appear, and often, this delay is caused by business profiles being merged.

When you’re done, you can check your work with Google’s Rich Results Test (the successor to the Structured Data Testing Tool).

Adding schema is strongly encouraged. But even without it, if you own a retail store with ratings, Google may still show your star ratings in the search engine results.

They do this to ensure searchers are getting access to a variety of results. Google says:

“content on your website that’s been crawled and is related to retail may also be shown in product listings and annotations for free across Google.”

If you want star ratings to show up on Shopping Ads, you’ll have to pay for that.

Paid Ads And Google Stars

When Google Stars appear in paid search ads, they’re known as seller ratings, “an automated extension type that showcases advertisers with high ratings.”

These can appear in text ads, shopping ads, and free listings. Both the star rating and the total number of votes or reviews are displayed.

In addition to Google star ratings, shopping ads may include additional product information such as shipping details, color, material, and more, as shown below.

Google Shopping ads showing star ratings (Screenshot from SERPs ads, Google, February 2024)

Paid text ads were previously labeled as “ads” and recently have been upgraded to a “sponsored” label, as shown below.

Paid ad showing Google star ratings (Screenshot from SERPs ads, Google, February 2024)

How To Get Google Stars On Paid Ads

To participate in free listings, sellers have to do three things:

  • Follow all the required policies around personally identifiable information, spam, malware, legal requirements, return policies, and more.
  • Submit a feed through the Google Merchant Center or have structured data markup on their website (as described in the previous section).
  • Add their shipping settings.

Again, some ecommerce sellers who do not have schema markup may still have their content show up in the SERPs.

For text ads and shopping ads to show star ratings, sellers are typically required to have at least 100 reviews in the last 12 months.

Paid advertisers must also meet a minimum number of stars for seller ratings to appear on their text ads. This helps higher-quality advertisers stand out from the competition.

For example, text ads have to have a minimum rating of 3.5 for the Google star ratings to show.

Google treats reviews on a per-country basis, so the minimum review threshold of 100 also applies to one country at a time.

For star ratings to appear on a Canadian ecommerce company’s ads, for example, they would have to have obtained a minimum of 100 reviews from within Canada in the last year.

Google considers reviews from its own Google Customer Reviews and also from approved third-party partner review sites from its list of 29 supported review partners, which makes it easier for sellers to meet the minimum review threshold each year.

Google also requests:

  • The domain that has ratings must be the same as the one that’s visible in the ad.
  • Google or its partners must conduct a research evaluation of your site.
  • The reviews included must be about the product or service being sold.

Local Pack Results And Google Stars

Local businesses have a handful of options for their business to appear on Google via Places, local map results, and a Google Business Profile page – all of which can show star ratings.

Consumers even have the option to sort local pack results by their rating, as shown in the image example below.

Google star ratings on search results (Screenshot from SERPs local pack, Google, February 2024)

How To Get Google Stars On Local Search Results

To appear in local search results, a Google Business Profile is required.

Customers may leave reviews directly on local business properties without being asked, but Google also encourages business owners to solicit reviews from their customers and shares best practices, including:

  • Asking your customers to leave you a review and make it easy for them to do so by providing a link to your review pages.
  • Making review prompts desktop and mobile-friendly.
  • Replying to customer reviews (ensure you’re a verified provider on Google first).
  • Not offering incentives for reviews.

Customers can also leave star ratings on other local review sites, as Google can pull from both to display on local business search properties. It can take up to two weeks to get new local reviews to show in your overall score.

Once customers are actively leaving reviews, Google Business Profile owners have a number of options to help them manage these:

Options to manage reviews on Google Business Profile (Screenshot from Google Business Profile Help, Google, February 2024)

Rich Results, Like Recipes, And Google Stars

Everybody’s gotta eat, and we celebrate food in many ways — one of which is recipe blogs.

While restaurants rely more on local reviews, organic search results, and even paid ads, food bloggers seek to have their recipes rated.

Similar to other types of reviews, recipe cards in search results show the average review rating and the total number of reviews.

Recipe search results on desktop (Screenshot from search for [best vegan winter recipes], Google, February 2024)

The outcome has become a point of contention among the food blogging community, since only three recipes per search can be seen on Google desktop results (as shown in the image above), and four on a mobile browser.

These coveted spots will attract clicks, leaving anyone who hasn’t mastered online customer reviews in the dust. That means that the quality of the recipe isn’t necessarily driving these results.

Google gives users the option to click “Show more” to see two additional rows of results:

Expanded desktop recipe search results (Screenshot from SERPs, Google, February 2024)

Searchers can continue to click the “Show more” button to see additional recipe results.

Anyone using Google Home can search for a recipe and get results through their phone:

Google Assistant recipes (Screenshot from Elfsight, February 2024)

Similarly, recipe search results can be sent from the device to the Google Home assistant. Both methods will enable easy and interactive step-by-step recipe instructions using commands like “start recipe,” “next step,” or even “how much olive oil?”

How To Get Google Stars On Recipe Results

Similar to the steps to have stars appear on organic blue-link listings, food bloggers and recipe websites need to add schema to their websites in order for star ratings to show.

However, it’s not as straightforward as listing the average and the total number of ratings. Developers should follow Google’s instructions for recipe markup.

There is both required and recommended markup:

Required Markup For Recipes

  • Name of the recipe.
  • Image of the recipe in a BMP, GIF, JPEG, PNG, WebP, or SVG format.

Recommended Markup For Recipes

  • Aggregate rating.
  • Author.
  • Cook time.
  • Date published.
  • Description.
  • Keywords.
  • Nutrition information.
  • Prep time.
  • Recipe category by meal type, like “dinner.”
  • Region associated with the recipe.
  • Ingredients.
  • Instructions.
  • Yield or total serving.
  • Total time.
  • Video (and other related markup, if there is a video in the recipe).

To have recipes included in Google Assistant Guided Recipes, the following markup must be included:

  • recipeIngredient
  • recipeInstructions
  • To have the video property, add the contentUrl.

For example, here’s what the structured markup would look like for the recipeIngredient property:

Example of structured markup for recipe steps in Google Assistant (Screenshot from Google Developer, February 2024)
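As a rough illustration, a Recipe JSON-LD block covering the required name and image properties plus the Guided Recipes essentials (recipeIngredient, recipeInstructions, and a video with contentUrl) might look like the sketch below. The recipe, URLs, and dates are placeholders, not values from Google’s documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Tomato Soup",
  "image": "https://example.com/photos/tomato-soup.jpg",
  "recipeIngredient": [
    "2 tablespoons olive oil",
    "800g canned tomatoes"
  ],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Heat the olive oil in a large pot." },
    { "@type": "HowToStep", "text": "Add the tomatoes and simmer for 20 minutes." }
  ],
  "video": {
    "@type": "VideoObject",
    "name": "How to make tomato soup",
    "thumbnailUrl": "https://example.com/photos/soup-thumb.jpg",
    "contentUrl": "https://example.com/videos/tomato-soup.mp4",
    "uploadDate": "2024-01-15"
  }
}
```

Listing each ingredient and step as its own array entry is what lets Google Assistant walk users through the recipe with commands like “next step.”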

Third-Party Review Sites And Google Stars

Many software companies rely on third-party review sites to help inform their customers’ purchasing decisions.

Third-party review sites include any website a brand doesn’t own where a customer can submit a review, such as Yelp, G2, and many more.

Many of these sites, like Featured Customers shown below, can display star ratings within Google search results.

Example of star ratings showing in SERPs from third-party review sites (Screenshot from SERPs listing of a review site, Google, February 2024)

Rich snippets from third-party reviews, such as stars, summary info, or ratings, can also appear on a Google Business Profile or map view from approved sites.

For local businesses, Google star ratings appear in different locations than the third-party reviews on a desktop:

Third-party reviews and Google stars on desktop results (Screenshot from SERPs listing of a review site, Google, February 2024)

On mobile, ratings are displayed on a company’s Google Business Profile. Users need to click on Reviews or scroll down to see the third-party reviews:

Third-party reviews in local mobile results (Screenshot from SERPs listing of a review site, Google, February 2024)

On a map, the results from third parties may be more prominent, like the Tripadvisor review that shows up for a map search of The Hilton in Vancouver (although it does not display a star rating even though Tripadvisor does provide star ratings):

Third-party reviews in map results (Screenshot from SERPs listing of a review site, Google, February 2024)

How To Get Google Stars On Third-Party Review Sites

The best way to get a review on a third-party review site depends on which site is best for the brand or the business.

For example, if you have active customers on Yelp or Tripadvisor, you may choose to engage with customers there.

Third-party reviews in search results (Screenshot from SERPs listing of a review site, Google, February 2024)

Similarly, if a software review site like Trustpilot shows up for your branded search, you could do an email campaign with your customer list asking them to leave you a review there.

Here are a few of the third-party review websites that Google recognizes:

  • Trustpilot.
  • Reevoo.
  • Bizrate – through Shopzilla.

When it comes to third-party reviews, Google reminds businesses that there is no way to opt out of third-party reviews, and they need to take up any issues with third-party site owners.

App Store Results And Google Stars

When businesses have an application as their core product, they typically rely on App Store and Google Play Store downloads.

Right from the SERPs, searchers can see an app’s star ratings, as well as the total votes and other important information, like whether the app is free or not.

App store reviews in search results (Screenshot from SERP Play Store results, Google, February 2024)

How To Get Google Stars On App Store Results

Businesses can list their apps in Apple’s App Store or on the Google Play Store, prompt customers to leave reviews there, and also respond to them.

Does The Google Star Rating Influence SEO Rankings?

John Mueller confirmed that Google does not factor star ratings or customer reviews into web search rankings. However, Google is clear that star ratings influence local search results and rankings:

“Google review count and review score factor into local search ranking. More reviews and positive ratings can improve your business’ local ranking.”

Even though they are not a ranking factor for non-local organic search, star ratings can serve as an important conversion element, helping you display social proof, build credibility, and increase your click-through rate from search engines (which may indirectly impact your search rankings).

For local businesses, both Google stars and third-party ratings appear in desktop and mobile searches, as seen above.

These ratings not only help local businesses rank above their competitors for key phrases, but they also help convince more customers to click – the ultimate goal of any company’s search strategy.

How Do I Improve My Star Rating?

Businesses that want to improve their Google star rating should start by claiming their Google Business Profile and making sure all the information is complete and up to date.

If a company has already taken these steps and wants to offset a poor rating, it will need more positive reviews to raise the average.

Companies can get more Google reviews by making it easy for customers to leave one. The first step for a company is to get the link to leave a review inside their Google Business Profile:

“Ask customers for reviews” link (Screenshot from Wordstream, February 2024)

From there, companies can send this link out to customers directly (there are four options displayed right from the link as seen above), include it on social media, and even dedicate sections of their website to gathering more reviews and/or displaying reviews from other users.

It isn’t clear whether or not responding to reviews will help improve a local business’s ranking; however, it’s still a good idea for companies to respond to reviews on their Google Business Profile in order to improve their ratings overall.

That’s because responding to reviews can entice other customers to leave a review since they know they will get a response and because the owner is actually seeing the feedback.

For service businesses, Google provides the option for customers to rate aspects of the experience.

This is helpful since giving reviewers this option allows anyone who had a negative experience to rate just one aspect negatively rather than giving a one-star review overall.

Does Having A Star Rating On Google Matter? Yes! So Shoot For The Stars

Stars indicate quality to consumers, so they almost always improve click-through rates wherever they are present.

Consumers tend to trust and buy from brands with higher star ratings in local listings, paid ads, or even app downloads.

Many, many, many studies have demonstrated this phenomenon time and again. So, don’t hold back when it comes to reviews.

Do an audit of where your brand shows up in SERPs and get stars next to as many placements as possible.

The most important part of star ratings across Google, however, will always be the service and experiences companies provide that fuel good reviews from happy customers.

More resources:


Featured Image: BestForBest/Shutterstock
All screenshots taken by author

Google’s Gary Illyes: Lastmod Signal Is Binary via @sejournal, @MattGSouthern

In a recent LinkedIn discussion, Gary Illyes, Analyst at Google, revealed that the search engine takes a binary approach when assessing a website’s lastmod signal from sitemaps.

The revelation came as Illyes encouraged website owners to upgrade to WordPress 6.5, which now natively supports the lastmod element in sitemaps.

When Mark Williams-Cook asked if Google has a “reputation system” to gauge how much to trust a site’s reported lastmod dates, Illyes stated, “It’s binary: we either trust it or we don’t.”

No Shades Of Gray For Lastmod

The lastmod tag indicates the date of the most recent significant update to a webpage, helping search engines prioritize crawling and indexing.

Illyes’ response suggests Google doesn’t factor in a website’s history or gradually build trust in the lastmod values being reported.

Google either accepts the lastmod dates provided in a site’s sitemap as accurate, or it disregards them.

This binary approach reinforces the need to implement the lastmod tag correctly and only specify dates when making meaningful changes.
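For reference, lastmod is set per URL inside the XML sitemap. A minimal sketch, with a placeholder URL and date:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/post-title</loc>
    <!-- Only bump lastmod when the page meaningfully changes -->
    <lastmod>2024-04-10</lastmod>
  </url>
</urlset>
```

Stamping every URL with today’s date on each sitemap regeneration is exactly the pattern that can cost a site Google’s trust in the signal.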

Illyes commended the WordPress developer community for their work on version 6.5, which automatically populates the lastmod field without extra configuration.

Accurate Lastmod Essential For Crawl Prioritization

While convenient for WordPress users, the native lastmod support is only beneficial if Google trusts you’re using it correctly.

Inaccurate lastmod tags could lead to Google ignoring the signal when scheduling crawls.

With Illyes confirming Google’s stance, it shows there’s no room for error when using this tag.

Why SEJ Cares

Understanding how Google acts on lastmod can help ensure Google displays new publish dates in search results when you update your content.

It’s an all-or-nothing situation – if the dates are deemed untrustworthy, the signal could be disregarded sitewide.

With the information revealed by Illyes, you can ensure your implementation follows best practices to the letter.


Featured Image: Danishch/Shutterstock

Google Reminds Websites To Use Robots.txt To Block Action URLs via @sejournal, @MattGSouthern

In a LinkedIn post, Gary Illyes, an Analyst at Google, reiterated long-standing guidance for website owners: Use the robots.txt file to prevent web crawlers from accessing URLs that trigger actions like adding items to carts or wishlists.

Illyes highlighted the common complaint of unnecessary crawler traffic overloading servers, often stemming from search engine bots crawling URLs intended for user actions.

He wrote:

“Looking at what we’re crawling from the sites in the complaints, way too often it’s action URLs such as ‘add to cart’ and ‘add to wishlist.’ These are useless for crawlers, and you likely don’t want them crawled.”

To avoid this wasted server load, Illyes advised blocking access in the robots.txt file for URLs with parameters like “?add_to_cart” or “?add_to_wishlist.”

As an example, he suggests:

“If you have URLs like:
https://example.com/product/scented-candle-v1?add_to_cart
and
https://example.com/product/scented-candle-v1?add_to_wishlist

You should probably add a disallow rule for them in your robots.txt file.”
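Putting that advice into practice, the disallow rules might look like the sketch below. Google supports the * wildcard in robots.txt patterns, but the exact rules depend on your site’s URL structure, so treat these as a starting point:

```txt
User-agent: *
# Block crawlers from action URLs that only make sense for users
Disallow: /*?*add_to_cart
Disallow: /*?*add_to_wishlist
```

After deploying changes, it’s worth verifying the rules against real product URLs with a robots.txt testing tool before relying on them.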

While using the HTTP POST method can also prevent the crawling of such URLs, Illyes noted crawlers can still make POST requests, so robots.txt remains advisable.

Reinforcing Decades-Old Best Practices

Alan Perkins, who engaged in the thread, pointed out that this guidance echoes web standards introduced in the 1990s for the same reasons.

Quoting from a 1994 document titled “A Standard for Robot Exclusion”:

“In 1993 and 1994 there have been occasions where robots have visited WWW servers where they weren’t welcome for various reasons…robots traversed parts of WWW servers that weren’t suitable, e.g. very deep virtual trees, duplicated information, temporary information, or cgi-scripts with side-effects (such as voting).”

The robots.txt standard, proposing rules to restrict well-behaved crawler access, emerged as a “consensus” solution among web stakeholders back in 1994.

Obedience & Exceptions

Illyes affirmed that Google’s crawlers fully obey robots.txt rules, with rare exceptions thoroughly documented for scenarios involving “user-triggered or contractual fetches.”

This adherence to the robots.txt protocol has been a pillar of Google’s web crawling policies.

Why SEJ Cares

While the advice may seem rudimentary, the re-emergence of this decades-old best practice underscores its relevance.

By leveraging the robots.txt standard, sites can help tame overzealous crawlers from hogging bandwidth with unproductive requests.

How This Can Help You

Whether you run a small blog or a major e-commerce platform, following Google’s advice to leverage robots.txt for blocking crawler access to action URLs can help in several ways:

  • Reduced Server Load: You can reduce needless server requests and bandwidth usage by preventing crawlers from hitting URLs that invoke actions like adding items to carts or wishlists.
  • Improved Crawler Efficiency: Giving more explicit rules in your robots.txt file about which URLs crawlers should avoid can lead to more efficient crawling of the pages/content you want to be indexed and ranked.
  • Better User Experience: With server resources focused on actual user actions rather than wasted crawler hits, end-users will likely experience faster load times and smoother functionality.
  • Stay Aligned with Standards: Implementing the guidance puts your site in compliance with the widely adopted robots.txt protocol standards, which have been industry best practices for decades.

Revisiting robots.txt directives could be a simple but impactful step for websites looking to exert more control over crawler activity.

Illyes’ messaging indicates that the ancient robots.txt rules remain relevant in our modern web environment.


Featured Image: BestForBest/Shutterstock

How To Make Sure Google’s Automation Settings Aren’t Working Against You via @sejournal, @MenachemAni

Automation can be powerful, but it can cause performance and output issues without human supervision. Google Ads is no exception.

Despite the truly compelling advances Google has made in machine learning and automation over the past five years, it’s important not to lose sight of the reality behind that technology.

In this article, I’ll walk you through the importance of being the ultimate decision-maker for your account, as well as some of my must-track settings in Google Ads that help you exert more control over campaign performance.

Why Automation Requires Human Supervision

Google’s goal with its ad product is to get advertisers and brands to spend more. There’s nothing wrong with this – every business exists to generate profit and grow shareholder value.

But a side effect of this is that it’s not always evident how to exercise control over that automation in Google Ads.

Whether this is by design or oversight, it means that ad managers (especially new ones) will put too much trust and control in the hands of machines that don’t have context on the nuances and needs of individual businesses.

Google continues to make improvements to its automation, but at no point will it be able to make such a large-scale model work just right for every business.

This is where account managers come in.

As our jobs morph from optimizing campaigns to optimizing the machines that run campaigns, we find ourselves focusing on new ways of controlling performance – through data inputs, decision-making, and automation of our own.

9 Automation Settings In Google Ads That Deserve Your Attention

Whether you’re starting a new campaign or diagnosing an ongoing one, these nine settings are worth your attention and effort.

This list is not exhaustive; there are far more than nine things you can control.

These are the ones I focus on before others because they have a stronger-than-usual tendency to negatively impact the campaign or account.

1. Location Settings

With location targeting, the default setting is “All countries and territories.” Make sure to only target the specific locations that make sense for your business.

The default “All countries and territories” location setting (Screenshot from Google Ads, May 2024)

We typically recommend only targeting “Presence,” not “Presence or interest.”

If you only want to show your ads for home repairs in New York or London, selecting “Presence or interest” will show them to people who might have been searching for things to do before visiting those cities, but who live in Miami or Manchester.

Targeting settings: Screenshot from Google Ads, May 2024

This is especially vital for localized campaigns and slightly less critical (but still important) if you’re targeting an entire country such as the United States.

Advertisers who actively want to target people with interest in a location (like hotels or bed and breakfasts) should feel free to enable that setting.

2. Campaign Creation Settings

It’s important that you know what you want to do in Google Ads before you create a campaign because Google will nudge you toward its automation nearly every time.

For example, if you’re looking to launch a Shopping campaign, the system will push you to create a Performance Max campaign instead.

If you don’t know the difference, you could spend considerably more money on a lower-quality campaign.

Set campaign subtype: Screenshot from Google Ads, May 2024
Select standard Shopping campaign: Screenshot from Google Ads, May 2024

Ordinarily, this wouldn’t be a problem, but for advertisers who don’t know all the safeguards that need to be built to make Performance Max work, this could cost them far more than they’re willing to spend on ads – or can afford in the first place.

3. Display Network

If you’re running a Search campaign, Google will allow you to choose (to some degree) where your ads are shown.

The Search Network is enabled by default. This is the standard text-based ad you see in search results, but there are two settings you’ll likely want to keep an eye on.

Google Ads Search and Display Network: Screenshot from Google Ads, May 2024
  • Leaving the “Include Google search partners” setting selected will allow your ad to be shown on third-party partner websites with AdSense enabled, as well as YouTube and other Google properties outside of the search results. Unless you know how to monitor the quality of these placements, it’s best to opt out.
  • You also probably want to opt out of Display Network placements. These are the image ads that you see as you browse YouTube videos, third-party websites, and apps. Again, these can bring in a disproportionately high amount of low-quality traffic.

4. Auto-Apply Recommendations

Auto-apply recommendations (AAR) are a source of stress for many modern Google Ads marketers, largely because outsourced Google reps push for their enablement and go around account managers to make that happen.

As with everything else in Google Ads, there is balance beneath the surface.

Google Ads auto-applied recommendations: Screenshot from Google Ads, May 2024

Many of these are too general for my liking, while others actually make an impact. As a rule of thumb, I like to disable any AARs that can change any of the following:

  • Budget, i.e., how much the account is spending.
  • Bids, i.e., how much we’re willing to pay per click, conversion, or other action.
  • Targeting, i.e., deciding who sees the ads via keywords, audiences, etc.

In other words, most settings will be turned off. In some cases, we’ll leave the following ones enabled:

  • Upgrade to data-driven attribution.
  • Optimized ad rotation.
  • Remove non-serving keywords.
  • Add audience reporting.

5. Smart Bidding And Conversion Tracking

Smart Bidding can be a very powerful tool, but it relies on two things: historical data and conversion tracking.

Newer accounts tend to be short on both, but conversion tracking is something you can control from day one.

We still regularly see and hear about accounts that are months or even years old with non-existent or poor conversion tracking setups. This is a direct result of subpar account management.

One common culprit is setting conversion actions that have no identifiable business value, such as page views.

This is usually done to make a questionable PPC professional look good by inflating conversion count and value – but it costs a great deal of money with little return. It also has the added effect of making your historical conversion data almost worthless.

Genuine errors can also cause problems. We had a client who made a change to their website, and Google started double-counting ecommerce transactions.

Smart Bidding incorrectly thought the campaign was performing twice as well as it actually was, which led the system to spend more of the budget. If we hadn’t caught it quickly, it would have wasted a lot of money.

The inverse is true as well. If you aren’t tracking all conversions, the system will think it’s not hitting targets and spend less, causing you to lose out on potential revenue.
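To illustrate the double-counting problem, here is a minimal sketch (with hypothetical data, not the client’s actual setup) of how duplicate transaction IDs inflate the revenue a bidding system sees, and how deduplicating by transaction ID restores the real number:

```python
def dedupe_conversions(conversions):
    """Keep only the first conversion recorded for each transaction ID."""
    seen = set()
    unique = []
    for conv in conversions:
        if conv["transaction_id"] not in seen:
            seen.add(conv["transaction_id"])
            unique.append(conv)
    return unique

# Hypothetical tracking data: a site change fires the purchase event twice.
events = [
    {"transaction_id": "T1001", "value": 50.0},
    {"transaction_id": "T1001", "value": 50.0},  # duplicate of the same sale
    {"transaction_id": "T1002", "value": 30.0},
]

raw_revenue = sum(e["value"] for e in events)                        # what the bidding system sees
real_revenue = sum(e["value"] for e in dedupe_conversions(events))   # actual sales
```

In this toy case the duplicate makes reported revenue 130.0 instead of the actual 80.0, which is exactly the kind of signal that leads Smart Bidding to overspend.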

I will always push for conversion tracking to be part of any Google Ads account.

But if you don’t have it set up for whatever reason, avoid automated and Smart Bidding strategies like Maximize Conversions and Maximize Conversion Value. Stick to manual bidding instead.

6. Gemini AI And Auto-Fill Ad Text

I’ve been using Google’s generative AI model Gemini (formerly Bard) on and off for a few months. This is a new technology with challenges of its own, so questionable output is still common.

Because it relies on your website as the primary data input, you really have to have your messaging there dialed in to get any meaningful copy out of Gemini.

But even with a great website, you’ll still get subpar copy options.

Gemini is good at taking input and giving you a variety of ideas, which you can then fine-tune with feedback and additional prompting.

So, while you can’t use its output as is, Gemini is a solid ideation tool – especially for small business owners with limited resources.

7. Keyword Match Types

Keywords are (for now) the heart and soul of Google Ads, but their match types determine how they will behave when it comes time to show your ads.

In my experience, phrase and exact match are your best options when using manual bidding.

You can still use broad match, but limit it to a handful of highly relevant terms, because broad match keywords operate with far more freedom when interpreting intent.

When you pair that with Smart Bidding, it allows Google to determine which people searching for what you have are closest to converting.

But when adding new keywords to Search, the system will always add them in as broad match unless otherwise specified. Be careful with this, and remember to format your keywords appropriately.

Similarly, adding negative keywords to a campaign will default to negative exact match.

We much prefer to pare the keyword down to its root theme and block it using phrase match negatives for both Search and Shopping.
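To make the formatting concrete, here is how the three match types are conventionally written in Google Ads bulk sheets and Google Ads Editor (the `format_keyword` helper is mine, for illustration only):

```python
def format_keyword(keyword: str, match_type: str) -> str:
    """Format a keyword using Google Ads text conventions for match types."""
    if match_type == "broad":
        return keyword            # broad match: bare text
    if match_type == "phrase":
        return f'"{keyword}"'     # phrase match: double quotes
    if match_type == "exact":
        return f"[{keyword}]"     # exact match: square brackets
    raise ValueError(f"unknown match type: {match_type}")

print(format_keyword("home repairs new york", "broad"))   # home repairs new york
print(format_keyword("home repairs new york", "phrase"))  # "home repairs new york"
print(format_keyword("home repairs new york", "exact"))   # [home repairs new york]
```

The same quote and bracket conventions apply when you format negative keywords, which is why it pays to double-check pasted lists before saving.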

8. New Customer Acquisition

With the New Customer Acquisition setting in Search and Performance Max campaigns, you have the option to tell Google to only go after people who haven’t done business with you in the past.

It’s as straightforward as uploading a customer list and keeping it fresh, but the targeting is not perfect. Reasons include expired tracking cookies, delays in updating customer lists, and the general errors that come with any automation.

But even though the system will still go after some existing customers, it should still largely work the way it’s meant to.

Remember that turning this option on will count the increased value of a new customer as revenue. This affects reporting and Smart Bidding’s choices, so it’s generally a good idea to build a separate campaign specifically for New Customer Acquisition.

9. Retargeting & Enhanced Audiences

If you create a dedicated retargeting campaign that only targets people who have visited your website, Google will automatically enable “Enhanced Audiences.”

This allows the system to go beyond your defined audience, in a manner similar to how audience signals work in Performance Max.

Don’t Blame Google If You Don’t Want To Adapt

Nothing in Google Ads is all good or all bad – a philosophy that I extend to all advertising and business in general.

These machine-driven automation capabilities can deliver great results when used appropriately, but they have their drawbacks. Something that is built for all advertisers can never work the way you want it to out of the box.

The truth is that we’re all fending for ourselves. Anyone managing Google Ads campaigns is answerable to their accounts, clients, brands, businesses, and bosses.

It’s a tough journey, but one that we’re on together, like it or not. If you aren’t ready to put in the effort to adapt, you can’t blame anyone but yourself down the line – not even Google.

Featured Image: BestForBest/Shutterstock

Something Weird Is Going On In Google’s SERPs via @sejournal, @martinibuster

People are always complaining that there’s something wrong with Google’s search results, but what’s going on with results for queries containing the acronym “SEO” is in a class by itself and has to be seen to be believed.

Anomalies In Search Results

An anomaly is something that deviates from the norm or from what’s expected. A lot of the time, when there’s something wrong with the search engine results pages (SERPs), the anomaly is explainable. For example, queries that combine a geographical element with a relatively long-tail phrase tend to generate weird results. Another driver of strange search results is when there simply isn’t enough data about a specific combination of words, which sometimes leads to offensive results.

What’s happening with a particular group of keyword phrases related to the word “SEO” is not any of those kinds of anomalies. It’s a true anomaly.

Here are the keywords that Google is (arguably) getting wrong:

  • SEO program
  • What is an SEO program?
  • SEO New York (City)
  • SEO NYC
  • SEO Conference
  • SEO Events
  • SEO Education
  • SEO Awards
  • SEO-USA.Org

The site that’s ranking for all those SEO search queries (and probably more) is SEO-USA.org. The acronym SEO in that website stands for Sponsors for Educational Opportunity. It’s not a spam site; it’s a legitimate non-profit website that’s been around since 1963. The purpose of the non-profit is to provide mentorship to underserved young people to help them get into colleges and universities. That program evolved into SEO Scholars, an eight-year academic program that supports talented young people through high school and college.

“SEO Scholars creates a more equitable society by closing the academic opportunity gap for motivated young people, setting the standard for academics, mentorship, community, peer-to-peer support, and a powerful, lifelong network.”

SEO-USA.org Is Not Relevant For SEO

The acronym SEO is strongly associated with online marketing. A search for “SEO” in Google spawns suggestions that are all relevant to SEO in the search marketing sense.

Google Trends shows that the phrases “SEO Scholars” and “SEO Scholars application” are not widely searched in the United States; most of the searches occur in New York. Yet SEO-USA.org is top-ranked for the group of keywords listed above in areas outside of New York.

Screenshot Of SERPs For Keyword Phrase “SEO Awards”

It’s kind of obvious that SEO-USA.org is not relevant for the most commonly understood meaning for the acronym SEO.

Could Backlinks Be The Reason?

It’s possible that SEO-USA.org is ranking for all of those phrases because of backlinks. A search for the domain name restricted to .edu sites shows almost seventy .edu websites that link to the SEO-USA.org domain.

This is the advanced search that shows scores of .edu sites that link or mention SEO-USA.org:

"seo-usa.org" site:.edu

Screenshot Of Site:.EDU Search

There is also a large number of high-quality sites with .org domains that link to SEO-USA.org, which is observable using the following advanced search:

"seo-usa.org" site:.org -site:seo-usa.org
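The two searches above follow the same pattern, which can be sketched as a small helper (the function name is mine, for illustration only):

```python
def backlink_query(target_domain: str, tld: str, exclude_self: bool = False) -> str:
    """Build a Google advanced search that finds pages on a given TLD
    linking to (or mentioning) the target domain."""
    query = f'"{target_domain}" site:.{tld}'
    if exclude_self:
        # Drop the target's own pages from the results.
        query += f" -site:{target_domain}"
    return query

print(backlink_query("seo-usa.org", "edu"))
# "seo-usa.org" site:.edu
print(backlink_query("seo-usa.org", "org", exclude_self=True))
# "seo-usa.org" site:.org -site:seo-usa.org
```

Note that this surfaces mentions as well as links, so the counts are an upper bound on actual linking pages.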

On the surface it looks clear that backlinks are the reason why SEO-USA.org ranks for irrelevant keywords.

But of course, the most obvious answer isn’t always the right answer. There’s more to the picture.

Why Links Probably Don’t Explain The Rankings

If links were the reason for SEO-USA.org’s rankings then it would follow that virtually every keyword phrase related to SEO would be littered with .edu and .org websites but that’s not the case.

I’ve been doing SEO for about 25 years, and I remember the days when sites with the maximum level of PageRank could rank for virtually anything. Back then, .edu links were regarded as powerful because SEOs were able to rank quite well with them.

Google’s algorithms improved, and the effect of .edu links started to wane because the context of a link began counting for more. The words in the title element and in the text surrounding a link started to influence how much the link counted. I know this from experience, too.

Another important change in Google’s link ranking algorithms was to dampen the effect of link quantity. It used to be that an avalanche of links was enough to help a site outrank more authoritative sites. I know this from experience as well.

But the effect of a huge number of links also changed in many ways: hundreds of links from one domain stopped counting as hundreds of links and began counting as just one. The position of a link within a page also came to matter more. Many changes whittled down the power of links so that fewer and fewer links counted for the wrong reasons.

I’m skeptical that links are the reason why SEO-USA.org ranks.

What’s The Answer?

For some reason, a relevance factor is not kicking in, which allows the (arguably) irrelevant SEO-USA.org site to rank for keywords it probably shouldn’t rank for.

I think that’s a clue: the site is slipping through because something that would ordinarily keep it out is missing.

It may very well be that there’s a factor related to trustworthiness that is allowing that site to slip through. That’s just speculation. Do you have any ideas?

Featured Image by Shutterstock/SS 360