Google Business Profile Update Targets Delivery Of Age-Restricted Products via @sejournal, @MattGSouthern

Google has updated its Business Profile rules for service-area businesses that sell age-restricted products.

Now, businesses selling alcohol, cannabis, weapons, and similar items must have a physical storefront to maintain their Google Business Profile.

We were alerted to this update by Stefan Somborac on X.

Changes To Service-Area Business Guidelines

Google has updated its guidelines to prevent businesses that sell age-restricted products from operating only as service-area businesses.

The updated guidelines state:

“Businesses associated with products or services that require the customer to be a certain minimum age, like alcohol, cannabis, or weapons, aren’t permitted as service-area businesses without a storefront.”

This is a notable change in how Google handles business listings for delivery and mobile services.

The policy outlines two main types of businesses:

  1. Service-area businesses: These companies deliver to customers but do not have a physical business location.
  2. Hybrid businesses: These operations have a physical location and offer delivery or mobile services.

Service Area Limitations

Google maintains its existing restrictions on service areas, including:

  • A maximum of 20 service areas per business
  • Service boundaries limited to approximately 2 hours of driving time from the business base
  • Service areas must be defined by city, postal code, or specific geographic region rather than radius

Impact On Businesses

This update affects certain types of businesses:

  • Mobile alcohol delivery services
  • Cannabis delivery services
  • Weapons dealers without a physical store
  • Vendors of age-restricted products that only deliver

The new rules require these businesses to have a physical storefront to keep their Google Business Profiles.

This change aims to ensure proper age checks and compliance with sensitive product and service regulations.

What This Means

The policy update addresses concerns about selling age-restricted products through delivery-only businesses.

This change mainly impacts new delivery services for cannabis and alcohol, which have grown in some cities.


Featured Image: Alexandre.ROSA/Shutterstock

Charts: U.S. Retail Ecommerce Sales Q3 2024

Recent data from the U.S. Department of Commerce reveals that ecommerce continues to outpace traditional retail growth. In the third quarter of 2024, total domestic retail sales reached $1.85 trillion, a modest 1.3% increase from Q2. Online shopping showed stronger momentum, with ecommerce sales climbing to $300.1 billion, a more robust growth rate of 2.6% over the prior quarter.

According to the DoC, ecommerce sales are for “goods and services where the buyer places an order (or the price and terms of the sale are negotiated) over an Internet, mobile device, extranet, electronic data interchange network, electronic mail, or other comparable online system. Payment may or may not be made online.”

Ecommerce accounted for 16.2% of total U.S. retail sales in Q3 2024, up slightly from 16.0% in the prior quarter.

The DoC reports U.S. ecommerce retail sales in Q3 2024 grew by 7.4% compared to Q3 2023, while total quarterly retail sales experienced a 2.1% annual rise over the same period last year.

Google Search Sees UK Decline, Users Express Low Trust In AI via @sejournal, @MattGSouthern

Google Search’s market share in the UK weakened this year, with user reach dropping to 83% from 86%, according to Ofcom’s Online Nation report.

This decline comes as concerns grow over AI-powered search results, with only 18% of users confident in their accuracy.

The Rise & Fall of AI Search Adoption

Microsoft’s Bing gained popularity after adding ChatGPT in February 2023, peaking at 46% reach in the UK in April.

By May 2024, it settled at 39%, still above pre-AI levels.

This suggests the initial excitement about AI search tools is fading, and users are now more cautious with AI-generated results.

Trust Gap Emerges

Despite the widespread adoption of AI search features, trust remains a concern:

  • Only 18% of UK users believe AI search results are reliable
  • Younger users (ages 16-24) show marginally higher trust at 21%
  • A third of users are neutral about the accuracy of AI-generated results
  • Men show higher confidence in AI search results than women

Demographics & Device Usage

The report reveals variations in search behavior across age groups and devices:

  • Google maintains 83% reach across smartphones, tablets, and computers
  • 49% of users visit Google daily
  • Bing sees 39% reach, primarily driven by desktop users
  • Alternative search engines like DuckDuckGo show modest growth (3% reach)
  • Bing shows stronger performance among older users (43% of 65+ vs. 36% of 25-34-year-olds)
  • Mobile search dominates, with Google capturing 84% of mobile searches
  • Desktop usage remains stronger for traditional search engines like Bing and Yahoo
  • 69% of UK online adults visit at least one search engine daily.

What This Means

As we approach 2025, search is changing with AI integration, but user trust remains essential.

Key points for search marketers and content creators include:

  • Many users still prefer traditional search methods despite the rise of AI.
  • Trust issues create both challenges and opportunities for content improvement.
  • Acceptance and use of AI in search varies across age groups.
  • A successful strategy blends AI tools with established methods.

View AI search as an added layer rather than a replacement for current practices.

Focus on quality content and reliable information, optimizing for AI wisely where it adds value.

Methodology

The Online Nation 2024 report combines two main data sources:

Online Experiences Tracker:

  • 7,280 UK internet users aged 13-84
  • Fielded May-June 2024 via YouGov panel
  • Standard demographic weighting applied

Ipsos iris Panel Data:

  • Passive tracking of 10,700 UK adults
  • Monitors actual device usage across mobile, tablet, and desktop
  • Continuous measurement through May 2024
  • Covers in-home and out-of-home usage

Worth noting: Some year-over-year comparisons, particularly around time spent metrics, may be affected by methodology updates. Apple News tracking began in October 2023, which impacts certain platform comparisons.

The data focuses on UK users, so global markets may show different patterns. All population estimates have standard margins of error.


Featured Image: William Barton/Shutterstock

YouTube Previews AI Tools To Overcome Creative Blocks via @sejournal, @MattGSouthern

YouTube is enhancing its Inspiration Tab, a tool for creators to understand their audience and improve content.

In a video demonstration, the company previewed new AI features that will launch in the coming months.

Initially a research tool, the Inspiration Tab now helps creators identify audience interests and content gaps.

The new AI features are designed to boost creativity and streamline content creation.

Personalized Ideas and Audience Insights

You’ll find five tailored ideas for your channel in the updated Inspiration Tab.

Screenshot from: YouTube.com/CreatorInsider, Nov 2024.

Each idea includes a thumbnail, title, summary, and audience interest insights, helping you see how well it fits your audience.

You can also input any topic as a text prompt, and the AI will generate ideas based on your request.

Screenshot from: YouTube.com/CreatorInsider, Nov 2024.

In the Idea Playground, you can personalize your idea by exploring different angles.

Choose from suggested angles or enter your own prompt.

The Playground also offers undo and redo options, so you don’t lose your work.

Screenshot from: YouTube.com/CreatorInsider, Nov 2024.

You can access outlines and thumbnails in the Playground. The AI will suggest ways to adjust your talking points. You can modify the entire outline or focus on specific sections.

Similar options are available for titles and thumbnails. You can download images for use as backgrounds or modify them to visualize before uploading.

Screenshot from: YouTube.com/CreatorInsider, Nov 2024.

Availability

The Inspiration Tab is the updated version of the Trends Tab, which was previously the Research Tab. It will be a central hub where you can use AI to brainstorm ideas, outlines, titles, thumbnails, and concepts.

YouTube plans to roll out these features over the next few months. Note that these features are not widely available yet, as YouTube is previewing them to gather creator feedback.

See the full demo below:


Featured Image: Geobor/Shutterstock

WordPress Anti-Spam Plugin Vulnerability Hits 200k+ Sites via @sejournal, @martinibuster

A flaw in a WordPress anti-spam plugin with over 200,000 installations allows rogue plugins to be installed on affected websites. Security researchers rated the vulnerability 9.8 out of 10, reflecting its high severity.

Screenshot Of CleanTalk Vulnerability Severity Rating

CleanTalk Anti-Spam WordPress Plugin Vulnerability

A highly rated anti-spam firewall with over 200,000 installations was found to have an authentication bypass vulnerability that enables attackers to gain full access to websites without providing a username or password. The flaw lets attackers upload and install any plugin, including malware, granting them full control of the site.

The flaw in the Spam protection, Anti-Spam, FireWall by CleanTalk plugin was pinpointed by security researchers at Wordfence as being caused by reverse DNS spoofing. DNS is the system that maps domain names to IP addresses; reverse DNS maps an IP address back to a domain name. In reverse DNS spoofing, an attacker manipulates that lookup so a request appears to come from a different IP address or domain name. In this case, attackers can trick the Anti-Spam plugin into believing a malicious request originates from the website itself, and because the plugin doesn't verify that claim, they gain unauthorized access.
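To make the mechanism concrete, here is a minimal Python sketch of why a bare reverse-DNS check can be spoofed and how a forward-confirmed check hardens it. This is an illustration only, not CleanTalk's actual code, and the function names are made up:

import socket

# Hedged illustration (not the plugin's real logic): comparing a naive
# reverse-DNS check with forward-confirmed reverse DNS (FCrDNS).

def naive_is_self_request(remote_ip: str, site_host: str) -> bool:
    # Naive check: trust the PTR record alone. An attacker who controls the
    # PTR record for their own IP can make it resolve to site_host.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
    except (socket.herror, socket.gaierror):
        return False
    return hostname == site_host

def fcrdns_is_self_request(remote_ip: str, site_host: str) -> bool:
    # Hardened check: resolve the PTR hostname forward again and require
    # that it maps back to the original IP before trusting it.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
        _, _, forward_ips = socket.gethostbyname_ex(hostname)
    except (socket.herror, socket.gaierror):
        return False
    return hostname == site_host and remote_ip in forward_ips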

This vulnerability is categorized as: Missing Authorization. The Common Weakness Enumeration (CWE) website defines that as:

“The product does not perform an authorization check when an actor attempts to access a resource or perform an action.”

Wordfence explains it like this:

“The Spam protection, Anti-Spam, FireWall by CleanTalk plugin for WordPress is vulnerable to unauthorized Arbitrary Plugin Installation due to an authorization bypass via reverse DNS spoofing on the checkWithoutToken function in all versions up to, and including, 6.43.2. This makes it possible for unauthenticated attackers to install and activate arbitrary plugins which can be leveraged to achieve remote code execution if another vulnerable plugin is installed and activated.”

Recommendation

Wordfence recommends that users of the affected plugin update to version 6.44 or higher.

Read the Wordfence advisory:

Spam protection, Anti-Spam, FireWall by CleanTalk <= 6.43.2 – Authorization Bypass via Reverse DNS Spoofing to Unauthenticated Arbitrary Plugin Installation

Featured Image by Shutterstock/SimpleB

Improve local SEO with Google My Business/Business Profile

Every business owner with a website is looking for ways to get noticed in the search results. There are loads of tactics to rank well as a local business, but unfortunately, there is no silver bullet. As with most things SEO, this is a combined effort. One of the ways to get noticed is by offering customers up-to-date information like opening hours and contact information. You can do this using Google Business Profile, formerly Google My Business. But what is it exactly, and why is it so important for local SEO?


What is Google My Business?

Before we start, it’s good to know that Google recently renamed Google My Business to Google Business Profile. But even with its new name, Google Business Profile is still your one-stop shop for managing your business information and how it is presented in the search results. It is an essential tool to adjust how your business is shown in Google Maps, the Knowledge Graph, and organic search results. According to the most recent edition of Whitespark’s Local Search Ranking Factors Survey, Google Business Profile remains the biggest driver of local SEO success, with reviews coming in second place.

You can manage your business listing by adding NAP details, opening hours, photos, etcetera. There are even unique options to show multiple ways to shop (delivery or curbside pickup). In addition, you get many other options, like managing the reviews your customers leave behind. As you might already know, reviews should be a critical part of your local SEO efforts.

Want to learn more about local SEO and how to get higher rankings in the local search results? Our Local SEO training helps you improve your online presence and attract more online and offline visitors.

How does it work?

Getting started with Google Business Profile is easy; you make an account and claim your business. After filling in your details, you need to verify your business. You can choose between receiving an automated call or text with a verification code, or receiving a real-life postcard from Google at your specified address with the verification code. Some businesses may need to verify with a video. You may have to verify with more than one method. So, why does Google want you to verify your account? They want to ensure that you’re telling the truth and that you’re the business owner listed at the address. Google has a special page that helps you check your verification status.

Once verified, you can fill in all the necessary details and check how your listing is doing. You can get regular insights to see how many impressions, clicks, and subscribers your listing got over time. It’s a great way of getting a feel for how Google and customers perceive your business.

Keep in mind that Google My Business (or Google Business Profile) is not the catch-all tool for your local SEO. It has to work in tandem with your other off-page and on-page SEO efforts. You probably won’t climb the charts if your profile is inaccurate, but you won’t reach the top without a well-optimized site and localized content. These things go hand in hand.

Ranking factors

Google Business Profile uses many factors to determine rankings for businesses. We’ll highlight the three most important ones:

  • Relevance
  • Proximity
  • Prominence

Relevance

Relevance determines how well your business fits the search intent of the customer. Is your focus identical to what the customer needs, or are you a bit opaque about what your company does? Vagueness doesn’t rank or sell. Be as clear as you can be, and keep your focus. That way, Google knows what people can expect from you and will be more inclined to show your business in the search results.

Tip: When setting up your profile, Google offers you common services that you can add to your category of business. It’s smart to include these, because they will help your business get found. We’ll talk about these in more detail later on.

Proximity

Proximity is a well-known factor for ranking local businesses. And quite logical when you think about it. You can’t rank in a local search for dentist in Philadelphia when you have located your company in Manhattan. However, the exact way Google determines which businesses to show in a local search is unknown, and it can be pretty hard to rank in a given area. Other factors play a significant role as well. It helps to say you are located in a particular area and show it by having local-oriented content around your business on your site. Google uses what’s known about the searcher’s location to present the most relevant local businesses.

Prominence

Prominence is all about the activity around your listing; this could be the number of reviews, events, local content, et cetera. It also helps to have loads of quality links to your site. It is somewhat hard to determine what prominence means exactly, but one thing is for sure: no one likes inactive profiles. You have to keep it updated with new photos and manage your reviews. As said before, this goes hand in hand with your site, so make sure both listings align and publish content focusing on your local area(s).

It is vital that you keep your business listing accurate and up to date

Optimize your Google Business Profile listing

Any business with a bona fide brick-and-mortar location or an online outlet is eligible for a Google Business listing. But what if you operate your business out of your house and don’t want people to know your address?

If you don’t operate a walk-up brick-and-mortar location but visit your customers in a particular geographic area, you’re called a “Service Business.” Service businesses include plumbers, carpet cleaners, and courier services. In this case, you’re still eligible for a listing. However, you’ll want to choose “Service business” when Google asks what kind of business you have.

Google asks what kind of business you are operating

Verifying your location

Google tries to make sure that only legitimate businesses are represented in GMB. It requires anyone claiming a Listing to verify their association with the company. The easiest way to start the process is to perform a desktop search on Google for your business name (for example, “Kido Chicago”). You’ll see a link in the panel on the right-hand side of the page that asks, “Own this business?”

Before you click that link to begin the verification process, make sure you are either not signed in to Google (you can create an account in the next step) or are signed into a Google account for your business instead of your personal Gmail. It’s not a requirement; however, sharing access to your listing with employees or other company agents from a business account will be much easier.

Screenshot of a listing for a coffeehouse. Highlighted is the link to claim the business.
Click on “Claim this business” to claim the profile of this business, if you are the owner.

Once you fill out the most basic information (see below for what these details are), Google will call and ask you to enter a PIN on the screen if it can corroborate your address and phone number. If it hasn’t previously seen a business with the phone number and address you submitted, you’ll be mailed a postcard within a week with instructions on how to verify with a PIN.

Verify with video

Some businesses may need to verify with a video. If you own a physical store, you will need to record your location by showing street signs, your business exterior and interior, and you need to show that you’re authorized to represent this business. You will need to show all these requirements in one continuous recording, so no editing.

If you own a service business, like a plumbing or landscaping business, you don’t have a physical store. That’s why you’ll need to show street signs or the surrounding area where your business is located, your business card or a branded vehicle or business tools, and finally proof of address. Again, this needs to be done in one continuous recording.

Primary business information

Name, Address, and Phone

This sounds simple, but it’s surprising how many business owners overthink these core attributes or try to “optimize” them. Your Name, Address, and Phone (NAP) are your thumbprint online. If they don’t reflect your business accurately in your Google Business Profile, Google—and your customers—lose trust that you are who you say you are. They will stop sending business your way.

  • Do NOT stuff keywords in your business name. Represent yourself as you would answer the phone or welcome a customer into your store. You probably see spammers doing this and succeeding all the time, but it’ll come back to bite them at some point. Google is monitoring for these kinds of abuses all the time, and they’re getting better at blocking the abusers.
  • Submit the same address you use on your website. If you’re a Yoast user, this should be the address you enter in the Yoast Local SEO plugin. Even if you’re a service-area business, you’ll have to submit a physical address, not a PO box or other mailing-only address.
  • You’ll see a map displayed just alongside your address. Zoom in and double-check that the pin is in the correct place on your business. Google’s pin precision for U.S. addresses is typically pretty good, but it can be spotty in other countries.
  • Don’t use a tracking phone number to segment customers coming from Google vs. other sources. There are ways to do this, but they’re pretty advanced. Implementing tracking numbers incorrectly can do tremendous damage to your local search rankings.

Choose a category for your business

From a rankings standpoint, the business category is the most important attribute you can optimize in your Google Business Profile. In our experience, it’s best to keep an eye on Google’s advice on categories. Remember that advice has changed frequently over the years and might change again.

Start typing, and the categories magically appear. You can add more later on

Google maintains a taxonomy of several thousand categories to describe local businesses. You’ll probably find a close match by typing in a few characters of a keyword that describes your business.

Google suggests “using as few categories as possible,” as well as categories that are “as specific as possible.” And while it’s true that Google can and does “detect category information from your website and mentions about your business throughout the web,” our advice is to specify as many relevant categories as possible on your Google Business Profile listing.

Google’s automated review system may remove one or two from your listing, but this is not spam—provided you select relevant categories. It helps you show up for as broad a range of searches as possible.

Service businesses can add the specific services they offer

Add your website

In the process, there’s a field to add your website URL, but it doesn’t have to be your “website” per se. In particular, if you operate in more than one location, you may want to enter the page on your website corresponding to the location you’re submitting to Google—rather than your homepage. Opinions are mixed on whether listing your homepage or a location page will help you rank better, so do what’s best for prospective customers. If you think your homepage will give them the best initial sense of your business, submit it as your “website”. If a location page—or even some other page—will give your audience a better sense, submit that instead.

Add a business description

Later on, you can add a business description to your profile. It’s essential to take a moment and find an excellent way to describe your business on Google. The business description field gives you a maximum of 750 characters to convince people to visit your business. This might be the first thing people read about your business, so make it unique and exciting. Describe what your business offers precisely and how you set yourself apart from your competitors. You can also talk about your history and anything else that’s helpful for customers.

Keep the description short and sweet, and don’t try to push promotions or deals you have running. You also shouldn’t include URLs or HTML code. Give it a think, write a few descriptions and pick the best one!

A unique business description can make you stand out from the rest

More business information

After entering the attributes above, you’re asked to verify your listing. But don’t stop there. There are a few other attributes that are well worth your time to add.

Photos and images

Photos may be the most neglected attribute in all of local SEO. The success of Instagram, Pinterest, and countless lesser-known apps indicates how visual our internet culture has become. Consumers often select—or reject!—a business because of its photos, judging not only their content but also their quality and professionalism.

Photos are essential in the mobile ecosystem that Google Business Profile powers (including Google Maps), where they dominate a business’s representation in Google’s user interfaces. Try to add authentic photos instead of stock photos, and make sure the pictures reflect your business.

As with all local media or social media sites, Google Business Profile has its image format requirements. Take some time to review them and make sure you have high-quality assets for each format. You can find more on improving your images in our image SEO guide.

Business hours

Selecting your opening hours is pretty straightforward. Google has dramatically improved its interface for telling customers when you’re open over the past several years. Hours will be front-and-center wherever customers interact with your business on Google, so they should be accurate. You can even daypart multiple times during the day and add specific hours for holidays and special events.

While you can’t control it, you may be interested to know that Google now displays the busyness of your business in real time. This is based on aggregated location data from visitors with Android phones and from iOS users of Google Maps with location services enabled.

See when a business opens and when the most popular times are

Specific categories of businesses will have the option to add their menu. If you’re lucky enough to be in one of these categories, we highly recommend adding it, as it gives Google an additional set of keywords to consider relevant to your business.

Accept messages for more leads

By default, this feature is turned on. It will add a messaging feature to your Business Profile, and it can be a great way to generate more leads. Of course, you need to keep this chat feature in mind and check regularly for any messages. If you respond quickly and helpfully, you’ll probably turn visitors into customers.

A screenshot of a Levi's store Google Business profile. The chat function is made bigger to highlight where it is on the page.
The chat function

If you’re slow to respond to customers, however, Google might disable this feature for you. So, if you’re not sure you’ll be able to keep track of this feature, it might be better to turn it off.

Share posts on your Business Profile

The Posts feature is a very useful way to communicate with your customers from your profile. With posts, you can connect with existing and potential customers through your Business Profile on Google Search and Maps. You can create and share announcements, offers, new or popular items in stock, or event details directly with your customers. Use this to promote your sales, specials, events, news, and offers. You can even add videos and photos to posts to make them more interesting!

A screenshot of the Seattle Children's Museum post on their Business Profile about their prideful playdate event in July.
Example of the post feature

Many options

Since Google Business Profile has grown quite a bit over the years, there are many more options to explore. You don’t need to use all of them, but some are valuable. For example, the Products feature, which highlights your products in your listing, is a great way to get people interested in what you offer.

Another great option is the possibility of getting potential customers to ask questions that you can respond to. Keep in mind that others can also answer these questions, so keep an eye on this. When it works, it can be a real help in turning visitors into customers.

And, as always, get reviews for your business! With a Business Profile, you can even respond to these reviews. It doesn’t have to be a message. Nowadays, you can also quickly respond with an emoji, like a heart or fire emoji.

Ranking factors beyond your control

There are two significant local search and local SEO ranking factors over which you have little control. They both have to do with the physical location of your business.

The location of your potential customer

The first is the proximity of your business to the location where your prospective customer is performing their search. All other things being equal, Google will display a company closer to the searcher than one farther away from them.

In the early years of Google, its algorithm favored businesses close to the center of a given city or its “centroid.” This was partly because Google wasn’t as good at detecting the location of the searcher as they are now. It defaulted to showing businesses in the areas with the highest population density.

Nowadays, this factor has declined in importance, especially for mobile searches, where Google has a precise idea of where you are. Google has also gotten better at detecting desktop searchers’ locations through surreptitious collection methods.

A physical store in the city where your customer is searching

The second factor is having an address in the city in which your customer is searching.  If your customer is searching in Seattle, your Tacoma or Bellevue-based coffee shop won’t appear. This is simply because it’s irrelevant to the customer’s search.

Short of opening additional locations to target areas where high concentrations of your customers are searching, there’s not much you can do to optimize for these ranking factors. Still, you should be aware of their importance.

Google Business Profile Insights

Google provides a free, lightweight analytics package as part of GBP. This gives you a basic sense of how customers and potential customers view and interact with your listing.

Insights show how often your listing appears in plain old search vs. Google Maps. It also shows the number of clicks to your website, requests for driving directions, phone calls, and more.

There’s also a breakdown of how many customers see your listing for direct searches (for your business specifically) vs. discovery searches (for businesses in your category). While no one outside of Google is entirely sure how they calculate the discovery number, it’s probably as good a barometer for the overall strength of your local SEO as any, mainly if you track it over time.

Troubleshooting listing issues

The most typical GMB troubleshooting issue continues to be duplicate listings for the same business. While it’s gotten harder to detect duplicate listings, it’s much easier to close them.

The first step to identifying duplicates is to search for your business name on Google. If it looks like multiple listings refer to your business, select the one you’d like to report as a duplicate and click “Suggest an Edit.” Follow the suggestions to have the listing marked as duplicate.

Google support staff are generally responsive to these reports within a reasonable time. If you continue to have trouble, ask multiple people—co-workers, friends, or family members—to report the same problem; that makes it more likely Google will look at it. If your issue seems particularly thorny, you’re most likely to get a response by tweeting @googlemybiz, the official Twitter support channel for Google Business Profile.

Some time ago, Google added a new feature to the Google My Business dashboard that allows you to manage your reviews and report reviews for removal. You can also check the status of reviews you’ve already reported there. Keep in mind that Google will only remove reviews that violate its policies, such as irrelevant or offensive content.

Structured data and Yoast Local SEO

Google increasingly depends on structured data to determine what your site is about and which elements represent what. This is true for your business information, including the information that Google Business Profile uses. Make sure you add the correct structured data to your site. Enhance your NAP details, opening hours, reviews, product information, et cetera, with Schema.org data. This will make it much easier for Google to determine the validity of your listing. Several tools can help you with this, including our Yoast Local SEO plugin.
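To make the kind of markup we mean concrete, here is a minimal sketch of LocalBusiness structured data generated with Python. The business name, address, and hours are made-up placeholders, and this is not the output of any particular plugin; tools like Yoast Local SEO produce equivalent JSON-LD for you:

import json

# A minimal Schema.org LocalBusiness example; all details below are
# hypothetical placeholders and should match your real listing exactly.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee House",
    "telephone": "+1-555-555-0100",
    "url": "https://www.example.com/",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Street",
        "addressLocality": "Philadelphia",
        "addressRegion": "PA",
        "postalCode": "19103",
        "addressCountry": "US",
    },
    "openingHoursSpecification": [
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "opens": "08:00",
            "closes": "17:00",
        }
    ],
}

# Embed the output in your page inside a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))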

Local SEO is critical, even with Google Business Profile

So, you should activate and maintain your Google My Business account and make it awesome. But to get the most out of your listings and to get good rankings, you must have your site in order as well. Optimize every part of it. Create local content for your chosen keyword and business location and get quality local backlinks to build a solid link profile. Ask customers to review your business onsite or on My Business. Make sure your listing is active and attractive; as we said, inaccurate profiles are no good.

Read more: The ultimate guide to small business SEO »


The Modern Guide To Robots.txt: How To Use It Avoiding The Pitfalls via @sejournal, @abbynhamilton

Robots.txt just turned 30 – cue the existential crisis! Like many hitting the big 3-0, it’s wondering if it’s still relevant in today’s world of AI and advanced search algorithms.

Spoiler alert: It definitely is!

Let’s take a look at how this file still plays a key role in managing how search engines crawl your site, how to leverage it correctly, and common pitfalls to avoid.

What Is A Robots.txt File?

A robots.txt file provides crawlers like Googlebot and Bingbot with guidelines for crawling your site. Like a map or directory at the entrance of a museum, it acts as a set of instructions at the entrance of the website, including details on:

  • Which crawlers are and aren’t allowed to enter.
  • Any restricted areas (pages) that shouldn’t be crawled.
  • Priority pages to crawl – via the XML sitemap declaration.

Its primary role is to manage crawler access to certain areas of a website by specifying which parts of the site are “off-limits.” This helps ensure that crawlers focus on the most relevant content rather than wasting the crawl budget on low-value content.

While a robots.txt file guides crawlers, it’s important to note that not all bots follow its instructions, especially malicious ones. But for most legitimate search engines, adhering to the robots.txt directives is standard practice.

What Is Included In A Robots.txt File?

Robots.txt files consist of lines of directives for search engine crawlers and other bots.

Valid lines in a robots.txt file consist of a field, a colon, and a value.

Robots.txt files also commonly include blank lines to improve readability and comments to help website owners keep track of directives.

Sample robots.txt file (Image from author, November 2024)

To get a better understanding of what is typically included in a robots.txt file and how different sites leverage it, I looked at robots.txt files for 60 domains with a high share of voice across health, financial services, retail, and high-tech.

Excluding comments and blank lines, the average number of lines across 60 robots.txt files was 152.

Large publishers and aggregators, such as hotels.com, forbes.com, and nytimes.com, typically had longer files, while hospitals like pennmedicine.org and hopkinsmedicine.org typically had shorter files. Retail sites’ robots.txt files typically fell close to the average of 152.

All sites analyzed include the fields user-agent and disallow within their robots.txt files, and 77% of sites included a sitemap declaration with the field sitemap.

Fields leveraged less frequently were allow (used by 60% of sites) and crawl-delay (used by 20% of sites).

Field | % of Sites Leveraging
user-agent | 100%
disallow | 100%
sitemap | 77%
allow | 60%
crawl-delay | 20%

Robots.txt Syntax

Now that we’ve covered what types of fields are typically included in a robots.txt, we can dive deeper into what each one means and how to use it.

For more information on robots.txt syntax and how it is interpreted by Google, check out Google’s robots.txt documentation.

User-Agent

The user-agent field specifies what crawler the directives (disallow, allow) apply to. You can use the user-agent field to create rules that apply to specific bots/crawlers or use a wild card to indicate rules that apply to all crawlers.

For example, the below syntax indicates that any of the following directives only apply to Googlebot.

user-agent: Googlebot

If you want to create rules that apply to all crawlers, you can use a wildcard instead of naming a specific crawler.

user-agent: *

You can include multiple user-agent fields within your robots.txt to provide specific rules for different crawlers or groups of crawlers, for example:

user-agent: *

#Rules here would apply to all crawlers

user-agent: Googlebot

#Rules here would only apply to Googlebot

user-agent: otherbot1

user-agent: otherbot2

user-agent: otherbot3

#Rules here would apply to otherbot1, otherbot2, and otherbot3

Disallow And Allow

The disallow field specifies paths that designated crawlers should not access. The allow field specifies paths that designated crawlers can access.

Because Googlebot and other crawlers will assume they can access any URLs that aren’t specifically disallowed, many sites keep it simple and only specify what paths should not be accessed using the disallow field.

For example, the below syntax would tell all crawlers not to access URLs matching the path /do-not-enter.

user-agent: *

disallow: /do-not-enter

#All crawlers are blocked from crawling pages with the path /do-not-enter

If you’re using both allow and disallow fields within your robots.txt, make sure to read the section on order of precedence for rules in Google’s documentation.

Generally, in the case of conflicting rules, Google will use the more specific rule.

For example, in the below case, Google won’t crawl pages with the path /do-not-enter because the disallow rule is more specific than the allow rule.

user-agent: *

allow: /

disallow: /do-not-enter

If neither rule is more specific, Google will default to using the less restrictive rule.

In the instance below, Google would crawl pages with the path /do-not-enter because the allow rule is less restrictive than the disallow rule.

user-agent: *

allow: /do-not-enter

disallow: /do-not-enter

Note that if there is no path specified for the allow or disallow fields, the rule will be ignored.

user-agent: *

disallow:

This is very different from only including a forward slash (/) as the value for the disallow field, which would match the root domain and any lower-level URL (translation: every page on your site).  

If you want your site to show up in search results, make sure you don’t have the following code. It will block all search engines from crawling all pages on your site.

user-agent: *

disallow: /

This might seem obvious, but believe me, I’ve seen it happen.

URL Paths

URL paths are the portion of the URL after the protocol, subdomain, and domain beginning with a forward slash (/). For the example URL https://www.example.com/guides/technical/robots-txt, the path would be /guides/technical/robots-txt.

Example URL structure (Image from author, November 2024)

URL paths are case-sensitive, so be sure to double-check that the use of uppercase and lowercase letters in the robots.txt aligns with the intended URL path.
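If you ever want to double-check exactly which part of a URL your rules are matched against, Python’s standard library can isolate the path for you. A tiny illustrative snippet (the URL is just an example):

from urllib.parse import urlparse

# Isolate the path component that robots.txt rules are matched against.
url = "https://www.example.com/guides/technical/robots-txt"
print(urlparse(url).path)  # prints: /guides/technical/robots-txt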

Special Characters

Google, Bing, and other major search engines also support a limited number of special characters to help match URL paths.

A special character is a symbol that has a unique function or meaning instead of just representing a regular letter or number. Special characters supported by Google in robots.txt are:

  • Asterisk (*) – matches 0 or more instances of any character.
  • Dollar sign ($) – designates the end of the URL.

To illustrate how these special characters work, assume we have a small site with the following URLs:

  • https://www.example.com/
  • https://www.example.com/search
  • https://www.example.com/guides
  • https://www.example.com/guides/technical
  • https://www.example.com/guides/technical/robots-txt
  • https://www.example.com/guides/technical/robots-txt.pdf
  • https://www.example.com/guides/technical/xml-sitemaps
  • https://www.example.com/guides/technical/xml-sitemaps.pdf
  • https://www.example.com/guides/content
  • https://www.example.com/guides/content/on-page-optimization
  • https://www.example.com/guides/content/on-page-optimization.pdf

Example Scenario 1: Block Site Search Results

A common use of robots.txt is to block internal site search results, as these pages typically aren’t valuable for organic search results.

For this example, assume when users conduct a search on https://www.example.com/search, their query is appended to the URL.

If a user searched “xml sitemap guide,” the new URL for the search results page would be https://www.example.com/search?search-query=xml-sitemap-guide.

When you specify a URL path in the robots.txt, it matches any URLs with that path, not just the exact URL. So, to block both the URLs above, using a wildcard isn’t necessary.

The following rule would match both https://www.example.com/search and https://www.example.com/search?search-query=xml-sitemap-guide.

user-agent: *

disallow: /search

#All crawlers are blocked from crawling pages with the path /search

If a wildcard (*) were added, the results would be the same.

user-agent: *

disallow: /search*

#All crawlers are blocked from crawling pages with the path /search

Example Scenario 2: Block PDF files

In some cases, you may want to use the robots.txt file to block specific types of files.

Imagine the site decided to create PDF versions of each guide to make it easy for users to print. The result is two URLs with exactly the same content, so the site owner may want to block search engines from crawling the PDF versions of each guide.

In this case, using a wildcard (*) would be helpful to match the URLs where the path starts with /guides/ and ends with .pdf, but the characters in between vary.

user-agent: *

disallow: /guides/*.pdf

#All crawlers are blocked from crawling pages with URL paths that contain: /guides/, 0 or more instances of any character, and .pdf

The above directive would prevent search engines from crawling the following URLs:

  • https://www.example.com/guides/technical/robots-txt.pdf
  • https://www.example.com/guides/technical/xml-sitemaps.pdf
  • https://www.example.com/guides/content/on-page-optimization.pdf

Example Scenario 3: Block Category Pages

For the last example, assume the site created category pages for technical and content guides to make it easier for users to browse content in the future.

However, since the site only has three guides published right now, these pages aren’t providing much value to users or search engines.

The site owner may want to temporarily prevent search engines from crawling the category page only (e.g., https://www.example.com/guides/technical), not the guides within the category (e.g., https://www.example.com/guides/technical/robots-txt).

To accomplish this, we can leverage “$” to designate the end of the URL path.

user-agent: *

disallow: /guides/technical$

disallow: /guides/content$

#All crawlers are blocked from crawling pages with URL paths that end with /guides/technical and /guides/content

The above syntax would prevent the following URLs from being crawled:

  • https://www.example.com/guides/technical
  • https://www.example.com/guides/content

While allowing search engines to crawl:

  • https://www.example.com/guides/technical/robots-txt
  • https://www.example.com/guides/content/on-page-optimization
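If you want to sanity-check how * and $ behave before relying on them, the matching logic in the three scenarios above can be approximated with a small regex translation in Python. This is an illustrative sketch only, not how Google’s crawler is implemented, and it ignores details like percent-encoding and rule precedence:

import re

def rule_matches(pattern: str, path: str) -> bool:
    # Escape regex metacharacters, then restore the two robots.txt specials:
    # * means "0 or more of any character", a trailing $ anchors the end.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    # robots.txt rules match from the start of the path (prefix matching).
    return re.match(regex, path) is not None

print(rule_matches("/guides/*.pdf", "/guides/technical/robots-txt.pdf"))   # True
print(rule_matches("/guides/technical$", "/guides/technical"))             # True
print(rule_matches("/guides/technical$", "/guides/technical/robots-txt"))  # False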

Sitemap

The sitemap field is used to provide search engines with a link to one or more XML sitemaps.

While not required, it’s a best practice to include XML sitemaps within the robots.txt file to provide search engines with a list of priority URLs to crawl.  

The value of the sitemap field should be an absolute URL (e.g., https://www.example.com/sitemap.xml), not a relative URL (e.g., /sitemap.xml). If you have multiple XML sitemaps, you can include multiple sitemap fields.

Example robots.txt with a single XML sitemap:

user-agent: *

disallow: /do-not-enter

sitemap: https://www.example.com/sitemap.xml

Example robots.txt with multiple XML sitemaps:

user-agent: *

disallow: /do-not-enter

sitemap: https://www.example.com/sitemap-1.xml

sitemap: https://www.example.com/sitemap-2.xml

sitemap: https://www.example.com/sitemap-3.xml

Crawl-Delay

As mentioned above, 20% of sites also include the crawl-delay field within their robots.txt file.

The crawl-delay field tells bots how fast they can crawl the site and is typically used to slow down crawling to avoid overloading servers.

The value for crawl-delay is the number of seconds crawlers should wait to request a new page. The below rule would tell the specified crawler to wait five seconds after each request before requesting another URL.

user-agent: FastCrawlingBot

crawl-delay: 5

Google has stated that it does not support the crawl-delay field, and it will be ignored.

Other major search engines like Bing and Yahoo respect crawl-delay directives for their web crawlers.

Search Engine | Primary user-agent for search | Respects crawl-delay?
Google | Googlebot | No
Bing | Bingbot | Yes
Yahoo | Slurp | Yes
Yandex | YandexBot | Yes
Baidu | Baiduspider | No

Sites most commonly include crawl-delay directives for all user agents (using user-agent: *), the search engine crawlers mentioned above that respect crawl-delay, and crawlers for SEO tools like AhrefsBot and SemrushBot.

The number of seconds crawlers were instructed to wait before requesting another URL ranged from one second to 20 seconds, but crawl-delay values of five seconds and 10 seconds were the most common across the 60 sites analyzed.

Testing Robots.txt Files

Any time you’re creating or updating a robots.txt file, make sure to test directives, syntax, and structure before publishing.

This robots.txt Validator and Testing Tool makes this easy to do (thank you, Max Prin!).

To test a live robots.txt file, simply:

  • Add the URL you want to test.
  • Select your user agent.
  • Choose “live.”
  • Click “test.”

The below example shows that Googlebot smartphone is allowed to crawl the tested URL.

Example robots.txt test - crawling allowed (Image from author, November 2024)

If the tested URL is blocked, the tool will highlight the specific rule that prevents the selected user agent from crawling it.

Example robots.txt test - crawling disallowed (Image from author, November 2024)

To test new rules before they are published, switch to “Editor” and paste your rules into the text box before testing.
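If you’d rather test rules locally, Python’s standard library includes a basic robots.txt parser. Below is a minimal sketch with made-up rules and URLs; note that urllib.robotparser does plain prefix matching and doesn’t fully support Google’s * and $ semantics, so test wildcard rules with the validator mentioned above instead:

from urllib.robotparser import RobotFileParser

# Parse draft rules from a string instead of a live file.
rules = """
user-agent: *
disallow: /search
disallow: /do-not-enter
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

for url in [
    "https://www.example.com/guides/technical/robots-txt",
    "https://www.example.com/search?search-query=xml-sitemap-guide",
    "https://www.example.com/do-not-enter",
]:
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)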

Common Uses Of A Robots.txt File

While what is included in a robots.txt file varies greatly by website, analyzing 60 robots.txt files revealed some commonalities in how it is leveraged and what types of content webmasters commonly block search engines from crawling.

Preventing Search Engines From Crawling Low-Value Content

Many websites, especially large ones like ecommerce or content-heavy platforms, often generate “low-value pages” as a byproduct of features designed to improve the user experience.

For example, internal search pages and faceted navigation options (filters and sorts) help users find what they’re looking for quickly and easily.

While these features are essential for usability, they can result in duplicate or low-value URLs that aren’t valuable for search.

The robots.txt is typically leveraged to block these low-value pages from being crawled.

Common types of content blocked via the robots.txt include:

  • Parameterized URLs: URLs with tracking parameters, session IDs, or other dynamic variables are blocked because they often lead to the same content, which can create duplicate content issues and waste the crawl budget. Blocking these URLs ensures search engines only index the primary, clean URL.
  • Filters and sorts: Blocking filter and sort URLs (e.g., product pages sorted by price or filtered by category) helps avoid indexing multiple versions of the same page. This reduces the risk of duplicate content and keeps search engines focused on the most important version of the page.
  • Internal search results: Internal search result pages are often blocked because they generate content that doesn’t offer unique value. If a user’s search query is injected into the URL, page content, and meta elements, sites might even risk some inappropriate, user-generated content getting crawled and indexed (see the sample screenshot in this post by Matt Tutt). Blocking them prevents this low-quality – and potentially inappropriate – content from appearing in search.
  • User profiles: Profile pages may be blocked to protect privacy, reduce the crawling of low-value pages, or ensure focus on more important content, like product pages or blog posts.
  • Testing, staging, or development environments: Staging, development, or test environments are often blocked to ensure that non-public content is not crawled by search engines.
  • Campaign sub-folders: Landing pages created for paid media campaigns are often blocked when they aren’t relevant to a broader search audience (i.e., a direct mail landing page that prompts users to enter a redemption code).
  • Checkout and confirmation pages: Checkout pages are blocked to prevent users from landing on them directly through search engines, enhancing user experience and protecting sensitive information during the transaction process.
  • User-generated and sponsored content: Sponsored content or user-generated content created via reviews, questions, comments, etc., are often blocked from being crawled by search engines.
  • Media files (images, videos): Media files are sometimes blocked from being crawled to conserve bandwidth and reduce the visibility of proprietary content in search engines. It ensures that only relevant web pages, not standalone files, appear in search results.
  • APIs: APIs are often blocked to prevent them from being crawled or indexed because they are designed for machine-to-machine communication, not for end-user search results. Blocking APIs protects their usage and reduces unnecessary server load from bots trying to access them.

Blocking “Bad” Bots

Bad bots are web crawlers that engage in unwanted or malicious activities such as scraping content and, in extreme cases, looking for vulnerabilities to steal sensitive information.

Other bots without any malicious intent may still be considered “bad” if they flood websites with too many requests, overloading servers.

Additionally, webmasters may simply not want certain crawlers accessing their site because they don’t stand to gain anything from it.

For example, you may choose to block Baidu if you don’t serve customers in China and don’t want to risk requests from Baidu impacting your server.

Though some of these “bad” bots may disregard the instructions outlined in a robots.txt file, websites still commonly include rules to disallow them.

Out of the 60 robots.txt files analyzed, 100% disallowed at least one user agent from accessing all content on the site (via the disallow: /).

Blocking AI Crawlers

Across sites analyzed, the most blocked crawler was GPTBot, with 23% of sites blocking GPTBot from crawling any content on the site.

Originality.ai’s live dashboard, which tracks how many of the top 1,000 websites are blocking specific AI web crawlers, found similar results: 27% of the top 1,000 sites were blocking GPTBot as of November 2024.

Reasons for blocking AI web crawlers may vary – from concerns over data control and privacy to simply not wanting your data used in AI training models without compensation.

The decision on whether or not to block AI bots via the robots.txt should be evaluated on a case-by-case basis.

If you don’t want your site’s content to be used to train AI but also want to maximize visibility, you’re in luck. OpenAI is transparent about how it uses GPTBot and other web crawlers.

At a minimum, sites should consider allowing OAI-SearchBot, which is used to feature and link to websites in SearchGPT, ChatGPT’s recently launched real-time search feature.

Blocking OAI-SearchBot is far less common than blocking GPTBot, with only 2.9% of the top 1,000 sites blocking the SearchGPT-focused crawler.
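As a hedged illustration following the same pattern as the earlier examples, a site that wants to opt out of GPTBot while explicitly allowing OAI-SearchBot could use rules like the following (check OpenAI’s documentation for its current crawler names before relying on this):

user-agent: GPTBot

disallow: /

user-agent: OAI-SearchBot

allow: /

#GPTBot is blocked from all content, while OAI-SearchBot may crawl everything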

Getting Creative

In addition to being an important tool in controlling how web crawlers access your site, the robots.txt file can also be an opportunity for sites to show their “creative” side.

While sifting through files from over 60 sites, I also came across some delightful surprises, like the playful illustrations hidden in the comments on Marriott and Cloudflare’s robots.txt files.

Marriott robots.txt file (Screenshot of marriott.com/robots.txt, November 2024)
Screenshot of cloudflare.com/robots.txt, November 2024

Multiple companies are even turning these files into unique recruitment tools.

TripAdvisor’s robots.txt doubles as a job posting with a clever message included in the comments:

“If you’re sniffing around this file, and you’re not a robot, we’re looking to meet curious folks such as yourself…

Run – don’t crawl – to apply to join TripAdvisor’s elite SEO team[.]”

If you’re looking for a new career opportunity, you might want to consider browsing robots.txt files in addition to LinkedIn.

How To Audit Robots.txt

Auditing your Robots.txt file is an essential part of most technical SEO audits.

Conducting a thorough robots.txt audit ensures that your file is optimized to enhance site visibility without inadvertently restricting important pages.

To audit your Robots.txt file:

  • Crawl the site using your preferred crawler. (I typically use Screaming Frog, but any web crawler should do the trick.)
  • Filter crawl for any pages flagged as “blocked by robots.txt.” In Screaming Frog, you can find this information by going to the response codes tab and filtering by “blocked by robots.txt.”
  • Review the list of URLs blocked by the robots.txt to determine whether they should be blocked. Refer to the above list of common types of content blocked by robots.txt to help you determine whether the blocked URLs should be accessible to search engines.
  • Open your robots.txt file and conduct additional checks to make sure your robots.txt file follows SEO best practices (and avoids common pitfalls) detailed below.
Sample Screaming Frog report (Image from author, November 2024)
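As an optional extra check after your crawler run, the same standard-library parser used above can replay a list of crawled URLs against the live file. A minimal sketch, assuming a plain-text export of URLs from your crawl; the domain and file name are placeholders:

from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt (placeholder domain).
parser = RobotFileParser("https://www.example.com/robots.txt")
parser.read()

# urls.txt is a placeholder for a one-URL-per-line export from your crawler.
with open("urls.txt") as crawl_export:
    urls = [line.strip() for line in crawl_export if line.strip()]

# Same caveat as above: wildcard rules may be interpreted differently here.
blocked = [url for url in urls if not parser.can_fetch("Googlebot", url)]

print(f"{len(blocked)} of {len(urls)} crawled URLs are blocked by robots.txt:")
for url in blocked:
    print(" -", url)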

Robots.txt Best Practices (And Pitfalls To Avoid)

The robots.txt is a powerful tool when used effectively, but there are some common pitfalls to steer clear of if you don’t want to harm the site unintentionally.

The following best practices will help you set yourself up for success and avoid unintentionally blocking search engines from crawling important content:

  • Create a robots.txt file for each subdomain. Each subdomain on your site (e.g., blog.yoursite.com, shop.yoursite.com) should have its own robots.txt file to manage crawling rules specific to that subdomain. Search engines treat subdomains as separate sites, so a unique file ensures proper control over what content is crawled or indexed.
  • Don’t block important pages on the site. Make sure priority content, such as product and service pages, contact information, and blog content, are accessible to search engines. Additionally, make sure that blocked pages aren’t preventing search engines from accessing links to content you want to be crawled and indexed.
  • Don’t block essential resources. Blocking JavaScript (JS), CSS, or image files can prevent search engines from rendering your site correctly. Ensure that important resources required for a proper display of the site are not disallowed.
  • Include a sitemap reference. Always include a reference to your sitemap in the robots.txt file. This makes it easier for search engines to locate and crawl your important pages more efficiently.
  • Don’t only allow specific bots to access your site. If you disallow all bots from crawling your site, except for specific search engines like Googlebot and Bingbot, you may unintentionally block bots that could benefit your site. Example bots include:
    • FacebookExternalHit – used to fetch Open Graph data.
    • GooglebotNews – used for the News tab in Google Search and the Google News app.
    • AdsBot-Google – used to check webpage ad quality.
  • Don’t block URLs that you want removed from the index. Blocking a URL in robots.txt only prevents search engines from crawling it, not from indexing it if the URL is already known. To remove pages from the index, use other methods like the “noindex” tag or URL removal tools, ensuring they’re properly excluded from search results.
  • Don’t block Google and other major search engines from crawling your entire site. Just don’t do it.

TL;DR

  • A robots.txt file guides search engine crawlers on which areas of a website to access or avoid, optimizing crawl efficiency by focusing on high-value pages.
  • Key fields include “User-agent” to specify the target crawler, “Disallow” for restricted areas, and “Sitemap” to point crawlers to the XML sitemap of priority pages. The file can also include directives like “Allow” and “Crawl-delay.”
  • Websites commonly leverage robots.txt to block internal search results, low-value pages (e.g., filters, sort options), or sensitive areas like checkout pages and APIs.
  • An increasing number of websites are blocking AI crawlers like GPTBot, though this might not be the best strategy for sites looking to gain traffic from additional sources. To maximize site visibility, consider allowing OAI-SearchBot at a minimum. 
  • To set your site up for success, ensure each subdomain has its own robots.txt file, test directives before publishing, include an XML sitemap declaration, and avoid accidentally blocking key content.

Featured Image: Se_vector/Shutterstock

WP Engine Vs Automattic: Judge Inclined To Grant Preliminary Injunction via @sejournal, @martinibuster

WP Engine had its day in court, but it didn’t go entirely in its favor: Judge Araceli Martínez-Olguín found the request for a preliminary injunction too vague. However, the judge said she was “inclined to grant some sort of injunction.”

“That’s How You Set A Ransom”

The attorney for the plaintiff offered new details about what happened behind the scenes on the day Matt Mullenweg went “nuclear” on WP Engine at WordCamp US. She first explained that Mullenweg’s demand for a trademark license was a sham, then showed how Mullenweg had failed to enforce his trademark claim for 15 years.

Among the new details: Mullenweg’s demand for $32 million was communicated in a one-page letter, and the agreement was for a seven-year period that automatically renews “essentially forever.” She then revealed how Mullenweg decided on the $32 million figure, explaining that it was just “a number” that Mullenweg felt WP Engine was able to pay.

The point of this part of the plaintiff’s argument was to show that the royalty rate Mullenweg was asking for was not based on any value of the mark; rather, it was a figure Mullenweg felt he could squeeze out of WP Engine, with the attorney saying the rate was “set in an extortionate manner.”

WP Engine’s attorney offered this narrative of events:

“We know that defendants had no right to offer that, quote unquote, service because it is a pretext. It is a sham. …You look at the record. We see that for 15 years, WP Engine was making nominative fair use of the WordPress mark as the entire community did for 15 years without so much as a shoulder tap. ‘Excuse me.’ ‘Here’s an email.’ ‘Here’s a text.’ ‘Here’s a cease and desist letter.’  Nothing.

Nothing whatsoever, until the morning of September 20th when we receive this one page bizarre trademark license agreement. That’s not how trademark owners operate. That is not how you protect and enforce your mark. You don’t wait 15 years and then drop a demand for thirty two million dollars on the recipient.

We also know from the price set, …this one page license listed a price of eight percent of WP Engine’s gross revenues, which happens to amount to thirty two million dollars. And it set that price for a seven year period to automatically renew essentially forever.

And when asked, how did you set that price? Mr. Mullenweg, defendant Matthew Mullenweg, acknowledged, “it’s what I thought they could pay. We did an analysis to figure out what the free cash flow was. That’s how we set that number.” That’s not how you calculate a royalty. That’s how you set a ransom.”

Judge Questioned WP Engine’s Attorneys

At one point in the proceedings, Judge Araceli Martínez-Olguín asked WP Engine’s attorneys what right to continued access they had without paying for any kind of license.

WP Engine’s attorney answered:

“So there’s just simply no connection there, your Honor, whatsoever. The test is not: does WP Engine have a right to be free from a trademark license?”

The attorney also pointed out that free access to WordPress.org was the “status quo” for fifteen years, which changed on September 20th when Mullenweg initiated his dispute with WP Engine.

Automattic’s Defense Tactic

The attorney for Automattic and Mullenweg argued several technical points as to why the judge should not grant an injunction. One key point was that WP Engine’s extortion claim, under California law, fails because California courts do not recognize a private cause of action for attempted extortion under the California Penal Code.

They then pointed out that the case law WP Engine’s attorney is relying on (Tran v. Winn) concerns a different legal concept (duress and rescission) rather than extortion. They said that the plaintiff’s legal theory doesn’t match extortion claims and involves different legal principles.

Automattic’s attorney then followed up by stating that even if WP Engine could rely on Tran v. Winn, the plaintiff’s argument still fails under the other case law they cite to support their claims (the Levitt case). They argued that the plaintiff cannot meet the legal standard for economic extortion because they are unable to show that the defendant had no right to demand payment for the services in question.

One argument made by Automattic’s attorney about the trademark license demand is that the plaintiffs omit a second option in the license, which was to provide volunteer hours equivalent to the payment. The attorney, Shaw, also pointed out that Mullenweg had referenced negotiating the terms the following week, but WP Engine never responded to his message.

The attorney said:

“…there is a text from Mr. Mullenweg in which he says, or he makes reference to even negotiating the terms the following week. They just never responded to Mr. Mullenweg’s response.”

What The Judge Said

Judge Araceli Martínez-Olguín had a lot to untangle, with perhaps the main issue being that WP Engine’s proposed injunction was too vague.

The judge gave an indication of what direction she was leaning but also explained that the request was a “non-starter.”

“Having reviewed everything, I am inclined to grant some sort of injunction. Here’s the problem that I have with your proposed injunction, though. This is a nonstarter because it is exceedingly vague.”

The judge then encouraged the parties to work together to narrow the preliminary injunction down to something that isn’t vague and, failing that, said they could submit “dueling submissions.” There was some back and forth about when to return to court, with WP Engine asking for a Friday date and eventually agreeing to return on Tuesday, December 3rd.

Reaction To Preliminary Injunction Hearing

A lawyer live-blogging the proceedings on Bluesky wrote up his take on what happened:

“I knew that WPE was in very good shape when the opening question was “tell me about your one best shot” because that’s not generally a question you’d ask if you thought nothing had any merit.

I thought that tortious interference was the best shot. I’m pretty sure WordPress’s lawyers did too.”

And followed up with:

“I was reasonably sure that this was leaning toward a grant on the PI. I think that Automattic was close to getting their alternative, but Mack may have saved things with his tech walk through.”

He had a favorable view of the judge, saying that she appears to recognize that some of the technical issues are outside her area of expertise and that she expressed a willingness to ask questions to better understand them.

He expanded on that view, and on the likely outcome:

“It’s clear that the Judge isn’t overly technical in her background, but is aware of that and is willing to listen attentively – this is very good, and not a universal federal judge trait.

It will be interesting to see what we get on Monday.

Almost certainly, dueling proposals.”

This summary of what happened in court is based on a lawyer’s live blog and post on Bluesky covering the proceedings, which were held over Zoom.

Featured Image by Shutterstock/Jidvg

Accelerating generative AI deployment with microservices

In this exclusive webcast, we delve into the transformative potential of portable microservices for the deployment of generative AI models. We explore how startups and large organizations are leveraging this technology to streamline generative AI deployment, enhance customer service, and drive innovation across domains, including chatbots, document analysis, and video generation.

Our discussion focuses on overcoming key challenges such as deployment complexity, security, and cost management. We also discuss how microservices can help executives realize business value with generative AI while maintaining control over data and intellectual property.

Data Suggest Solid 2024 Holiday Sales

Based on expert projections and the latest economic data, the 2024 holiday shopping season has the potential to be a good one for online retailers.

The National Retail Federation says a record 183.4 million Americans plan to shop in physical stores and online from Thanksgiving Day through Cyber Monday this year. That would be up from the previous record of 182 million in 2023.

For the entire holiday season, NRF projects that the internet will again be the top shopping destination for holiday shoppers, with 57% of U.S. consumers planning to do at least some of their holiday shopping online. Department and grocery stores will tie for second at 46% each, and discount stores will be in third place at 45%.

The NRF expects total (online and offline) 2024 U.S. holiday sales in November and December to increase by up to 2.6% over 2023, reaching $989.9 billion. That would amount to a record average spend of $902 per person for gifts, food, decorations, and other seasonal items. Online holiday sales in 2024 will reach $300.0 billion, up 8.3% from the prior year, according to the NRF.

‘The most mobile’ year for ecommerce

Adobe expects U.S. online holiday sales to hit a record of $240.8 billion (PDF) in November and December, up 8.4% from 2023. Adobe says that includes a projected record of $128.1 billion in mobile device purchases, giving mobile a 53.2% share of online holiday spending. That would make 2024’s holiday season “the most mobile of all time,” Adobe says.

Adobe expects online sales for Cyber Week (Thanksgiving through Cyber Monday) to hit $40.6 billion, with Cyber Monday being the biggest shopping day of the season at $13.2 billion in online sales.

Another indication of the strength of online shopping comes from the annual holiday spending survey of The Conference Board, a think tank. The survey found that 43% of U.S. consumers expect to purchase at least half of their gifts online in 2024, while just 8% expect to buy gifts entirely in physical stores.

The Conference Board also found that consumers plan to spend an average of $1,063 on holiday-related purchases in 2024, up 7.9% from 2023. That includes an average of $677 spent on gifts, up 3.4% from last year.

The think tank’s survey found that 52% of U.S. shoppers plan to spend the same amount on gifts in 2024 as in 2023, while 23% plan to spend more and 25% less than last year.

A $1 trillion year?

The data and consulting firm Forrester Research expects total (online and offline) U.S. holiday retail sales to hit a cool $1 trillion this year, a 3.7% increase from last year. Forrester says online sales will grow 10.1% year-over-year, reaching $257 billion, or nearly 26% of total U.S. holiday retail sales, up from 24.2% in 2023.

In a report, Forrester says, “Online retail sales for the holiday period will grow faster than in the previous two years but slower than the average of pre-pandemic years when they mostly grew by double digits.”

“2024’s holiday season for e-commerce looks promising but not exceptional,” Forrester analyst Jitender Miglani, based in New Delhi, India, told Practical Ecommerce in an email. “Sales are expected to grow faster than the past two years but slower than the double-digit growth seen pre-pandemic.”

Miglani expects holiday sales growth “to be largely volume-driven rather than inflation-driven.” That’s because “goods inflation is nearly zero, while headline inflation remains slightly above 2%, primarily due to higher inflation in services,” he added.

Additional indicators

Other surveys, projections, and economic data point to a cheerful, if not exuberant, holiday season for retailers.

  • The consulting firm Bain & Co. expects online retail sales growth of 9.5% in 2024, down significantly from the 13.1% 10-year average. But Bain expects physical store sales to grow a slim 0.5%, leading to total holiday sales growth of 3%, the smallest year-over-year gain since 2018 and more than two points below the 10-year average of 5.2%.
  • The U.S. Bureau of Labor Statistics reports that average hourly earnings in the U.S. grew faster than inflation from September to October and year over year. The September-to-October increase was 0.1%. The agency says average hourly earnings after inflation rose 1.4%, seasonally adjusted, from October 2023 to October 2024.
  • The BLS also reports that the economy kept generating net jobs in October, albeit slowly after hurricanes Helene and Milton. Total nonfarm payroll employment grew by 12,000, and the unemployment rate was unchanged at 4.1%, BLS reported. Employment in health care and government continued to increase, while temporary help services lost jobs and manufacturing employment declined due to strike activity.
  • The payroll processing firm ADP presented a much rosier employment picture than the BLS. The firm, which uses a different methodology than the government, says private employers added 233,000 jobs in October, the most since July 2023.