Google Rolls Out AI-Organized Search Results Pages via @sejournal, @MattGSouthern

Google is introducing AI-organized search results pages in the United States.

The new feature, set to launch this week, returns a full page of multi-format results personalized for the searcher.

Google’s announcement states:

“This week, we’re rolling out search results pages organized with AI in the U.S. — beginning with recipes and meal inspiration on mobile. You’ll now see a full-page experience, with relevant results organized just for you. You can easily explore content and perspectives from across the web including articles, videos, forums and more — all in one place.”

Key Features

The AI-organized pages will compile various content types, including articles, videos, and forum discussions.

Google claims this approach will provide users with a more diverse range of information sources and perspectives.

In its announcement, Google adds:

“… with AI-organized search results pages, we’re bringing people more diverse content formats and sites, creating even more opportunities for content to be discovered.”

Industry Implications

While Google touts the benefits of AI-organized search results pages, the update raises several questions:

  1. How will the AI-organized pages affect traffic to individual websites? Keeping users on Google’s results page might reduce clicks to source websites.
  2. With AI determining content organization, there are concerns about potential biases in how information is presented.
  3. The new format may require new strategies to ensure visibility within these AI-organized results.
  4. It’s unclear how this change will impact ad visibility.

This update could alter how we approach SEO. We may need to adapt strategies to ensure content is discoverable and presentable in this new format.

Microsoft’s Bing recently announced an expansion of its generative search capabilities, focusing on handling complex, informational queries. Google’s reorganization of entire results pages appears to be a distinct offering compared with Bing’s approach.

The initial rollout focusing on mobile devices for recipe and meal-related queries aligns with Google’s mobile-first indexing approach.

It remains to be seen how this feature will translate to desktop searches.

Google’s Response to Industry Concerns

In light of the questions raised by this update, we contacted Google for clarification on several key points.

Impact on Search Console Tracking

Regarding how AI-organized search results will be tracked in Google Search Console, a Google spokesperson stated:

“We do not separate traffic by every feature in Search Console, but publishers will continue to see their traffic from Search reflected there. Check out the supported search appearances in our documentation.”

This suggests that while specific metrics will not be available for AI-organized pages, site owners will still be able to access overall traffic data.

Timeline for Expansion

When asked about the timeline for expanding this feature to other categories and regions, Google responded:

“When we previewed this feature, we mentioned expanding this to additional categories including dining, movies, music, books, hotels, and shopping. No further details to share at this time.”

While this confirms expansion plans, Google has not provided specific timelines for these rollouts.

Guidance for SEO Professionals and Content Creators

On whether new tools or guidance will be provided for optimizing content for AI-organized search results, Google emphasized that no changes are necessary:

“SEO professionals and creators don’t need to do anything differently. Search results pages organized with AI are rooted in our core Search ranking and quality systems, which we have been honing for decades to surface high quality information.”

This response suggests that existing SEO best practices should continue to be effective for visibility in these new result formats.

Looking Ahead

Google’s responses provide some clarity but also leave room for speculation.

The lack of specific tracking for AI-organized pages in Search Console may present challenges for SEO professionals in understanding the direct impact of this new feature on their traffic.

The confirmation of plans to expand to other categories like dining, movies, music, books, hotels, and shopping indicates that this update could have far-reaching effects across various industries.

Despite Google’s assurances, new best practices may emerge as the SEO community adapts to this significant change in search result presentation.

We here at SEJ will closely monitor the rollout and report on its effects and what it means for you in the coming months. Sign up for the SEJ newsletter to stay up to date.


Featured Image: JarTee/Shutterstock

WordPress Co-Founder & Automattic Sued For Attempted Extortion via @sejournal, @martinibuster

WP Engine, the managed web host recently banned from WordPress.org, has filed a federal lawsuit against WordPress co-founder Matt Mullenweg and Automattic, alleging attempted extortion as well as seeking relief from trademark infringement claims.

The goal of the federal lawsuit is to stop Mullenweg from continuing to “harm” WP Engine and the WordPress ecosystem as well as to hold Mullenweg and Automattic accountable for their “broken promises and malfeasance.” Malfeasance refers to wrongful or illegal conduct that may involve violations of the law or unethical actions, particularly by someone in a position of authority.

What The 98-Page Lawsuit Is About

The TL;DR of the lawsuit is that it’s “about abuse of power, extortion, and greed.” It states that the open source WordPress ecosystem was built on the promise of the freedom to create with WordPress without constraints, that those promises were not kept, and that the failure to keep them constituted wrongful acts, including making false statements to the IRS, among many other accusations backed up with evidence.

WP Engine made the following statement to Search Engine Journal:

“Matt Mullenweg and Automattic’s self-proclaimed scorched earth campaign against WP Engine has harmed not just our company, but the entire WordPress ecosystem. The symbiotic relationship between WordPress, its community and the businesses that invest millions to support WordPress users, and advance the ecosystem, is based on trust in the promises of openness and freedom.

Matt Mullenweg’s conduct over the last ten days has exposed significant conflicts of interests and governance issues that, if left unchecked, threaten to destroy that trust. WP Engine has no choice but to pursue these claims to protect its people, agency partners, customers, and the broader WordPress community.”

WP Engine’s lawsuit asks for judgment on these eleven points:

(1) Intentional Interference with Contractual Relations;
(2) Intentional Interference with Prospective Economic Relations;
(3) Computer Fraud and Abuse Act, 18 U.S.C. § 1030 et seq.;
(4) Attempted Extortion;
(5) Unfair Competition, Cal. Bus. Prof. Code § 17200, et seq.;
(6) Promissory Estoppel;
(7) Declaratory Judgment of Non-Infringement;
(8) Declaratory Judgment of Non-Dilution;
(9) Libel;
(10) Trade Libel; and
(11) Slander

Denial Of Trademark Infringement

The lawsuit denies Mullenweg’s assertion that using the word “WordPress” to refer to the open source project is subject to licensing fees. WP Engine argues that using the project’s name is legal and necessary to communicate information consumers need, that the name WordPress is freely used by the entire WordPress community, and that such use has been condoned by the defendant Mullenweg.

Many times, lawsuits are won or lost based on the evidence provided and WP Engine’s lawsuit offers evidence to back up every allegation made against Mullenweg and Automattic.

For example, the lawsuit provides evidence that WP Engine was not misusing or infringing on the trademarks and that Mullenweg had been fully aware of WPE’s use of the challenged terms.

  • WP Engine’s membership in the ‘Five for the Future’ program is an acknowledgment that WPE was not infringing because one of the requirements of membership is to not currently be infringing on trademarks
  • Matt Mullenweg praised WP Engine on March 21, 2023 at the DE{CODE} developer conference, despite the fact that WPE used the challenged terms on their website at the time. The lawsuit quotes Mullenweg’s public statement: “vote with your wallet. So when you support companies like WPE, who don’t just provide a commercial service, but are also part of a wider open source community, you’re saying, hey, I want more of this in the world.”
  • Matt Mullenweg publicly acknowledged in an X livestream on September 26, 2024 that he knew WP Engine used the terms “for years” but that he had chosen not to take action.

Accusation Of False Statements To State Of California

The lawsuit also alleges that, following the official incorporation of the non-profit WordPress Foundation, Mullenweg publicly transferred the WordPress trademark from Automattic to the Foundation but then secretly transferred it right back to Automattic through an exclusive sublicensable license, which allowed Automattic to control the WordPress trademark. This contract between the WordPress Foundation and Automattic is the basis of the accusation that Mullenweg made false statements to the state of California.

The lawsuit asserts:

“Mullenweg failed to disclose this exclusive licensing arrangement between his nonprofit (the WordPress Foundation) and his for-profit (Automattic) in the WordPress Foundation’s tax filings with the California government, claiming that there were no ‘contracts . . . between [WordPress Foundation] and any officer, director or trustee . . . or with an entity in which any such officer, director or trustee had any financial interest’ …This statement was false, given that Mullenweg was a director of the WordPress Foundation while also having a financial interest in Automattic, the entity with which the Foundation entered into a trademark license agreement—an apparent self-dealing transaction constituting inurement under federal tax law.”

Allegation: Mullenweg Made False Statements To IRS

WP Engine also states that Mullenweg has made false statements to the IRS.

The lawsuit documents the following:

“It appears Mullenweg also did not disclose the license agreement in the WordPress Foundation’s filings with the IRS, and none of WordPress Foundation’s fourteen years of publicly available federal reporting to the IRS indicates that the WordPress Foundation was compensated in any form for granting an exclusive, fully-paid, royalty-free, perpetual, irrevocable, worldwide, sublicensable license for trademarks Defendants now claim are incredibly valuable. Indeed, while the Foundation has failed to ever disclose to the IRS its ownership of the trademarks or existence of the exclusive royalty-free license to Automattic, for the past seven years Mullenweg himself executed the IRS forms on behalf of the Foundation under penalties of perjury, an apparent false certification to the IRS and public that the Foundation’s Forms 990 were true, correct, and complete.”

Additionally, the lawsuit shows that in 2011 Mullenweg filed IRS forms claiming that the non-profit WordPress Foundation had assets of only $14,071, making no mention of assets like trademarks or other intellectual property (IP).

False Statements About Trademark Ownership

Another accusation made in the lawsuit is that Mullenweg made misleading public statements asserting that the trademarks had been transferred to the non-profit WordPress Foundation, even though he had secretly transferred the trademark back to Automattic through an exclusive sublicensable license. A sublicensable license is an agreement in which the original license holder (the licensor) grants a second party (the licensee) the right to grant a license to a third party (the sublicensee).

The lawsuit states:

“In 2010, in response to mounting public concern, the WordPress source code and trademarks were placed into the nonprofit WordPress Foundation (which Mullenweg created), with Mullenweg and Automattic making sweeping promises of open access for all: “Automattic has transferred the WordPress trademark to the WordPress Foundation, the nonprofit dedicated to promoting and ensuring access to WordPress and related open source projects in perpetuity. This means that the most central piece of WordPress’s identity, its name, is now fully independent from any company.” Mullenweg and Automattic reiterated this promise later, in even more forceful terms: ‘What’s important is that [] longer than I’m alive, longer than Automattic is alive, longer than any of us are alive, there is something that holds the WordPress code and trademark for the free access for the world.”

What Defendants’ statements and assurances did not disclose is that while they were publicly touting their purported good deed of moving this intellectual property away from a private company, and into the safe hands of a nonprofit, Defendants in fact had quietly transferred irrevocable, exclusive, royalty-free rights in the WordPress trademarks right back to Automattic that very same day in 2010. This meant that far from being “independent of any company” as Defendants had promised, control over the WordPress trademarks effectively never left Automattic’s hands.”

WP Engine writes that it relied on those promises when it was founded in 2010 as a business built around WordPress, investing hundreds of millions of dollars over the past 14 years, only to see WordPress turned against it by the “petulant whims” of Automattic’s CEO, Matt Mullenweg, to “inflict harm” on WP Engine’s business.

Further claims are made that the demand for tens of millions of dollars in trademark licensing fees (for a license WPE claims is not needed) came “without warning,” and that WP Engine was given only 48 hours to decide whether to pay or face being banned and “publicly smeared.”

Alleges Mullenweg Controls WordPress Foundation For Self-Interest

The lawsuit also claims that the recent events at WordCamp, and in the days thereafter, expose the non-profit WordPress Foundation as an entity that Mullenweg personally controls for his own and Automattic’s commercial interests.

“Mullenweg’s recent actions have exposed and highlighted his long history of obfuscating the true facts about his control and manipulation of the WordPress Foundation and wordpress.org—which he presents as a not-for-profit ‘dot-org’ enterprise, but which in fact he solely owns and directs with an iron fist to further his own commercial interests in Automattic and associated commercial businesses, to the detriment of Defendants’ competitors.”

Fallout From Lawsuit

The lawsuit was filed October 2nd, and there is already a credible report that Josepha Haden Chomphosy, WordPress Executive Director and General Manager of Automattic’s Open Source Division, has resigned and will be making an announcement today, October 3rd. The report was posted on X by WP Tavern journalist Jeff Chandler, who has been involved with WordPress since 2007.

He tweeted:

“I spoke with Josepha tonight. I can confirm that she’s no longer at Automattic.

She’s working on a statement for the community. She’s in good spirits despite the turmoil.”

Read the 98-page federal lawsuit here: (PDF)

Featured Image by Shutterstock/Billion Photos

Google Rolls Out CrUX Vis Core Web Vitals Tool via @sejournal, @martinibuster

Google rolled out a new Core Web Vitals tool called CrUX Vis that shows you hidden patterns in performance scores and offers guidance on what to improve. The data is sourced from the CrUX dataset, which is based on actual user experiences on the URLs and websites analyzed in the new tool.

CrUX

The new tool draws on the CrUX dataset, the same data that Core Web Vitals scores are based on.

Chrome’s documentation of CrUX explains:

“The Chrome User Experience Report (also known as the Chrome UX Report, or CrUX for short) is a dataset that reflects how real-world Chrome users experience popular destinations on the web.

CrUX is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented.

CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites.

The data collected by CrUX is available publicly through a number of Google tools and third-party tools and is used by Google Search to inform the page experience ranking factor.

Not all origins or pages are represented in the dataset. There are separate eligibility criteria for origins and pages, primarily that they must be publicly discoverable and there must be a large enough number of visitors in order to create a statistically significant dataset.”
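The same field data that CrUX Vis visualizes can also be pulled programmatically through the public CrUX API. The following is a minimal sketch, not part of Google’s announcement; it assumes Node 18+ and your own CrUX API key, and the origin shown is just a placeholder.

    // Minimal sketch: query the public CrUX API for origin-level p75 values.
    // CRUX_API_KEY is a placeholder; create your own key in the Google Cloud console.
    const CRUX_API_KEY = process.env.CRUX_API_KEY;
    const ENDPOINT = `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`;

    async function queryCrux(origin: string): Promise<void> {
      const response = await fetch(ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          origin,              // swap for { url: "https://example.com/page" } to get URL-level data
          formFactor: "PHONE", // PHONE, TABLET, or DESKTOP; omit to combine all devices
          metrics: [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
          ],
        }),
      });
      const { record } = await response.json();
      // Each metric includes a good/needs-improvement/poor histogram and a p75 summary.
      for (const [name, metric] of Object.entries<any>(record.metrics)) {
        console.log(name, "p75:", metric.percentiles?.p75);
      }
    }

    queryCrux("https://web.dev");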

Debugging Core Web Vitals

Improving website performance scores may not offer the direct ranking benefit that many SEOs and publishers hoped it would, but it is still as critical to get right as it has always been. High performance scores improve earnings, ad clicks, conversions, user experience, website popularity, and virtually every goal an SEO and publisher has for a site, including indirect benefits to rankings. A site can still limp along with poor performance scores, but it will not be living up to its full earnings potential.

Although tools based on Chrome’s Lighthouse offer performance snapshots and estimated scores, they can’t show how a site has been performing over time or break out important metrics to gauge whether performance is trending up or down.

CrUX Vis

Chrome’s new tool is called CrUX Vis, a data visualization tool that enables users to visualize the Chrome User Experience (CrUX) data. CrUX Vis provides an entirely new way to understand website performance and gain a big-picture view of what’s going on at the URL level and at the website level (called the origin).

The different variables for what is visualized can be changed in the section at the top of the page called Controls, covering data, device and period.

Screenshot Of CrUX Vis Controls

Segment Data By Multiple Variables

As seen in the screenshot above, the data can be segmented in three ways:

  1. Data
    Performance scores can be viewed by origin (the entire site) or by URL
  2. Device
    Data can be segmented and visualized by mobile, desktop, and a combined view.
  3. Period (Date Range)
    The tool allows data visualization across 25 overlapping time periods stretching back about six months, currently covering 3/17/2024 through 09/28/2024.

Five Views Of Metrics

There are five ways to analyze the data, covering Core Web Vitals, three categories of metrics, and all metrics combined. These views are accessible in the left-hand navigation panel of the desktop UI (user interface).

  1. Core Web Vitals
  2. Loading Performance
  3. Interactivity
  4. Visual Stability
  5. All Metrics Combined

Visualizing Data

The visualization for Core Web Vitals shows a time-based trend graph that’s colored with green, yellow, and pink. Green is good and pink is not good.

The three Core Web Vitals are represented by a circle, a square, and a triangle:

  • Circle = Largest Contentful Paint (LCP)
  • Square = Interaction to Next Paint (INP)
  • Triangle = Cumulative Layout Shift (CLS)

The desktop UI (user interface) shows the trend graph and a summary on the left and a text explanation on the right.

Screenshot Of User Interface

The graph offers a visual snapshot of which direction the core web vitals are moving and an explanation of the kind of trend for each metric.

The three kinds of trends are:

  1. Good And Improving
  2. Good And Stable
  3. Poor And Regressing

Screenshot Showing CWV Performance

A more comprehensive explanation of the data is to the right of the trend graph, with each metric identified by the circle, square, and triangle icons.

Screenshot Of Data Explanation

Loading Performance

Using the left-hand navigation to reach the Loading Performance screen brings up another trend graph with additional metrics related to how fast the site or URL loads.

It offers the following six visualizations:

  • Largest Contentful Paint (LCP)
  • First Contentful Paint (FCP)
  • Time to First Byte (TTFB)
  • Round Trip Time (RTT)
  • Navigation Types
  • Form Factors

Screenshot Of Six Visualization Choices

There’s a toggle next to each choice; clicking it displays the corresponding trend graph.

The rest of the choices show similar breakdowns for each kind of metric.

The new CrUX Vis tool should be useful to publishers and digital marketers who want an accurate measurement of website performance, visualized as a trend. It’s useful for competitor research and for website audits.

Go check it out at:

CrUX Vis

Featured Image by Shutterstock/Krakenimages.com

How To Use The New Google Ads AI Image Editor via @sejournal, @brookeosmundson

In case you missed it, Google just rolled out the AI Image Editor to support additional campaign types.

Originally launched for Performance Max campaigns, it now supports these additional types:

  • Search
  • Display
  • App
  • Demand Gen.

If you’re not familiar with the AI Image Editor tool, have no fear!

This tutorial, originally posted by Google Ads, breaks down how you can start taking advantage of the tool to save you time and money.

What Can the AI Image Editor Do?

With the help of Google AI, the image editor tool can help create new, high-quality image assets, as well as edit existing assets in your PPC campaigns.

When it comes to editing image assets, there’s a wide variety of features to choose from. These include:

  • Removing image backgrounds
  • Replacing image backgrounds
  • Adding and erasing objects within images
  • Extending images beyond their existing aspect ratio.

How to Find the Google Ads AI Image Editor Tool

There are two different ways to access the AI image editor tool in Google Ads.

On the left-hand side of the menu, click the “+” Create button, then choose “Asset”.

Alternatively, you can click “Assets” in the left-hand menu to be taken to the Asset Library.

From there, you choose “Generate an image with AI” to get started.

AI Image Generator in Google Ads.

To generate a new image, you’ll need to give a description of what you want it to look like.

In the example below, I typed “A blue couch against a neutral colored wall, with canvas photos hanging on the wall. The floor is light grey wood.”

Google Ads AI Image Generator example

In the example, it generated two images to choose from, with the option to generate more.

Once you’ve selected the image you want to use, you can move on to editing.

How to Edit Image Assets with Google AI

Now that you’ve selected your image to use, you can edit the following aspects:

  • Remove background
  • Erase object
  • Add new object
  • Replace object
  • Expand image
  • Adjust color

How to edit images in Google Ads using Google AI

Click on any of the options on the right-hand side to start editing.

In this example, say I want to remove one of the canvas photos in the background and replace it with a clock.

First, click on “Replace object”, then click and drag over the canvas photo. Then, click “Next”.

Replace function with Google AI Image Editor tool.

Next, write a description of the object you want in place of the one you selected, then click “Generate”.

Continuing in this example, I wrote, “A square clock that matches the blue color couch.”

An example of an AI replaced clock within an image asset in Google Ads.

In this example, Google AI gave me three options to choose from, with the option to generate more.

You can follow these examples with the other functionalities all within the AI image editor tool.

Summary

The Google Ads AI image editor can help save advertisers time and resources with asset creation.

It gives you creative flexibility, with built-in controls to ensure the images meet your standards and produce accurate content that complements your ads.

Now that the tool has expanded to more campaign types, it opens up new possibilities for more brands while helping them keep their brand identity consistent.

Will you be trying the AI image editor tool out anytime soon?

Google Phases Out Support For Noarchive Meta Tag via @sejournal, @MattGSouthern

In a recent update to its Search Central documentation, Google has officially relegated the ‘noarchive’ rule to a historical reference section.

The new text in Google’s help document reads:

“The noarchive rule is no longer used by Google Search to control whether a cached link is shown in search results, as the cached link feature no longer exists.”

This move follows Google’s earlier decision to remove the cache: search operator, which was reported last week.

Implications For Websites

While Google says websites don’t need to remove the meta tag, it noted that “other search engines and services may be using it.”

The ‘noarchive’ tag has been a staple of SEO practices for years, allowing websites to prevent search engines from storing cached versions of their pages.

Its relegation to a historical reference highlights the dynamic nature of Google Search.

The Gradual Phasing Out of Cached Pages

This documentation update aligns with Google’s gradual phasing out of the cached page feature.

Last week, Google removed the documentation for the cache: search operator, which had allowed users to view Google’s stored version of a webpage.

At the time, Google’s Search Liaison explained on social media that the cache feature was originally intended to help users access pages when loading was unreliable.

With improvements in web technology, Google deemed the feature no longer necessary.

As an alternative, Google has begun incorporating links to the Internet Archive’s Wayback Machine in its “About this page” feature, providing searchers with a way to view historical versions of webpages.

Controlling Archiving In The Wayback Machine

The ‘noarchive’ tag doesn’t affect the Internet Archive’s Wayback Machine.

The Wayback Machine, which Google now links to in search results pages, has its own rules for archiving and exclusion.

To prevent pages from being archived by the Wayback Machine, you have several options:

  1. Robots.txt: Adding specific directives to the robots.txt file can prevent the Wayback Machine from crawling and archiving pages. For example:

     User-agent: ia_archiver
     Disallow: /
  2. Direct Request: Website owners can contact the Internet Archive to request removal of specific pages or domains from the Wayback Machine.
  3. Password Protection: Placing content behind a login wall effectively prevents it from being archived.

Note that these methods are specific to the Wayback Machine and differ from Google’s now-deprecated ‘noarchive’ tag.

Conclusion

As search technology advances, it’s common to see legacy features retired in favor of new solutions.

It’s time to update those best practice guides to note Google’s deprecation of noarchive.


Featured Image: Tada Images/Shutterstock

Google Adds Two New Best Practices For Product Markup via @sejournal, @MattGSouthern

Google updates guidance on Product markup, advising ecommerce sites to prioritize HTML implementation and use JavaScript cautiously.

  • Google recommends including Product markup in initial HTML for best results.
  • JavaScript-generated markup can lead to less frequent and reliable crawls.
  • E-commerce sites using JavaScript for product data should ensure servers can handle increased traffic.

Google’s Search Liaison Addresses Brand Bias Concerns via @sejournal, @MattGSouthern

In a recent interview with Aleyda Solis, Google’s Search Liaison, Danny Sullivan, discussed the company’s approach to ranking smaller websites versus larger brands.

This topic has long been a point of contention, with concerns that Google’s ranking systems favor brands over independent sites.

Fairness In Search Results

Sullivan claims that Google doesn’t inherently favor brands, stating:

“Our ranking systems aren’t saying ‘are you a big brand therefore you rank’… The core of it isn’t really whether you’re big or you’re small, the core of it is whether you have the most useful, the most relevant, most satisfying information.”

The Perception Problem

Despite Google’s stance, Sullivan acknowledged the widespread perception that larger, well-established sites have an advantage in search results.

He recognized the frustration of smaller site owners who feel they cannot compete with bigger brands for visibility.

Sullivan states:

“I have looked at cases where people say you don’t like small sites, and I am not taking away from any of the real concerns because they are there… I wish they were doing better, but I can also see co-occurring in some of the same queries that I’m given other independent sites that are doing well.”

Challenges & Improvements

Sullivan admitted that Google’s systems sometimes fail to recognize high-quality content from smaller sites.

He assured that the company is actively improving this aspect of its algorithms.

Sullivan said:

“We don’t want it to be only the very big things rank well and I think in the last update we did talk about how we were taking in a lot of these concerns and trying to understand how we can do more for some of the smaller sites, the so-called independent sites.”

Advice For Smaller Sites

For independent website owners feeling discouraged, Sullivan offered some advice: focus on developing your brand.

He advised:

“If you’re a smaller site that feels like you haven’t really developed your brand, develop it. That’s not because we’re going to rank you because of your brand, but because it’s probably the things that cause people externally to recognize you as a good brand may in turn co-occur or be alongside the kinds of things that our ranking systems are kind of looking to reward.”

On advice for content creators, Sullivan adds:

“Just keep listening to your heart and doing what it is that you think is the right thing to be doing… Our ranking systems are trying to reward great content that’s made for people and if you feel like you’re doing that, then we’re going to try to catch up to you.”

Looking Ahead

Google appears to be taking these concerns seriously.

Sullivan mentioned that recent updates have aimed to do more for smaller sites. However, he maintains that Google’s goal is to show the best content regardless of brand recognition.

While challenges remain, Google’s acknowledgment of the issue and efforts to improve suggest a potential shift with future updates.

Hear Sullivan’s full statements in the video below:


Featured Image: rudall30/Shutterstock

Google’s SEO Tip To Get New Site Picked Up Faster via @sejournal, @martinibuster

Google’s John Mueller offered a useful technical SEO tip for those launching a new site: avoiding one common mistake will help your site get picked up by Google faster.

High Priority For Site Launch

Launching a website is a chance to take everything learned from previous experiences and apply them with the benefit of hindsight. There’s no better teacher for success than failure because lessons learned from mistakes are never forgotten.

Someone who recently registered a new domain started a discussion on Reddit asking for the top three considerations for launching a successful website, before anything else has been done. The person asking preemptively ruled out the obvious answer of adding the domain to Google Search Console and set the ground rule that the niche or type of business didn’t matter. What did matter is that the suggestions must be important for scaling traffic within the website’s first six months.

They asked:

“Let’s say you have a brand new domain and you’ve been given a task to build traffic in the next 6 months. The niche, business does not matter, and the basics like ‘adding domain to Google search console’ don’t matter.

Tell me what are the first 3, high-priority things you’ll implement.”

The Most Upvoted Answer

It’s somewhat surprising that the most upvoted answer, with 83 votes, was one that offered the most obvious suggestions.

The top upvoted answer was:

“Create landing pages/content for your lowest funnel keyword opportunities and work the way up.”

It’s a matter of course that the information architecture of the site should be planned out ahead of time (things like keywords, topics, key pages, a complete org-chart style map of categories with room left for expanding topical coverage, and an interlinking strategy). The upvoted answer is absolutely correct but it’s also fairly obvious.

The rest of that highly upvoted response:

“Claim brand on top social medias.

Build easiest citations and directories that I know get indexed. Plus niche relevant ones.

Start reactive digital PR as main initial link building campaign.”

The obviousness of that upvoted answer contrasts with the less obvious quality of Mueller’s response.

John Mueller Advice On SEO Preparation

John Mueller’s advice is excellent and offers an insight into a technical issue that is easy to overlook.

He wrote:

“Just throwing this out there – if you don’t have a site ready, either keep DNS disabled or put up a custom holding page. Don’t use a generic server / CMS holding page. It generally takes longer for a site that’s known to be parked / duplicate to get recognized as a normal site than it does for a site to be initially picked up.”

Keep DNS Disabled

DNS stands for Domain Name System and refers to the backend process of converting a domain name to the IP address where the actual content lives. All content exists at an IP address, not at the domain name; the domain name just points to where the content is. By keeping DNS disabled, Google never discovers the domain pointing to anything, so the site essentially doesn’t exist yet.
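As a quick illustration of that idea, the following minimal sketch (assuming Node 18+; newsite.example is a hypothetical domain) checks whether a name currently resolves to any address:

    // Minimal sketch: confirm a domain has no A records yet (i.e., DNS is effectively disabled).
    import { resolve4 } from "node:dns/promises";

    async function domainResolves(domain: string): Promise<boolean> {
      try {
        const addresses = await resolve4(domain); // IPv4 addresses the name points to
        return addresses.length > 0;
      } catch {
        // ENOTFOUND / ENODATA: the name doesn't resolve, so crawlers have nothing to fetch
        return false;
      }
    }

    console.log(await domainResolves("newsite.example"));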

Don’t Use Generic Server/CMS Holding Page

A generic server holding page is effectively the same as a parked domain: a false signal to Google that something exists at the IP address the domain name resolves to.

The effect of Mueller’s advice about disabling DNS and avoiding a generic holding page is to keep the domain name from resolving to a holding page at all (assuming the registrar’s holding page is also turned off). This keeps Google from sniffing out the domain and finding a generic “nothing here” page.

Mueller’s advice points to the technical issue that Google will recognize and index a site faster if a generic version is never activated and the domain name essentially doesn’t exist.

So if you want your website to be picked up and indexed quickly then it’s best to not use a generic domain holding page.

Read Mueller’s advice here:

Brand New Domain : What are the first 3 things you’ll do?

Featured Image by Shutterstock/Luis Molinero

Bing Expands Generative Search Capabilities For Complex Queries via @sejournal, @MattGSouthern

Microsoft has announced an expansion of Bing’s generative search capabilities.

The update focuses on handling complex, informational queries.

Bing provides examples such as “how to effectively run a one-on-one” and “how can I remove background noise from my podcast recordings.”

Searchers in the United States can access the new features by typing “Bing generative search” into the search bar. This will present a carousel of sample queries.

Screenshot from: blogs.bing.com, October 2024.

A “Deep search” button on the results page activates the generative search function for other searches.

Screenshot from: blogs.bing.com, October 2024.

Beta Release and Potential Challenges

It’s important to note that this feature is in beta.

Bing acknowledges that you may experience longer loading times as the system works to ensure accuracy and relevance.

The announcement reads:

“While we’re excited to give you this opportunity to explore generative search firsthand, this experience is still being rolled out in beta. You may notice a bit of loading time as we work to ensure generative search results are shown when we’re confident in their accuracy and relevancy, and when it makes sense for the given query. You will generally see generative search results for informational and complex queries, and it will be indicated under the search box with the sentence “Results enhanced with Bing generative search” …”

This is the waiting screen you get after clicking on “Deep search.”

Screenshot from: blogs.bing.com, October 2024.

In practice, I found the wait was long and sometimes the searches would fail before completing.

The ideal way to utilize this search experience is to click on the suggestions provided after entering “Bing generative search” into the search bar.

Potential Impact

Bing’s generative search results include citations and links to original sources.

Screenshot from: blogs.bing.com, October 2024.

This approach is intended to drive traffic to publishers, but it remains to be seen how effective this will be in practice.

Bing encourages users to provide feedback on the new feature using thumbs up/down icons or the dedicated feedback button.

Looking Ahead

This development comes as search engines increasingly use AI to enhance their capabilities.

As Bing rolls out this expanded generative search feature, remember the technology is still in beta, so performance and accuracy may vary.


Featured Image: JarTee/Shutterstock

Why Google Lighthouse Doesn’t Include INP, A Core Web Vital via @sejournal, @MattGSouthern

Google’s Lighthouse doesn’t use the Interaction to Next Paint (INP) metric in its standard tests, despite INP being one of the Core Web Vitals.

Barry Pollard, Web Performance Developer Advocate on Google Chrome, explained the reasoning behind this and offered insights into measuring INP.

Lighthouse Measures Page Loads, Not Interactions

Lighthouse measures a simple page load and captures various characteristics during that process.

It can estimate the Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) under specific load conditions, identify issues, and advise on improving these metrics.

However, INP is different as it depends on user interactions.

Pollard explained:

“The problem is that Lighthouse, again like many web perf tools, typically just loads the page and does not interact with it. No interactions = No INP to measure!”

Custom User Flows Enable INP Measurement

While Lighthouse can’t measure INP, knowing common user journeys allows you to use “user flows” to measure INP.

Pollard added:

“If you as a site-owner know your common user journeys then you can measure these in Lighthouse using ‘user flows’ which then WILL measure INP.”

These common user journeys can be automated in a continuous integration environment, allowing developers to test INP on each commit and spot potential regressions.
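As a rough illustration (not from Pollard’s post), a scripted flow might look like the following minimal sketch, assuming Lighthouse 10 or later with Puppeteer; the URL and the selector are placeholders:

    // Minimal sketch: measure INP for a scripted journey with Lighthouse user flows.
    import fs from "node:fs";
    import puppeteer from "puppeteer";
    import { startFlow } from "lighthouse";

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const flow = await startFlow(page);

    // Navigation step: captures load metrics such as LCP and CLS.
    await flow.navigate("https://example.com");

    // Timespan step: captures interaction metrics, including INP, for this journey.
    await flow.startTimespan();
    await page.click("#add-to-cart"); // hypothetical selector for a common user action
    await flow.endTimespan();

    await browser.close();

    // Write an HTML report covering both steps; run this in CI to catch regressions.
    fs.writeFileSync("flow-report.html", await flow.generateReport());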

Total Blocking Time As An INP Proxy

Although Lighthouse can’t measure INP without interactions, it can measure likely causes, particularly long, blocking JavaScript tasks.

This is where the Total Blocking Time (TBT) metric comes into play.

According to Pollard:

“TBT (Total Blocking Time) measures the sum time of all tasks greater 50ms. The theory being:

  • Lots of long, blocking tasks = high risk of INP!
  • Few long, blocking tasks = low risk of INP!”
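For reference, Lighthouse computes TBT by summing only the blocking portion of each long task (the time beyond the 50 ms threshold) between First Contentful Paint and Time to Interactive. A minimal illustration of the arithmetic:

    // Illustrative only: TBT sums the portion of each long task that exceeds 50 ms.
    function totalBlockingTime(taskDurationsMs: number[]): number {
      return taskDurationsMs
        .filter((duration) => duration > 50)
        .reduce((sum, duration) => sum + (duration - 50), 0);
    }

    // Tasks of 30 ms, 120 ms and 250 ms -> (120 - 50) + (250 - 50) = 270 ms of blocking time.
    console.log(totalBlockingTime([30, 120, 250])); // 270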

Limitations Of TBT As An INP Substitute

TBT has limitations as an INP substitute.

Pollard noted:

“If you don’t interact during long tasks, then you might not have any INP issues. Also interactions might load MORE JavaScript that is not measure by Lighthouse.”

He adds:

“So it’s a clue, but not a substitute for actually measuring INP.”

Optimizing For Lighthouse Scores vs. User Experience

Some developers optimize for Lighthouse scores without considering the user impact.

Pollard cautions against this, stating:

“A common pattern I see is to delay ALL JS until the user interacts with a page: Great for Lighthouse scores! Often terrible for users 😢:

  • Sometimes nothing loads until you move the mouse.
  • Often your first interaction gets a bigger delay.”

Pollard’s Full Post

Why This Matters

Understanding Lighthouse, INP, and TBT relationships is necessary for optimizing user experience.

Recognizing limitations in measuring INP helps avoid misguided optimizations.

Pollard’s advice for measuring INP is to focus on real user interactions to ensure performance improvements enhance UX.

As INP remains a Core Web Vital, grasping its nuances is essential for keeping it within an acceptable threshold.

Practical Applications

To monitor site performance and INP:

  1. Use Lighthouse’s “user flows” for INP measurement in common journeys.
  2. Automate user flows in CI to monitor INP and catch regressions.
  3. Use TBT as an INP proxy, but understand its limitations.
  4. Prioritize field measurements for accurate INP data (see the sketch after this list).
  5. Balance performance optimizations with UX considerations.
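On the field-measurement point, one widely used option is the open-source web-vitals library; below is a minimal sketch assuming that library and a hypothetical /analytics endpoint on your own backend:

    // Minimal sketch: report field Core Web Vitals, including INP, from real users.
    import { onCLS, onINP, onLCP } from "web-vitals";

    function sendToAnalytics(metric: { name: string; value: number; rating: string }): void {
      // sendBeacon survives page unloads better than fetch for last-moment reports.
      navigator.sendBeacon("/analytics", JSON.stringify(metric));
    }

    onINP(sendToAnalytics); // reports the page's INP as users interact
    onLCP(sendToAnalytics);
    onCLS(sendToAnalytics);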

Featured Image: Ye Liew/Shutterstock