WordPress Co-Founder Mullenweg Sparks Backlash via @sejournal, @martinibuster

Matt Mullenweg, co-founder of the WordPress content management system and CEO of Automattic, ended an otherwise successful WordCamp USA conference with a poorly received keynote that sharply criticized a prominent managed WordPress web host. The overwhelming response to his statements, and to a subsequent blog post that continued his combative remarks, was negative.

The response on social media to his speech and blog post was so immense that at one point “WordPress” was the number one trending topic on X (formerly Twitter).

This article doesn’t take sides; it only reports what was said and the general response to it.

What Happened

WordPress is built on the idea of a worldwide community working together to create an open source system for publishing ideas. It is responsible for the creation of perhaps millions of jobs, enabled countless ecommerce companies to sell online and created multiple markets and services that would not otherwise exist, all of it built on the idea of community.

WordCamp is the physical manifestation of the WordPress community, a conference organized by volunteers that enables WordPress users at every level to meet and exchange ideas. It’s ordinarily an uplifting and inspirational event which is why nobody was prepared for the bombshell that would close the week of events beginning on September 17th and ending on the 20th.

It’s not that there weren’t hints. Matt Mullenweg published a blog post on the first day of the conference that begins on a cheerful note then becomes progressively darker.

He begins by praising the community that powers WordPress and is responsible for WordCamp:

“If you ever have a chance to visit a WordCamp, I recommend it. It’s an amazing group of people brought together by this crazy idea that by working together regardless of our differences or where we came from or what school we went to we can be united by a simple yet groundbreaking idea: that software can give you more Freedom.”

Mullenweg then criticized Meta as “disingenuously” claiming to participate in the open source movement and then praised companies that give back to the open source WordPress community as part of the Five for the Future program (in which companies are encouraged to put 5% back into growing the WordPress platform).

He then openly criticized WP Engine for not contributing enough.

The amount that companies give back to WordPress was the ax that Mullenweg swung in his conference-closing keynote on Friday, in which he specifically called out WP Engine by name.

Ending A Conference On A Low Note

Mullenweg stated that there are some companies that use up resources without giving back, following up by pointing a finger at WP Engine for only sponsoring 40 hours per week of work toward improving the WordPress core.

He said:

“And there are those that treat open source simply as a resource to extract from its natural surroundings, like oil from the grounds, a finite resource, something to be extracted and used.

…a lot of this information that I’m sharing with you all has come from WP Engine employees who’ve reached out to me and talked to me about all this. So thank you all for being brave and for sharing this information that you think your company is doing something wrong.

WP Engine has good people, some of whom are listed on that page, but the company is controlled by Silver Lake, a private equity firm with 102 billion in assets under management. Silver Lake doesn’t give a dang about your open source ideals, it just wants return on capital.”

Matt Mullenweg then took the step of encouraging the WordPress community to find a different web host. He didn’t directly name WP Engine or call for a boycott, but the meaning of his words was not lost on the audience, given that he had just accused WP Engine of not giving “a dang about …open source ideals.”

He said:

“So it’s at this point that I ask everyone in the WordPress community to go vote with your wallet. Who are you giving your money to? Someone who is going to nourish the ecosystem or someone is going to frack every bit of value out of it until it withers?”

Followed a minute later with:

“Think about that next time it comes up to renew your hosting or domain. Weigh your dollars towards companies that give back more…

Those of us who are makers who curate the source need to be wary of those who take our curations and squeeze out the juice. They’re grifters who will hop on to the next fad.”

Mullenweg said that he tried to speak with them beforehand but couldn’t get through.

Shocked Audience Sides With WP Engine

Near the end of his keynote, Mullenweg’s comment about a potential ban on WP Engine at future WordCamps was met with a surprising silence from the audience, with only a few applauding.

Matt Cromwell, co-Founder of GiveWP, tweeted:

“No one I spoke with at #wcus sympathized with @photomatt’s take on @wpengine’s contributions to WP.

One thing is clear: if you want to encourage more contributions to WP don’t light contributors on fire on stage. There’s more to the story between A8C and Silver Lake than we know”

Someone else tweeted:

“I didn’t know how to feel after the public shaming of WP Engine by Matt today. I tried to see both sides….and I felt upset at WP Engine & at Matt at the same time.

After seeing what transpired the hours since on X, I believe it was wrong to call out WP Engine and believe this did more harm.”

Another wrote:

“I work very closely with @WPEngine in my day job. They’ve got some fantastic people over there, and are doing many different things to further WordPress in many different ways.

And I will continue to work with them happily.”

Mullenweg Doubles Down

Mullenweg’s keynote wasn’t the end of his criticism. On Saturday he published an article on the official WordPress.org blog that amplified the remarks from his keynote and generated a largely negative response on social media, with some on X and Facebook even calling for him to step down.

Mullenweg wrote:

“I spoke yesterday at WordCamp about how Lee Wittlinger at Silver Lake, a private equity firm with $102B assets under management, can hollow out an open source community. Today, I would like to offer a specific, technical example of how they break the trust and sanctity of our software’s promise to users to save themselves money so they can extract more profits from you.”

The rest of the blog post gets worse.

Backlash Overwhelmingly Against Mullenweg

One of the cleverest responses was published on the WPHercules website: a word-for-word copy of Mullenweg’s article with the words “WP Engine” replaced with “WordPress.com” (the managed WordPress hosting service), titled “WordPress.com Is Not WordPress.org.”

WordPress agency owner Kevin Geary wrote in a blog response:

“This wasn’t my first WordCamp, but I legitimately felt bad for first-timers. Imagine an awesome and uplifting week ending like the Payback scene in The Sum of All Fears… A little awkward.

…Matt has presumably attempted diplomacy multiple times in different ways over the years as he passed that collection plate around, but without great success when it comes to WP Engine.

The question now becomes, is public ridicule and shame a valid approach? And should this ridicule and shame get delivered in the closing talk at a WordCamp?”

A WordPress community member tweeted that the post was “ridiculous and completely unnecessary” and that WP apparently stands for “We’re petty.”

A negative tweet that is representative of the general mood:

“It’s been concerning for a few years now – at least for me. I don’t think a CEO should attack people/corps based on personal opinions, no matter if right or wrong. Not good for the WordPress ecosystem tbh. Agree?”

Another member of the WordPress community tweeted:

“When I go to an event or trade show, I do not assume the organizers support or endorse every vendor.

I also don’t expect them to criticize any vendor publicly at the event.”

Another tweet:

“Congrats on embarrassing yourself and alienating the #WordPress community to close out #WCUS!

Truly inspirational.”

And another:

“There’s been talk of the “existential” threat to WordPress’ standing for a number of years. Now it’s crystal clear that Matt is that existential threat.”

Targeting Of WP Engine Perceived As Unfair

This article isn’t taking sides; it only reports what was said and how the WordPress community responded.

Some background for those who may not be aware: WP Engine is a managed web host that voluntarily contributes to the development of WordPress core, supports WordCamp, and develops free plugins enjoyed by millions of WordPress publishers, such as Advanced Custom Fields, LocalWP, WPGraphQL, Better Search Replace, and WP Migrate Lite.

The backlash on social media is firmly against Matt Mullenweg, including in the private Facebook group Dynamic WordPress (registration required) where a discussion generated over 100 posts. One member of the group who attended WordCamp remarked on the shocked faces of WordCamp attendees and more than one person wrote “Matt needs to go!” as others sympathized with WP Engine.

Watch Mullenweg’s keynote at the 7:08:25 mark (7 hours, 8 minutes, 25 seconds):

Featured Image by Shutterstock/Krakenimages.com

Google Revamps Entire Crawler Documentation via @sejournal, @martinibuster

Google has launched a major revamp of its Crawler documentation, shrinking the main overview page and splitting the content into three new, more focused pages. Although the changelog downplays the changes, there is an entirely new section and essentially a rewrite of the entire crawler overview page. The additional pages allow Google to increase the information density of all the crawler pages and improve topical coverage.

What Changed?

Google’s documentation changelog notes two changes, but there are actually many more.

Here are some of the changes:

  • Added an updated user agent string for the GoogleProducer crawler
  • Added content encoding information
  • Added a new section about technical properties

The technical properties section contains entirely new information that didn’t previously exist. There are no changes to the crawler behavior, but by creating three topically specific pages Google is able to add more information to the crawler overview page while simultaneously making it smaller.

This is the new information about content encoding (compression):

“Google’s crawlers and fetchers support the following content encodings (compressions): gzip, deflate, and Brotli (br). The content encodings supported by each Google user agent is advertised in the Accept-Encoding header of each request they make. For example, Accept-Encoding: gzip, deflate, br.”
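The gzip and deflate encodings named in the quote are both available in Python’s standard library, so it’s easy to see why crawlers advertise them. Below is a minimal sketch (Brotli is omitted because it requires a third-party package; the sample HTML is invented for illustration) of how much a repetitive page shrinks under each encoding:

```python
import gzip
import zlib

# A sample HTML payload; real pages with repeated markup compress similarly.
html = ("<!doctype html><html><head><title>Example</title></head>"
        "<body>" + "<p>Hello, crawler.</p>" * 200 + "</body></html>").encode("utf-8")

# gzip and deflate are stdlib; Brotli (br) would need the 'brotli' package.
gzipped = gzip.compress(html)
deflated = zlib.compress(html)

print(f"original: {len(html)} bytes")
print(f"gzip:     {len(gzipped)} bytes")
print(f"deflate:  {len(deflated)} bytes")

# A server and crawler must round-trip to the exact original bytes.
assert gzip.decompress(gzipped) == html
assert zlib.decompress(deflated) == html
```

The round-trip assertions illustrate the contract behind the Accept-Encoding header: compression only saves bandwidth, never changes the content the crawler ultimately sees.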

There is additional information about crawling over HTTP/1.1 and HTTP/2, plus a statement that Google’s goal is to crawl as many pages as possible without impacting the website’s server.

What Is The Goal Of The Revamp?

The change to the documentation was made because the overview page had become large, and additional crawler information would have made it even larger. A decision was made to break the page into three subtopics so that the specific crawler content could continue to grow, making room for more general information on the overview page. Spinning off subtopics into their own pages is a brilliant solution to the problem of how best to serve users.

This is how the documentation changelog explains the change:

“The documentation grew very long which limited our ability to extend the content about our crawlers and user-triggered fetchers.

…Reorganized the documentation for Google’s crawlers and user-triggered fetchers. We also added explicit notes about what product each crawler affects, and added a robots.txt snippet for each crawler to demonstrate how to use the user agent tokens. There were no meaningful changes to the content otherwise.”

The changelog downplays the changes by describing them as a reorganization, but the crawler overview is substantially rewritten, in addition to the creation of three brand-new pages.

While the content remains substantially the same, the division of it into sub-topics makes it easier for Google to add more content to the new pages without continuing to grow the original page. The original page, called Overview of Google crawlers and fetchers (user agents), is now truly an overview with more granular content moved to standalone pages.

Google published three new pages:

  1. Common crawlers
  2. Special-case crawlers
  3. User-triggered fetchers

1. Common Crawlers

As the title indicates, these are common crawlers, some of which are associated with GoogleBot, including the Google-InspectionTool, which uses the GoogleBot user agent. All of the bots listed on this page obey the robots.txt rules.

These are the documented Google crawlers:

  • Googlebot
  • Googlebot Image
  • Googlebot Video
  • Googlebot News
  • Google StoreBot
  • Google-InspectionTool
  • GoogleOther
  • GoogleOther-Image
  • GoogleOther-Video
  • Google-CloudVertexBot
  • Google-Extended

2. Special-Case Crawlers

These are crawlers associated with specific products. They crawl by agreement with users of those products and operate from IP addresses that are distinct from the GoogleBot crawler IP addresses.

List of Special-Case Crawlers:

  • AdSense
    User Agent for Robots.txt: Mediapartners-Google
  • AdsBot
    User Agent for Robots.txt: AdsBot-Google
  • AdsBot Mobile Web
    User Agent for Robots.txt: AdsBot-Google-Mobile
  • APIs-Google
    User Agent for Robots.txt: APIs-Google
  • Google-Safety
    User Agent for Robots.txt: Google-Safety

3. User-Triggered Fetchers

The User-triggered Fetchers page covers bots that are activated by user request, explained like this:

“User-triggered fetchers are initiated by users to perform a fetching function within a Google product. For example, Google Site Verifier acts on a user’s request, or a site hosted on Google Cloud (GCP) has a feature that allows the site’s users to retrieve an external RSS feed. Because the fetch was requested by a user, these fetchers generally ignore robots.txt rules. The general technical properties of Google’s crawlers also apply to the user-triggered fetchers.”

The documentation covers the following bots:

  • Feedfetcher
  • Google Publisher Center
  • Google Read Aloud
  • Google Site Verifier

Takeaway:

Google’s crawler overview page had become overly comprehensive and possibly less useful because people don’t always need a comprehensive page; they’re often just interested in specific information. The overview page is now less specific but easier to understand. It serves as an entry point from which users can drill down to more specific subtopics related to the three kinds of crawlers.

This change offers insight into how to freshen up a page that might be underperforming because it has become too comprehensive. Breaking a comprehensive page into standalone pages allows the subtopics to address specific user needs and possibly makes them more useful should they rank in the search results.

I would not say that the change reflects anything in Google’s algorithm; it only reflects how Google updated its documentation to make it more useful and set it up for adding even more information.

Read Google’s New Documentation

Overview of Google crawlers and fetchers (user agents)

List of Google’s common crawlers

List of Google’s special-case crawlers

List of Google user-triggered fetchers

Featured Image by Shutterstock/Cast Of Thousands

HubSpot Rolls Out AI-Powered Marketing Tools via @sejournal, @MattGSouthern

HubSpot announced a push into AI this week at its annual Inbound marketing conference, launching “Breeze.”

Breeze is an artificial intelligence layer integrated across the company’s marketing, sales, and customer service software.

According to HubSpot, the goal is to provide marketers with easier, faster, and more unified solutions as digital channels become oversaturated.

Karen Ng, VP of Product at HubSpot, tells Search Engine Journal in an interview:

“We’re trying to create really powerful tools for marketers to rise above the noise that’s happening now with a lot of this AI-generated content. We might help you generate titles or a blog content…but we do expect kind of a human there to be a co-assist in that.”

Breeze AI Covers Copilot, Workflow Agents, Data Enrichment

The Breeze layer includes three main components.

Breeze Copilot

An AI assistant that provides personalized recommendations and suggestions based on data in HubSpot’s CRM.

Ng explained:

“It’s a chat-based AI companion that assists with tasks everywhere – in HubSpot, the browser, and mobile.”

Breeze Agents

A set of four agents that can automate entire workflows like content generation, social media campaigns, prospecting, and customer support without human input.

Ng added the following context:

“Agents allow you to automate a lot of those workflows. But it’s still, you know, we might generate for you a content backlog. But taking a look at that content backlog, and knowing what you publish is still a really important key of it right now.”

Breeze Intelligence

Combines HubSpot customer data with third-party sources to build richer profiles.

Ng stated:

“It’s really important that we’re bringing together data that can be trusted. We know your AI is really only as good as the data that it’s actually trained on.”

Addressing AI Content Quality

While prioritizing AI-driven productivity, Ng acknowledged the need for human oversight of AI content:

“We really do need eyes on it still…We think of that content generation as still human-assisted.”

Marketing Hub Updates

Beyond Breeze, HubSpot is updating Marketing Hub with tools like:

  • Content Remix to repurpose videos into clips, audio, blogs, and more.
  • AI video creation via integration with HeyGen
  • YouTube and Instagram Reels publishing
  • Improved marketing analytics and attribution

The announcements signal HubSpot’s AI-driven vision for unifying customer data.

But as Ng tells us, “We definitely think a lot about the data sources…and then also understand your business.”

HubSpot’s updates are rolling out now, with some in public beta.


Featured Image: Poetra.RH/Shutterstock

Google Brings AI Ad Image Editing To Search, Display, & More via @sejournal, @MattGSouthern

Google expands AI-powered ad image editing to more campaigns, enhancing creative capabilities for advertisers across its platform.

  • AI-powered image editing is expanding to search, Display, App, and Demand Gen campaigns.
  • Google’s AI campaign builder is expanding beyond English-speaking markets.
  • Google is balancing AI automation with more granular advertiser controls.

Google Ads Expands AI Campaign Tools To More Languages via @sejournal, @MattGSouthern

Google expands AI search campaign tools to new languages, adds creative capabilities and advertiser controls to optimize performance.

  • Google is rolling out its AI search campaign building tool to German, French, and Spanish.
  • Advertisers get more AI-powered creative tools and customization options across campaigns.
  • New advertiser controls include negative keywords for Performance Max and omnichannel bidding.

Patchstack WordPress Security Secures $5M, Adds Yoast Co-Founder to Board via @sejournal, @martinibuster

WordPress security company Patchstack announced a $5 million USD funding round and the addition of Joost de Valk, co-founder of Yoast SEO, to its board. The funding will accelerate Patchstack’s development toward becoming the fastest full-cycle security solution.

Patchstack – Trusted Security Partner

Patchstack, based in Estonia, is a fast-growing WordPress security company trusted by major web hosts, plugins, and websites around the world. It recently released a free security tool that helps open-source software vendors comply with the upcoming European Cyber Resilience Act.

Patchstack is a highly regarded WordPress security company that is trusted by customers such as GoDaddy, Digital Ocean, Plesk, and cPanel and is a security partner with over 300 WordPress plugins such as Elementor, WP Rocket, WP Bakery Page Builder and Slider Revolution.

Patchstack provides security scans for over five million websites every day and offers a free plugin for vulnerability detection as well as low-cost real-time protection (starting at $5 per website per month).

The announcement by Patchstack offers details of the $5 million funding round:

“Estonian cybersecurity startup Patchstack who in 2022 received €2.7M R&D grant from European Innovation Council announced an additional 5 million USD funding round to further their mission of covering the entire lifecycle of open-source security to provide the fastest mitigation to the emerging security threats.

Patchstack’s Series A round was led by Karma Ventures, an early-stage venture capital fund focusing on deep-tech software companies, with participation from G+D Ventures, the German TrustTech investor, and Emilia Capital, the investment firm of Yoast founders Marieke van de Rakt and Joost de Valk.”

Joost de Valk commented to Search Engine Journal:

“Patchstack is really an amazing company and product. I recently joined their board.”

He’s right; Patchstack currently prevents millions of vulnerability attacks and should be on the shortlist of security solutions for every WordPress website. Although WordPress security is not generally considered an SEO-related concern, it should be an important factor in every SEO audit, because all it takes is one major vulnerability event to lose the trust of customers and site visitors, which can impact earnings and rankings.

Featured Image by Shutterstock/Krakenimages.com

Google On Why Simple Factors Aren’t Ranking Signals via @sejournal, @martinibuster

Google’s John Mueller affirmed in a LinkedIn post that two site characteristics that could be perceived as indicative of site quality aren’t ranking factors, suggesting that other perceived indicators of quality may not be either.

Site Characteristics And Ranking Factors

John Mueller posted something interesting on LinkedIn because it offers insight into how an attribute of quality sometimes isn’t enough to be an actual ranking factor. His post also encourages a more realistic consideration of what should be considered a signal of quality and what is simply a characteristic of a site.

The two characteristics of site quality that Mueller discussed are valid HTML and typos (typographical errors, commonly in reference to spelling errors). His post was inspired by an analysis of the home pages of 200 of the most popular websites, which found that only 0.5% had valid HTML. That means that out of the 200 most popular sites, only one home page was written with valid HTML.

John Mueller said that a ranking factor like valid HTML would be a low bar, presumably because spammers can easily create web page templates that use valid HTML. Mueller also made the same observation about typos.

Valid HTML

Valid HTML means that the code underlying a web page follows all of the rules for how HTML should be used. What constitutes valid HTML is defined by the W3C (World Wide Web Consortium), the international standards making body for the web. HTML, CSS, and Web Accessibility are examples of standards that the W3C creates. The validity of HTML can be tested at the W3C Markup Validation Service which is available at validator.w3.org.
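Full W3C validation covers far more than structure (attribute rules, content models, obsolete elements), but a toy tag-balance checker built on Python’s stdlib `html.parser` gives a feel for the kind of structural rule the validator enforces. The class name and the void-element list are illustrative only; this is nowhere near a real validator:

```python
from html.parser import HTMLParser

# HTML5 void elements never take a closing tag.
VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Flags unclosed or mismatched tags -- a tiny subset of what the
    W3C validator checks, for illustration only."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID:
            return
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"mismatched </{tag}>")

    def check(self, html):
        self.feed(html)
        self.close()
        self.errors.extend(f"unclosed <{t}>" for t in self.stack)
        return self.errors

print(TagBalanceChecker().check("<div><p>ok</p></div>"))   # []
print(TagBalanceChecker().check("<div><p>never closed"))   # reports unclosed <div> and <p>
```

Note that browsers (and Google’s crawlers) happily recover from errors like these, which is part of why invalid HTML is so widespread on even the most popular sites.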

Is Valid HTML A Ranking Factor?

The post begins by stating that a commonly asked question is whether valid HTML is a ranking factor or some other kind of factor for Google Search. It’s a valid question because valid HTML could be seen as a characteristic of quality.

He wrote:

“Every now and then, we get questions about whether “valid HTML” is a ranking factor, or a requirement for Google Search.

Jens has done regular analysis of the validity of the top websites’ homepages, and the results are sobering.”

The phrase “the results are sobering” means that the finding that most home pages use invalid HTML is surprising and possibly cause for consideration.

Given that virtually no content management system generates valid HTML, I’m somewhat surprised that even one site out of 200 used valid HTML. I would have expected a number closer to zero.

Mueller goes on to note that valid HTML is a low bar for a ranking factor:

“…this is imo a pretty low bar. It’s a bit like saying professional writers produce content free of typos – that seems reasonable, right? Google also doesn’t use typos as a ranking factor, but imagine you ship multiple typos on your homepage? Eww.

And, it’s trivial to validate the HTML that a site produces. It’s trivial to monitor the validity of important pages – like your homepage.”

Ease Of Achieving Characteristic Of Quality

There have been many false signals of quality promoted and then abandoned by SEOs, the most recent being “authorship” and “content reviews,” which are supposed to show that an authoritative author wrote an article and that the article was checked by someone who is authoritative. People did things like invent authors with AI-generated images linked to fake LinkedIn profiles, in the naïve belief that adding an author to an article would trick Google into awarding ranking factor points (or whatever, lol).

The authorship signal turned out to be a misinterpretation of Google’s Search Quality Raters Guidelines and a big waste of a lot of people’s time. If SEOs had considered how easy it was to create an “authorship” signal it would have been apparent to more people that it was a trivial thing to fake.

So one takeaway from Mueller’s post is this: if there’s a question about whether something is a ranking factor, first check whether Google explicitly says it is, and if not, consider whether literally any spammer could achieve that “something” an SEO claims is a ranking factor. If it’s trivial to achieve, there’s a high likelihood it’s not a ranking factor.

There Is Still Value To Be Had From Non-Ranking Factors

The fact that something is relatively easy to fake doesn’t mean that web publishers and site owners should stop doing it. If something is good for users and helps to build trust, then it’s likely a good idea to keep doing it. Just because something is not a ranking factor doesn’t invalidate the practice. It’s always a good practice in the long run to keep doing activities that build trust in the business or the content, regardless of whether they count as ranking factors. Google tries to pick up on the signals that users or other websites give in order to determine whether a website is high quality, useful, and helpful, so anything that generates trust and satisfaction is likely a good thing.

Read John Mueller’s post on LinkedIn here.

Featured Image by Shutterstock/stockfour

OpenAI Claims New “o1” Model Can Reason Like A Human via @sejournal, @MattGSouthern

OpenAI has unveiled its latest language model, “o1,” touting advancements in complex reasoning capabilities.

In an announcement, the company claimed its new o1 model can match human performance on math, programming, and scientific knowledge tests.

However, the true impact remains speculative.

Extraordinary Claims

According to OpenAI, o1 can score in the 89th percentile on competitive programming challenges hosted by Codeforces.

The company insists its model can perform at a level that would place it among the top 500 students nationally on the elite American Invitational Mathematics Examination (AIME).

Further, OpenAI states that o1 exceeds the average performance of human subject matter experts holding PhD credentials on a combined physics, chemistry, and biology benchmark exam.

These are extraordinary claims, and it’s important to remain skeptical until we see open scrutiny and real-world testing.

Reinforcement Learning

The purported breakthrough is o1’s reinforcement learning process, designed to teach the model to break down complex problems using an approach called the “chain of thought.”

By simulating human-like step-by-step logic, correcting mistakes, and adjusting strategies before outputting a final answer, OpenAI contends that o1 has developed superior reasoning skills compared to standard language models.

Implications

It’s unclear how o1’s claimed reasoning could enhance understanding of queries—or generation of responses—across math, coding, science, and other technical topics.

From an SEO perspective, anything that improves content interpretation and the ability to answer queries directly could be impactful. However, it’s wise to be cautious until we see objective third-party testing.

OpenAI must move beyond benchmark browbeating and provide objective, reproducible evidence to support its claims. Adding o1’s capabilities to ChatGPT in planned real-world pilots should help showcase realistic use cases.


Featured Image: JarTee/Shutterstock

Google Expands YouTube First Position Ad Availability via @sejournal, @MattGSouthern

Google has announced the expansion of its First Position ad offering on YouTube, making it available across all content types through Display & Video 360.

This marks a change from the previous limitation of First Position ads to YouTube Select inventory.

What Are First Position Ads?

First Position ads are in-stream advertisements that appear at the beginning of YouTube videos, ensuring they are the first ad viewers see.

This placement is designed to capture audience attention when engagement is at its highest.

Key Changes to First Position:

  • Availability: Now accessible for all YouTube content, not just YouTube Select inventory
  • Pricing: Shifted from a fixed-rate CPM to a dynamic pricing model through Display & Video 360
  • Targeting: Allows advertisers to reach target audiences across a broader range of content

This feature is now available in all markets where First Position ads were previously offered.

Ad Formats & Placement

First Position targeting is available for both in-stream and Shorts ad formats, expanding the potential reach of these ads.

However, it’s worth noting that in-stream line items targeting the first position are not guaranteed to serve in the first position of a user’s session on YouTube TV.

This may affect strategies for connected TV advertising.

Instant Reserve & Implementation

Advertisers can use Instant Reserve, a Display & Video 360 feature, to get a quote and reserve YouTube inventory immediately without negotiations.

This aligns with the new dynamic pricing model, offering more flexibility in ad purchasing.

For implementation, advertisers should note that YouTube videos used in First Position ads must be set to “Public” or “Unlisted” visibility. Private videos cannot be used in these campaigns.

Reporting & Measurement

To assess the performance of First Position ads, advertisers can utilize Basic report templates and YouTube-specific reports available in Display & Video 360.

These tools allow for detailed analysis of ad performance across various metrics.

Case Studies Provided

Google cited two examples in its announcement:

  1. Booking.com reportedly saw a 21% relative lift in ad recall during a holiday campaign.
  2. IHG Hotels & Resorts claimed to achieve twice the YouTube benchmark for ad recall and brand awareness when combining First Position ads with Content Takeovers.

Context

The move may affect how brands allocate their video advertising budgets and could impact competition for prime ad placements on YouTube.

Here are the potential implications of these changes for advertisers:

  • Flexible Budgeting: Dynamic pricing allows for more adaptable spending strategies.
  • Expanded Reach: First Position ads are now available across all YouTube content, not just Select inventory.
  • Increased Competition: Wider availability may drive up costs for premium placements.
  • Strategic Planning: Advertisers may need to be more selective about using First Position ads.

Advertisers interested in leveraging First Position ads should consult Google’s Help Center for information on Instant Reserve in Display & Video 360 and Reservations in Google Ads to understand the implementation process and best practices.


Featured Image: Rokas Tenys/Shutterstock