Google Says AI Creative Should Help Brands Differentiate, Not Blend In via @sejournal, @brookeosmundson

One of the more interesting moments in Google’s latest Ads Decoded podcast centered around a growing advertiser concern about AI-generated creative.

As more brands gain access to the same AI tools, will advertising eventually start feeling repetitive?

Ginny Marvin, Ads Liaison at Google, raised that question directly during the discussion, asking whether the industry was heading toward a “sea of sameness.”

The response from Charles Boyd, Group Product Manager for Creative at Google, offered a clearer look at how Google is positioning AI creative tools inside Google Ads and where the company believes advertiser differentiation still comes from.

Google Says AI Creative Should Expand Creative Variation

Throughout the episode, Google repeatedly framed AI creative tools as systems designed to expand variation, accelerate testing, and adapt messaging across different audiences and placements.

At the same time, Google positioned these tools as dependent on advertiser strategy and direction.

Boyd described the value of generative tools as “the ability to quickly create different creative styles and iterations at scale.”

A large part of the industry conversation around AI advertising has focused on concerns about generic outputs and loss of differentiation.

Google appears to be taking the opposite position.

The company seems to believe advertisers with a strong understanding of their audience, messaging, and brand voice will be able to scale those strengths more efficiently through AI-assisted creative workflows.

Rather than treating AI as a threat to differentiation, Google appears to be positioning it as infrastructure that helps advertisers produce more combinations, more testing opportunities, and more audience-specific variations.

That distinction gives more context to how Google is approaching AI creative tools.

Google Wants Advertisers Steering AI Creative

Another phrase Google returned to multiple times during the episode was “advertiser-in-the-loop.”

The broader point was that automation should still include advertiser guidance and oversight.

Google highlighted several tools designed to give advertisers more control over how AI-generated assets are created:

  • Text guidelines
  • Brand guidance
  • AI briefs
  • Asset Studio
  • Video enhancement previews
  • Text disclaimers
  • Final URL expansion controls

Boyd explained that advertisers can now provide specific text instructions directly inside campaigns.

For example, a brand could tell Google not to describe products using certain language or positioning:

Google literally will check every asset that gets created against each one of the guidelines that you provide.

According to Google, advertisers can specify up to 40 text guidelines within a campaign.

That is a noticeable shift from earlier automation features, which often felt far more rigid from a brand and messaging perspective.

The addition of text guidelines, AI briefs, and expanded creative controls suggests Google is trying to give advertisers more influence over how AI-generated assets are created and adapted across campaigns.

Google Is Increasingly Focused On Creative Breadth

Another notable takeaway from the episode was how often Google discussed creative diversity and variation.

The conversation repeatedly touched on:

  • Multiple responsive search ads
  • Different landing pages
  • Different aspect ratios
  • Audience-specific messaging
  • Diverse asset combinations
  • Creative tailored to different stages of the customer journey

At one point, Boyd encouraged advertisers to consider having multiple responsive search ads with different landing pages inside the same ad group.

That guidance would have sounded unusual to many PPC practitioners several years ago.

Google’s reasoning is that systems like AI Max can dynamically combine the following to better align messaging with different user journeys:

  • Headlines
  • Descriptions
  • Landing pages
  • Audience intent signals
  • Search context
  • Asset combinations

This feels connected to a larger shift happening across Google Ads.

Campaign optimization increasingly revolves around combinations of signals instead of isolated assets or keywords.

Sarah Hathiramani, Director of Product Management for YouTube Ads, reinforced this idea when discussing Demand Gen and YouTube creative:

There may be different audiences that you’re going after, and those audiences are going to resonate with very different creative messages.

That point becomes more important as Google’s systems increasingly personalize creative combinations dynamically.

Veo Signals Where Google Thinks Creative Production Is Going

The episode also offered another look at how Google sees AI changing creative production itself.

Hathiramani discussed Veo integrations inside Google Ads and Asset Studio.

According to Google, advertisers can upload up to three images and generate multiple short-form video variations automatically.

Google positioned this as a way to reduce production barriers for advertisers that may not have dedicated video resources:

Instead of asking every advertiser to become an in-house video production company, we’re able to use Veo to leverage automation while maintaining transparency and control.

That could be particularly meaningful for smaller advertisers or brands that historically relied heavily on static image creative.

It also reflects a larger trend happening across Google Ads.

The company increasingly wants advertisers participating across more inventory types, placements, formats, and surfaces.

AI-generated creative helps reduce some of the operational burden required to do that.

At the same time, Google repeatedly stressed that advertisers still need strong inputs.

Marvin specifically noted that brands with a clear voice and point of view are likely to benefit most from these tools.

What This Means For Advertisers

One of the more noticeable themes throughout the episode was how often Google emphasized creative breadth.

Multiple landing pages, multiple responsive search ads, audience-specific messaging, different aspect ratios, and structured asset testing all came up repeatedly across Search, Performance Max, Demand Gen, and YouTube.

That guidance reflects how Google’s systems increasingly optimize around combinations of assets, intent signals, placements, and audiences rather than isolated ads or keywords.

For advertisers, that may require a shift away from building a small set of highly controlled assets toward developing broader creative coverage across different audience stages and formats.

Looking Ahead

This episode offered a clearer look at how Google is talking about AI creative internally ahead of Google Marketing Live.

The discussion repeatedly centered around advertiser controls, creative testing, audience-specific messaging, and broader asset variation across campaigns.

That may be one of the more important signals for advertisers paying attention to where Google Ads is heading next.

Google appears to be encouraging advertisers to build more adaptable creative systems rather than relying on a small set of static assets.

Featured image: Google, YouTube

Bing Team Describes How Grounding Differs From Search Indexing via @sejournal, @MattGSouthern

Microsoft’s Bing team published a framework describing how indexing requirements change when the goal is to ground AI answers rather than to rank search results.

The post identifies five measurement areas where the company says the two systems diverge. It also names “abstention” as a design choice for AI-powered retrieval.

What Microsoft Described

The post argues that traditional search indexing and grounding indexing share the same foundation but serve different goals.

Traditional search, the team writes, asks “which pages should a user visit?” The grounding layer asks “what information can an AI system responsibly use to construct a response?”

Microsoft identifies five categories where the measurement requirements differ.

On factual fidelity, the team notes that some ranking mismatch is tolerable in traditional search because a user can click through and evaluate. In grounding, the post describes breaking content into retrievable chunks as a process that “can distort page substance in ways that never appear in any ranking signal.”

For source attribution quality, the Bing team calls attribution helpful in traditional search but “a core signal” in grounding. Not all indexed content matters equally as evidence for an AI answer, the team adds.

On freshness, Microsoft notes a clear difference in cost. Stale content in search is a ranking problem. In grounding, the post says, “a stale fact produces a misleading response.”

For coverage of high-value facts, the post explains that a missed document in search is recoverable because alternative results exist. In grounding, the index must ensure “the specific facts and sources that people are likely to ask about are actually available and groundable.”

On contradictions, traditional search can surface one source above another and let the user decide. A grounding system can’t do that. “An AI system that silently arbitrates between contradictory sources is one that may confidently assert the wrong thing,” the team says.

Abstention And Iterative Retrieval

The post also covers two design differences between the systems.

Microsoft calls declining to answer “abstention.” For a grounding system, that’s a valid outcome when support is missing, stale, or conflicting. Traditional search doesn’t need to make this judgment because it presents options for a human to evaluate.
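To make that design choice concrete, here is a minimal TypeScript sketch of abstention logic. The Evidence shape, the freshness threshold, and the decision rule are all hypothetical illustrations of the criteria the post names (missing, stale, or conflicting support), not Microsoft's implementation:

```typescript
// Hypothetical sketch of abstention. The Evidence shape, freshness
// threshold, and decision rule are illustrative only; Microsoft has
// not published implementation details.
interface Evidence {
  claim: string;
  source: string;
  retrievedAt: Date;
  supports: boolean; // whether this source supports the candidate answer
}

const MAX_AGE_DAYS = 30; // hypothetical freshness cutoff

function groundedAnswer(candidate: string, evidence: Evidence[]): string | null {
  const fresh = evidence.filter(
    (e) => (Date.now() - e.retrievedAt.getTime()) / 86_400_000 <= MAX_AGE_DAYS,
  );
  const supporting = fresh.filter((e) => e.supports).length;
  const contradicting = fresh.filter((e) => !e.supports).length;

  // Abstain when support is missing, stale, or conflicting. Returning
  // null is a valid outcome for grounding, unlike ranked search, which
  // can always present options for a human to evaluate.
  if (supporting === 0 || contradicting > 0) return null;
  return candidate;
}
```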

Iterative retrieval is the other difference. Traditional search is typically a single interaction where a query goes in and ranked results come out. Grounding systems may need to ask follow-up questions, refine retrieval based on intermediate results, and combine evidence from multiple sources.

Errors in early retrieval steps “compound through subsequent reasoning steps in ways that no human reviewer would catch in real time,” the post adds.

Context

This blog post comes after a series of moves by Microsoft to build out its grounding tooling and give publishers visibility into it.

In February, Microsoft launched the AI Performance dashboard in Bing Webmaster Tools, giving sites their first page-level citation data for AI-generated answers. The company rewrote the Bing Webmaster Guidelines in March to include GEO as a named optimization category and added grounding query-to-page mapping to the dashboard the same month. At SEO Week in April, Madhavan previewed four additional features for the dashboard, including Citation Share and grounding query intent labels.

This post is more conceptual than those prior announcements. It doesn’t introduce new tools or features. Instead, it lays out the engineering principles the company describes as guiding its index evolution.

Why This Matters

This framework clarifies what Microsoft says its systems need from the index for AI answers.

Microsoft states grounding relies on the same crawling, quality, and web understanding as search, but grounded answers require accurate, fresh, attributable, and consistent evidence. Stale facts, weak sources, and contradictions pose risks when content is used for answers.

Looking Ahead

The post offers insight into why some content is easier for AI to cite. If the Citation Share and intent-label features previewed at SEO Week ship, they could help test whether the measurement priorities described here show up in actual publisher data.


Featured Image: TY Lim/Shutterstock

Google Adds More Links & Link Context To AI Search via @sejournal, @MattGSouthern

Google is rolling out five updates to how links appear in its generative AI Search experiences, including AI Mode and AI Overviews. The changes add subscription labels and inline links within responses, among other features.

Here’s an example of how the changes will appear:

Image Credit: Google

Hema Budaraju, VP of Product Management, wrote about the updates in a blog post.

What’s New

The updates cover five areas of link display across Google’s generative AI Search features.

Subscription Highlighting In AI Mode & AI Overviews

Google is now labeling links from users’ news subscriptions in AI Mode and AI Overviews.

Google announced subscription highlighting in December for the Gemini app but didn’t provide a timeline for AI Mode or AI Overviews. Today’s announcement confirms the expansion to both surfaces.

Google said that in early testing, people were “significantly more likely” to click links labeled as their subscriptions. The company didn’t share specific numbers.

Publishers who want to help subscribers connect their subscriptions with Google can find details on Google’s developer website.

Topic Suggestions After AI Responses

Searchers will start to see suggestions for related content at the end of many AI responses. These link to articles or analyses on different aspects of the topic.

Discussion and Social Media Previews

Google’s AI responses will include previews of perspectives from public online discussions, social media, and other firsthand sources.

The company is also adding context to these links, such as creator names and community names.

See a provided example:

Image Credit: Google

More Inline Links Within Responses

Users will start to see more links directly within AI response text, positioned next to the relevant passage. Google didn’t quantify how many more inline links users will see or where the change will appear.

See a provided example:

Image Credit: Google

Link Hover Previews on Desktop

On desktop, hovering over an inline link in Google’s AI experiences will show a preview of the linked website. The preview includes the site name and page title. Google noted that people hesitate to click links when they don’t know where they lead.

See a provided example:

Image Credit: Google

Why This Matters

These updates show Google trying to make links more visible in AI Search at a time when publishers are closely monitoring referral traffic.

More inline links, hover previews, discussion cards, and subscription labels all point in the same direction. Google wants AI responses to feel less like dead ends and more like starting points for deeper exploration of the web.

That matters because the debate around AI Search has centered on whether AI answers reduce the need to click. Google is now adding more ways to click, but it isn’t providing the data publishers need to judge the impact.

For websites, that leaves the update in a familiar place. The link treatment may improve visibility, but the traffic impact will still need to be measured in analytics after the rollout reaches its audience.

Looking Ahead

The next question is how consistently these link treatments appear across AI Search surfaces.

Google didn’t provide rollout details for most of the updates, including geography, language, eligibility, or timing. That makes early testing difficult to interpret until we can see where the features appear and which types of queries trigger them.


Featured Image: Danuta Hyniewska/Shutterstock

WordPress Loses Marketshare. Is Astro Eroding Their User Base? via @sejournal, @martinibuster

A discussion on Twitter about the many people posting that they’ve left WordPress for Astro went modestly viral, with longtime WordPress supporters explaining why they ditched WordPress for Astro. Statistics show that WordPress is losing users while Astro’s weekly downloads have grown roughly 80% year over year, indicating that the shift toward Astro is more than a passing trend.

WordPress Marketshare Steadily Declining

An underreported statistic about WordPress is that it peaked in mid-2025 with a market share of 43.6% and has been on a steady decline ever since, currently sitting at 42.2% (according to W3Techs), a drop of 1.4 percentage points. WordPress is losing market share; this is a fact.

An argument could be made that the WordPress market share percentage is inflated because a significant number of WordPress websites are abandoned or spam. The official WordPress statistics show that 10.56% of WordPress websites haven’t been updated since 2022. That means that, if you don’t count abandoned websites, the actual WordPress market share is lower than the reported 42.2%.

Astro Downloaded 2.5 Million Times Per Week

Astro is a static site generator and web framework. It’s not a content management system; it is software that generates websites from content. Sites created with Astro are called static because they are classic HTML web pages that are not dynamically generated from a database the way PHP-based sites built with WordPress are. The consequence is that Astro-based sites download faster and are less complicated.
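As a toy illustration of that build-time model (this is not Astro’s API), here is what static generation looks like in TypeScript: content goes in once at build time, plain HTML files come out, and nothing is generated per request.

```typescript
// Toy build-time generation, assuming Node. This is not Astro's API;
// it only illustrates the static model: content in, HTML files out,
// nothing generated per request.
import { writeFileSync } from "node:fs";

const posts = [
  { slug: "hello-world", title: "Hello World", body: "First post." },
  { slug: "why-static", title: "Why Static?", body: "No database needed." },
];

for (const post of posts) {
  // Each page is rendered once at build time and then served as-is.
  writeFileSync(
    `${post.slug}.html`,
    `<html><head><title>${post.title}</title></head>` +
      `<body><h1>${post.title}</h1><p>${post.body}</p></body></html>`,
  );
}
```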

Astro has been steadily gaining popularity since it debuted in 2021. The idea that Astro’s popularity is just hype from a few loud voices is demonstrably false. Astro’s popularity is real. It is currently being downloaded at a rate of 2.5 million downloads per week, roughly an 80% increase from 2025, when it was being downloaded 1.4 million times per week.

Screenshot Of The Astro Download Statistics

Screenshot showing Astro's weekly download rate has grown to 2.5 million per week.

Joost de Valk, founder of the Yoast SEO plugin, may have identified one of the reasons why so many people are turning to Astro. He recently wrote of an epiphany in which he realized he didn’t need a content management system (CMS); he just needed a website.

He wrote:

“For twenty years, ‘I want a website’ meant ‘I need a CMS.’ WordPress, Joomla, Drupal: the conversation was always about which one. That framing is outdated. People never wanted a CMS. They want a website.”

Some Doubt The Astro Reality

Rayhan Arif, a WordPress business person, recently expressed his incredulity over the many tweets and blog posts by people sharing their experience leaving WordPress for Astro.

He tweeted:

“Every ‘leaving WordPress’ post I come across seems to point to Astro. But when I dig a little deeper, I often don’t see any prior conversations or context showing those people were actually using WordPress in the first place.

To me, it starts to feel less organic and more like a coordinated narrative almost like a well-planned negative campaign against WordPress. It even makes me wonder whether some companies might be incentivizing this kind of messaging for their own business gains.

I could be wrong, but that’s honestly the impression I’ve been getting.”

That tweet seemed to imply that Cloudflare, which acquired Astro at the beginning of 2026, may have been orchestrating a whisper campaign. But nobody in that discussion agreed with him.

Astro’s Momentum Predates Cloudflare’s Acquisition

One of the first responses pushed back against the idea of a whisper campaign. Tommy J. Vedvik argued that the movement toward Astro was already happening before Cloudflare acquired Astro or created EmDash.

Vedvik responded:

“You’re probably wrong if you think it’s Cloudflare who’s behind it. This happened way before Cloudflare acquired Astro or created EmDash”

David V. Kimball made a similar point from his own experience, saying he had been encouraging people to move away from WordPress before Astro became part of a broader public conversation.

Kimball wrote:

“No, I’ve been pushing people away from WordPress to Astro before it was cool, starting about two years ago.”

He added that he had not seen many others doing the same until recently, but described himself as “a living breathing person” who had already helped many people make that move.

The pushback did not prove that every post was organic, but it did challenge the idea that the pattern began with Cloudflare. The replies suggested that Astro had already been gaining ground among some developers before the recent attention around EmDash.

Not everyone who left WordPress rushed toward Astro. Front-end developer Tammy Hart shared that she was a WordPress defector but that she loathed Astro.

Longtime WordPress Users Defecting To Astro

Several respondents made a point of establishing their WordPress credentials. That became one of the thread’s strongest themes: the most detailed criticisms were not coming from people with no history in WordPress, but from people who said they had built careers, businesses, or client work around it.

Daniel Schutzsmith responded by identifying himself as both an Astro user and a long-time WordPress professional.

Schutzsmith wrote:

“Real Astro user here and I think you’ll see I’ve made 100s of sites with WordPress, been WCUS organizer 3 times, WCNYC organizer 2 times, and WCMIA organizer 1 time.”

Keanan Koppenhaver made a similar credibility claim, writing:

“Former VIP-agency dev, WP agency owner, current plugin owner and multi-time WordCamp speaker, here.

I’m using Astro for a lot now! Still WP in some cases, but Astro, especially when you’re working by yourself or with a git-knowledgeable small team, helps you move way faster.”

Mike Sewell acknowledged that he used both WordPress and Astro but that WordPress is no longer his go-to:

“I have been building client sites with WordPress since 2010. I still use it for some jobs, but I have found myself more and more reaching for other tools – nextjs + sanity, 11ty, and experimenting with EmDash. WordPress isn’t going anywhere but it’s no longer my go to.”

Many others shared similar backgrounds with WordPress, suggesting that part of the movement away from WordPress is coming from within its own ecosystem.

The Matt Mullenweg Effect

While most users said their reasons for leaving WordPress were pragmatic, like Astro’s performance and relative simplicity, Schutzsmith, the former WordCamp organizer who had built hundreds of WordPress sites, explained that he moved away because clients expressed skittishness about committing to WordPress after Matt Mullenweg’s actions left hundreds, if not thousands, of WP Engine customers unable to update their WordPress websites.

Schutzsmith shared:

“Matt steered it in a horrible direction and now it’s become very hard to sell WP to enterprise clients that literally see the drama he creates by seeing the articles and videos across publications and influencers about it.

That impact has not gone away. In fact, the monumental expansion of things like Claude Code and OpenAI Codex, make moving to a less dramatic, more stable content management system, a no-brainer.

Selling enterprise on JavaScript based solutions has become much easier than convincing that same buyer that their site won’t be affected if Matt has another meltdown.

The minute he said .org is his personal website to distribute plugins and themes, it made it no longer safe for the enterprise.”

Schutzsmith was speaking from professional experience, later explaining that six of his clients’ websites hosted on WP Engine were disrupted because of actions taken by Mullenweg.

AI Coding Tools Are Making Astro Viable

Lastly, AI-assisted coding was one of the most frequently cited explanations for why Astro is receiving more attention now. Several replies in the discussion suggested that AI tools make code-first site development feel faster and less dependent on traditional CMS interfaces.

David Hamilton described Astro as a strong fit for Claude Code, writing:

“I use astro because it is ridiculously compatible with Claude code.

I haven’t had to open a CMS a figma board or anything.

My latest websites are all made through Claude and astro, I don’t see myself moving back to the traditional website builders anytime soon.”

There were many others who shared the exact same experience.

A Balanced View Of AI And Web Development

Yet there are others like Kevin Geary, developer of the Etch website builder, who express a nuanced opinion of using AI for creating websites.

In a separate post from several weeks earlier he wrote:

“A logical, evidence-based conclusion about where AI fits into a quality development workflow is: AI is a great tool for improving productivity but has to be heavily reviewed and steered by someone who actually knows what they’re doing.

If you’re in the 100% anti-AI camp, you’re likely taking a purely emotional position.

And if you’re in the “AI can do it all, run 18 agents at a time, coders are cooked” camp, you’re also taking a purely emotional position.”

Is The WordPress Ecosystem Eroding?

Anecdotal evidence indicates that WordPress veterans are leaving or minimizing their use of WordPress, largely because of the benefits of Astro, not necessarily because WordPress is a poor experience. Some are choosing Astro because it is faster. Others are using it because AI coding tools make code-first workflows easier and faster. Some are choosing it because their sites are mostly static, making WordPress somewhat overkill for their situation.

Then there are some who believe that the WordPress governance drama has become a business risk.

The larger story is not that Astro is replacing WordPress; it’s still too early to make that claim. The more important question is whether the turn toward Astro is a sign that WordPress has become overly complex and that it’s now easier to build with AI.

Ironically, on the other side of that argument, WordPress is on the verge of a major transformational change due to AI. WordPress version 7.0 is set to bring all the benefits of AI into WordPress at a scale that no other CMS or website-building framework can match. The massive community of plugin and theme developers is poised to roll out AI-assisted features that will be hard to compete against.

Featured Image by Shutterstock/yulsiart

OpenAI Launches Self-Serve Ads Manager for ChatGPT via @sejournal, @brookeosmundson

OpenAI has officially launched the next phase of advertising inside ChatGPT, introducing a beta self-serve Ads Manager alongside new CPC bidding and expanded measurement tools.

The update moves ChatGPT advertising further beyond its original pilot phase. Advertisers can now create and manage campaigns directly through OpenAI instead of relying only on managed partnerships and agency relationships.

While marketers already expected self-serve buying to arrive, this launch adds several pieces advertisers have been waiting for. That includes direct campaign management, click-based bidding, and conversion measurement capabilities.

OpenAI says U.S. advertisers can now register for access, upload ads, manage budgets, control pacing, and monitor campaign performance through the new platform.

What’s New With ChatGPT Ads

OpenAI originally launched ChatGPT ads with a smaller group of advertisers to test demand, delivery, and performance.

Since then, the company has expanded partnerships with major agency groups including Dentsu, Omnicom Group, Publicis Groupe, and WPP.

The company also added technology partners including Adobe, Criteo, Kargo, Pacvue, and StackAdapt.

Now, OpenAI is opening direct access through its own Ads Manager platform.

The rollout is currently limited and still in beta. OpenAI says it plans to gradually expand access as testing continues.

For advertisers, the move makes ChatGPT feel much closer to a traditional media buying platform than an experimental ad environment.

CPC Bidding Brings A Familiar Performance Model

One of the larger updates is the addition of cost-per-click bidding.

During the early pilot phase, advertisers primarily purchased ChatGPT ads on a CPM basis. OpenAI says CPC bidding gives advertisers more flexibility to align spend with engagement and downstream actions.

Many ChatGPT sessions involve active research and decision-making behavior. Users are often comparing products, evaluating services, or asking for recommendations before taking action elsewhere.

That creates a very different environment from passive scrolling on social platforms.

For performance marketers, CPC buying also creates a more familiar testing framework. Advertisers can evaluate traffic quality and engagement without relying entirely on impression-based buying models.

In a LinkedIn post, David Dugan, Head of Global Solutions at OpenAI, stated:

What’s stood out most in my first month is how thoughtfully this is being built. We’re creating a new ads model – one that supports businesses and broader access to AI while staying grounded in clear principles around answer independence, privacy, and user control.

OpenAI says both CPM and CPC bidding will remain available moving forward.

More Conversion Measurement Coming

OpenAI also announced expanded measurement capabilities through Conversions API support and pixel-based tracking.

Advertisers can now measure actions like purchases, sign-ups, or lead submissions after someone interacts with an ad.
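For readers unfamiliar with server-side conversion reporting, here is a purely hypothetical TypeScript sketch of a Conversions-API-style upload. The endpoint, field names, and auth scheme are placeholders; OpenAI’s actual API details are not described in this article.

```typescript
// Purely hypothetical sketch of a Conversions-API-style upload. The
// endpoint, field names, and auth scheme are placeholders; OpenAI's
// actual API details are not described in this article.
async function reportConversion(clickId: string, value: number): Promise<void> {
  const res = await fetch("https://ads.example.com/v1/conversions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ADS_API_TOKEN}`, // hypothetical credential
    },
    body: JSON.stringify({
      click_id: clickId, // ties the event back to the ad interaction
      event_name: "purchase", // e.g., purchase, sign_up, lead
      value,
      currency: "USD",
      event_time: Math.floor(Date.now() / 1000),
    }),
  });
  if (!res.ok) throw new Error(`Conversion upload failed: ${res.status}`);
}
```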

At the same time, OpenAI continues to emphasize privacy protections around ChatGPT advertising.

The company says advertisers will receive aggregated reporting and campaign insights without access to private conversations or personal user data.

That distinction will likely remain important as advertising inside AI platforms continues to expand.

OpenAI also says stronger conversion signals will help improve ad relevance and optimization over time.

What Advertisers Should Watch Next

This launch gives advertisers more legitimate ways to test ChatGPT as a performance channel.

Self-serve buying lowers the barrier to entry for smaller businesses and in-house teams. CPC bidding also gives marketers more control over how budgets are evaluated during early testing.

Still, advertisers should keep expectations realistic in the near term.

This platform is still early. Benchmarks are limited. Measurement standards are still developing, and user behavior inside AI platforms continues to evolve quickly.

The more interesting shift may be how quickly ChatGPT is adopting the same infrastructure advertisers expect from larger ad platforms.

Self-serve buying, conversion tracking, bidding flexibility, and partner integrations are now becoming standard parts of the platform.

Now that the ads platform is out, will you be testing ChatGPT ads in 2026?

Featured image: Samuel Boivin / Shutterstock

Google Is Testing New Bot Authorization Standard via @sejournal, @martinibuster

Google is testing Web Bot Auth, an experimental protocol designed to help websites verify that automated traffic is really coming from the bot or service it claims to represent. The new protocol could give site owners a dependable way to separate legitimate automated traffic from bots that hide or misrepresent who they are.

A new developer support page was published to provide information on how to verify requests with the Web Bot Auth protocol, which is currently in an experimental phase.

What Google’s Web Bot Auth Is Based On

The new protocol is technically called the HTTP Message Signatures Directory. It’s a proposed technical standard designed to automate trust between web services. It helps websites recognize verified automated services without requiring each side to manually exchange security keys beforehand.

The basic idea is similar to giving verified automated services a standardized way to present credentials. Instead of relying only on names, user-agent strings, or private setup between companies, the protocol gives websites a repeatable way to check whether an automated request can be verified. That matters because many bots can claim to be something they are not. Web Bot Auth does not decide whether a bot is good or bad, but it can give site owners a stronger signal about whether the bot is really the service it claims to be.

A Reliable Way To Identify Bots

The cryptographic part is important because it makes identity harder to fake. Today, a rogue bot can claim to be a legitimate crawler by copying a name or user-agent string. Web Bot Auth is designed to move beyond that kind of self-identification by giving websites a way to check whether an automated request matches the service’s cryptographic credentials.

Under this protocol, a bot would need more than a label saying who it is. It would need to prove that identity in a way that a website can validate. That could give site owners a secure basis for allowing verified automated services while blocking bots that cannot prove who they are. The protocol does not automatically decide which bots should be allowed or blocked, but it could give websites a more dependable signal for making that decision.

Cryptographic verification is what makes Web Bot Auth better than current bot identification methods. Instead of relying on signals that can be misrepresented, it gives websites a way to verify automated requests. That means recognition is based less on what a bot says about itself and more on whether its identity can be confirmed by cryptographic credentials.

Caveat: It’s In An Experimental Phase

The proposed protocol will make it possible to distinguish rogue bots that impersonate trusted crawlers from genuine bots operated by trusted services. The protocol functions like a whitelist of what’s allowed, which may make it easier to isolate untrusted crawlers.

However, because this is an experimental phase, the “whitelist” currently only applies to a subset of traffic, such as the Google-Agent fetcher. Google is “not yet signing every request,” so a missing signature does not automatically mean a bot is rogue. Site owners are advised to continue using IP addresses and reverse DNS alongside the protocol to avoid accidentally blocking legitimate traffic that hasn’t migrated yet.

What It Does

The new standard replaces manual setup between websites and bots, crawlers, and other automated services with a three-step discovery process:

  • Standardized Key Files:
    Keys are stored in a common format, JSON Web Key Set (JWKS), that all servers can read.
  • Well-Known Addresses:
    It defines a specific “home” on a website (/.well-known/) where these keys are always kept.
  • Self-Identifying Requests:
    It adds a new header, Signature-Agent, to HTTP requests that acts like a digital business card, pointing the receiver directly to the sender’s key directory.
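Below is a minimal TypeScript sketch of that discovery flow, assuming Node 18+ (global fetch). The well-known path matches the current draft; the header handling is simplified, and the full RFC 9421 signature verification that would follow is omitted.

```typescript
// Minimal sketch, assuming Node 18+ (global fetch). Real verification
// would continue with RFC 9421 signature checks, ideally via a library.
const WELL_KNOWN = "/.well-known/http-message-signatures-directory";

async function fetchSenderKeys(
  headers: Record<string, string>,
): Promise<{ keys: JsonWebKey[] } | null> {
  // Self-identifying request: the Signature-Agent header points at the
  // sender's own key directory.
  const agent = headers["signature-agent"]; // e.g. https://agent.example
  if (!agent) return null; // unsigned request: fall back to IP/rDNS checks

  // Well-known address + standardized key file: fetch the JWKS, so no
  // manual key exchange is needed between the two parties.
  const res = await fetch(new URL(WELL_KNOWN, agent.replace(/^"|"$/g, "")));
  if (!res.ok) return null;

  return (await res.json()) as { keys: JsonWebKey[] };
}
```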

Benefits For Automated Services And Websites

Web Bot Auth could make bot verification easier to scale by reducing the need for manual setup between each website and automated service. It also gives automated services a more consistent way to stay recognizable when their security details change, which can help avoid broken verification over time.

Web Bot Auth Is Experimental

Google stresses that users should continue using existing standards such as user-agent and IP-based bot verification, noting that the standard itself is a proposal subject to change.

The new documentation provides the following warning:

“The experimental status means that:

Not all Google user agents are using Web Bot Auth.

Google is not yet signing every request of agents using the protocol.

We recommend that in addition to Web Bot Auth you continue relying on IP addresses, reverse DNS, and user-agent strings as we gradually roll out signed traffic.

If you’re a developer or system administrator looking to allowlist our experimental AI agents, you can implement verification through the Web Bot Auth protocol:

  • Using a product or service that supports Web Bot Auth
  • Verifying requests yourself”

Nevertheless, the standard does aim to simplify bot identification and bot traffic control by using a cryptographic protocol that a rogue agent can’t spoof, to provide insights into how bots are interacting with your site, and to build a better way to rein in the currently out-of-control bot crawling situation.

Google encourages users interested in the protocol to contact their web hosting providers to see if they intend to support the experimental protocol, to keep up to date with the latest changes published by the Web Bot Auth Working Group, and to send feedback through Google’s official Web Bot Auth feedback form.

Read Google’s new documentation:

Authenticate requests with Web Bot Auth (experimental)

Featured Image by Shutterstock/Efkaysim

Google Testing Web Bot Auth To Verify AI Agent Requests via @sejournal, @MattGSouthern

Google published documentation explaining its testing of Web Bot Auth, an experimental IETF protocol that can help websites cryptographically verify some automated requests from bots and AI agents.

The protocol adds another verification layer by letting agents sign HTTP requests with cryptographic keys. Websites can then verify those signatures against published public keys to confirm the request came from who it claims to be.

What’s New

Web Bot Auth uses HTTP Message Signatures (RFC 9421) to let automated clients sign outgoing requests. A bot holds a private key, publishes its public key at a known URL, and signs each request. The receiving website checks the signature against the public key to confirm identity.

Google says a subset of signed Google-Agent requests are authenticated as https://agent.bot.goog. Signed requests include a Signature-Agent HTTP header set to g="https://agent.bot.goog", and the corresponding signature can be verified using public keys published at that domain’s .well-known directory.
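To show the primitive that makes this hard to forge, here is a toy Ed25519 sign-and-verify round trip using WebCrypto. The “signature base” string is a simplified stand-in; a real verifier must rebuild the exact covered components from the Signature-Input header. Requires a runtime with WebCrypto Ed25519 support, such as recent Node.

```typescript
// Toy round trip showing the primitive: sign with a private key, verify
// with the public key. A real verifier rebuilds the exact RFC 9421
// signature base from Signature-Input; the string below is a simplified
// stand-in. Requires a runtime with WebCrypto Ed25519 support.
import { webcrypto } from "node:crypto";

async function demo(): Promise<void> {
  const { privateKey, publicKey } = (await webcrypto.subtle.generateKey(
    { name: "Ed25519" },
    true,
    ["sign", "verify"],
  )) as CryptoKeyPair;

  // Simplified stand-in for the covered components of a signed request.
  const base = new TextEncoder().encode(
    '"@authority": example.com\n"signature-agent": "https://agent.bot.goog"',
  );

  const sig = await webcrypto.subtle.sign({ name: "Ed25519" }, privateKey, base);
  const ok = await webcrypto.subtle.verify({ name: "Ed25519" }, publicKey, sig, base);
  console.log(ok); // true: identity proven by key possession, not a UA string
}

demo();
```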

According to Google’s documentation, bot-detection services, CDNs, and WAFs already support the protocol. The IETF draft is authored by Thibault Meunier of Cloudflare and Sandor Major of Google. Cloudflare publishes a reference implementation on GitHub.

The IETF Web Bot Auth Working Group was chartered in early 2026 with milestones for standards-track specifications and a best current practice document.

What Google Is Not Doing Yet

Not all Google user agents are participating. The documentation says Google is testing with “some AI agents hosted on Google infrastructure” but does not name which ones beyond the Google-Agent user-triggered fetcher.

Even for participating agents, not every request is signed. The documentation recommends that sites continue relying on IP addresses, reverse DNS, and user-agent strings as the primary verification method while signed traffic rolls out gradually.

The Internet-Draft could change as the working group develops the standard.

Why This Matters

Bot impersonation has been a persistent problem. Scrapers and bad actors can spoof user-agent strings to disguise their traffic as Googlebot or other legitimate crawlers, making it harder for site owners to tell real bot traffic from fake.

We covered this issue when Google’s Martin Splitt warned that “not everyone who claims to be Googlebot actually is Googlebot.” The available verification methods at the time were reverse DNS lookups and IP range checks. Web Bot Auth would add a layer that can’t be forged without the agent’s private key.

For sites already using a CDN or WAF that supports the protocol, verification may happen automatically. For everyone else, the experimental status means there is no urgency to act. The documentation recommends treating existing verification as the default and Web Bot Auth as supplementary.

Looking Ahead

Web Bot Auth is still moving through the standards process, and Google’s implementation remains experimental.

For now, the practical change is visibility. Websites may start seeing signed requests from some Google-Agent traffic, while existing verification methods remain the default.

The next question is whether more AI agents adopt signed requests, and whether hosting providers make verification automatic for websites that don’t want to manage keys.

Google Sued Over False AI Overview About Musician via @sejournal, @MattGSouthern

Canadian fiddler Ashley MacIsaac has filed a civil lawsuit against Google, alleging an AI Overview falsely identified him as a convicted sex offender. The lawsuit could test how courts treat liability for false AI-generated search summaries.

The statement of claim, filed in February with the Ontario Superior Court of Justice, seeks at least $1.5 million in damages from Google LLC. None of the claims have been tested in court.

What The Lawsuit Alleges

MacIsaac, a Juno Award-winning musician, says he learned of the false summary in December 2025 after the Sipekne’katik First Nation confronted him with it and cancelled one of his concerts. The First Nation later issued a public apology.

According to the filing, the AI Overview falsely stated MacIsaac had been convicted of sexual assault, internet luring involving a child, and assault causing bodily harm, and wrongly claimed he’d been listed on the national sex offender registry.

The lawsuit argues Google is liable for the output its AI system generated, stating that Google “knew, or ought to have known, that the AI overview was imperfect and could return information that was untrue.”

It also alleges Google didn’t admit responsibility, didn’t reach out to MacIsaac, and didn’t offer an apology or retraction.

The filing makes a direct argument about AI liability:

“If a human spokesperson made these false allegations on Google’s behalf, a significant award of punitive damages would be warranted. Google should not have lesser liability because the defamatory statements were published by software that Google created and controls.”

MacIsaac said Google must take responsibility for what AI Overviews display. “This was not a search engine just scanning through things and giving somebody else’s story,” he said.

Google’s Response

Google hasn’t commented on the lawsuit. In December, spokesperson Wendy Manton said AI Overviews are “dynamic and frequently changing” and that when the feature misinterprets web content, Google uses those cases to improve its systems. The false summary tying MacIsaac to criminal offences no longer appears.

Why This Matters

AI Overviews can appear in Google search results as AI-generated snapshots with links to more information. Google’s Search Help documentation says AI responses may include mistakes.

When those summaries display false claims about real people, the consequences can extend beyond a bad search result. In MacIsaac’s case, the lawsuit alleges the AI Overview led to a cancelled concert and reputational harm.

MacIsaac’s case isn’t the first time AI-generated content has led to defamation allegations. In 2023, an Australian mayor threatened legal action after ChatGPT falsely claimed he’d been imprisoned for bribery. Unlike that episode, this lawsuit targets Google’s AI Overviews directly and argues the product has a defective design.

The case adds to a growing legal question around AI-generated content: whether platforms are responsible when automated summaries present false claims as search results.

Looking Ahead

The case is at the statement-of-claim stage, and Google hasn’t filed a response. Until then, the core questions are unresolved: whether Google will contest liability, how it will characterize AI Overview output, and how the court will treat automated summaries in a defamation claim.

Google Previews Meridian GeoX, Data Manager Measurement Updates via @sejournal, @MattGSouthern

Google previewed several updates to its advertising measurement tools, including a new open-source incrementality testing solution and an enterprise platform for marketing mix modeling.

The announcements come ahead of Google Marketing Live on May 20, where the company says it will share more about how Google Analytics is evolving.

What’s New

The updates cover data management, incrementality testing, and marketing mix modeling.

Data Manager Updates

Google’s Data Manager is getting a new visual summary with a map view in the coming months. The feature will show how data flows from platforms like BigQuery, Google Drive, HubSpot, and Shopify into Google Ads, Google Analytics, and Google Marketing Platform.

In the coming weeks, the Data Manager API will also allow advertisers to combine foundational tags with additional data signals, including store sales.

The Google tag is also getting a visual setup flow so marketers can configure it without writing code. The company says the update will centralize settings and user access while simplifying tag setup.

Citing internal finance-sector data, Google said advertisers who upgraded to Google tag gateway saw an average of 14% more conversions. The company cited partners including Akamai, Cloudflare, Fastly, Google Cloud, and Webflow as helping with adoption.

Meridian GeoX

Google announced Meridian GeoX, which it describes as an open-source, geographic-based incrementality solution. The tool is designed to run geographic experiments that provide causal measurement of media performance.

The tool is built on an auditable codebase and integrates with Meridian, the company’s open-source marketing mix model, Google says. GeoX will begin testing later this year.

The methodology behind GeoX is not entirely new. Google’s GitHub repositories for geographic experiment matching (google/trimmed_match and google/matched_markets) have existed for some time. This announcement formalizes the technology as a named product within the Meridian ecosystem.
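As a toy illustration of the underlying idea (this is not Meridian’s or GeoX’s actual API), a geo experiment compares outcome changes in treated markets against matched control markets, a difference-in-differences design:

```typescript
// Toy difference-in-differences over geos. This is not Meridian's or
// GeoX's API; it only illustrates the causal logic of a geo experiment.
interface Geo {
  pre: number; // e.g., weekly conversions before the media change
  post: number; // the same metric during the experiment
}

function estimateLift(treated: Geo[], control: Geo[]): number {
  const meanDelta = (geos: Geo[]) =>
    geos.reduce((sum, g) => sum + (g.post - g.pre), 0) / geos.length;

  // Treated change minus control change nets out seasonality and
  // market-wide trends shared by both groups.
  return meanDelta(treated) - meanDelta(control);
}

// Media ran only in the treated geos during the post period.
console.log(
  estimateLift(
    [{ pre: 100, post: 130 }, { pre: 90, post: 125 }],
    [{ pre: 95, post: 105 }, { pre: 110, post: 118 }],
  ),
); // ≈ 23.5 incremental conversions per geo
```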

Meridian Studio

Google is introducing Meridian Studio, a Google Cloud-powered enterprise platform for teams that manage high-volume marketing mix models. The company says it gives teams options for customization and access to a richer signal base.

Google listed eight measurement partners for its Meridian and Data Manager tools: Adswerve, Choreograph (WPP), Brainlabs, Epsilon, Fifty-Five, Jellyfish, Making Science, and Merkle.

Why This Matters

Google is tying several measurement tools into the same story: stronger first-party data, causal experiments, and marketing mix modeling.

Meridian GeoX adds an incrementality layer to Meridian, giving advertisers a way to test media impact by geography. That matters because MMM can show modeled relationships, while experiments can help validate whether media drove incremental results.

Google has been building toward this across its measurement products. Recent examples include Tag Diagnostics, Meridian’s launch, a Data Manager API expansion, and Scenario Planner.

The main limitation is timing. Data Manager updates are coming in stages, GeoX begins testing later this year, and Google didn’t share access details for Meridian Studio.

Looking Ahead

Google said it will share more at Google Marketing Live on May 20, including how it plans to connect data and causal measurement across its ad products.

The company also teased upcoming Google Analytics changes but didn’t provide details. Until then, the main open questions are when each tool becomes available, who can access Meridian Studio, and how GeoX testing will work in practice.

Featured Image: PJ McDonnell/Shutterstock

Google Says A New Wave Of AI Users Is Transforming Search via @sejournal, @martinibuster

Google’s Martin Splitt and Nikola Todorovic discussed the impact of AI on search, revealing that a new wave of people is using Google Search in ways that are markedly different from the past, and that this trend is growing.

Martin Splitt noted that AI in search is not new and that it has always been there behind the scenes, assisting in the organic search results. It’s only recently that it’s been moved to the forefront, where it now assists users with increasingly complex multimodal search queries. The funny thing about AI search is that whereas AI plays a role in the background of organic search, organic search plays a role in the background of AI search.

Martin asked whether AI Search is an evolutionary or a revolutionary change:

“Yeah, because I think everyone is talking about AI in search as if it’s a new thing, but it has been there behind the scenes, so to speak, before that.

  • So what makes these AI features that people are using now and that are progressively enhancing the search experience for them so different from the features we had before?
  • Would you consider these new features revolutionary and completely different from what we’ve been doing so far?
  • Or is it more like an evolution of what we have been doing in the past?”

Google’s Nikola Todorovic, Director of Software Engineering at Google Search, answered that it’s revolutionary and that search today is very different from what it was ten years ago. He also noted that current AI-driven search behavior is changing because users are becoming increasingly confident about the kinds of questions Google is able to answer.

Todorovic replied:

“I think the way they are being used, and I think it is a revolution that they’re speaking of right now. But clearly in the whole process, there’s like small steps. But if you compare search now and search 10 years ago, it’s a very different product. So I would say yes, this is like a big step change and it is absolutely changing the way the users are searching.

So if you think about it, any feature is changing in some way. For example, if you bring like more images, videos, etc, then it is bringing this kind of experience. So people are going more to image search. For example, when we added what we call the image universal blocks on the main page. Now that this new wave is also changing the way the users are searching because they are uncovering that search can actually answer to more complex questions.

And for that reason, we do see that user queries or you call them prompts now, so they’re getting longer. They become more detailed and the average query length is growing.

So we do see the new traffic and this new wave of traffic is a consequence of users being able to see, aha, there is something new I can do over here. That’s from that perspective, it is revolution, but it is obviously a bunch of steps in between that happened and have been improving search all the time.”

Key insights about search behavior today:

  • User queries are becoming longer and more detailed.
  • Users are discovering new things they can do with search.

That last one is important and may partially explain where some of the traffic is going. People are doing more complex searches, plus, as noted in a podcast interview with Liz Reid, people are using multiple AI chat services.

While some SEOs say that AI Search is longtail now, that’s not the whole picture: Classic Search is still happening behind the scenes because the AI splits complex queries into simpler fan-out queries. “Keyword-ese” queries are still happening to a certain extent, but now they’re components of a larger query that is itself longtail.
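Here is a purely hypothetical illustration of that decomposition; Google has not published actual fan-out queries for any prompt:

```typescript
// Purely hypothetical decomposition; Google has not published actual
// fan-out queries for any prompt. The point is that each component is
// a classic, retrievable query.
const complexPrompt =
  "best lightweight trail running shoes for flat feet under $120";

const fanOutQueries: string[] = [
  "lightweight trail running shoes", // classic keyword-ese component
  "running shoes for flat feet",
  "trail running shoes under $120",
];

// The AI layer retrieves results for each component, then composes
// them into one answer to the original longtail prompt.
console.log({ complexPrompt, fanOutQueries });
```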

Takeaways

  • AI in search is not new, but AI at the front of the search experience is changing how people use Google.
  • Google says search today is a different product than it was ten years ago because users are asking longer, more detailed, and more complex questions.
  • Users are discovering that search can handle questions they may not have tried before, which is creating new search behavior.
  • AI Search may look like longtail search on the surface, but Google can break complex prompts into simpler fan-out queries behind the scenes.
  • Classic Search still matters because AI Search depends on retrieval. Organic search has not disappeared. It has moved into the background of the AI experience.
  • Keywords are not dead. They may now function as smaller pieces inside larger prompts and more complex search sessions.
  • Content has to work at two levels: retrievable for classic search and useful for more complex AI Search behavior.

The important insight is not that users are writing longer queries, but that users are learning what search can do now. As AI Search solves more complex queries, SEO begins to feel more uncertain. It may be useful to consider that simpler fan-out queries are what is being optimized for. But also see the insights about Browsy Queries.

Listen to Search Off The Record

Featured Image by Shutterstock/takasu